Search results for: essay questionnaire

Number of results: 221805

Introduction: To benefit from the advantages of essay exams, one must be able to judge students reliably on the basis of their essay test scores. The aim of this study was to examine the correlation between scores on the essay and MCQ parts of a renal pathophysiology final exam, and students’ views about the effect of the test type on their studying. Methods: This descriptive correlational survey wa...

1998
Darrell Laham Thomas K. Landauer

LSA, a mathematical modeling technique, captures the essential relationships between text documents and word meaning (semantics), the knowledge base that must be accessed to evaluate the quality of content. Several educational applications that employ LSA have been developed: (1) selecting the most appropriate text for learners with variable levels of background knowledge, (2) automatically ...
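The core LSA machinery can be illustrated in a few lines (a minimal sketch on a toy corpus, not the authors' implementation; real systems use large corpora and weighting schemes such as log-entropy): build a term-document count matrix, truncate its SVD to k latent dimensions, and compare documents by cosine similarity in the reduced semantic space.

```python
import numpy as np

# Minimal LSA sketch (hypothetical toy example, not the authors'
# implementation): term-document counts -> truncated SVD -> cosine
# similarity between documents in the latent "semantic" space.
docs = [
    "cat sat mat",        # two related sentences...
    "cat lay rug",
    "stock market fell",  # ...and one unrelated one
]
vocab = sorted({word for doc in docs for word in doc.split()})
row = {word: i for i, word in enumerate(vocab)}

# Term-document count matrix: rows are words, columns are documents.
A = np.zeros((len(vocab), len(docs)))
for j, doc in enumerate(docs):
    for word in doc.split():
        A[row[word], j] += 1

# Truncated SVD: keep only the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per document

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_cats = cos(doc_vecs[0], doc_vecs[1])   # related pair
sim_cross = cos(doc_vecs[0], doc_vecs[2])  # unrelated pair
```

In the latent space the two cat sentences end up far more similar to each other than either is to the finance sentence, which is the property the educational applications above rely on.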

2011
Xiaoge Jia Jianguo Tian

This essay reports part of the results of a questionnaire investigating 74 college students’ self-evaluation of their writing behaviors and beliefs on the macro discourse level. Then it moves on to analyze the statistics obtained about students’ macro-level writing behaviors and beliefs in terms of making an outline, writing a thesis, providing topic sentences and making conclusions and the pur...

2013
Prema Nedungadi Jyothi L Raghu Raman

In large classrooms with limited teacher time, there is a need for automatic evaluation of text answers and real-time personalized feedback during the learning process. In this paper, we discuss Amrita Test Evaluation & Scoring Tool (A-TEST), a text evaluation and scoring tool that learns from course materials and from human-rater scored text answers and also directly from teacher input. We use...
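As a toy illustration of learning from human-rater scored answers (a hypothetical sketch, not the A-TEST implementation; the answers and scores below are invented): represent each answer as a bag of words and predict a new answer's score as the similarity-weighted average of the known human scores.

```python
import math
from collections import Counter

# Toy answer-scoring sketch (hypothetical, not the A-TEST implementation;
# all example answers and scores are invented): bag-of-words vectors,
# cosine similarity, similarity-weighted average of human scores.

def bow(text):
    """Bag-of-words term counts for a lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[word] for word, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_score(answer, scored_answers):
    """scored_answers: list of (answer_text, human_score) pairs."""
    target = bow(answer)
    sims = [(cosine(target, bow(text)), score) for text, score in scored_answers]
    total = sum(sim for sim, _ in sims)
    if total == 0:  # nothing similar: fall back to the mean human score
        return sum(score for _, score in scored_answers) / len(scored_answers)
    return sum(sim * score for sim, score in sims) / total

# Invented human-scored answers to a hypothetical short-answer question.
scored = [
    ("the kidney filters blood", 5.0),
    ("kidneys filter the blood", 5.0),
    ("i do not know", 0.0),
]
pred = predict_score("the kidney filters the blood", scored)
```

A new answer that overlaps with the full-credit answers inherits a score near 5, while one overlapping only the zero-credit answer scores near 0.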

Journal: IJLT, 2013
Rod D. Roscoe Laura K. Varner Scott A. Crossley Danielle S. McNamara

Various computer tools have been developed to support educators’ assessment of student writing, including automated essay scoring and automated writing evaluation systems. Research demonstrates that these systems exhibit relatively high scoring accuracy but uncertain instructional efficacy. Students’ writing proficiency does not necessarily improve as a result of interacting with the software. ...

2004
Yigal Attali Jill Burstein

E-rater has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater that differs from the previous one (V.1.3) with regard to the feature set and model building approach. The paper describes the new version, compares the new and previous versions in terms of performance, and presents evidence on the validity and reliabil...

Journal: CoRR, 2017
Amber Nigam Vibhore Goyal

Automated Essay Scoring (AES) has been quite popular and is being widely used. However, lack of appropriate methodology for rating nonnative English speakers’ essays has meant a lopsided advancement in this field. In this paper, we report initial results of our experiments with nonnative AES that learns from manual evaluation of nonnative essays. For this purpose, we conducted an exercise in wh...

2015
Peter Phandi Kian Ming Adam Chai Hwee Tou Ng

Most of the current automated essay scoring (AES) systems are trained using manually graded essays from a specific prompt. These systems experience a drop in accuracy when used to grade an essay from a different prompt. Obtaining a large number of manually graded essays each time a new prompt is introduced is costly and not viable. We propose domain adaptation as a solution to adapt an AES syst...
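One simple way to picture domain adaptation in this setting is Daume III's "frustratingly easy" feature augmentation, sketched below on synthetic data (this is an illustrative scheme, not the method the paper itself proposes): each feature vector is copied into shared and prompt-specific blocks, so a single linear model shares general weights across prompts while learning prompt-specific corrections from the few graded target-prompt essays.

```python
import numpy as np

# Illustrative domain-adaptation sketch via feature augmentation
# (Daume III style) -- NOT the paper's own method, and all data below
# is synthetic. x becomes [x, x, 0] for the source prompt and
# [x, 0, x] for the target prompt.

rng = np.random.default_rng(0)

def augment(X, domain):
    zeros = np.zeros_like(X)
    blocks = [X, X, zeros] if domain == "source" else [X, zeros, X]
    return np.hstack(blocks)

# Synthetic "essay features" (e.g. length, vocabulary richness).
Xs = rng.normal(size=(200, 2))        # many graded source-prompt essays
Xt = rng.normal(size=(10, 2))         # few graded target-prompt essays
ys = Xs @ np.array([2.0, 1.0])        # source prompt's scoring function
yt = Xt @ np.array([2.0, 3.0])        # target prompt weighs feature 2 more

lam = 1e-2  # small ridge penalty for numerical stability

# Baseline: train on the source prompt only, apply to the target prompt.
w_base = np.linalg.solve(Xs.T @ Xs + lam * np.eye(2), Xs.T @ ys)

# Adapted: one ridge model on the augmented source + target data.
X = np.vstack([augment(Xs, "source"), augment(Xt, "target")])
y = np.concatenate([ys, yt])
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Evaluate on fresh target-prompt essays.
Xt_new = rng.normal(size=(50, 2))
true = Xt_new @ np.array([2.0, 3.0])
base_rmse = float(np.sqrt(np.mean((Xt_new @ w_base - true) ** 2)))
rmse = float(np.sqrt(np.mean((augment(Xt_new, "target") @ w - true) ** 2)))
```

With only ten graded target-prompt essays, the adapted model recovers the target scoring function far better than the source-only baseline, which mirrors the accuracy drop the abstract describes when a model is reused across prompts.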

Journal: CoRR, 2016
Emad Fawzi Al-Shalabi

In this article, an automated system is proposed for scoring Arabic-language essays in online exams, based on stemming techniques and Levenshtein edit operations. An online exam has been developed on top of the proposed mechanisms, exploiting the capabilities of light and heavy stemming. The implemented online grading system has proven to be an efficient tool for the automated scoring of essay questions.
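The edit-distance core of such a system can be sketched as follows (a toy sketch: the suffix list below is a crude English stand-in for the paper's Arabic light/heavy stemming): stem the tokens of the student and model answers, then turn the Levenshtein distance between the token sequences into a similarity score.

```python
# Toy edit-distance answer-scoring sketch (hypothetical; the suffix list
# is a crude English stand-in for the paper's Arabic light/heavy
# stemming): stemmed tokens -> Levenshtein distance -> score in [0, 1].

def levenshtein(a, b):
    """Edit distance between two sequences, single-row dynamic programming."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                      # deletion
                        dp[j - 1] + 1,                  # insertion
                        prev + (a[i - 1] != b[j - 1]))  # substitution/match
            prev = cur
    return dp[n]

def light_stem(token):
    # Crude stand-in for light stemming: strip a few common suffixes.
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def score(student_answer, model_answer):
    """Similarity in [0, 1]: 1 minus the normalized token edit distance."""
    s = [light_stem(t) for t in student_answer.lower().split()]
    m = [light_stem(t) for t in model_answer.lower().split()]
    return 1 - levenshtein(s, m) / max(len(s), len(m), 1)
```

Stemming before the distance computation makes the score robust to inflectional variation, so "The cats sat" matches a model answer "the cat sat" exactly.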

2011
Kristen Parton Joel R. Tetreault Nitin Madnani Martin Chodorow

We describe our submissions to the WMT11 shared MT evaluation task: MTeRater and MTeRater-Plus. Both are machine-learned metrics that use features from e-rater®, an automated essay scoring engine designed to assess writing proficiency. Despite using only features from e-rater and without comparing to translations, MTeRater achieves a sentence-level correlation with human rankings equivalent t...

[Chart: number of search results per year]