A Report on the Automatic Evaluation of Scientific Writing Shared Task
Authors
Abstract
The Automated Evaluation of Scientific Writing, or AESW, is the task of identifying sentences in need of correction to ensure their appropriateness in scientific prose. The data set comes from a professional editing company, VTeX, with two aligned versions of the same text – before and after editing – and covers a variety of textual infelicities that proofreaders have edited. While previous shared tasks focused solely on grammatical errors (Dale and Kilgarriff, 2011; Dale et al., 2012; Ng et al., 2013; Ng et al., 2014), this time the edits cover other types of linguistic misfits as well, including those that could almost certainly be interpreted as style issues and similar “matters of opinion”. The latter arise because of different language editing traditions, differing experience, and the absence of uniform agreement on what “good” scientific language should look like. In initiating this task, we expected the participating teams to help identify the characteristics of “good” scientific language and to help build consensus on which language improvements are acceptable (or necessary). Six participating teams took on the challenge.
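Because the corpus pairs each original sentence with its professionally edited version, the task reduces to sentence-level binary classification: a sentence is a positive example whenever the editors changed it. The following minimal sketch shows one way such labels might be derived; the pair format and names are illustrative assumptions, not the actual AESW distribution format.

# Hypothetical sketch: deriving binary "needs editing" labels from
# aligned (before, after) sentence pairs. The input format here is an
# assumption for illustration, not the actual AESW data format.

def label_pairs(aligned_pairs):
    """Yield (sentence, needs_edit) tuples from (before, after) pairs."""
    for before, after in aligned_pairs:
        # A sentence needs editing if the professional editor changed it.
        yield before, before.strip() != after.strip()

pairs = [
    ("The results was significant.", "The results were significant."),
    ("Six teams took on the challenge.", "Six teams took on the challenge."),
]
for sentence, needs_edit in label_pairs(pairs):
    print(f"{needs_edit}\t{sentence}")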
Similar References
Automated Evaluation of Scientific Writing: AESW Shared Task Proposal
The goal of the Automated Evaluation of Scientific Writing (AESW) Shared Task is to analyze the linguistic characteristics of scientific writing to promote the development of automated writing evaluation tools that can assist authors in writing scientific papers. The proposed task is to predict whether a given sentence requires editing to ensure its “fit” with the scientific writing genre. We d...
The Effect of Variations in Integrated Writing Tasks and Proficiency Level on Features of Written Discourse Generated by Iranian EFL Learners
In recent years, a number of large-scale writing assessments (e.g., TOEFL iBT) have employed integrated writing tests to measure test takers’ academic writing ability. Using a quantitative method, the current study examined how written textual features and use of source material(s) varied across two types of text-based integrated writing tasks (i.e., listening-to-write vs. reading-to-write) and...
EFL Learner’s Evaluation of Writing Tasks in Iran’s TOEFL and IELTS Preparation Courses in Light of the Process-oriented Approach
The purpose of this research was to analyze EFL writing tasks in two of the most popular English for Speakers of Other Languages (ESOL) exam preparation courses in Iran, namely IELTS and TOEFL. Having collected the criteria of writing task appropriateness in light of the process-oriented approach to writing instruction, we asked 60 learner participants to rate EFL writing tasks in 3 IELTS and 3...
Feature-Rich Error Detection in Scientific Writing Using Logistic Regression
The goal of the Automatic Evaluation of Scientific Writing (AESW) Shared Task 2016 is to identify sentences in scientific articles which need editing to improve their correctness and readability or to make them better fit within the genre at hand. We encode the many different types of errors occurring in the dataset as linguistic features. We use logistic regression to assign a probability indicati...
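To make the general approach concrete, here is a minimal sketch of sentence-level error detection with logistic regression; the features, toy data, and library choice (scikit-learn) are illustrative assumptions, not this team's actual feature set.

# Illustrative sketch: encode each sentence with simple linguistic
# features and fit a logistic regression that outputs the probability
# that the sentence needs editing. Features and data are toy examples.

from sklearn.linear_model import LogisticRegression

def features(sentence):
    tokens = sentence.split()
    return [
        len(tokens),                           # sentence length
        sentence.count(","),                   # comma count
        sum(not t.isalpha() for t in tokens),  # non-word token count
    ]

# Toy training data: (sentence, 1 = needs editing, 0 = clean).
train = [
    ("The results was significant .", 1),
    ("The results were significant .", 0),
    ("We proposes a new method .", 1),
    ("We propose a new method .", 0),
]

X = [features(s) for s, _ in train]
y = [label for _, label in train]
clf = LogisticRegression().fit(X, y)

# predict_proba gives one column per class; column 1 is "needs editing".
prob = clf.predict_proba([features("This sentence are fine .")])[0][1]
print(f"P(needs editing) = {prob:.2f}")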
UW-Stanford System Description for AESW 2016 Shared Task on Grammatical Error Detection
This is a report on the methods used and results obtained by the UW-Stanford team for the Automated Evaluation of Scientific Writing (AESW) Shared Task 2016 on grammatical error detection. This team developed a symbolic grammar-based system augmented with manually defined mal-rules to accommodate and identify instances of high-frequency grammatical errors. System results were entered both for th...