Search results for: human evaluation

Number of results: 2,394,990

Purpose: The present research aims to comparatively study different methods for evaluating website accessibility and to analyze the results of a case study of the websites of Iranian government ministries, in order to identify the strengths, weaknesses, and differences in the findings produced by each evaluation method. Methodology: In this paper, initially the ...

2009
Albert Gatt, Anja Belz, Eric Kow

The TUNA-REG’09 Challenge was one of the shared-task evaluation competitions at Generation Challenges 2009. TUNA-REG’09 used data from the TUNA Corpus of paired representations of entities and human-authored referring expressions. The shared task was to create systems that generate referring expressions for entities, given representations of sets of entities and their properties. Four teams submitted...

1994
Lynette Hirschman

* Cross-system evaluation: This is a mainstay of the periodic ARPA evaluations of competing systems. Multiple sites agree to run their respective systems on a single application, so that results across systems are comparable. This includes evaluations such as message understanding (MUC) [6], information retrieval (TREC) [7], spoken language systems (ATIS) [8], and automated speech recognition (CSR...

Thesis: Ministry of Science, Research and Technology - Sharif University of Technology - Department of Industrial Engineering, 1380

No abstract available.

2011
Shiry Ginosar

A novel method for the evaluation of Interactive IR systems is presented. It is based on Human Computation, the engagement of people in helping computers solve hard problems. The Phetch image-describing game is proposed as a paradigmatic example for the novel method. Research challenges for the new approach are outlined.

2017
Ryan Lowe, Michael Noseworthy, Iulian V. Serban, Nicolas Angelard-Gontier, Yoshua Bengio, Joelle Pineau

Automatically evaluating the quality of dialogue responses for unstructured domains is a challenging problem. Unfortunately, existing automatic evaluation metrics are biased and correlate very poorly with human judgements of response quality (Liu et al., 2016). Yet having an accurate automatic evaluation procedure is crucial for dialogue research, as it allows rapid prototyping and testing of new models with fewer...
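The poor correlation this abstract refers to is typically quantified by computing a Pearson or Spearman correlation between an automatic metric's scores and human ratings of the same responses. Below is a minimal sketch of that check in Python, assuming hypothetical placeholder scores; `metric_scores` and `human_ratings` are illustrative values, not data from the paper, and this is not the learned evaluation model the paper itself proposes.

```python
# Minimal sketch: measure how well an automatic metric tracks human
# judgements by correlating the two score sets over the same responses.
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-response scores (placeholders, not real data):
# an automatic metric score (e.g., a BLEU-style score in [0, 1]) and
# a human quality rating (e.g., on a 1-5 Likert scale).
metric_scores = [0.12, 0.40, 0.05, 0.33, 0.21, 0.08, 0.27, 0.15]
human_ratings = [4.0, 2.5, 3.5, 4.5, 1.0, 3.0, 2.0, 4.0]

# Pearson measures linear association; Spearman measures rank agreement.
r, r_p = pearsonr(metric_scores, human_ratings)
rho, rho_p = spearmanr(metric_scores, human_ratings)
print(f"Pearson r = {r:.3f} (p = {r_p:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {rho_p:.3f})")
```

Spearman's rank correlation is often reported alongside Pearson's in this setting because human ratings are ordinal rather than interval-scaled.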

2012
Lei He

As the world economy has entered a period of high-speed development, companies' demand for talented personnel has grown larger and larger, and human resource value assessment has gained more and more attention. Building on existing human resource value evaluation methods, this paper proposes a human resource value evaluation model based on asset assessment. The model is established on the basis of...

2008
Om Deshmukh, Sachindra Joshi, Ashish Verma

Pronunciation evaluation is an important module of every spoken language evaluation system. Automatic evaluation of pronunciation quality that can mimic the performance of human assessors is a difficult task, as human assessment accounts for several nuances of pronunciation, including vowel substitutions and the quality of consonants. This paper presents a novel approach that combines the knowledge...

Chart: number of search results per year
