Search results for: multi word lexical units

Number of results: 741999

2013
Thomas Hannagan, James S. Magnuson, Jonathan Grainger

How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically-tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition pheno...

2016
Stefano Faralli, Alexander Panchenko, Christian Biemann, Simone Paolo Ponzetto

We present a new hybrid lexical knowledge base that combines the contextual information of distributional models with the conciseness and precision of manually constructed lexical networks. The computation of our count-based distributional model includes the induction of word senses for single-word and multi-word terms, the disambiguation of word similarity lists, taxonomic relations extracted b...
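Where the abstract mentions inducing word senses for single- and multi-word terms from a distributional model, a minimal sketch of one common ego-network approach is given below; the clustering rule, toy data, and function names are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch: word sense induction by clustering a target word's
# distributional similarity list (ego-network clustering). Illustrative only.
from collections import defaultdict

def induce_senses(neighbors, similar):
    """Group a target word's nearest neighbors into sense clusters.

    neighbors: list of words most similar to the target
    similar:   dict word -> set of that word's own nearest neighbors
    Two neighbors land in the same cluster if either lists the other,
    i.e. connected components of the ego network (target removed).
    """
    adj = defaultdict(set)
    for i, a in enumerate(neighbors):
        for b in neighbors[i + 1:]:
            if b in similar.get(a, set()) or a in similar.get(b, set()):
                adj[a].add(b)
                adj[b].add(a)

    seen, senses = set(), []
    for w in neighbors:
        if w in seen:
            continue
        cluster, stack = set(), [w]   # depth-first walk over the ego network
        while stack:
            node = stack.pop()
            if node in cluster:
                continue
            cluster.add(node)
            stack.extend(adj[node] - cluster)
        seen |= cluster
        senses.append(sorted(cluster))
    return senses

# Toy example: the neighbors of "jaguar" split into an animal sense and a car sense.
sim = {
    "lion": {"tiger", "leopard"}, "tiger": {"lion", "leopard"},
    "leopard": {"lion", "tiger"}, "porsche": {"bmw"}, "bmw": {"porsche"},
}
print(induce_senses(["lion", "tiger", "leopard", "porsche", "bmw"], sim))
```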

2010
Jelena Havelka, Clive Frankish

Case mixing is a technique that is used to investigate the perceptual processes involved in visual word recognition. Two experiments examined the effect of case mixing on lexical decision latencies. The aim of these experiments was to establish whether different case mixing patterns would interact with the process of appropriate visual segmentation and phonological assembly in word reading. In ...

Journal: CoRR 2016
Qi Li, Tianshi Li, Baobao Chang

Word embeddings play a significant role in many modern NLP systems. However, most prevalent word embedding learning methods learn one representation per word, which is problematic for polysemous and homonymous words. To address this problem, we propose a multi-phase word sense embedding learning method which utilizes both a corpus and a lexical ontology to learn one embedding per word sens...
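As a rough illustration of tying one embedding to each sense with help from a lexical ontology, the sketch below averages pre-trained word vectors over a sense's gloss/synonym words; the data, dimensionality, and function names are assumptions, not the method proposed in the paper.

```python
# Minimal sketch (not the paper's method): derive one vector per word sense
# by averaging pre-trained word embeddings over the sense's definition words.
import numpy as np

def sense_embeddings(senses, word_vectors, dim=100):
    """senses: dict sense_id -> list of gloss/synonym words
    word_vectors: dict word -> np.ndarray of shape (dim,)
    Returns one averaged vector per sense; unknown words are skipped."""
    out = {}
    for sense_id, gloss_words in senses.items():
        vecs = [word_vectors[w] for w in gloss_words if w in word_vectors]
        out[sense_id] = np.mean(vecs, axis=0) if vecs else np.zeros(dim)
    return out

# Toy example with random stand-ins for pre-trained vectors.
rng = np.random.default_rng(0)
wv = {w: rng.standard_normal(100) for w in ["money", "institution", "river", "shore"]}
senses = {"bank.n.01": ["money", "institution"], "bank.n.02": ["river", "shore"]}
vectors = sense_embeddings(senses, wv)
print({k: v.shape for k, v in vectors.items()})
```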

The present corpus-based lexical study reports the development of a Pharmacy Academic Word List (PAWL): a list of the most frequent words from a corpus of 3,458,445 tokens, drawn from the 800 most recent pharmacy texts, including research articles, review articles, and short communications in four sub-disciplines of pharmacy. WordSmith (Scott, 2017) and AntWordProfiler (Anthony, 2014) were used to sc...
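The core computation behind such a word list is a frequency-plus-range count over the corpus; the sketch below illustrates that idea with placeholder thresholds and a naive tokenizer, not the actual PAWL criteria or the WordSmith/AntWordProfiler tooling.

```python
# Minimal sketch of a frequency + range count over a corpus of texts, the
# kind of computation behind an academic word list. Thresholds, tokenization,
# and the absence of lemmatization are placeholders, not the PAWL settings.
import re
from collections import Counter

def word_list(texts, min_freq=50, min_range=3):
    """texts: list of document strings.
    Keep words that are frequent overall (min_freq) and occur in at
    least min_range different documents (range/dispersion criterion)."""
    freq, doc_range = Counter(), Counter()
    for doc in texts:
        tokens = re.findall(r"[a-z]+", doc.lower())
        freq.update(tokens)
        doc_range.update(set(tokens))   # count each word once per document
    return sorted(
        w for w, c in freq.items()
        if c >= min_freq and doc_range[w] >= min_range
    )

# Toy example with relaxed thresholds.
docs = ["drug dosage and drug release", "drug release profile", "tablet dosage form"]
print(word_list(docs, min_freq=2, min_range=2))  # -> ['dosage', 'drug', 'release']
```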

2011
M. Ali Basha Shaik, Amr El-Desoky Mousa, Ralf Schlüter, Hermann Ney

German is a highly inflected language with a large number of words derived from the same root. It makes use of a high degree of word compounding, leading to high out-of-vocabulary (OOV) rates and language model (LM) perplexities. For such languages, the use of sub-lexical units for Large Vocabulary Continuous Speech Recognition (LVCSR) becomes a natural choice. In this paper, we investigate the ...
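One common source of sub-lexical units for compounding languages is vocabulary-driven compound splitting; the toy sketch below illustrates only that idea and is not one of the specific unit types investigated in the paper.

```python
# Toy sketch of vocabulary-driven compound splitting, one common way to
# obtain sub-lexical units for compounding languages such as German.
# The recursive split and the optional linking "s" are illustrative only.
def split_compound(word, vocab, min_len=3):
    """Return a list of known parts, or [word] if no split is found."""
    if word in vocab:
        return [word]
    for i in range(min_len, len(word) - min_len + 1):
        head, tail = word[:i], word[i:]
        # allow a linking element ("Fugen-s") at the end of a part
        for h in (head, head.rstrip("s")):
            if h in vocab:
                rest = split_compound(tail, vocab, min_len)
                if all(part in vocab for part in rest):
                    return [h] + rest
    return [word]

vocab = {"donau", "dampf", "schiff", "fahrt"}
print(split_compound("donaudampfschifffahrt", vocab))
# -> ['donau', 'dampf', 'schiff', 'fahrt']
```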

2007
Stefan Th. Gries

In this paper, I investigate the phonological similarity of different elements of the phonological pole of multi-word units. I discuss two case studies on slightly different levels of abstractness. The first case study investigates lexically fully-specified V-NP idioms such as kick the bucket and lose one's cool; the idioms investigated are taken from the Collins Cobuild Dictionary of Idioms (2...
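As an illustration of quantifying phonological similarity between elements of a multi-word unit, the sketch below uses normalized edit distance over hand-written phoneme strings; both the transcriptions and the measure are assumptions, not the ones applied in the case studies.

```python
# Illustrative sketch: phonological similarity of the verb and noun in a
# V-NP idiom, measured as normalized edit distance over phoneme strings.
# The ARPAbet-style transcriptions below are hand-written placeholders.
def edit_distance(a, b):
    """Classic Levenshtein distance over two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def phon_similarity(p1, p2):
    """1.0 = identical phoneme strings, 0.0 = maximally different."""
    return 1.0 - edit_distance(p1, p2) / max(len(p1), len(p2))

kick = ["K", "IH", "K"]
bucket = ["B", "AH", "K", "AH", "T"]
print(round(phon_similarity(kick, bucket), 2))
```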

2012
Marilisa Amoia, Massimo Romanelli

In this paper, we describe the system we submitted to the SemEval-2012 Lexical Simplification Task. Our system (mmSystem) combines word frequency with decompositional semantics criteria based on syntactic structure in order to rank candidate substitutes of lexical forms of arbitrary syntactic complexity (one-word, multi-word, etc.) in descending order of (cognitive) simplicity. We believe that t...
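The frequency component of such a ranking can be illustrated in a few lines; the sketch below orders candidate substitutes by corpus frequency as a proxy for simplicity, with toy counts, and omits the decompositional-semantics criteria that mmSystem adds.

```python
# Minimal sketch: rank substitution candidates for a target expression by
# corpus frequency, a common proxy for (cognitive) simplicity. The counts
# are toy values; the paper's syntax-based criteria are not modeled here.
def rank_substitutes(candidates, freq):
    """Return candidates sorted from most to least frequent.
    Multi-word candidates get the frequency of their rarest word."""
    def simplicity(cand):
        return min(freq.get(w, 0) for w in cand.split())
    return sorted(candidates, key=simplicity, reverse=True)

freq = {"clear": 90_000, "obvious": 40_000, "self": 70_000, "evident": 8_000}
print(rank_substitutes(["self evident", "obvious", "clear"], freq))
# -> ['clear', 'obvious', 'self evident']
```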

Journal: Journal of Memory and Language 2021

• Chinese modifier-noun idioms are processed as Multi-Constituent Units (MCUs; a multi-word unit with a single lexical representation, see Zang, 2019). • MCUs are likely to be represented lexically as single units. • Parafoveal processing operates over the whole MCU. • Idioms with a 1-character verb + 2-character noun structure are processed as units foveally, but not parafoveally (Yu et al., 2016), probably because this structure only loosely constrains id...

Journal: Computational Linguistics 2001
Gökhan Tür, Dilek Z. Hakkani-Tür, Andreas Stolcke, Elizabeth Shriberg

We present a probabilistic model that uses both prosodic and lexical cues for the automatic segmentation of speech into topically coherent units. We propose two methods for combining lexical and prosodic information using hidden Markov models and decision trees. Lexical information is obtained from a speech recognizer, and prosodic features are extracted automatically from speech waveforms. We e...
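The combination idea can be caricatured as a log-linear merge of lexical and prosodic boundary probabilities at each candidate position; the sketch below is only that caricature, with made-up numbers, not the HMM or decision-tree models evaluated in the paper.

```python
# Schematic sketch of combining lexical and prosodic boundary evidence.
# Each candidate position gets P(boundary | lexical) and P(boundary | prosody);
# a log-linear combination with weight `lam` stands in for the paper's
# HMM / decision-tree combination, and the probabilities are made up.
import math

def combined_boundaries(lex_probs, pros_probs, lam=0.5, threshold=-1.0):
    """Return indices of positions labeled as topic boundaries."""
    boundaries = []
    for i, (pl, pp) in enumerate(zip(lex_probs, pros_probs)):
        score = lam * math.log(pl) + (1 - lam) * math.log(pp)
        if score >= threshold:
            boundaries.append(i)
    return boundaries

lexical = [0.05, 0.60, 0.10, 0.80]   # e.g. from word-based LM evidence
prosodic = [0.20, 0.70, 0.05, 0.90]  # e.g. from pause and pitch features
print(combined_boundaries(lexical, prosodic))  # -> [1, 3]
```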
