Search results for: context free grammar

Number of results: 944758

2000
Jose L. Verdú-Mas Jorge Calera-Rubio Rafael C. Carrasco

In this paper, we compare three different approaches to building a probabilistic context-free grammar for natural language parsing from a treebank corpus: (1) a model that simply extracts the rules contained in the corpus and counts the number of occurrences of each rule; (2) a model that also stores information about the parent node's category; and (3) a model that estimates the probabilities accord...
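The first of these models, relative-frequency estimation over treebank rules, can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: the bracketed-tree input format and all function names here are assumptions.

```python
from collections import Counter, defaultdict

def parse_tree(s):
    """Parse a bracketed tree string like "(S (NP John) (VP runs))"
    into (label, children) tuples; leaves are plain word strings."""
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()
    def helper(i):
        assert tokens[i] == '('
        label = tokens[i + 1]
        i += 2
        children = []
        while tokens[i] != ')':
            if tokens[i] == '(':
                child, i = helper(i)
            else:
                child, i = tokens[i], i + 1
            children.append(child)
        return (label, children), i + 1
    tree, _ = helper(0)
    return tree

def extract_rules(tree, counts):
    """Record one rule (lhs, rhs) per internal node, recursively."""
    label, children = tree
    rhs = tuple(c[0] if isinstance(c, tuple) else c for c in children)
    counts[(label, rhs)] += 1
    for c in children:
        if isinstance(c, tuple):
            extract_rules(c, counts)

def estimate_pcfg(treebank):
    """Relative-frequency estimate: P(A -> beta) = count(A -> beta) / count(A)."""
    counts = Counter()
    for s in treebank:
        extract_rules(parse_tree(s), counts)
    lhs_totals = defaultdict(int)
    for (lhs, _), n in counts.items():
        lhs_totals[lhs] += n
    return {rule: n / lhs_totals[rule[0]] for rule, n in counts.items()}
```

The parent-annotated variant (model 2) would differ only in the key: each node's label would be extended with its parent's category before counting.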

Journal: Computational Linguistics, 2007
David Chiang

We present a statistical machine translation model that uses hierarchical phrases—phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a parallel text without any syntactic annotations. Thus it can be seen as combining fundamental ideas from both syntax-based translation and phrase-based translation. We describe our system’s training and ...
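The "synchronous" aspect, one derivation yielding linked source and target strings, can be illustrated with a toy grammar. The rule names and mini-lexicon below are invented for illustration and are not Chiang's actual system; the reordering rule is modeled on the possessive construction often used to motivate hierarchical rules.

```python
# Toy synchronous CFG: each rule pairs a source RHS with a target RHS.
# Integers are linked nonterminal slots, filled by the same sub-derivation
# on both sides (hypothetical mini-grammar).
RULES = {
    "de":    ((0, "de", 1), ("the", 1, "of", 0)),  # X -> <X1 de X2, the X2 of X1>
    "china": (("Zhongguo",), ("China",)),
    "econ":  (("jingji",), ("economy",)),
}

def realize(derivation, side):
    """Walk one derivation tree and emit its source (side=0)
    or target (side=1) yield."""
    name, subs = derivation
    words = []
    for sym in RULES[name][side]:
        if isinstance(sym, int):
            words.append(realize(subs[sym], side))  # shared sub-derivation
        else:
            words.append(sym)
    return " ".join(words)

derivation = ("de", [("china", []), ("econ", [])])
print(realize(derivation, 0))  # Zhongguo de jingji
print(realize(derivation, 1))  # the economy of China
```

Because both yields come from the same derivation tree, translation amounts to parsing the source side and reading off the target side.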

2013
Mahesh Viswanathan V. S. P. Vijay Bhattiprolu

We show that every unary stochastic context-free grammar with polynomially bounded ambiguity has an equivalent probabilistic automaton.

2010
Shay B. Cohen Noah A. Smith

We consider the search for a maximum likelihood assignment of hidden derivations and grammar weights for a probabilistic context-free grammar, the problem approximately solved by “Viterbi training.” We show that solving and even approximating Viterbi training for PCFGs is NP-hard. We motivate the use of uniform-at-random initialization for Viterbi EM as an optimal initializer in the absence of furth...

Journal: J. Artif. Intell. Res., 2013
Ioannis Konstas Mirella Lapata

Concept-to-text generation refers to the task of automatically producing textual output from non-linguistic input. We present a joint model that captures content selection (“what to say”) and surface realization (“how to say it”) in an unsupervised, domain-independent fashion. Rather than breaking up the generation process into a sequence of local decisions, we define a probabilistic context-free g...

2008
Katsuhito Sudoh Taro Watanabe Jun Suzuki Hajime Tsukada Hideki Isozaki

The NTT Statistical Machine Translation System consists of two primary components: a statistical machine translation decoder and a reranker. The decoder generates k-best translation candidates using hierarchical phrase-based translation based on a synchronous context-free grammar. The decoder employs a linear feature combination among several real-valued scores on translation and language models...

Journal: IJCLCLP, 1999
Yi-Chung Lin Keh-Yih Su

In this paper, a level-synchronous parsing mechanism, named Phrase-Level Building (PLB), is proposed to incorporate wide-scope contextual information for parsing ill-formed sentences. This mechanism regards the task of parsing a sentence as the task of building the phrase levels for the sentence. Therefore, the wide-scope contextual information in the phrase levels can be used to help narrow dow...

2005
Yan Zhang Hideki Kashioka

The acquisition of grammar from a corpus is a challenging task in the preparation of a knowledge bank. In this paper, we discuss the extraction of Chinese grammar oriented to a restricted corpus. First, probabilistic context-free grammars (PCFG) are extracted automatically from the Penn Chinese Treebank and are regarded as the baseline rules. Then a corpus-oriented grammar is developed by adding...

2002
Jose L. Verdú-Mas Mikel L. Forcada Rafael C. Carrasco Jorge Calera-Rubio

In this paper, we compare three different approaches to building a probabilistic context-free grammar for natural language parsing from a treebank corpus: (1) a model that simply extracts the rules contained in the corpus and counts the number of occurrences of each rule; (2) a model that also stores information about the parent node’s category; and (3) a model that estimates the probabilities ac...

2005
Daniel A. Woods

Current methods model RNA sequence and secondary structure as stochastic context-free grammars, and then use a generative learning model to find the most likely parse (and, therefore, the most likely structure). As we learned in class, discriminative models generally enjoy higher performance than generative learning models. This implies that performance may increase if discriminative learning...
