Robust Training under Linguistic Adversity
Authors
Abstract
Deep neural networks have achieved remarkable results across many language processing tasks; however, they have been shown to be susceptible to overfitting and highly sensitive to noise, including adversarial attacks. In this work, we propose a linguistically-motivated approach for training robust models based on exposing the model to corrupted text examples at training time. We consider several flavours of linguistically plausible corruption, including lexical, semantic and syntactic methods. Empirically, we evaluate our method with a convolutional neural model across a range of sentiment analysis datasets. Compared with a baseline and the dropout method, our method achieves better overall performance.
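To illustrate the idea of training-time text corruption described above, the sketch below shows one plausible lexical corruption: randomly substituting tokens with near-synonyms before feeding examples to the model. The synonym table and function names here are illustrative assumptions, not the authors' implementation; in practice a lexical resource such as WordNet would supply the substitutions.

```python
import random

# Toy synonym table (hypothetical data for illustration only;
# a real setup would draw substitutions from a lexical resource).
SYNONYMS = {
    "good": ["great", "fine"],
    "bad": ["poor", "awful"],
    "movie": ["film"],
}

def corrupt_lexical(tokens, p=0.2, rng=random):
    """Return a corrupted copy of `tokens` where each token with a
    known synonym is replaced with probability p. This is one example
    of the lexical corruptions applied at training time."""
    out = []
    for tok in tokens:
        if tok in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[tok]))
        else:
            out.append(tok)
    return out

# Usage: corrupt a training sentence on the fly.
random.seed(0)
sentence = "a good movie with a bad ending".split()
print(corrupt_lexical(sentence, p=0.5))
```

Each minibatch would then mix clean and corrupted variants, so the model sees noisy but linguistically plausible inputs during training.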
Similar references
A Robust Reliable Closed Loop Supply Chain Network Design under Uncertainty: A Case Study in Equipment Training Centers
The aim of this paper is to propose a robust, reliable bi-objective supply chain network design (SCND) model that is capable of controlling different kinds of uncertainties concurrently. To this end, a stochastic bi-level scenario-based programming approach is used to model various scenarios related to the strike of disruptions. This well-known method helps to overcome the adverse effects of disr...
A replication of the relationship between adversity earlier in life and elderly suicide rates using five years cross-national data
BACKGROUND Although life-long adversity has been suggested as a protective factor for elderly suicides, studies examining protective factors for elderly suicides are scarce. A cross-national study examining the relationship between elderly suicide rates and several proxy measures of adversity earlier in life was undertaken to replicate earlier findings, using five consecutive years' data on el...
Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection
The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution. We investigate an extensive number of such unsupervised measures, using several distributional semantic models that differ by context type and feature weighting. We analyze the performance of the different methods base...
Adaptive Learning of Linguistic Hierarchy in a Multiple Timescale Recurrent Neural Network
Recent research has revealed that hierarchical linguistic structures can emerge in a recurrent neural network with a sufficient number of delayed context layers. As a representative of this type of network, the Multiple Timescale Recurrent Neural Network (MTRNN) has been proposed for recognising and generating known as well as unknown linguistic utterances. However, the training of utterances per...