text (string, lengths 4-222k) | label (int64, 0-4) |
---|---|
Fluent information flow is important in any information-intensive area of decision making, but critical in healthcare. Clinicians are responsible for making decisions with even life-and-death impact on their patients' lives. The flow is defined as links, channels, contact, or communication to a pertinent person or people in the organisation (Glaser et al., 1987). In Australian healthcare, failures in this flow are associated with over one-tenth of preventable adverse events (ACS, 2008; ACS, 2012). Failures in the flow are tangible in clinical handover, that is, when a clinician is transferring professional responsibility and accountability, for example, at shift change (AMA, 2006). Even when the verbal handover is accurate and comprehensive, anything from two-thirds to all of this information is lost after three to five shifts if no notes are taken or they are taken by hand (Pothier et al., 2005; Matic et al., 2011). There is a proposal to use a semi-automated approach of speech to text (STT) and information extraction (IE) for taking the handover notes (Suominen et al., 2013). First, an STT (a.k.a. speech recognition) engine converts verbal information into written, free-form text. Then, an IE system fills out a handover form by automatically identifying relevant text snippets for each slot of the form. Finally, this pre-filled form is given to a clinician to proof and sign off. The semi-automated approach poses an STT challenge. First, the correctness of STT is challenged by background noise, other people's voices, and other characteristics of clinical practice that are far from a typical setting in a peaceful office. Second, the STT errors multiply when cascaded with IE. Third, correctness in cascaded STT and IE needs to be carefully evaluated and shown to be excellent, because of the severe implications that errors may have in clinical decision-making. In summary, the original voice (i.e., information) needs to be heard in the big noise of the clinical setting and STT errors. Motivated by this challenge, we provide an analysis of STT errors and discuss the feasibility of phonetic similarity for their correction in this paper. Phonetic similarity (PS, a.k.a. phonetic distance) addresses perceptual confusion between speech sounds and is used to improve STT (Mermelstein, 1976). To illustrate phonetically similar words, PS measures can be seen as the rites of righting writing, that is right. The rest of the paper is organised as follows: In Section 2, we provide background for clinical STT and IE. In Section 3, we describe our simulated handover data, STT methods, PS measures, and analysis methods. In Section 4, we present the results of the error analysis and discuss the feasibility of phonetic similarity for error correction. In Section 5, final conclusions and directions for future work are given. | 0 |
Previous research has shown that formal ontologies could be used as a means not only to provide a uniform and flexible approach to integrating and describing heterogeneous data sources, but also to support the final user in querying them, thus improving the usability of the integrated system. To support wide access to these data sources, it is crucial to develop efficient and user-friendly ways to query them (Wache et al., 2001). In this paper, we present a Natural Language (NL) interface of an ontology-based query tool, called Quelo 1, which allows the end user to formulate a query without any knowledge either of the formal languages used to specify ontologies, or of the content of the ontology being used. Following the conceptual authoring approach described in (Tennant et al., 1983; Hallett et al., 2007), this interface masks the composition of a formal query as the composition of an English text describing the equivalent information needs using natural language generation techniques. The natural language generation system that we propose for Quelo's NL interface departs from similar work (Hallett et al., 2007; Franconi et al., 2010a; Franconi et al., 2011b; Franconi et al., 2010b; Franconi et al., 2011a) in that it makes use of standard grammar-based surface realisation techniques. Our contribution is twofold. First, we introduce a chart-based surface realisation algorithm which supports the kind of incremental processing required by ontology-driven query formulation. Crucially, this algorithm avoids confusing the end user by preserving a consistent ordering of the query elements throughout the incremental query formulation process. Second, we show that grammar-based surface realisation supports the generation of fluent, natural-sounding queries better than previous template-based approaches. The paper is structured as follows. Section 2 discusses related work and situates our approach. Section 3 describes the task being addressed, namely ontology-driven query formulation. It introduces the input being handled, the constraints under which generation operates and the operations the user may perform to build her query. In Section 4, we present the generation algorithm used to support the verbalisation of possible queries. Section 5 reports on an evaluation of the system with respect to fluency, clarity, coverage and incrementality. Section 6 concludes with pointers for further research. | 0 |
People use analogy heavily in written explanations. Instructional texts, for example, use analogy to convey new concepts and systems of related ideas to learners. Any learning by reading system must ultimately include the capability of understanding such analogies. Here we combine Gentner's (1983) structure-mapping theory with ideas from dialogue act theory (Traum, 2000) to describe a catalog of analogical dialogue acts (ADAs) which capture the functional roles that discourse elements play in instructional analogies. We outline criteria for identifying ADAs in text and describe what operations they suggest for discourse processing. We provide evidence that this model captures important aspects of understanding instructional analogies via a simulation that uses knowledge gleaned from reading instructional analogies to answer questions.We start by reviewing the relevant aspects of structure-mapping theory and dialogue act theory. Then we describe our catalog of analogical dialogue acts, based on a theoretical analysis of the roles structure-mapping operations can play in language understanding. A prototype implementation of these ideas is described next, followed by an experiment illustrating that these ideas can be used to understand analogies in text, based on answering questions. We close with a discussion of related and future work. | 0 |
Bilingual lexicons are an important resource for various natural language processing applications, such as computer-assisted translation or cross-language information retrieval. Although work based on parallel corpora 1 has shown very good results, this type of corpus remains difficult to collect (Fung and Yee, 1998), and more particularly so when dealing with specialised corpora or with rare or less commonly used language pairs. (1. A parallel corpus is a set of texts accompanied by their translations into one or more languages (Bowker and Pearson, 2002).) The exploitation of comparable corpora 2 marked a turning point in the task of bilingual lexicon extraction, and has attracted constant interest since the mid-1990s thanks to the abundance and availability of such corpora (Rapp, 1995; Fung, 1995; Rapp, 1999; Déjean et al., 2002; Gaussier et al., 2004; Laroche and Langlais, 2010). As the rise of the Web has considerably eased the collection of large quantities of multilingual data, comparable corpora have naturally established themselves as an alternative to parallel corpora. They have given rise to several lines of work whose common denominator is the hypothesis that words that are translations of each other are likely to appear in the same contexts (Rapp, 1999). This hypothesis follows directly from the often-cited proposition of Firth (1957): "You shall know a word by the company it keeps" 3. Rapp (1995) and Fung (1995) were the first to introduce comparable corpora. They relied on the idea of characterising the context of words, in contrast to work based on parallel corpora, which relied on positional information. The direct method was introduced in 1998 and has been taken up in numerous works, notably those of (Rapp, 1999). In this method, the translation of a word involves several steps. The word is first characterised by a vector representing its context. This vector is then translated into the target language using a dictionary, also called a transfer lexicon or pivot lexicon. Finally, this vector is compared with all the context vectors of the words in the target language, and the n closest are extracted as candidate translations. Subsequently, part of the work focused on adapting and improving this method for different types of corpora (general-language or specialised corpora), and for different languages and different types of terms (single-word terms, complex terms, collocations, etc.) (Déjean and Gaussier, 2002). New methods have also been proposed, such as the cross-language similarity approach (Déjean and Gaussier, 2002) and the use of Canonical Correlation Analysis (CCA) (Haghighi et al., 2008). Recently, Li and Gaussier (2010) and Li et al. (2011) addressed the inverse aspect, which consists in improving the comparability of comparable corpora in order to increase the effectiveness of bilingual lexicon extraction methods. Most work using comparable corpora has context as a common denominator, which lies at the heart of bilingual lexical extraction. The main question to ask is then the following: given any word, how do we choose the words that best characterise its context?
According to the state of the art, the context of a given word is usually represented by the words that are part of its environment, that is, the words surrounding it. These words are extracted either with a contextual window (Rapp, 1999; Déjean and Gaussier, 2002) or through syntactic dependency relations (Gamallo, 2007). One of the problems underlying contexts extracted with contextual windows is the choice of the window size. It is usually set empirically, and although various studies have shown a tendency to choose small windows when characterising frequent words and large windows when characterising infrequent words (Prochasson and Morin, 2009), this remains imprecise, as there is still no method considered optimal for choosing the size of the contextual window. As for syntactic dependency relations, their effectiveness is very sensitive to corpus size, and although this representation is more interesting from a semantic point of view, it reaches its limits when dealing with small corpora. A proposal that naturally comes to mind is to use these two representations jointly in order to take advantage of their respective strengths. A first approach exploiting both representations, proposed by Andrade et al. (2011), combines four statistical models and compares lexical dependencies to identify candidate translations. In this article, we propose another way of combining the two preceding contextual representations, starting from the intuition that this combination would smooth the context by taking into account two complementary pieces of information: (i) the global information conveyed by the contextual-window representation and (ii) finer semantic information provided by the syntactic dependency relations. The objective is to improve the contextual representation and the performance of bilingual lexicon extraction from comparable corpora. In the remainder of this article, we present in Section 2 the two main strategies for contextual representation. Section 3 then describes our two approaches for combining contexts. Section 4 focuses on the evaluation of the implemented methods. We conclude with a discussion in Section 5 and a conclusion in Section 6. | 0 |
Visual icons play a crucial role in conveying an extra level of information in social media. The SemEval 2018 shared task (Task 2, Multilingual Emoji Prediction) asked researchers to predict, given a tweet in English or Spanish, its most likely associated emoji (Barbieri et al., 2018, 2017); it is organized into two optional subtasks (subtask 1 and subtask 2), in English and Spanish respectively. For subtask 1, we adopt a combination model to predict emojis, which consists of traditional Natural Language Processing (NLP) methods and deep learning methods. The results returned by the classifier with traditional NLP features, by the neural network model and by the combination model are combined by voting to get the final result. For subtask 2, we only use the deep learning model. | 0 |
The task of question answering (QA) in Natural Language Processing typically involves producing an answer for a given question using a context that contains evidence to support the answer. The latest advances in pre-trained language models have resulted in performance close to (and sometimes exceeding) human performance when fine-tuned on several QA benchmarks (Brown et al., 2020; Bao et al., 2020; Raffel et al., 2020). However, to achieve this result, these models need to be fine-tuned on tens of thousands of examples. In a more realistic and practical scenario, where only a handful of annotated training examples are available, their performance degrades significantly. For instance, Ram et al. (2021) show that, when only 16 training examples are available, RoBERTa-base (Liu et al., 2019) and SpanBERT-base (Joshi et al., 2020) obtain F1 scores of 7.7 and 18.2, respectively, on SQuAD (Rajpurkar et al., 2016). This is far lower than the F1 scores of 90.3 and 92.0 obtained when using the full training set of more than 100,000 examples. Through experimental analysis, we observe that this degradation is largely attributable to the disparities between the fine-tuning and pre-training frameworks (a combination of the input-output design and the training objective). To address this, we propose a fine-tuning framework (referred to hereafter as FewshotQA) that is directly aligned with the pre-training framework, in terms of both the input-output design and the training objective. Specifically, we construct the input as a concatenation of the question, a mask token and the context (in that order) and fine-tune a text-to-text pre-trained model using the same objective used during its pre-training to recover the answer. These text-to-text pre-trained models were originally trained to recover missing spans of text in a given input sequence. Since our proposed fine-tuning setup is nearly identical to the pre-training setup, the model can make the best use of the pre-training "knowledge" for the fine-tuning task of question answering. The effectiveness of our FewshotQA system is shown by its strong results (an absolute average gain of 34.2 F1 points) on multiple QA benchmarks in a few-shot setting. We show that the gains extend further when larger models are used. We also test FewshotQA on a multilingual benchmark by replacing the pre-trained model with its multilingual counterpart and observe significant gains in comparison to a strong XLM-RoBERTa baseline (an absolute gain of 40 F1 points when there are only 16 training examples). 2 Few-shot fine-tuning framework design: Our proposed few-shot fine-tuning framework involves a different choice of input-output design and training objective than the current standard for QA fine-tuning frameworks. We provide a motivation for this design by comparison with the existing frameworks. Figure 1 illustrates this in detail. The pre-training framework is also pictured for comparison. Note that we focus on bi-directional masked language models (MLMs) instead of auto-regressive language models (such as GPT-2 (Radford et al., 2019)), as MLMs are typically deemed superior for QA tasks (Lewis et al., 2020). Figure 1a illustrates the comparison between pre-training setups for three types of models.
Firstly, there are BERT-style encoder-only models (referred to as BERT*) that are pre-trained with the standard masked language modeling objective (also called a denoising objective) of predicting the masked tokens in an input sequence I. The masked tokens here typically correspond to a single word or a sub-word. Then, BART (Lewis et al., 2020) uses a corrupted-input reconstruction objective to recover the original input. | 0 |
The paper presents a hybrid approach for Deep Semantic Machine Translation. For that purpose, however, the linguistic phenomena that constitute deep semantics have to be defined. Lists of such phenomena have been considered in (Hajič, 2011) and (Bos, 2013), among others. They include but are not limited to the following: Semantic Roles (words vs. predicates), Lexical Semantics (Word Sense Disambiguation (WSD)), Multiword Expressions (MWE), Logical Form (LF), Metonymy, Named Entities (NE), Co-reference (pronominal, bridging anaphora), Verb Phrase Ellipsis, Collective/Distributive NPs, Scope (Negation, Quantifiers), Presuppositions, Tense and Aspect, Illocutionary Force, Textual Entailment, Discourse Structure/Rhetorical Relations, neo-Davidsonian Events, Background Knowledge, Information Structure, etc. All the mentioned phenomena represent various levels of granularity and different linguistic dimensions. In our deep Machine Translation (MT) system we decided to exploit the following components in the transfer phase: Lexical Semantics (WSD), Multiword Expressions (MWE), Named Entities (NE) and Logical Form (LF). For the incorporation of lexical semantics, the knowledge-based approach to WSD has been adopted, exploiting WordNet and a valency dictionary. Concerning the LF, we rely on Minimal Recursion Semantics (MRS) in its two variants: the full one (MRS) and the more underspecified one (Robust MRS (RMRS)). The MWEs and NEs are parts of the lexicons. We should note that there are also other appropriate LF frameworks, which are briefly mentioned below. One of the MRS-related semantic formalisms is the Abstract Meaning Representation (AMR 1), which aims at achieving whole-sentence deep semantics instead of addressing various isolated holders of semantic information (such as NER, co-references, temporal anchors, etc.). AMR also builds on the available syntactic trees, thus contributing to the efforts on sembanking. It is English-dependent and it makes extensive use of PropBank framesets (Kingsbury and Palmer, 2002) and (Palmer et al., 2005). Its concepts are either English words or special keywords. AMR uses approximately 100 relations. They include: frame arguments, general semantic relations, relations for quantities and date-entities, etc. The Groningen Meaning Bank (GMB) integrates various phenomena in one formalism. It has a linguistically motivated, theoretically solid (CCG 2/DRT 3) background. In this paper the NLP strategies are presented for Hybrid Deep Machine Translation in the direction from English to Bulgarian. Under Hybrid MT we understand the usage of the automatic Moses system together with a rule-based component in the transfer phase. The paper is structured as follows: in Section 2 the components of the hybrid MT architecture are presented. Section 3 discusses the deep semantic processing. Section 4 reports on the current experiments and results. Section 5 concludes the paper. | 0 |
Endeavors to better understand transformer-based masked language models (MLMs), such as BERT, have been growing ever since their introduction in 2017 (cf. Rogers et al. (2020) for an overview). While the BERTology movement has enhanced our knowledge of the reasons behind BERT's performance in various ways, plenty still remains unanswered. Less well studied, and challenging, are linguistic phenomena where, besides contextual information, identification of an antecedent is needed, such as relative clauses (RCs). Prior work, for example, analyzed BERT's comprehension of function words, showing that relativizers and prepositions are quite challenging for BERT. Similarly, RCs have been found to be difficult for BERT in the CoLA acceptability tasks. In this paper, we focus on RCs in American English to further enhance our understanding of the grammatical and semantic knowledge captured by pre-trained MLMs, evaluating three models: BERT, RoBERTa, and ALBERT. For our analysis, we train probing classifiers, consider each model's performance on diagnostic cases, and test predictions in a masked language modeling task on selected semantic and grammatical constraints of RCs. RCs are clausal post-modifiers specifying a preceding noun phrase (antecedent) and are introduced by a relativizer (e.g., which). Extensive corpus research (Biber et al., 1999) found that the overall most common relativizers are that, which, and who. The relativizer occupies the subject or object position in a sentence (see examples (1-a) and (1-b)). In subject RCs, the relativizer is obligatory (Huddleston and Pullum, 2002, 1055), while in object position omission is licensed (e.g., zero in example (1-b)). (1) a. Children who eat vegetables are likely to be healthy. (subject relativizer, relativizer is obligatory) b. This is the dress [that/which/zero] I brought yesterday. (object relativizer, omission possible) Relativizer choice depends on an interplay of different factors. 1 Among these factors, the animacy constraint (Quirk, 1957) is near-categorical: for animate head nouns the relativizer who (see Example 1) is strongly prioritized (especially over which) (D'Arcy and Tagliamonte, 2010). Our aims are (1) to better understand whether sentence representations of pre-trained MLMs capture grammaticality in the context of RCs, (2) to test the generalization abilities and weaknesses of probing classifiers with complex diagnostic cases, and (3) to test prediction of antecedents and relativizers in a masked task, considering also linguistic constraints. From a linguistic perspective, we ask whether MLMs correctly predict (a) grammatically plausible relativizers given certain types of antecedents (animate, inanimate) and, vice versa, grammatically plausible antecedents given certain relativizers (who vs. which/that), and (b) semantically plausible antecedents given certain relativizers, considering the degree of specificity of predicted antecedents in comparison to target antecedents (e.g., boys as a more specific option than children in Example (1)). Moreover, we are interested in how these findings agree with probing results and investigate model-specific behavior, evaluating and comparing the recent pre-trained MLMs: BERT, RoBERTa, and ALBERT.
This is to our knowledge the first attempt comparing and analyzing performance of different transformer-based MLMs in such detail, investigating grammatical and semantic knowledge beyond probing.Our main contributions are the following: (1) the creation of a naturalistic dataset for probing, (2) a detailed model comparison of three recent pre-trained MLMs, and (3) fine-grained linguistic analysis on grammatical and semantic knowledge. Overall, we find that all three MLMs show good performance on the probing task. Further evaluation, however, reveals model-specific issues with wrong agreement (where RoBERTa is strongest) and distance between antecedent-relativizer and relativizer-RC verb (on which BERT and ALBERT are better). Considering linguistic knowledge, all models perform better on grammatical rather than semantic knowledge. Out of the relativizers, which is hardest to predict. Considering model-specific differences, BERT outperforms the others in predicting the actual targets, while RoBERTa captures best grammatical and semantic knowledge. ALBERT performs worst overall. | 0 |
Recent years have seen increased interest as well as rapid progress in semantic parsing and surface realization based on graph-structured semantic representations, e.g. Abstract Meaning Representation (AMR; Banarescu et al., 2013), Elementary Dependency Structure (EDS; Oepen and Lønning, 2006) and Dependency-based Minimal Recursion Semantics (DMRS; Copestake, 2009). Still underexploited is a formal framework for manipulating graphs that parallels automata, transducers or formal grammars for strings and trees. Two such formalisms have recently been proposed and applied to NLP. One is graph grammar, e.g. Hyperedge Replacement Grammar (HRG; Ehrig et al., 1999). The other is DAG automata, originally studied by Kamimura and Slutzki (1982) and extended by Chiang et al. (2018). In this paper, we study DAG transducers in depth, with the goal of building accurate, efficient yet robust natural language generation (NLG) systems. The meaning representation studied in this work is what we call type-logical semantic graphs, i.e. semantic graphs grounded in type-logical semantics (Carpenter, 1997), one dominant theoretical framework for modeling natural language semantics. In this framework, adjuncts, such as adjective and adverbial phrases, are analyzed as (higher-order) functors, the function of which is to consume complex arguments (Kratzer and Heim, 1998). In the same spirit, generalized quantifiers, prepositions and function words in many languages other than English are also analyzed as higher-order functions. Accordingly, all the linguistic elements are treated as roots in type-logical semantic graphs, such as EDS and DMRS. This makes the topological structure quite flat rather than hierarchical, which is an essential distinction between natural language semantics and syntax. To the best of our knowledge, the only existing DAG transducer for NLG is the one proposed by Quernheim and Knight (2012). Quernheim and Knight introduced a DAG-to-tree transducer that can be applied to AMR-to-text generation. This transducer is designed to handle hierarchical structures with limited reentrancies, and it is unsuitable for meaning graphs transformed from type-logical semantics. Furthermore, Quernheim and Knight did not describe how to acquire graph recognition and transduction rules from linguistic data, and reported no results on practical generation. It is still unknown to what extent a DAG transducer suits realistic NLG. The design of string and tree transducers (Comon et al., 1997) focuses not only on the logic of the computation for a new data structure, but also on the corresponding control flow. This is very similar to the imperative programming paradigm: implementing algorithms with exact details in explicit steps. This design makes it very difficult to transform a type-logical semantic graph into a string, due to the fact that their internal structures are highly diverse. We borrow ideas from declarative programming, another programming paradigm, which describes what a program must accomplish rather than how to accomplish it. We propose a novel DAG transducer to perform graph-to-program transformation (§3). The input of our transducer is a semantic graph, while the output is a program licensed by a declarative programming language rather than a linguistic structure. By executing such a program, we can easily get a surface string. This idea can be extended to other types of linguistic structures, e.g.
syntactic trees or semantic representations of another language. We conduct experiments on richly detailed semantic annotations licensed by the English Resource Grammar (ERG; Flickinger, 2000). We introduce a principled method to derive transduction rules from DeepBank (Flickinger et al., 2012). Furthermore, we introduce a fine-to-coarse strategy to ensure that at least one sentence is generated for any input graph. Taking EDS graphs, a variable-free ERS format, as input, our NLG system achieves a BLEU-4 score of 68.07. On average, it produces more than 5 sentences per second on an x86_64 GNU/Linux platform with two Intel Xeon E5-2620 CPUs. Since the data for the experiments are newswire data, i.e. WSJ sentences from the PTB (Marcus et al., 1993), the input graphs are quite large on average. The remarkable accuracy, efficiency and robustness demonstrate the feasibility of applying a DAG transducer to NLG, as well as the effectiveness of our transducer design. | 0 |
Recently many kinds of natural language processing systems, like machine translation systems, have been developed and put into practical use, but ambiguity resolution in translation and meaning interpretation is still the primary issue in such systems. These systems have conventionally adopted a rule-based disambiguation method, using linguistic restrictions described logically in dictionary and grammar to select the suitable equivalent translation and meaning. Generally speaking, it is impossible to provide all the restrictions systematically in advance. Furthermore, such machine translation systems have suffered from an inability to select the most suitable equivalent translation if the input expression meets two or more restrictions, and have difficulty in accepting any input expression that meets no restrictions. In order to overcome these difficulties, the following methods have been proposed in recent years. Still, each of them has inherent problems and is insufficient for ambiguity resolution. For example, either an example-based translation method or a statistics-based translation method needs a large-scale database of translation examples, and it is difficult to collect an adequate amount of a bilingual corpus. In this paper, we propose a new method to select the suitable equivalent translation using statistical data extracted independently from source and target language texts [Muraki 91]. The statistical data used here are linguistic statistics representing the dependency degree on pairs of expressions in each text, especially statistics for co-occurrence, i.e., how frequently the expressions co-occur in the same sentence, the same paragraph or the same chapter of each text. The dependency relation in the source language is reflected in the translated text through a bilingual dictionary by selecting the equivalent translation which maximizes both statistics for co-occurrence in the source and target language texts. Moreover, the method also provides the means to compute the linguistic statistics on pairs of meaning expressions. We call this method for equivalent translation and meaning selection DMAX Criteria (Double Maximize Criteria based on Dual Corpora). First, we comment on the characteristics and the limits of the conventional methods of ambiguity resolution in translation and meaning interpretation in the second section. Next, we describe the details of DMAX Criteria for equivalent translation selection in the third section. And last, we explain the means to compute the linguistic statistics on pairs of meaning expressions. | 0 |
Topic adaptation is used as a technique to adapt language models based on small contexts of information that may not necessarily reflect an entire domain or genre. In scenarios such as lecture translation, it is advantageous to perform language model adaptation on the fly to reflect topical changes in a discourse. In these scenarios, general-purpose domain adaptation techniques fail to capture the nuances of discourse; while domain adaptation works well in modeling newspapers and government texts, which contain a limited number of subtopics, the genres of lectures and speech may cover a virtually unbounded number of topics that change over time. Instead of general-purpose adaptation, adaptation should be performed on smaller windows of context. Most domain adaptation techniques require the re-estimation of an entire language model to leverage the use of out-of-domain corpora in the construction of robust models. While efficient algorithms exist for domain adaptation, they are in practice intended to adapt language models globally over a new translation task. Topic adaptation, on the other hand, intends to adapt language models as relevant contextual information becomes available. For a speech, the relevant contextual information may come in sub-minute intervals. Well-established and efficient techniques such as Minimum Discrimination Information adaptation [1, 2] are unable to perform topic adaptation in real-time scenarios for large-order n-gram language models. In practice, new contextual information is likely to be available before techniques such as MDI have finished LM adaptation from earlier contexts. Thus spoken language translation systems are typically unable to use the state-of-the-art techniques for the purpose of topic adaptation. In this paper, we seek to apply MDI adaptation techniques in real-time translation scenarios by avoiding the computation of the normalization term that requires all n-grams to be re-estimated. Instead, we only wish to adapt n-grams that appear within an adaptation context. Dubbed "Lazy MDI", our technique uses the same unigram ratios as MDI, but avoids normalization by applying smoothing transformations based on a sigmoid function that is added as a new feature to the conventional log-linear model of phrase-based statistical machine translation (SMT). We observe that Lazy MDI performs comparably to classic MDI in topic adaptation for SMT, but possesses the desired scalability features for real-time adaptation of large-order n-gram LMs. This paper is organized as follows: In Section 2, we discuss relevant previous work. In Section 3, we review MDI adaptation. In Section 4, we describe Lazy MDI adaptation for machine translation and review how unigram statistics of adaptation texts can be derived using bilingual topic modeling. In Section 5, we report adaptation experiments on TED talks 1 from IWSLT 2010 and 2012, followed by our conclusions and suggestions for future work in Section 6. | 0 |
Scientific publications play an important role in dissemination of advances, and they are often reviewed and accepted by professionals in the domain before publication to maintain quality. In order to avoid unfairness due to identity, affiliation, and nationality biases, peer review systems have been studied extensively (Yankauer, 1991; Blank, 1991; Lee et al., 2013) , including analysis of the opinions of venue editors (Brown, 2007; Baggs et al., 2008) and evaluation of review systems (Yankauer, 1991; Tomkins et al., 2017) . It is widely believed that a possible solution for avoiding biases is to keep the author identity blind to the reviewers, called double-blind review, as opposed to only hiding the identity of the reviewers, as in single-blind review (Lee et al., 2013) . Since some personal information (e.g., author, affiliation and nationality) could implicitly affect the review results (Lee et al., 2013) , these procedures are required to keep them anonymous in double-blind review, but this is not foolproof. For example, experienced reviewers could identify some of the authors in a submitted manuscript from the context. In addition, the citation list in the submitted manuscript can be useful in identifying them (Brown, 2007) , but is indispensable as it plays an important role in the reviewing process to refer readers to related work and emphasize how the manuscript differs from the cited work.To investigate blindness in double-blind review systems, Hill and Provost (2003) and Payer et al. (2015) train a classifier to predict the authors, and analyze the results. However, they focus primarily on the utility of self-citations in the submitted manuscripts as a key to identification (Mahoney et al., 1978; Yankauer, 1991; Hill and Provost, 2003; Payer et al., 2015) , and do not take author's citation history beyond just self-citations into account. The experiment design in these studies is also limited: they use relatively small datasets, include papers only from a specific domain (e.g., physics (Hill and Provost, 2003) , computer science (Payer et al., 2015) or natural language processing (Caragea et al., 2019) ), and pre-select the set of papers and authors for evaluation (Payer et al., 2015; Caragea et al., 2019) . Furthermore, they focus on author identification, whereas knowing affiliation and the nationality also introduces biases in the reviewing process (Lee et al., 2013) .In this paper, we use the task of author identity, affiliation, and nationality predictions to analyze the extent to which citation patterns matter, evaluate our approach on large-scale datasets in many domains, and provide detailed insights into the ways in which identity is leaked. We describe the following contributions: 1. We propose approaches to identify the aspects of the citation patterns that enable us to guess the authors, affiliations, and nationalities accurately. To the best of our knowledge, this is the first study to do so. Though related studies mainly suggest authors avoid self-citations for increasing anonymity of submitted papers, we show that overlap between the citations in the paper and the author's previous citations is an incredibly strong signal, even stronger than self-citations in some settings. 2. 
Our empirical study (i) is performed on a real-world large-scale dataset with various fields of study (computer science, engineering, mathematics, and social science), (ii) studies different relations between papers and authors, and (iii) considers two identification situations: "guess-at-least-one" and "cold start". For the former, we identify authors, affiliations and nationalities of the affiliations with 40.3%, 47.9% and 86.0% accuracy, respectively, from the top-10 guesses. For the latter, we focus on papers whose authors are not "guessable", and find that the nationalities are still identifiable. 3. We perform further analysis on the results to answer some common questions on blind-review systems: "Which authors are most identifiable in a paper?", "Are prominent affiliations easier to identify?", and "Are double-blind reviewed papers more anonymized than single-blind?". One of the interesting findings is that 93.8% of test papers written by a prominent company can be identified with top-10 guesses. The dataset used in this work is publicly available, and the complete source code for processing the data and running the experiments is also available. 2 | 0 |
Human communication, in real-life situations, is multimodal (Kress, 2010): To convey and understand a message uttered in natural language, people build on what is present in the multimodal context surrounding them. As such, speakers do not need to "repeat" something that is already provided by the environment; similarly, listeners leverage information from various modalities, such as vision, to interpret the linguistic message. Integrating information from multiple modalities is indeed crucial for attention and perception (Partan and Marler, 1999), since combined information from concurrent modalities can give rise to different messages (McGurk and MacDonald, 1976). The argument that language and vision convey different, possibly complementary aspects of meaning has largely been made to motivate the need for multimodal semantic representations of words (Baroni, 2016; Beinborn et al., 2018). However, computational approaches to language and vision typically do not fully explore this complementarity. To illustrate, given an image (e.g., the one depicted in Figure 1), popular tasks involve describing it in natural language, e.g., "A tennis player about to hit the ball" (Image Captioning; see Bernardi et al., 2016); answering questions that are grounded in it, e.g., Q: "What sport is he playing?", A: "Tennis" (Visual Question Answering; see Antol et al., 2015); having a dialogue on its entities, e.g., Q: "Is the person holding a racket?", A: "Yes." (visually-grounded dialogue; see De Vries et al., 2017; Das et al., 2017). While all these tasks challenge models to perform visual grounding, i.e., an effective alignment of language and vision, none of them requires a genuine combination of complementary information provided by the two modalities. All the information is fully available in the visual scene, and language is used to describe or retrieve it. In this work, we propose a novel benchmark, Be Different to Be Better (in short, BD2BB), where the different, complementary information provided by the two modalities should push models to develop a better, richer multimodal representation. As illustrated in Figure 1, models are asked to choose, among a set of candidate actions, the one a person who sees the visual context depicted by the image would do based on a certain intention (i.e., their goal, attitude or feeling). Crucially, the resulting multimodal input (the sum of the image and the intention) will be richer compared to that conveyed by either modality in isolation; in fact, the two modalities convey complementary or non-redundant information (Partan and Marler, 1999). To illustrate, a model that only relies on the (non-grounded) linguistic information conveyed by the intention, i.e., "If I have tons of energy", might consider as equally plausible any actions that have to do with playing a sport, e.g., "I will play baseball with the men" or "I will play a game of tennis with the man". [Figure 1: One real sample of our proposed task. Given an image depicting, e.g., a tennis player during a match and the intention "If I have tons of energy", the task involves choosing, from a list of 5 candidate actions, the target action that unequivocally applies to the combined multimodal input: "I will play a game of tennis with the man". The task is challenging: a model exploiting a language or vision bias could fall into the trap of decoy actions containing words highlighted in blue or orange, respectively. Therefore, selecting the target action requires models to perform a genuine integration of the two modalities, whose information is complementary. Best viewed in color.] Similarly, a model that only relies on the visual information conveyed by the image, a tennis player during a match, might consider as equally plausible any actions that have to do with 'tennis' and/or 'player', e.g., "I will applaud my favourite tennis player of all time" or "I will play a game of tennis with the man". In contrast, a model that genuinely combines information conveyed by both modalities should be able to select the target action, namely the only one that is both consistent with the intention and grounded in the image, i.e., "I will play a game of tennis with the man". Moreover, similarly to real-life communicative scenarios, in our approach different language inputs modulate the same visual context differently, and this gives rise to various multimodal messages. To illustrate, if the image in Figure 1 is paired with the intention "If I am tired watching", the target action "I will play a game of tennis with the man" is no longer valid. Indeed, the target action in this context is "I will leave the tennis court" (see Figure 3). Our work has the following key contributions: • We introduce a novel multimodal benchmark: the set of ∼10K image, intention, action datapoints collected via crowdsourcing and enriched with meta-annotation; the multiple-choice task, BD2BB, which requires proper integration of language and vision and is specifically aimed at testing SoA pretrained multimodal models. The benchmark, together with the code and trained models, is available at: https://sites.google.com/view/bd2bb • We test various models (including the SoA multimodal, transformer-based LXMERT; Tan and Bansal, 2019) and show that, while BD2BB is a relatively easy task for humans (∼80% acc.), the best systems struggle to achieve a similar performance (∼60% acc.). • We extensively analyze the results and show the advantage of exploiting multimodal pretrained representations. This confirms they are effective, but not enough to solve the task. | 0 |
Transliteration is the transformation of a piece of text from one language's writing system into another. Since the transformation is mostly explained as local substitutions, deletions, and insertions, we treat word transliteration as a sequence labeling problem (Ganesh et al., 2008; Reddy and Waxmonsky, 2009) , using linear-chain conditional random fields as our model (Lafferty et al., 2001; Sha and Pereira, 2003) . We tailor this model to the transliteration task in several ways.First, for the Arabic-English task, each Arabic input is paired with multiple valid English transliteration outputs, any of which is judged to be correct. To effectively exploit these multiple references during learning, we use a training objective in which the model may favor some correct transliterations over the others. Computationally efficient inference is achieved by encoding the references in a lattice.Second, inference for our first-order sequence labeling model requires a runtime that is quadratic in the number of labels. Since our labels are character n-grams in the target language, we must cope with thousands of labels. To make the most of each inference call during training, we apply a mini-batch training algorithm which converges quickly.Finally, we wish to consider some global features that would render exact inference intractable. We therefore use a reranking model (Collins, 2000) .We demonstrate the performance benefits of these modifications on the Arabic-English transliteration task, using the open-source library cdec (Dyer et al., 2010) 1 for learning and prediction. | 0 |
Since OWL (Web Ontology Language) was adopted as a standard in 2004, researchers have sought ways of mediating between the (decidedly cumbersome) raw code and the human users who aspire to view or edit it. Among the solutions that have been proposed are more readable coding formats such as Manchester OWL Syntax (Horridge et al., 2006), and graphical interfaces such as Protégé (Knublauch et al., 2004); more speculatively, several research groups have explored ways of mapping between OWL and controlled English, with the aim of presenting ontologies (both for viewing and editing) in natural language (Schwitter and Tilbrook, 2004; Sun and Mellish, 2006; Kaljurand and Fuchs, 2007; Hart et al., 2008). In this paper we uncover and test some assumptions on which this latter approach is based. Historically, ontology verbalisation evolved from a more general tradition (predating OWL and the Semantic Web) that aimed to support knowledge formation by automatic interpretation of texts authored in Controlled Natural Languages (Fuchs and Schwitter, 1995). The idea is to establish a mapping from a formal language to a natural subset of English, so that any sentence conforming to the Controlled Natural Language (CNL) can be assigned a single interpretation in the formal language, and conversely, any well-formed statement in the formal language can be realised in the CNL. With the advent of OWL, some of these CNLs were rapidly adapted to the new opportunity: part of Attempto Controlled English (ACE) was mapped to OWL (Kaljurand and Fuchs, 2007), and Processable English (PENG) evolved to Sydney OWL Syntax (SOS) (Cregan et al., 2007). In addition, new CNLs were developed specifically for editing OWL ontologies, such as Rabbit (Hart et al., 2008) and Controlled Language for Ontology Editing (CLOnE) (Funk et al., 2007). In detail, these CNLs display some variations: thus an inclusion relationship between the classes Admiral and Sailor would be expressed by the pattern 'Admirals are a type of sailor' in CLOnE, 'Every admiral is a kind of sailor' in Rabbit, and 'Every admiral is a sailor' in ACE and SOS. However, at the level of general strategy, all the CNLs rely on the same set of assumptions concerning the mapping from natural to formal language; for convenience we will refer to these assumptions as the consensus model. In brief, the consensus model assumes that when an ontology is verbalised in natural language, axioms are expressed by sentences, and atomic terms are expressed by entries from the lexicon. Such a model may fail in two ways: (1) an ontology might contain axioms that cannot be described transparently by a sentence (for instance, because they contain complex Boolean expressions that lead to structural ambiguity); (2) it might contain atomic terms for which no suitable lexical entry can be found. In the remainder of this paper we first describe the consensus model in more detail, then show that although [Table: Logic notation and corresponding OWL syntax — C ⊓ D: IntersectionOf(C D); ∃P.C: SomeValuesFrom(P C); C ⊑ D: SubClassOf(C D); a ∈ C: ClassAssertion(C a); [a, b] ∈ P: PropertyAssertion(P a b)] | 0 |
Automatic PI is the task of detecting if two texts convey the same meaning. For example, the following two sentences from the Microsoft Research Paraphrase Corpus (MSRP) (Dolan et al., 2004): S1a: Although it's unclear whether Sobig was to blame, The New York Times also asked employees at its headquarters yesterday to shut down their computers because of "system difficulties." S1b: The New York Times asked employees at its headquarters to shut down their computers yesterday because of "computing system difficulties." are paraphrases, while these other two are not: S2a: Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, agreed. S2b: "We have been somewhat lucky," said Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases. Most previous work on automatic PI, e.g., (Madnani et al., 2012; Socher et al., 2011), is based on a direct comparison between the two texts, exploiting different similarity scores within a machine learning framework. However, these methods consider sentences as monolithic units and can thus be misled by ancillary information that does not modify the main meaning expressed in the text. For example, the additional text fragment (ATF) "Although it's unclear whether Sobig was to blame" from S1a expresses ancillary information, which does not add much to the message of S1b; thus the sentences are considered paraphrases. In contrast, S2b contains the ATF "We have been somewhat lucky", whose meaning is not linked to any constituent of S2a. Since such text expresses relevant information, the two sentences are not considered paraphrases. In this paper, we study and design models for extracting ATFs from a sentence with respect to another one and classifying whether their meaning is ancillary or important. For this purpose, we built a corpus of sentence pairs using MSRP, where at least one pair member always contains ATFs. We use SVMs with tree kernels applied to syntactic representations (Severyn and Moschitti, 2012) of ATFs for learning automatic ATCs. The results derived on MSRP show (i) a promising accuracy of our ATC and (ii) that the output of the ATC can be used as a feature for improving the state-of-the-art PI model. | 0 |
The concept of "translating" an erroneous sentence into a correct one was first researched by Brockett et al. (2006). They proposed a statistical machine translation (SMT) system with a noisy channel model to automatically correct erroneous sentences for learners of English as a Second Language (ESL). It seems that statistical machine translation toolkits have become increasingly popular for grammatical error correction. In the CoNLL-2014 shared task on English grammatical error correction (Ng et al., 2014), four of the 13 participating teams used a phrase-based SMT system. Grammatical error correction using a phrase-based SMT system can be improved by tuning with evaluation metrics such as F0.5 (Kunchukuttan et al., 2014; Wang et al., 2014) or even a combination of different tuning algorithms (Junczys-Dowmunt and Grundkiewicz, 2014). In addition, SMT can be merged with other methods. For example, the language model-based and rule-based methods can be integrated into a single sophisticated but effective system (Felice et al., 2014). For Chinese, SMT has also been used to correct spelling errors (Chiu et al., 2013). Furthermore, as shown in NLP-TEA-1, an SMT system can be applied to Chinese grammatical error correction if we can employ a large-scale learner corpus (Zhao et al., 2014). In this study, we extend our previous SMT-based system (Zhao et al., 2014) to the NLP-TEA-2 shared task on Chinese grammatical error diagnosis. The main contributions of this study are as follows: We investigate the hierarchical phrase-based model (Chiang et al., 2005) and determine that it yields higher recall, and thus a higher F score, than the phrase-based model, but is less accurate. We increase our Chinese learner corpus by web scraping (Yu et al., 2012; Cheng et al., 2014) and show that the greater the size of the learner corpus, the better the performance. We perform minimum error-rate training (Och, 2003) using several evaluation metrics and demonstrate that tuning improves the final F score. | 0 |
Large-scale neural language models have made great strides in a series of language generation tasks such as machine translation (Bahdanau et al., 2014; Vaswani et al., 2017; Raffel et al.), text summarization (See et al., 2017; Lewis et al., 2019; Zhang et al., 2019a), and conversational dialog generation (Serban et al., 2015; Lowe et al., 2017; Roller et al., 2020; Zhang et al., 2020). However, despite the successes achieved by these models on several conditional generation tasks, they continue to suffer from degenerate behaviors such as repetition, a lack of diversity, dullness, and incoherence, especially in open-ended generation settings such as text completion and dialog modeling (Holtzman et al., 2019). This degenerate behavior is often attributed to a mismatch between the maximum likelihood training and the generation procedure (Welleck et al., 2019; Choi et al., 2020; Li et al., 2016). (1 Source code to reproduce these experiments is available at https://github.com/kushalarora/quantifying_exposure_bias.) Maximum likelihood training, also referred to as teacher forcing (Williams and Zipser, 1989), factorizes the language model as a linear chain and maximizes the log-likelihood of this factorized language model on a training corpus. During this maximum likelihood training, the model learns a distribution of the next tokens conditioned on the contexts from the ground-truth training data. A concern with MLE-based training is that the ground-truth contexts from the training corpus are not available during generation. Rather, the conditioning contexts during this phase comprise tokens previously generated by the model itself. The distribution of these contexts seen during the generation phase might be very different from the ones encountered during the training phase. This mismatch is referred to as exposure bias (Ranzato et al., 2016). A side-effect of exposure bias is that an error at any step during generation might have a cascading effect, as the next context will incorporate this erroneous prediction, deviating from the ground-truth context distribution and leading to more errors. These errors will result in sequences that degenerate over the sequence length, resulting in incoherent text, a lack of vocabulary diversity, detachment from the source sequence resulting in hallucination, and/or word- and phrase-level repetition. There is an active debate in the language generation community on the impact of exposure bias in language generation. Authors have both validated (Xu et al., 2019; Zhang et al., 2019b) and questioned (He et al., 2019) the impact of exposure bias on language generation. Several approaches have been proposed to mitigate exposure bias (Ranzato et al., 2016; Shen et al., 2016; Bahdanau et al., 2017; Leblond et al., 2018; Welleck et al., 2019), but these have neither formalized exposure bias clearly nor provided any empirical evidence that these methods mitigate the effect of exposure bias.
Finally, previous works have linked exposure bias to out-of-domain (Wang and Sennrich, 2020) and out of distribution (Schmidt, 2019) generalization, and hallucinations (Wang and Sennrich, 2020) but these claims remain weak in absence of a clear and principled formalization of the exposure bias issue.In this paper, we attempt to clarify this confusion by formalizing exposure bias in the terms of accumulation of errors and analyzing its impact on generation quality. We do this by providing a theoretically-grounded understanding of the exposure bias issue by analyzing it from an imitation learning perspective. We use this perspective to show that behavior cloning-an imitation learning algorithm is equivalent to teacher forcing under the choice of a particular loss function. We then exploit this equivalence to borrow the bound on error accumulation caused by behavior cloning and use it to quantify exposure bias and analyze the error accumulation in language generation.Finally, we use this quantifiable definition of exposure bias to demonstrate that models trained using teacher forcing do suffer from an accumulation of errors. We also show, both analytically and empirically, why perplexity fails to capture this error accumulation, and how a lower exposure bias correlates with better generation quality. | 0 |
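To make the compounding-error intuition above concrete, here is a toy back-of-the-envelope illustration (not the paper's imitation-learning bound): if a model deviates from the ground-truth context with some small probability at each step, the chance of an error-free sequence decays roughly geometrically with length.

```python
# Assumed per-step deviation rate; the value 0.05 is purely illustrative.
eps = 0.05
for T in (10, 50, 100, 400):
    # Probability that none of the T generation steps introduces an error.
    print(T, round((1 - eps) ** T, 4))
# 10 0.5987, 50 0.0769, 100 0.0059, 400 0.0
```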
Recent language models (LMs) such as BERT and its successors are remarkable at memorizing knowledge seen frequently during training; however, performance degrades over the long tail of rare facts. Given the importance of factual knowledge for tasks such as question answering, search, and personal assistants (Bernstein et al., 2012; Poerner et al., 2020; Orr et al., 2020), there has been significant interest in injecting these base LMs with factual knowledge about entities (Zhang et al., 2019; Peters et al., 2019, inter alia). In this work, we propose a simple and effective approach for enhancing LMs with knowledge, called metadata shaping. Existing methods to capture entity knowledge more reliably typically use the following steps: first annotating natural language text with entity metadata, and next modifying the base LM to learn from the tagged data. Entity metadata is obtained by linking substrings of text to entries in a knowledge base such as Wikidata, which stores entity IDs, types, descriptions, and relations. Model modifications include introducing continuous vector representations for entities or auxiliary objectives (Zhang et al., 2019; Peters et al., 2019; Yamada et al., 2020; Xiong et al., 2020; Joshi et al., 2020a; Su et al., 2021). Other methods combine multiple learned modules, which are each specialized to handle fine-grained reasoning patterns or subsets of the data distribution (Chen et al., 2019; Wang et al., 2021). These knowledge-aware LMs have led to impressive gains compared to base LMs on entity-rich tasks. That said, the new architectures are often designed by human experts, costly to pretrain and optimize, and require additional training as new entities appear. Further, these LMs may not use the collected entity metadata effectively: Wikidata alone holds over ∼100M unique entities, yet many of these entities fall under similar categories, e.g., "politician" entities. Intuitively, if unseen entities encountered during inference share metadata with entities observed during training, an LM trained with this information may be able to better reason about the new entities using patterns learned from similar seen entities. However, the knowledge-aware LMs learn from individual entity occurrences rather than learning these shared reasoning patterns. Implicitly learning entity similarities for 100M entities may be challenging, since 89% of the Wikidata entities do not appear at all in Wikipedia, a popular source of unstructured training data for the LMs. We thus ask: to what extent can we match the quality of knowledge-aware LM architectures using the base LM itself? We find that applying some simple modifications to the data at train and test time, a method we call metadata shaping, is surprisingly quite effective. Given unstructured text, there are several readily available tools for generating entity metadata at scale (e.g., Manning et al. (2014); Honnibal et al. (2020)), and knowledge bases contain entity metadata including type tags (e.g., Barack Obama is a "politician") and descriptions (e.g., Barack Obama "enjoys playing basketball"). Our method entails explicitly inserting retrieved entity metadata in examples, as in Figure 1, and inputting the resulting shaped examples to the LM. [Figure 1: Metadata shaping inserts metadata (e.g., entity types and descriptions) strings into train and test examples. The FewRel benchmark involves identifying the relation between a subject and object string. The above subject and object are unseen in the FewRel training data and the tuned base LM reflects low attention weights on those words. A base LM trained with shaped data reflects high attention weights on useful metadata words such as "politician". Weights are shown for words which are not stop-words, punctuation, or special tokens.] Our contributions are as follows. Simple and Effective Method: We propose metadata shaping and demonstrate its effectiveness on standard benchmarks that are used to evaluate knowledge-aware LMs. Metadata shaping, with simply an off-the-shelf base LM, exceeds the base LM trained on unshaped data by an average of 4.3 F1 points and is competitive with state-of-the-art methods, which do modify the LM. Metadata shaping thus enables re-using well-studied and optimized base LMs. We show that metadata shaping improves tail performance: the observed gain from shaping is on average 4.4x larger for the slice of examples containing tail entities than for the slice containing popular entities. Metadata establish "subpopulations", groups of entities sharing similar properties, in the entity distribution (Zhu et al., 2014; Cui et al., 2019; Feldman, 2020). For example, on the FewRel benchmark (Han et al., 2018), "Daniel Dugléry" (a French politician) appears 0 times, but "politician" entities in general appear > 700 times in the task training data. Intuitively, performance on a rare entity should improve if the LM has the explicit information that it is similar to other entities observed during training. Explainability: Existing knowledge-aware LMs use metadata (Peters et al., 2019; Alt et al., 2020), but do not explain when and why different metadata help. Inspired by classic feature selection techniques (Guyon and Elisseeff, 2003), we conceptually explain the effect of different metadata on generalization error. We hope this work motivates further research on addressing the tail challenge through the data. | 0
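To make the shaping operation concrete, here is a highly simplified sketch of inserting retrieved entity metadata into an example before feeding it to an unmodified base LM. The hand-written metadata store, the "[META]" separator and the formatting are illustrative assumptions, not the paper's exact scheme; in practice the metadata would come from an entity linker and a knowledge base such as Wikidata.

```python
# Toy metadata store: entity -> type tags and a short description.
metadata = {
    "Barack Obama": {"types": ["politician"],
                     "description": "enjoys playing basketball"},
    "Daniel Dugléry": {"types": ["politician"],
                       "description": "French politician"},
}

def shape(example: str) -> str:
    """Append metadata strings for any known entity mentioned in the example."""
    extra = []
    for entity, meta in metadata.items():
        if entity in example:
            extra.append(f"{entity} ({', '.join(meta['types'])}; {meta['description']})")
    if not extra:
        return example
    return example + " [META] " + " | ".join(extra)

print(shape("Daniel Dugléry was elected to the National Assembly."))
```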
Incremental processing formalisms have increasing importance due to the growing ubiquity of spoken dialogue systems that require understanding and generation in real time using rich, robust semantics. Dialogue systems benefit from incremental processing in terms of shorter response times to the user's requests, since the dialogue system can start interpreting and serving the request (e.g., by consulting databases, doing reference resolution, backchannelling or starting to generate an answer (Aist et al., 2007; Schuler et al., 2009; Skantze and Schlangen, 2009)) before the request is fully stated. Another use of formalisms that support strict incrementality is psycholinguistic modelling: as there is a substantial amount of evidence that human sentence processing is highly incremental, computational models of human sentence processing should be incremental to the same degree. Such models can then be used to calculate measures of human sentence processing difficulty, such as surprisal, which have been demonstrated to correspond to reading times (e.g., Levy, 2008). Two strictly incremental versions of tree-adjoining grammar (TAG; Joshi et al., 1975) which have been proposed in recent years are DV-TAG (Mazzei et al., 2007) and PLTAG (Demberg-Winterfors, 2010). Incremental syntax is, however, only of limited interest without a corresponding mechanism for calculating the incremental semantic interpretation. And for that semantic model to be practically useful in psycholinguistic modelling or NLP applications such as speech recognition or dialogue systems, we believe that the semantic representation should ideally be simple, flat and usefully underspecified, in order to be used in the future in a context of compositional distributional semantics. We propose a framework in which semantic expressions are built synchronously with the syntactic tree. Simple rules are used to integrate an elementary tree's semantic expression with the semantic expression of the prefix tree at each stage. The semantic contribution of the new elementary tree is thereby added to the semantic output expression in a manner that closely reflects the order in which semantic material has arrived. The necessary semantic annotation of elementary trees can be obtained from subcategorization frame information (PropBank, FrameNet). We use a Neo-Davidsonian event-based semantics with minimal recursion. Integrating incremental syntactic analysis with a framework of incremental semantic interpretation will allow one to model processing phenomena such as the decreased processing difficulty of (1-b) (after Steedman, 2000) in comparison to (1-a), by downranking the main verb analysis of sent when the subject (like flowers) is unlikely to fill the sender role.
(1) a. The doctor sent for the patient arrived.
    b. The flowers sent for the patient arrived.
Incrementally generating the semantic interpretation requires the underspecification of the output semantics given the syntax, such as underspecifying the number of arguments of a verb or (to a greater extent than for non-incremental derivations, as we will discuss below) the scope of quantifiers. This paper sets forth the initial proposal for this semantic formalism in terms of underlying desiderata, principles, and basic use cases. It provides one example derivation, and it outlines a way of dealing with the question of scope ambiguities, an issue which affects a number of aspects of the theoretical plausibility of a semantic formalism. | 0
Statistical machine translation (SMT) systems are heavily dependent on parallel data. SMT does not work well when fewer than several million lines of bitext are available (Kolachina et al., 2012). When the available bitext is small, statistical models perform poorly due to the sparse word and phrase counts that define their parameters. Figure 1 gives a learning curve that shows this effect. [Figure 1: Performance on the Spanish to English translation task increases with increasing amounts of parallel data. Performance is measured with BLEU and drops drastically as the amount of bitext approaches zero. These results use the Europarl corpus and the Moses phrase-based SMT framework, but the trend shown is typical.] As the amount of bitext approaches zero, performance drops drastically. In this thesis, we seek to modify the SMT model to reduce its dependence on parallel data and, thus, enable it to apply to new language pairs. Specifically, we plan to address the following challenges that arise when using SMT systems in low-resource conditions:
• Translating unknown words. In the context of SMT, unknown words (or out-of-vocabulary, OOV, words) are defined as words that never appeared in the source side of the training parallel corpus. When the training corpus is small, the percentage of words which are unknown can be high.
• Inducing phrase translations. In high-resource conditions, a word-aligned bitext is used to extract a list of phrase pairs or translation rules which are used to translate new sentences. With more parallel data, this list is increasingly comprehensive. Using multi-word phrases instead of individual words as the basic translation unit has been shown to increase translation performance (Koehn et al., 2003). However, when the parallel corpus is small, so is the number of phrase pairs that can be extracted.
• Estimating translation probabilities. In the standard SMT pipeline, translation probabilities are estimated using relative frequency counts over the training bitext. However, when the bitext counts are sparse, probability estimates become unreliable.
My thesis focuses on translating into English. We assume access to a small amount of parallel data, which is realistic, especially considering the recent success of crowdsourcing translations (Zaidan and Callison-Burch, 2011; Ambati, 2011; Post et al., 2012). Additionally, we assume access to larger monolingual corpora. Table 1 lists the 22 languages for which we plan to perform translation experiments, along with the total amount of monolingual data that we will use for each. We use web-crawled, time-stamped news articles and Wikipedia for each language. We have extracted the Wikipedia pages which are inter-lingually linked to English pages. | 0
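As a concrete illustration of the relative-frequency estimation described in the last bullet above, the following minimal sketch computes p(e|f) from phrase-pair counts; the phrase pairs and counts are toy values, not drawn from any real bitext.

```python
from collections import defaultdict

# Toy extracted phrase pairs (source phrase, target phrase) -> count.
pair_counts = {("casa", "house"): 8, ("casa", "home"): 2,
               ("casa blanca", "white house"): 1}

# Total count per source phrase, for the relative-frequency denominator.
source_totals = defaultdict(int)
for (f, e), c in pair_counts.items():
    source_totals[f] += c

def phrase_prob(f: str, e: str) -> float:
    """p(e | f) estimated as count(f, e) / count(f)."""
    return pair_counts.get((f, e), 0) / source_totals[f] if source_totals[f] else 0.0

print(phrase_prob("casa", "house"))  # 0.8
```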
Semantic relations between entities are essential for many NLP applications such as question answering, textual inference and information extraction (Ravichandran and Hovy, 2002; Szpektor et al., 2004). Therefore, it is important to build a comprehensive knowledge base consisting of instances of semantic relations (e.g., authorOf) such as authorOf ⟨Franz Kafka, The Metamorphosis⟩. To recognize these instances in a corpus, we need to obtain patterns (e.g., "X write Y") that signal instances of the semantic relations. For a long time, much research has targeted the extraction of instances and patterns of specific relations (Riloff, 1996; Pantel and Pennacchiotti, 2006; De Saeger et al., 2009). In recent years, to acquire a wider range of knowledge, Open Information Extraction (Open IE) has received much attention (Banko et al., 2007). Open IE identifies relational patterns and instances automatically without predefined target relations (Banko et al., 2007; Wu and Weld, 2010; Fader et al., 2011; Mausam et al., 2012). In other words, Open IE acquires knowledge to handle open domains. In the Open IE paradigm, it is necessary to enumerate semantic relations in open domains and to learn mappings between surface patterns and semantic relations. This task is called unsupervised relation extraction (Hasegawa et al., 2004; Shinyama and Sekine, 2006; Rosenfeld and Feldman, 2007). A common approach to unsupervised relation extraction builds clusters of patterns that represent the same relation (Hasegawa et al., 2004; Shinyama and Sekine, 2006; Yao et al., 2011; Min et al., 2012; Rosenfeld and Feldman, 2007; Nakashole et al., 2012). In brief, each cluster includes patterns corresponding to a semantic relation. For example, consider three patterns, "X write Y", "X is author of Y" and "X is located in Y". When we group these patterns into clusters representing the same relation, the patterns "X write Y" and "X is author of Y" form a cluster representing the relation authorOf, and the pattern "X is located in Y" forms a cluster for locatedIn. In order to obtain these clusters, we need to know the similarity between patterns. The better we model the similarity of patterns, the better a clustering result corresponds to semantic relations. Thus, the similarity computation between patterns is crucial for unsupervised relation extraction. We face two major challenges in computing the similarity of patterns. First, it is not clear how to represent the semantic meaning of a relational pattern. Previous studies define a feature space for patterns, and express the meaning of patterns using, for example, the co-occurrence statistics between a pattern and an entity pair, e.g., co-occurrence frequency and pointwise mutual information (PMI) (Lin and Pantel, 2001). Some studies employed vector representations of a fixed dimension, e.g., Principal Component Analysis (PCA) (Collins et al., 2002) and Latent Dirichlet Allocation (LDA) (Yao et al., 2011; Riedel et al., 2013). However, previous work did not compare the effectiveness of these representations when applied to a collection of large-scale unstructured texts. Second, we need to design a method that is scalable to large data. In Open IE, we utilize a large amount of data in order to improve the quality of unsupervised relation extraction. For this reason, we cannot use a complex and inefficient algorithm that consumes excessive computation time and memory. In this paper, we explore methods for computing pattern similarity of good quality that are scalable to huge data, for example, several billion sentences. In order to achieve this goal, we utilize approximate frequency counting and dimension reduction. Our contributions are threefold.
• We build a system for unsupervised relation extraction that is practical and scalable to large data.
• Even though the proposed system introduces approximations, we demonstrate that it exhibits performance comparable to that of a system without approximations.
• Comparing several representations of pattern vectors, we discuss a reasonable design for representing the meaning of a pattern. | 0
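One plausible way to realize the pipeline sketched above (co-occurrence vectors, dimension reduction, similarity computation) is shown below; the toy count matrix, the choice of truncated SVD and the pattern strings are illustrative, not the actual system's components or data.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

patterns = ["X write Y", "X is author of Y", "X is located in Y"]
# Rows: patterns, columns: entity pairs, values: co-occurrence counts.
counts = np.array([[5, 3, 0, 0],
                   [4, 2, 1, 0],
                   [0, 0, 6, 7]], dtype=float)

# Reduce the dimensionality, then compare patterns with cosine similarity.
reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(counts)
sims = cosine_similarity(reduced)
print(patterns[0], "vs", patterns[1], round(sims[0, 1], 3))  # high similarity
print(patterns[0], "vs", patterns[2], round(sims[0, 2], 3))  # low similarity
```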
One of the key advantages of word embeddings for natural language processing is that they enable generalization to words that are unseen in labeled training data, by embedding lexical features from large unlabeled datasets into a relatively low-dimensional Euclidean space. These low-dimensional embeddings are typically trained to capture distributional similarity, so that information can be shared among words that tend to appear in similar contexts. However, it is not possible to enumerate the entire vocabulary of any language, and even large unlabeled datasets will miss terms that appear in later applications. The issue of how to handle these out-of-vocabulary (OOV) words poses challenges for embedding-based methods. These challenges are particularly acute when working with low-resource languages, where even unlabeled data may be difficult to obtain at scale. A typical solution is to abandon hope, by assigning a single OOV embedding to all terms that do not appear in the unlabeled data. We approach this challenge from a quasi-generative perspective. Knowing nothing of a word except for its embedding and its written form, we attempt to learn the former from the latter. We train a recurrent neural network (RNN) on the character level with the embedding as the target, and use it later to predict vectors for OOV words in any downstream task. We call this model the MIMICK-RNN, for its ability to read a word's spelling and mimick its distributional embedding. Through nearest-neighbor analysis, we show that vectors learned via this method capture both word-shape features and lexical features. As a result, we obtain reasonable near-neighbors for OOV abbreviations, names, novel compounds, and orthographic errors. Quantitative evaluation on the Stanford RareWord dataset (Luong et al., 2013) provides more evidence that these character-based embeddings capture word similarity for rare and unseen words. As an extrinsic evaluation, we conduct experiments on joint prediction of part-of-speech tags and morphosyntactic attributes for a diverse set of 23 languages, as provided in the Universal Dependencies dataset (De Marneffe et al., 2014). Our model shows significant improvement across the board against a single UNK-embedding backoff method, and obtains competitive results against a supervised character-embedding model, which is trained end-to-end on the target task. In low-resource settings, our approach is particularly effective, and is complementary to supervised character embeddings trained from labeled data. The MIMICK-RNN therefore provides a useful new tool for tagging tasks in settings where there is limited labeled data. Models and code are available at www.github.com/yuvalpinter/mimick. | 0
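A minimal PyTorch sketch of this kind of model is given below: a character-level BiLSTM reads a word's spelling and is trained to regress onto the word's pre-trained embedding, so that it can later guess vectors for OOV words. The dimensions, the single-layer architecture, the toy alphabet and the MSE objective are assumptions for illustration rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CharToVec(nn.Module):
    def __init__(self, n_chars, char_dim=32, hidden=64, emb_dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, emb_dim)

    def forward(self, char_ids):             # char_ids: (batch, word_len)
        x = self.char_emb(char_ids)
        _, (h, _) = self.lstm(x)              # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)   # concatenate both directions
        return self.proj(h)

# Toy usage: encode one word and fit against a stand-in pre-trained vector.
alphabet = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
word = torch.tensor([[alphabet[c] for c in "mimick"]])
target = torch.randn(1, 100)                  # placeholder for a real embedding
model = CharToVec(n_chars=len(alphabet))
loss = nn.functional.mse_loss(model(word), target)
loss.backward()
```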
Essay writing is a common task evaluated in schools and universities. In this task, students are typically given a prompt or essay topic to write about. Essay writing is included in high-stakes assessments, such as the Test of English as a Foreign Language (TOEFL) and the Graduate Record Examination (GRE). Manually grading all essays takes a lot of time and effort for the graders. This is what Automated Essay Scoring (AES) systems are trying to alleviate. Automated Essay Scoring uses computer software to automatically evaluate an essay written in an educational setting by giving it a score. Work related to essay scoring can be traced back to 1966, when Ellis Page created a computer grading program called Project Essay Grade (PEG). Research on AES has continued through the years. The recent Automated Student Assessment Prize (ASAP) competition sponsored by the Hewlett Foundation in 2012 has renewed interest in this topic. The agreement between the scores assigned by state-of-the-art AES systems and the scores assigned by human raters has been shown to be relatively high. See Shermis and Burstein (2013) for a recent overview of AES. AES is usually treated as a supervised machine learning problem, either as a classification, regression, or rank preference task. Using this approach, a training set in the form of human-graded essays is needed. However, human-graded essays are not readily available. This is perhaps why research in this area was mostly done by commercial organizations. After the ASAP competition, research interest in this area has been rekindled because of the released dataset. Most of the recent AES-related work is prompt-specific. That is, an AES system is trained using essays from a specific prompt and tested against essays from the same prompt. These AES systems will not work as well when tested against a different prompt. Furthermore, generating the training data each time a new prompt is introduced will be costly and time-consuming. In this paper, we propose domain adaptation as a solution to this problem. Instead of hiring people to grade new essays each time a new prompt is introduced, domain adaptation can be used to adapt the old prompt-specific system to suit the new prompt. This way, a smaller number of training essays from the new prompt is needed. In this paper, we propose a novel domain adaptation technique based on Bayesian linear ridge regression. The rest of this paper is organized as follows. In Section 2, we give an overview of related work on AES and domain adaptation. Section 3 describes the AES task and the features used. Section 4 presents our novel domain adaptation algorithm. Section 5 describes our data, experimental setup, and evaluation metric. Section 6 presents and discusses the results. We conclude in Section 7. | 0
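As background for the base learner mentioned above, the following is a small sketch of Bayesian linear ridge regression applied to essay scoring with scikit-learn; the feature matrix and scores are random placeholders, and this is not the proposed adaptation algorithm itself.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
X_source = rng.normal(size=(200, 20))     # e.g., length, spelling, syntax features
y_source = rng.uniform(1, 6, size=200)    # human-assigned scores (placeholder)

model = BayesianRidge()
model.fit(X_source, y_source)

X_target = rng.normal(size=(5, 20))       # essays from a new, unseen prompt
pred, std = model.predict(X_target, return_std=True)
print(np.round(pred, 2), np.round(std, 2))  # predictive mean and uncertainty
```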
Native Language Identification (NLI), in which an author's first language is derived by analyzing texts written in his or her second language, is often treated as a text classification problem. NLI has proven useful in various applications, including in language-learning settings. As it is well established that a speaker's first language informs mistakes made in a second language, a system that can identify a learner's first language is better equipped to provide learner-specific feedback and identify likely problem areas. The Treebank of Learner English (TLE) is the first publicly available syntactic treebank for English as a Second Language (Berzak et al., 2016). One particularly interesting feature of the TLE is its incorporation of an annotation scheme for a consistent syntactic representation of grammatical errors. This annotation system has the potential to be useful to native language identification, as the ability to parse ungrammatical and atypical dependency relations could improve the informativeness of dependency-based features in such a classification task. We assessed this potential by training a parser on the original treebank and using it to extract dependency relations from a learner English corpus. Those dependency relations were then used as features in a machine learning classification task. The success of this classification was then assessed by comparing the results to a classification on features extracted by a parser trained on the error-corrected version of the treebank, based on the assumption that the original version of the treebank will more accurately handle grammatical errors in learner texts. This is a novel approach in that other similar experiments have used dependency parsers trained on grammatical treebanks to extract dependency relations. We found that using the original version of the corpus gave slightly better results on native language classification than using the error-corrected version. However, when we investigated parsing results, the original version gave much lower results on parsing both original and error-corrected texts. This seems to suggest that there is useful information in the types of errors made by this parser. | 0
This paper presents the current status of development and the main motivations of an open-source shallow-transfer machine translation (MT) engine for the Romance languages of Spain (the main ones being Spanish (es), Catalan (ca) and Galician (gl)), as part of a larger government-funded project which will also include MT engines for non-Romance languages such as Basque (eu) and which involves four universities and three linguistic technology enterprises: TALP (Universitat Politècnica de Catalunya), SLI (Universidade de Vigo), Transducens (Universitat d'Alacant), IXA (Euskal Herriko Unibertsitatea), imaxin|software (Santiago de Compostela), Elhuyar Fundazioa (Usurbil), and Eleka Ingeniaritza Linguistikoa (Usurbil, coordinator). (Most scholars consider Galician and Portuguese (pt) the same language; however, the official orthography of Galician is very different from the ones used for European and Brazilian Portuguese. Therefore, while grammatical resources will be rather reusable, lexical resources will not easily be.) The shallow-transfer architecture will also be suitable for other pairs of closely related languages which are not Romance, for example Czech-Slovak, Danish-Swedish, etc. The multilingual nature of Spain is recognized, to a varying extent, in laws and regulations corresponding to the various levels of government (the Constitution of Spain and the Statutes of Autonomy granted to Aragon, the Balearic Islands, Catalonia and Valencia (ca), Galicia (gl), and Navarre and the Basque Country (eu)). On the one hand, demand by many citizens in these territories makes private companies increasingly interested in generating information (documentation for products and services, customer support, etc.) in languages other than Spanish. On the other hand, the various levels of government (national, autonomic, provincial, municipal) must respect, in the mentioned territories, the linguistic rights recognized to their citizens and promote the use of such languages. Machine translation is a key technology to meet these goals and demands. Existing MT programs for the es-ca and the es-gl pairs (there are no programs for the es-eu pair) are mostly commercial or use proprietary technologies, which makes them very hard to adapt to new usages, and they use different technologies across language pairs, which makes it very difficult to integrate them in a single multilingual content management system. The MT architecture proposed here uses finite-state transducers for lexical processing, hidden Markov models for part-of-speech tagging, and finite-state-based chunking for structural transfer, and is largely based upon that of systems already developed by the Transducens group such as interNOSTRUM (Spanish-Catalan, Canals-Marote et al. 2001) and Traductor Universia (Spanish-Portuguese, Garrido-Alenda et al. 2003); these systems are publicly accessible through the net and used on a daily basis by thousands of users. One of the main novelties of this architecture is that it will be released under an open-source licence (in fact, two different licences: one for the machine translation engine and tools, and another one for the linguistic data), together with pilot linguistic data derived from other open-source projects such as Freeling (Carreras et al. 2004) or created specially for this purpose, and will be distributed free of charge. This means that anyone having the necessary computational and linguistic skills will be able to adapt or enhance it to produce a new MT system, even for other pairs of related languages. The whole system will be released at the beginning of 2006. We expect that the introduction of a unified open-source MT architecture will ease some of the mentioned problems (having different technologies for different pairs, closed-source architectures being hard to adapt to new uses, etc.). It will also help shift the current business model from a licence-centred one to a services-centred one, and favour the interchange of existing linguistic data through the use of the XML-based formats defined in this project. It has to be mentioned that this is the first time that the government of Spain funds a large project of this kind, although the adoption of open-source software by administrations in Spain is not new. The following sections give an overview of the architecture (sec. 2), the formats defined for the encoding of linguistic data (sec. 3), and the compilers used to convert these data into an executable form (sec. 4); finally, we give some concluding remarks (sec. 5). | 0
Compounds are extremely common in Icelandic, accounting for over 88% of all words in the Database of Icelandic Morphology (DIM) (Bjarnadóttir, 2017; Bjarnadóttir et al., 2019). As compounding is so productive, new compounds frequently occur as out-of-vocabulary (OOV) words, which may adversely affect the performance of NLP tools. Furthermore, Icelandic is a morphologically rich language with a complex inflectional system. There are 16 inflectional categories (i.e., word forms with unique part-of-speech (PoS) tags) for nouns, 120 for adjectives, and 122 for verbs, excluding impersonal constructions. The average number of inflectional forms per headword in DIM is 21.7. Included in this average are all uninflected words as well as inflectional variants, i.e., dual word forms with the same PoS tag. Compounds are formed by combining two words, which may be compounds themselves. The first word is known as the modifier and the second as the head, assuming binary branching (Bjarnadóttir, 2005). Theoretically, there is no limit to how many constituents a compound can be composed of, although very long words such as uppáhaldseldhúsinnréttingaverslunin 'the favorite kitchen furniture store' (containing 7 constituent parts) are rare. The constituent structure of a compound word can be represented by a full binary tree, as shown in Figure 1. Compound splitting, or decompounding, is the process of breaking compound words into their constituent parts. This can significantly reduce the number of OOV words for languages where compounding is productive. Compound splitting has been shown to be effective for a variety of tasks, such as machine translation (Brown, 2002; Koehn and Knight, 2003), speech recognition (Adda-Decker and Adda, 2000) and information retrieval (Braschler et al., 2003). In this paper, we present a character-based bidirectional long short-term memory (BiLSTM) model for splitting Icelandic compound words, and evaluate its performance for varying amounts of training data. Our model is trained on a dataset of 2.9 million unique word forms and their constituent structures from DIM. The model learns how to split compound words into two parts and can be used to derive the constituent structure of any word form. The model outperforms other previously published methods when evaluated on a corpus of manually split word forms. Our method has been integrated into Kvistur, an Icelandic compound word analyzer. Finally, preliminary experiments show that our model performs very well when evaluated on a closely related language, Faroese. | 0
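The derivation of a full constituent structure from a binary splitter can be illustrated with a toy recursive procedure. The lexicon-lookup splitter below merely stands in for the trained BiLSTM, and the words and lexicon are illustrative, not taken from the paper's data.

```python
def split_once(word, lexicon):
    # Stand-in for the trained model: accept the first split whose two halves
    # are both known words; a real model would score every candidate position.
    for i in range(1, len(word)):
        if word[:i] in lexicon and word[i:] in lexicon:
            return i
    return 0          # 0 means "do not split"

def constituent_tree(word, lexicon):
    # Apply the binary splitter recursively to obtain a full binary tree.
    i = split_once(word, lexicon)
    if i == 0:
        return word
    return (constituent_tree(word[:i], lexicon),
            constituent_tree(word[i:], lexicon))

lexicon = {"eldhús", "hurð"}                      # 'kitchen', 'door'
print(constituent_tree("eldhúshurð", lexicon))    # ('eldhús', 'hurð')
```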
For Asian languages such as Japanese and Chinese that do not contain explicitly marked word boundaries, word segmentation is an important first step for many subsequent language processing tasks, such as POS tagging, parsing, semantic role labeling, and various applications. Previous studies on POS tagging and syntactic parsing for these languages sometimes assume that gold-standard word segmentation is provided, which is not the real-world scenario. In a fully automatic system, a pipeline approach is often adopted, where raw sentences are first segmented into word sequences, and then POS tagging and parsing are performed. This kind of approach suffers from error propagation. For example, word segmentation errors will result in tagging and parsing errors. Additionally, early modules cannot use information from subsequent modules. Intuitively, a joint model that performs the three tasks together should help the system make the best decisions. In this paper, we propose a unified model for joint Chinese word segmentation, POS tagging, and parsing. Three sub-models are independently trained using state-of-the-art methods. We do not use a joint inference algorithm for training because of the high complexity caused by the large number of parameters. We use linear-chain Conditional Random Fields (CRFs) (Lafferty et al., 2001) to train the word segmentation model and the POS tagging model, and the averaged perceptron (Collins, 2002) to learn the parsing model. During decoding, the parameters of each sub-model are scaled to represent its importance in the joint model. Our decoding algorithm is an extension of CYK parsing. Initially, the weights of all possible words together with their POS tags are calculated. When searching for the parse tree, the word and POS tagging features are dynamically generated, and the transition information of POS tagging is considered in the span merge operation. Experiments are conducted on the Chinese Treebank (CTB) 5 dataset, which is widely used for Chinese word segmentation, POS tagging and parsing. We compare our proposed joint model with the pipeline system, both built using the state-of-the-art sub-models. We also propose an evaluation metric to calculate bracket scores for parsing in the face of word segmentation errors. Our experimental results show that the joint model significantly outperforms the pipeline method based on the state-of-the-art sub-models. | 0
Traditional accounts of ambiguity have generally assumed that each use of a linguistic expression has a unique intended interpretation in context, and attempted to develop a model to determine it (Nakov and Hearst, 2005; Brill and Resnik, 1994). However, disambiguation is not always appropriate or even desirable (Poesio and Artstein, 2008). Ambiguous text may be interpreted differently by different readers, with no consensus about which reading is the intended one. Attempting to assign a preferred interpretation may therefore be inappropriate. Misunderstandings among readers do occur and may have undesirable consequences. In requirements engineering processes, for example, this results in costly implementation errors (Boyd et al., 2005). Nonetheless, most text does not lead to significant misinterpretation. Our research aims to establish a model that estimates how likely an ambiguity is to lead to misunderstandings. Our previous work on nocuous ambiguity (Chantree et al., 2006; Willis et al., 2008) cast ambiguity not as a property of a text, but as a property of text in relation to a set of stakeholders. We drew on human judgments (interpretations held by a group of readers of a text) to establish criteria for judging the presence of nocuous ambiguity. An ambiguity is innocuous if it is read in the same way by different people, and nocuous otherwise. The model was tested on coordination ambiguity only. In this paper, we implement, refine and extend the model. We investigate two typical ambiguity types arising from coordination and anaphora. We extend the previous work (Willis et al., 2008) with additional heuristics, and refine the concept of ambiguity threshold. We experiment with alternative machine learning algorithms to find optimal ways of combining the output of the heuristics. Yang et al. (2010a) describes a complete implementation in a prototype tool running on full text. Here we present our experimental results, to illustrate and evaluate the extended methodology. The rest of the paper is structured as follows. Section 2 introduces the methodology for automatic detection of nocuous ambiguity. Sections 3 and 4 provide details on how the model is applied to coordination and anaphora ambiguity. Experimental setup and results are reported in Section 5, and discussed in Section 6. Section 7 reports on related work. Conclusions and future work are found in Section 8. | 0
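The nocuous/innocuous distinction based on reader judgments and an agreement threshold can be illustrated with a toy function; the threshold value below is arbitrary, not the one tuned in the experiments.

```python
from collections import Counter

def is_nocuous(judgments, threshold=0.8):
    """judgments: interpretation labels chosen by different readers.
    An ambiguity counts as innocuous if at least `threshold` of readers
    agree on a single interpretation, and nocuous otherwise."""
    counts = Counter(judgments)
    top_share = counts.most_common(1)[0][1] / len(judgments)
    return top_share < threshold

print(is_nocuous(["A", "A", "A", "A", "B"]))   # False: 80% agreement
print(is_nocuous(["A", "B", "A", "B", "A"]))   # True: only 60% agreement
```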
Documents often appear within a network structure: social media mentions, retweets, and follower relationships; Web pages by hyperlinks; scientific papers by citations. Network structure interacts with the topics in the text, in that documents linked in a network are more likely to have similar topic distributions. For instance, a citation link between two papers suggests that they are about a similar field, and a mentioning link between two social media users often indicates common interests. Conversely, documents' similar topic distributions can suggest links between them. For example, topic model (Blei et al., 2003, LDA) and block detection papers (Holland et al., 1983) are relevant to our research, so we cite them. Similarly, if a social media user A finds another user B with shared interests, then A is more likely to follow B. Our approach is part of a natural progression of network modeling in which models integrate more information in more sophisticated ways. Some past methods only consider the network itself (Kim and Leskovec, 2012; Liben-Nowell and Kleinberg, 2007), which loses the rich information in text. In other cases, methods take both links and text into account (Chaturvedi et al., 2012), but they are modeled separately, not jointly, limiting the model's ability to capture interactions between the two. The relational topic model (Chang and Blei, 2010, RTM) goes further, jointly modeling topics and links, but it considers only pairwise document relationships, failing to capture network structure at the level of groups or blocks of documents. We propose a new joint model that makes fuller use of the rich link structure within a document network. Specifically, our model embeds the weighted stochastic block model (Aicher et al., 2014, WSBM) to identify blocks in which documents are densely connected. WSBM categorizes each item in a network probabilistically as belonging to one of L blocks, based on its connections with each block. Our model can be viewed as a principled probabilistic extension of Yang et al. (2015), who identify blocks in a document network deterministically as strongly connected components (SCC). Like them, we assign a distinct Dirichlet prior to each block to capture its topical commonalities. Jointly, a linear regression model with a discriminative, max-margin objective function (Zhu et al., 2012; Zhu et al., 2014) is trained to reconstruct the links, taking into account the features of documents' topic and word distributions (Nguyen et al., 2013), block assignments, and inter-block link rates. We validate our approach on a scientific paper abstract dataset and a collection of webpages, with citation links and hyperlinks respectively, to predict links among previously unseen documents and from those new documents to training documents. Embedding the WSBM in a network/topic model leads to substantial improvements in link prediction over previous models; it also improves block detection and topic interpretability. The key advantage of embedding the WSBM is its flexibility and robustness in the face of noisy links. Our results also lend additional support for using max-margin learning for a "downstream" supervised topic model (McAuliffe and Blei, 2008), and show that making predictions from lexical as well as topic features improves performance (Nguyen et al., 2013). The rest of this paper is organized as follows. Section 2 introduces two previous link-modeling methods, WSBM and RTM. Section 3 presents our methods to incorporate block priors in topic modeling and to include various features in link prediction, as well as the aggregated discriminative topic model, whose posterior inference is introduced in Section 4. In Section 5 we show how our model can improve link prediction and (often) improve topic coherence. | 0
Natural language understanding (NLU) refers to the ability of a system to 'comprehend' the meaning (semantics) and the structure (syntax) of human language to enable interaction with a system or device. Cross-lingual natural language understanding (XNLU) refers to a system that is able to handle multiple languages simultaneously (Artetxe and Schwenk, 2019; Hu et al., 2020). We focus on task-oriented XNLU, which comprises two correlated objectives: i) Intent Classification, which identifies the type of user command, e.g. 'edit_reminder', 'send_message' or 'play_music', and ii) Entity/Slot Recognition, which identifies relevant entities in the utterance, including their types, such as dates, messages, music tracks, locations, etc. In a modular dialogue system, this information is used by the dialogue manager to decide how to respond to the user (Casanueva et al., 2017). For neural XNLU systems, the limited availability of annotated data is a significant barrier to scaling dialogue systems to more users (Razumovskaia et al., 2021). Therefore, we can use cross-lingual methods to zero-shot transfer the knowledge learnt in a high-resource language such as English to the target language of choice (Artetxe et al., 2020). To this end, we introduce a variety of alignment methods for zero-shot cross-lingual transfer, most notably CrossAligner. Our methods leverage unlabelled parallel data and can be easily integrated on top of a pretrained language model, referred to as XLM, such as XLM-RoBERTa (Conneau et al., 2020). Our methods help the XLM align its cross-lingual representations while optimising the primary XNLU tasks, which are learned only in the source language and transferred zero-shot to the target language. Finally, we also investigate the effectiveness of simple and weighted combinations of multiple alignment losses, which leads to further model improvements and insights. Our contributions are summarised as follows:
• We introduce CrossAligner, a cross-lingual transfer method that achieves SOTA performance on three benchmark XNLU datasets.
• We introduce Translate-Intent, a simple and effective baseline, which outperforms its commonly used counterpart 'Translate-Train'.
• We introduce Contrastive Alignment, an auxiliary loss that leverages contrastive learning at a much smaller scale than past work.
• We introduce weighted combinations of the above losses to further improve SOTA scores.
• Qualitative analysis aims to guide future research by examining the remaining errors. | 0
In the past decade, new forms of communication, such as microblogging and text messaging, have emerged and become ubiquitous. These short messages are often used to share opinions and sentiments. The Sentiment Analysis in Twitter task promotes research that will lead to a better understanding of how sentiment is conveyed in tweets and texts. In this paper, we describe our contribution to task 2 of SemEval 2013 (Wilson et al., 2013). For the Contextual Polarity Disambiguation subtask, covered in Section 2, we use a system that combines a lexicon-based approach to sentiment detection with two types of supervised learning methods, one used for polarity shift identification and one for tweet segment classification in the absence of lexicon words. The third section presents the Message Polarity Classification subtask. We focus here on the influence of domain information on sentiment classification by detecting words that change their polarity across domains. | 0
For speakers of a language whose nouns have no gender (such as modern English), making the leap to a language that does (such as German) does not come easy. With no or few rules or heuristics to guide him, the language learner will try to draw on the "obvious" parallel between grammatical and natural gender, and will be immediately baffled to learn that girl (Mädchen) is neuter in German. Furthermore, one may refer to the same object using words with different gender: car can be called (das) Auto (neuter) or (der) Wagen (masculine). Imagine that after hard work, the speaker has mastered gender in German, and now wishes to proceed with a Romance language, for example Italian or Spanish. He is now confronted with the task of relearning to assign gender in these new languages, made more complex by the fact that gender does not match across languages: e.g. sun is feminine in German (die Sonne), but masculine in Spanish (el sol), Italian (il sole) and French (le soleil); moon is masculine in German (der Mond), but feminine in Spanish (la luna), Italian (la luna) and French (la lune). Gender does not even match within a single language family: travel is masculine in Spanish (el viaje) and Italian (il viaggio), but feminine in Portuguese (a viagem). Grammatical gender groups the nouns of a language into distinct classes. There are languages whose nouns are grouped into more or fewer than three classes. English, for example, has none, and makes no distinction based on gender, although Old English did have three genders and some traces remain (e.g. blonde, blond). Linguists assume several sources for gender: (i) a first set of nouns which have natural gender and which have associated matching grammatical gender; (ii) nouns that resemble (somehow) the nouns in the first set, and acquire their grammatical gender through this resemblance. Italian and Romanian, for example, have strong and reliable phonological correlates (Vigliocco et al., 2004b, for Italian; Doca, 2000, for Romanian). In Romanian, the majority of feminine nouns end in ă or e. Some rules exist for German as well (Schumann, 2006); for example, nouns ending in -tät, -ung, -e, -enz, -ur, -keit, -in tend to be feminine. Also, when specific morphological processes apply, there are rules that dictate the gender of the newly formed word. This process explains why Frau (woman) is feminine in German, while Fräulein (little woman, miss) is neuter: Fräulein = Frau + -lein. The existing rules have exceptions, and there are numerous nouns in the language which are not derived and to which such suffixes do not apply. Words are names used to refer to concepts. The fact that the same concept can be referred to using names that have different gender (as is the case for car in German) indicates that, at least in some cases, grammatical gender is in the name and not the concept. We test this hypothesis (that the gender of a noun is in its word form, and that this goes beyond word endings) using noun gender data for German and Romanian. Both Romanian and German have three genders: masculine, feminine and neuter. The models built using machine learning algorithms classify test nouns into gender classes based on their form with high accuracy. These results support the hypothesis that in gendered languages, the word form is a strong clue for gender. This supplements the situation in which some concepts have natural gender that matches their grammatical gender: it allows for an explanation where there is no such match, either directly perceived or induced through literary devices. The present research has both theoretical and practical benefits. From a theoretical point of view, it contributes to research on phonology and gender, in particular by going a step further in understanding the link between the two. From a practical perspective, such a connection between gender and sounds could be exploited in advertising, in particular in product naming, to build names that fit a product and which are appealing to the desired customers. Studies have shown that, especially in the absence of meaning, the form of a word can be used to generate specific associations and stimulate the imagination of prospective customers (Sells and Gonzales, 2003; Bedgley, 2002; Botton et al., 2002). | 0
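A small sketch of the kind of experiment described above is shown below: a classifier is trained to predict grammatical gender purely from word form, using character n-gram features. The tiny toy German lexicon, the scikit-learn pipeline and the test words are illustrative only, not the paper's data or model.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

words = ["sonne", "zeitung", "freiheit", "mond", "wagen", "tisch",
         "mädchen", "fräulein", "auto"]
genders = ["f", "f", "f", "m", "m", "m", "n", "n", "n"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # word form only
    LogisticRegression(max_iter=1000),
)
clf.fit(words, genders)
# Endings such as '-ung' and '-lein' act as cues for feminine and neuter.
print(clf.predict(["verwaltung", "bahnhof", "büchlein"]))
```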
In recent years, rich contextual embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have enabled rapid progress on benchmarks like GLUE (Wang et al., 2019a) and have seen widespread industrial use (Pandu Nayak, 2019). However, these methods require significant computational resources (memory, time) during pretraining, and during downstream task training and inference. Thus, an important research problem is to understand when these contextual embeddings add significant value vs. when it is possible to use more efficient representations without significant degradation in performance. As a first step, we empirically compare the performance of contextual embeddings with classic embeddings like word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). To further understand what performance gains are attributable to improved embeddings vs. the powerful downstream models that leverage them, we also compare with a simple baseline: fully random embeddings, which encode no semantic or contextual information whatsoever. Surprisingly, we find that in highly optimized production tasks at a major technology company, both classic and random embeddings have competitive (or even slightly better!) performance than the contextual embeddings. To better understand these results, we study the properties of NLP tasks for which contextual embeddings give large gains relative to non-contextual embeddings. In particular, we study how the amount of training data, and the linguistic properties of the data, impact the relative performance of the embedding methods, with the intuition that contextual embeddings should give limited gains on data-rich, linguistically simple tasks. In our study on the impact of training set size, we find in experiments across a range of tasks that the performance of the non-contextual embeddings (GloVe, random) improves rapidly as we increase the amount of training data, often attaining within 5 to 10% of the accuracy of BERT embeddings when the full training set is used. This suggests that for many tasks these embeddings could likely match BERT given sufficient data, which is precisely what we observe in our experiments with industry-scale data. Given the computational overhead of contextual embeddings, this exposes important trade-offs between the computational resources required by the embeddings, the expense of labeling training data, and the accuracy of the downstream model. To better understand when contextual embeddings give large boosts in performance, we identify three linguistic properties of NLP tasks which help explain when these embeddings will provide gains:
• Complexity of sentence structure: How interdependent are different words in a sentence?
• Ambiguity in word usage: Are words likely to appear with multiple labels during training?
• Prevalence of unseen words: How likely is encountering a word never seen during training?
Intuitively, these properties distinguish between NLP tasks involving simple and formulaic text (e.g., assistant commands) vs. more unstructured and lexically diverse text (e.g., literary novels). We show on both sentiment analysis and NER tasks that contextual embeddings perform significantly better on more complex, ambiguous, and unseen language, according to proxies for these properties. Thus, contextual embeddings are likely to give large gains in performance on tasks with a high prevalence of this type of language. | 0
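The "fully random embeddings" baseline mentioned above can be sketched in a few lines: each vocabulary word is mapped to a fixed random vector that carries no distributional information, so any downstream gains must come from the task model itself. The vocabulary, dimensionality and scale below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "<unk>": 4}
emb_dim = 50
embeddings = rng.normal(scale=0.1, size=(len(vocab), emb_dim))  # frozen, random

def embed(tokens):
    # Look up each token, falling back to the <unk> row for OOV words.
    ids = [vocab.get(t, vocab["<unk>"]) for t in tokens]
    return embeddings[ids]            # shape: (len(tokens), emb_dim)

print(embed(["the", "movie", "was", "great"]).shape)  # (4, 50)
```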
Reasoning is an important part of human logical thinking. It gives us the ability to draw fresh conclusions from points that are already known (Judea, 1988). Argument is the basis for reasoning. Besides an argument's claim and reason, some additional information is usually needed. Thus, what we know are the additional information and the argument's reason, while the claim still needs a warrant to be explained. An example is shown in Table 1. Obviously, A is a reasonable explanation. The task is to get the reader to find, in one of the two warrants, a reasonable explanation for the known information and the claim. Due to the small number of alternative warrants, this problem can be treated as a binary classification problem, and this idea can be used as a baseline model. However, for system scalability and effectiveness, we treat the problem as a regression problem of probability prediction: the model calculates, for each warrant, the probability that it is correct. Because of the diversity of natural language expression, there are many ways in which the same meaning can be expressed, and this approach can better address this situation (Collobert et al., 2011). Another benefit of addressing the problem in this way is that it makes the problem similar in form to a multiple-choice question-answering system. Question answering is a classic problem in natural language processing. Traditional question-answering systems are based on semantic and statistical methods (Alfonseca et al., 2002). These methods require an enormous background knowledge base and, in addition, are not very effective for non-standard language expressions. The state-of-the-art methods are usually based on neural networks: trained word embeddings can capture much of the required semantics and knowledge, so the newer methods are usually better than the traditional statistics-based methods. In this paper, we propose a bi-directional LSTM with an attention model. The model uses a bi-LSTM network to encode the original word embeddings. Then, the semantic outputs are fed into a dense decoder with an attention mechanism. Due to the uncertainty of a single model, ensemble learning is used to enhance the performance of the model. The remainder of the paper consists of three parts: the second part introduces the proposed model in detail, the implementation is presented in the third part, and the last part presents our conclusions. | 0
The Internet has been surging in popularity as well as general availability. This has considerably increased the amount of user generated content present online. This has, however, brought up a few issues. One of the issues is hate speech detection, as manual detection has been made nearly impossible by the quantity of data. The only real solution is automated hate speech detection. Our task is detection of hate speech towards immigrants and women on Twitter (Task A).Hate speech can be defined as "Any communication that disparages a person or a group on the basis of some characteristic such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics." (Basile et al., 2019) This proves to be a very broad definition, because utterances can be offensive, yet not hateful (Davidson et al., 2017) . Even manual labeling of hate speech related data is notoriously difficult as hate speech is very subjective in nature (Nobata et al., 2016; Waseem, 2016) .The provided dataset consists of collected messages from Twitter in English or Spanish language. Hate speech datasets are very prone to class imbalances (Schmidt and Wiegand, 2017) . The pro-vided dataset does not suffer from this problem. The English data contains 10,000 messages with 42.1% of the messages labeled as hate speech. The Spanish data contains 4969 messages and similarly to the English part, 41.5% were labeled as hate speech. This gives us a dataset with 14969 messages of which 6270 are categorized as hatespeech. We have not used any additional sources of training data for our models. More information about the data can be found in the Task definition (Basile et al., 2019) .Most research dealing with hate speech has been done in English due to labelled dataset availability. However, this issue is not unique to English-based content. In our work, we explore multilingual approaches, as we recognize data imbalance between languages as one of major challenges of NLP. Multilingual approaches could help remedy this problem, as one could transfer knowledge from a data-rich language (English) to a datapoor language (Spanish).We focus on neural network approaches, as they have been achieving better performance than traditional machine learning algorithms (Zhang et al., 2018) . We explore both monolingual and multilingual learning paradigms. Multilingual approaches enable us to use both English and Spanish datasets for training.The most popular input features in deep learning are word embeddings.Embeddings are fixed length vectors with real numbers as components, used to represent words in a numeric way.The input layers to our models consist of MUSE (Conneau et al., 2017) or ELMo (Peters et al., 2018) word embeddings.MUSE embeddings are multilingual embeddings based on fastText. They are available in different languages, where the words are mapped into the same vector space across languages, i.e. words with similar meanings across languages have a similar vector representation.ELMo provide a deep representation of words based on output of a three layer pre-trained neural network. The representation for a word is based on the context in which the word is used. However, they are not multilingual representations.To work around the monolinguality of ELMo, we use a technique called adversarial learning (Ganin and Lempitsky, 2014) . 
Adversarial networks consist of three parts:
• Feature extractor: responsible for creating representations that belong to the same distribution regardless of the input data distribution, i.e. of the language the messages are in. This transformation is learned during training.
• Classifier: responsible for the classification, i.e. labeling hateful utterances.
• Discriminator: responsible for predicting the language of a given message.
During backpropagation, the loss from the classifier (L_cls) is computed the standard way. The loss from the discriminator (L_dis) has its sign flipped and is multiplied by an adversarial lambda (λ); the discriminator works adversarially to the classifier:

Loss = L_cls − λ L_dis    (1)

The loss from the discriminator encourages the feature extractor to create indistinguishable representations for messages across languages. This is most often implemented by a gradient reversal layer.

2 Implementation details | 0
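The sketch below (PyTorch) shows one common way to realize equation (1) with a gradient reversal layer: the forward pass is the identity, while the backward pass flips and scales the gradient flowing into the feature extractor. The λ value, the network bodies, and the use of cross-entropy are placeholder assumptions rather than the exact setup used in our system.

```python
# Sketch of a gradient reversal layer and the adversarial objective of eq. (1).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)                    # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed (and scaled) gradient on the backward pass.
        return -ctx.lam * grad_output, None

def adversarial_loss(features, labels, languages, classifier, discriminator, lam=0.1):
    cls_loss = nn.functional.cross_entropy(classifier(features), labels)
    # The discriminator sees reversed gradients, so the feature extractor is
    # pushed toward language-invariant representations while the discriminator
    # itself still learns to predict the language.
    dis_logits = discriminator(GradReverse.apply(features, lam))
    dis_loss = nn.functional.cross_entropy(dis_logits, languages)
    # For the feature extractor this behaves like L_cls - lambda * L_dis.
    return cls_loss + dis_loss
```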
The importance of automatic methods for enriching lexicons, taxonomies and knowledge bases from free text is well-recognized. For rapidly changing domains such as current affairs, static knowledge bases are inadequate for responding to new developments, and the cost of building and maintaining resources by hand is prohibitive.This paper describes experiments which develop automatic methods for taking an original taxonomy as a skeleton and fleshing it out with new terms which are discovered in free text. The method is completely automatic and it is completely unsupervised apart from using the original taxonomic skeleton to suggest possible classifications for new terms. We evaluate how accurately our methods can reconstruct the WordNet taxonomy (Fellbaum, 1998) .The problem of enriching the lexical information in a taxonomy can be posed in two complementary ways.Firstly, given a particular taxonomic class (such as fruit) one could seek members of this class (such as apple, banana) . This problem is addressed by Riloff and Shepherd (1997) , Roark and Charniak (1998) and more recently by . Secondly, given a particular word (such as apple), one could seek suitable taxonomic classes for describing this object (such as fruit, foodstuff). The work in this paper addresses the second of these questions.The goal of automatically placing new words into a taxonomy has been attempted in various ways for at least ten years (Hearst and Schütze, 1993) . The process for placing a word w in a taxonomy T using a corpus C often contains some version of the following stages:• For a word w, find words from the corpus C whose occurrences are similar to those of w. Consider these the 'corpus-derived neighbors' N (w) of w.• Assuming that at least some of these neighbors are already in the taxonomy T , map w to the place in the taxonomy where these neighbors are most concentrated. Hearst and Schütze (1993) added 27 words to Word-Net using a version of this process, with a 63% accuracy at assigning new words to one of a number of disjoint WordNet 'classes' produced by a previous algorithm. (Direct comparison with this result is problematic since the number of classes used is not stated.) A more recent example is the top-down algorithm of Alfonseca and Manandhar (2001) , which seeks the node in T which shares the most collocational properties with the word w, adding 42 concepts taken from The Lord of the Rings with an accuracy of 28%.The algorithm as presented above leaves many degrees of freedom and open questions. What methods should be used to obtain the corpus-derived neighbors N (w)? This question is addressed in Section 2. Given a collection of neighbors, how should we define a "place in the taxonomy where these neighbors are most concentrated?" This question is addressed in Section 3, which defines a robust class-labelling algorithm for mapping a list of words into a taxonomy. In Section 4 we describe experiments, determining the accuracy with which these methods can be used to reconstruct the WordNet taxonomy. To our knowledge, this is the first such evaluation for a large sample of words. Section 5 discusses related work and other problems to which these techniques can be adapted.2 Finding semantic neighbors: Combining latent semantic analysis with part-of-speech information.There are many empirical techniques for recognizing when words are similar in meaning, rooted in the idea that "you shall know a word by the company it keeps" (Firth, 1957) . 
It is certainly the case that words which repeatedly occur with similar companions often have related meanings, and common features used for determining this similarity include shared collocations (Lin, 1999), co-occurrence in lists of objects, and latent semantic analysis (Landauer and Dumais, 1997; Hearst and Schütze, 1993). The method used to obtain semantic neighbors in our experiments was a version of latent semantic analysis, descended from that used by Hearst and Schütze (1993, §4). First, 1000 frequent words were chosen as column labels (after removing stopwords (Baeza-Yates and Ribiero-Neto, 1999, p. 167)). Other words were assigned coordinates determined by the number of times they occurred within the same context window (15 words) as one of the 1000 column-label words in a large corpus. This gave a matrix where every word is represented by a row vector determined by its co-occurrence with frequently occurring, meaningful words. Since this matrix was very sparse, singular value decomposition (known in this context as latent semantic analysis (Landauer and Dumais, 1997)) was used to reduce the number of dimensions from 1000 to 100. This reduced vector space is called WordSpace (Hearst and Schütze, 1993, §4). Similarity between words was then computed using the cosine similarity measure (Baeza-Yates and Ribiero-Neto, 1999, p. 28). Such techniques for measuring similarity between words have been shown to capture semantic properties: for example, they have been used successfully for recognizing synonymy (Landauer and Dumais, 1997) and for finding correct translations of individual terms. The corpus used for these experiments was the British National Corpus, which is tagged for parts of speech. This enabled us to build syntactic distinctions into WordSpace - instead of just giving a vector for the string test, we were able to build separate vectors for the noun, verb and adjective test. An example of the contribution of part-of-speech information to extracting semantic neighbors of the word fire is shown in Table 2. As can be seen, the noun fire (as in the substance/element) and the verb fire (mainly used to mean firing some sort of weapon) are related to quite different areas of meaning. Building a single vector for the string fire confuses this distinction - the neighbors of fire treated just as a string include words related to both the meaning of fire as a noun (more frequent in the BNC) and as a verb. Part of the goal of our experiments was to investigate the contribution that this part-of-speech information made for mapping words into taxonomies. As far as we are aware, these experiments are the first to investigate the combination of latent semantic indexing with part-of-speech information. | 0
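The construction just described can be approximated in a few lines. The sketch below (Python with SciPy) builds the sparse co-occurrence matrix against the 1000 column-label words within a 15-word window, reduces it to 100 dimensions with a truncated SVD, and compares rows with cosine similarity. Tokenisation, stopword removal, and the part-of-speech splitting of row labels are omitted; this is an illustration of the general scheme, not the exact pipeline used in the experiments.

```python
# Rough sketch of building a reduced "WordSpace" via co-occurrence counts + SVD.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import svds

def build_wordspace(sentences, column_words, window=15, dims=100):
    col_index = {w: j for j, w in enumerate(column_words)}
    vocab = sorted({w for s in sentences for w in s})
    row_index = {w: i for i, w in enumerate(vocab)}
    counts = lil_matrix((len(vocab), len(column_words)))
    for sent in sentences:
        for i, w in enumerate(sent):
            # count column-label words inside the context window around w
            for c in sent[max(0, i - window): i + window + 1]:
                if c in col_index and c != w:
                    counts[row_index[w], col_index[c]] += 1
    u, s, _ = svds(counts.tocsc(), k=dims)     # latent semantic analysis
    vectors = u * s                            # each row represents a word
    return row_index, vectors

def cosine(v1, v2):
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))
```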
Formal grammars used in statistical machine translation (SMT), such as the Bracketing Transduction Grammar (BTG) proposed by (Wu, 1997) and the synchronous CFG presented by (Chiang, 2005), provide a natural platform for integrating linguistic knowledge into SMT because the hierarchical structures produced by the formal grammar resemble linguistic structures. (We inherit the definitions of formal and linguistic from (Chiang, 2005), which makes a distinction between formally syntax-based SMT and linguistically syntax-based SMT.) Chiang (2005) attempts to integrate linguistic information into his formally syntax-based system by adding a constituent feature. Unfortunately, the linguistic feature does not show significant improvement on the test set. In this paper, we further this effort by integrating linguistic knowledge into BTG. We want to augment BTG's formal structures with linguistic structures since they are both hierarchical. In particular, our goal is to learn a more linguistically meaningful BTG from real-world bitexts by projecting linguistic structures onto BTG formal structures. In doing so, we hope to (1) maintain the strength of the phrase-based approach, since phrases are still used on BTG leaf nodes; (2) obtain a tight integration of linguistic knowledge in the translation model; and (3) avoid inducing a complicated linguistic synchronous grammar with expensive computation. The challenge, of course, is that BTG hierarchical structures are not always aligned with the linguistic structures in the syntactic parse trees of the source or target language. Along this line, we propose a novel approach: Linguistically Annotated BTG (LABTG) for SMT. The LABTG annotates BTG rules with linguistic elements that are learned from syntactic parse trees on the source side through an annotation algorithm, which is capable of labelling both syntactic and non-syntactic phrases. The linguistic elements extracted from parse trees capture both the internal lexical content and the external context of phrases. With these linguistic annotations, we expect the LABTG to address two traditional issues of standard phrase-based SMT (Koehn et al., 2003) in a more effective manner: (1) phrase translation, i.e. translating phrases according to their contexts; and (2) phrase reordering, i.e. incorporating richer linguistic features for better reordering. The proposed LABTG displays two unique characteristics when compared with BTG-based SMT (Wu, 1996; Xiong et al., 2006). The first is that two linguistically-informed sub-models are introduced for better phrase translation and reordering: an annotated phrase translation model and an annotated reordering model. The second is that our proposed annotation algorithm and scheme are capable of conveying linguistic knowledge from source-side syntax structures to BTG structures. We describe the LABTG model and the annotation algorithm in Section 4. To better explain the LABTG model, we establish a unified framework of BTG-based SMT in Section 3. We conduct a series of experiments to study the effect of the LABTG in Section 5. | 0
The neural approach is revolutionising machine translation (MT). The main neural approach to MT is based on the encoder-decoder architecture (Cho et al., 2014; Sutskever et al., 2014) , where an encoder (e.g a recurrent neural network) reads the source sentences sequentially to produce a fixed-length vector representation. Then, a decoder generates the translation from the encoded vector, which can dynamically change using the attention mechanism.One of the main premises about natural language is that words of a sentence are inter-related according to a (latent) hierarchical structure, i.e. a syntactic tree. Therefore, it is expected that modeling the syntactic structure should improve the performance of NMT, especially in low-resource or linguistically divergent scenarios, such as English-Farsi. In this direction, (Li et al., 2017 ) uses a sequence-to-sequence model, making use of linearised parse trees. (Chen et al., 2017b ) has proposed a model which uses syntax to constrain the dynamic encoding of the source sentence via structurally constrained attention. (Bastings et al., 2017; Shuangzhi Wu, 2017) have incorporated syntactic information provided by the dependency tree of the source sentence. (Marcheggiani et al., 2018 ) has proposed a model to inject semantic bias into the encoder of NMT model. Recently, (Eriguchi et al., 2016; Chen et al., 2017a) have proposed methods to incorporate the hierarchical syntactic constituency information of the source sentence. In addition to the embedding of words, computed using the vanilla sequential encoder, they compute the embeddings of phrases recursively, directed by the top-1 parse tree of the source sentence generated by a parser. Though the results are promising, the top-1 trees are prone to parser error, and furthermore cannot capture semantic ambiguities of the source sentence.In this paper, we address the aforementioned issues by using exponentially many trees encoded in a forest instead of a single top-1 parse tree. We capture the parser uncertainty by considering many parse trees and their probabilities. The encoding of each source sentence is guided by the forest, and includes the forest nodes whose representations are computed in a bottom-up fashion using our ForestLSTM architecture ( §3). Thus, in the encoding stage of this approach, different ways of constructing a phrase are taken into consideration along with the probability of rules in the corresponding trees. We evaluate our approach on English to Chinese, Farsi and German translation tasks, showing that forests lead to better performance compared to top-1 tree and sequential encoders ( §4). | 0 |
Natural language processing (NLP) tasks often leverage word-level features to exploit lexical knowledge. Segmenting a sentence into a sequence of words, especially for languages without explicit word boundaries (e.g., Chinese), not only extracts lexical features but also shortens the length of the sentence to be processed. Thus, word segmentation, i.e. detecting word boundaries, is a crucial pre-processing task for many NLP tasks. In this respect, Chinese word segmentation (CWS) is widely acknowledged as an essential task for Chinese NLP. CWS has made substantial progress in recent studies on several benchmarks, as reported by Huang and Zhao (2007) and Zhao et al. (2019). In particular, pretrained language models (PLMs), like BERT (Devlin et al., 2019), have established a new state of the art in sequence labeling (Meng et al., 2019). Various fine-tuning methods have been proposed to improve the performance of in-domain and cross-domain CWS based on PLMs (Tian et al., 2020). The two challenging problems in CWS, segmentation ambiguity and out-of-vocabulary (OOV) words, have been significantly mitigated by PLM-based methods that are fine-tuned on large-scale annotated CWS corpora. Such methods are even reaching human performance on benchmarks. Nevertheless, CWS is more valuable as a prelude to downstream NLP tasks than as a standalone task. Intrinsic evaluation of CWS on benchmark datasets only examines the effectiveness of current neural methods on word boundary detection. To better apply CWS in downstream NLP tasks, we should comprehensively re-think CWS from the perspective of practicability. In this paper, we define the practicability of CWS in terms of two aspects: low complexity as a standalone task and high beneficiality to downstream tasks. The complexity is twofold: 1) the complexity of implementation and 2) the time and space complexity of a CWS algorithm. Previous neural methods usually require additional resources (Zhou et al., 2017; Ma et al., 2018; Zhang et al., 2018b; Zhao et al., 2018; Qiu et al., 2020), such as external pre-trained embeddings. The complexity of implementation is reflected in the difficulty of acquiring external resources, which vary in quality and in the time needed to compute them. For example, it is time-consuming to obtain effective pre-trained embeddings, as they are trained on a huge amount of data. Generally, it is difficult for many previous neural methods to maintain high CWS performance in a low-resource environment. Neural methods with external resources achieve high CWS performance, but at the cost of a high complexity of implementation. On the other hand, for training and inference, PLM-based CWS methods also consume large amounts of memory to store the huge number of parameters of their models, and the speed of inference is usually slow. The huge memory consumption and slow inference prevent PLM-based CWS models from being deployed on small-scale smart devices. Moreover, since CWS is often used together with downstream models, this further weakens its applicability on smart devices, where segmentation is not supposed to add much overhead. The second aspect is the beneficiality to downstream tasks. CWS is rarely used as a standalone task in industry. Existing CWS evaluations rely only on benchmarks and analyze the behavior of segmentation methods in a static scenario. Some well-known benchmarks are quite old (e.g., Bakeoff-2005) and no longer challenging for neural CWS. Such evaluations are intrinsic and not associated with downstream NLP tasks.
High CWS performance (e.g., precision and F1) does not mean that segmentation results are beneficial to downstream processing. Additionally, benchmark datasets contain plenty of segmentation noise that affects CWS training and evaluation. For instance, although the structure "副" (vice) + "X" is segmented as two words, "副" (vice) and "X", in the training data and never unified as a single word, "副校长" (vice-president) appears as one word in the test data; note that X stands for any job title, e.g., "总统" (president) and "经理" (manager). There are also many obvious errors due to annotation inconsistency in the data. In one benchmark dataset, we found that the word "操作系统" (operating system) is treated as two words ["操作" (operate) + "系统" (system)] 6 times and appears as one word 14 times. Therefore, to measure and improve the beneficiality of CWS to downstream tasks, intrinsic evaluations on CWS benchmark datasets are not sufficient; we should also perform extrinsic evaluations with downstream tasks. To address the aforementioned practicability issues of CWS, we propose a semi-supervised neural method via pseudo labels. The method consists of two parts: a teacher model and a student model. First, we use a fine-tuned CWS model trained on the annotated CWS data as the teacher model, which achieves competitive performance from the traditional perspective of CWS. Then we collect massive unlabeled data and distill knowledge from the teacher model to the student model by generating pseudo labels, filtering out noisy pseudo labels to provide reliable knowledge for training the student. The unlabeled data is easier to obtain than other external resources (e.g., lexicons and pretrained embeddings) and can be updated at any time at a low cost. We use the lightweight student model for inference, hence significantly reducing memory consumption and inference time. The practicability of our proposed method is therefore competitive. To sum up, the contributions of this work are as follows (a small sketch of the teacher-student loop follows this list):
• Our proposed method distills knowledge from the teacher model via unlabeled data to coach the lightweight student model. The proposed method achieves a noticeable improvement over strong baselines for CWS under the traditional intrinsic evaluation.
• The lightweight student can be deployed on a small-scale device, even in a non-GPU environment. We abandon the PLM neural architecture (the teacher model) during decoding, so decoding is fast enough for practical application. Our method reduces the complexity of implementation, inference time, and memory consumption.
• We empirically investigate the effectiveness of the proposed method on downstream Chinese NLP tasks and analyze the impact of segmentation results on them via extrinsic evaluations. | 0
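As referenced above, here is an illustrative sketch of the teacher-student loop: the fine-tuned teacher segments unlabeled sentences, low-confidence pseudo labels are filtered out, and the surviving ones are mixed with the gold data to train the lightweight student. The `teacher.predict` and `student.update` interfaces, as well as the confidence threshold, are assumptions for illustration rather than the actual implementation.

```python
# Illustrative teacher-student distillation via filtered pseudo labels.
def build_pseudo_labeled_set(teacher, unlabeled_sentences, threshold=0.95):
    pseudo = []
    for sent in unlabeled_sentences:
        tags, confidence = teacher.predict(sent)   # e.g. BMES tags + a score
        if confidence >= threshold:                # drop noisy pseudo labels
            pseudo.append((sent, tags))
    return pseudo

def train_student(student, gold_data, pseudo_data, epochs=5):
    data = list(gold_data) + list(pseudo_data)     # gold + reliable pseudo labels
    for _ in range(epochs):
        for sent, tags in data:
            student.update(sent, tags)             # assumed per-example training step
    return student
```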
Scholars of Natural Language Processing technology rely on access to gold standard annotated data for training and evaluation of learning algorithms. Despite successful attempts to create machine readable document formats such as XML and HTML, the Portable Document Format (PDF) is still widely used for read-only documents which require visual markup, across domains such as scientific publishing, law, and government. This presents a challenge to NLP practitioners, as the PDF format does not contain exhaustive markup information, making it difficult to extract semantically meaningful regions from a PDF. Annotating text extracted from PDFs in a plaintext format is difficult, because the extracted text stream lacks any organization or markup, such as paragraph boundaries, figure placement and page headers/footers.Existing popular annotation tools such as BRAT (Stenetorp et al., 2012) focus on annotation of user provided plain text in a web browser specifically designed for annotation only. For many labeling tasks, this format is exactly what is required. However, as the scope and ability of natural language processing technology goes beyond purely textual processing due in part to recent advances in large language models (Peters et al., 2018; Devlin et al., 2019, inter alia) , the context and media in which datasets are created must evolve as well.In addition, the quality of both data collection and evaluation methodology is highly dependent on the particular annotation/evaluation context in which the data being annotated is viewed (Joseph et al., 2017; Läubli et al., 2018) . Annotating data directly on top of a HTML overlay on an underlying PDF canvas allows naturally occurring text to be annotated in its original context -that of the PDF itself.To address the need for an annotation tool that goes beyond plaintext data, we present a new annotation tool called PAWLS (PDF Annotation With Labels and Structure). In this paper, we discuss some of the PDF-specific design choices in PAWLS, including automatic bounding box uniformity, freeform annotations for non-textual image regions and scale/dimension agnostic bounding box storage. We report agreement statistics from an initial round of labelling during the creation of a PDF structure parsing dataset for which PAWLS was originally designed. | 0 |
The last decade has seen the emergence of spoken language as a research object in its own right, both in linguistic description and in NLP. We focus here on spoken French, in particular "spontaneous" speech, from the perspective of manual annotation of syntactic relations on the basis of computer-assisted human transcriptions (Transcriber software). The transcription conventions are those of (DELIC, to appear). We will compare spoken language with non-standard written French (cf. Habert et al., 1997; Abeillé et al., 2001). The construction of such corpora is a major challenge, both for the linguistics community (spoken/written comparison of certain structures, automatic extraction of more precise concordances, etc.) and for the NLP research community (training parsers on speech, human-machine dialogue, etc.). In the context of the EASY evaluation campaign (Evaluation des Analyseurs SYntaxique) of the Technolangue EVALDA project, we were led to examine the problems raised by the syntactic annotation of spoken corpora, in order to determine whether this task poses a specific problem, given that the parsers will be evaluated on authentic written and spoken corpora. In this article we present a typology covering some of the problems encountered with speech. We will show that studying "spontaneous" speech makes it possible to address the processing of "New Forms of Written Communication" (NFCE) (e-mails, forums, chats, SMS, etc.), writing that is more or less standardized and for which the Web and mobile telephony generate enormous demand, notably in terms of filtering and content analysis. The discussion of tools, formalisms and annotation standards related to the various outputs of syntactic parsers has been widely addressed from the perspective of parser evaluation (cf. Caroll et al., 2003, and the workshop associated with TALN 2003, "Evaluation des analyseurs syntaxiques"). In contrast, the question of choosing the reference annotations is much less debated (cf. Aït-Mokhtar et al., 2003, for written text) and extends well beyond the evaluation problem alone. It is this question that we address here. We have three methodological prerequisites: a shallow analysis, in line with the contemporary orientation towards robust syntactic parsers; dependency-based syntactic structures; and the preservation of all transcribed information (false starts, repetitions, reformulations, etc.), linked both to the possibility of a later, finer identification of speakers' intentions (Antoine et al., 2003: 29) and to the fact that, in our view, syntactic analysis begins with the faithful transcription of speech, any deletion already constituting a syntactic analysis in itself (cf. Blanche-Benveniste, Jeanjean, 1986). | 0
Social scientists rely on event data to quantitatively study the behavior of political actors. Public protest (demonstrations, industrial strikes, petition campaigns, political and symbolic violence) accounts for a large part of events involving sub-state actors. Protest event data are central to the study of protest mobilization, political instability, and social movements (Hutter, 2014; Koopmans and Rucht, 2002) .To advance the machine coding 1 of protest data, we have been building a manually annotated corpus of protest events. Our protest event coding follows guidelines adapted from successful manual coding projects. All coding decisions are supported by careful token-level annotation inspired by annotation standards for event extraction. Both event cod-ing and token-level annotation are performed by domain experts. We find that domain experts without specialist linguistic knowledge can be trained well to follow token-level annotation rules and deliver sufficient annotation quality.Contentious politics scholars often need more fine-grained information on protest events than can be delivered by available event coding software. Our event schema includes issues-the claims and grievances of protest actors-and the number of protesters. We also code protest events that are not the main topic of the report. This is often desirable (Kriesi et al., 1995) , although event coding systems would not always code them by design.We code newswire reports from the widely used English Gigaword corpus and will release all annotations. 2 | 0 |
Currently, many off-the-shelf named entity recognition solutions are available, and these can be used to recognize mentions in clinical notes denoting diseases and disorders. We decided to use the Stanford NER tool (Finkel et al., 2005) to train CRF models based on annotated biomedical text.The use of unsupervised methods for inferring word representations is nowadays also known to increase the accuracy of entity recognition models (Turian et al., 2010) . Thus, we also used Brown clusters (Brown et al., 1992; Turian et al., 2009) inferred from a large collection of non-annotated clinical texts, together with domain specific lexicons, to build features for our CRF models.An important challenge in entity recognition relates to the recognition of overlapping and noncontinuous entities (Alex et al., 2007) . In this paper, we describe how we modified the Stanford NER system to be able to recognize noncontinuous entities, through an adapted version of the SBIEO scheme .Besides the recognition of medical concepts, we also present the strategy used to map each of the recognized concepts into a SNOMED CT identifier (Cornet and de Keizer, 2008) . This task is particularly challenging, since there are many ambiguous cases. We describe our general approach to address the aforementioned CUI mapping problem, based on similarity search and on the information content of SNOMED CT concept names. | 0 |
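As a hedged illustration of how Brown clusters can feed a CRF-style tagger, the sketch below turns cluster bit-strings into prefix features of several lengths, exposing the cluster hierarchy at different granularities. The cluster file format and the specific feature names are assumptions, not the exact features used in our models.

```python
# Turning Brown cluster bit-strings into token features for a CRF-style tagger.
def load_brown_clusters(path):
    clusters = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            bitstring, word = line.split()[:2]     # e.g. "1010110  fever"
            clusters[word] = bitstring
    return clusters

def token_features(word, clusters, prefix_lengths=(4, 6, 10)):
    feats = {"word.lower": word.lower(), "word.isupper": word.isupper()}
    bits = clusters.get(word.lower())
    if bits:
        for p in prefix_lengths:                   # coarse-to-fine cluster ids
            feats[f"brown.prefix{p}"] = bits[:p]
    return feats
```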
State-of-the-art statistical machine translation (SMT) systems use large amounts of parallel data to estimate translation models. However, parallel corpora are expensive and not available for every domain.Recently different works have been published that train translation models using only nonparallel data. Although first practical applications of these approaches have been shown, the overall decipherment accuracy of the proposed algorithms is still low. Improving the core decipherment algorithms is an important step for making decipherment techniques useful for practical applications.In this paper we present an effective beam search algorithm which provides high decipherment accuracies while having low computational requirements. The proposed approach allows using high order n-gram language models, is scalable to large vocabulary sizes and can be adjusted to account for a given amount of computational resources. We show significant improvements in decipherment accuracy in a variety of experiments while being computationally more effective than previous published works. | 0 |
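For intuition only, the following is a generic beam-search skeleton over partial cipher-symbol-to-word mappings with histogram pruning; the actual extension order, the scoring with high-order n-gram language models, and the pruning criteria of the proposed algorithm are not reproduced here, and `score` is a placeholder.

```python
# Generic beam search over partial substitution mappings (illustrative only).
def beam_search(cipher_symbols, candidate_words, score, beam_size=100):
    beams = [dict()]                                   # start from the empty mapping
    for symbol in cipher_symbols:
        expanded = []
        for mapping in beams:
            for word in candidate_words:
                expanded.append({**mapping, symbol: word})
        # Keep only the highest-scoring partial mappings (histogram pruning);
        # the beam size trades decipherment accuracy against computation.
        beams = sorted(expanded, key=score, reverse=True)[:beam_size]
    return max(beams, key=score)
```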
Community-based Question Answering (CQA) systems such as Yahoo! Answers 1 , StackOverflow 2 and Baidu Zhidao 3 have become dependable sources of knowledge to solve common user problems. Unlike factoid question answering 4 , CQA systems focus on crowdsourcing how and why questions and their answers. As is the case with any system where content is generated by web users, the generated content would be of varying quality, reliability, readability and abstraction. Thus, manual curation of such datasets is inevitable to weed out low quality and duplicate content to ensure user satisfaction. A natural way to aid manual curation of such broad-based CQA archives is to employ clustering so that semantically related QAs are grouped together; this would help organize the corpus in a way that experts engaged in manual curation be assigned specific clusters relating to areas of their expertise. Clustering also provides a platform to enable tagging the QA dataset; cluster topics could be used as tags, or other QAs in the same cluster could be tagged as being related to a QA. The fundamental difference between CQA archives and general text document collections is the existence of a two-part structure in QAs and the difference in lexical "character" between the question and answer parts. This lexical chasm (i.e., gap) (Berger et al., 2000) between question and answer parts has been a subject of much study, especially, in the context of improving QA retrieval. In this paper, we consider using the two-part structure in QAs for clustering CQA datasets.Motivating Example: Table 1 lists four example QAs from the context of a CQA system focused on addressing myriad technical issues. These QAs have been tagged in the table with a manually identified root-cause to aid understanding; the root-cause is not part of the CQA data per se. QA1 and QA2 are seen to address related issues pertaining to routers, whereas QA3 and QA4 are focused on the same nar-row issue dealing with java libraries. Since QA1 and QA2 address different problems, they may not be expected to be part of the same cluster in finegrained clusterings. On the other hand, the solutions suggested in QA3 and QA4 are distinct and different legitimate solutions to the same problem cause. Thus, from a semantics perspective, it is intuitive that QA3 and QA4 should be part of the same cluster in any clustering of the CQA dataset to aid actioning on them together; a human expert might decide to merge the question parts and tag one of the answers as an alternative answer. Let us now examine the lexical relatedness between the pairs as illustrated in Table 2 . State-of-the-art text similarity measures that quantify word overlaps are likely to judge QA1 and QA2 to be having a medium similarity when either the question-part or the answerpart are considered. For the pair (QA3, QA4), the question-part similarity would be judged to be high and the answer-part similarity as low. Thus, the high similarity between the root-causes of QA3 and QA4 manifest primarily in their question-parts. Analogously, we observed that some QAs involving the same root-cause lead to high answer-part similarity despite poor question-part similarity. This is especially true in cases involving suggestion of the same sequence of solution steps despite the question-part being divergent due to focusing on different symptoms of the same complex problem. From these observations, we posit that high similarities on either the question-space or answer-space is indicative of semantic relatedness. 
Any clustering method that uses a sum, average, or weighted-sum aggregation function to arrive at pair-wise similarities, such as a K-Means clustering that treats the collated QA as a single document, would intuitively be unable to heed such differential manifestation of semantic similarities across the two parts. Our Contributions: We address the problem of harnessing the two-part structure of QA pairs to improve clustering of CQA data. Based on our observations of CQA data, such as those illustrated in the example, we propose a clustering approach, MixK-Means, that composes similarities (dissimilarities) in the question and answer spaces using a max (min) operator style of aggregation. Through extensive empirical analysis on real-world CQA data, we show that our method outperforms state-of-the-art approaches for the task of CQA clustering. | 0
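The intuition behind the max/min composition can be written down directly; in the sketch below, a QA pair is judged related if either its question parts or its answer parts are highly similar. `text_sim` stands for any text similarity measure (e.g., cosine over tf-idf vectors) and is an assumption here, not the measure used in our experiments.

```python
# Max/min composition of question-space and answer-space similarities.
def qa_similarity(qa1, qa2, text_sim):
    # qa1, qa2: dicts with "question" and "answer" strings.
    q_sim = text_sim(qa1["question"], qa2["question"])
    a_sim = text_sim(qa1["answer"], qa2["answer"])
    # High similarity in EITHER space indicates semantic relatedness, so
    # similarities are combined with max (dissimilarities with min).
    return max(q_sim, a_sim)

def qa_dissimilarity(qa1, qa2, text_sim):
    return min(1.0 - text_sim(qa1["question"], qa2["question"]),
               1.0 - text_sim(qa1["answer"], qa2["answer"]))
```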
Linguistic alignment is the tendency that interlocutors have to change the way they talk to accommodate their conversational partners. This can happen through mirroring the partner's linguistic behavior on many levels such as the choice of words, syntactic structures, and semantic topics. Linguistic alignment is considered an important mechanism for establishing common ground and rapport, fostering successful communicative interactions (Clark, 1996) . In addition, understanding this coordination in its natural context is crucial for the design of conversational systems that interact with people in a natural and effective fashion (Zhao et al., 2016; Loth et al., 2015; Park et al., 2017) .While alignment has been largely studied with adults (Pickering and Garrod, 2004; Fusaroli et al., 2012; Dale et al., 2013; Dideriksen et al., 2019) , little has been done to investigate how it manifests in the context of childadult early communication and how it evolves across development. This is a significant gap in the literature. The child-adult early communication cannot be thought of as a simple extension of conversational dynamics between adults; it involves strong asymmetries in terms of cognitive abilities and social roles and, thus, requires more dedicated research (Clark, 2015) . In addition, the study of child-caregiver linguistic interaction informs our theories of children's cognitive development. On the one hand, children's developing abilities in managing a conversation -through mechanisms such as interactive alignment -is a window into their emerging social-cognitive skills (Tomasello, 2009) . On the other hand, the way caregivers use alignment across development allows us to understand whether and how adults tune their talk to children's developing cognitive abilities. Such tuning has been suggested to play a pedagogical role, supporting linguistic and conceptual learning (Snow, 1972; Fourtassi et al., 2014 Fourtassi et al., , 2019 .Our study investigates children's interactive alignment in natural conversations with adults. Previously, Dale and Spivey (2006) used recurrence analysis to investigate child-caregiver syntactic alignment (operationalized as sequences of parts of speech) and found evidence for syntactic coordination. Using a similar computational framework, Fernández and Grimm (2014) extended Dale and Spivey's findings to the lexical and conceptual levels. Nevertheless, both studies were based on data from three children only. While such a small sample size allows for a detailed examination of development for specific children, it does not allow us to characterize general developmental patterns that could be shared by the majority of children. Indeed, both studies found large individual variability and, thus, no strong conclusions about development could be drawn.In a more recent work, Yurovsky et al. (2016) studied a large-scale corpus of child-caregiver interactions containing two orders of magnitude more children than previous work. Using hierarchical Bayesian models, they found that both children and caregivers decreased their alignment over the first five years of development. Work by Yurovsky et al. (2016) thus provided a much more robust test of interactive alignment. However, it focused on the special case of function words. It is still an open question how development unfolds across the entire lexicon and along more abstract levels such as syntax and semantics. The current study is an effort to fill this gap in the literature. 
We leverage NLP tools to test interactive alignment at the lexical, syntactic, and conceptual levels, using a large-scale corpus of children's natural language. | 0 |
Our aim here was to build thematic timelines for a general domain topic defined by a user query. This task, which involves the extraction of important events, is related to the tasks of Retrospective Event Detection (Yang et al., 1998) , or New Event Detection, as defined for example in Topic Detection and Tracking (TDT) campaigns (Allan, 2002) .The majority of systems designed to tackle this task make use of textual information in a bag-ofwords manner. They use little temporal information, generally only using document metadata, such as the document creation time (DCT). The few systems that do make use of temporal information (such as the now discontinued Google timeline), only extract absolute, full dates (that feature a day, month and year). In our corpus, described in Section 3.1, we found that only 7% of extracted temporal expressions are absolute dates.We distinguish our work from that of previous researchers in that we have focused primarily on extracted temporal information as opposed to other textual content. We show that using linguistic temporal processing helps extract important events in texts. Our system extracts a maximum of temporal information and uses only this information to detect salient dates for the construction of event timelines. Other types of content are used for initial thematic document retrieval. Output is a list of dates, ranked from most important to least important with respect to the given topic. Each date is presented with a set of relevant sentences.We can see this work as a new, easily evaluable task of "date extraction", which is an important component of timeline summarization.In what follows, we first review some of the related work in Section 2. Section 3 presents the resources used and gives an overview of the system. The system used for temporal analysis is described in Section 4, and the strategy used for indexing and finding salient dates, as well as the results obtained, are given in Section 5 1 . | 0 |
Over the past few years, an increasing number of people have begun to express their opinions through social networks and microblogging services. Twitter, one of the most popular of these social networks, has become a major platform for social communication, allowing its users to send and read short messages called 'tweets'. Tweets have become important in a variety of tasks, including the prediction of election results (O'Connor et al., 2010). The emergence of online expressions of opinion has attracted interest in sentiment analysis of tweets in both academia and industry. Sentiment analysis, also known as opinion mining, focuses on computational treatments of sentiments (emotions, attitudes, opinions) in natural language text. In this paper we describe our submission to Task 10, subtask B: Message Polarity Classification. The task is defined as: 'Given a message, classify whether the message is of positive, negative, or neutral sentiment. For a message conveying both a positive and negative sentiment, whichever is the stronger sentiment should be chosen' (Rosenthal et al., 2015). Our system utilizes a Naive Bayes classifier to determine the sentiment of tweets. The remainder of the paper describes the resources used; the system details, including the preprocessing steps taken, the feature extraction, and the classifier implemented; and the test runs and final results. | 0
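A minimal, generic sketch (scikit-learn) of the kind of pipeline described: bag-of-words counts feeding a multinomial Naive Bayes classifier over the three polarity classes. The exact preprocessing, features, and smoothing used in our submission are not reproduced; everything below is illustrative.

```python
# Generic Naive Bayes polarity classifier sketch (scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
    CountVectorizer(lowercase=True, ngram_range=(1, 2)),  # unigrams + bigrams
    MultinomialNB(alpha=1.0),                              # Laplace smoothing
)

# tweets: list of raw tweet strings; labels: "positive" / "negative" / "neutral"
# pipeline.fit(tweets, labels)
# predictions = pipeline.predict(["I love this phone", "worst service ever"])
```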
The sheer amount of natural language data provides a great opportunity to represent named entity mentions by their probability distributions, so that they can be exploited for many Natural Language Processing (NLP) applications. However, named entity mentions are fundamentally different from common words or phrases in three aspects. First, the semantic meaning of a named entity mention (e.g., a person name "Bill Gates") is not a simple summation of the meanings of the words it contains ("Bill" + "Gates"). Second, entity mentions are often highly ambiguous in various local contexts. For example, "Michael Jordan" may refer to the basketball player or the computer science professor. Third, representing entity mentions as mere phrases fails when names are rendered quite differently, especially when they appear across multiple languages. For example, "Ang Lee" in English is "Li An" in Chinese.Fortunately, entities, the objects which mentions refer to, are unique and equivalent across languages. Many manually constructed entity-centric knowledge base resources such as Wikipedia 2 , DBPedia (Auer et al., 2007) and YAGO (Suchanek et al., 2007) are widely available. Even better, they are massively multilingual. For example, up to August 2018, Wikipedia contains 21 million interlanguage links 3 between 302 languages. We propose a novel cross-lingual joint entity and word (CLEW) embedding learning framework based on multilingual Wikipedia and evaluate its effectiveness on two practical NLP applications: Crosslingual Entity Linking and Parallel Sentence Mining.Wikipedia contains rich entity anchor links. As shown in Figure 2 , many mentions (e.g., "小米" (Xiaomi)) in a source language are linked to the entities in the same language that they refer to (e.g., zh/小 米 科 技 (Xiaomi Technology)), and some mentions are further linked to their corresponding English entities (e.g., Chinese mention "苹果" (Apple) is linked to entity en/Apple_Inc. in English). We replace each mention (anchor link) in the source language with its corresponding entity title in the target language if it exists, or in the source language otherwise. After this replacement, each entity mention is treated as a unique disambiguated entity, then we can learn joint entity and word embedding representations for the source language and target language respectively.Furthermore, we leverage these shared target language entities as pivots to learn a rotation matrix and seamlessly align two embedding spaces into one by linear mapping. In this unified common space, multiple mentions are reliably disambiguated and grounded, which enables us to directly compute the semantic similarity between a mention in a source language and an entity in a target language (e.g., English), and thus we can perform Cross-lingual Entity Linking in an unsupervised way, without using any training data. In addition, considering each pair of Wikipedia articles connected by an inter-language link as comparable documents, we use this multilingual common space to represent sentences and extract many parallel sentence pairs.The novel contributions of this paper are:• We develop a novel approach based on rich anchor links in Wikipedia to learn crosslingual joint entity and word embedding, so that entity mentions across multiple languages are disambiguated and grounded into one unified common space.• Using this joint entity and word embedding space, entity mentions in any language can be linked to an English knowledge base without any annotation cost. 
We achieve state-of-the-art performance on unsupervised cross-lingual entity linking.
• We construct a rich resource of parallel sentences for 302² language pairs along with accurate entity alignment and word alignment. | 0
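The rotation-based alignment of the two embedding spaces described above can be illustrated with an orthogonal Procrustes solution over the shared pivot entities, as sketched below; this is a generic formulation of linear-mapping alignment, and the matrix shapes and variable names are assumptions rather than the paper's exact procedure.

```python
# Orthogonal Procrustes sketch: align a source embedding space to the target
# space using embeddings of the SAME pivot entities in both spaces.
import numpy as np

def learn_rotation(source_vecs, target_vecs):
    # source_vecs, target_vecs: (n_pivots, dim) rows for identical pivot entities.
    u, _, vt = np.linalg.svd(target_vecs.T @ source_vecs)
    return u @ vt                          # orthogonal rotation matrix W

def map_to_target(source_vec, rotation):
    # After mapping, source-language mentions and target-language entities can
    # be compared directly, e.g. with cosine similarity.
    return rotation @ source_vec
```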
In the digital era we live in, almost everyone communicates online. As of January 2021, Facebook, YouTube, and WhatsApp each have over 2 billion users, which means many differing viewpoints and perspectives are being shared (Statista, 2021). With such a huge exchange of ideas, there is bound to be some toxicity within the comments. Aside from discouraging users from continuing with or joining conversations, toxic comments can also taint users' perceptions of news sites (Tenenboim et al., 2019). Thus it is important to moderate online conversations without fully censoring users. While forums typically rely on human moderators, with such vast amounts of data coming in, it can be difficult for humans to keep up (Nobata et al., 2016). Advances in deep learning and machine learning are making text processing a viable option to replace, or at least assist, human moderators in cleaning up comment sections (Consultants, 2019). Some methods rely on simply classifying whether a comment is toxic or not, but identifying which parts of the text are actually toxic can assist moderators and provide insight into what makes language toxic. SemEval Task 5 aims to evaluate systems that detect toxic spans within text using datasets where spans within the comments are labelled as toxic, differing from previously released datasets where whole comments were labelled as toxic or non-toxic (Pavlopoulos et al., 2021). This is inherently a natural language processing task, similar to text classification and sentiment analysis. This study focuses on training a recurrent neural network to determine the indices of a given string that represent the toxic portions of a comment. Recurrent neural networks are classically used for natural language and sequence labelling tasks, and one could view this task as a form of sequence labelling: given a sequence as input, assign a sequence of labels. Because recurrent neural networks (RNNs) are flexible in their use of context information and can recognize sequential patterns, they are an attractive and commonly used choice for sequence labelling (Graves, 2012). This paper approaches the task with a sequence labelling methodology, applying an RNN and comparing the use of gated recurrent unit (GRU) and long short-term memory (LSTM) layers in the RNN. | 0
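A hedged sketch (Keras) of the kind of recurrent sequence labeller being compared: an embedding layer, a bidirectional GRU or LSTM, and a per-token sigmoid output marking toxic tokens, whose offsets can then be mapped back to character indices. Layer sizes, the optimizer, and the exact input representation are placeholders, not the submitted configuration.

```python
# Recurrent token-level tagger sketch for toxic span detection (Keras).
from tensorflow.keras import layers, models

def build_tagger(vocab_size, embed_dim=100, rnn_units=128, use_gru=True):
    rnn = layers.GRU if use_gru else layers.LSTM
    model = models.Sequential([
        layers.Embedding(vocab_size, embed_dim, mask_zero=True),
        layers.Bidirectional(rnn(rnn_units, return_sequences=True)),
        # One toxic/non-toxic probability per token.
        layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```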
Dramatic progress has been achieved in single-turn dialogue modeling such as open-domain response generation (Shang et al., 2015), question answering (Rajpurkar et al., 2016), etc. By contrast, multi-turn dialogue modeling is still in its infancy, as users tend to use incomplete utterances which usually omit or refer back to entities or concepts that appeared in the dialogue context, namely ellipsis and coreference. According to previous studies, ellipsis and coreference exist in more than 70% of the utterances (Su et al., 2019), so a dialogue system must be equipped with the ability to understand them. To tackle the problem, early works include learning a hierarchical representation (Serban et al., 2017; Zhang et al., 2018) and concatenating the dialogue utterances selectively (Yan et al., 2016). Recently, researchers have focused on a more explicit and explainable solution: the task of Incomplete Utterance Rewriting (IUR, also known as context rewriting) (Kumar and Joshi, 2016; Su et al., 2019; 2019a; Pan et al., 2019; Elgohary et al., 2019).

Table 1: An example dialogue between user A and B, including the context utterances (x1, x2), the incomplete utterance (x3) and the rewritten utterance (x3*).
x1 (A): 北京今天天气如何 (How is the weather in Beijing today)
x2 (B): 北京今天是阴天 (Beijing is cloudy today)
x3 (A): 为什么总是这样 (Why is always this)
x3*   : 北京为什么总是阴天 (Why is Beijing always cloudy)

IUR aims to rewrite an incomplete utterance into an utterance which is semantically equivalent but self-contained, so that it can be understood without context. As shown in Table 1, the incomplete utterance x3 not only omits the subject "北京" (Beijing), but also refers to the meaning of "阴天" (cloudy) via "这样" (this). By explicitly recovering the hidden semantics behind x3 into x3*, IUR makes downstream dialogue modeling more precise. To deal with IUR, a natural idea is to transfer models from coreference resolution (Clark and Manning, 2016). However, this idea is not easy to realize, as ellipsis also accounts for a large proportion of the cases. Despite being different, coreference and ellipsis can both be resolved without introducing out-of-dialogue words in most cases; that is to say, the words of the rewritten utterance come almost entirely from either the context utterances or the incomplete utterance. Observing this, most previous works employ the pointer network (Vinyals et al., 2015) or the sequence-to-sequence model with copy mechanism (Gu et al., 2016; See et al., 2017). However, they generate the rewritten utterance from scratch, neglecting a key trait: the main structure of a rewritten utterance is always the same as the incomplete utterance. To highlight this, we view the rewritten utterance as the outcome of a series of edit operations (i.e. substitute and insert) on the incomplete utterance. Taking the example from Table 1, x3* can be obtained by substituting "这样" (this) in x3 with "阴天" (cloudy) from x2 and inserting "北京" (Beijing) before "为什么" (Why), which is much easier than producing x3* by decoding word by word. These edit operations are carried out between word pairs of the context utterances and the incomplete utterance, analogous to semantic segmentation (a well-known task in computer vision): given relevance features between word pairs as an image, the model predicts the edit type for each word pair as a pixel-level mask (elaborated in Section 3; a small sketch of such a feature map follows below).
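As mentioned above, a small sketch of the word-pair "image": each cell holds a relevance feature between a context word and an incomplete-utterance word, and a segmentation-style model would predict an edit label (none / substitute / insert) per cell. Here, cosine similarity of word embeddings stands in for the richer relevance features used in the actual model.

```python
# Word-pair feature map for segmentation-style edit prediction (illustrative).
import numpy as np

def word_pair_feature_map(context_vecs, utterance_vecs):
    # context_vecs: (M, d) embeddings of context-utterance words
    # utterance_vecs: (N, d) embeddings of incomplete-utterance words
    c = context_vecs / np.linalg.norm(context_vecs, axis=1, keepdims=True)
    u = utterance_vecs / np.linalg.norm(utterance_vecs, axis=1, keepdims=True)
    # The (M, N) similarity map plays the role of an image; a segmentation
    # network would output an (M, N) mask of edit types over the same grid.
    return c @ u.T
```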
Inspired by the above, in this paper, we propose a novel and extensive approach which formulates IUR as semantic segmentation 1 . Our contributions are as follows:• As far as we know, we are the first to present such a highly extensive approach which formulates the incomplete utterance rewriting as a semantic segmentation task.• Benefiting from being able to capture both local and global information, our approach achieves state-of-the-art performance on several datasets across different domains and languages.• Furthermore, our model predicts the edit operations in parallel, and thus obtains a much faster inference speed than traditional methods. | 0 |
Multiword expressions (MWEs) are word combinations idiosyncratic with respect to e.g. syntax or semantics (Baldwin and Kim, 2010). One of their most emblematic properties is semantic non-compositionality: the meaning of the whole cannot be straightforwardly deduced from the meanings of its components, as in cut corners 'do an incomplete job'. Due to this property and to their frequency (Jackendoff, 1997), MWEs are a major challenge for semantically-oriented downstream applications, such as machine translation. A prerequisite for MWE processing is their automatic identification. MWE identification aims at locating MWE occurrences in running text. This task is very challenging, as signaled by Constant et al. (2017), and further confirmed by the PARSEME shared task on automatic identification of verbal MWEs. One of the main difficulties stems from the variability of MWEs, especially verbal ones (VMWEs). That is, even if a VMWE has previously been observed in a training corpus or in a lexicon, it can re-appear in morphosyntactically diverse forms. Examples (1-2) show two occurrences of a VMWE with variation in the components' inflection (cutting vs. cut), word order, presence of discontinuities (were), and syntactic relations (obj vs. nsubj).
(1) Some companies were cutting corners_obj to save costs.
(2) The field would look uneven if corners_nsubj were cut.
However, unrestricted variability is not a reasonable assumption either, since it may lead to literal or coincidental occurrences of VMWEs' components (Savary et al., 2019b), as in (3) and (4), respectively.
(3) Start with cutting one corner of the disinfectant bag.
(4) If you cut along these lines, you'll get two acute corners.

Table 1: PARSEME shared task corpora for the 4 languages in focus (FR, PL, PT, RO) in terms of the number of tokens, annotated VMWEs and seen VMWEs (those whose multiset of lemmas also appears annotated in train).
      train                dev                                 test
      # tokens  # VMWEs    # tokens  # VMWEs  # seen  % seen   # tokens  # VMWEs  # seen  % seen
FR    432389    4550       56254     629      485     77.1     39489     498      251     50.4
PL    220465    4122       26030     515      387     75.1     27823     515      371     72.0
PT    506773    4430       68581     553      409     74.0     62648     553      397     71.8
RO    781968    4713       118658    589      555     94.2     114997    589      561     92.2

Our paper addresses VMWE variability, so as to distinguish examples (1-2) from (3-4). We focus on a subproblem of VMWE identification: the identification of previously seen VMWEs. Section 2 describes the corpora and the best systems of the PARSEME shared task 1.1, and Sections 3 and 4 motivate and describe our system Seen2020, dedicated to the task of seen VMWE identification. Experimental results are shown in Section 5, an interpretation is proposed in Section 6, and we conclude in Section 7. | 0
Online multimedia content becomes more and more accessible through digital TV, social networking sites and searchable digital libraries of photographs and videos. People of different ages and cultures attempt to make sense out of this data and re-package it for their own needs, these being informative, educational and entertainment ones. Understanding and generation of multimedia discourse requires knowledge and skills related to the nature of the interacting modalities and their semantic interplay for formulating the multimedia message.Within such context, intelligent multimedia systems are expected to parse/generate such messages or at least assist humans in these tasks. From another perspective, everyday human communication is predominantly multimodal; as such, similarly intuitive human-computer/robot interaction demands that intelligent systems master -among others-the semantic interplay between different media and modalities, i.e. they are able to use/understand natural language and its reference to objects and activities in the shared, situated communication space.It was more than a decade ago, when the lack of a theory of how different media interact with one another was indicated (Whittaker and Walker, 1991) . Recently, such theoretical framework has been developed and used for annotating a corpus of audiovisual documents with the objective of using such corpus for developing multimedia information processing tools (Pastra, 2008) . In this paper, we provide a brief overview of the theory and the corresponding annotated corpus and present a text-based search interface that has been developed for the exploration and the automatic expansion/generalisation of the annotated semantic relations. This search interface is a support tool for the theory and the related corpus and a first step towards its computational exploitation. | 0 |
In recent years, social media platforms in the Arabic region have been evolving rapidly. Twitter provides an easy form of communication that enables users to share information about their activities, opinions, feelings, and views about a wide variety of social events. It has been a great platform to disseminate events as they happen and released immediately, even before they are announced in traditional media. Tweets' contents have become major sources for extracting information about real-world events. Critical events such as violence, disasters, fires, and traffic accidents that need emergency awareness require an extreme effort to detect and track. Twitter users' posts have been utilized as data provider to detect high-risk events with their locations, such as earthquakes (Sakaki et al., 2010) , Traffic incidents (Gu et al., 2016) and floods (Arthur et al., 2018) . An earlier work done by Sakaki et al. (2010) predicted and detected the location of an earthquake in Japan more quickly than the Japan Meteorological Agency. (Gu et al., 2016) identified five categories of traffic incidents in the city of Pittsburgh and Philadelphia (USA) using twitter data. A recent study by Arthur et al. (2018) utilized tweets to locate and detect flood in the UK. Recently, event detection has been considered an active area of researches due to the widespread availability of data in social media. However, researches about event detection on Twitter applying it on Arabic is hampered by the lack of datasets that could be used to design and develop an event detection system. Until now, the dataset of (Almerekhi et al., 2016) and (Alhelbawy et al., 2016) are the only published Arabic datasets for event detection purposes that are freely available for research. To detect an event in the Arabic region, constructing a dataset of Arabic events is mandatory. Leveraging Twitter popularity in Saudi Arabia, we aim to build a dataset containing tweets written in both Modern Standard Arabic (MSA) and Saudi dialect to detect flood, dust storm, and traffic accidents. We focus on the flood, dust storm, and traffic accident events according to their significant 1 https://github.com/BatoolHamawi/FloDusTA influence on human life and economy in Saudi Arabia (Youssef et al, 2015; Karagulian et al.,2019; Mansuri et al., 2015) . To the best of our knowledge, this is the first publicly available Arabic dataset for the aim of detecting flood, dust storm, and traffic accident events. In this paper, the main contributions are:• We describe an Arabic dataset of Saudi event tweets FloDusTA: Flood, Dust Storm, Traffic Accident Saudi Event dataset. The dataset will be publicly available for the research community 1 . • A preliminary set of experiments were conducted to establish a baseline for future work on building an event detection system. The rest of this paper is organized as follows. Section 2 reviews the related works. Section 3 describes how tweets were collected and the cleaning and filtering that were deployed to extract a dataset of Saudi event tweets. In Section 4 we explain the annotation process in detail. In Section 5 the experiments are illustrated. Finally, we conclude and discuss future work. | 0 |
In speech recognition and understanding systems, many kinds of language model may be used to choose between the word and sentence hypotheses for which there is evidence in the acoustic data. Some words, word sequences, syntactic constructions and semantic structures are more likely to occur than others, and the presence of more likely objects in a sentence hypothesis is evidence for the correctness of that hypothesis. Evidence from different knowledge sources can be combined in an attempt to optimize the selection of correct hypotheses; see e.g. Alshawi and Carter (1994) ; Rayner et al (1994) ; Rosenfeld (1994) .Many of the knowledge sources used for this purpose score a sentence hypothesis by calculating a simple, typically linear, combination of scores associated with objects, such as N-grams and grammar rules, that characterize the hypothesis or its preferred linguistic analysis. When these scores are viewed as log probabilities, taking a linear sum corresponds to making an independence assumption that is known to be at best only approximately true, and that may give rise to inaccuracies that reduce the effectiveness of the knowledge source.The most obvious way to make a knowledge source more accurate is to increase the amount of structure or context that it takes account of. For example, a bigram model may be replaced by a trigram one, and the fact that dependencies exist among the likelihoods of occurrence of grammar rules at different locations in a parse tree can be modeled by associating probabilities with states in a parsing table rather than simply with the rules themselves (Briscoe and Carroll, 1993) .However, such remedies have their drawbacks. Firstly, even when the context is extended, some important influences may still not be modeled. For example, dependencies between words exist at separations greater than those allowed for by trigrams (for which long-distance N-grams [Jelinek et al, 1991] are a partial remedy), and associating scores with parsing table states may not model all the important correlations between grammar rules. Secondly, extending the model may greatly increase the amount of training data required if sparseness problems are to be kept under control, and additional data may be unavailable or expensive to collect. Thirdly, one cannot always know in advance of doing the work whether extending a model in a particular direction will, in practice, improve results. If it turns out not to, considerable ingenuity and effort may have been wasted.In this paper, I argue for a general method for extending the context-sensitivity of any knowledge source that calculates sentence hypothesis scores as linear combinations of scores for objects. The method, which is related to that of Iyer, Ostendorf and Rohlicek (1994) , involves clustering the sentences in the training corpus into a number of subcorpora, each predicting a different probability distribution for linguistic objects. An utterance hypothesis encountered at run time is then treated as if it had been selected from the subpopulation of sentences represented by one of these subcorpora. This technique addresses as follows the three drawbacks just alluded to. Firstly, it is able to capture the most important sentence-internal contextual effects regardless of the complexity of the probabilistic dependencies between the objects involved. Secondly, it makes only modest additional demands on training data. 
Thirdly, it can be applied in a standard way across knowledge sources for very different kinds of object, and if it does improve on the unclustered model this constitutes proof that additional, as yet unexploited relationships exist between linguistic objects of the type the model is based on, and that therefore it is worth looking for a more specific, more powerful way to model them.The use of corpus clustering often does not boost the power of the knowledge source as much as a specific hand-coded extension. For example, a clustered bigram model will probably not be as powerful as a trigram model. However, clustering can have two important uses. One is that it can provide some improvement to a model even in the absence of the additional (human or computational) resources required by a hand-coded extension. The other use is that the existence or otherwise of an improvement brought about by clustering can be a good indicator of whether additional performance can in fact be gained by extending the model by hand without further data collection, with the possibly considerable additional effort that extension would entail. And, of course, there is no reason why clustering should not, where it gives an advantage, also be used in conjunction with extension by hand to produce yet further improvements.As evidence for these claims, I present experimental results showing how, for a particular task and training corpus, clustering produces a sizeable improvement in unigram-and bigram-based models, but not in trigram-based ones; this is consistent with experience in the speech understanding community that while moving from bigrams to trigrams usually produces a definite payoff, a move from trigrams to 4-grams yields less clear benefits for the domain in question. I also show that, for the same task and corpus, clustering produces improvements when sentences are assessed not according to the words they contain but according to the syntax rules used in their best parse. This work thus goes beyond that of Iyer et al by focusing on the methodological im-portance of corpus clustering, rather than just its usefulness in improving overall systemperformance, and by exploring in detail the way its effectiveness varies along the dimensions of language model type, language model complexity, and number of clusters used. It also differs from Iyer et al's work by clustering at the utterance rather than the paragraph level, and by using a training corpus of thousands, rather than millions, of sentences; in many speech applications, available training data is likely to be quite limited, and may not always be chunked into paragraphs. | 0 |
Recent work demonstrated that word embeddings induced from large text collections encode many human biases (e.g., Bolukbasi et al., 2016; Caliskan et al., 2017) . This finding is not particularly surprising given that (1) we are likely project our biases in the text that we produce and (2) these biases in text are bound to be encoded in word vectors due to the distributional nature (Harris, 1954) of the word embedding models (Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017) . For illustration, consider the famous analogy-based gender bias example from Bolukbasi et al. (2016) : "Man is to computer programmer as woman is to homemaker". This bias will be reflected in the text (i.e., the word man will co-occur more often with words like programmer or engineer, whereas woman will more often appear next to homemaker or nurse), and will, in turn, be captured by word embeddings built from such biased texts. While biases encoded in word embeddings can be a useful data source for diachronic analyses of societal biases (e.g., Garg et al., 2018) , they may cause ethical problems for many downstream applications and NLP models.In order to measure the extent to which various societal biases are captured by word embeddings, Caliskan et al. (2017) proposed the Word Embedding Association Test (WEAT). WEAT measures semantic similarity, computed through word embeddings, between two sets of target words (e.g., insects vs. flowers) and two sets of attribute words (e.g., pleasant vs. unpleasant words). While they test a number of biases, the analysis is limited in scope to English as the only language, GloVe (Pennington et al., 2014) as the embedding model, and Common Crawl as the type of text. Following the same methodology, McCurdy and Serbetci (2017) extend the analysis to three more languages (German, Dutch, Spanish), but test only for gender bias.In this work, we present the most comprehensive study of biases captured by distributional word vector to date. We create XWEAT, a collection of multilingual and cross-lingual versions of the WEAT dataset, by translating WEAT to six other languages and offer a comparative analysis of biases over seven diverse languages. Furthermore, we measure the consistency of WEAT biases across different embedding models and types of corpora. What is more, given the recent surge of models for inducing cross-lingual embedding spaces (Mikolov et al., 2013a; Hermann and Blunsom, 2014; Smith et al., 2017; Conneau et al., 2018; Artetxe et al., 2018; Hoshen and Wolf, 2018, inter alia) and their ubiquitous application in cross-lingual transfer of NLP models for downstream tasks, we investigate cross-lingual biases encoded in cross-lingual embedding spaces and compare them to bias effects present of corresponding monolingual embeddings.Our analysis yields some interesting findings: biases do depend on the embedding model and, quite surprisingly, they seem to be less pronounced in embeddings trained on social media texts. Furthermore, we find that the effects (i.e., amount) of bias in cross-lingual embedding spaces can roughly be predicted from the bias effects of the corresponding monolingual embedding spaces. | 0 |
The need for in-domain data in machine learning is a wellestablished problem and should be well motivated in previous papers (e.g [1] ). We briefly observe, however, that across domains system performance is tied to the similarity between training and testing data. The testing data used for guiding system development is almost synonymous with in-domain data. It follows directly that training data should also resemble the in-domain as closely as possible. In-domain data however is also almost always the most limited kind. This necessitates supplementing it with out-of-domain or nondomain-specific data in order to achieve satisfactory model estimates.In this paper we consider the training of language models for speech recognition and machine translation of uni-versity lectures, which are very domain-specific. Typically this means adapting existing systems to a new topic. Perhaps unique to this application is that the in-domain data for lectures is normally of a very small size. A one-hour lecture may produce under a thousand utterances and roughly ten thousand words. The necessity of rapid system development and testing in this context encourages us to limit training data size. What we want, then is a way to reduce large amounts of data and at the same time improve its relevance. Ideally we would also be able to do so using only a very small amount of in-domain data.We improve the work of [2] by drawing a better representative sample of out-of-domain data and language model (LM) vocabulary. However, more centrally, we extend the work of [2] by using a word-association based on a broad definition of similarity to extend these language models. With this extension, we do not compare solely the exact matching words from in-domain and out-of-domain corpora, but also their semantically associated words. These semantic associations can be inferred, as in the example of this paper through the use of pre-existing non-domain-specific parallel and/or monolingual corpora, or through hand-made thesauri. Then with a small amount of in-domain data we use the aforementioned extended language models to rank and select out-ofdomain sentences.The starting point and reference of our work is that found in [2] , which is to our knowledge one of the most recent and popular methods in a series of methods on data selection [3, 4, 5] . Their approach assumes the availability of enough in-domain data to train a reasonable in-domain LM, which is used to compute a cross-entropy score for the outof-domain sentences. The sentence is also scored by another, out-of-domain LM resulting from a similar-sized random out-of-domain sample. If the difference between these two scores exceeds a certain threshold the sentence is retained, the threshold being tuned on a small heldout in-domain set. This approach can be qualified as one based on the perplexity of the out-of-domain data. The in-domain data used in [2] is the EPPS corpus, which contains more than one million sentences. This stands in contrast to the lecture case with very specific domains and very limited data sizes. The authors report their results in terms of perplexity, for which their technique outperforms a baseline selection method by twenty absolute points. Their approach has been shown to be effective for selecting LM training data, at least from the perspective of a Statistical Machine Translation (SMT) system with a specific domain task [6, 7, 8] . We note that the main task of these systems was to translate TED talks. 
1 The work in [2] was extended to parallel data selection by [9, 10] . However, the last work concludes that the approach is less effective in the parallel case.The approach of differential LM scores used in the aforementioned papers has a long history in the information retrieval (IR) domain [11, 12] . However, only unigram language models are considered in the context of IR, since the order in this task is meaningless.Enriching the LM capability by incorporating word relationships has also been proposed in IR and is referred to as a translation model therein [13, 14] . 2 More closely related to our approach, [15] uses word similarities to extend LMs in all orders. They show that extended LMs with properly computed word similarities significantly improve their performance at least in a speech recognition task.The translation of talks and lectures between natural languages has gained attention in recent years, with events such as the International Workshop on Spoken Language Translation (IWSLT) sponsoring evaluations of lecture translation systems for such material as TED talks. From the perspective of Automatic Speech Recognition (ASR), talks and lectures are an interesting domain where the current state of the art can be advanced, as the style of speaking is thought to lie somewhere between spontaneous and read speech.As noted previously, university lectures in particular are very domain-specific and thus in-domain data tends to be quite limited. The typical approach for language modeling in such a scenario is to include as much data as possible, both in-and out-of-domain, and allow weighted interpolation to select the best mixture based on some heldout set. However, if a satisfactory method could be found to choose only those parts of the out-of-domain set most similar to the in-domain set, this would reduce the amount of necessary LM training data. Not only would this save training time, it would also produce LMs that are smaller and possibly more adapted to the task at hand.We perform text selection using variations of our technique and train language models on the resulting selected data. These LMs are then evaluated in terms of their perplexity on a heldout set, the word-error-rate of a speech recogniser, and an SMT system using the LM. We also apply the technique of [2] to our selection task as a reference.The remainder of the paper is structured as follows. In section 2 we describe the theory behind our enhancements to the standard selection algorithm. First, we discuss our method of intelligently selecting the out-of-domain LM used for crossentropy selection. Next, we discuss our experiments with a more careful selection of the cross-entropy in-domain and out-of-domain language model vocabularies. In section 3.1 we introduce our association-based approach. We describe how we compute lexicons and how we use them to extend the cross-entropy language models. The results of our experiments are presented in section 5. We end the paper with section 6 in which we draw conclusions and discuss future work. | 0 |
Fact verification aims to verify whether a fact is entailed or refuted by the given evidence, which has attracted increasing attention. Recent researches mainly focus on the unstructured text as the evidence and ignoring the evidence with the structured or semi-structured format. A recently proposed dataset TABFACT (Wenhu Chen and Wang, 2020) fills this gap, which is designed to deal with the table-based fact verification problem, namely, verifying whether a statement is correct by the given semi-structured table evidence.It is well accepted that symbolic information (such as count and only) plays a great role in understanding semi-structured evidence based statements (Wenhu Chen and Wang, 2020) . However, most existing approaches for fact verification (Thorne et al., 2018; Nie et al., 2019; Zhong et al., 2020b; Soleimani et al., 2020) focus on the understanding of natural language, namely, linguistic reasoning, but fail to consider symbolic information, which plays an important role in complex reasoning (Liang et al., 2017; Dua et al., 2019; . Due to the diversity of natural language expressions, it is difficult to capture symbolic information effectively from natural language directly. Consequently, how to leverage symbolic information effectively becomes a crucial problem. To alleviate this problem, Zhong et al. (2020a) propose a graph module network that concatenates graphenhanced linguistic-level representations and program-guided symbolic-level representations together to predict the labels. However, their method focuses on the representation of symbolic information, rather than take advantage of the combination of both types of information. More specifically, we believe that the concatenation operation between two types of representations is not effective enough to leverage the linguistic information and symbolic information to perform reasoning.In recent studies, graph neural networks show their powerful ability in dealing with semi-structured data (Bogin et al., 2019a; Bogin et al., 2019b) . Under this consideration, we propose to use graph neural networks that learn to combine linguistic information and symbolic information in a simultaneous fashion. Since the representations of different types of information fall in different embedding spaces, Figure 1 : Example of the TABFACT dataset, which are expected to combine both linguistic information in the statement and the table and symbolic information in the programs. Given a table and a statement, the goal is to predict whether the label is ENTAILED or REFUTED. Program is a kind of LISP-like logical form. The program synthesis and selection process are described in Section 3.2. a heterogeneous graph structure is suitable to reason and aggregate over different types of nodes to combine different types of information.In this paper, we propose a heterogeneous graph-based neural network for table-based fact verification named HeterTFV, to learn to combine linguistic information and symbolic information. Given a statement and a table, we first generate programs with the latent program algorithm (LPA) algorithm proposed by Wenhu Chen and Wang (2020) . After that, we construct a program graph to capture the inner structure in the program and use gated graph neural network to encode the programs to learn the semantic compositionality. 
Then a heterogeneous graph is constructed with statement nodes, table nodes, and program nodes to incorporate both linguistic information and symbolic information, which is expected to exploit the structure in the table and build connections among the statement, table, and programs. Finally, a graph-based neural network is proposed to reason over the constructed heterogeneous graph, which enables the message passing processes of different types of nodes to achieve the purpose to combine linguistic information and symbolic information.We conduct experiments on the TABFACT (Wenhu Chen and Wang, 2020), a large-scale benchmark dataset for table-based fact verification. Experimental results show that our model outperforms all baselines and achieves state-of-the-art performance.In summary, the main contributions of this paper are three-fold:• We construct a heterogeneous graph by introducing program nodes, to incorporate both linguistic information and symbolic information.• We propose a graph-based approach to reason over the constructed heterogeneous graph to perform different types of message passing processes, which makes an effective combination of linguistic information and symbolic information.• Experimental results on the TABFACT dataset illustrate the advantage of our proposed heterogeneous graph-based approach: our model outperforms all the baseline systems and achieves a new state-of-the-art performance. | 0 |
Dataset for Classification of Sections of Academic Papers
A dataset mapping sections of academic papers to one of the following section types (a minimal loading snippet follows the list):
- 0: Introduction
- 1: Background
- 2: Methodology
- 3: Experiments and Results
- 4: Conclusion
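The label-to-section mapping above can be written out directly for downstream use. The snippet below is a minimal sketch: the repository identifier is a placeholder, and the split name and the `text`/`label` column names are assumptions inferred from the preview rows rather than confirmed by the card.

```python
from datasets import load_dataset

# Integer label -> section type, as listed above.
ID2LABEL = {
    0: "Introduction",
    1: "Background",
    2: "Methodology",
    3: "Experiments and Results",
    4: "Conclusion",
}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}

# Placeholder repository ID -- replace with the dataset's actual Hub identifier.
dataset = load_dataset("your-username/paper-section-classification")

# Assumed split and column names ("train", "text", "label").
example = dataset["train"][0]
print(example["text"][:80], "->", ID2LABEL[example["label"]])
```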
The dataset was collected by taking the GROBID parses of academic papers in the ACL-OCL dataset and matching each section heading against the synonyms of each section type; sections whose headings matched no synonym were discarded. The following synonyms are used (a sketch of the matching step follows the table):
| Section Type | Synonyms |
|---|---|
| Introduction | Introduction |
| Background | Background, Related Work, Historical Review |
| Methodology | Methodology, Method, Algorithm, Properties |
| Experiments and Results | Experiments, Results, Experimental Design, Empirical Evaluation, Experiments and Analysis, Ablation Studies, Evaluation |
| Conclusion | Conclusion, Conclusion & Discussion, Discussion and Conclusions, Conclusion and Outlook, Further Work, Discussions and Future Directions |