
Comparison of TnT, Max.Ent, CRF Taggers for Urdu Language

M. HUMERA KHANAM 1, K.V. MADHUMURTHY 2, MD. A. KHUDHUS 3
1 Department of Computer Science and Engineering, S.V. University College of Engineering, S.V. University, Tirupati, Andhra Pradesh, India. 2 Department of Computer Science and Engineering, S.V. University College of Engineering, S.V. University, Tirupati, Andhra Pradesh, India. 3 J.E., BSNL, Tirupati, Andhra Pradesh, India.
humera_svce@yahoo.co.in, kvmurthy@gmail.com, mkhudhus@yahoo.co.in

ABSTRACT
The development of statistical taggers for the Urdu language is an important milestone toward Urdu language processing. In this paper we examine efficient methods from computational linguistics and report experiments with some of the most widely used POS tagging approaches on Urdu. Part-of-Speech (POS) tagging is the process of attaching to each word in a sentence a suitable tag from a given tagset. Three state-of-the-art probabilistic taggers, the TnT tagger, the Maximum Entropy tagger and the CRF (Conditional Random Field) tagger, are applied to the Urdu language. A training corpus of 100,000 tokens is used to train the models. We compare all three taggers on the same training data and conclude that the CRF tagger achieves the best accuracy.

Keywords: Urdu language, statistical POS taggers, corpus, tagset.

I. INTRODUCTION
Part-of-Speech (POS) tagging is the process of assigning a part of speech or lexical class marker to each word in a corpus. Tags are also usually applied to punctuation markers; thus tagging for natural language is the same process as tokenization for computer languages, although tags for natural languages are much more ambiguous [6]. POS tagging plays a fundamental role in various Natural Language Processing (NLP) applications such as speech recognition, information extraction, machine translation and word sense disambiguation.
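As a toy illustration of the task, a unigram lookup tagger can be sketched in a few lines of Python. The mini lexicon and its tags are invented for this example and are not the tagset used in this paper:

```python
# Toy illustration of POS tagging: attach a tag to each token in a
# sentence. The lexicon below is a hypothetical stand-in, not the
# 42-tag Urdu tagset described later in this paper.
LEXICON = {
    "birds": "NN",   # noun
    "sat": "VB",     # verb
    "on": "P",       # postposition / semantic marker
    "the": "PD",     # demonstrative
    "tree": "NN",
}

def tag_sentence(tokens, lexicon, unknown_tag="UNK"):
    """Attach to each token a tag from the lexicon (unigram lookup)."""
    return [(tok, lexicon.get(tok, unknown_tag)) for tok in tokens]

print(tag_sentence("birds sat on the tree".split(), LEXICON))
```

A real tagger must also disambiguate words that admit several tags in context; the statistical approaches compared in this paper do exactly that.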
POS tagging plays a particularly important role in free word order languages, because such languages have a relatively more complex morphological sentence structure than other languages. Indic languages, and Urdu among them, are good examples. Although POS tagging for Indic languages has gained increased interest over the past few years, the lack of annotated corpus resources obstructs research and investigation, beside other disambiguation problems. Standardization is another problem, because so far no standard tagsets are available for these languages. While this is the situation for Indic languages in general, Urdu faces even more issues, as it is far less studied and researched.

1.1 Urdu Language
Urdu belongs to the Indo-Aryan language family. It is the national language of Pakistan and one of the official languages of India. The majority of Urdu speakers are spread over South Asia, South Africa and the United Kingdom. Urdu is a free word order language with a general word order of SOV (subject-object-verb). It shares its phonological, morphological and syntactic structures with Hindi. Urdu is written in Perso-Arabic script and inherits most of its vocabulary from Arabic and Persian. Urdu is a morphologically rich language: verb forms, as well as case, gender and number, are expressed by the morphology.

1.1.1 Word order
Urdu has free word order compared to languages such as English and other European languages. Table 1 demonstrates this free word order characteristic of Urdu.
Table 1: Word order and semantic meaningfulness in the Urdu language

Sentence in Urdu | Correctness | Sentence in English | Correctness
چڈیے پیڈ کے اوپر بیٹےھے | True | Birds tree on the sat | False
پیڈ کے اوہر چڈیے بیٹے ھے | True | Tree the on birds sat | False
اوپر پیڈ کے چڈیے بیٹے ھے | True | On tree the birds sat | False
چڈیے ھے پیڈ اوپر پیڈ کے | True | Birds sat on the tree | True
پیڈ کے اوپر چڈیے ھے بیٹے | True | Tree the on birds sat | False
بیٹے ھے چیٹے پیڈ کے اوپر | True | Sat birds tree the on | False

2. URDU TAGSET
With respect to the tagset, the main feature that concerns us is its granularity, which is directly related to the size of the tagset. If the tagset is too coarse, tagging accuracy will be much higher, since only the important distinctions are considered, and the classification may be easier both for human annotators and for the

machine. But some important information may be missed due to the coarse-grained tagset. On the other hand, a too fine-grained tagset may enrich the supplied information, but the performance of the automatic POS tagger may decrease: a much richer model is required to capture the encoded information, and hence the model is more difficult to learn. Even with a very fine-grained tagset, some fine distinctions in POS tagging cannot be captured by looking only at purely syntactic or contextual information; sometimes pragmatic-level information is needed. Some studies have already been done on the size of the tagset and its influence on tagging accuracy.

There are various questions that need to be answered during the design of a tagset. The granularity of the tagset is the first problem in this regard. A tagset may consist of general parts of speech only, or it may include additional morpho-syntactic categories such as number, gender and case. In order to facilitate tagger training and to reduce lexical and syntactic ambiguity, we decided to concentrate on the syntactic categories of the language. Purely syntactic categories lead to a smaller number of tags, which also improves the accuracy of manual tagging. A further complexity is the word segmentation issue of the language: suffixes in Urdu are written with an orthographic space, and since words are separated on the basis of spaces, suffixes are treated the same as lexical words. Hence it is hard for an automatic tagger to assign the accurate tag. Although the tagset is designed with these details in mind, the larger number of tags makes it hard to achieve high accuracy with a small corpus.

Urdu is influenced by Arabic and can be considered as having three main parts of speech, namely noun, verb and particle. However, some grammarians have proposed ten main parts of speech for Urdu. The work of Urdu grammar writers provides a full overview of all the features of the language.
However, from the perspective of a tagset, their analysis lacks computational grounding: semantic, morphological and syntactic categories are mixed in their division of parts of speech. For example, Haq (1987) divides common nouns into situational (smile, sadness, darkness), locative (park, office, morning, evening), instrumental (knife, sword) and collective nouns (army, data). In 2003, Hardie proposed the first computational part-of-speech tagset for Urdu. It is a morpho-syntactic tagset based on the EAGLES guidelines and contains 350 different tags with information about number, gender, case, etc. The EAGLES guidelines are based on three levels: major word classes, recommended attributes and optional attributes. Major word classes include thirteen tags: noun, verb, adjective, pronoun/determiner, article, adverb, adposition, conjunction, numeral, interjection, unassigned, residual and punctuation. The recommended attributes include number, gender, case, finiteness, voice, etc.

The tagset used in the experiments reported in this paper contains 42 tags, including three special tags. Nouns are divided into noun (NN) and proper name (PN). Demonstratives are divided into personal (PD), KAF (KD), adverbial (AD) and relative (RD) demonstratives. All four categories of demonstratives are ambiguous with four categories of pronouns. Pronouns are divided into six types: personal (PP), reflexive (RP), relative (REP), adverbial (AP), KAF (KP) and adverbial KAF (AKP) pronouns. Based on phrase-level differences, genitive reflexive (GR) and genitive (G) are kept separate from pronouns. The verb phrase is divided into verb, aspectual auxiliaries and tense auxiliaries. Numerals are divided into cardinal (CA), ordinal (OR), fractional (FR) and multiplicative (MUL). Conjunctions are divided into coordinating (CC) and subordinating (SC) conjunctions. All semantic markers except /se/ are kept in one category.
Adjective (ADJ), adverb (ADV), quantifier (Q), measuring unit (U), intensifier (I), interjection (INT), negation (NEG) and question words (QW) are handled as separate categories. Adjectival particle (A), KER (KER), SE (SE) and WALA (WALA) are ambiguous entities which are annotated with separate tags.

When a tagset is used for the POS disambiguation task, several issues need to be considered. These include the type of application (some applications may require more complex information, whereas only category information may be sufficient for other tasks) and the tagging technique to be used (statistical or rule based, which can adopt large tagsets very well, and supervised or unsupervised learning). Further, a large amount of annotated corpus is usually required for statistical POS taggers, and a too fine-grained tagset might be difficult for human annotators to use during the development of a large annotated corpus. Hence, the availability of resources needs to be considered when choosing a tagset.

3. TAGGING METHODOLOGIES
3.1 Rule based
Work on automatic part-of-speech tagging started in the early 1960s. The rule-based POS tagger of Klein and Simmons (1963) [5] can be considered the first automatic tagging system. The earliest algorithms for automatically assigning parts of speech were based on a two-stage architecture (Klein and Simmons, 1963; Green and Rubin, 1971; Hindle, 1989; Chanod and Tapanainen, 1994). The first stage used a dictionary to assign each word a list of potential parts of speech. The second stage used a large list of hand-written disambiguation rules to winnow this list down to a single part of speech for each word. Rule-based approaches have the disadvantage of higher time and space complexity.
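The two-stage architecture described above can be sketched as follows; the dictionary entries and the single hand-written rule are invented for illustration:

```python
# Sketch of the classic two-stage rule-based architecture:
# stage 1 assigns each word a list of candidate tags from a dictionary,
# stage 2 applies hand-written rules to winnow the list down to one tag.
# The dictionary and the rule below are hypothetical examples.
DICT = {
    "the": ["DET"],
    "can": ["MD", "NN", "VB"],
    "fish": ["NN", "VB"],
}

def winnow(word, candidates, prev_tag):
    # Hand-written rule: after a determiner, prefer a noun reading.
    if prev_tag == "DET" and "NN" in candidates:
        return "NN"
    return candidates[0]  # otherwise fall back to the first listing

def rule_based_tag(tokens):
    tags, prev = [], None
    for tok in tokens:
        candidates = DICT.get(tok, ["UNK"])   # stage 1: dictionary lookup
        tag = winnow(tok, candidates, prev)   # stage 2: rule-based winnowing
        tags.append((tok, tag))
        prev = tag
    return tags

print(rule_based_tag(["the", "can"]))
```

Real systems of this kind used hundreds of such rules; the cost of writing, maintaining and applying them is the time and space disadvantage noted above.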

3.2 Stochastic Part-of-Speech Tagging
The use of probability in tagging is quite old: probabilities in tagging were first used by Stolz et al. (1965), a complete probabilistic tagger with Viterbi decoding was sketched by Bahl and Mercer (1976), and various stochastic taggers were built in the 1980s (Marshall, 1983; Garside, 1987; Church, 1988; DeRose, 1988). The next section describes a particular stochastic tagging algorithm generally known as the Hidden Markov Model (HMM) tagger. The intuition behind all stochastic taggers is a simple generalization of the idea "pick the most likely tag for this word".

3.2.1 TnT tagger
The TnT tagger [2] is used in the experiments as a standard HMM tagger. TnT is a trigram HMM tagger in which the transition probability depends on the two preceding tags. The performance of the tagger was tested on the NEGRA corpus and the Penn Treebank, where its average accuracy is 94% to 95% [2]. The second-order Markov model used by the TnT tagger requires large amounts of annotated corpus to obtain reasonable frequencies of POS trigrams; TnT smooths the probabilities with linear interpolation to handle the problem of data sparseness. The tags of unknown words are predicted based on the word suffix: the longest ending string of an unknown word that has one or more occurrences in the training corpus is considered as a suffix, and the tag probabilities of a suffix are estimated from all the words in the training corpus (Brants, 2000).

3.2.2 Maximum Entropy
MaxEnt [4] stands for the Maximum Entropy model. A Maximum Entropy model is relatively easy to train, and a toolkit [MaxEnt] for Maximum Entropy modeling, consisting of both C++ and Python modules, is freely available on the net [4]. Moreover, there is a separate tagset- and language-independent toolkit in Python (MaxEnt) for building a POS tagger. MaxEnt was used directly to build the POS tagger for Urdu.
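As a rough sketch of how a Maximum Entropy model assigns a tag, the conditional probability of a tag given its context is a normalized exponential over the weights of the active (feature, tag) pairs. The feature names and weights below are invented for illustration, not values learned from the Urdu corpus:

```python
import math

# Sketch of Maximum Entropy tag scoring: each (feature, tag) pair has a
# learned weight, and P(tag | context) is exp(sum of active weights)
# divided by a normalization constant Z. Weights here are hypothetical.
WEIGHTS = {
    ("word=sat", "VB"): 2.0,
    ("word=sat", "NN"): 0.3,
    ("prev_tag=NN", "VB"): 1.1,
    ("prev_tag=NN", "NN"): -0.5,
}
TAGS = ["NN", "VB"]

def p_tag(active_features, tag):
    """P(tag | context) under the MaxEnt model: exp(sum of weights) / Z."""
    def score(t):
        return math.exp(sum(WEIGHTS.get((f, t), 0.0) for f in active_features))
    z = sum(score(t) for t in TAGS)  # normalization over all tags
    return score(tag) / z

context = ["word=sat", "prev_tag=NN"]
print(round(p_tag(context, "VB"), 3))  # → 0.964
```

Training consists of estimating the weights from the annotated corpus; at tagging time the model picks the tag with the highest conditional probability.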
The Maximum Entropy tagger was tested for Urdu, and its average performance was found to be 89.58%, which is comparatively low when compared to European languages.

3.2.3 Conditional Random Fields
One of the most common methods for performing the POS sequence labeling task is to employ Hidden Markov Models (HMMs) to identify the most likely POS tag sequence for the words in a given sentence. HMMs are generative models, which maximize the joint probability distribution p(X, Y), where X and Y are random variables representing, respectively, the observation sequence (i.e. the word sequence of a sentence) and the corresponding label sequence (i.e. the POS tag sequence for the words of a sentence). Due to the joint probability distribution of generative models, the observation at any given instant of time may directly depend only on the state or label at that time. This assumption may work for a simple data set; however, for the POS labeling task, the observation sequence may depend on multiple interacting features and long-distance dependencies. One way to satisfy these criteria is to use a model that defines a conditional probability p(y|x) over label sequences given a particular observation sequence x, rather than a joint probability distribution over both label and observation sequences. Conditional models label an unknown observation sequence by selecting the label sequence that maximizes the conditional probability. Conditional Random Fields (CRFs) [13] are a probabilistic framework for labeling sequential data based on the conditional approach described above. A CRF is an undirected graphical model that defines a single exponential model over label sequences given the particular observation sequence. The primary advantage of CRFs over HMMs is their conditional nature, which relaxes the independence assumptions required by HMMs.
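To make the contrast with HMMs concrete, a conditional model (MaxEnt or CRF) can condition on arbitrary, overlapping features of the whole observation sequence. A hypothetical feature template (an illustrative choice, not the one used in this paper) might look like:

```python
# Sketch of the overlapping, interacting features a conditional model
# can use, which a generative HMM cannot exploit without violating its
# independence assumptions. The template is an illustrative choice.
def features(tokens, i, prev_tag):
    w = tokens[i]
    return {
        "word": w,
        "suffix2": w[-2:],      # word-ending features help with unknown words
        "prev_tag": prev_tag,   # first-order dependency on the previous label
        "is_first": i == 0,
        "next_word": tokens[i + 1] if i + 1 < len(tokens) else "<END>",
    }

print(features(["birds", "sat"], 0, "<START>"))
```

Note that "next_word" looks ahead in the observation sequence: exactly the kind of contextual evidence that motivates the conditional approach over a generative HMM.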
CRFs also avoid the label bias problem [3] of the Maximum Entropy model and of other directed graphical models. Thus CRFs outperform HMM and ME models on a number of sequence labeling tasks [13][14][5].

3.3 Corpora
An Urdu corpus of approximately 100,000 tokens was taken from a news corpus (www.jang.com.pk). Our test corpus consisted of 1,000 sentences and 20,000 tokens. A part of the training set was also used as held-out data to optimize the parameters of the taggers.

4. EXPERIMENTS
In the filtering phase, diacritics were removed from the text and normalization was applied to keep the Unicode encoding of the characters consistent. The problems of space insertion and space deletion were resolved manually, and space is defined as the word boundary. All the data provided for the Urdu language uses the SSF format, which is generally used to support different kinds of linguistic analysis, such as chunking and tagging, at different levels on the same data. But as we worked solely on POS tagging in the current study, we converted all the data from

the SSF format to the much simpler format used by the Brown corpus, included in NLTK [11], for our convenience. The data was randomly divided into two parts, an 80% training corpus and a 20% test corpus. The statistics of the training and test corpora are shown in Tables 2 and 3.

Table 2: Statistics of training and test data

                  Training corpus   Test corpus
Tokens            80,000            20,000
Types             2,500             3,931
Unknown tokens    --                354
Unknown types     --                209

Table 3: Eight most frequent tags in the test corpus

Tag    Total   Unknown
NN     2500    328
P      1316    10
VB     811     81
ADJ    510     68
PN     406     80
AA     349     0
TA     285     0
ADV    138     32

Table 4 and Figure 1 show the accuracies of all the taggers for Urdu. The baseline result, where each word is annotated with its most frequent tag irrespective of the context, is 94%.

Table 4: Accuracies of all the taggers for Urdu

Tagger   Accuracy
TnT      93.56%
MaxEnt   91.58%
CRF      94.13%

Fig. 1: Comparison of TnT, Max.Ent and CRF taggers (bar chart of the accuracies in Table 4)

5. ANALYSIS OF RESULTS
We observed from the results of a previous study that an HMM-based tagger performs better than n-gram-based taggers starting from a very small corpus for English, using the Brown corpus provided in NLTK [11]. The difference in performance also continues to grow as the corpus size increases. In the present work, we used a corpus with over 100,000 annotated tokens for Urdu. Under these conditions, the CRF tagger achieves an accuracy of 94.13% for Urdu, while the TnT tagger obtains 93.56%. The experiments therefore confirm that the CRF tagger is a better choice for tagging the Urdu language with small to medium sized corpora.

6. FUTURE WORK
Several proposed modifications to baseline POS taggers suggest the use of techniques such as pre-tagging problematic phrases using Finite State Transducers (FSTs) to speed up the operation of the tagger. We would like to

incorporate these in our tagging models. We also need to develop more manually annotated data, and an efficient tagset for Urdu, to obtain accurate results with the taggers.

CONCLUSION
In this paper, probabilistic part-of-speech tagging technologies are tested on the Urdu language. The main goal of this work is to investigate whether general disambiguation techniques and standard POS taggers can be used for tagging the Urdu language. The results of the taggers clearly answer this question positively. With the small training corpus, all the taggers showed accuracies around 93%, and the CRF tagger showed the best accuracy, 94.13%. We also proposed a reason behind the better performance of the CRF.

REFERENCES
[1] Yair Halevi. Part of Speech Tagging. Seminar in Natural Language Processing and Computational Linguistics (Prof. Nachum Dershowitz), School of Computer Science, Tel Aviv University, April 2006.
[2] Brants, Thorsten. 2000. TnT: a statistical part-of-speech tagger. In Proceedings of the Sixth Applied Natural Language Processing Conference (ANLP-2000), Seattle, WA.
[3] Brill, E. 1992. A simple rule-based part of speech tagger. Department of Computer Science, University of Pennsylvania.
[4] Ratnaparkhi, Adwait. A Maximum Entropy Model for Part-Of-Speech Tagging. University of Pennsylvania, Dept. of Computer and Information Science. Eric Brill, A Simple Rule-Based Part-of-Speech Tagger, In Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy, 1992, pp. 152-155.
[5] Bahl, L. R. and Mercer, R. L. 1976. Part of speech assignment by a statistical decision algorithm. IEEE International Symposium on Information Theory, pp. 88-89.
[6] Chanod, Jean-Pierre and Tapanainen, Pasi. 1994. Statistical and constraint-based taggers for French. Technical report MLTT-016, RXRC Grenoble.
[7] Green, B. and Rubin, G. 1971. Automated grammatical tagging of English. Department of Linguistics, Brown University.
[8] A. M. Derouault and B. Merialdo. Natural Language Modeling for Phoneme-to-Text Transcription. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986. Ratnaparkhi, A. 1998. Maximum Entropy Models for Natural Language Ambiguity Resolution. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA, USA.
[9] Steven Bird and Edward Loper. Natural Language Toolkit. http://nltk.sourceforge.net/, 2006.
[10] Manoj Kumar C. Stochastic Models for POS Tagging. IIT Bombay, 2005.
[11] Lafferty, J., McCallum, A. and Pereira, F. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eighteenth International Conference on Machine Learning, pp. 282-289.
[12] Pinto, D., McCallum, A., Wei, X. and Croft, W. B. 2003. Table extraction using conditional random fields. In Proceedings of ACM SIGIR 2003.
[13] Sha, F. and Pereira, F. 2003. Shallow parsing with conditional random fields. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pp. 134-141.