Chinese Radicals in NLP Tasks

Alex Fandrianto (afandria@stanford.edu), Anand Natarajan (anandn@stanford.edu), Hanzhi Zhu (hanzhiz@stanford.edu)

December 7, 2012

1 Introduction

The Chinese writing system uses a set of tens of thousands of characters, each of which represents a syllable, and usually a morpheme, in the Chinese language. (In Chinese, almost all morphemes are monosyllabic.) Although the number of distinct characters is very large, the characters have internal structure: all of them are built from a few hundred common graphic components called radicals. Moreover, the vast majority of Chinese characters are so-called phonosemantic compounds: they consist of two parts, a semantic part, which is a single radical that indicates something about the meaning of the character, and a phonetic part, which gives a hint as to its pronunciation. For example, the character 妈 (mā), meaning "mother", is composed of the semantic component 女, meaning "woman", and the phonetic component 马, pronounced mǎ. Chinese lexicographers have developed standardized sets of radicals, which are used to organize characters in dictionaries as well as in the Unihan database that is part of Unicode. Normally this internal structure is ignored when processing Chinese. However, we believe that radicals carry useful information for many NLP tasks. In this project, we investigated how using radicals affected performance on three tasks: language modeling, part-of-speech tagging, and word segmentation.

2 Language Modeling

2.1 Theory

Most modern language models are n-gram models, i.e. they assume that the probability of a word depends only on the previous (n-1) words. These probabilities are estimated by counting the occurrences of all n-grams in a corpus and then smoothing the counts to handle unobserved n-grams. Usually, smoothing involves backing off to probabilities from k-gram models for all k < n; we back off by dropping words from the conditioning context starting with the least recent word. Since Chinese text is usually not word-segmented, it is reasonable to treat each character as a separate word for the purposes of language modeling, which is what we did in our experiments.

One can generalize n-gram models to include other features of the previous words as follows: if f_i is a feature of the i-th word w_i, then we replace the probability p(w_0 | w_{-1}, w_{-2}, ..., w_{-n+1}) with p(w_0 | w_{-1}, f_{-1}, ..., w_{-n+1}, f_{-n+1}) * p(f_{-1}, ..., f_{-n+1} | w_{-1}, ..., w_{-n+1}). If the feature is a deterministic function of the word, e.g. the semantic radical, then the last term in the product can be ignored, since it is identically 1. The remaining term is a conditional probability that can be estimated by counting and smoothing, as with an ordinary n-gram model.

Since the feature we added is deterministic, the model as defined above should give results identical to an n-gram model. However, the features can make a difference during interpolation or backoff, when a word w_i has not been previously observed but its feature value f_i has been. Thus, having a good backoff model is expected to be important for getting good performance from the added features.
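To make the factored-model idea concrete, here is a minimal Python sketch of a character bigram model interpolated with a radical-conditioned distribution. It is illustrative only: the actual experiments used SRILM's factored language models with proper Kneser-Ney or Witten-Bell smoothing, and the small RADICAL table, the radical_of helper, and the interpolation weights below are assumptions made for the example.

```python
from collections import Counter

# Hypothetical character-to-semantic-radical table; the project obtained
# this mapping from the Unihan database.
RADICAL = {"妈": "女", "吗": "口", "骂": "马", "好": "女", "马": "马"}

def radical_of(ch):
    # Unknown characters fall back to themselves.
    return RADICAL.get(ch, ch)

def train(corpus):
    """Count character bigrams, (radical-of-previous, char) pairs, and unigrams."""
    bi, bi_ctx = Counter(), Counter()
    rad_bi, rad_ctx = Counter(), Counter()
    uni = Counter()
    for sent in corpus:
        for prev, cur in zip(sent, sent[1:]):
            bi[(prev, cur)] += 1
            bi_ctx[prev] += 1
            r = radical_of(prev)
            rad_bi[(r, cur)] += 1
            rad_ctx[r] += 1
            uni[cur] += 1
    return bi, bi_ctx, rad_bi, rad_ctx, uni

def prob(cur, prev, model, lambdas=(0.6, 0.3, 0.1)):
    """Interpolate P(cur|prev), P(cur|radical(prev)), and P(cur).

    The radical term is what lets the model generalize when the previous
    character itself was rare or unseen as a context but its radical was not.
    """
    bi, bi_ctx, rad_bi, rad_ctx, uni = model
    r = radical_of(prev)
    p_word = bi[(prev, cur)] / bi_ctx[prev] if bi_ctx[prev] else 0.0
    p_rad = rad_bi[(r, cur)] / rad_ctx[r] if rad_ctx[r] else 0.0
    p_uni = uni[cur] / sum(uni.values()) if uni else 0.0
    l1, l2, l3 = lambdas
    return l1 * p_word + l2 * p_rad + l3 * p_uni

# Example (toy data): prob("妈", "好", train(["妈妈好", "骂马"])) is nonzero even
# though 好 never appeared as a context character, because its radical 女 did.
```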

Moreover, unlike the case of n-grams, it is no longer obvious in what order to drop conditioning factors when backing off. In general one could have a backoff graph, where each node is a probability model that backs off to some function (e.g. max, min, mean) of the nodes below it. The space of possible backoff graphs grows with the factorial of the number of conditioning variables, so it can be very large even for a trigram model with one additional feature. Searching this space effectively was a major challenge in this study.

2.2 Experiments

In our case, we used the SRILM factored language model package [1], which supports a generalization of n-gram language models that includes models of the type described above. We used simple word-based bigram and trigram models as baselines. To these, we added the semantic radical, as determined from the Unihan database, as a feature. For comparison, we also tried models that used the part-of-speech tag of the word containing the given character instead of the semantic radical. For bigram models, we explored the backoff space by hand, but for trigram models this proved infeasible, so we used a genetic-algorithm-based tool provided with the package. We experimented with both Kneser-Ney and Witten-Bell smoothing for backoff graph nodes. The dataset was taken from the Chinese Treebank, with the training set consisting of around 18,000 sentences and the dev and test sets consisting of around 350 sentences each. Performance was measured by the perplexity of the language model.

2.3 Results and Discussion

We consistently observed a decrease in performance (i.e. increased perplexity) with radicals compared to the baseline, in contrast to a significant increase in performance with part-of-speech tags. This held true for both the bigram and trigram models, and over many runs of the genetic algorithm with different random seeds. For instance, in one configuration, a bigram model achieved a perplexity of 166.3 on the dev set and bigrams with part-of-speech tags achieved a perplexity of 116.8, whereas bigrams with radicals had a perplexity of 193. Differences of similar magnitude were obtained for many language model configurations. Such a robust trend indicates that radicals are likely not very useful features for language modeling. Another way of stating this is that the current character is not strongly correlated with the radicals of the preceding characters. This conclusion is consistent with the results of our part-of-speech tagging experiments (see section 3), where we found that the radicals of previous words are not a helpful feature, although the radicals of the current word are.

To further investigate this claim, we performed the following very simple experiment: we built a language model that predicts the radical of the current character given the radical of the previous character, and compared its perplexity to that of a model with uniform probabilities. The uniform model had a perplexity of 100.2, while the radical model had a perplexity of 80.84, only a modest improvement. This suggests that the radicals of successive characters are indeed not very strongly correlated, which implies that the previous radical is not helpful in predicting the next character.
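The following sketch shows roughly how such a comparison can be made, assuming the same radical_of lookup as before and simple add-alpha smoothing; the perplexities reported above came from the SRILM models, not from this code.

```python
import math
from collections import Counter

def radical_bigram_vs_uniform(train_sents, test_sents, radical_of, alpha=1.0):
    """Perplexity of P(radical_i | radical_{i-1}) with add-alpha smoothing,
    compared against a uniform distribution over the observed radicals."""
    bigram, context, inventory = Counter(), Counter(), set()
    for sent in train_sents:
        rads = [radical_of(c) for c in sent]
        inventory.update(rads)
        for prev, cur in zip(rads, rads[1:]):
            bigram[(prev, cur)] += 1
            context[prev] += 1
    v = len(inventory)

    log_model, log_uniform, n = 0.0, 0.0, 0
    for sent in test_sents:
        rads = [radical_of(c) for c in sent]
        for prev, cur in zip(rads, rads[1:]):
            p = (bigram[(prev, cur)] + alpha) / (context[prev] + alpha * v)
            log_model += math.log2(p)
            log_uniform += math.log2(1.0 / v)
            n += 1
    # Returns (radical-bigram perplexity, uniform-baseline perplexity).
    return 2 ** (-log_model / n), 2 ** (-log_uniform / n)
```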
We believe that performance decreases when radicals are added because, during backoff, probability mass must be removed from the full model and given to the models with dropped conditioning factors. This decreases the probability the model assigns to sentences without unknown words, and thus increases the perplexity on those sentences. Since the vast majority of words (i.e. characters) in our dev set had appeared in our training set, this resulted in the overall perplexities increasing.

3 Part-of-speech Tagging

As mentioned in the introduction, one of the radicals in a character usually carries semantic information about the character. From examining a few examples, it becomes clear that certain semantic radicals occur more often in words with a certain part of speech. For instance, the radical 扌 (a reduced form of the character 手, meaning "hand") has a connotation of applying force to something and occurs almost exclusively in verbs. We thus have strong reason to believe that radicals should be a good feature for part-of-speech tagging tasks.
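This intuition can be checked directly on a POS-tagged corpus. The helper below is an illustrative sketch (not part of our pipeline), assuming sentences given as (word, tag) pairs and a radical_of lookup; it tallies, for each radical of a word's first character, the distribution of POS tags that words carrying that radical receive.

```python
from collections import Counter, defaultdict

def radical_pos_profile(tagged_sents, radical_of):
    """Map each radical to a Counter over the POS tags of words whose first
    character carries that radical."""
    profile = defaultdict(Counter)
    for sent in tagged_sents:          # sent is a list of (word, pos_tag) pairs
        for word, tag in sent:
            profile[radical_of(word[0])][tag] += 1
    return profile

# Hypothetical usage with an already-loaded, tagged treebank:
#   profile = radical_pos_profile(ctb_sents, radical_of)
#   print(profile["扌"].most_common(3))   # expected to be dominated by verb tags
```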

The Stanford POS tagger, a maxent classifier built on top of Stanford's CoreNLP library, was used to investigate adding radical features to POS tagging [5]. Features are specified in configuration files for training, and Chinese radicals were added as part of the specified FeatureExtractors through the WordShapeClassifier and RadicalMap. There were several ways of selecting the radical features, since each Chinese word may be composed of multiple characters: while the number of characters varies, only a fixed number of features per word can be extracted. radicalfirst takes the radical of the first character and returns it as the feature. radicalconcatenated takes the radical of every character and concatenates them. After obtaining preliminary results with these methods, radical3 and radicallast were implemented. The former computes three features, the first, second, and third radicals, returning the empty string if there is no corresponding character in the word. The latter returns the radical of the last character in the word.

Once these specifications were complete, various combinations of features were used to train the POS tagger. The baselines compared were unigram, simple, and normal: unigram consists only of the current word being tagged, simple is a trigram model, and normal is the default nodistsim model, which consists of many features. In general, the addition of radicals improves tagging accuracy, especially for unknown words, and the improvements are less visible when the feature set is more complex. Table 1 shows the accuracy data for the various models when run on dev and test. Figures 1 and 2 illustrate model performance on known words and unknown words.

Table 1: Tagging accuracy (%) on the dev and test sets, overall and on unknown words.

Model            dev        test       dev unknown  test unknown
unigram          63.451107  62.35015    0            0
u+radicalfirst   74.094708  76.735764  21.338156    34.172662
u+radicals       79.313884  79.420579  11.392405     7.194245
u+radical3       83.638763  82.854645  56.238698    50.719424
u+radicallast    72.819235  75.774226  25.135624    38.848921
simple           92.420466  94.218282  54.611212    71.582734
s+radicalfirst   93.329424  94.343157  60.940325    71.582734
s+radicals       93.2268    94.368132  59.674503    71.223022
s+radical3       94.69286   94.68032   75.406872    79.496403
normal           95.748424  95.37962   84.629295    85.971223
n+radicalfirst   95.865709  95.442058  85.714286    88.129496
n+radicals       95.689782  95.454545  84.448463    87.05036
n+radical3       95.89503   95.404595  85.714286    84.532374
radical3         83.638763  82.742258  56.962025    51.079137
radical3bigram   86.761472  86.426074  57.685353    67.625899
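For illustration, the per-word extractors described above can be sketched as a plain Python function; the actual implementation lives in the tagger's Java feature extractors (WordShapeClassifier and RadicalMap), and radical_of is again an assumed character-to-radical lookup.

```python
def radical_features(word, radical_of, n=3):
    """Sketch of the radicalfirst, radicallast, radicals (concatenated), and
    radical3 features for a single Chinese word."""
    rads = [radical_of(c) for c in word]
    feats = {
        "radicalfirst": rads[0],
        "radicallast": rads[-1],
        "radicals": "".join(rads),
    }
    padded = rads[:n] + [""] * (n - min(len(rads), n))   # radical3 pads with ""
    for i, r in enumerate(padded):
        feats[f"radical3_{i}"] = r
    return feats

# e.g. (assuming radical_of maps 打 -> 扌 and 破 -> 石):
#   radical_features("打破", radical_of)
#   -> {"radicalfirst": "扌", "radicallast": "石", "radicals": "扌石",
#       "radical3_0": "扌", "radical3_1": "石", "radical3_2": ""}
```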

Figure 1: POS-Tagging Accuracy on Known Words. The addition of radical information significantly improves tagging performance on known words. More mileage is gained when the original feature set is small, as in the unigram model, while improvements are less visible when the feature set is large, as in the normal model. radical3 gives the biggest performance gain and, taken alone, performs comparably to unigram+radical3.

Figure 2: POS-Tagging Accuracy on Unknown Words. The addition of radical information greatly improves tagging performance on unknown words. Although the word itself is unknown, its radicals have typically been seen during training. The hefty gains in performance imply that these radicals do in fact correlate with POS.

The data demonstrate that radical3, taking the first three radicals as separate features, provides the biggest improvement to the POS-tagging models. When broken down into radicalfirst and radicallast, the performance gains are not as large, implying that radicalfirst and radicallast do not solely determine POS; it seems that every radical in a Chinese word can potentially affect the final POS tag. The feature radicals, the concatenation of the primary radicals of the characters in the word, does not generalize to unknown words very well. On the plus side, an exact match of radical ordering does seem to add a strong performance gain for known words. While not fully explored, radical3bigram did not add much beyond radical3; it is possible that the part of speech does not depend much on the radicals of previous words.

Some limitations of the Stanford POS tagger were discovered during training. It is not possible to train on a single feature alone, so the unigram model reported above was not fully optimized. Further, it was not possible to use radical4 or radical5 as features because the maxent tagger would fail during the optimization process. It is likely that not enough Chinese words are four or five characters long, but information is still lost if these later characters' radicals are unused.

4 Word Segmentation

The processing of written Chinese presents an additional challenge compared to most other languages because Chinese words are not separated from each other by a word divider (i.e. whitespace). A Chinese word is composed of one or more characters, averaging around two. Thus, the task of Chinese word segmentation becomes necessary to suitably model Chinese. We view this as a label tagging problem which can be modeled by a maximum entropy or a conditional random field (CRF) framework, the latter of which seems to be more commonly adopted [2, 3]. Using a CRF for word segmentation, we have two possible tags: each character we observe is labeled B if it begins a new word, or I if it is a continuation of a word (i.e. if it is part of the same word as the previous character).
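As a small illustration of this labeling scheme, B/I training labels can be read off a pre-segmented corpus as follows (the segmentation in the example is illustrative):

```python
def bi_labels(segmented_sentence):
    """Derive per-character B/I labels from a whitespace-segmented sentence."""
    pairs = []
    for word in segmented_sentence.split():
        pairs.append((word[0], "B"))
        pairs.extend((ch, "I") for ch in word[1:])
    return pairs

# bi_labels("布朗 一行 于 今晚 离 沪") ->
# [('布', 'B'), ('朗', 'I'), ('一', 'B'), ('行', 'I'), ('于', 'B'),
#  ('今', 'B'), ('晚', 'I'), ('离', 'B'), ('沪', 'B')]
```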

4.1 Features

We consider using radicals as features for our CRF system. Let us define C_x to be the character x positions after the current one, and R_x to be the radical of C_x. Thus, R_0 is the radical of the current character, R_{-1} is that of the previous character, R_1 that of the next one, and so forth. We implement three feature groups, which we call RadicalUnigram, RadicalBigram, and RadicalTrigram. RadicalUnigram uses the single radicals R_{-1}, R_0, and R_1 as features. RadicalBigram uses as features the bigram concatenations of adjacent characters' radicals: R_{-2}R_{-1}, R_{-1}R_0, and R_0R_1. Similarly, RadicalTrigram uses the trigram concatenations of three adjacent characters' radicals, from the trigram starting at R_{-3} up to the trigram starting at R_0.

Our intuition behind choosing which n-grams to use in relation to the current character is as follows. The task of classifying C_0 as B or I is equivalent to deciding whether there is a word boundary between C_{-1} and C_0. The features relevant to this decision should thus be n-grams which include information about the characters on either side of this boundary, i.e. n-grams that contain R_{-1} and/or R_0. For example, given the sentence 布朗一行于今晚离沪赴广州, a bigram containing the radicals of 今晚 should intuitively have no bearing on whether 行 begins a new word.

4.2 Experiments

For our experiments, we started with the most recent release of the Stanford NLP Group's word segmenter [4]. We used the same dev and test files as in the language modeling experiments, but began with a reduced training set of around 1,600 sentences. We first ran the CRF on the test set without the radical feature groups, and then added each n-gram feature group in turn. Without the radical features, the CRF achieved an F score of 0.951. RadicalUnigram did not affect this score, RadicalBigram increased it by 0.1% to 0.952, and RadicalTrigram reduced it to 0.950. Given the slight increase in performance with RadicalBigram, we retrained the CRF using the full training set used for language modeling. However, without the radical features we obtain an F measure of 0.981, whereas with them the F measure drops slightly to 0.979.

Using the smaller training set, we notice that the out-of-vocabulary (OOV) recall rate improves slightly more than the overall F score when going from no radical features to RadicalBigram, as shown in Figure 3. This is coupled with the fact that with the smaller training set, the rate of unknown (OOV) words was relatively high (16.3%) compared to the larger training set (3.5%). Intuitively, since training over radicals generalizes over the space of feature values, the CRF performs better on new character n-grams because the chance that it has seen the radicals of those characters is much higher. RadicalBigram seems to perform well on foreign proper names, which tend to use characters whose radicals are common in foreign transliterations. With the RadicalBigram feature group, we correctly segmented 理查德 (Richard) and 吉尔吉斯 (Kyrgyz), neither of which appears in the smaller training set; these two tokens are not correctly segmented without the radical features.
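To make the three feature groups concrete, here is a sketch of them as a feature-dictionary function of the kind a CRF toolkit could consume; the project implemented them inside the Stanford CRF segmenter rather than in Python, and radical_of is an assumed lookup. Out-of-range positions yield the empty string.

```python
def radical_ngram_features(chars, i, radical_of):
    """Radical features for position i: unigrams R_-1, R_0, R_1; bigrams
    R_-2R_-1, R_-1R_0, R_0R_1; and trigrams starting at R_-3 through R_0."""
    def R(x):
        j = i + x
        return radical_of(chars[j]) if 0 <= j < len(chars) else ""

    feats = {}
    for x in (-1, 0, 1):                      # RadicalUnigram
        feats[f"RU[{x}]"] = R(x)
    for x in (-2, -1, 0):                     # RadicalBigram
        feats[f"RB[{x}]"] = R(x) + R(x + 1)
    for x in (-3, -2, -1, 0):                 # RadicalTrigram
        feats[f"RT[{x}]"] = R(x) + R(x + 1) + R(x + 2)
    return feats
```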

Figure 3: OOV Recall Rates. OOV recall on the smaller training set for each radical n-gram feature group.

Thus, it seems that RadicalBigram only helps performance on OOV words, and even then only slightly. When there are fewer OOV words, such as when training on a large dataset, the radical features have essentially no effect. In practice, the high performance of existing CRF word segmenters for Chinese means that the additional features do not have a significant impact on their accuracy.

5 Conclusions and Future Work

For future work, we plan to study radicals beyond the main semantic radical, especially for POS tagging. We would also like to explore the differences between Simplified and Traditional Chinese for these NLP tasks, since character simplification can be modeled as dropping radicals from characters. We are especially interested in seeing whether simplification dropped radicals in an information-theoretically optimal way.

6 Acknowledgements

We would like to thank Mengqiu Wang for his guidance with this project.

References

[1] K. Kirchhoff, J. Bilmes, and K. Duh, Factored Language Models Tutorial, 2008. http://ssli.ee.washington.edu/people/duh/papers/flm-manual.pdf
[2] http://nlp.stanford.edu/pubs/sighan2005.pdf
[3] http://bcmi.sjtu.edu.cn/~zhaohai/pubs/csb-sighan5_20071015-rev.pdf
[4] http://nlp.stanford.edu/software/segmenter.shtml
[5] http://nlp.stanford.edu/software/tagger.shtml