+ Word Sense Disambiguation
CS4, April 2016, Professor Meteer
Thanks to Jurafsky & Martin and James Pustejovsky for slides
+ Word Sense Disambiguation (WSD)
- Given
  - a word in context
  - a fixed inventory of potential word senses
- Decide which sense of the word this is
- What set of senses?
  - English-to-Spanish MT: inventory is the set of Spanish translations
  - Speech synthesis: inventory is homographs with different pronunciations, like bass and bow
  - Automatic indexing of medical articles: MeSH (Medical Subject Headings) thesaurus entries
  - In general: the senses in a thesaurus like WordNet
+ Two variants of the WSD task
- Lexical sample task
  - Small pre-selected set of target words (line, plant)
  - And an inventory of senses for each word
  - Supervised machine learning: train a classifier for each word
- All-words task
  - Every word in an entire text
  - A lexicon with senses for each word
  - Data sparseness: can't train word-specific classifiers
+ WSD Methods
- Supervised machine learning
- Thesaurus/dictionary methods
- Semi-supervised learning
+ Supervised Machine Learning Approaches
- Supervised machine learning approach:
  - a training corpus of words tagged in context with their sense
  - used to train a classifier that can tag words in new text
- Summary of what we need:
  - the tag set ("sense inventory")
  - the training corpus
  - a set of features extracted from the training corpus
  - a classifier
+ Supervised WSD 1: WSD Tags
- What's a tag? A dictionary sense!
- For example, for WordNet an instance of "bass" in a text has 8 possible tags or labels (bass1 through bass8).
+ 8 senses of "bass" in WordNet
1. bass - (the lowest part of the musical range)
2. bass, bass part - (the lowest part in polyphonic music)
3. bass, basso - (an adult male singer with the lowest voice)
4. sea bass, bass - (flesh of lean-fleshed saltwater fish of the family Serranidae)
5. freshwater bass, bass - (any of various North American lean-fleshed freshwater fishes especially of the genus Micropterus)
6. bass, bass voice, basso - (the lowest adult male singing voice)
7. bass - (the member with the lowest range of a family of musical instruments)
8. bass - (nontechnical name for any of numerous edible marine and freshwater spiny-finned fishes)
+ Inventory of sense tags for bass
WordNet sense | Spanish translation | Roget category | Target word in context
bass4         | lubina              | FISH/INSECT    | ... fish as Pacific salmon and striped bass and ...
bass4         | lubina              | FISH/INSECT    | ... produce filets of smoked bass or sturgeon ...
bass7         | bajo                | MUSIC          | ... exciting jazz bass player since Ray Brown ...
bass7         | bajo                | MUSIC          | ... play bass because he doesn't have to solo ...
+ Supervised WSD 2: Get a corpus
- Lexical sample task:
  - Line-hard-serve corpus - 4,000 examples of each word
  - Interest corpus - 2,369 sense-tagged examples
- All-words:
  - Semantic concordance: a corpus in which each open-class word is labeled with a sense from a specific dictionary/thesaurus.
  - SemCor: 234,000 words from the Brown Corpus, manually tagged with WordNet senses
  - SENSEVAL-3 competition corpora - 2,081 tagged word tokens
+ SemCor
<wf pos=prp>he</wf>
<wf pos=vb lemma=recognize wnsn=4 lexsn=2:31:00::>recognized</wf>
<wf pos=dt>the</wf>
<wf pos=nn lemma=gesture wnsn=1 lexsn=1:04:00::>gesture</wf>
<punc>.</punc>
+ Supervised WSD 3: Extract feature vectors
Intuition from Warren Weaver (1955):
"If one examines the words in a book, one at a time as through an opaque mask with a hole in it one word wide, then it is obviously impossible to determine, one at a time, the meaning of the words... But if one lengthens the slit in the opaque mask, until one can see not only the central word in question but also say N words on either side, then if N is large enough one can unambiguously decide the meaning of the central word... The practical question is: what minimum value of N will, at least in a tolerable fraction of cases, lead to the correct choice of meaning for the central word?"
+ Feature vectors
- A simple representation for each observation (each instance of a target word)
- Vectors of sets of feature/value pairs
- Represented as an ordered list of values
- These vectors represent, e.g., the window of words around the target
+ Two kinds of features in the vectors
- Collocational features and bag-of-words features
- Collocational
  - Features about words at specific positions near the target word
  - Often limited to just word identity and POS
- Bag-of-words
  - Features about words that occur anywhere in the window (regardless of position)
  - Typically limited to frequency counts
+ Examples
- Example text (WSJ):
  "An electric guitar and bass player stand off to one side, not really part of the scene."
- Assume a window of +/- 2 from the target.
+ Collocational features
- Position-specific information about the words and collocations in the window:
  "guitar and bass player stand"
- Word 1-, 2-, 3-grams in a window of ±3 are common:
  [w_{i-2}, POS_{i-2}, w_{i-1}, POS_{i-1}, w_{i+1}, POS_{i+1}, w_{i+2}, POS_{i+2}, w_{i-2}^{i-1}, w_{i}^{i+1}]
  = [guitar, NN, and, CC, player, NN, stand, VB, and guitar, player stand]
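As a minimal sketch of extracting these position-specific features, the function below assumes the sentence has already been tokenized and POS-tagged (the tags are supplied by hand here; a real system would run a tagger first). It collects the word and POS at each offset in the window plus the two neighboring bigrams; exact bigram ordering conventions vary across implementations.

```python
def collocational_features(words, tags, i, window=2):
    """Position-specific features around the target word at index i.

    words and tags are parallel lists of tokens and their POS tags.
    Returns word/POS values at each offset, then the bigram to the
    left of the target and the bigram to its right.
    """
    feats = []
    for offset in range(-window, window + 1):
        if offset == 0:
            continue  # skip the target word itself
        j = i + offset
        if 0 <= j < len(words):
            feats.append(words[j])
            feats.append(tags[j])
        else:
            feats.extend(["<pad>", "<pad>"])  # window falls off the sentence
    # local n-gram features: bigrams immediately left and right of the target
    feats.append(" ".join(words[max(0, i - 2):i]))
    feats.append(" ".join(words[i + 1:i + 3]))
    return feats

# "An electric guitar and bass player stand off ..." with hand-supplied tags
words = ["an", "electric", "guitar", "and", "bass", "player", "stand", "off"]
tags  = ["DT", "JJ",       "NN",     "CC",  "NN",   "NN",     "VB",    "RP"]
feats = collocational_features(words, tags, words.index("bass"))
print(feats)
```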
+ Bag-of-words features
- An unordered set of words - position ignored
- Counts of words that occur within the window
- First choose a vocabulary
- Then count how often each of those terms occurs in a given window
- Sometimes just a binary indicator: 1 or 0
+ Co-occurrence example
- Assume we've settled on a possible vocabulary of 12 words in "bass" sentences:
  [fishing, big, sound, player, fly, rod, pound, double, runs, playing, guitar, band]
- The vector for "guitar and bass player stand":
  [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0]
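The binary bag-of-words vector above can be reproduced in a few lines; this is a sketch using the slide's 12-word vocabulary:

```python
def bow_vector(vocab, context_words):
    """Binary bag-of-words vector: 1 if the vocab word occurs in the window."""
    present = set(context_words)
    return [1 if w in present else 0 for w in vocab]

vocab = ["fishing", "big", "sound", "player", "fly", "rod",
         "pound", "double", "runs", "playing", "guitar", "band"]
vec = bow_vector(vocab, "guitar and bass player stand".split())
print(vec)  # -> [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0]
```

Swapping the 1/0 indicator for a count of occurrences gives the frequency-valued variant.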
+ Classification: definition
- Input:
  - a word w and some features f
  - a fixed set of classes C = {c1, c2, ..., cJ}
- Output: a predicted class c ∈ C
+ Classification Methods: Supervised Machine Learning
- Input:
  - a word w in a text window d (which we'll call a "document")
  - a fixed set of classes C = {c1, c2, ..., cJ}
  - a training set of m hand-labeled text windows (again called "documents"): (d1, c1), ..., (dm, cm)
- Output:
  - a learned classifier γ: d → c
+ Classification Methods: Supervised Machine Learning
- Any kind of classifier:
  - Naive Bayes
  - Logistic regression
  - Neural networks
  - Support-vector machines
  - k-nearest neighbors
  - ...
+ Classifiers
- The choice of technique, in part, depends on the set of features that have been used
- Some techniques work better/worse with features with numerical values
- Some techniques work better/worse with features that have large numbers of possible values
  - For example, the feature "the word to the left" has a fairly large number of possible values
+ Naïve Bayes
- Start from the most probable sense given the features:
  ŝ = argmax_{s∈S} P(s|f)
- Rewriting with Bayes' rule:
  ŝ = argmax_{s∈S} P(f|s) P(s) / P(f)
- Removing the denominator (constant across senses):
  ŝ = argmax_{s∈S} P(f|s) P(s)
- Assuming independence of the features:
  P(f|s) ≈ ∏_{j=1}^{n} P(f_j|s)
- Final:
  ŝ ≈ argmax_{s∈S} P(s) ∏_{j=1}^{n} P(f_j|s)
+ Applying Naive Bayes to WSD
- P(c) is the prior probability of that sense
  - Counting in a labeled training set
- P(w|c) is the conditional probability of a word given a particular sense
  - P(w|c) = count(w, c) / count(c)
- We get both of these from a tagged corpus like SemCor
- Can also generalize to look at other features besides words
  - Then it would be P(f|c): the conditional probability of a feature given a sense
+ Naive Bayes example (with add-1 smoothing)
P̂(c) = N_c / N        P̂(w|c) = (count(w,c) + 1) / (count(c) + |V|)

Training:
  Doc 1 (class f): fish smoked fish
  Doc 2 (class f): fish line
  Doc 3 (class f): fish haul smoked
  Doc 4 (class g): guitar jazz line
Test:
  Doc 5: line guitar jazz jazz

V = {fish, smoked, line, haul, guitar, jazz}

Priors: P(f) = 3/4, P(g) = 1/4

Conditional probabilities:
  P(line|f)   = (1+1) / (8+6) = 2/14
  P(guitar|f) = (0+1) / (8+6) = 1/14
  P(jazz|f)   = (0+1) / (8+6) = 1/14
  P(line|g)   = (1+1) / (3+6) = 2/9
  P(guitar|g) = (1+1) / (3+6) = 2/9
  P(jazz|g)   = (1+1) / (3+6) = 2/9

Choosing a class:
  P(f|d5) ∝ 3/4 * 2/14 * 1/14 * (1/14)^2 ≈ 0.00003
  P(g|d5) ∝ 1/4 * 2/9 * 2/9 * (2/9)^2 ≈ 0.0006  →  choose g
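The worked example above can be sketched as a small Naive Bayes trainer with add-1 (Laplace) smoothing. This is a minimal illustration, not a production classifier; it reproduces the slide's numbers exactly (P(line|f) = 2/14, priors 3/4 and 1/4, and the test document classified as g):

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (word_list, class). Returns priors and smoothed likelihoods."""
    vocab = {w for words, _ in docs for w in words}
    classes = {c for _, c in docs}
    priors, cond = {}, {}
    for c in classes:
        class_docs = [words for words, lab in docs if lab == c]
        n_tokens = sum(len(d) for d in class_docs)
        counts = Counter(w for d in class_docs for w in d)
        priors[c] = len(class_docs) / len(docs)
        # add-1 smoothing, as on the slide: (count + 1) / (tokens + |V|)
        cond[c] = {w: (counts[w] + 1) / (n_tokens + len(vocab)) for w in vocab}
    return priors, cond

def classify(words, priors, cond):
    """Pick argmax_c log P(c) + sum_w log P(w|c); unseen words are skipped."""
    scores = {}
    for c in priors:
        log_p = math.log(priors[c])
        for w in words:
            if w in cond[c]:
                log_p += math.log(cond[c][w])
        scores[c] = log_p
    return max(scores, key=scores.get)

train = [("fish smoked fish".split(), "f"),
         ("fish line".split(), "f"),
         ("fish haul smoked".split(), "f"),
         ("guitar jazz line".split(), "g")]
priors, cond = train_nb(train)
print(classify("line guitar jazz jazz".split(), priors, cond))  # -> g
```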
+ Word Sense Disambiguation: Evaluations and Baselines
+ WSD evaluations and baselines
- Best evaluation: extrinsic ("end-to-end", "task-based") evaluation
  - Embed the WSD algorithm in a task and see if you can do the task better!
- What we often do for convenience: intrinsic evaluation
  - Exact-match sense accuracy: % of words tagged identically with the human-annotated sense tags
  - Usually evaluate using held-out data from the same labeled corpus
- Baselines
  - Most frequent sense
  - The Lesk algorithm
+ Most frequent sense
- WordNet senses are ordered in frequency order
- So the most frequent sense in WordNet = take the first sense
- Sense frequencies come from the SemCor corpus
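A sketch of the baseline, computed directly from a sense-tagged corpus (the toy tagged tokens below are hypothetical; with NLTK installed and WordNet data downloaded, the equivalent shortcut is taking the first synset for a word, since NLTK returns synsets in frequency order):

```python
from collections import Counter

def most_frequent_sense(word, tagged_corpus):
    """MFS baseline: the sense this word carries most often in a
    sense-tagged corpus (SemCor plays this role for WordNet)."""
    counts = Counter(sense for w, sense in tagged_corpus if w == word)
    return counts.most_common(1)[0][0] if counts else None

# hypothetical sense-tagged tokens standing in for SemCor
corpus = [("bass", "bass7"), ("bass", "bass7"), ("bass", "bass4"),
          ("line", "line1")]
print(most_frequent_sense("bass", corpus))  # -> bass7
```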
+ Ceiling
- Human inter-annotator agreement
  - Compare annotations of two humans
  - On the same data
  - Given the same tagging guidelines
- Human agreement on all-words corpora with WordNet-style senses: 75%-80%
+ Word Sense Disambiguation: Dictionary and Thesaurus Methods
+ The Simplified Lesk algorithm
- Let's disambiguate "bank" in this sentence:
  "The bank can guarantee deposits will eventually cover future tuition costs because it invests in adjustable-rate mortgage securities."
- given the following two WordNet senses:

bank1
  Gloss: a financial institution that accepts deposits and channels the money into lending activities
  Examples: "he cashed a check at the bank", "that bank holds the mortgage on my home"
bank2
  Gloss: sloping land (especially the slope beside a body of water)
  Examples: "they pulled the canoe up on the bank", "he sat on the bank of the river and watched the currents"
+ The Simplified Lesk algorithm
Choose the sense with the most word overlap between gloss and context (not counting function words):
  "The bank can guarantee deposits will eventually cover future tuition costs because it invests in adjustable-rate mortgage securities."

bank1
  Gloss: a financial institution that accepts deposits and channels the money into lending activities
  Examples: "he cashed a check at the bank", "that bank holds the mortgage on my home"
bank2
  Gloss: sloping land (especially the slope beside a body of water)
  Examples: "they pulled the canoe up on the bank", "he sat on the bank of the river and watched the currents"

Here bank1 wins: its signature shares "deposits" and "mortgage" with the context, while bank2 shares nothing beyond the target word itself.
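The Simplified Lesk step can be sketched directly: tokenize the context and each sense's gloss-plus-examples signature, drop function words (the small stopword list here is an illustrative stand-in), and pick the sense with the largest overlap. On the slide's bank example it selects bank1:

```python
STOPWORDS = {"a", "an", "the", "of", "in", "on", "at", "that", "it",
             "he", "my", "and", "to", "up", "will", "can", "be", "they"}

def tokens(text):
    """Lowercased content-word set, stripping simple punctuation."""
    return {w.strip(".,").lower() for w in text.split()} - STOPWORDS

def simplified_lesk(context, senses):
    """senses: {sense_name: gloss + example text}. Choose the sense whose
    signature overlaps the context in the most non-stopword words."""
    ctx = tokens(context)
    return max(senses, key=lambda s: len(ctx & tokens(senses[s])))

senses = {
    "bank1": "a financial institution that accepts deposits and channels "
             "the money into lending activities . he cashed a check at the "
             "bank . that bank holds the mortgage on my home",
    "bank2": "sloping land especially the slope beside a body of water . "
             "they pulled the canoe up on the bank . he sat on the bank "
             "of the river and watched the currents",
}
context = ("The bank can guarantee deposits will eventually cover future "
           "tuition costs because it invests in adjustable-rate mortgage "
           "securities")
print(simplified_lesk(context, senses))  # -> bank1
```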
+ The Corpus Lesk algorithm
- Assumes we have some sense-labeled data (like SemCor)
- Take all the sentences with the relevant word sense:
  "These short, 'streamlined' meetings usually are sponsored by local banks, Chambers of Commerce, trade associations, or other civic organizations."
- Now add these to the gloss + examples for each sense; call it the "signature" of a sense.
- Choose the sense with the most word overlap between context and signature.
+ Corpus Lesk: IDF weighting
- Instead of just removing function words
  - Weigh each word by its "promiscuity" across documents
  - Down-weights words that occur in every document (gloss, example, etc.)
  - These are generally function words, but this is a more fine-grained measure
- Weigh each overlapping word by inverse document frequency
+ Corpus Lesk: IDF weighting
- Weigh each overlapping word by inverse document frequency
  - N is the total number of documents
  - df_i = document frequency of word i = # of documents containing word i

  idf_i = log(N / df_i)

  score(sense_i, context_j) = Σ_{w ∈ overlap(signature_i, context_j)} idf_w
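A minimal sketch of the scoring formula, over a tiny hypothetical "corpus" of three documents. Note how "the", which occurs in every document, gets idf = 0 and contributes nothing to the overlap score, exactly the fine-grained down-weighting the slide describes:

```python
import math

def idf(word, documents):
    """idf_i = log(N / df_i), df_i = number of documents containing the word."""
    df = sum(1 for doc in documents if word in doc)
    return math.log(len(documents) / df) if df else 0.0

def weighted_overlap(signature, context, documents):
    """Corpus-Lesk score: sum of idf over words shared by signature and context."""
    return sum(idf(w, documents) for w in set(signature) & set(context))

docs = [{"the", "bank", "holds", "mortgage"},
        {"the", "river", "bank"},
        {"the", "mortgage", "rate"}]
sig = {"bank", "mortgage", "the"}
ctx = {"the", "mortgage", "securities"}
score = weighted_overlap(sig, ctx, docs)
print(score)  # only "mortgage" contributes; "the" has idf = log(3/3) = 0
```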
+ Graph-based methods
- First, WordNet can be viewed as a graph
  - senses are nodes
  - relations (hypernymy, meronymy) are edges
  - also add an edge between a word and unambiguous gloss words
(Figure: a fragment of the WordNet graph around "drink", with nodes such as helping_n, food_n, beverage_n, liquid_n, milk_n, sup_v, toast_n4, consume_v, drink_n1, drink_v1, sip_v, sip_n, consumer_n, drinker_n, drinking_n, consumption_n, potation_n)
+ How to use the graph for WSD
- Insert the target word and the words in its sentential context into the graph, with directed edges to their senses
  - "She drank some milk"
- Now choose the most central sense
  - Add some probability to "drink" and "milk" and compute the node with the highest PageRank
(Figure: the word nodes "drink" and "milk" linked to candidate senses drink_v1 ... drink_v5 and milk_n1 ... milk_n4, with sense-sense edges to beverage_n1, drinker_n1, food_n1, boozing_n1, nutriment_n1)
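The "add some probability to drink and milk" step is personalized PageRank: teleport mass goes only to the context-word nodes. The sketch below runs plain power iteration on a tiny hand-built graph whose node names are illustrative, not real WordNet synsets. Because "milk" pulls rank into the shared "beverage" node, the beverage-related sense of drink ends up most central:

```python
def personalized_pagerank(adj, seeds, damping=0.85, iters=100):
    """Power-iteration PageRank; teleport mass goes only to the seed nodes."""
    nodes = list(adj)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    tele = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    for _ in range(iters):
        rank = {n: (1 - damping) * tele[n]
                   + damping * sum(rank[m] / len(adj[m])
                                   for m in adj if n in adj[m])
                for n in nodes}
    return rank

# toy symmetric graph for "She drank some milk": word nodes link to their
# candidate senses; sense nodes link to related senses (labels illustrative)
adj = {
    "drink":    {"drink_v1", "drink_v2"},
    "milk":     {"milk_n1"},
    "drink_v1": {"drink", "beverage"},   # "take in liquids" sense
    "drink_v2": {"drink", "boozing"},    # "consume alcohol" sense
    "milk_n1":  {"milk", "beverage"},
    "beverage": {"drink_v1", "milk_n1"},
    "boozing":  {"drink_v2"},
}
ranks = personalized_pagerank(adj, seeds={"drink", "milk"})
best = max(["drink_v1", "drink_v2"], key=ranks.get)
print(best)  # the beverage-linked sense, supported by "milk"
```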
+ Word Sense Disambiguation: Semi-Supervised Learning
+ Semi-Supervised Learning
- Problem: supervised and dictionary-based approaches require large hand-built resources
  - What if you don't have so much training data?
- Solution: Bootstrapping
  - Generalize from a very small hand-labeled seed set.
+ Bootstrapping
- For bass
  - Rely on the "one sense per collocation" rule
    - A word recurring in collocation with the same word will almost surely have the same sense.
  - The word "play" occurs with the music sense of bass
  - The word "fish" occurs with the fish sense of bass
+ Sentences extracted using "fish" and "play"
- "We need more good teachers - right now, there are only a half a dozen who can play the free bass with ease."
- "An electric guitar and bass player stand off to one side, not really part of the scene, just as a sort of nod to gringo expectations perhaps."
- "The researchers said the worms spend part of their life cycle in such fish as Pacific salmon and striped bass and Pacific rockfish or snapper."
- "And it all started when fishermen decided the striped bass in Lake Mead were too skinny."
(Figure caption: samples of bass sentences extracted from the WSJ by using the simple correlates fish and play.)
+ Summary: generating seeds
1) Hand labeling
2) "One sense per collocation":
   - A word recurring in collocation with the same word will almost surely have the same sense.
3) "One sense per discourse":
   - The sense of a word is highly consistent within a document - Yarowsky (1995)
   - (At least for non-function words, and especially topic-specific words)
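The seed-then-grow loop can be sketched as below. This is a deliberately simplified illustration with made-up mini-contexts: Yarowsky's actual algorithm ranks collocation rules in a decision list by log-likelihood and only promotes confident ones, whereas this sketch promotes any non-stopword seen with exactly one sense so far ("one sense per collocation"):

```python
from collections import defaultdict

STOP = {"the", "and", "with", "in", "such", "as", "on", "were", "can"}

def yarowsky_bootstrap(contexts, seeds, iters=3):
    """Bootstrapping sketch: label what the seed collocations cover,
    then learn new collocation rules from the labeled contexts."""
    rules = dict(seeds)  # collocation word -> sense
    labels = {}
    for _ in range(iters):
        # label step: a context takes the sense of the first rule word in it
        for i, ctx in enumerate(contexts):
            for w in ctx:
                if w in rules:
                    labels[i] = rules[w]
                    break
        # learn step: a non-stopword seen with exactly one sense becomes a rule
        by_word = defaultdict(set)
        for i, sense in labels.items():
            for w in contexts[i]:
                if w not in STOP:
                    by_word[w].add(sense)
        for w, senses in by_word.items():
            if len(senses) == 1:
                rules[w] = next(iter(senses))
    return labels, rules

contexts = [
    "can play the free bass with ease".split(),
    "bass player play on stage".split(),
    "electric guitar and bass player stand".split(),   # no seed word here
    "worms in fish such as striped bass".split(),
    "fishermen decided the striped bass were skinny".split(),  # no seed word
]
labels, rules = yarowsky_bootstrap(contexts, {"play": "music", "fish": "fish"})
print(labels)  # contexts 2 and 4 get labeled via learned rules
```

Context 2 is reached through the learned rule "player" → music, and context 4 through "striped" → fish; "bass" itself never becomes a rule because it appears with both senses.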
+ Stages in the Yarowsky bootstrapping algorithm for the word "plant"
(Figure, panels (a) and (b): the instance space at iteration 0 and at a later iteration; the seed regions Λ0 labeled LIFE and MANUFACTURING grow as the classifier V0 → V1 labels more instances, with collocates such as EQUIPMENT, EMPLOYEE, ANIMAL, and MICROSCOPIC appearing in the expanding labeled regions.)
+ Summary
- Word Sense Disambiguation: choosing the correct sense in context
- Applications: MT, QA, etc.
- Three classes of methods:
  - Supervised machine learning: Naive Bayes classifier
  - Thesaurus/dictionary methods
  - Semi-supervised learning
- Main intuition:
  - There is lots of information in a word's context
  - Simple algorithms based just on word counts can be surprisingly good