
Statistical Methods
Allen's Chapter 7; J&M's Chapters 8 and 12

Statistical Methods
- Large data sets (corpora) of natural language allow statistical methods that were not possible before.
- The Brown Corpus includes about 1,000,000 words annotated with POS tags.
- The Penn Treebank contains full syntactic annotations.

Part of Speech Tagging
- Determining the most likely category of each word in a sentence with ambiguous words.
- Example: finding the POS of words that can be either nouns or verbs.
- We need two random variables:
  1. C, which ranges over POS tags {N, V}
  2. W, which ranges over all possible words

Part of Speech Tagging (Cont.)
- Example: W = flies
- Problem: which is greater, P(C=N | W=flies) or P(C=V | W=flies)?
  That is, P(N | flies) or P(V | flies)?
- P(N | flies) = P(N & flies) / P(flies)
- P(V | flies) = P(V & flies) / P(flies)
- Since the denominator is the same, we only need to compare P(N & flies) with P(V & flies).

Part of Speech Tagging (Cont.)
- We don't have true probabilities; we can estimate them from large data sets.
- Suppose there is a corpus with 1,273,000 words, containing 1,000 uses of flies: 400 with a noun sense and 600 with a verb sense.
- P(flies) = 1000 / 1273000 ≈ 0.0008
- P(flies & N) = 400 / 1273000 ≈ 0.0003
- P(flies & V) = 600 / 1273000 ≈ 0.0005
- P(V | flies) = P(V & flies) / P(flies) = 0.0005 / 0.0008 ≈ 0.625
- So about 60% of the time (600 out of 1,000 occurrences), flies is a verb.

Estimating Probabilities
- We want to use probabilities to predict future events.
- E.g., using P(V | flies) = 0.625 to predict that the next occurrence of flies is more likely to be a verb.
- Estimating probabilities from relative frequencies is called Maximum Likelihood Estimation (MLE).
- Generally, the larger the data set, the more accurate the estimates.

Estimating Probabilities (Cont.)
- Estimating the outcome probability of tossing a fair coin (i.e., 0.5)
- Acceptable margin of error: (0.25 - 0.75)
- The more trials performed, the more accurate the estimate:
  2 trials: 50% chance of an acceptable estimate
  3 trials: 75% chance
  4 trials: 87.5% chance
  8 trials: 93% chance
  12 trials: 95% chance
- (A sketch of this calculation appears below.)
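As a rough check on these figures, a minimal sketch, assuming "acceptable" means the observed proportion of heads lands in the inclusive range 0.25-0.75 for a fair coin; the boundary treatment is an assumption and nudges the last value slightly.

# Sketch: chance that the estimated P(heads) from n tosses of a fair coin
# falls within the acceptable margin [0.25, 0.75] (boundaries assumed inclusive).
from math import comb

def chance_within_margin(n, lo=0.25, hi=0.75):
    # Sum binomial probabilities of every heads-count k whose proportion k/n is acceptable.
    return sum(comb(n, k) for k in range(n + 1) if lo <= k / n <= hi) / 2 ** n

for n in (2, 3, 4, 8, 12):
    print(n, "trials:", round(chance_within_margin(n), 3))
# Prints roughly 0.5, 0.75, 0.875, 0.93, 0.96 -- close to the figures on the slide.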

Estimating the outcome of tossing a coin (figure)

Estimating Probabilities (Cont.)
- So the larger the data set the better, but there is the problem of sparse data:
- The Brown Corpus contains about a million words, but only about 49,000 distinct words, so one would expect each word to occur about 20 times on average.
- Yet over 40,000 of those words occur fewer than 5 times.

Estimating Probabilities (Cont.)
- For a random variable X with values xi, estimates are computed from the counts |xi| (the number of times X = xi):
  P(X = xi) ≈ Vi / Σj Vj
- Maximum Likelihood Estimation (MLE) uses Vi = |xi|
- Expected Likelihood Estimation (ELE) uses Vi = |xi| + 0.5

MLE vs. ELE
- Suppose a word w doesn't occur in the corpus.
- We want to estimate the probability of w occurring in one of 40 classes L1 ... L40.
- We have a random variable X, where X = xi only if w appears in word category Li.
- By MLE, P(Li | w) is undefined because the divisor is zero.
- By ELE, P(Li | w) ≈ 0.5 / 20 = 0.025
- Now suppose w occurs 5 times (4 times as a noun and once as a verb):
  By MLE, P(N | w) = 4/5 = 0.8; by ELE, P(N | w) = 4.5/25 = 0.18
- (A small sketch of both estimators follows.)
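A minimal sketch of the two estimators, assuming 40 categories as on the slide; the function and variable names are illustrative.

# Sketch of the MLE and ELE estimators for P(Li | w) over 40 lexical categories,
# reproducing the numbers on this slide.
NUM_CLASSES = 40

def mle(counts):
    total = sum(counts.values())
    if total == 0:
        return None                      # undefined: the divisor is zero
    return {c: n / total for c, n in counts.items()}

def ele(counts, num_classes=NUM_CLASSES):
    # Add 0.5 to the count of every class, including the unseen ones.
    total = sum(counts.values()) + 0.5 * num_classes
    return {c: (counts.get(c, 0) + 0.5) / total for c in set(counts) | {"any-unseen-class"}}

unseen = {}                              # w never occurs in the corpus
print(mle(unseen))                       # None: MLE is undefined
print(ele(unseen)["any-unseen-class"])   # 0.5 / 20 = 0.025

seen = {"N": 4, "V": 1}                  # w occurs 5 times: 4 nouns, 1 verb
print(mle(seen)["N"])                    # 0.8
print(ele(seen)["N"])                    # 4.5 / 25 = 0.18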

Evaluation
- The data set is divided into:
  Training set (80-90% of the data)
  Test set (10-20%)
- Cross-validation: repeatedly remove a different part of the corpus as the test set, train on the remainder of the corpus, and then evaluate on that held-out test set.

Part of Speech Tagging
- Simplest algorithm: choose the interpretation that occurs most frequently; flies in the sample corpus was a verb about 60% of the time.
- This algorithm's success rate is about 90%, since over 50% of the words appearing in most corpora are unambiguous.
- To improve the success rate, use the tags before or after the word under examination: if flies is preceded by the word the, it is almost certainly a noun.
- (A sketch of this baseline tagger follows.)
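A minimal sketch of this baseline, assuming a training corpus given as (word, tag) pairs; the tiny corpus and the noun fallback for unknown words are illustrative assumptions.

# Sketch of the simplest tagger: always pick the most frequent tag seen for each
# word in the training corpus (here, a hypothetical list of (word, tag) pairs).
from collections import Counter, defaultdict

def train_most_frequent(tagged_words):
    counts = defaultdict(Counter)
    for word, tag in tagged_words:
        counts[word.lower()][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(words, model, default="N"):
    # Unknown words fall back to a default tag (nouns are a common choice).
    return [(w, model.get(w.lower(), default)) for w in words]

training = [("the", "ART"), ("flies", "N"), ("flies", "V"), ("flies", "V"), ("like", "V")]
model = train_most_frequent(training)
print(tag("the flies like flowers".split(), model))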

Part of Speech Tagging (Cont.)
- Bayes' rule: P(A | B) = P(A) * P(B | A) / P(B)
- Given a sequence of words w1 ... wt, find a sequence of lexical categories C1 ... Ct such that
  1. P(C1 ... Ct | w1 ... wt) is maximized.
- Using Bayes' rule, this equals
  2. P(C1 ... Ct) * P(w1 ... wt | C1 ... Ct) / P(w1 ... wt)
- Since the denominator is the same for every category sequence, the problem reduces to finding C1 ... Ct such that
  3. P(C1 ... Ct) * P(w1 ... wt | C1 ... Ct) is maximized.
- These probabilities can be estimated by making some independence assumptions.

Part of Speech Tagging (Cont.)
- Use information about:
  the previous word category: bigram
  the two previous word categories: trigram
  the n-1 previous word categories: n-gram
- Using the bigram model:
  P(C1 ... Ct) ≈ Π i=1..t P(Ci | Ci-1)
  e.g., P(ART N V N) = P(ART | start) * P(N | ART) * P(V | N) * P(N | V)
  P(w1 ... wt | C1 ... Ct) ≈ Π i=1..t P(wi | Ci)
- We are looking for a sequence C1 ... Ct such that Π i=1..t P(Ci | Ci-1) * P(wi | Ci) is maximized.

Part of Speech Tagging (Cont.)
- The information needed by this formula can be extracted from the corpus:
  P(Ci = V | Ci-1 = N) = Count(N at position i-1 & V at position i) / Count(N at position i-1)   (Fig. 7-4)
  P(the | ART) = Count(# times the is an ART) / Count(# times an ART occurs)   (Fig. 7-6)
- (A sketch of collecting these counts from a tagged corpus follows.)
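A minimal sketch of extracting these two kinds of counts from a tagged corpus; the toy corpus and the <start> marker are assumptions for illustration.

# Sketch of estimating bigram transition and lexical-generation probabilities
# from a POS-tagged corpus, mirroring the two count ratios on this slide.
from collections import Counter

def estimate(tagged_sentences):
    trans, prev_counts = Counter(), Counter()      # Count(Ci-1, Ci) and Count(Ci-1)
    emit, tag_counts = Counter(), Counter()        # Count(Ci, wi) and Count(Ci)
    for sent in tagged_sentences:
        prev = "<start>"
        for word, t in sent:
            trans[(prev, t)] += 1
            prev_counts[prev] += 1
            emit[(t, word.lower())] += 1
            tag_counts[t] += 1
            prev = t
    p_trans = {bg: c / prev_counts[bg[0]] for bg, c in trans.items()}
    p_emit = {tw: c / tag_counts[tw[0]] for tw, c in emit.items()}
    return p_trans, p_emit

corpus = [[("the", "ART"), ("flies", "N"), ("like", "V"), ("flowers", "N")],
          [("fruit", "N"), ("flies", "V"), ("like", "V"), ("a", "ART"), ("flower", "N")]]
p_trans, p_emit = estimate(corpus)
print(p_trans[("N", "V")])      # Count(N followed by V) / Count(N as a previous category)
print(p_emit[("ART", "the")])   # Count(the tagged ART) / Count(ART)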

Using an Artificial Corpus
- An artificial corpus was generated with 300 sentences over the categories ART, N, V, P.
- It contains 1998 words: 833 nouns, 300 verbs, 558 articles, and 307 prepositions.
- To deal with the problem of sparse data, a minimum probability of 0.0001 is assumed.

Bigram probabilities from the generated corpus (figure)

Word counts in the generated corpus

            N     V   ART     P   TOTAL
flies      21    23     0     0      44
fruit      49     5     1     0      55
like       10    30     0    21      61
a           1     0   201     0     202
the         1     0   300     2     303
flower     53    15     0     0      68
flowers    42    16     0     0      58
birds      64     1     0     0      65
others    592   210    56   284    1142
TOTAL     833   300   558   307    1998

Lexical-generation probabilities (Fig. 7-6)

PROB(the | ART)    0.54      PROB(a | ART)      0.360
PROB(flies | N)    0.025     PROB(a | N)        0.001
PROB(flies | V)    0.076     PROB(flower | N)   0.063
PROB(like | V)     0.1       PROB(flower | V)   0.05
PROB(like | P)     0.068     PROB(birds | N)    0.076
PROB(like | N)     0.012

Part of Speech Tagging (Cont.)
- How do we find the sequence C1 ... Ct that maximizes Π i=1..t P(Ci | Ci-1) * P(wi | Ci)?
- Brute-force search: enumerate all possible sequences. With N categories and T words, there are N^T sequences.
- Using bigram probabilities, the probability of wi being in category Ci depends only on Ci-1.
- The process can be modeled by a special form of probabilistic finite state machine (Fig. 7-7).

Markov Chain
- Probability of a sequence of four words being in categories ART N V N: 0.71 * 1 * 0.43 * 0.35 = 0.107
- The representation is accurate only if the probability of a category occurring depends only on the one category before it. This is called the Markov assumption.
- Such a network is called a Markov chain.

Hidden Markov Model (HMM)
- The Markov network can be extended to include the lexical-generation probabilities, too.
- Each node has an output probability for every possible output it can generate.
- The output probabilities are exactly the lexical-generation probabilities shown in Fig. 7-6.
- A Markov network with output probabilities is called a Hidden Markov Model (HMM).

Hidden Markov Model (HMM)
- The word hidden indicates that, for a specific sequence of words, it is not clear what state the Markov model is in.
- For instance, the word flies could be generated from state N with a probability of 0.025, or from state V with a probability of 0.076.
- As a result, it is no longer trivial to compute the probability of a sequence of words from the network.

Hidden Markov Model (HMM)
- The probability that the state sequence N V ART N generates the output Flies like a flower:
  The probability of the path N V ART N is 0.29 * 0.43 * 0.65 * 1 = 0.081
  The probability of the output being Flies like a flower is P(flies | N) * P(like | V) * P(a | ART) * P(flower | N) = 0.025 * 0.1 * 0.36 * 0.06 = 5.4 * 10^-5
- The likelihood that the HMM would generate the sentence along this path is 0.000054 * 0.081 = 4.374 * 10^-6
- In general, the joint probability of a sentence w1 ... wt and a category sequence C1 ... Ct is Π i=1..t P(Ci | Ci-1) * P(wi | Ci)

Markov chain (figure)

The Viterbi algorithm (figure)

Flies like a flower
- Initialization: SEQSCORE(i, 1) = P(flies | Li) * P(Li | start)
  P(flies/V) = 0.076 * 0.0001 = 7.6 * 10^-6
  P(flies/N) = 0.025 * 0.29 = 0.00725
- Iteration (position 2, category V):
  P(like/V) = max(P(flies/N) * P(V | N), P(flies/V) * P(V | V)) * P(like | V)
  = max(0.00725 * 0.43, 7.6 * 10^-6 * 0.0001) * 0.1 ≈ 0.00031
- (A sketch of the full Viterbi computation follows.)
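A minimal sketch of the Viterbi computation for this example. It uses only the transition and lexical-generation probabilities quoted on these slides; any transition not quoted falls back to the 0.0001 floor mentioned for the artificial corpus, so the exact scores are approximate.

# Viterbi sketch for the bigram HMM tagger. TRANS and EMIT hold only the
# probabilities quoted on these slides; missing transitions get the 0.0001 floor.
FLOOR = 1e-4
STATES = ["N", "V", "ART", "P"]
TRANS = {("<s>", "N"): 0.29, ("<s>", "ART"): 0.71, ("ART", "N"): 1.0,
         ("N", "V"): 0.43, ("N", "N"): 0.13, ("V", "N"): 0.35,
         ("V", "ART"): 0.65, ("P", "N"): 0.26}
EMIT = {("N", "flies"): 0.025, ("V", "flies"): 0.076,
        ("V", "like"): 0.1, ("P", "like"): 0.068, ("N", "like"): 0.012,
        ("ART", "a"): 0.36, ("N", "a"): 0.001,
        ("N", "flower"): 0.063, ("V", "flower"): 0.05}

def viterbi(words):
    # SEQSCORE(state, position) plus backpointers for recovering the best path.
    score = [{s: TRANS.get(("<s>", s), FLOOR) * EMIT.get((s, words[0]), 0.0) for s in STATES}]
    back = [{}]
    for w in words[1:]:
        col, ptr = {}, {}
        for s in STATES:
            best_prev = max(STATES, key=lambda r: score[-1][r] * TRANS.get((r, s), FLOOR))
            col[s] = score[-1][best_prev] * TRANS.get((best_prev, s), FLOOR) * EMIT.get((s, w), 0.0)
            ptr[s] = best_prev
        score.append(col)
        back.append(ptr)
    # Follow the backpointers from the best final state.
    last = max(STATES, key=lambda s: score[-1][s])
    path = [last]
    for ptr in reversed(back[1:]):
        path.insert(0, ptr[path[0]])
    return path, score[-1][last]

print(viterbi("flies like a flower".split()))   # expected path: ['N', 'V', 'ART', 'N']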

Flies like a flower (figure)

Flies like a flower
- Brute-force search requires N^T steps.
- The Viterbi algorithm requires K * T * N^2 steps.

Getting Reliable Statistics (Smoothing)
- Suppose we have 40 categories.
- To collect unigrams, at least 40 samples (one per category) are needed.
- For bigrams, 1600 samples are needed; for trigrams, 64,000; for 4-grams, 2,560,000.
- Linear interpolation combines the estimates:
  P(Ci | Ci-2 Ci-1) = λ1 P(Ci) + λ2 P(Ci | Ci-1) + λ3 P(Ci | Ci-2 Ci-1), with λ1 + λ2 + λ3 = 1
- (A sketch of this interpolation follows.)
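A minimal sketch of the interpolation; the lambda weights and the probability tables are illustrative assumptions, not values from the corpus.

# Sketch of linear-interpolation smoothing for category trigrams: combine unigram,
# bigram, and trigram MLE estimates with weights that sum to one.
def interpolated(c, prev2, prev1, uni, bi, tri, lambdas=(0.1, 0.3, 0.6)):
    l1, l2, l3 = lambdas                      # must satisfy l1 + l2 + l3 = 1
    return (l1 * uni.get(c, 0.0)
            + l2 * bi.get((prev1, c), 0.0)
            + l3 * tri.get((prev2, prev1, c), 0.0))

# Tiny illustrative tables (assumed numbers): P(V), P(V | N), P(V | ART N)
uni = {"V": 0.15}
bi = {("N", "V"): 0.43}
tri = {("ART", "N", "V"): 0.60}
print(interpolated("V", "ART", "N", uni, bi, tri))   # 0.1*0.15 + 0.3*0.43 + 0.6*0.60 = 0.504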

Statistical Parsing
- Corpus-based methods offer new ways to control parsers.
- We can use statistical methods to identify the common structures of a language.
- We can choose the most likely interpretation when a sentence is ambiguous.
- This might lead to much more efficient parsers that are almost deterministic.

Statistical Parsing
- What is the input of a statistical parser? The output of a POS tagging algorithm.
- If the POS tags are accurate, lexical ambiguity is removed.
- But if the tagging is wrong, the parser either cannot find the correct interpretation or may find a valid but implausible one.
- With 95% per-word accuracy, the chance of correctly tagging an entire sentence of 8 words is about 0.67, and of 15 words about 0.46.

Obtaining Lexical Probabilities
- A better approach is:
  1. computing the probability that each word appears in each of its possible lexical categories, and
  2. combining these probabilities with some method of assigning probabilities to rule use in the grammar.
- The context-independent probability that a word w has lexical category Lj can be estimated by:
  P(Lj | w) = count(Lj & w) / Σ i=1..N count(Li & w)

Context-Independent Lexical Categories
- P(Lj | w) = count(Lj & w) / Σ i=1..N count(Li & w)
- P(ART | the) = 300 / 303 ≈ 0.99
- P(N | flies) = 21 / 44 ≈ 0.48

Context-Dependent Lexical Probabilities
- A better estimate can be obtained by computing how likely it is that category Li occurs at position t, over all sequences of the input w1 ... wt.
- Instead of just finding the sequence with the maximum probability, we add up the probabilities of all sequences that end in wt/Li.
- The probability that flies is a noun in the sentence The flies like flowers is calculated by adding up the probabilities of all sequences that end with flies as a noun.

Context-Dependent Lexical Probabilities
- Using the probabilities of Figs. 7-4 and 7-6, the sequences that have nonzero values are:
  P(The/ART flies/N) = P(the | ART) * P(ART | start) * P(N | ART) * P(flies | N) = 0.54 * 0.71 * 1.0 * 0.025 = 9.58 * 10^-3
  P(The/N flies/N) = P(the | N) * P(N | start) * P(N | N) * P(flies | N) = 1/833 * 0.29 * 0.13 * 0.025 = 1.13 * 10^-6
  P(The/P flies/N) = P(the | P) * P(P | start) * P(N | P) * P(flies | N) = 2/307 * 0.0001 * 0.26 * 0.025 = 4.55 * 10^-9
- These add up to approximately 9.58 * 10^-3.

Context-Dependent Lexical Probabilities
- Similarly, there are three nonzero sequences ending with flies as a V, with a total value of 1.13 * 10^-5.
- P(The flies) = 9.58 * 10^-3 + 1.13 * 10^-5 = 9.591 * 10^-3
- P(flies/N | The flies) = P(flies/N & The flies) / P(The flies) = 9.58 * 10^-3 / 9.591 * 10^-3 ≈ 0.9988
- P(flies/V | The flies) = P(flies/V & The flies) / P(The flies) = 1.13 * 10^-5 / 9.591 * 10^-3 ≈ 0.0012

Forward Probabilities
- The probability of producing the words w1 ... wt and ending in state wt/Li is called the forward probability αi(t), defined as:
  αi(t) = P(wt/Li & w1 ... wt)
- In The flies like flowers, α2(3) is the sum of the values computed for all sequences ending in a V (the second category) at position 3, for the input the flies like.
- P(wt/Li | w1 ... wt) = P(wt/Li & w1 ... wt) / P(w1 ... wt) ≈ αi(t) / Σ j=1..N αj(t)

Forward probabilities (figure)

Context-dependent lexical probabilities (figure)

Context-dependent lexical probabilities (figure, cont.)

Backward Probability
- The backward probability βj(t) is the probability of producing the sequence wt ... wT beginning from the state wt/Lj.
- P(wt/Li) ≈ (αi(t) * βi(t)) / Σ j=1..N (αj(t) * βj(t))
- (A sketch of the forward and backward computations follows.)
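A minimal sketch of the forward and backward passes and the resulting per-position category probabilities. Only probabilities quoted on these slides (or derivable from the word-count table) are included; unlisted transitions fall back to the 0.0001 floor, so the exact values are approximate.

# Sketch of forward-backward posteriors P(category at position t | whole sentence).
FLOOR = 1e-4
STATES = ["N", "V", "ART", "P"]
TRANS = {("<s>", "N"): 0.29, ("<s>", "ART"): 0.71, ("ART", "N"): 1.0,
         ("N", "V"): 0.43, ("N", "N"): 0.13, ("V", "N"): 0.35,
         ("V", "ART"): 0.65, ("P", "N"): 0.26}
EMIT = {("ART", "the"): 0.54, ("N", "the"): 1 / 833, ("P", "the"): 2 / 307,
        ("N", "flies"): 0.025, ("V", "flies"): 0.076,
        ("V", "like"): 0.1, ("P", "like"): 0.068, ("N", "like"): 0.012,
        ("N", "flowers"): 0.05, ("V", "flowers"): 0.053}

def t(prev, cur):
    return TRANS.get((prev, cur), FLOOR)

def e(state, word):
    return EMIT.get((state, word), 0.0)

def forward(words):
    alpha = [{s: t("<s>", s) * e(s, words[0]) for s in STATES}]
    for w in words[1:]:
        alpha.append({s: sum(alpha[-1][r] * t(r, s) for r in STATES) * e(s, w) for s in STATES})
    return alpha

def backward(words):
    beta = [{s: 1.0 for s in STATES}]
    for i in range(len(words) - 1, 0, -1):
        beta.insert(0, {s: sum(t(s, r) * e(r, words[i]) * beta[0][r] for r in STATES)
                        for s in STATES})
    return beta

def posteriors(words):
    alpha, beta = forward(words), backward(words)
    result = []
    for i, _ in enumerate(words):
        z = sum(alpha[i][s] * beta[i][s] for s in STATES) or 1.0
        result.append({s: alpha[i][s] * beta[i][s] / z for s in STATES})
    return result

sentence = "the flies like flowers".split()
for word, dist in zip(sentence, posteriors(sentence)):
    print(word, {s: round(p, 4) for s, p in dist.items() if p > 1e-4})
# "flies" comes out overwhelmingly N, in line with the hand calculation on the earlier slides.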

Probabilistic Context-Free Grammars
- CFGs can be generalized to PCFGs.
- We need some statistics on rule use. The simplest approach is to count the number of times each rule is used in a corpus of parsed sentences.
- If category C has rules R1 ... Rm, then
  P(Rj | C) = count(# times Rj used) / Σ i=1..m count(# times Ri used)
- (A sketch of estimating rule probabilities from a treebank follows.)
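A minimal sketch of this counting, assuming parse trees represented as nested tuples; the two-tree treebank is illustrative.

# Sketch of estimating PCFG rule probabilities from a toy "treebank". Each parse
# tree is a nested tuple (category, child1, child2, ...); leaves are plain words.
from collections import Counter, defaultdict

def count_rules(tree, counts):
    cat, children = tree[0], tree[1:]
    if all(isinstance(c, str) for c in children):
        rhs = children                        # lexical rule, e.g. N -> flower
    else:
        rhs = tuple(c[0] for c in children)   # phrasal rule, e.g. NP -> ART N
        for c in children:
            count_rules(c, counts)
    counts[cat][rhs] += 1

def rule_probabilities(treebank):
    counts = defaultdict(Counter)
    for tree in treebank:
        count_rules(tree, counts)
    return {(lhs, rhs): c / sum(rules.values())
            for lhs, rules in counts.items() for rhs, c in rules.items()}

treebank = [("S", ("NP", ("ART", "a"), ("N", "flower")), ("VP", ("V", "wilted"))),
            ("S", ("NP", ("N", "flies")), ("VP", ("V", "like"), ("NP", ("N", "flowers"))))]
for rule, p in sorted(rule_probabilities(treebank).items()):
    print(rule, round(p, 2))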

Probabilistic context-free grammar for the examples (figure)

Independence Assumption
- One can develop an algorithm, similar to the Viterbi algorithm, that finds the most probable parse tree for an input.
- Certain independence assumptions must be made: the probability of a constituent being derived by a rule Rj is independent of how the constituent is used as a subconstituent.
- E.g., the probabilities of NP rules are the same whether the NP is the subject, the object of a verb, or the object of a preposition.
- This assumption is not really valid; a subject NP is much more likely to be a pronoun than an object NP is.

Inside Probability
- The probability that a constituent C generates a sequence of words wi, wi+1, ..., wj (written as wi,j) is called the inside probability and is denoted P(wi,j | C).
- It is called the inside probability because it assigns a probability to the word sequence inside the constituent.

Inside Probabilities
- How do we derive inside probabilities?
- For lexical categories, they are the same as the lexical-generation probabilities: P(flower | N) is the inside probability that the constituent N is realized as the word flower (0.06 in Fig. 7-6).
- Using the lexical-generation probabilities, the inside probabilities of non-lexical constituents can be computed.

Inside Probability of an NP Generating "a flower"
- The probability that an NP generates a flower is estimated as:
  P(a flower | NP) = P(Rule 8 | NP) * P(a | ART) * P(flower | N) + P(Rule 6 | NP) * P(a | N) * P(flower | N)
  = 0.55 * 0.36 * 0.06 + 0.09 * 0.001 * 0.06 ≈ 0.012
  (Rule 8 expands NP as ART N; Rule 6 expands NP as N N.)

Inside Probability of an S Generating "a flower wilted"
- These probabilities can then be used to compute the probabilities of larger constituents:
  P(a flower wilted | S) = P(Rule 1 | S) * P(a flower | NP) * P(wilted | VP)
                         + P(Rule 1 | S) * P(a | NP) * P(flower wilted | VP)
- (A sketch of the recursive inside computation follows.)
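A minimal sketch of the recursive inside computation for the "a flower" example; the two NP rule probabilities are the ones used on the previous slide, and restricting the grammar to binary rules is a simplifying assumption.

# Sketch of computing inside probabilities bottom-up for the "a flower" NP example.
LEX = {"a": {"ART": 0.36, "N": 0.001}, "flower": {"N": 0.06}}       # P(word | category)
RULES = {"NP": [(("ART", "N"), 0.55), (("N", "N"), 0.09)]}          # P(rhs | lhs), binary rules only

def inside(cat, words):
    """P(words | cat): probability that constituent `cat` generates exactly `words`."""
    if len(words) == 1:                                  # base case: lexical category
        return LEX.get(words[0], {}).get(cat, 0.0)
    total = 0.0
    for (left, right), p_rule in RULES.get(cat, []):
        for split in range(1, len(words)):               # every way to split into two subconstituents
            total += p_rule * inside(left, words[:split]) * inside(right, words[split:])
    return total

print(inside("NP", ["a", "flower"]))   # 0.55*0.36*0.06 + 0.09*0.001*0.06 ≈ 0.012, as on the slide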

Probabilistic Chart Parsing
- In parsing, we are interested in finding the most likely parse rather than the overall probability of a given sentence.
- We can use a chart parser for this purpose.
- When entering an entry E of category C using rule i with n subconstituents corresponding to entries E1 ... En:
  P(E) = P(Rule i | C) * P(E1) * ... * P(En)
- For lexical categories, it is better to use the forward probabilities rather than the lexical-generation probabilities.

A flower (figure)

Probabilistic Parsing
- This technique identifies the correct parse only about 50% of the time.
- The reason is that the independence assumption is too radical.
- One of the crucial issues is the handling of lexical items: a context-free model does not consider lexical preferences.
- The parser always prefers attaching a PP to the V rather than to the NP, and so fails to find the correct structure whenever the PP should be attached to the NP.

Best-First Parsing
- Explore higher-probability constituents first.
- Much of the search space, containing lower-rated constituents, is never explored.
- The chart parser's agenda is organized as a priority queue.
- The arc-extension algorithm needs to be modified (a sketch of the priority-queue agenda follows).
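A schematic sketch of the priority-queue agenda only (not a full chart parser); the Agenda class and the seeded lexical entries are illustrative.

# Sketch of organizing the agenda as a priority queue so that higher-probability
# constituents are processed first. heapq is a min-heap, so probabilities are negated.
import heapq

class Agenda:
    def __init__(self):
        self._heap = []
        self._count = 0                      # tie-breaker so heapq never compares constituents

    def push(self, prob, constituent):
        heapq.heappush(self._heap, (-prob, self._count, constituent))
        self._count += 1

    def pop(self):
        neg_prob, _, constituent = heapq.heappop(self._heap)
        return -neg_prob, constituent

    def __bool__(self):
        return bool(self._heap)

# Usage: seed the agenda with lexical entries, then repeatedly pop the best constituent,
# extend arcs with it, and push any completed constituents back with
# P(E) = P(rule | C) * P(E1) * ... * P(En).
agenda = Agenda()
agenda.push(0.36, ("ART", 0, 1, "a"))
agenda.push(0.06, ("N", 1, 2, "flower"))
while agenda:
    prob, constituent = agenda.pop()
    print(prob, constituent)                 # a real parser would extend arcs / apply rules here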

New arc extension for the probabilistic chart parser (figure)

The man put a bird in the house
- The best-first parser finds the correct parse after generating 65 constituents.
- The standard bottom-up parser generates 158 constituents in total, and 106 constituents before finding its first answer.
- So best-first parsing is a significant improvement.

Best-First Parsing
- It finds the most probable interpretation first.
- The probability of a constituent is always lower than or equal to the probability of any of its subconstituents.
- If S2 with probability p2 is found after S1 with probability p1, then p2 cannot be higher than p1; otherwise the subconstituents of S2 would have probabilities higher than p1, would have been found before S1, and thus S2 would have been found sooner, too.

The Problem of Multiplication
- In practice, with large grammars, probabilities drop quickly because of the repeated multiplications.
- Other scoring functions can be used, e.g.:
  Score(C) = MIN(Score(Rule C -> C1 ... Cn), Score(C1), ..., Score(Cn))
- But MIN leads to only about 39% correct results.

Context-Dependent Probabilistic Parsing
- The best-first algorithm improves efficiency, but has no effect on accuracy.
- Computing rule probabilities based on some context-dependent lexical information can improve accuracy.
- The first word of a constituent is often its head word.
- So we compute the probability of rules based on the first word of the constituent: P(R | C, w)

Context-Dependent Probabilistic Parsing
- P(R | C, w) = Count(# times R used for category C starting with w) / Count(# times category C starts with w)
- Singular nouns rarely occur alone as a noun phrase (NP -> N).
- Plural nouns rarely act as a modifying noun (NP -> N N).
- Context-dependent rules also encode verb preferences for subcategorizations.
- (A sketch of estimating P(R | C, w) follows.)
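A minimal sketch of estimating P(R | C, w), reusing the nested-tuple tree representation from the earlier sketch; the toy treebank is illustrative.

# Sketch of estimating the context-dependent rule probability P(R | C, w), where w
# is the first word of the constituent.
from collections import Counter, defaultdict

def first_word(tree):
    return tree[1] if isinstance(tree[1], str) else first_word(tree[1])

def count_lexicalized(tree, counts):
    cat, children = tree[0], tree[1:]
    if isinstance(children[0], str):
        return                                        # lexical level: no phrasal rule here
    rhs = tuple(c[0] for c in children)
    counts[(cat, first_word(tree))][rhs] += 1         # Count(R used for C starting with w)
    for c in children:
        count_lexicalized(c, counts)

def lexicalized_rule_probs(treebank):
    counts = defaultdict(Counter)
    for tree in treebank:
        count_lexicalized(tree, counts)
    return {(lhs_w, rhs): c / sum(rules.values())     # divide by Count(C starts with w)
            for lhs_w, rules in counts.items() for rhs, c in rules.items()}

treebank = [("VP", ("V", "put"), ("NP", ("ART", "the"), ("N", "bird")),
             ("PP", ("P", "in"), ("NP", ("ART", "the"), ("N", "house")))),
            ("VP", ("V", "likes"), ("NP", ("ART", "the"), ("N", "bird")))]
probs = lexicalized_rule_probs(treebank)
print(probs[(("VP", "put"), ("V", "NP", "PP"))])      # 1.0 in this toy treebank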

Rule probabilities based on the first word of constituents (figure)

Context-dependent parser accuracy (figure)

The man put the bird in the house
- P(VP -> V NP PP | VP, put) = 0.93 * 0.99 * 0.76 * 0.76 ≈ 0.54
- P(VP -> V NP | VP, put) = 0.0038
- So with put, the parser strongly prefers the V NP PP expansion (the PP attaches to the verb).

The man likes the bird in the house
- P(VP -> V NP PP | VP, like) = 0.1
- P(VP -> V NP | VP, like) = 0.054

Context-Dependent Rules
- The accuracy of the parser is still only about 66%.
- Improvements: make the rule probabilities relative to a larger fragment of the input (bigram, trigram, ...).
- Use other important words, such as prepositions.
- The more selective the lexical categories, the more predictive the estimates can be (provided there is enough data).
- Other closed-class words such as articles, quantifiers, and conjunctions can also be treated individually.
- But what about open-class words such as verbs and nouns? (Cluster similar words.)

Handling Unknown Words
- An unknown word will disrupt the parse.
- Suppose we have a trigram model of the data. If w3 in the sequence of words w1 w2 w3 is unknown, and w1 and w2 have categories C1 and C2, pick the category C for w3 such that P(C | C1 C2) is maximized.
- For instance, if C2 is ART, then C will probably be a NOUN (or an ADJECTIVE).
- Morphology can also help: unknown words ending in -ing are likely VERBs, and those ending in -ly are likely ADVERBs.
- (A sketch of this guessing strategy follows.)
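A minimal sketch of this guessing strategy; the trigram table, suffix list, default category, and the decision to check morphology first are illustrative assumptions.

# Sketch of guessing the category of an unknown word: use simple morphological cues
# when available, otherwise pick the category maximizing the trigram P(C | C1 C2).
TRIGRAM = {("V", "ART"): {"N": 0.6, "ADJ": 0.35, "V": 0.05}}   # assumed toy values
SUFFIX_HINTS = [("ing", "V"), ("ly", "ADV")]

def guess_category(word, c1, c2):
    # Morphological cues first ...
    for suffix, cat in SUFFIX_HINTS:
        if word.endswith(suffix):
            return cat
    # ... otherwise the category trigram.
    dist = TRIGRAM.get((c1, c2))
    if dist:
        return max(dist, key=dist.get)
    return "N"                      # default: nouns are the most common open class

print(guess_category("glorping", "V", "ART"))   # V (morphology: -ing)
print(guess_category("wug", "V", "ART"))        # N (trigram prediction after an ART)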