Individual Document Keyword Extraction for Tamil

T. Vaishnavi 1, Roxanna Samuel 2
1 Student, Computer Science Engineering, Rajalakshmi Engineering College, Chennai, India, vaishnavi.mythili@gmail.com
2 Assistant Professor (SS), Computer Science Engineering, Rajalakshmi Engineering College, Chennai, India, roxanna.samuel@rajalakshmi.edu.in

Abstract - Keyword extraction is an important technique for summarization, document clustering, web page retrieval, document retrieval, text mining, and related tasks. By extracting significant keywords, we can quickly identify the content of a document and understand the relationships among documents. Keyword extraction is considered one of the core technologies for automatic processing of text. This paper employs Conditional Random Fields (CRFs) for the task of extracting effective keywords that uniquely identify a Tamil document, using machine learning techniques. Keyword extraction includes POS tagging and chunking. Part-of-speech tagging and chunking are elementary processing steps for any language processing task. Part-of-speech (POS) tagging is the procedure of annotating each word in the corpus with its syntactic category. Chunking is the process of identifying and splitting the text into syntactically correlated word groups; the chunking step employs a Conditional Random Field to segment the sentences. We have developed our own tagset for annotating the corpus, which is used for training and testing the POS tag generator and the chunker. Results show that the POS-tag-enhanced keyword extraction model can indeed assist in automatic keyword assignment, and in fact performs significantly better than the original state-of-the-art keyword extractor.

Keywords: Keyword Extraction, POS Tagging, NP Chunking, SVM, CRF.

1 INTRODUCTION

Keywords are defined as a subset of words from a document that describe the meaning of the document. Ideally, keywords represent the essential content of a document.
Keyword extraction is one of the major tasks in the field of Natural Language Processing (NLP). Several different approaches have already been tried to automate keyword extraction for English and other languages. NLP is a field of computer science concerned with the interaction between computers and human (natural) languages. Natural language processing is sometimes referred to as an AI-complete problem, because natural-language understanding requires extensive knowledge about the outside world and the ability to use it. NLP is a significant area of computational linguistics and is considered a sub-field of artificial intelligence. The keyword extraction process includes part-of-speech (POS) tagging and chunking. The basic processing step consists of assigning POS tags to every token in the text. The subsequent step identifies fundamental structural relations between groups of words in a sentence; this structural recognition is usually referred to as chunking. A chunker divides a sentence into its major non-overlapping phrases and attaches a label to each chunk. Chunking falls between tagging and parsing. In this paper we present our experiments using Conditional Random Fields (CRFs); a CRF is an undirected graphical model trained to maximize a conditional probability. The paper is organized as follows: Section 2 discusses related work. Section 3 describes the system architecture. Section 4 discusses implementation and results. Section 5 presents conclusions and future work.

2 RELATED WORKS

This section reviews recent research related to keyword extraction and chunking. Most of this work uses machine learning (ML) approaches.
One of the first noted works in this area was done by Kaur and Gupta [1] for English. The paper surveys machine learning techniques and various other methods used to extract keywords; different approaches have been implemented, and results are evaluated by comparison with manually assigned keywords. In another paper [3], the system focuses on a keyword extraction algorithm that applies to a single document without using a corpus. The most frequent terms are extracted first; then a set of co-occurrences between each term and the frequent terms (i.e., occurrences of the two in the same sentences) is generated. The co-occurrence distribution indicates the importance of a term in the
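The single-document co-occurrence idea of [3] can be sketched as follows. This is a simplified illustration of the description above, not the authors' implementation; the χ²-style score and the parameter names are our own:

```python
from collections import Counter
from itertools import combinations

def keyword_scores(sentences, num_frequent=3):
    """Score terms by how strongly their sentence-level co-occurrence
    with the most frequent terms deviates from chance (chi-square style)."""
    # Document frequency of each term (count a term once per sentence).
    term_freq = Counter(w for s in sentences for w in set(s))
    frequent = [w for w, _ in term_freq.most_common(num_frequent)]

    # co[w][f] = number of sentences containing both w and f.
    co = {w: Counter() for w in term_freq}
    for s in sentences:
        for a, b in combinations(set(s), 2):
            co[a][b] += 1
            co[b][a] += 1

    total = len(sentences)
    scores = {}
    for w in term_freq:
        chi2 = 0.0
        for f in frequent:
            if f == w:
                continue
            # Expected co-occurrence if w and f were independent.
            expected = term_freq[w] * term_freq[f] / total
            if expected > 0:
                chi2 += (co[w][f] - expected) ** 2 / expected
        scores[w] = chi2
    return scores
```

Terms whose co-occurrence with the frequent terms is far from the independence baseline receive high scores and are taken as keyword candidates.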
document. Co-occurrence has long attracted interest in computational linguistics. Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley [5] describe a system that lists documents related to a primary document's keywords, and that uses keyword anchors as hyperlinks between documents, enabling a user to quickly access related material. Keywords extracted from documents serve as a basic building block for an IR system, and can also be used to enrich the presentation of search results. We focus our interest on methods of keyword extraction that operate on individual documents. Such document-oriented methods select or extract the same keywords from a document regardless of the current state of a corpus. They provide context-independent document features, enabling additional analytic methods that characterize changes within a text stream over time. Rapid Automatic Keyword Extraction (RAKE) is an unsupervised, domain-independent, and language-independent method for extracting keywords from individual documents. In [2], chunking, or shallow syntactic parsing, is addressed for Arabic. In Arabic, the problem is harder because of language-specific features that make it quite different from, and more ambiguous than, other natural languages. The authors present a supervised-learning method for chunking Arabic texts, using the Conditional Random Field algorithm and the Penn Arabic Treebank to train the model. The chunking task focuses on recognizing chunks that consist of noun phrases (NPs), which is called noun phrase chunking. The authors recognized arbitrary chunks but classified every non-NP chunk as a VP chunk. Their work has inspired many others to explore the application of learning methods to noun phrase chunking.
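The RAKE method mentioned above can be sketched in a few lines: candidate phrases are obtained by splitting the text at stopwords, each word is scored by its degree divided by its frequency, and a phrase's score is the sum of its word scores. The English stopword list here is purely illustrative; the method itself is language-independent given a suitable stopword list:

```python
import re
from collections import defaultdict

# Illustrative stopword list; a real application would use a full list
# for the target language.
STOPWORDS = {"is", "a", "the", "of", "and", "for", "in", "to", "on", "from"}

def rake_keywords(text, top_n=3):
    """RAKE-style extraction: split at stopwords into candidate phrases,
    score each word by degree/frequency, and score a phrase as the sum
    of its word scores."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)

    freq, degree = defaultdict(int), defaultdict(int)
    for p in phrases:
        for w in p:
            freq[w] += 1
            degree[w] += len(p)  # word co-occurs with every word in its phrase

    word_score = {w: degree[w] / freq[w] for w in freq}
    scored = [(" ".join(p), sum(word_score[w] for w in p)) for p in phrases]
    scored.sort(key=lambda x: -x[1])
    return scored[:top_n]
```

Longer multi-word candidates accumulate higher degree scores, which is why RAKE tends to favour informative multi-word phrases over isolated frequent words.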
Yong-Hun Lee, Mi-Young Kim, and Jong-Hyeok Lee [4] present a method for chunking Korean texts using conditional random fields (CRFs), then a newly introduced probabilistic model for labelling and segmenting sequence data. In agglutinative languages (a type of synthetic language) such as Korean and Japanese, rule-based chunking is mostly used for its simplicity and efficiency. A hybrid of rule-based and machine learning methods has also been proposed to handle exceptional cases of the rules. Korean is an agglutinative language in which a word unit is a combination of a content word and function words. Postpositions, function words, and endings carry much information such as morphological relations, case, tense, etc. Well-established function words in Korean help with chunking, particularly NP and VP chunking.

3 PROPOSED ARCHITECTURE

Fig 3.1: System Overview

The system architecture describes the keyword extraction process. A paragraph is given as input to the system. Tokenization is the process of splitting the given text into units called tokens; the tokens may be words, numbers, or punctuation marks. The words are then mapped to the tagset. Part-of-speech tagging, or word-category disambiguation, is the process of labelling each word in a text (corpus) with its part of speech, based on both its definition and its context, i.e., its relationship with adjacent and related words in the phrase, sentence, or paragraph. A Support Vector Machine (SVM) is used to analyze the text and map words to the tagset. Chunking is an analysis of the sentence that identifies its constituents. A noun phrase chunk is a phrase that has a noun as its head, or that performs the same grammatical function as such a phrase. A Conditional Random Field (CRF) is used for labelling the sequences; the CRF gives higher accuracy than other machine learning methods. The chunked output is given as input to keyword extraction.
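The stages of Fig 3.1 can be sketched as a simple pipeline. The tagger and chunker below are illustrative stubs standing in for the SVM and CRF components; only the data flow matches the architecture:

```python
import re

def tokenize(paragraph):
    """Split the paragraph into word and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", paragraph)

def pos_tag(tokens):
    """Stub for the SVM tagger: words become NN, punctuation PUNCT."""
    return [(t, "PUNCT" if not t.isalnum() else "NN") for t in tokens]

def chunk(tagged):
    """Stub for the CRF chunker: group consecutive nouns into NP chunks."""
    chunks, current = [], []
    for word, tag in tagged:
        if tag == "NN":
            current.append(word)
        else:
            if current:
                chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks

def extract_keywords(paragraph):
    """Tokenize -> POS tag -> chunk -> take NP chunks as keyword candidates."""
    return [" ".join(c) for c in chunk(pos_tag(tokenize(paragraph)))]
```

In the full system, `pos_tag` is the trained SVM tagger and `chunk` is the trained CRF model; the final keyword-selection step additionally scores the NP candidates rather than returning them all.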
The CRF model is used to extract keywords from the noun phrases, which are defined as sequences of one or more words and which provide a compact description of the document or paragraph.

4 CONDITIONAL RANDOM FIELDS METHOD

The CRF model is a discriminative sequence-labelling model put forward by John Lafferty in 2001. In this paper, we give only a brief introduction to the CRF model and how it is used for labelling. It is a conditional distribution model over an undirected graph. Given an observed sequence, it computes the probability of the whole label sequence in order to find the optimal labelling. A CRF can express long-distance dependencies and overlapping features, which helps resolve the label (classification) bias problem and obtain the optimal result. For a given observation sequence x = x1 x2 ... xn, where each xi denotes a word in the sequence, we tag each word. For a CRF with weight parameters λ = λ1 λ2 ... λk, the probability of a label sequence y given the input sequence x is

P(y | x) = (1 / Z(x)) exp( Σt Σk λk fk(y(t-1), yt, x, t) )

where Z(x) is the normalization function, fk(y(t-1), yt, x, t) denotes a feature function, and λk is the weight parameter associated with fk, obtained through training. The most probable labelling sequence is then output.

5 IMPLEMENTATION AND RESULTS

A. Preprocessing and Tokenization

Tokenization is a form of preprocessing: the identification of the basic units to be processed. It is the process of splitting running text into words and sentences; the results of tokenization are tokens. A token is a structure describing a lexeme that explicitly indicates its categorization for the purpose of parsing. A paragraph is taken as input and the tokenization process is carried out, splitting the paragraph into tokens.

Fig 5.1 Tokenization

B. POS Tagging

Part-of-speech (POS) tagging is an approach for labelling each word in a sentence with a part of speech or other lexical class marker; it is similar to the tokenization process for computer languages. POS tagging is regarded as a significant step in speech recognition, information retrieval, document summarization, natural language parsing, text-to-speech conversion, and machine translation. Tamil, a Dravidian language, has a very rich morphological structure and is agglutinative: Tamil words are formed from lexical roots followed by one or more affixes, so tagging a word in a language like Tamil is very complex. The main obstacles in Tamil POS tagging are resolving the complexity and ambiguity of words. POS tagging is implemented using the Support Vector Machine (SVM) algorithm. SVMs are supervised learning methods for classification and regression analysis, used predominantly in applications such as face analysis and handwriting analysis. An SVM is an ideal classifier in the sense that, given training data, it learns a classifying hyperplane in the feature space that has the largest distance to the training examples. It is easy to train and provides high flexibility and accuracy.

Example: Fig 5.2 Tagged Sentences

C. Customized Tagset

For the POS level, a tagset is used that contains just the grammatical categories, excluding grammatical features, since those features can be obtained from the morphological analyzer. We needed a tagset with a minimum number of tags without compromising tagging efficiency, so our own tagset was developed for training and testing the POS-tagger generators. The tagset consists of 8 tags. A corpus of one hundred words was used for training and testing the accuracy of the tagger generators.
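The feature-based tagger setup described above can be illustrated with a small sketch that trains one linear weight vector per tag. A simple perceptron update stands in for the SVM training (both learn a linear separator over features), and the suffix features are a stand-in for Tamil affix information; this is not the system's actual tagger:

```python
from collections import defaultdict

def features(word):
    """Simple word-level features; suffixes stand in for Tamil affixes."""
    return {f"word={word.lower()}", f"suffix2={word[-2:]}", f"suffix3={word[-3:]}"}

def train_tagger(tagged_words, epochs=10):
    """Train one weight vector per tag with the perceptron rule
    (a stand-in for SVM training over the same feature space)."""
    weights = defaultdict(lambda: defaultdict(float))
    tags = sorted({t for _, t in tagged_words})  # sorted for determinism
    for _ in range(epochs):
        for word, gold in tagged_words:
            feats = features(word)
            pred = max(tags, key=lambda t: sum(weights[t][f] for f in feats))
            if pred != gold:
                # Reward the correct tag's features, penalize the wrong one's.
                for f in feats:
                    weights[gold][f] += 1.0
                    weights[pred][f] -= 1.0
    return weights, tags

def tag(word, weights, tags):
    """Assign the highest-scoring tag to a word."""
    return max(tags, key=lambda t: sum(weights[t][f] for f in features(word)))
```

The real tagger replaces the perceptron update with a max-margin (SVM) objective and uses a richer feature set, but the prediction step, a linear score per tag over word features, is the same.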
Table 5.1: POS Tagset

D. Chunking

A classical chunk consists of a single content word surrounded by a constellation of function words. Chunks are normally taken to be non-recursive, correlated groups of words. Tamil, being an agglutinative language, has a complex morphological and syntactic structure. It is a relatively free-word-order language, but in phrasal and clausal construction it behaves like a fixed-word-order language, so the process of chunking in Tamil is less complex than POS tagging. Different methodologies have been developed for chunking in different languages. The chunking task focuses on recognizing the chunks that consist of noun phrases (NPs), called NP chunking, and verb phrases (VPs), called VP chunking. Noun chunks are given the tag NP. They include non-recursive noun phrases and postpositional phrases. The head of a noun chunk is a noun. Noun qualifiers such as adjectives, quantifiers, and determiners form the left boundary of a noun chunk, and the head noun marks its right boundary.

Fig 5.3: NP Chunking

E. Chunk Tagset

Our customized chunk tagset contains ten tags and is shown in Table 5.2.

Table 5.2: Chunk Tagset

F. CRF Labelling and Training

The input is a paragraph. The paragraph is preprocessed and features are extracted. A CRF model has been trained to label the keyword type; the CRF model is an effective approach to keyword extraction, providing high accuracy and flexibility. The training data must be in a particular format: it consists of multiple tokens, where the tokens are words and a sequence of tokens forms a sentence. Each token is represented on one line, with its columns separated by white space. Any number of columns can be used, but the number of columns must be fixed across all tokens. The CRF can express long-distance dependencies and overlapping features. By defining the features in the ways stated above, each element of the data we are trying to model fits into a feature function that associates an attribute with a feasible label.

Fig 5.4: CRF Label and Training

G. Keyword Extraction

To select keywords from the document, the system determines the chunked phrases and feature values, and then applies the model built during training. The trained CRF model is used to determine the keywords that are most important to the paragraph. The model determines the overall probability for each NP, and a postprocessing step then selects the best set of keywords.
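Given labelled output in the token-per-line format described in section F, keyword candidates can be collected from the NP chunks. The three-column layout and BIO-style chunk labels below are illustrative assumptions based on that description, not the system's exact file format:

```python
def np_chunks(conll_lines):
    """Collect noun-phrase chunks from token-per-line output where each
    line is 'token POS chunk' and chunk labels use the BIO scheme
    (B-NP begins an NP, I-NP continues it, anything else ends it)."""
    chunks, current = [], []
    for line in conll_lines:
        if not line.strip():
            continue  # blank lines separate sentences
        token, _pos, chunk = line.split()
        if chunk == "B-NP":
            if current:
                chunks.append(" ".join(current))
            current = [token]
        elif chunk == "I-NP" and current:
            current.append(token)
        else:
            if current:
                chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks
```

The resulting NP list is what the postprocessing step scores with the trained CRF probabilities before selecting the final keyword set.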
Example: Fig 5.5: Final Keywords

Thus the keywords are extracted from the paragraphs, making the information easier to view and analyse.

6 CONCLUSION AND FUTURE WORK

Existing systems provide keyword extraction for English, and the chunking process has been carried out for various languages such as Arabic, Bengali, and Assamese. In this project, keywords are efficiently identified for the Tamil language. The Tamil keyword extractor provides an effective list of keywords for the given input. We analysed the performance of the keyword extraction algorithm for Tamil text with the CRF method. CRF is a state-of-the-art sequence labelling method and uses the features of documents sufficiently and effectively for efficient keyword extraction; keyword extraction itself can be considered a string labelling task. As with the noun-phrase keyword extraction methodology, the only requirement is that the language has a morphological analyzer and rules for finding simple noun phrases. Since nouns carry the bulk of the information, noun phrases are extracted and scored, and the shortest noun phrases among the highest scoring are then used as the keywords. In the future, a larger data set can be trained for extraction. With a large data set, we can easily extract keywords from a single document or from several documents. Automatic keyword extraction is also possible once the data set has been well trained, and it has given significantly good results for other languages.

REFERENCES

[1] Jasmeen Kaur, Vishal Gupta, Effective Approaches for Extraction of Keywords, IJCSI International Journal of Computer Science Issues, November 2011.
[2] Nabil Khoufi, Chafik Aloulou and Lamia Hadrich Belguith, Chunking Arabic Texts Using Conditional Random Fields, IEEE conference, 2014.
[3] Y. Matsuo, M. Ishizuka, Keyword Extraction from a Single Document using Word Co-occurrence Statistical Information, International Journal on Artificial Intelligence Tools, 2010.
[4] Yong-Hun Lee, Mi-Young Kim, and Jong-Hyeok Lee, Chunking Using Conditional Random Fields in Korean Text, Springer, 2005.
[5] Stuart Rose, Dave Engel, Nick Cramer and Wendy Cowley, Automatic Keyword Extraction from Individual Documents, ResearchGate, 2010.
[6] Asif Ekbal, Samiran Mandal, Sivaji Bandyopadhyay, POS Tagging Using HMM and Rule-based Chunking, Proceedings of the IJCAI, 2007.
[7] Fuchun Peng, Andrew McCallum, Accurate Information Extraction from Research Papers using Conditional Random Fields, Elsevier, 2013.
[8] Pattabhi R K Rao T, Vijay Sundar Ram R, Vijayakrishna R and Sobha L, A Text Chunker and Hybrid POS Tagger for Indian Languages, Proceedings of the IJCAI, 2013.
[9] Avinesh PVS, Karthik G, Part-of-Speech Tagging and Chunking using Conditional Random Fields and Transformation Based Learning, Proceedings of the IJCAI, 2014.
[10] Kamal Sarkar, Vivekananda Gayen, Bengali Noun Phrase Chunking Based on Conditional Random Fields, International Conference on Business and Information Management (ICBIM), 2014.
[11] Biplav Sarma, Anup Kumar Barman, A Comprehensive Survey of Noun Phrase Chunking in Natural Languages, Elsevier, 2015.
[12] Tianhang Wang, Shumin Shi, Congjun Long, An HMM-based Part-of-Speech Tagger and Statistical Chunk Boundary in Tibetan, IEEE conference, 2014.