Expert locator using concept linking. V. Senthil Kumaran* and A. Sankar


Int. J. Computational Systems Engineering, Vol. 1, No. 1, 2012

Expert locator using concept linking

V. Senthil Kumaran* and A. Sankar
Department of Mathematics and Computer Applications, PSG College of Technology, Peelamedu, Coimbatore 641 004, India
E-mail: vsk.mca@gapps.psgtech.ac.in
E-mail: dras@mca.psgtech.ac.in
*Corresponding author

Abstract: A common task in many applications is to find people who are knowledgeable about a given topic. The expert locator is a tool for finding people with relevant expertise and/or experience in a given subject, and its potential value is directly related to the size of the searchable population. This paper addresses the problem of expert finding in the educational and research domain and describes a novel method based on concept linking. Finding an expert in this domain is an important issue: it is necessary for problem consulting and solving, question answering, providing more detailed information on a topic, and team building. The locator is designed to provide instant searches for people based on their qualifications, teaching experience in the field, research, and activities such as publications, associations, colleagues, patents, awards and panels/boards. Experimental results show that this approach can outperform the baseline approach.

Keywords: expert locator; concept linking; concept extraction; expert finding; ontology.

Reference to this paper should be made as follows: Senthil Kumaran, V. and Sankar, A. (2012) 'Expert locator using concept linking', Int. J. Computational Systems Engineering, Vol. 1, No. 1, pp.42-49.

Biographical notes: V. Senthil Kumaran received his MSc in Computer Science from Madurai Kamaraj University, Madurai, India. He is currently an Assistant Professor in the Department of Mathematics and Computer Applications, PSG College of Technology, Coimbatore, India.
Since 2008, he has been working towards his PhD in Computer Science at the Faculty of Science and Humanities, PSG College of Technology. His research focuses on the semantic web, data mining, intelligent information retrieval and e-learning.

A. Sankar received his PhD in Computer Science from Bharathiar University, Coimbatore, India, in 2003. He is currently an Associate Professor in the Department of Mathematics and Computer Applications, PSG College of Technology, Coimbatore, India. He has more than 24 years of teaching and ten years of research experience. His research interests include agile software engineering, data mining, e-learning and networks.

1 Introduction

The expert locator addresses the task of identifying the right person with the appropriate skills and knowledge. In large organisations, it is standard to employ an expert locator system to find candidates who are well matched to a job requirement (Crow and DeSanto, 2005), and several studies have mined information from structured or semi-structured resume documents. These systems work by extracting meaningful concepts from resumes and matching those concepts to job descriptions. In any academic or research institute, it is necessary to find professional people and subject experts with certain attributes (concepts) in a particular subject area, for problem consulting or team building. This paper describes the significant features of such a system, which locates professional people from educational and research institutes all over the world. It is an interesting real-world problem and an important necessity for every academician and researcher. The expert locator described in this paper has the following four major steps:

1 crawling live data from educational web pages, blogs, news wires, article pages, etc.
2 concept detection and extraction: analysing the data and extracting concepts
3 concept linking: identifying links between the concepts, which helps the user make a decision or generate a hypothesis
4 faceted browsing.

A web crawler is a computer programme that browses the World Wide Web in a methodical, automated manner. Crawlers can be used to gather specific types of information from web pages.

Copyright 2012 Inderscience Enterprises Ltd.

There are many open-source tools available for crawling data; in this system, the Carrot2 tool was used to crawl data from the web. Concept detection and extraction is the task of automatically extracting concepts from unstructured documents. In this system, concepts represent the expert's personal and official information, educational and professional qualifications, experience in the subject, knowledge, skills, research information, publication details, etc. Concept extraction is a specific type of information extraction, as described in Gelfand et al. (1998), and is driven by domain-specific knowledge. Concept linking is the process of connecting related data/documents by identifying their commonly shared concepts, based on their co-occurrence and closeness within the document. A concept is an event/record/object with high relevance in its context. Figure 1 gives a brief overview of concept linking. The process is basically repetitive: the entire process is repeated until satisfactory results are obtained. Faceted browsing is a technique for accessing a collection of information represented using a faceted classification, allowing users to explore by filtering the available information.

The remainder of this paper is organised as follows. Section 2 gives a brief description of the system; an overview of related work is given in Section 3; the method devised for expert finding using concept linking is described in Section 4; the system workflow is presented in Section 5; Section 6 describes the ranking of experts; experimental results and discussion are reported in Section 7; finally, the work is concluded in Section 8.

Figure 1 Overview of concept linking

2 Problem description

The expert locator is a tool for finding people with relevant expertise and/or experience for a given subject.
The aim of the system is to provide the user with a list of experts matching their search criteria. The key requirements for the expert locator are the ability to:

1 identify experts
2 classify the type and level of expertise
3 validate the relevance of an expert
4 recommend experts by ranking.

Ours is a concept linking approach. Identifying experts is a difficult task because experts and their skills and knowledge are rare, expensive, (unevenly) distributed, difficult to qualify, continuously changing, varying in level, and often culturally isolated and oversubscribed. It is very difficult to judge a person's expertise using only one or two parameters. We use multiple parameters, such as a person's qualification, teaching experience, teaching subjects, department, research interests, and the context of their activities (publications as author and co-author, associations, patents and awards received, membership of boards/panels, involvement in discussion forums, etc.) to identify the expert. We collect this information from the person's home page and/or organisation's web page, publication repositories, social network sites, blogs, research forums, etc. The system provides the user with linked critical information, saving the effort of manually analysing and inferring from web pages, and the time the user might otherwise spend sifting through each page in search of an expert with specific skills. The system extracts details (concepts), semantically annotates the concepts by finding the category they belong to, and allows users to explore by filtering the available information.

3 Related work

Reflecting the growing interest in expert finding, TREC introduced an expert finding task in its enterprise track in 2005 (Craswell et al., 2006). At this track, it emerged that there are two principal approaches to expert finding. Several studies have since proposed various methods for the task.
All these methods can be classified into either the candidate model or the document model (Balog et al., 2006). In the document model, normal document retrieval is first carried out to score all the documents in the corpora; each candidate is then evaluated against the related documents. Since this approach is not domain specific, its accuracy is low: the rules for text in one document will not apply to another document. It is clear from Zhang et al. (2009) that the mean average precision is only 45.72 (less than 50%), so the document model yields lower accuracy. Candidate models (Fang and Zhai, 2007; Petkova and Croft, 2008) build a textual (usually term-based) representation of candidate experts and rank them against a query/topic using traditional ad-hoc retrieval models.

Previous studies on classifying names from text into particular categories, such as the names of people, places and organisations, focus on combining abundant rules or trigger words to enhance system performance. Using SVMs, high accuracy can be obtained (Lin et al., 2006). The advantage of rule-based concept annotators is that the rules are human comprehensible and can be tweaked to get the desired results (Deshpande et al., 2009). Participants in the Text Retrieval Conference (TREC 2007) investigated numerous methods for expert finding, including probabilistic and language modelling techniques (Bailey et al., 2007). Various studies have collected evidence of a person's knowledge and skill using the relationships between people. As given in the problem description, the first key requirement of the expert locator is the ability to identify experts. Several approaches have been proposed to automatically identify people's expertise. For example, Campbell et al. (2003) and Balog and de Rijke (2006) used e-mail networks; Zhang et al. (2010) and Fu et al. (2007) used social networks; Karimzadehgan et al. (2009) used the organisational hierarchy; Breslin et al. (2007) proposed an approach to identify expertise using discussion forums, user communities and social networks; Zhang et al. (2009) incorporated similar people to collect evidence; Tang et al. (2007) identified experts from their publications in DBLP; and McDonald and Ackerman (2010) used an experts database to recommend experts. Escobar-Molano et al. (2009) studied the usability of tags for concept extraction and for determining equivalence relations between concepts based on their associated tag sets. There are different approaches to assigning each word of a text a part-of-speech tag. Fahim et al. (2007) made a comparative study of different POS tagging techniques and suggested the rule-based tagger as the best one. Cunningham et al. (2002) presented the GATE tagger framework, which is rule based. PoolParty is a SKOS-based thesaurus management tool with linked data capabilities, proposed by Schandl and Blumauer (2010).

All the existing approaches use one or two parameters to identify a person's expertise, but it is very difficult to judge expertise from only one or two events. In this paper, we propose a method for expert finding using concept linking, based on the candidate model. We use multiple events, such as a person's qualification, teaching experience, teaching subjects, department, research interests, and the context of their activities (publications as author and co-author, associations, patents and awards received, membership of boards/panels, involvement in discussion forums, etc.) to identify the expert. We collect this information from the person's home page and/or organisation's web page, publication repositories, social network sites, blogs, research forums, etc.

4 Overall system architecture

The proposed method uses a concept linking approach to identify people's expertise. The system consists of the following three main layers, and the overall architecture is given in Figure 2:

1 data layer
2 control (processing) layer
3 presentation layer.

Figure 2 Overall system architecture

The data layer holds all the data required by the system; its input comes from the web crawler. The processing/control layer does the major text processing, extracting and linking the concepts. There are three components in this layer. The preprocessor component finds and validates the input coming from the data store, and converts its format to a common machine-usable one. The concept extractor tokenises and POS-tags the preprocessed text, then recognises and normalises the named entities.
The normalised text is then parsed to semantically extract categorised details. The concept linker finds relationships between the concepts extracted by the concept extractor. The presentation layer lets the user view the linked concepts; the user can choose or filter concepts and view only what is necessary. This system visualises the concepts as facets.

5 System workflow

The overall system workflow is given in Figure 3. The expert locator starts by crawling live data from various websites, converts the data to a common format, mines the converted data and visualises the mined data. Web crawling and conversion to a common format form the preprocessing step of the system, while the faceted browser provides the visualisation.
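As a rough, illustrative sketch of this workflow, the convert-extract-link chain might look like the following. The crawl step is stubbed with a static page, and every class, function and sample string here is ours, not the system's:

```python
# Illustrative end-to-end sketch of the Figure 3 workflow. The crawl step is
# stubbed with a static page; all names are illustrative, not the system's.
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Reduce crawled HTML to plain text (the 'common format' of Section 5.2)."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        if data.strip():
            self.parts.append(data.strip())

def html_to_text(html):
    parser = TextOnly()
    parser.feed(html)
    return " ".join(parser.parts)

STOPWORDS = {"is", "a", "in", "at", "and", "of"}

def extract_concepts(text):
    """Keep content words as candidate concepts (stopwords dropped)."""
    return [w.strip(".,") for w in text.split() if w.lower() not in STOPWORDS]

def link_concepts(concepts):
    """Link every concept to the concepts it co-occurs with in the document."""
    return {c: set(concepts) - {c} for c in set(concepts)}

page = "<html><body><p>Sankar is a Professor of Data Mining at PSG</p></body></html>"
text = html_to_text(page)                       # crawled page -> common format
facets = link_concepts(extract_concepts(text))  # linked concepts for browsing
```

The real system uses co-occurrence and closeness within the document to decide which concepts to link; the set-based linking above is only a stand-in for that step.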

Figure 3 Expert locator workflow

Figure 4 Processes involved in concept extraction: text, tokenisation, POS tagging, named entity recognition (NER), number normalisation, shallow parsing, semantic parsing, coreference linking, concepts

5.1 Crawling data

This system is not organisation specific and requires data from various educational websites, research forums, news sites, blogs, etc. The system needs both historic and live data. Input data has to flow into the system from the enormous digital resources readily available from sources such as the WWW. Since the expert locator requires information about all academicians and researchers, this data has to be crawled into the system using web crawlers.

5.2 Conversion to common format

To identify the expertise of experts, we need to collect data from many heterogeneous data sources. This phase converts all the crawled data to text format. Each generated file is added to the data store.

5.3 Concept extraction

This phase identifies and extracts concepts such as name, address, skill, experience, publications, associations, colleagues, patents, awards, panels/boards and other details. Figure 4 shows the processes involved in the extraction phase. Tokenisation is performed using the OpenNLP parser: tokens are extracted using regular expressions that recognise whitespace, punctuation marks, hyphens and numbers, among others. Tokenisation sometimes faces challenges in finding the start and end of a token, because not all tokens are made of one word. For example, 'operating systems' must be chunked as a single token and not two, and 'PSG College of Technology' must be chunked as a single token and not four. We employed the GATE tagger, which is rule based, to predict the part of speech (POS) of each word. Some of the transformation rules used for tagging are:

1 convert a noun to a number (CD) if '.' appears in the word
2 convert a noun to a past participle if the word ends with 'ed'
3 convert any type to an adverb if the word ends in 'ly'.

The role of NER is to classify names from text into particular categories using machine learning or statistical methods. The support vector machine, a powerful machine learning tool, has been widely used in text categorisation for NER (Lin et al., 2006). NER is now firmly established as a key technology for understanding the low-level semantics of texts. Its main role is to identify expressions such as dates and times, as well as names of people, places and organisations. Among the extracted tokens, relevant features are extracted, and the frequency of the words is taken into account.
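The three transformation rules above can be expressed directly in code. This is a minimal sketch: the Penn-style tags (CD, VBN, RB) and the function name are our assumptions, since the paper does not show its implementation:

```python
# Sketch of the three rule-based POS corrections described in Section 5.3.
# Tag names follow the Penn convention; the initial tags are assumed inputs.

def apply_rules(word, tag):
    if tag.startswith("NN") and "." in word:
        return "CD"   # noun containing '.' -> number
    if tag.startswith("NN") and word.endswith("ed"):
        return "VBN"  # noun ending in 'ed' -> past participle
    if word.endswith("ly"):
        return "RB"   # any word ending in 'ly' -> adverb
    return tag

tagged = [("3.14", "NN"), ("published", "NN"), ("quickly", "JJ"), ("system", "NN")]
corrected = [(w, apply_rules(w, t)) for w, t in tagged]
```

Rule order matters here: the number rule is checked before the 'ed' rule so that a noun containing both '.' and a trailing 'ed' is treated as a number first.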

All nouns and verbs are retained as concepts, and stopwords are eliminated. This phase thus gives us the concepts. These concepts then have to be linked to the category they belong to; that is, the extracted concepts have to be annotated.

5.4 Concept linking

The extracted concepts are then filtered and annotated: only the necessary concepts are retained, and each concept is linked to its category. To link each concept to the correct category and annotate the extracted entities, GATE is used. GATE requires gazetteers and JAPE rules to be written. Gazetteers serve as databases of known named entities; they are lists maintained for NER. We maintain 140 gazetteers (city, country, etc.) and, in addition, write rules to identify named entities that have patterns. For instance, the e-mail extraction module extracts the e-mail addresses available in the text by looking for the pattern in the text and then validating the pattern of the address. For recognising entities with no unique pattern, JAPE rules are written; these make use of the available gazetteers. Figure 5 shows a sample main JAPE file, which lists the gazetteer and JAPE rule files available as resources for annotating named entities in documents. Figure 6 shows a sample JAPE rule that helps annotate the cities found in a document. The annotated concepts from all sources are stored in an ontology document, which is the input to the faceted browser. Figure 7 shows part of the concepts and relations in the expert ontology document. In addition, the faceted browser needs the list of facets and attributes to be displayed, which is automatically generated from the ontology document.

Figure 7 Part of concepts and relations in ontology
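A toy approximation of this gazetteer-plus-pattern annotation: a dict of small lists stands in for the 140 GATE gazetteers, and a regular expression plays the role of the e-mail pattern rule. All entries and names here are illustrative:

```python
# Toy stand-in for gazetteer lookup plus pattern rules (Section 5.4).
# Real GATE gazetteers are far larger; these lists are illustrative.
import re

GAZETTEERS = {
    "city": {"Coimbatore", "Madurai"},
    "country": {"India"},
}
# Pattern rule for e-mail addresses, analogous to the e-mail extraction module.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

def annotate(text):
    """Return (entity, category) pairs found by gazetteer lookup or pattern."""
    annotations = []
    for raw in text.split():
        token = raw.strip(".,:;")
        for category, entries in GAZETTEERS.items():
            if token in entries:
                annotations.append((token, category))
    annotations += [(m.group(), "email") for m in EMAIL_RE.finditer(text)]
    return annotations

sample = "PSG College of Technology, Coimbatore, India. Mail: vsk.mca@gapps.psgtech.ac.in"
found = annotate(sample)
```

Entities with no unique surface pattern (the JAPE-rule case) would need contextual rules over the gazetteer matches, which this sketch omits.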
Figure 5 Sample main JAPE file
Figure 6 Sample JAPE rule

5.5 Faceted browsing

Faceted browsing is a technique for accessing a collection of information represented using a faceted classification, allowing users to explore by filtering the available information. A faceted classification system allows multiple classifications to be assigned to an object, enabling the classifications to be ordered in multiple ways rather than in a single, pre-determined taxonomic order.

6 Ranking experts

It is important to rank all the experts meeting the search criteria in order to present users with the most relevant experts; we need to differentiate experts based on relevance. We adopt a relative ranking method. After storing all the annotated concepts from all sources in an ontology document, each parameter is given a weightage. The total weight (EW) is computed for each expert, and a frequency table of weights is prepared for all n experts meeting the search criteria. The mean μ and standard deviation σ of the distribution of weights are computed as follows:

μ = (1/n) Σ_{i=1}^{n} EW_i (1)

σ = sqrt( (1/n) Σ_{i=1}^{n} (EW_i - μ)² ) (2)

The rank of each expert meeting the search criteria is awarded based on the statistical parameters μ and σ of the distribution of weights, as detailed in Table 1.

Table 1 Rank calculation using total weight

Total weight EW secured by the expert    Rank
EW >= min(μ + 2σ, 92)                    1
μ + 1.2σ <= EW < μ + 2σ                  2
μ + 0.4σ <= EW < μ + 1.2σ                3
μ - 0.4σ <= EW < μ + 0.4σ                4
μ - 1.2σ <= EW < μ - 0.4σ                5
μ - 2σ <= EW < μ - 1.2σ                  6
EW < μ - 2σ                              0

In this ranking scheme, ranks are awarded to the experts based on their total weightage relative to the others, with 1 as the highest rank and 6 as the lowest; rank 0 means the expert is not considered qualified. From the ontology, all facets (categories) are generated and loaded for faceted browsing, from which the user can search any category. Figure 8 shows the loaded facets.

Figure 8 Screen shot showing the loaded facets (see online version for colours)

7 Experimental results and discussion

Earlier studies on expert finding concentrated on mining information from structured resume documents that match a given job requirement. The most common method for searching a large set of documents is keyword search, which returns all documents containing the keyword (Agrawal et al., 2002). Many keyword-based search engines can rank the matched documents (Kasneci et al., 2008). Another approach is semantic-based search engines, which can understand the document and return all documents semantically related to the given query (Mangold, 2007). This approach gives more relevant results than keyword search engines, but still makes errors. For example, the phrase 'sales manager' frequently occurs on resumes, which can lead to associating the words 'sales' and 'manager'; a query for 'engineering manager' then returns candidates with the word 'sales' on their resume, which is incorrect (Crow and DeSanto, 2005). All the existing approaches use one or two parameters to identify a person's expertise, but it is very difficult to judge expertise using only one or two parameters.
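The ranking scheme of Section 6, i.e., equations (1) and (2) plus the thresholds of Table 1, can be sketched as follows; the sample weights are invented:

```python
# Sketch of the relative-ranking scheme: compute the mean and standard
# deviation of the experts' total weights, then award ranks per Table 1.
# The sample weights at the bottom are invented for illustration.
import math

def rank_experts(weights):
    """Rank experts by total weight EW relative to the group (Table 1)."""
    n = len(weights)
    mu = sum(weights.values()) / n                                       # eq. (1)
    sigma = math.sqrt(sum((w - mu) ** 2 for w in weights.values()) / n)  # eq. (2)
    def rank(ew):
        if ew >= min(mu + 2 * sigma, 92):
            return 1
        for r, k in ((2, 1.2), (3, 0.4), (4, -0.4), (5, -1.2), (6, -2.0)):
            if ew >= mu + k * sigma:
                return r
        return 0  # below mu - 2*sigma: not qualified
    return {name: rank(ew) for name, ew in weights.items()}

ranks = rank_experts({"A": 90, "B": 70, "C": 55, "D": 40})
```

Because the thresholds are relative to μ and σ of the matched group, the same EW can earn different ranks depending on who else matches the query, which is the point of the relative ranking method.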
We have built an expert locator system using a concept linking approach that allows users to gain an understanding of local expertise. We use multiple events, such as a person's qualification, teaching experience, teaching subjects, department, research interests, and the context of their activities (publications as author and co-author, associations, patents and awards received, membership of boards/panels, involvement in discussion forums, etc.) to identify the expert. Since multiple parameters are used to identify expertise, the accuracy of our method is high. The concept linking approach used in this paper gives more relevant and accurate results than the other approaches, because it extracts all concepts from both historic and live data, links all related concepts to the relevant category, and stores them as an ontology document. For example, 'operating systems' will be chunked as a single token and not two. We compared our method with some existing products that do concept linking: PoolParty, SAS Text Miner, MALLET, LitLinker and AeroText. None of these existing systems is domain specific, and hence their accuracy is low: the rules for text in one domain will not apply to another, so aiming to create a generic solution results in a less accurate system. Moreover, many of these systems cannot be customised for a specific purpose and thus do not meet the needs of text analytics for any specific vertical. None of the available systems provides a database- or gazetteer-independent solution for NER. Their entity extraction accuracy is quite low, and they do not work well for unstructured text, which lacks detailed descriptions or well-formed sentences. The collected information is stored as text files; these are basically semi-structured and may not contain full sentences from which to analyse the context.
Since the expertise of experts is represented in an ontology document, it is very easy:

1 to find detailed information about an expert by name
2 to find the list of experts in a given field
3 to find the persons related to an expert
4 to find the projects undertaken by an expert
5 to find an expert's teaching and research experience in the field
6 to find the number of PhD students supervised by an expert.

These types of simple queries can be answered very easily. We use SPARQL to query the ontology document, and we see clear advantages in representing an expert's expertise in an ontology. Since the major objective of this system is to retrieve experts, ranking ability is also very important. To evaluate the system's ranking accuracy, we compared the relative ranking method with the voting technique proposed by Macdonald and Ounis (2008) (a score aggregation technique in which the score of a document is the sum of the normalised scores it receives in each individual ranking) and with the baseline (where the integrated information of each expert is treated as a single document). Table 2 compares the relative ranking technique with voting. The system performance is enhanced significantly by relative ranking: both MAP and P@5 improve. This is because the rank is computed from each expert's weightage relative to the others, and the weights of all parameters are taken into account in the relative ranking approach.

Table 2 Comparison of relative ranking technique with voting

Evaluation criteria    MAP     P@2     P@5
Voting                 0.670   0.725   0.530
Relative ranking       0.842   0.924   0.651

Commercially available solutions to expert or expertise finding have appeared in the marketplace; Maybury (2006) gives an overview of some expert finding models. Finally, we compare our model with the existing expert finding tools. Table 3 shows the comparison of our model with the existing models.
Table 3 Comparison of our model with existing models (see online version for colours)

Models compared include the proposed model and ExpertLocator.
Sources used to collect information: publication repositories, personal home pages, discussion forums/news wires, and behaviour.
Techniques to identify expertise: social network analysis, entity extraction, author identification and, for the proposed model, concept linking.
Support of searching: taxonomy, support for foreign languages, and faceted browsing.
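The simple lookups enumerated earlier are issued as SPARQL queries over the ontology document. A minimal sketch, in which the `ex:` namespace and all class and property names (ex:Expert, ex:name, ex:hasExpertise, ex:supervises) are hypothetical stand-ins, not the paper's actual schema:

```sparql
# Hypothetical vocabulary: ex:Expert, ex:name, ex:hasExpertise and
# ex:supervises are illustrative names only.
PREFIX ex: <http://example.org/expertise#>

# List experts in a given field together with the number of
# PhD students each supervises (query types 2 and 6 above).
SELECT ?name (COUNT(?phd) AS ?phdStudents)
WHERE {
  ?expert a ex:Expert ;
          ex:name ?name ;
          ex:hasExpertise ex:DataMining .
  OPTIONAL { ?expert ex:supervises ?phd . }
}
GROUP BY ?name
```

Because expertise, projects and supervision are explicit relations in the ontology, each of the six query types reduces to a similar triple pattern rather than a free-text search.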

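The voting baseline of Macdonald and Ounis (2008) compared against in Table 2 sums, for each candidate, the normalised scores received in each individual ranking. A minimal CombSUM-style sketch; the per-source rankings and score values are purely illustrative:

```python
# Sketch of the voting baseline from Table 2: each candidate's score is
# the sum of its normalised scores across the individual rankings.
# The rankings below are illustrative, not the paper's data.

def normalise(scores):
    """Scale a ranking's scores into [0, 1] by its maximum score."""
    top = max(scores.values())
    return {cand: s / top for cand, s in scores.items()}

def combsum(rankings):
    """Sum each candidate's normalised score over all rankings."""
    totals = {}
    for ranking in rankings:
        for cand, s in normalise(ranking).items():
            totals[cand] = totals.get(cand, 0.0) + s
    # Highest aggregate score first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Two hypothetical per-source rankings of candidate experts
publications = {"expert_a": 8.0, "expert_b": 4.0, "expert_c": 2.0}
projects = {"expert_b": 5.0, "expert_c": 5.0}

ranked = combsum([publications, projects])
```

The relative ranking technique differs in that it weights each parameter and computes every expert's weightage relative to the other candidates, rather than summing per-ranking scores independently.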
8 Conclusions

Expert finding is an important and challenging task. In this paper, we have proposed an expert locator using concept linking, which facilitates finding people with relevant expertise and/or experience in a given subject. The expert locator extracts concepts from web documents. The proposed method enables persons working in the same field to find each other for collaboration and/or problem consultation. Our experimental results show that the method gives useful results, with accuracies between 84% and 92%. In future work, to achieve the highest possible accuracy, the NER will be enhanced and every expert will be evaluated based on their level of involvement in the field. User satisfaction with the results will also be used to rank experts. We have used only 140 gazetteers; we will add more gazetteers to improve the results.

References

Agrawal, S., Chaudhuri, S. and Das, G. (2002) DBXplorer: a system for keyword-based search over relational databases, Proceedings of the 18th International Conference on Data Engineering, pp.5–16.
Bailey, P., Craswell, N., de Vries, A.P. and Soboroff, I. (2007) Overview of the TREC 2007 enterprise track, TREC 2007 Working Notes.
Balog, K. and de Rijke, M. (2006) Finding experts and their details in e-mail corpora, 15th International Conference on World Wide Web Conference Proceedings, pp.1035–1036.
Balog, K., Azzopardi, L. and de Rijke, M. (2006) Formal models for expert finding in enterprise corpora, 29th Annual International ACM SIGIR Conference Proceedings, pp.43–50.
Breslin, J., Bojars, U., Aleman-Meza, B., Boley, H., Mochol, M., Nixon, L.J.B., Polleres, A. and Zhdanova, A.V. (2007) Finding experts using internet-based discussions in online communities and associated social networks, 1st International ExpertFinder Workshop Conference Proceedings.
Campbell, C.S., Maglio, P.P., Cozzi, A. and Dom, B. (2003) Expertise identification using e-mail communications, 12th International Conference on Information and Knowledge Management Conference Proceedings, pp.528–531.
Craswell, N., de Vries, A.P. and Soboroff, I. (2006) Overview of the TREC-2005 enterprise track, The 14th Text REtrieval Conference Proceedings.
Crow, D. and DeSanto, J. (2005) A hybrid approach to concept extraction and recognition-based matching in the domain of human resources, 16th IEEE International Conference on Tools with Artificial Intelligence Conference Proceedings, Vol. 1, pp.535–541.
Cunningham, H., Maynard, D., Bontcheva, K. and Tablan, V. (2002) GATE: a framework and graphical development environment for robust NLP tools and applications, 40th Anniversary Meeting of the ACL Proceedings.
Deshpande, P.M., Hampp, T. and Hosurmath, M. (2009) Automated concept extraction to aid legal ediscovery review, 15th International Conference on Management of Data.
Escobar-Molano, M.L., Badia, A. and Alonso, R. (2009) Exploiting tags for concept extraction and information integration, Proceedings of CollaborateCom, pp.1–9.
Fahim, H., Naushad, U. and Mumit, K. (2007) Comparison of different POS tagging techniques for Bangla, in Elleithy, K. (Ed.): Advances and Innovations in Systems, Computing Sciences and Software Engineering, pp.121–126, Springer Publications.
Fang, H. and Zhai, C. (2007) Probabilistic models for expert finding, in Amati, G., Carpineto, C. and Romano, G. (Eds.): ECIR 2007, LNCS, Vol. 4425, pp.418–430, Springer, Heidelberg.
Fu, Y., Xiang, R., Liu, Y., Zhang, M. and Ma, S. (2007) Finding experts using social network analysis, IEEE/WIC/ACM International Conference on Web Intelligence Conference Proceedings, pp.77–80.
Gelfand, B., Wulfekuhler, M. and Punch, W.F., III (1998) Automated concept extraction from plain text, AAAI Workshop on Text Categorization Technical Report.
Karimzadehgan, M., White, R.W. and Richardson, M. (2009) Enhancing expert finding using organizational hierarchies, Proceedings of ECIR, pp.177–188.
Kasneci, G., Suchanek, F.M., Ifrim, G., Ramanath, M. and Weikum, G. (2008) NAGA: searching and ranking knowledge, Proceedings of ICDE, pp.1285–1288.
Lin, X-D., Peng, H. and Liu, B. (2006) Chinese named entity recognition using support vector machines, Proceedings of the 5th International Conference on Machine Learning and Cybernetics, pp.4216–4220.
Macdonald, C. and Ounis, I. (2008) Voting techniques for expert search, Knowledge and Information Systems, Vol. 16, No. 3, pp.259–280.
Mangold, C. (2007) A survey and classification of semantic search approaches, International Journal of Metadata, Semantics and Ontologies, Vol. 2, No. 1, pp.23–34.
Maybury, M.T. (2006) Expert finding systems, MITRE Technical Report.
McDonald, D.W. and Ackerman, M.S. (2010) Expertise recommender: a flexible recommendation system and architecture, Proceedings of CSCW, pp.231–240.
Petkova, D. and Croft, W.B. (2008) Hierarchical language models for expert finding in enterprise corpora, International Journal on Artificial Intelligence Tools, Vol. 17, No. 1, pp.5–18.
Schandl, T. and Blumauer, A. (2010) PoolParty: SKOS thesaurus management utilizing linked data, ESWC 2010, Heraklion, The Semantic Web: Research and Applications, Vol. 6089/2010, pp.421–425, Springer Publications.
Tang, J., Zhang, J., Zhang, D., Yao, L., Zhu, C. and Li, J. (2007) ArnetMiner: an expertise oriented search system for web community, International Conference on Semantic Web Challenge Conference Proceedings.
Zhang, J., Tang, J. and Li, J. (2010) Expert finding in a social network, Advances in Databases: Concepts, Systems and Applications, Lecture Notes in Computer Science, Vol. 4443/2010, pp.1066–1069, Springer.
Zhang, W., Ma, J. and Zhong, Y. (2009) Incorporating similar people for expert finding in enterprise corpora, International Conference on Computational Intelligence and Software Engineering, pp.1–4.