INTERSPEECH 2015

Evaluation of Re-ranking by Prioritizing Highly Ranked Documents in Spoken Term Detection

Kazuki Oouchi 1, Ryota Konno 1, Takahiro Akyu 1, Kazuma Konno 1, Kazunori Kojima 1, Kazuyo Tanaka 2, Shi-wook Lee 3 and Yoshiaki Itoh 1*
1 Iwate Prefectural University, Japan
2 Tsukuba University, Japan
3 National Institute of Advanced Industrial Science and Technology, Japan
* y-itoh@iwate-pu.ac.jp

Abstract

In spoken term detection, the detection of out-of-vocabulary (OOV) query terms is very important because of the high probability of OOV query terms occurring. This paper proposes a re-ranking method for improving the detection accuracy for OOV query terms after candidate sections have been extracted by a conventional method. The candidate sections are ranked by using dynamic time warping (DTW) to match the query terms against all available spoken documents. Because highly ranked candidate sections are usually reliable, and users are assumed to input query terms that are specific to and appear frequently in the target documents, we prioritize candidate sections contained in highly ranked documents by adjusting the matching score. Experiments were conducted to evaluate the performance of the proposed method, using open test collections for the SpokenDoc-2 task in the NTCIR-10 workshop. Results showed that the mean average precision (MAP) was improved by more than 7.0 points by the proposed method for the two test sets. The proposed method was also applied to the results obtained by other participants in the workshop, where the MAP was improved by more than 6 points in all cases. This demonstrates the effectiveness of the proposed method.

Index Terms: spoken term detection, re-ranking, re-scoring, out-of-vocabulary query terms

1. Introduction

Research on spoken document retrieval (SDR) and spoken term detection (STD) is actively conducted in an effort to enable efficient searching of the vast quantities of audiovisual data [1]-[3] that have accumulated following the rapid increase in the capacity of recording media such as hard disks and optical disks in recent years. Conventional STD systems generate a transcript of speech data using an automatic speech recognition (ASR) system for finding in-vocabulary query terms at high speed, and a subword recognition system for detecting out-of-vocabulary (OOV) query terms that are not included in the dictionary of the ASR system. Because query terms are in fact likely to be OOV terms (such as technical terms, geographical names, personal names and neologisms), STD systems must include a method for detecting such terms, which is usually done by using subwords such as monophones, triphones and syllables [4][5].

This paper proposes a method for improving the retrieval accuracy with respect to OOV query terms. Our subword-based STD system for OOV query terms compares a query subword sequence with all of the subword sequences in the spoken documents and retrieves the target sections by applying a dynamic time warping (DTW) algorithm continuously. Each candidate section is assigned the distance obtained by DTW, its location and its spoken document ID. We propose a re-scoring method to improve the retrieval accuracy after extracting the candidate sections, which are ranked by DTW distance. We give a high priority to candidate sections contained in highly ranked documents by adjusting their DTW distances.
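As an illustration of this matching step, the following is a minimal Python sketch of subsequence DTW between a query subword sequence and one document's subword sequence. The function name, the handling of unseen subword pairs, and the one-candidate-per-end-position output are assumptions made for the sketch; they are not the authors' implementation.

```python
from typing import Dict, List, Tuple

def dtw_search(query: List[str], doc: List[str],
               subword_dist: Dict[Tuple[str, str], float]) -> List[Tuple[int, int, float]]:
    """Continuously match a query subword sequence against a document's
    subword sequence with subsequence DTW.  Returns (start, end, distance)
    triples, one per document position where a match of the full query can
    end; the matched span is doc[start:end] and the distance is normalized
    by the query length."""
    n, m = len(query), len(doc)
    if n == 0:
        return []
    INF = float("inf")
    # dist[i][j]: best accumulated cost of aligning query[:i] with a span
    # ending at doc position j; row 0 is free, so a match may start anywhere.
    dist = [[INF] * (m + 1) for _ in range(n + 1)]
    start = [[0] * (m + 1) for _ in range(n + 1)]
    for j in range(m + 1):
        dist[0][j] = 0.0
        start[0][j] = j                      # match would begin at doc index j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # local distance between two subword labels, taken from the
            # statistical subword distance matrix (default cost for unseen pairs)
            d = subword_dist.get((query[i - 1], doc[j - 1]), 1.0)
            prev = min((dist[i - 1][j - 1], start[i - 1][j - 1]),
                       (dist[i - 1][j],     start[i - 1][j]),
                       (dist[i][j - 1],     start[i][j - 1]))
            dist[i][j] = d + prev[0]
            start[i][j] = prev[1]
    return [(start[n][j], j, dist[n][j] / n) for j in range(1, m + 1)]
```

In practice, overlapping hypotheses would be merged (or, as in the NTCIR setting described below, matching is evaluated per pause-delimited utterance), and the candidates from all documents are pooled with their spoken document IDs and ranked by distance.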
The basic idea behind the proposed method is that query terms with a high TF-IDF value are likely to be selected, so that query terms tend to be found several times in a small number of documents. The precision among highly ranked candidate sections is usually high, and such candidates are reliable. Therefore, we prioritize the distances of candidate sections that appear in documents that already contain highly ranked candidate sections.

In previous work, the STD accuracy was improved by re-scoring candidate sections on the basis of acoustic scores in a second stage [6][7]. In [8], the STD accuracy was improved by acoustic comparison of a candidate section with highly ranked candidate sections. The method proposed here uses the documents that contain highly ranked candidate sections, rather than acoustic information about highly ranked candidate sections, for the detection of OOV query terms.

In this paper, we evaluate a re-ranking method that uses the DTW distances of the top T candidate sections on open test collections for the SpokenDoc-2 task in the NTCIR-10 workshop held in 2013. We also apply the proposed method to the results submitted to the workshop by other participants.

2. Proposed method

2.1. STD system for OOV query terms

In the proposed STD system for OOV query terms (Figure 1) [9][10], the first step (subword recognition) is performed for all spoken documents, and subword sequences for the spoken documents are prepared in advance using a subword acoustic model, a subword language model (based, for example, on subword bigrams or trigrams), and a subword distance matrix (1). The system supports both text and speech queries (2). When a user inputs a text query, the text is automatically converted into a subword sequence according to conversion rules (3). In the case of Japanese, the phoneme sequence corresponding to the pronunciation of the query term is automatically obtained when a user inputs a query term. For speech queries, the system performs subword recognition and transforms the utterance into a subword sequence in the same manner as for the spoken documents. We focus on text queries in this paper.

Figure 1: Outline of the STD method based on subword recognition.

In the retrieval step (4), the system retrieves candidate sections using a DTW algorithm by comparing the query subword sequence to all subword sequences in the spoken documents. The local distance refers to the distance matrix that represents subword dissimilarity and contains the statistical distance between any two subword models. Although the edit distance is representative of local distances in string matching, we have previously proposed a method for calculating the phonetic distance between subwords [11] to improve the STD accuracy. The system outputs candidate sections that show a high degree of similarity to the query subword sequence. Each candidate section is assigned a distance (DTW_dist), a location (loc) and a spoken document ID (SID). The candidate sections are ranked according to DTW_dist.

In the evaluation performed in the NTCIR-10 workshop, spoken documents are divided into utterances on the basis of pauses (silence sections lasting more than 200 ms), and a candidate section denotes an utterance. If a candidate section contains one or more query terms, the candidate section is regarded as correct, because word time stamps are not attached to the spoken documents. In this paper, we adopt the evaluation method presented in the workshop.

2.2. Proposed method: prioritizing sections in highly ranked documents

This section describes the proposed method in detail, in which high priority is given to candidate sections contained in highly ranked documents. Because a user is likely to select query terms with a high TF-IDF value, as mentioned in the Introduction, such query terms appear several times in a small number of spoken documents. Generally speaking, in STD, highly ranked candidate sections are reliable, as suggested by the high precision rate of the top candidate sections. We analyze the highly ranked candidate sections for each query term and the occurrences of the query terms. Figure 2 shows the precision rates of the top 10 candidate sections (the average for 30 query terms). The precision rate is higher than 80% for the top 3 candidate sections and higher than 60% for all 10 candidate sections.

It is assumed that a user selects query terms that are specific to and appear frequently in the target documents. For the 30 test query terms, there were 177 relevant spoken documents containing 653 relevant sections, for an average of about 3.7 relevant sections per document. Thus, the input query terms can be expected to appear frequently in the target documents.

Figure 2: Precision rates for the top 10 candidate sections (average values for 30 query terms).

The above analysis demonstrates that highly ranked candidate sections are reliable and that the query terms appear several times in the same spoken document. We apply this knowledge to the re-ranking process: we prioritize candidate sections that appear in documents already containing highly ranked candidate sections. We believe that this enables correct but low-ranked candidate sections to be ranked higher, thus improving the STD accuracy.
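Because the re-scoring described in the next subsection is carried out within each document, the pooled candidate list would first be grouped by spoken document ID. A minimal sketch, assuming each candidate is a (SID, loc, DTW_dist) tuple as output by the matching step (the tuple layout and function name are illustrative):

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Candidate = Tuple[str, int, float]   # (SID, loc, DTW_dist)

def group_by_document(candidates: List[Candidate]) -> Dict[str, List[Candidate]]:
    """Group pooled candidate sections by spoken document ID and order each
    document's candidates from best (smallest DTW_dist) to worst."""
    groups: Dict[str, List[Candidate]] = defaultdict(list)
    for cand in candidates:
        groups[cand[0]].append(cand)
    for sid in groups:
        groups[sid].sort(key=lambda c: c[2])
    return dict(groups)
```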
2.3. Re-scoring: prioritizing highly ranked documents

For a query term, let spoken document DOC_A contain several sections where the query term is spoken, as mentioned in the previous section. Considering the ith candidate in DOC_A, the average of the distances of the top candidate through the (i-1)th candidate in DOC_A is small, because some of those i-1 candidate sections are relevant and have small distances. We introduce this idea into the following re-ranking process.

Re-ranking is carried out in order from the highest-ranked to the lowest-ranked candidate section within the same document, according to their DTW distances. Let D(l, i) be the DTW distance of the ith candidate section in the lth spoken document. Equation (1) states that the top candidate (i = 1), which has the minimal distance in the lth spoken document, keeps its original distance. Equation (2) defines the new distance newD(l, i) by combining the original distance of the ith candidate in the lth spoken document with the average of the new distances of the top candidate through the T'th candidate section, where T' = min(T, i − 1). The coefficient α is a weighting factor (0 < α ≤ 1):

newD(l, 1) = D(l, 1)    (1)

newD(l, i) = α · D(l, i) + ((1 − α) / T′) · Σ_{t=1..T′} newD(l, t),   T′ = min(T, i − 1)    (2)

The distance of the top candidate does not change in any of the documents. The distances of lower candidate sections change through the second term, that is, the average distance of the top candidate through the T'th candidate, weighted by the coefficient (1 − α).
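As a minimal sketch of this re-scoring step, the following Python function applies Equations (1) and (2) to the candidate sections of one spoken document, assuming they are passed in ranked order (smallest DTW distance first); the function name and data layout are illustrative rather than the authors' implementation.

```python
from typing import List

def rescore_document(distances: List[float], alpha: float, T: int) -> List[float]:
    """Apply Equations (1)-(2) to one spoken document.  `distances` holds the
    original DTW distances of the document's candidate sections, ordered from
    the highest-ranked (smallest distance) to the lowest-ranked candidate."""
    new_dist: List[float] = []
    for i, d in enumerate(distances, start=1):
        if i == 1:
            # Equation (1): the top candidate keeps its original distance.
            new_dist.append(d)
        else:
            # Equation (2): mix the original distance with the average of the
            # new distances of the top T' = min(T, i-1) candidates.
            t_eff = min(T, i - 1)
            avg_top = sum(new_dist[:t_eff]) / t_eff
            new_dist.append(alpha * d + (1.0 - alpha) * avg_top)
    return new_dist
```

After every document has been processed in this way, all candidate sections are pooled again and re-ranked by their new distances; with α = 1 the ranking is unchanged, which corresponds to the baseline case discussed in Section 3.4.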

The re-ranking process is illustrated in Figure 3. Assume that only DOC_A among three documents contains the query terms. The new distance does not change much in the other two documents, because the top candidate sections in those documents are incorrect and their distances are not small. The ranks of the candidate sections within the same document do not change. As shown on the right in Figure 3, the candidate sections in the document containing the query terms are ranked higher among all candidate sections, and the overall STD accuracy is improved as a result.

Figure 3: An illustration of the proposed re-ranking method.

3. Evaluation experiments

The evaluation experiments are described in this chapter. First, the next section describes the data sets and experimental conditions used in the experiments. After that, the method for evaluating α is described. Results for the open test collections, and the results of applying the proposed method to the results submitted by other NTCIR participants, are then shown, and discussions are presented last.

3.1. Data set and experimental conditions

We prepared two test sets for the evaluation experiments. Test set 1 includes a total of 100 queries, composed of 50 queries in a dry run and 50 queries in a formal run for the SpokenDoc task of the NTCIR-9 workshop [12]. Test set 2 includes a total of 132 queries, composed of 32 queries in a dry run and 100 queries in a formal run for the SpokenDoc task of the NTCIR-10 workshop [13].

In the evaluation experiments, we used the CORE data of the Corpus of Spontaneous Japanese (CSJ) [14], which amount to about 30 h of speech, including 177 presentations, for test set 1, and the SDPWS (Spoken Document Processing Workshop) spoken document corpus, which amounts to about 28 h of speech, including 104 presentations, for test set 2. Half of the speech data in the CSJ (excluding the CORE data) were used for training the subword acoustic models and subword language models. The training data amounted to about 300 h, including 1,265 presentations (an average of 14 min per presentation). Subword acoustic models and subword language models were trained using the HTK (hidden Markov model toolkit) [16] and Palmkit [17] software tools, respectively. The extracted feature parameters are shown in Table 1, together with the conditions for extracting them.

Table 1: Experimental conditions.

3.2. Evaluation measurement

For evaluation, we used the mean average precision (MAP), which was used in the NTCIR workshop and is common for this purpose. MAP is computed as follows. The average precision (AP) for a query is obtained from Equation (3) by averaging the precision at every occurrence of the query. In Equation (3), C and R are the total number of correct sections and the rank of the last (lowest-ranked) correctly identified section, respectively. Let δ_i be 1 if the ith candidate section of query s is correct and 0 otherwise, and let precision(s, i) be the precision among the top i candidate sections of query s. Equation (3) thus averages the precision at the ranks where a correct section appears. The MAP is obtained from Equation (4) as the average of AP over the queries s, where Q is the total number of queries:

AP(s) = (1 / C) · Σ_{i=1..R} δ_i · precision(s, i)    (3)

MAP = (1 / Q) · Σ_{s=1..Q} AP(s)    (4)
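For concreteness, Equations (3) and (4) can be computed as in the following sketch; representing each query's result list as a vector of correct/incorrect flags together with its number of relevant sections is an assumption made for the illustration.

```python
from typing import List, Tuple

def average_precision(is_correct: List[bool], num_relevant: int) -> float:
    """Equation (3): AP for one query.  is_correct[k] is True when the
    (k+1)th ranked candidate section is correct; num_relevant is C, the total
    number of correct sections for the query."""
    if num_relevant == 0:
        return 0.0
    ap, hits = 0.0, 0
    for rank, correct in enumerate(is_correct, start=1):
        if correct:
            hits += 1
            ap += hits / rank          # precision(s, i) at a correct rank
    return ap / num_relevant

def mean_average_precision(per_query: List[Tuple[List[bool], int]]) -> float:
    """Equation (4): mean of AP over all Q queries."""
    return sum(average_precision(r, c) for r, c in per_query) / len(per_query)
```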
3.3. Evaluation of the parameters α and T

The coefficient α and the number of candidate sections T in Equation (2) were held constant for each test set. We let α vary from 0.1 to 1.0 in increments of 0.1, and let T vary from 1 to 5 in increments of 1, and also set T = i − 1 (using all higher-ranked candidate sections) in Equation (2). We extracted the best values of the parameters α and T for each test set, and the best parameters were applied to the other test set for open evaluation by cross-validation.

3.4. Results for triphone models

The results obtained when varying the coefficient α are shown in Figure 4 for triphone models, in the cases of T = 3 (test set 1) and T = 2 (test set 2). α = 1 denotes the case in which the proposed method is not applied, and α = 0 denotes the case in which the original distance of a candidate is ignored, which leads to a substantial decline in STD accuracy, as shown in Figure 4. When the coefficient α was small (such as 0.1 or 0.2), the original distance of the candidate in the first term of Equation (2) had little effect on the new distance, and the accuracy did not improve. The highest accuracy was achieved when the coefficient α was around 0.5, which indicates that the distance of the highly ranked candidates (the second term in Equation (2)) is as important as the original distance (the first term).

The parameters were determined by cross-validation as follows. The values of the parameters α and T that yielded the highest accuracy for test set 1 were 0.5 and 2, respectively, and these values were then applied to test set 2. In the same way, the values that yielded the highest accuracy for test set 2 were 0.5 and 3, respectively, and those values were applied to test set 1.
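The parameter selection of Sections 3.3 and 3.4 could be organized as in the following sketch, which assumes a hypothetical run_std(test_set, alpha, T) helper that performs retrieval, re-ranking and MAP computation (for example, with the functions sketched earlier); it illustrates the cross-validation procedure rather than the authors' code.

```python
from itertools import product
from typing import Callable, Tuple, Union

TValue = Union[int, str]

def best_parameters(run_std: Callable[[str, float, TValue], float],
                    test_set: str) -> Tuple[float, TValue]:
    """Grid-search alpha in 0.1..1.0 (step 0.1) and T in {1..5, 'all'}, where
    'all' stands for T = i - 1 (every higher-ranked candidate), and return the
    pair that maximizes MAP on the given test set."""
    alphas = [round(0.1 * k, 1) for k in range(1, 11)]
    t_values: list = list(range(1, 6)) + ["all"]
    return max(product(alphas, t_values),
               key=lambda p: run_std(test_set, p[0], p[1]))

# Cross-validated (open) evaluation, as in Section 3.3:
# a1, t1 = best_parameters(run_std, "test_set_1")   # tune on test set 1
# map2 = run_std("test_set_2", a1, t1)              # score test set 2
```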

Figure 4: STD accuracy (MAP) when the re-ranking method is applied while varying the coefficient α, for triphone models (test set 1 at T = 3, test set 2 at T = 2).

3.5. Results for other subword models

The results of applying the re-ranking method to other subword models, namely triphones, demiphones and subphonetic segments (SPS), are shown in Figure 5. We have developed demiphone models for STD [4], in which each triphone is divided into two demiphones corresponding to the front and rear parts of the triphone. An SPS is an acoustic model consisting of the central part of a phone and the transitional part between two phones [15]. Demiphone and SPS models are more precise than phone models. The numbers of demiphones and SPSs were 1,623 and 433, respectively.

In Figure 5, the blue part of each bar indicates the accuracy of the original STD. When T = i − 1, that is, when all highly ranked candidate sections are used for re-ranking, the accuracy improved for both test sets and for all three subword models, as shown in red. This resulted in an improvement of 4.4 to 7.7 points (an average of 6.4 points) in MAP. When T was limited to a few top-ranked candidate sections, the MAP score improved further by about 1 point (for an average of 7.3 points above the original accuracy), which is indicated in black in the graph. The values in parentheses denote the values of the parameters α and T that yielded the highest accuracy for the test set; the optimal parameter values for one test set were used for the other test set, as mentioned above. These results demonstrate the effectiveness of the proposed re-ranking method for the subword models. The processing time for the proposed method was less than 20 ms, which is much smaller than that for DTW.

Figure 5: Results obtained by applying the proposed re-ranking method to triphone, demiphone and SPS models using the two test sets.

3.6. Applying the proposed method to the results submitted by other participants

We applied the proposed method to the results submitted by other participants in the SpokenDoc task of the NTCIR-10 workshop to evaluate the robustness of the proposed method. The query terms used here are those included in test set 2. The optimal values of the parameters α and T obtained for triphones for test set 1 of NTCIR-9 in the previous section (0.5 and 3, respectively) were also used in this evaluation. The results are shown in Figure 6. By applying the proposed method to the original results (blue bars) submitted by other participants, the MAP score was improved by 5.9 to 7.8 points (an average of 6.2 points), as shown by the red bars. The improvement in MAP was similar to that obtained by applying the proposed method to the various subword models described in the previous section (6.4 points on average). Green bars denote the MAP scores obtained by applying the optimal values of the parameters α and T. The MAP score obtained with the proposed method is close to that obtained with the optimal parameter values. These results demonstrate the effectiveness and robustness of the proposed re-ranking method.

Figure 6: Results submitted by different NTCIR-10 teams and results when applying the proposed method to those results.

4. Conclusions

In this paper, we proposed a method that improves the retrieval performance in STD by prioritizing the DTW scores of candidate sections contained in highly ranked documents. The performance of the proposed method was evaluated in experiments using triphone, demiphone and SPS models.
The results demonstrated that the proposed method can improve the MAP score by more than 7.0 points for all three acoustic models. The robustness and effectiveness of the proposed method were also demonstrated by applying it to the results submitted by other teams participating in NTCIR-10, where an improvement of more than 6 points in MAP was achieved in each case.

5. Acknowledgements

This research is partially supported by a Grant-in-Aid for Scientific Research (C), KAKENHI, Project No. 5K0024.

6. References

[1] C. Auzanne, J. S. Garofolo, J. G. Fiscus, and W. M. Fisher, "Automatic Language Model Adaptation for Spoken Document Retrieval," TREC-9 SDR Track, 2000.
[2] A. Fujii and K. Itou, "Evaluating Speech-Driven IR in the NTCIR-3 Web Retrieval Task," Third NTCIR Workshop, 2003.
[3] P. Motlicek, F. Valente, and P. N. Garner, "English Spoken Term Detection in Multilingual Recordings," INTERSPEECH 2010, pp. 206-209, 2010.
[4] K. Iwata, et al., "Open-Vocabulary Spoken Document Retrieval based on new subword models and subword phonetic similarity," INTERSPEECH, 2006.
[5] R. Wallace, R. Vogt, and S. Sridharan, "A Phonetic Search Approach to the 2006 NIST Spoken Term Detection Evaluation," INTERSPEECH 2007, pp. 2385-2388, 2007.
[6] N. Kanda, H. Sagawa, T. Sumiyoshi, and Y. Obuchi, "Open-Vocabulary Keyword Detection from Super-Large Scale Speech Database," MMSP 2008, pp. 939-944, 2008.
[7] Y. Itoh, et al., "Two-stage vocabulary-free spoken document retrieval: subword identification and re-recognition of the identified sections," INTERSPEECH 2006, pp. 6-64, 2006.
[8] C.-a. Chan and L.-s. Lee, "Unsupervised Hidden Markov Modeling of Spoken Queries for Spoken Term Detection without Speech Recognition," INTERSPEECH 2011, pp. 24-244, 2011.
[9] H. Saito, et al., "An STD system for OOV query terms using various subword units," Proceedings of the NTCIR-9 Workshop Meeting, pp. 28-286, 2011.
[10] Y. Onodera, et al., "Spoken Term Detection by Result Integration of Plural Subwords using Confidence Measure," WESPAC, 2009.
[11] Tanifuji, et al., "Improving performance of spoken term detection by appropriate distance between subword models," ASJ, vol. 2, pp. 239-240, 2011.
[12] T. Akiba, et al., "Overview of the IR for Spoken Documents Task in NTCIR-9 Workshop," Proceedings of the NTCIR-9 Workshop, 2011.
[13] T. Akiba, et al., "Overview of the NTCIR-10 SpokenDoc-2 Task," Proceedings of the NTCIR-10 Conference, 2013.
[14] Corpus of Spontaneous Japanese, http://www.ninjal.ac.jp/csj/
[15] K. Tanaka and H. Kojima, "Speech recognition method with a language-independent intermediate phonetic code," ICSLP, Vol. IV, pp. 9-94, 2000.
[16] Hidden Markov Model Toolkit, http://htk.eng.cam.ac.uk/
[17] Palmkit, http://palmkit.sourceforge.net/
[18] Julius, http://julius.sourceforge.jp/