Development & evaluation of different acoustic models for Malayalam continuous speech recognition


Procedia Engineering 30 (2012) 1081-1088
International Conference on Communication Technology and System Design 2011

Cini Kurian, Kannan Balakrishnan
Department of Computer Applications, Cochin University of Science and Technology, Cochin

Abstract

The performance of any continuous speech recognition system depends on the accuracy of its acoustic model; hence, preparing a robust and accurate acoustic model leads to satisfactory recognition performance for a speech recognizer. In acoustic modeling of phonetic units, context information is of prime importance, as phonemes are found to vary according to their place of occurrence in a word. In this paper we compare and evaluate the effect of context dependent tied (CD tied) models, context dependent (CD) models and context independent (CI) models from the perspective of continuous speech recognition of the Malayalam language. The database for the speech recognition system contains utterances from 21 speakers: 11 female and 10 male. Our evaluation results show that CD tied models outperform CI models by over 21%.

Keywords: speech recognition; HMM; MFCC; acoustic modelling

1. Introduction

Speech recognition technology has tremendous potential, as it is an integral part of future intelligent devices in which speech recognition and speech synthesis serve as the basic means of communicating with humans. It will simplify the Herculean task of typing and will eliminate the conventional keyboard [11]. The technology will also help in manufacturing and control applications where hands or eyes are otherwise occupied, and disabled, elderly and blind people will no longer be cut off from the internet and the advanced information technology revolution [5]. Recently there has been a large increase in the number of recognition applications: use over telephones, including automated dialing, operator assistance and remote data access services such as financial services, and voice dictation systems such as medical transcription applications.

Such tantalizing applications have driven research in Automatic Speech Recognition (ASR) since the 1950s.

Malayalam is one of the 22 scheduled languages of India, with about 38 million speakers. It belongs to the Dravidian family of languages and is one of the four major languages of this family, with a rich literary tradition. The majority of Malayalam speakers live in Kerala, one of the southern states of India, and in the union territory of Lakshadweep. The language has 37 consonants and 16 vowels. It is a syllable based language, written with a syllabic alphabet in which all consonants carry an inherent vowel /a/. There are different spoken forms of Malayalam, although the literary dialect is almost uniform throughout Kerala [3,6].

Speech recognition is a highly complex task. The basic issue in speech recognition is dealing with two kinds of variability: acoustic and temporal [15]. Acoustic variability covers different accents, pronunciations, pitches, volumes and so on, while temporal variability covers different speaking rates. Developing a better acoustic model is the core task in speech recognition research. In this work we have used HMMs for acoustic modeling; in most current speech recognition systems the acoustic component of the recognizer is built using HMMs. The temporal evolution of speech is modeled by the Markov process, in which states are connected by transitions arranged into a strict hierarchy of phones, words and sentences. Phoneme based Hidden Markov Models (HMMs) [13] are the foundation of this work. We have designed a phone set comprising 48 phones, capturing nearly all the sounds of standard Malayalam. The models are trained with the Baum-Welch algorithm [9], and the trained models are tested with the Viterbi algorithm [9]. For processing, the speech signal has to be represented in some parametric form [2]; since MFCC (Mel frequency cepstral coefficient) [13] parameterization is well adapted to human hearing, we have used it in this work.

Continuous speech recognition has been an area of active research for quite some time. However, compared to languages like English or French, speech research involving Indian languages is yet to gain momentum. Although some effective research has been carried out on speech recognizers for Hindi [21] and some south Indian languages [14,16], the research scenario for Malayalam is far from a satisfactory level. A wavelet based word recognizer [12], a number recognition system [3] and a digit recognizer based on SVM [4] are the reported works in Malayalam.

The structure of this paper is as follows. Section 2 describes acoustic modeling using HMM. Context independent phone models are explored in Section 3. Sections 4 and 5 are devoted to context dependent and context dependent tied models, respectively. The results of the experiments are discussed in Section 6, and finally our conclusions are given in Section 7.

2. Acoustic modeling and HMM in continuous speech recognition

In this work the acoustic modeling component of the recognizer is created using Hidden Markov Models (HMMs). The capability of HMMs to statistically model the variability in speech is the main reason for their use in speech recognition tasks.
An HMM provides an elegant statistical framework for modeling speech patterns using a Markov [1] process that can be represented as a state machine, as shown in Figure 1 [9]. The probability distribution associated with each state of an HMM models the variability which occurs in speech across different speakers or even across different speech contexts.

Fig. 1. Topology of a five state HMM
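To make the topology concrete, the following is a minimal sketch (our illustration, not code from the paper) of a transition matrix for the five state left-right HMM of Figure 1, with non-emitting entry and exit states; the self-loop and forward probabilities are assumed values for illustration only.

import numpy as np

N_STATES = 5
A = np.zeros((N_STATES, N_STATES))
A[0, 1] = 1.0                 # non-emitting entry state moves straight to state 1
for i in (1, 2, 3):           # emitting states: self-loop or step one state right
    A[i, i] = 0.6             # illustrative (assumed) self-loop probability
    A[i, i + 1] = 0.4         # illustrative (assumed) forward probability
# Row 4 (the exit state) stays all zeros; when models are concatenated,
# the exit state of one phone is merged with the entry state of the next.

assert np.allclose(A[:4].sum(axis=1), 1.0)  # each active row is a distribution
print(A)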

The speech recognition system presented here is based on the principles of statistical pattern recognition [17,18]. An unknown speech waveform is converted by a front-end signal processor into a sequence of acoustic vectors Y = y1, y2, y3, ... Each of these vectors is a compact representation of the short-time speech spectrum covering a period of typically 10 milliseconds. The utterance consists of a sequence of words, W = w1, w2, ..., wn, and it is the job of the large vocabulary recognition (LVR) system to determine the most probable word sequence W given the observed acoustic signal Y. To do this, Bayes' rule [9] is used to decompose the required probability P(W|Y) into two components:

W* = argmax_W P(W|Y) = argmax_W P(W) P(Y|W) / P(Y)    (1)

W* = argmax_W P(Y|W) P(W)    (2)

Equation 2 indicates that to find the most likely word sequence W*, the word sequence that maximizes the product of P(W) and P(Y|W) must be found. The first term represents the a priori probability of observing W independent of the observed signal; this probability is determined by the language model [10]. The second term represents the probability of observing the vector sequence Y given the word sequence W; this probability is determined by the acoustic model. Figure 2 shows how these relationships might be computed, and a small decoding sketch is given at the end of this section.

The purpose of the acoustic models is to provide a method of calculating the likelihood of any vector sequence Y given a word w. In principle, the required probability distribution could be found by collecting many examples of each w and estimating the statistics of the corresponding vector sequences. However, this is infeasible for a large vocabulary system; instead, word sequences are decomposed into basic sounds called phones, and each individual phone is represented by an HMM. An HMM has a number of states connected by arcs. A five state HMM with a simple left-right topology is illustrated in Figure 1. The entry and exit states are provided to make it easy to join models together: the exit state of one phone model can be merged with the entry state of another to form a composite HMM. This allows phone models to be joined together to form words, and words to be joined together to cover a complete utterance.
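As an illustration of Equation 2, the search can be written as maximizing the sum of acoustic and language model log probabilities. This is a sketch under assumed interfaces, not the actual decoder: acoustic_logprob and lm_logprob are hypothetical stand-ins for the HMM and n-gram scores, and a real LVR system explores the space with Viterbi beam search rather than by enumerating candidates.

import math

def decode(Y, candidate_word_sequences, acoustic_logprob, lm_logprob):
    """Pick the word sequence W maximizing log P(Y|W) + log P(W) (Eq. 2).

    acoustic_logprob(Y, W) and lm_logprob(W) are hypothetical callables
    standing in for the acoustic model and the language model scores.
    """
    best_W, best_score = None, -math.inf
    for W in candidate_word_sequences:
        score = acoustic_logprob(Y, W) + lm_logprob(W)
        if score > best_score:
            best_W, best_score = W, score
    return best_W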

Fig. 2. Acoustic model for a speech recognition system

The acoustic model in a speech recognition engine produces the basic units of speech in written form for a particular input signal. The input signal is sliced into overlapping frames of 25 ms taken every 10 ms, and from each frame 39 MFCC features [7] are extracted. This set of features is then compared against the trained acoustic model. In this work we started by creating a single Gaussian monophone Hidden Markov Model (HMM) [18] for every phone in our phone set.

3. Context independent models

The creation of context independent acoustic models starts with the preparation of training and testing data. This data comprises utterance recordings by multiple speakers and the corresponding transcripts, encoded using the chosen phone set for the language. The transcripts, along with the recordings, are fed to the training module, which uses Baum-Welch re-estimation [9] to create HMMs for all the phones occurring in the training data. The process starts with a default prototype HMM for every phone, which is then tuned according to the input data and transcriptions. Creating the monophone HMMs, however, requires specifying the number of states prior to training; our experiments suggested using 5-state HMMs for acoustic modeling. Monophone based models cannot capture the variation of a phone with respect to its context: phones are found to vary depending on the preceding and succeeding phones, and this aspect needs to be captured within the acoustic models to improve performance. In this work we defined 71 monophones and created a model for every monophone.
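Before training, every utterance is reduced to the 39 dimensional MFCC vectors described in Section 2. For illustration, this front end can be approximated with an off-the-shelf library; the sketch below assumes the librosa package and a hypothetical input file utterance.wav (not the authors' toolchain), using 13 cepstra plus their deltas and delta-deltas, with 25 ms windows and a 10 ms frame shift at 16 kHz.

import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)          # hypothetical input file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=400, hop_length=160)   # 25 ms window, 10 ms shift
delta = librosa.feature.delta(mfcc)                      # first time derivatives
delta2 = librosa.feature.delta(mfcc, order=2)            # second time derivatives
features = np.vstack([mfcc, delta, delta2])              # shape: (39, n_frames)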

4. Context dependent models (triphone based acoustic models)

There are no well defined boundaries between phonemes in continuous speech. The spectral characteristics change continuously because of the inertia of the articulators as they move from the position of one phoneme to the position of the next; indeed, the articulators begin moving toward the next phoneme even while the current sound is being uttered. Consequently, the acoustic properties of a speech sound depend not only on the identity of the corresponding phoneme but on the neighboring sounds as well.

Fig. 3. Waveform and spectrogram of the word tiruvananthapuram

The effect of phonetic context on the spectra of phonemes is illustrated in Figure 3 [15], which shows the time waveform and spectrogram of the word tiruvananthapuram. Two occurrences of the phoneme /a/ in the word have different spectral trajectories: the temporal variations of the spectra of the two instances differ because of their different phonetic contexts. The second formant of the vowel /a/ is rising in the first case, whereas it is nearly steady (and declines slightly later) for the /a/ following /r/. Contextual effects therefore cause large variations in the way different sounds are produced. Hence, to achieve good phonetic discrimination, different HMMs have to be trained for each different context. The simplest and most common approach is to use triphones, where every phone has a distinct HMM model for every unique pair of left and right neighbors.

5. Context dependent tied models (state tying of triphones)

When triphones are used, they result in a system that has too many parameters to train. For example, English has about 45 phones, so in principle 45^3 distinct triphone models are possible; in practice around 60,000 (not all combinations can occur, owing to the phonotactic constraints of the language) would need to be trained. A 10-mixture GMM over 39-element acoustic vectors requires around 790 parameters per state, so 60,000 3-state triphones would have a total of about 142 million parameters [17]; the arithmetic is sketched below. Here arises the problem of too many parameters and too little training data, which is absolutely crucial in the design of a statistical speech recognizer.
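The parameter count quoted above can be checked with a few lines of arithmetic, assuming (as is conventional) diagonal covariance Gaussians with one mean and one variance per dimension plus a mixture weight:

# Back-of-the-envelope check of why untied triphones are untrainable.
n_mix, dim = 10, 39
params_per_state = n_mix * (2 * dim + 1)    # 10 * (39 means + 39 variances + 1 weight) = 790
n_triphones, states_per_model = 60_000, 3
total = n_triphones * states_per_model * params_per_state
print(params_per_state, total)              # 790 and 142,200,000 (~142 million)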

This problem is dealt with by tying states [17] that are acoustically indistinguishable. Tying allows all the data associated with the individual states to be pooled, thereby giving more robust estimates for the parameters of the tied state; this is illustrated in Figure 4. In conventional triphones, each triphone has its own private output distribution; after tying, several states share distributions. The choice of which states to tie is made using a decision tree algorithm. In this work we have a total of 71 monophones, and after tying the triphones we could reduce the number of states from 6222 to 1355.

Fig. 4. State tying of triphones

6. Experiments and results

We have conducted several experiments with the different types of acoustic models of the Malayalam continuous speech recognizer. We used about 420 recorded utterances from 11 female and 10 male speakers. Recording was done in a normal office environment using a headset whose microphone has a frequency range of 70 Hz to 1600 Hz, at a 16 kHz sampling frequency quantized to 16 bits; the speech is saved in Microsoft wave format.

For training and testing the system, the database is divided into three equal parts (1, 2 and 3) and training is conducted in round robin fashion: in each trial, two thirds of the data is used for training and the remaining third for testing. In trial I, parts 1 and 2 are used for training and part 3 for testing; in trial II, parts 1 and 3 are used for training and part 2 for testing; in trial III, parts 2 and 3 are used for training and part 1 for testing. From each trial we obtain the word recognition accuracy, sentence recognition accuracy, and the numbers of words deleted, inserted and substituted.

For all the performance evaluation reports detailed in the following sections we have adopted this procedure, and each reported result (sentence recognition accuracy) is the average over trials I, II and III.

Word Error Rate (WER) is the standard evaluation metric used here for speech recognition. It is computed with SCLITE [8], a scoring and evaluation tool from the National Institute of Standards and Technology (NIST). Sclite compares the text output of a speech recognizer (the hypothesis text) with the original text (the reference text) and generates a report summarizing the performance. The comparison of reference to hypothesis is called the alignment process; its result is reported in terms of WER, SER, and the numbers of word deletions, insertions and substitutions. If N is the number of words in the correct transcript and S, D and I are the numbers of substitutions, deletions and insertions respectively, then

WER = ((S + D + I) / N) * 100

and

Sentence Error Rate (SER) = (number of sentences with at least one word error / total number of sentences) * 100.

A sketch of the round robin protocol and of the WER computation is given at the end of this section.

Number of Gaussian mixtures per HMM state: The number of Gaussian mixtures per HMM state is varied (4, 8, 16) and the results are given in Table 1. We first trained the context independent models with a trigram language model (n = 3), 8 Gaussian mixtures and 3 states per HMM, and obtained the test results in terms of sentence recognition accuracy. The same procedure was repeated for the CD and CD tied models. We then changed the number of Gaussian mixtures to 4 and 16 and repeated the whole process. Table 1 shows the detailed results. It is evident from the table that tied-state triphone based models clearly outperform monophone based models.

Table 1. Sentence Recognition Accuracy (%)

           Context independent (CI)   Context dependent (CD)   CD tied
GMM = 4            61.2                      76.3                80.3
GMM = 8            64.3                      77.6                81.5
GMM = 16           56.6                      69.6                76.4

Number of HMM states per phone: Phoneme units in our recognizer are initially modeled with 3-state left-to-right HMMs; 5 states per HMM are also examined. The results for the CI, CD and CD tied models are shown in Figure 5. It is apparent from the chart that tied-state triphone based models surpass monophone based models by 21%.

Fig. 5. Comparison of sentence recognition accuracy (%) for CI, CD-untied and CD-tied models with 3-state and 5-state HMMs
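The round robin protocol above amounts to a three-fold rotation. The following is a minimal sketch of our own (parts is a hypothetical list of the three database partitions, each a list of utterances):

def round_robin_trials(parts):
    """Yield (train, test) pairs: each trial trains on two parts, tests on the third."""
    for i in range(len(parts)):
        test = parts[i]
        train = [u for j, p in enumerate(parts) if j != i for u in p]
        yield train, test   # the three test scores are averaged for the reported figure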
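Likewise, the WER formula can be illustrated with a small edit-distance alignment standing in for SCLITE (a sketch only; SCLITE additionally breaks the errors down into the individual S, D and I counts):

def wer(reference, hypothesis):
    """WER = (S + D + I) / N * 100, via minimum edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits aligning the first i reference words to the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# Hypothetical transliterated Malayalam example: one substitution in three words.
print(wer("oru nalla divasam", "oru nalla dinam"))   # 33.33...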

7. Conclusion

In this paper we developed three types of acoustic models for a Malayalam continuous speech recognition system and compared the recognition accuracy of the recognizer under CI, CD and CD tied acoustic modeling. From the results we conclude that the type of acoustic modeling strongly influences the recognition performance of the speech recognizer: CD tied models outperform CI models by about 21% for Malayalam continuous speech recognition. The improvement in the recognition figures indicates the suitability of tied-state triphone based modeling for speech recognition. In continuation of this work we propose to improve model accuracy by exploiting more linguistic knowledge, such as tone and prosody, and by implementing more efficient approaches in the acoustic modeling process.

References

[1] A. Ganapathiraju, J. Hamaker and J. Picone, Support vector machines for speech recognition, Proceedings of the International Conference on Spoken Language Processing, Sydney, Australia, November 1999, pp. 292-296.
[2] B. Gold and N. Morgan, Speech and Audio Signal Processing, John Wiley and Sons, N.Y., 2002.
[3] Cini Kurian and Kannan Balakrishnan (2009), Speech recognition of Malayalam numbers, World Congress on Nature and Biologically Inspired Computing (NaBIC 2009), pp. 1475-1479.
[4] Cini Kurian, A. F. Shah and K. Balakrishnan (2010), Isolated Malayalam digit recognition using Support Vector Machines, IEEE International Conference on Communication Control and Computing Technologies (ICCCCT 2010), pp. 692-695.
[5] Cini Kurian and Kannan Balakrishnan, Natural language processing in India: prospects and challenges, Proceedings of the International Conference on Recent Trends in Computational Science 2008 (ICRTCS-2008), Cochin, India, June 11-13, 2008.
[6] Cini Kurian and Kannan Balakrishnan, Automated transcription system for Malayalam language, International Journal of Computer Applications (IJCA), ISSN 0975-8887, vol. 19, no. 5, April 2011.
[7] S. Davis and P. Mermelstein, Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 28, pp. 357-366.
[8] J. Fiscus (1998), Sclite Scoring Package Version 1.5, US National Institute of Standards and Technology (NIST), URL: http://www.itl.nist.gov/iaui/894.01/tools/.
[9] F. Jelinek, Statistical Methods for Speech Recognition, MIT Press, Cambridge, Massachusetts, USA, 1997.
[10] X. Huang, A. Acero and H. W. Hon (2001), Spoken Language Processing: A Guide to Theory, Algorithm and System Development, Prentice Hall, Upper Saddle River, New Jersey.
[11] D. Jurafsky and J. H. Martin (2007), Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 2nd edition.
[12] V. R. V. Krishnan, A. Jayakumar and P. B. Anto (2008), Speech recognition of isolated Malayalam words using wavelet features and artificial neural network, DELTA 2008: 4th IEEE International Symposium on Electronic Design, Test and Applications, 23-25 Jan. 2008, pp. 240-243.
[13] Lawrence Rabiner and Biing-Hwang Juang, Fundamentals of Speech Recognition, Pearson Education, 2008.
[14] M. Kumar et al., A large vocabulary continuous speech recognition system for Hindi, IBM Journal of Research and Development, September 2004.
[15] K. Samudravijaya, Speech and speaker recognition: a tutorial, Proc. Int. Workshop on Technology Development in Indian Languages, Kolkata, Jan 22-24, 2003.
[16] S. P. Singh et al., Building large vocabulary speech recognition systems for Indian languages, International Conference on Natural Language Processing, 1:245-254, 2004.
[17] S. Young, J. Odell and P. Woodland, Tree-based state tying for high accuracy acoustic modelling, Proc. Human Language Technology Workshop, Plainsboro, NJ, Morgan Kaufmann Publishers Inc., March 1994, pp. 307-312.