
Call Centre Speaker Identification using Telephone and GSM Data

Lerato Lerato and Daniel Mashao
Dept. of Electrical Engineering, University of Cape Town
Rondebosch 7800, Cape Town, South Africa

Abstract- Telecommunications network services and call centres are examples of avenues for speech technology. In these services speech technology largely exists in the form of speech recognition (voice to text). This paper explores the use of speaker identification (who has spoken?) in call centres. Call centres acquire information telephonically (cellular or landline). This work therefore reports results that show speaker identification (SID) from simulated GSM, real GSM (Vodacom) and telephone speech data. Channel noise reduces the SID rate. A large enrolment of speakers also degrades the performance of the SID system. This paper finally proposes a hierarchical method of improving SID performance on a large population of enrolled speakers.

I. INTRODUCTION

A speaker identification (SID) system recognises a person from either known words (text-dependent) or any utterance (text-independent) within a closed group of speakers (e.g. bank customers). SID is encapsulated in speech technology for telecommunications [1] as a good candidate for secure access to information through network services and call centres [2]. SID systems, as we have reported [3], perform less efficiently on speech that has passed through a communication channel. Speakers enrol into the SID system, forming their individual templates or models, and these templates are compared to the test talker's speech features during the identification process.

There are many speech databases available for testing SID systems. The data used in this study originate from the TIMIT (clean speech) database [4]. The NTIMIT database [4] is telephone-transmitted TIMIT speech; TIMIT itself was created by Texas Instruments (T.I.) and the Massachusetts Institute of Technology (M.I.T.). In this work, clean speech was passed through the GSM full rate codec [5], an ETSI standard [6] for full rate (FR) transmission, to form the simulated GSM database (figure 4). The second data set resulted from transmission of clean speech through the local GSM (Vodacom) network [7]. Section V gives a brief account of how these databases were formed.

Several studies have focused on telecommunications applications of speaker identification at different levels. Westall F. A. et al. [1] have put the whole picture of speech technology into a telecommunications perspective. Murthy H. A. [8] tried to compensate for communication channel noise and managed to improve the performance of SID on telephone speech. Kuitert and Boves [9] showed that the limited telephone band does not significantly affect the performance of speaker recognition. Mashao D. J. [10] reported the same results using our SID system. Grassi S. et al. [11] concluded that GSM coding degrades SID performance.

Although channel noise seems to be the main obstacle to better performance of SID systems, it was also noticed that an increase in the speaker population lowers the identification rate [3]. This degradation of performance results from confusion between speakers with similar voices. The hierarchical configuration of speaker identification described in section IV attempts to solve the problem of misclassification resulting from this large enrolment of speakers. Section VI reports results of both simulated and real GSM full rate (Vodacom) data in order to compare the SID system behaviour in the practical environment with the one simulated in the laboratory.
The same section also presents observations when telephone speech is used with a large number of enrolled speakers. Section VII concludes this paper and also gives future directions.

II. SID AND ITS CHALLENGES

This section is a brief description of speaker identification and some of the problems that affect it. Speaker identification (SID) is in the same category as speaker verification (SV) in the field of speaker recognition [4]. SID recognises the speaker from his or her voice without prior knowledge of the talker. SV verifies the speaker from his or her claimed identity through voice.

Speaker-specific speech features are important in the SID process. The physiological structure of the speaker's vocal tract is unique, and it is possible to extract speaker-specific features from his or her voice. The feature extraction process of the SID system is known as the front end, while the classifier is called the back end of the system. There are many front ends that can be used. Linear predictive coding (LPC), used in the GSM full rate codec [5], is an example of a feature extraction algorithm. Perceptual linear prediction (PLP) [12] is an example of auditory-based feature extraction. Our system uses parameterised feature sets (PFS) [13] for speech feature extraction. Classification algorithms are the decision-making tools that produce probabilistic scores of how closely the talker matches the voice models (templates) stored in the database. Hidden Markov models (HMM) and Gaussian mixture models (GMM) are mostly used as the back ends (classifiers) in speech and speaker recognition respectively.

Challenges facing robust SID system implementation [14] are methods of correcting errors that affect its performance. Some of these errors include inaccurate modelling of the vocal tract, extreme emotional state of the speaker, time-varying speech acquisition equipment, head colds and aging of the speaker. Overcoming these problems and limiting channel noise could enhance the implementation of SID systems in call centres.
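As a rough illustration of this front-end/back-end split, the sketch below builds a closed-set speaker identification system with one Gaussian mixture model per enrolled speaker and returns the speaker whose model scores a test utterance highest. It is only a minimal sketch: the paper's system uses PFS features [13] and its own GMM back end, whereas MFCCs, librosa and scikit-learn are substituted here purely for illustration, and all file paths are hypothetical.

import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def extract_features(wav_path, sr=16000, n_mfcc=13):
    # Front end: frame-level features (MFCCs as a stand-in for the PFS front end).
    y, _ = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)

def train_speaker_models(train_files, n_components=16):
    # Back end: enrol each speaker as one GMM (his or her voice template).
    models = {}
    for speaker, paths in train_files.items():
        feats = np.vstack([extract_features(p) for p in paths])
        models[speaker] = GaussianMixture(n_components=n_components,
                                          covariance_type='diag',
                                          max_iter=200).fit(feats)
    return models

def identify(test_wav, models):
    # Closed-set SID: pick the enrolled speaker whose model gives the
    # highest average log-likelihood for the test utterance.
    feats = extract_features(test_wav)
    return max(models, key=lambda spk: models[spk].score(feats))

# Hypothetical usage:
# models = train_speaker_models({"spk1": ["spk1_a.wav"], "spk2": ["spk2_a.wav"]})
# print(identify("unknown.wav", models))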

III. SID IN CALL CENTRES

A call centre is a telephone-call management unit which several companies use for handling a large number of calls. The key functions of a typical call centre include automatic queue management and coordination of calls and customer data [2]. Automatic queue management directs incoming calls to suitable agents [2], who can also receive the caller's information on the computer for new records or for consulting other agents. This functionality is also performed using an automatic call distributor (ACD). Interactive voice response (IVR) is a form of interaction between a customer and a telephone answering machine: the customer presses keys on the handset while the system prompts him or her with voice. IVR is also common in network-subscriber interaction, such as in the PSTN, where the customer uses the telephone dial or voice [2] to access information. Most automated voice network services are hierarchical and consume a long time before the information is arrived at. An example of such could be pressing key 1 for help, followed by key 5 for account information, and so on.

Several companies have deployed speech recognition in call centres and telephone services. Nortel [15] announced a speech-enabled call centre facility that will replace telephone dialling for enquiries. Loquendo [16]'s voice technologies have proved successful in the implementation of speech recognition. Intelleca Voice & Mobile [17] has built a speech recognition system for ticket bookings for Ster Kinekor's national call centre in South Africa. These are a few of the many companies involved in the implementation of this technology in call centres. Speaker identification (SID) has not seen much uptake in call centre operations because of less robust SID systems. SID could be used alongside speech recognition where security of information is necessary. Nuance [18] has implemented a similar system. Works [19] also advocates the importance of speaker recognition (verification) in telephone services. According to Rachel Lebihan from ZDNet Australia [20], experts confirm the relevance of SID in call centres. Our proposed method in the following section contributes towards the optimisation of SID performance in call centres.

IV. THE PROPOSED HIERARCHICAL SID SYSTEM

The hierarchically configured speaker identification system tries to split the voice templates of speakers who sound the same. Figure 1 illustrates the proposed system. Our initial approach [21] to this proposal was based on grouping speakers according to gender, but no improvement was observed because the normal SID [3] already had gender-separation capabilities. This means that the group detector should use speech features different from those of the front end (feature extraction).

Figure 1: Hierarchical SID system (signal -> group detector -> feature extraction -> matching and maximum scores -> identification, with speaker models λ_1 ... λ_Ns stored per group and separate training and testing paths).

During training, the group detector (figure 1) classifies a speaker and gives him or her a group tag so that his or her voice template, λ, can be stored in the correct database. Testing is done by first determining the group that the test speaker belongs to. Consequently, the speech features are matched only against the models (λ's) in the correct database. The feature extraction (front end) and classification (matching and scoring) are still the same as in our previous paper [3]. Several studies [22, 23] have used the hierarchical approach to try to improve processing time. We seek to use this approach for improving the identification rate, as sketched below.
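To make the training and testing flow of figure 1 concrete, here is a minimal sketch of the hierarchical configuration. It reuses the extract_features helper and GMM back end from the earlier sketch; group_detector stands for any callable that maps an utterance to a group tag (the PLP-based detector of section IV.A, gender, or dialect) and is an assumption of this illustration rather than the paper's exact implementation.

from collections import defaultdict
import numpy as np
from sklearn.mixture import GaussianMixture

def train_hierarchical(train_files, group_detector, extract_features,
                       n_components=16):
    # One model database per group: {group_tag: {speaker: GMM}}.
    databases = defaultdict(dict)
    for speaker, paths in train_files.items():
        group = group_detector(paths[0])       # tag the speaker at enrolment
        feats = np.vstack([extract_features(p) for p in paths])
        databases[group][speaker] = GaussianMixture(
            n_components=n_components, covariance_type='diag').fit(feats)
    return databases

def identify_hierarchical(test_wav, databases, group_detector, extract_features):
    # First detect the group, then match only against that group's templates.
    group = group_detector(test_wav)
    feats = extract_features(test_wav)
    models = databases[group]
    return max(models, key=lambda spk: models[spk].score(feats))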
Two groups, which result in two databases, are used in this study because the proposal is at its initial stage. The perceptual linear prediction (PLP) [12] algorithm has been used in our group detector.

A. Perceptual linear prediction

We implemented the PLP algorithm according to Hermansky [12] as shown in figure 2. The speech signal power spectrum is warped onto a Bark scale [24]. The result is convolved with the critical-band curve, which approximates the resolution of the human ear. During the equal-loudness pre-emphasis [12, 24], a filter approximating the non-equal sensitivity of human hearing is multiplied with the convolved signal. The result is run through the IDFT (inverse discrete Fourier transform). A portion of the resulting spectrum, from sample 20 to 100, is used as the grouping segment. Peaks from sample 20 to 55 are added and the sum is stored for group A. Group B is the sum of peaks from sample 55 to 90. The choice of these values was experimental and also based on knowledge of the spectrum symmetry. Grouping is done such that if sum A is greater than sum B then the speaker belongs to group A, and vice versa. The normal speaker identification process then continues.

Figure 2: Hermansky [12] PLP analysis block diagram.
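The grouping rule itself is simple to state in code. In the sketch below, plp_sequence is assumed to be the post-IDFT output of the PLP analysis of figure 2 for one utterance (the PLP analysis itself is not reproduced here); the sample ranges 20-55 and 55-90 are taken from the text, and summing peak heights with scipy's find_peaks is one plausible reading of how the peaks are added.

import numpy as np
from scipy.signal import find_peaks

def detect_group(plp_sequence):
    # Compare summed peak heights in samples 20-55 (group A) and 55-90 (group B).
    seq = np.asarray(plp_sequence, dtype=float)

    def peak_sum(lo, hi):
        segment = seq[lo:hi]
        idx, _ = find_peaks(segment)
        # Fall back to the plain sum if no local maxima lie in the range.
        return segment[idx].sum() if len(idx) else segment.sum()

    return 'A' if peak_sum(20, 55) > peak_sum(55, 90) else 'B'

In the hierarchical sketch above, group_detector would then be a small wrapper that runs the PLP analysis on a wav file and calls detect_group on the result.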

V. TELEPHONE AND GSM SPEECH EXPERIMENTS

The speech signal processed by the system illustrated in figure 1 exists in different forms. NTIMIT telephone speech was chosen first to test the capability of our new architecture because this database is commonly used in the literature. The GSM speech database used was created in the laboratory [3, 25]. TIMIT was also transmitted through the Vodacom GSM network, as shown in figure 3, to obtain real GSM speech data.

Figure 3: Generating real GSM-coded speech (TIMIT speech played from a sound card, transmitted through the local Vodacom GSM network, and stored on a PC).

Given the laboratory constraints, only a small set of speakers (18 speakers) was tested, as indicated by the results in table 1. Figure 4 illustrates how the GSM codec [5, 6] was used to form the simulated GSM data.

Figure 4: Creation of the simulated GSM full rate database (TIMIT speech down-sampled from 16 kHz to 8 kHz, passed through the GSM codec, and up-sampled from 8 kHz back to 16 kHz).

GSM full rate compresses the signal to 13 kbps for transmission, and the speech must be sampled at 8 kHz [25]. TIMIT speech is sampled at 16 kHz, and therefore down-sampling was required before the identification process. Finally, figure 5 shows how the identification process executes.

Figure 5: Speaker identification process of GSM speech (GSM full rate coded TIMIT speech passed to the SID system for identification).

VI. RESULTS AND DISCUSSIONS

There are two types of results in this section. The results in table 1 show speaker identification of 18 talkers from GSM-transmitted voices. Table 2 and figure 6 reflect the identification rate for 630 speakers. These results indicate how the SID system performs on telephone-transmitted voice.

A. GSM speech results

GSM speaker identification was done on a small population. The results in table 1 highlight the impact of channel effects on speaker identification when a real GSM network processes the speech. Channel noise [8] and even the mobile device circuitry contribute to the degradation of the identification rate. Each speaker spoke for 15 seconds during training and 6 seconds during testing. The Gaussian mixture model classifier [3] used in our experiment performs better if talkers speak for a longer time during training. This means that the real GSM network may perform better if we train speakers for a longer time. This is due for further investigation.

Table 1: SID on GSM speech
GSM data                     Total number of speakers    No. of correctly identified speakers
Simulated GSM (GSM 06.10)
Vodacom GSM
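For concreteness, the following is a rough sketch of the simulated GSM database creation of figure 4: 16 kHz TIMIT speech is down-sampled to 8 kHz, run through a GSM full rate (GSM 06.10) encode/decode cycle, and up-sampled back to 16 kHz. The sketch shells out to the SoX command-line tool built with libgsm support; this is an assumption for illustration, not the codec implementation referenced in [5, 6], and the file paths are hypothetical.

import subprocess
from pathlib import Path

def simulate_gsm(clean_wav: str, out_wav: str, workdir: str = "/tmp") -> None:
    gsm_tmp = Path(workdir) / (Path(clean_wav).stem + ".gsm")
    # Down-sample to 8 kHz mono and encode as GSM 06.10 (SoX's .gsm format).
    subprocess.run(["sox", clean_wav, "-r", "8000", "-c", "1", str(gsm_tmp)],
                   check=True)
    # Decode the 8 kHz GSM file and up-sample back to 16 kHz for the SID front end.
    subprocess.run(["sox", "-r", "8000", str(gsm_tmp), "-r", "16000", out_wav],
                   check=True)

# Hypothetical usage:
# simulate_gsm("timit/dr1/fcjf0/sa1.wav", "gsm_timit/dr1_fcjf0_sa1.wav")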

4 detection. Hierarchical SID in table 2 used PLP group detector. This detector generated only two groups of speakers. The speakers in NTIMIT database are already grouped according to their dialect. Taking advantage of this arrangement, the group detector (figure 1) produced 8 database groups. This grouping is ideal because it was not done according to the group detector algorithm but was prearranged. The results in table 2 are competitive with those reported in literature [4, 11] using the same data. Architecture of the SID SID rate on 630 speaker (NTIMIT) Normal SID 69.5 % Hierarchical SID 75.7 % Ideal dialect hierarchical SID 86.5 % Table 2: SID rate on large population using NTIMIT sentences 8 and 9 to test and their other 8 for training. Table 3 shows the population of speakers in dialect region (dr) from telephone speech database (NTIMIT). Same experiment was carried out for each dialect region in order to observe the consistency of the proposed architecture on telephone speech. Figure 6 is a graph of identification rates for different dialect groups of speakers. The results in figure 6 however, do not reflect text independence. Speakers uttered same sentence for the group detection process. Region dr1 dr2 dr3 dr4 dr5 dr6 dr7 dr8 Population Table 3: Population of speakers in each dialect region SID rate (%) dr1 dr2 dr3 dr4 dr5 dr6 dr7 dr8 Dialect Region Normal Hierarchical Figure 6: Average SID rates from all dialect regions VII. CONCLUSIONS Prompting the client to say certain fixed words or sentences such as account number and physical address could enhance practical application of speaker identification at call centres. Our SID system shows a reasonable performance on 630 people in a completely text independent setup as tabulated in table 2. We have seen in section III that companies have already deployed speech technology in the form of speech recognition in call centres. Although only few speakers are used in testing real speech on the SID system, the proposed hierarchical architecture (figure 1) of speaker identification could ensure that the speakers models (voice templates) are grouped into several smaller databases. This may maintain high identification rate in real time conditions. This projection is proved by the ideal dialect hierarchical SID rate in table 2. This result proves that the more model databases we have the better identification rate we achieve. Now that it is possible to test local telecommunications network transmitted speech, the second step will be to pass more speakers voices through local network (Vodacom) and get better SID assessment. The transmission of TIMIT speech database (clean speech) through a local telephone network (Telkom) will generate a telephone database that can replace NTIMIT for future assessment of our SID system. This local database can be tested on our system and the SID application in call centres can be evaluated. REFERENCES [1] Westall F.A (2001). technology telecommunications. BT Technol J. Vol. 14 No.1, pp 9-27 [2] Ericsson Telecom AB and Telia AB (1998). Understanding telecommunications 2 pp 61-62, [3] Lerato L., Mashao D. J. (2002). Evaluation of speaker identification system using data. SATNAC Vol. 1 pp [4] Le Floch J.L. et al. (1995). Speaker recognition experiments on the NTIMIT database Proceedings of Eurospeech 95, Vol.1 pp [5] [6] [7] [8] Murthy, A.H. et al. (1999). "Robust text-independent speaker identification over telephone channels." IEEE Transactions of and Audio processing, Vol. 7 No.5 pp [9] Kuitert M., Boves L. 
[10] Mashao D. J. (2002). Towards an inverse solution for speaker recognition. Proc. of 13th Annual Symposium of the PRASA.
[11] Grassi S. et al. (2000). Influence of GSM speech coding on the performance of text-independent speaker recognition. Proceedings of ICASSP 2000.
[12] Hermansky H. (1990). Perceptual linear predictive (PLP) analysis of speech. Journal of the Acoustical Society of America, Vol. 87, No. 4.
[13] Mashao D. J., Adcock J. E. (1997). Utterance dependent parametric warping for a talker-independent HMM-based recognizer. Proceedings of ICASSP 97, Munich, Germany.
[14] Campbell J. P. (1997). Speaker recognition: a tutorial. Proceedings of the IEEE, Vol. 85, No. 9.
[15] Phil Hochmut (2003). Nortel brings IVR to midsize call centres.
[16] Loquendo voice technologies integrated in Trenitalia call centres. uendo_fs_informa_web.pdf.
[17] INTELLECA VOICE & MOBILE:
[18]
[19]
[20] Rachel Lebihan (ZDNet Australia) (2002). Biometrics: the key to call centre fraud?
[21] Lerato L., Mashao D. J. (2002). Hierarchical approach for improving speaker identification. Proc. of 13th Annual Symposium of the PRASA.

[22] Beigi H. S. M. et al. (1999). A hierarchical approach to large-scale speaker recognition. Proceedings of Eurospeech 99, Vol. 5.
[23] Pan Z. (2000). An on-line hierarchical method of speaker identification for large population. Proc. of IEEE Nordic Signal Processing Symposium.
[24] Gunawan W. et al. (2001). PLP coefficients can be quantized at 400 bps. Proceedings of ICASSP 01.
[25] Besacier L. et al. (2000). GSM speech coding and speaker recognition. Proceedings of ICASSP 2000.
