A New Kind of Dynamical Pattern Towards Distinction of Two Different Emotion States Through Speech Signals


Akalpita Das
Gauhati University, India

Babul Nath, Purnendu Acharjee, Anilesh Dey
Kaziranga University, India

ABSTRACT: Speech emotion recognition is one of the most popular and widely discussed topics in the present world. Every day, human beings show different types of emotions. In this paper we propose a new technique that can distinguish two emotion states by analysing speech signals. The quantification is done by fitting an ellipsoid to the reconstructed attractor obtained from the speech signals in two different emotional conditions. Our experiments show satisfactory results in this context.

Keywords: Speech Signal, Phase Space Plot, Ellipsoid Fit

Received: 2 July 2017, Revised 10 August 2017, Accepted 17 August 2017

DLINE. All Rights Reserved

1. Introduction

The study of speech emotion is closely related to the speech production structure. Speech acoustics as a whole plays an important role in explaining the meaning of certain acoustic parameters. The flow of air through the vocal tract, driven by breathing, is the basis of all sounds and noises that the human vocal apparatus can produce [1]. An interesting fact to note is that the variety of sounds humans can produce depends on whether the airflow is set into vibration by continuous movement of the glottis, also termed phonation, which generates quasi-regular sounds. In the case of unvoiced or non-periodic sounds, the air passes largely unimpeded through the lower part of the vocal tract and is turned into turbulent sound by friction as the mouth opens. In addition, the acoustic filter properties of the vocal tract are responsible for the quality of the sound produced. Hence, the whole structure of the sound-producing system is a complex one [1-2]. This complexity increases when emotions and feelings are added to the sound. Existing studies focus on how meaningful content becomes evident in the acoustic signal a speaker generates, and on how the listener reacts to that signal. Listeners can perceive and label meaningful content in such a way that the emotion of the speaker is preserved. Linguists are generally interested in the connections between the logic of vocal sounds and the tone in which they are spoken.

Phoneticians like to examine how emotion in general changes the way vocal sounds are produced. Our centre of interest, from a phonetics point of view, is the expressive part of the acoustic waveform of speech and the articulations related to it and their control, not the origin of the expressed emotion. In the 1970s [1], HMMs were successfully applied to automatic speech recognition (ASR), and in recent times researchers have been attempting to design and implement them for more dramatic (emotional) speech synthesis. The arranged training data of the state-clustered HMMs are used for HMM state partitioning to label the speech data for unit selection. A couple of decades later, in the 1990s, Tokuda [2] put forward a fully automatic and parametric speech synthesizer based on HMMs, which is now accepted worldwide. Although both speech synthesis and ASR use HMM technology, many dissimilarities exist between the two applications. HMM-based speech recognition and synthesis systems differ in the type of characteristics of their probabilistic models, but use similar methods to learn the probability distributions. To be more specific, the HMMs are trained by optimizing the probability of the sequence of speech feature vectors given the sequence of sub-word units, e.g. phones. Emotional (dramatic) speech synthesis relates to the estimation of speech parameter sequences from input text with the help of HMMs. For better recognition accuracy, the statistical representation used for ASR aims to normalize away variations in the speech parameters.

In this paper we discuss the applicability of a new technique which is a global analysis of the signal from the reconstruction space rather than from the signal itself; this is the motivation for considering such a global analysis. In this article the long-term dynamics of the speech signals of two healthy subjects (one male, one female) in two different emotions have been studied, and proper quantifications have been made to distinguish the two emotion states. The article is presented sequentially: Section 2 deals with the methodology, which includes the acquisition of the speech signals and their quantification using a unique time delay, and the core findings are highlighted in the conclusion section.

2. Methodology

2.1 Signal Acquisition

Recording Setup
The recording setup was arranged in a semi-anechoic, noise-proof recording studio in the Department of Electronics and Communication, Kaziranga University, Jorhat, Assam. The following components were used in the setup for recording the voices: a Behringer B-2 Pro dual-diaphragm condenser USB studio microphone with a frequency response of 40 Hz to 20 kHz, the Realtek High Definition Audio driver, and a Creative Sound Blaster Live 5.1 sound card. The distance between the speaker and the microphone was 8 inches. The block diagram of the recording setup is shown in Figure 1 below.

Figure 1. Block diagram of recording setup
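As an illustration only (the paper does not specify its processing tools), the following minimal Python sketch shows how a recording made with this setup and exported from Audacity as a WAV file might be loaded for the analysis described in Section 2.2. The file name and the use of NumPy/SciPy are assumptions made for this sketch.

```python
# Illustrative sketch only: load one recorded utterance for later analysis.
# Assumptions: the utterance was exported from Audacity as a mono WAV file;
# the file name "AM1.wav" is hypothetical; NumPy and SciPy are available.
import numpy as np
from scipy.io import wavfile

fs, samples = wavfile.read("AM1.wav")   # fs: sampling rate (Hz), samples: PCM data
signal = samples.astype(np.float64)     # work in floating point
signal /= np.max(np.abs(signal))        # normalise the amplitude to [-1, 1]
print(f"Loaded {len(signal)} samples at {fs} Hz ({len(signal) / fs:.2f} s of speech)")
```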

The specifications considered for recording the sounds are as follows: the recording software used is Audacity, the resolution is 16-bit PCM, the format is mono and the sampling frequency is Hz.

Software Tool for Recording - Audacity
Audacity is free, open-source, cross-platform audio software for multi-track recording and editing. It is available for Windows, Mac OS X, Linux and other operating systems. Audacity can record live audio through a microphone or mixer, or digitize recordings from other media. With some sound cards, and on any recent version of Windows, Audacity can also capture streaming audio, and it can be used to mix and record entire albums. The primary features of Audacity include the following: it can import and export WAV, AIFF and MP3 files; it can mix multiple tracks; and it can record and play back sounds.

2.2 Phase Space Plot and Quantification [3-6]
Let the time series of the experimental data be $\{x_1, x_2, \ldots, x_N\}$, and let the embedding dimension and the delay time for reconstruction of the attractor be $m$ and $\tau$ respectively. The reconstructed phase space is then obtained as

$$X_i = (x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}), \qquad i = 1, 2, \ldots, N-(m-1)\tau,$$

where $X_i$ is a phase point in the $m$-dimensional phase space and $N-(m-1)\tau$ is the number of phase points. The collection $\{X_i\}$ describes the estimated trajectory of the system in the phase space.

A quantification technique in which the points of the reconstructed phase space are gathered in three dimensions is used to differentiate two different phase spaces. Let a continuous signal $\{x_1, x_2, \ldots, x_N\}$ be obtained from some system. The three-dimensional phase space is reconstructed by sub-dividing this signal into three coordinate groups with the same delay $\tau$, giving the phase points

$$P_i = (x_i,\; x_{i+\tau},\; x_{i+2\tau}), \qquad i = 1, 2, \ldots, n,$$

where the number of points $n$ depends on whether $N$ is even or odd. A three-dimensional rotation through the same angle $\theta$ about the $X$, $Y$ and $Z$ axes then modifies this coordinate system,

$$\begin{pmatrix} X' \\ Y' \\ Z' \end{pmatrix} = R_z(\theta)\, R_y(\theta)\, R_x(\theta) \begin{pmatrix} X \\ Y \\ Z \end{pmatrix},$$

where $R_x$, $R_y$ and $R_z$ are the standard rotation matrices about the respective axes, so that a new coordinate system $(X', Y', Z')$ is formed. As a final step, an ellipsoid centred on the main cluster, with three axes of lengths SD1, SD2 and SD3, is fitted to the rotated phase space.
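As a concrete illustration of the reconstruction and quantification described above, a minimal Python sketch is given below. The delay tau, the rotation angle theta, and the use of the standard deviations along the principal axes of the rotated cluster as SD1, SD2 and SD3 are assumptions made for illustration; the paper does not publish its exact implementation or parameter values.

```python
# Sketch of the 3D phase-space reconstruction and ellipsoid quantification of
# Section 2.2. The delay tau, the rotation angle theta, and the use of the
# principal-axis standard deviations as SD1..SD3 are illustrative assumptions.
import numpy as np

def reconstruct_3d(signal, tau):
    """Delay embedding: points (x_i, x_{i+tau}, x_{i+2*tau})."""
    n = len(signal) - 2 * tau
    return np.column_stack((signal[:n],
                            signal[tau:tau + n],
                            signal[2 * tau:2 * tau + n]))

def rotate_equal_angle(points, theta):
    """Rotate the point cloud by the same angle theta about the X, Y and Z axes."""
    c, s = np.cos(theta), np.sin(theta)
    rx = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    ry = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return points @ (rz @ ry @ rx).T

def ellipsoid_axes(points):
    """SD1 >= SD2 >= SD3: spread of the cluster along its principal axes."""
    centred = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centred.T))   # ascending eigenvalues
    return np.sqrt(eigvals[::-1])

def quantify(signal, tau, theta=np.pi / 4):
    """Quantification parameter (SD1 + SD2 + SD3) / 3 used in Section 3."""
    points = rotate_equal_angle(reconstruct_3d(signal, tau), theta)
    sd1, sd2, sd3 = ellipsoid_axes(points)
    return (sd1 + sd2 + sd3) / 3.0
```

Under these assumptions, a more dispersed attractor yields larger SD values, which matches the behaviour reported for the Anger recordings in Section 3.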

3. Results and Discussion

We have taken two subjects, one male and one female, and recorded four statements in two emotions for each of them. The emotions considered are Normal and Anger. Figures AM1 to AM4 show the phase space plots for the male subject in the Anger emotion for the four different statements; here AM stands for Anger Male.

Figure AM1   Figure AM2   Figure AM3   Figure AM4

Figures NM1 to NM4 show the phase space plots for the male subject in the Normal emotion for the four different statements; here NM stands for Normal Male.

Figure NM1   Figure NM2

Figure NM3   Figure NM4

Figures AF1 to AF4 show the phase space plots for the female subject in the Anger emotion for the four different statements; here AF stands for Anger Female.

Figure AF1   Figure AF2   Figure AF3   Figure AF4

Figures NF1 to NF4 show the phase space plots for the female subject in the Normal emotion for the four different statements; here NF stands for Normal Female.

Figure NF1   Figure NF2   Figure NF3   Figure NF4

It is clearly visible from all of the phase space plots that the Normal-emotion plots are more clustered than the Anger-emotion plots. No proven canonical way has yet been determined to eliminate the irregularities of the phase space plots, but this is not of much importance here. Rather, our focus should be on the main cluster, because most of the principal, applicable and necessary information in this situation is hidden in the positioning of the cluster. Thus we quantify these phase plots by fitting an ellipsoid to their respective main clusters. Finally, the quantification parameter, (SD1 + SD2 + SD3)/3, is found by averaging the axes of the ellipsoid. The results of the quantification are shown in Table 1.

It is observed from Table 1 that the quantifying parameters for both subjects, in all four samples, are larger in the Anger emotion than in the Normal emotion. It is also evident that, for the Anger emotion, the quantifying parameter of the male subject is larger than that of the female subject. On the contrary, in the case of the Normal emotion, the quantifying parameter of the female subject is higher than that of the male. Thus the 3D phase space plot with a proper delay is a suitable tool for distinguishing the two different emotions in speech signals.
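As a hypothetical usage example of the Section 2.2 sketch (the file names, the delay value and the quantify helper are illustrative assumptions, not the authors' code), the per-statement comparison behind Table 1 could be carried out roughly as follows:

```python
# Hypothetical usage of the Section 2.2 sketch: compare the quantification
# parameter for one statement in the two emotional states. File names and
# the delay value are illustrative assumptions; quantify() is defined in
# the Section 2.2 sketch above.
import numpy as np
from scipy.io import wavfile

def load_normalised(path):
    _, samples = wavfile.read(path)
    signal = samples.astype(np.float64)
    return signal / np.max(np.abs(signal))

q_anger = quantify(load_normalised("AM1.wav"), tau=10)
q_normal = quantify(load_normalised("NM1.wav"), tau=10)

print(f"anger:  {q_anger:.4f}")
print(f"normal: {q_normal:.4f}")
# Expected pattern from Table 1: the Anger value exceeds the Normal value.
```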

Table 1. Quantification of the 3D phase space plots of speech signals in the Anger and Normal states for the female and male subjects

4. Conclusion

It is to be noted that the average value of SD1, SD2 and SD3 of the fitted ellipsoid is smaller in the Normal state than the corresponding value in the Anger condition for both the male and female subjects. The same analysis shows that, in the Anger state, the same quantifying parameter is smaller for the female subject than for the male subject. Since it is a well-known fact that anger increases the stress of a human being, the quantifying parameter also stands as an indicator of stress reduction. As the sample size is small, the whole study is substantiated by statistical hypothesis testing.

References

[1] Allen, J. B., Rabiner, L. R. (1977). In: Proceedings of the IEEE, 65.
[2] Masuko, T., Tokuda, K., Kobayashi, T., Imai, S. (1996). Speech synthesis using HMMs with dynamic features. IEEE.
[3] Dey, A., Bhattacharya, D. K., Palit, S. K., Tibarewala, D. N. Study of the effect of music and meditation on heart rate variability. Encyclopedia of Information Science and Technology, IGI Global, Category: Music Technologies.
[4] Dey, A., Bhattacharyya, D. K., Tibarewala, D. N., Dey, N., Ashour, A. S., Le, D.-N., Gospodinova, E., Gospodinov, M. International Journal of Interactive Multimedia and Artificial Intelligence, 3 (7).
[5] Das, M., Jana, T., Dutta, P., Banerjee, R., Dey, A., Bhattacharya, D. K., Kanjilal, M. R. (2015). Study the effect of music on HRV signal using 3D Poincaré plot in spherical co-ordinates - a signal processing approach. IEEE International Conference on Communication and Signal Processing, April 2-4, India.
[6] Dey, A., Palit, S. K., Mukherjee, S., Bhattacharya, D. K., Tibarewala, D. N. (2011). A new technique for the classification of pre-meditative and meditative states. IEEE International Conference, ICCIA.
