THE USE OF A FORMANT DIAGRAM IN AUDIOVISUAL SPEECH ACTIVITY DETECTION

K.C. van Bree, H.J.W. Belt
Video Processing Systems Group, Philips Research, Eindhoven, Netherlands
Karl.van.Bree@philips.com, Harm.Belt@philips.com

ABSTRACT

We present an audiovisual approach to the problem of voice activity detection for systems with a single microphone and a single camera with multiple people in the camera's field of view. We aim to obtain a speech activity detection result per person. The approach uses face tracking and lip contour tracking algorithms for the video analysis, and pitch presence detection and formant frequency tracking algorithms for the audio analysis. When we detect speech activity from the audio and find lip activity for more than one person from the video, we check for each person whether the detected vowels are consistent with that person's mouth parameters, to determine whether this person speaks. To this end we make use of the F1-F2 speech formant diagram, in which we propose three vowel groups that are distinctive in both the audio and the video data.

1. INTRODUCTION

For many speech signal processing applications, such as speech telecommunication and speech recognition systems, it is important to be able to detect speech activity. Audio-only speech activity detection algorithms like the one in [1] work well under good acoustic conditions but suffer from false detections when ambient noises are speech-like. Detection techniques based purely on video lip motion, like those in [2, 3], aim to be independent of such noises, but suffer from false detections when people move their lips in facial expressions without talking.

In this paper we adopt an audiovisual approach to the task of speech activity detection. We consider the case of multiple persons in a camera's field of view with only one of them talking at a time, while others could be moving their lips without talking. We aim to obtain a speech activity detection result per person. As summarized in Fig. 1, we propose to correlate speech features with mouth features to establish which person utters a detected vowel.

Figure 1. Flow chart of the audiovisual speech activity detector

In Section 2 we focus on the audio modality. An audio speech activity detector is given, and a two-dimensional formant diagram is introduced in which we propose to distinguish three well-separated vowel groups. Section 3 deals with the video modality. We present a lip detection and tracking algorithm, and we link the three vowel groups that we selected in the audio formant diagram to distinguishable mouth shapes, which we use in the detection. The main contribution of this paper is the specific choice of this diagram and its application in the improved audiovisual activity detector presented in Section 4. Finally, in Section 5 we give our conclusions.

The authors thank the reviewers A.C. den Brinker and R. Jasinschi from Philips Research Laboratories in Eindhoven, and R. Sluijter from the Eindhoven University of Technology for their useful comments.

2. AUDIO VOICE SIGNALS AND DETECTION

2.1. Audio Speech Activity Detection

We first describe the audio-only speech activity detector that we apply. In the first step we divide the signal into frames by windowing. Next, for each frame we check for non-stationary signal activity. If such activity is present, the final step is to verify the presence of pitch. This yields one detection result per audio frame. Let s[n] denote the sampled audio signal and B the audio frame size. We take B = 128 at F_s = 8 kHz.
Let S_w[k] be the M-point discrete Fourier transform (DFT) of the last 2B audio samples after Hanning windowing. We take M = 2B. The power spectrum is P_s[k] = |S_w[k]|^2. Note that, due to the symmetry in the frequency domain, only the first M/2 + 1 points of P_s[k] are relevant.
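As a rough illustration of this front end, the sketch below computes the Hanning-windowed power spectrum of each new frame. It assumes NumPy and a mono signal at 8 kHz; the buffering of the last 2B samples and the function name are our own and not from the paper.

```python
import numpy as np

FS = 8000          # sampling rate (Hz), as in the paper
B = 128            # audio frame size
M = 2 * B          # DFT length, M = 2B

def frame_power_spectrum(history):
    """Power spectrum P_s[k] of the Hanning-windowed last 2B samples.

    `history` holds the most recent 2B samples (current frame plus the
    previous one). Only the first M/2 + 1 bins are returned, since the
    spectrum of a real signal is symmetric.
    """
    assert len(history) == 2 * B
    windowed = history * np.hanning(2 * B)
    S_w = np.fft.rfft(windowed, n=M)      # first M/2 + 1 DFT points
    return np.abs(S_w) ** 2               # P_s[k] = |S_w[k]|^2

# Usage: slide a B-sample hop over the signal, keeping the last 2B samples.
# signal = ...  # 1-D float array sampled at FS
# for start in range(B, len(signal) - B, B):
#     P_s = frame_power_spectrum(signal[start - B:start + B])
```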

We estimate from P_s[k] the stationary background noise part P_n[k] with a minimum statistics method [4]. We detect non-stationary signal activity when the SNR exceeds a threshold θ (we take θ = 8):

$$\frac{\sum_{k=0}^{M/2}\big(P_s[k]-P_n[k]\big)}{\sum_{k=0}^{M/2}P_n[k]} > \theta. \qquad (1)$$

The auto-correlation ρ[l] is calculated as the inverse DFT of P_s[k] - P_n[k]. Let L = {l_min, ..., l_max} be the lag range corresponding to the frequency range of human pitch (between 80 and 500 Hz). As in [1], we assume the presence of pitch when

$$\frac{\rho[l]}{\rho[0]} > \theta_\rho \quad \text{for any } l \in L. \qquad (2)$$

A good value for θ_ρ is 0.75. We detect speech when signal activity is detected according to Eq. 1 and pitch presence is detected according to Eq. 2. To deal with consonants, we keep the detection result positive for a short extended period when pitch is no longer present but Eq. 1 is still satisfied. The audio-only speech detector works well for one person, but it cannot discriminate between different people.

2.2. Voice Signals and Vowel Groups

Next we link speech formant frequencies to vowels. The formant frequencies are denoted by F1, F2, .... In Figure 9 of [5], Peterson and Barney plot, for ten vowels uttered by 76 speakers, the locations in an F1-F2 diagram, and they distinguish ten smoothly shaped regions, one per vowel. The figure demonstrates that the first two formant frequencies alone already allow a reasonable prediction of the uttered vowel. In the remainder of this paper we therefore restrict ourselves to F1 and F2.

To estimate F1 and F2, we first perform DC removal and pre-emphasis filtering. The signal is then Hanning-windowed. For each windowed audio frame a 10th-order auto-regressive (AR) model is calculated [6]. To find F1 we search for the first (lowest) frequency in the range of 200 to 830 Hz at which the AR spectrum peaks with a sufficiently high Q-factor. We do the same for F2 in the range of 500 to 2650 Hz.

Compared to Peterson and Barney, we confine ourselves to only three smoothly shaped vowel regions in the F1-F2 diagram, see Fig. 2. We choose these regions to be well-separated, so that we only consider vowels that are very distinct. The O-group contains vowels like /o/ and /u/, the A-group vowels like /a/ and /æ/, and the I-group vowels near /i/. Our specific choice of these three regions was based on intuition, but it is also supported by the results in Table 1 of [7], where the authors perform a neural network classification of vowels from reflection coefficients. Their results show confusions between vowels that effectively limit reliable classification to three groups, and these groups are similar to ours. In the next section we link the three vowel groups to distinct video lip shape parameters to improve our detector.

Figure 2. Speech vowels in the F1-F2 plane
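The sketch below, assuming NumPy and a frame that has already been pre-emphasized and Hanning-windowed, estimates F1 and F2 by fitting a 10th-order AR model (autocorrelation method with a Levinson-Durbin recursion) and picking AR-pole frequencies in the stated search ranges. The pole-bandwidth test stands in for the paper's Q-factor criterion, and the vowel-group borders at the end are illustrative placeholders only, since the paper does not give the region boundaries of Fig. 2 numerically; all function names are ours.

```python
import numpy as np

FS = 8000
AR_ORDER = 10

def lpc_coefficients(frame, order=AR_ORDER):
    """AR coefficients [1, a1, ..., ap] via the autocorrelation method."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = np.array([1.0])
    err = r[0] + 1e-12                       # small guard against silent frames
    for i in range(1, order + 1):            # Levinson-Durbin recursion
        acc = r[i] + np.dot(a[1:], r[i - 1:0:-1])
        k = -acc / err
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]
        err *= (1.0 - k * k)
    return a

def formant_candidates(a, fs=FS, max_bandwidth=400.0):
    """Frequencies (Hz) of AR poles sharp enough to count as formants.

    The pole bandwidth -fs/pi * ln|root| replaces the Q-factor rule of the
    paper; the 400 Hz limit is an assumed value.
    """
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]        # one pole of each conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    bws = -fs / np.pi * np.log(np.abs(roots))
    return sorted(f for f, b in zip(freqs, bws) if b < max_bandwidth)

def estimate_f1_f2(frame):
    """Lowest qualifying peaks in 200-830 Hz (F1) and 500-2650 Hz (F2)."""
    cands = formant_candidates(lpc_coefficients(frame))
    f1 = next((f for f in cands if 200.0 <= f <= 830.0), None)
    f2 = next((f for f in cands if 500.0 <= f <= 2650.0 and f != f1), None)
    return f1, f2

def vowel_group(f1, f2):
    """Map (F1, F2) to one of the three groups; the borders are placeholders."""
    if f1 is None or f2 is None:
        return None
    if f2 > 2000.0 and f1 < 450.0:
        return "I"          # front, close vowels near /i/
    if f1 > 600.0:
        return "A"          # open vowels like /a/ and /ae/
    if f2 < 1100.0 and f1 < 600.0:
        return "O"          # back, rounded vowels like /o/ and /u/
    return None             # outside the well-separated regions
```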
3. VIDEO LIP SIGNALS AND DETECTION

3.1. Lip Finding and Contour Tracking

We want to extract the vertical mouth opening m and the distance w between the mouth corners, as indicated in the lip contour model of Fig. 3a. First, we locate the faces in an image with a face detection algorithm based on [8]. Then, for each face, we select the mouth region-of-interest (MROI) as the lower part of the face region. An example MROI is shown in Fig. 3b.

Figure 3. (a) Lip contour model; (b) Mouth region of interest (MROI) and search lines for lip edges

The locations of the mouth corners are extracted as follows. First, a binary image is calculated by dynamic thresholding of the MROI. We then look for the blob in the binary image with the most mouth-like shape. Finally, the locations of the mouth corners are found as the left and right extremities of the mouth blob.

The edges of the lips are found on search lines perpendicular to the line between the mouth corners (Fig. 3b). On the q-th search line we apply a function to the p-th pixel value that yields a number R which is large for the red lip area and small for skin, teeth and the inner mouth. Then, for each pixel on the q-th line, the value of R is compared to a threshold in order to yield the four edges of the two lips the line crosses. Finally, the two internal lip edges defining m are found by a second-order polynomial fit on the mouth corners and lip edge points, excluding outliers by a median operation.

We calculate R as

$$R(q,p) = Q(q,p) - \Big\{\max\!\big(Y(q,p) - \phi_{hi}(q),\,0\big) + \max\!\big(-Y(q,p) + \phi_{lo}(q),\,0\big)\Big\}. \qquad (3)$$

Here Q(q,p) is a mapping of the chroma values Cb and Cr according to Q(q,p) = α1 Cr(q,p) + α2 Cb(q,p), where α1 and α2 are chosen to favor the reddish color of the lips. We used α1 = 0.88 and α2 = 0.48. The second term in Eq. 3 is a luminance correction on Q that makes R small for pixels belonging to the (bright) teeth or the (dark) mouth opening. The threshold φ_hi(q) is the average luminance µ_Y(q) of the q-th line. The threshold φ_lo(q) is chosen to be selective only for the darkest pixels and is calculated as φ_lo(q) = µ_Y(q) - 0.8 σ_Y(q), where σ_Y(q) is the standard deviation of Y on the q-th line.
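As a rough sketch of Eq. 3, assuming an 8-bit YCbCr image and NumPy, the function below scores the pixels of one search line. The variable names and the idea of thresholding the returned scores into a boolean lip mask are our own illustration, not the paper's implementation, and the paper does not specify the threshold value.

```python
import numpy as np

ALPHA_1, ALPHA_2 = 0.88, 0.48   # chroma weights from the paper

def lip_score_line(Y, Cb, Cr):
    """Lip-likeness R(q, p) for all pixels p on one search line q (Eq. 3)."""
    Y = Y.astype(float); Cb = Cb.astype(float); Cr = Cr.astype(float)
    Q = ALPHA_1 * Cr + ALPHA_2 * Cb           # favors the reddish lip color
    phi_hi = Y.mean()                         # average luminance of the line
    phi_lo = Y.mean() - 0.8 * Y.std()         # selective for the darkest pixels
    penalty = np.maximum(Y - phi_hi, 0.0) + np.maximum(phi_lo - Y, 0.0)
    return Q - penalty                        # large on lips, small on teeth, skin, mouth opening

# Usage sketch: threshold R to find where the line enters and leaves the lips.
# R = lip_score_line(Y_line, Cb_line, Cr_line)
# lip_mask = R > R_threshold   # R_threshold is an assumed, not published, value
```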

3.2. Video Speech Activity Detection

When speech activity is detected from the audio modality, we exclude the activity of some people in the image by inspecting their video lip activity. The video detector must be conservative in its silence detection: when it detects silence, this should almost certainly be true. For visual speech detection we follow an approach like the one in [3]. Let t denote the index of a video frame. We detect speech activity from the video when

$$\tilde{v}[t] > \theta_v \qquad (4)$$

and speech silence otherwise, with θ_v a small fixed positive threshold that we obtained experimentally. Here ṽ[t] is the time-smoothed version of the vertical mouth velocity v[t] according to

$$\tilde{v}[t] = \alpha\,\tilde{v}[t-1] + (1-\alpha)\,v[t], \qquad (5)$$

with v[t] = m_n[t] - m_n[t-1], m_n[t] = m[t]/µ_w, and µ_w the average horizontal mouth opening in pixels, which serves to normalize m[t]. Unlike [3], we use an asymmetric recursion in Eq. 5 with a fast (rise) response when the mouth is opening and a slow (decay) response when the mouth is closing. In this way the detector is conservative in detecting silence. We also apply the fast response when the vertical mouth opening is completely zero. To achieve this, α is given by

$$\alpha = \begin{cases} \alpha_f & \text{when } m_n[t] > m_n[t-1] \text{ or } m_n[t] = 0, \\ \alpha_s & \text{when } m_n[t] \le m_n[t-1] \text{ and } m_n[t] \ne 0, \end{cases}$$

where α_f = (τ_f F_v - 1)/(τ_f F_v) and α_s = (τ_s F_v - 1)/(τ_s F_v), with F_v the video frame rate, and we choose τ_f = 1/16 s and τ_s = 1/8 s.

A combined audiovisual speech detector for each person in the image is now obtained by multiplying the audio-only detection result from Section 2.1 with the video-only detection result for that person from Eq. 4. With the addition of the video-only detector, some people can be excluded when the audio modality has detected speech activity. This is not sufficient, however; ambiguity remains when people move their lips without actually speaking (e.g. when they smile). We show in the next section that some ambiguous detections can be eliminated by correlating detected audio formant frequencies with video lip shape parameters, which is the main contribution of this paper.
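A minimal sketch of this asymmetric smoother and the video decision of Eq. 4 is given below, in plain Python. The class name, the 25 fps frame rate, the mouth-width normalizer and the value of θ_v are our assumptions; the paper only states that θ_v is small and chosen experimentally.

```python
class VideoLipActivityDetector:
    """Per-person video speech activity from the vertical mouth opening m[t]."""

    def __init__(self, fps=25.0, mu_w=60.0, theta_v=0.01,
                 tau_f=1.0 / 16.0, tau_s=1.0 / 8.0):
        self.mu_w = mu_w                      # average mouth width in pixels (normalizer)
        self.theta_v = theta_v                # small positive threshold (assumed value)
        self.alpha_f = (tau_f * fps - 1.0) / (tau_f * fps)   # fast (rise) response
        self.alpha_s = (tau_s * fps - 1.0) / (tau_s * fps)   # slow (decay) response
        self.prev_mn = 0.0
        self.v_smooth = 0.0

    def update(self, m):
        """Feed the vertical mouth opening of one video frame; return True if speaking."""
        m_n = m / self.mu_w                   # normalized opening
        v = m_n - self.prev_mn                # vertical mouth velocity v[t]
        opening = m_n > self.prev_mn or m_n == 0.0
        alpha = self.alpha_f if opening else self.alpha_s
        self.v_smooth = alpha * self.v_smooth + (1.0 - alpha) * v   # Eq. 5
        self.prev_mn = m_n
        return self.v_smooth > self.theta_v   # Eq. 4
```

Multiplying the boolean returned here with the audio-only decision of Section 2.1 gives the combined per-person detector described above.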
4. AUDIOVISUAL DETECTION

4.1. Lips and Vowel Groups

In the F1-F2 diagram of Fig. 2 we distinguished three vowel groups. Next, we relate these three vowel groups to typical mouth shapes. The vowel height is a feature expressing the vertical position of the tongue relative to the roof of the mouth during vowel sounds. Likewise, the vowel backness expresses the horizontal tongue position relative to the back of the mouth. In [5] vowels are related to F1 and F2, and in the International Phonetic Alphabet (IPA) chart vowels are related to vowel height and backness. More specifically, it can be deduced from [5] and the IPA chart that the first formant frequency F1 is related to vowel height, and the second formant frequency F2 is related to vowel backness. A low vowel from the A-group has a high F1, and a high vowel from the O-group or the I-group has a low F1. Back vowels from the O-group have a low F2, and front vowels from the I-group have a high F2.

From video we cannot measure tongue positions, only lip shapes, but from the literature and our own experience we learned that there is a correlation between vowel backness (hence F2) and the roundedness of the lip shape. We also noticed in experiments a phonetic correlation between vowel height (hence F1) and the vertical mouth opening. These experiments involved the visual inspection of recorded lip images of persons who were pronouncing different isolated vowels. From these experiments we selected a representative lip shape for each of the three audio vowel groups (Fig. 4).

Figure 4. Distinct mouth shapes in vowel groups

4.2. Detection

Fig. 5 shows, for two alternately talking people, the results of the audiovisual activity detector from Section 3.2, obtained by multiplying the result of the audio-only detector with the video-only mouth activity detector. The figure shows ambiguous detections. For example, in the interval t ∈ [19.0, 21.0] only person 1 was talking, but the detector incorrectly finds speech for person 2, who momentarily moved the lips without producing sound.

Figure 5. Audiovisual speech detection for two persons

Figure 6. Audiovisual vowel detection

Using Fig. 4 we can remove some of this ambiguity when the lip shape of exactly one person (and no other) gives clear visual support for the detected formant frequencies. In this article we focus on the visual detection of roundedness because it proved to be the strongest cue. Roundedness is detected when

$$\frac{m}{w} > \theta_r \quad \text{and} \quad w < \theta_w\,\mu_w, \qquad (6)$$

where θ_r = 0.2 and θ_w = 0.8 proved to give conservative results. When we detect from F1 and F2 that the current sound stems from the O-group, and we detect roundedness for exactly one person according to Eq. 6, we set the activity detection result to false for the other persons for as long as the ambiguity lasts.

In Fig. 6 we plot the video lip parameters for both persons, together with the vowels detected from F1 and F2. As shown in the vowel plot, at t = 19.6 a clear vowel from the O-group is recognized. As can be derived from the two roundedness plots and from Eq. 6, this vowel is visually supported by the mouth shape of person 1 and not by that of person 2.
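A small sketch of the roundedness test of Eq. 6 and the resulting disambiguation rule is shown below; the function names and the per-person data structure are our own illustration, not the paper's implementation.

```python
THETA_R = 0.2   # threshold on m / w (Eq. 6)
THETA_W = 0.8   # threshold on w relative to the average mouth width

def is_rounded(m, w, mu_w):
    """Eq. 6: rounded lips when the mouth is relatively tall and narrow."""
    return (m / w) > THETA_R and w < THETA_W * mu_w

def disambiguate(av_flags, mouths, vowel_group):
    """Keep only the person whose lips visually support a detected O-group vowel.

    av_flags:    {person: bool}, combined audio x video detections (Section 3.2)
    mouths:      {person: (m, w, mu_w)}, current lip parameters per person
    vowel_group: "O", "A", "I" or None, from the F1-F2 diagram
    """
    if vowel_group != "O":
        return av_flags                      # roundedness cue only used for O-group vowels
    rounded = [p for p, (m, w, mu_w) in mouths.items() if is_rounded(m, w, mu_w)]
    if len(rounded) != 1:
        return av_flags                      # no unambiguous visual support
    speaker = rounded[0]
    return {p: (flag and p == speaker) for p, flag in av_flags.items()}
```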

From this observation we can remove the ambiguity by setting the detection result for person 2 to false immediately after t = 19.6, until the moment that the ambiguity is gone. The resulting improved detection for the second person is shown in Fig. 7.

Figure 7. Improved audiovisual speech detection

5. CONCLUSIONS

We have presented an audiovisual approach to speech activity detection for systems with one microphone and one camera, and with multiple persons in the camera's field of view. From an audio-only detector it is not clear which person talks. Combining it with a video lip activity detector helps, but still leaves ambiguity when someone moves the lips without talking. We introduced a formant diagram in which we distinguished three well-separated vowel groups that can be linked with video lip shape parameters. We showed that this diagram is a useful tool to remove ambiguous detections and to provide more clarity about which person talks.

6. REFERENCES

[1] P.L. Chu, "Voice-activated AGC for teleconferencing," in Proceedings ICASSP. IEEE, 1996, pp. 929-932.

[2] P. Liu and Z. Wang, "Voice activity detection using visual information," in Proceedings ICASSP. IEEE, 2004, vol. I, pp. 609-613.

[3] D. Sodoyer, B. Rivet, L. Girin, J.-L. Schwartz, and C. Jutten, "An analysis of visual speech information applied to voice activity detection," in Proceedings ICASSP. IEEE, 2006, vol. I, pp. 601-604.

[4] R. Martin, "Noise power spectral density estimation based on optimal smoothing and minimum statistics," IEEE Trans. Speech Audio Processing, vol. 9, no. 5, pp. 504-512, July 2001.

[5] G.E. Peterson and H.L. Barney, "Control methods used in a study of the vowels," Journal of the Acoustical Society of America, vol. 24, no. 2, pp. 175-184, Mar. 1952.

[6] S. Kay, Modern Spectral Estimation, Prentice-Hall, 1988.

[7] S. Kshirsagar and M. Magnenat-Thalmann, "Lip synchronization using linear predictive analysis," in IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2000, vol. 2, pp. 1077-1080.

[8] P. Viola and M.J. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.