Exploiting speaker segmentations for automatic role detection. An application to broadcast news documents.
Benjamin Bigot, Isabelle Ferrané, Julien Pinquier
IRIT - Université de Toulouse
118, route de Narbonne, Toulouse Cedex 9 - France
{bigot, ferrane, pinquier}@irit.fr

Abstract

In the field of automatic audiovisual content-based indexing and structuring, finding events like interviews, debates, reports, or live commentaries requires bridging the gap between low-level feature extraction and such high-level event detection. In our work, we consider that detecting speaker roles to enrich interaction sequences between speakers is a first step toward this goal. The generic method we propose follows a data mining approach. We assume that speaker roles can emerge from parameters extracted from speaker segmentations, without taking any prior information into account. Each speaker is then represented by a feature vector carrying temporal, signal and prosodic information. In this paper, we study how methods for dimensionality reduction and classification can help to recognize speaker roles. This method is applied to the corpus of the ESTER2 evaluation campaign, and our best result reaches about 72% of correctly recognized roles, which corresponds to nearly 79% of the speech time.

1. Introduction

Archiving and exploiting masses of audiovisual data requires automatic methods for content indexing, structuring and filtering. It is of first importance to guarantee efficient access to information through complex queries of a high semantic level, and to develop tools which enable relevant browsing through these data masses. Finding events like interviews, debates, reports, or live commentaries in audiovisual documents requires bridging the gap between low-level feature extraction and high-level event detection using relevant descriptors. Although tools which automatically extract low-level features from audio and video are numerous, this is not the case for high-level features.
For the past few years, methods have been developed in several domains in order to bridge this gap, particularly in web document search [17], summarization, information retrieval in sports video [13] and content discovery in audio data [4]. First, we present our main motivations for detecting interactions and speaker roles. Section 3 explains how our work on speaker role detection stands among existing work. The generic method we propose is described in section 4. It is based on the idea that a data mining approach, applied to various low-level features extracted from temporal segmentations as well as from prosodic or basic signal measurements, can make dominant descriptors emerge. To do so, methods for dimensionality reduction are applied, followed by a classification method. Finally, experiments carried out on the audio corpus are presented in sections 5 and 6.

2. Motivations

2.1. Detecting interaction sequences

In order to achieve high-level indexing of audio or video contents, we are particularly interested in detecting and characterizing interaction sequences between speakers. As a matter of fact, these sequences are important clues about content structure because they can (1) delimit other sequences, (2) be part of recurring patterns or (3) be considered as high-level events if they correspond to interviews or debates. Furthermore, developing methods for uncovering interaction sequences can be a way to focus on verbal exchanges that are more informal or less structured, because they may contain spontaneous and conversational speech. We try to bring our contribution to this field by studying the temporal structuring of audiovisual contents, detecting and characterizing interaction zones between speakers. This work has found an applicative framework through the EPAC Project [7] (work conducted within the French ANR Project ANR-06-CIS6-MDCA; © 2010 IEEE, CBMI 2010). Detecting conversational speech sequences
could be a way to anticipate the difficulties met by automatic speech transcription systems on this type of data [12]. Segmenting the data flow into interaction zones could as well bring some interesting clues in the field of named entity recognition, as such sequences generally start with the presenter greeting his guests or introducing them to the listeners, and end with the presenter taking his leave of them. Finally, another interesting aspect of interaction zone detection, and of their characterization as debates for instance, is that it will be helpful in the field of opinion mining.

2.2. Detecting speaker roles

We think that role detection is a central element in content-based indexing, and that studying interactions and extracting information about the main speaker roles will help to go a step further toward high-level event detection. Usually in audiovisual shows, the behaviours adopted by speakers depend on the roles they are in charge of. Some of them can be present all along the show, while others make only a short appearance in it. They may also appear alone or interact with one or more other speakers. Some people act as the anchorman of the sequence and others as guests. The way people interact depends on this, as does the liveliness of the interaction, which can be mild or stronger if speakers are involved in an animated debate. That is why we aim to extract some information about speaker roles and to study speakers' behaviour in terms of interaction. In order to better place our contribution, the next section gives an overview of the state of the art in role recognition.

3. Related work

In work related to role recognition, three main categories of roles have classically been studied: anchorman, journalist, and a third one gathering the other speakers, most of the time called guests or others.
In a content structuring perspective, different approaches are followed.

3.1. State of the art

Role detection is a quite recent research field and, to our knowledge, little work on this topic has been reported in the computer science literature. A first type of research work is based on observing role sequence patterns that can be found in a set of documents recorded from the same program show, while other work builds summaries of broadcast news by detecting the journalist who presents the headlines. In 1999, Stolcke [15] worked on broadcast news document structuring, pointing out relations between changes of speaker roles along each document as well as changes of topics. In 2000, Barzilay [1] presented one of the first results for an automatic role recognition task. This work proposes a parameterization of roles based on lexical and contextual features extracted from audio transcriptions, as well as a first accurate definition of role categories (anchorman, journalist and guest) characterized by means of linguistic considerations. The corpus used for evaluation was composed of 35 recordings of the same broadcast show, i.e. with similar content and structure, which represents 17 hours of audio data. In 2006, Liu [14] proposed two different approaches in order to attribute speaker roles chosen among three categories: anchorman, reporter and others. The first approach exploits Hidden Markov Models; the second one is based on a Maximum Entropy classifier. In both cases, manual transcriptions are used to train role-specific N-gram language models and to predict roles for a set of test documents. Evaluation was conducted on 336 broadcast news shows from different sources (170 hours of audio data), and 77% of the overall number of speech turns were correctly labelled by these two classifiers. More recently, Vinciarelli [16] has proposed two methods for role detection, both applied to speaker segmentations.
The first method exploits intervention duration distributions, while the second one is based on Social Network Analysis. Both have been applied to a very homogeneous corpus consisting of 96 recordings of the same twelve-minute broadcast news show (around 19 hours). Performances reached 85% of the overall duration correctly labelled using 6 different roles (anchorman, second anchorman, guest, interview participant, abstract and meteo). These propositions have been followed by Favre's contribution [9], which integrated a Social Affiliation Network to extract features characterizing speaker interactions and speaker positions within documents. The prediction is achieved with Hidden Markov Models and N-gram language models on a less homogeneous corpus (news, talk shows) and a less structured one (meetings): (1) 36 hours of broadcast news and talk shows and (2) 45 hours of meetings. The performances, 80% on (1) and 45% on (2), highlight the difficulty of detecting roles in low-structured documents.

3.2. About our contribution

Most of the research work mentioned above is based on very homogeneous audio corpora. Our first goal is to propose a generic method able to process any type of document, from different sources and programs, because this diversity has an impact on speaker roles, document durations as well as document structures. In the perspective of data mass structuring, we have to be able to make different types of structure emerge, in order to automatically cluster documents according to their temporal structure. Another reason for proposing a generic method is that we want to act before the transcription step in order to help transcription.
4.3. Parameterization

To our knowledge, it is the first time these types of descriptors are used for speaker role detection. As a result of our investigations in [2], we think that temporal features combined with prosodic features can be characteristic of speaker roles in the context of broadcast news documents, whatever the source and the program are. For each speaker, we extract a set of 34 features. Five of them are based on temporal measurements: his overall speaking time, his speaking span (duration between his first and last segments), his inactivity rate, which is based on the difference between the two previous parameters, the number of his segments, and the ratio of his speaking time to the duration of the show in which he appears. A second subset of 25 features is based on signal energy and is directly extracted from the audio files. For example, we look for characteristics describing the zones of silence: length, number, rate, mean, variance, minimum and maximum. We perform the same computations over the high-energy zones, the signal-to-noise ratio and the output of a telephone detector. The last four features are related to the pitch of the speech signal: pitched zone rate, pitch average, pitch variance and maximal pitch. Depending on the document we are processing, several of these features may be insignificant or strongly correlated for a speaker role detection task. However, since we do not use any prior knowledge about speakers, we have to propose an exhaustive set of features to guarantee that our method is suitable for any document.

4.4. Dimensionality reduction

In this work, reducing the dimensionality of the feature vectors is carried out using two classical methods. First, we apply a Principal Component Analysis (PCA) [5]. We keep the principal components which represent 95% of the variance of the original representation. The second method is the Canonical Discriminant Analysis (CDA) [10].
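The 95%-variance selection rule described above can be illustrated with a minimal PCA sketch; the speaker data here are randomly generated stand-ins for the paper's 34-dimensional feature vectors, not the actual corpus.

```python
import numpy as np

def pca_reduce(X, variance_kept=0.95):
    """Project feature vectors X (n_speakers x n_features) onto the
    principal components that retain the requested share of variance."""
    Xc = X - X.mean(axis=0)                 # centre each feature
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratio = np.cumsum(eigvals) / eigvals.sum()
    # smallest k whose components cover the requested variance
    k = int(np.searchsorted(ratio, variance_kept)) + 1
    return Xc @ eigvecs[:, :k]              # reduced representation

# Hypothetical example: 50 speakers, 34 features as in the paper
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 34))
X_red = pca_reduce(X)
print(X_red.shape)   # 50 speakers in a reduced space
```

On real, correlated speaker features the retained dimensionality would be much lower than on this uncorrelated toy data.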
The number of dimensions is then reduced to (N − 1), where N is the number of classes (role types).

4.5. Classification methods

Many tests involving either unsupervised or supervised classification methods [5] were carried out. The unsupervised classification is based on the assumption that speakers who share the same role should share the same features and thus constitute clusters in the feature space. We applied the K-means algorithm and DBSCAN [8], a density-based clustering method. The supervised classification methods used in this work are Gaussian Mixture Models (GMM), Support Vector Machines (SVM) and k-nearest neighbours (k-NN). The GMM is usually efficient when the learning samples are numerous enough. The SVM is insensitive to differences between the numbers of training examples in the classes; we use this method for two-class problems. The k-NN algorithm offers the advantage of remaining efficient for small training populations. These sets of methods and their properties will allow us to treat different corpora in future work. In the next section we describe the corpus and the creation of the role ground truth.

5. Test data and ground truth

5.1. Corpus

For our experiments, we used several documents taken from the corpus of the ESTER2 evaluation campaign [11] and focused on the development and test sets. As shown in table 1 and table 2, this corpus consists of 46 radio shows recorded from 4 radio stations.

Table 1. The ESTER2-DEV corpus.
Radio      time slot  type     nb. speakers
Fr. Inter  pm         news     20
Fr. Inter  pm         debate   13
Fr. Inter  12am-1pm   debate    4
TVME       pm         news     14
Africa     am         news     13
RFI        am         news     16
RFI        am         news     21

Table 2. The ESTER2-TEST corpus.
Radio      time slot  type     nb. speakers
Fr. Inter  pm         news     20
Fr. Inter  pm         debate   13
Fr. Inter  am         society  10
TVME       pm         news     20
Africa     pm         news      6
Africa     am         news      9
RFI        pm         news      7
RFI        am         news      7

There are 13 different programs in terms of structure, time slots, durations, numbers of speakers, and even in terms of document types, since 5 shows are not broadcast news. Ground truth for speaker segmentation was provided by the organizers of the ESTER2 campaign; therefore we can measure the quality of the automatic speaker segmentations used as input.
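As a reminder not stated in the paper itself, the quality of a speaker segmentation is usually scored with the Diarization Error Rate (DER), which, in its standard formulation, sums the false alarm, missed speech and speaker confusion times over the total speech time:

```latex
\mathrm{DER} = \frac{T_{\text{false alarm}} + T_{\text{missed speech}} + T_{\text{speaker confusion}}}{T_{\text{total speech}}}
```

The 11.35% DER quoted in section 6 would be this ratio computed on the automatic segmentation against the ESTER2 reference.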
5.2. Ground truth for speaker roles

Since we have participated in the speaker diarization task of ESTER2, the speaker segmentation references, called ESTER2-DEV-ref and ESTER2-TEST-ref, are at our disposal. We have created the speaker role reference for these segmentations as well as for their automatic versions, called ESTER2-DEV-auto and ESTER2-TEST-auto. Table 3 shows the number of speakers for every role in the corpus.

Table 3. Number of speakers per role.
                  Anchorman  Journalist  Other
ESTER2-DEV-ref
ESTER2-TEST-ref
ESTER2-TEST-auto

The Anchorman class is less represented than the two other classes, since there is usually only one anchorman in a show. Among the overall number of 583 speakers present in the reference (ESTER2-DEV-ref and ESTER2-TEST-ref), some are not taken into account, because we chose to consider only significant speakers whose speaking activity is higher than 10 seconds. The ESTER2-DEV-ref corpus is used for the training (or the tuning) of the supervised classification methods. The role recognition is performed on the ESTER2-TEST corpus. In order to evaluate the influence of the errors introduced by the automatic speaker diarization, the performances obtained on ESTER2-TEST-ref and on ESTER2-TEST-auto are compared. Tests are also done after applying a PCA or a CDA to reduce the dimensionality of the feature vectors.

6. Experiments and results

First, a basic 3-class recognition process (Anchorman, Journalist, Other) is applied. Results are reported in table 4. The accuracy expresses the proportion of speakers whose role has been correctly labelled.

Table 4. Role recognition results.
Accuracy (%)    Gauss. Mod.  k-NN  SVM
TEST-ref  PCA
TEST-auto PCA
TEST-ref  CDA
TEST-auto CDA

The k-NN classifier reaches higher performances than the GMM and the SVM. Actually, because of the small number of samples in the Anchorman class, we were not able to apply the GMM method but only a single Gaussian model, which may be too simple to model the Journalist and the Other classes.
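The k-NN classifier that performs best here can be sketched in a few lines; the 2-D "reduced feature vectors" and role labels below are hypothetical illustrations, not values from the paper.

```python
from collections import Counter
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Label x with the majority role among its k nearest training speakers."""
    dists = np.linalg.norm(train_X - x, axis=1)  # Euclidean distances to x
    nearest = np.argsort(dists)[:k]              # indices of the k closest vectors
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]            # majority vote

# Hypothetical reduced feature vectors (e.g. after PCA) and their roles
train_X = np.array([[9.0, 0.9], [8.5, 0.8], [1.0, 0.2], [1.2, 0.1], [4.0, 0.5]])
train_y = ["Anchorman", "Anchorman", "Other", "Other", "Journalist"]
print(knn_predict(train_X, train_y, np.array([8.8, 0.85])))  # prints: Anchorman
```

With small training populations such as the Anchorman class here, this instance-based vote needs no density estimate, which is why k-NN remains usable where a GMM cannot be trained.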
Among the different SVM kernels, the best results have been reached using a Gaussian kernel. The two dimensionality reduction methods (PCA and CDA) reach similar results. A positive result is that the Diarization Error Rate (DER) of the speaker diarization tool (11.35%) has no real impact on the performances: results on manual and automatic segmentations are equivalent, which may prove the robustness of our approach.

In a second experiment, a preprocessing step aims at separating punctual speakers from non-punctual ones. We define a punctual speaker as a speaker who appears in only one segment; in this case his span and his speaking activity are equal. In the test corpus, 48 Journalists and 36 Others are punctual speakers (no Anchorman). Thanks to this strategy, performances increase significantly: about 6% for the k-NN classifier and more than 12% for the Gaussian model with the CDA method (see table 5). The best result (accuracy 70.92%) is obtained with the PCA and k-NN combination.

Table 5. Role recognition results using the punctual/non-punctual distinction.
Accuracy (%)    Gauss. Mod.  k-NN  SVM
TEST-auto PCA
TEST-auto CDA

The confusion matrices (tables 6 and 7) describe more precisely the results for the best classifier (k-NN) according to the dimensionality reduction method applied. CDA is the best pre-processing method to isolate the Anchorman class: all its speakers are detected. Conversely, the PCA method provides the best results for discriminating between Journalist and Other.

Table 6. Confusion matrix with PCA.
            Anchorman  Journalist  Other
Anchorman
Journalist
Other

Table 7. Confusion matrix with CDA.
            Anchorman  Journalist  Other
Anchorman
Journalist
Other
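The punctual-speaker test defined above (one segment, so the span between first and last segments equals the total speaking time) can be sketched as follows; the segment lists are hypothetical examples.

```python
def is_punctual(segments):
    """A punctual speaker appears in a single segment, so the span between
    his first and last segments equals his total speaking time.
    `segments` is a non-empty, time-sorted list of (start, end) pairs."""
    total = sum(end - start for start, end in segments)  # speaking activity
    span = segments[-1][1] - segments[0][0]              # first start to last end
    return len(segments) == 1 or abs(span - total) < 1e-9

print(is_punctual([(12.0, 20.5)]))                   # prints: True
print(is_punctual([(12.0, 20.5), (300.0, 310.0)]))   # prints: False
```

Note that perfectly contiguous segments would also satisfy the span test; in practice a diarization system would usually have merged such segments beforehand.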
In a third experiment, we improve the overall results by using another specific strategy after a PCA dimensionality reduction: we apply a Gaussian model classifier for the punctual speakers and the k-NN algorithm for the non-punctual ones. The fusion of these two sub-systems correctly attributes a role to 71.92% of the overall number of speakers. This corresponds to 78.66% of the overall speech time correctly annotated in terms of role.

7. Conclusion and perspectives

In this paper, we describe our contribution to the domain of speaker role recognition for 3 generic roles occurring in broadcast news: Anchorman, Journalist and Other. We assume there are clues about roles in the temporal, prosodic and basic signal features extracted from the audio files and from the speaker segmentations. Evaluations are conducted on 13 hours of radio documents coming from the ESTER2 campaign and corresponding to 13 different radio shows. The performances reach 71.92% of correctly recognized speakers, and 78.66% of the overall document duration is correctly annotated. These good results, similar to the state of the art, have to be highlighted because, first, they are obtained from automatic speaker segmentations and, second, the data come from a heterogeneous corpus. Besides, a complete detection of the speakers from the Anchorman class can be achieved, which is on the one hand quite motivating and above all essential in the context of document structuring. Actually, this role is known in the literature as the most central one, either for document classification or for information retrieval. To go a step further and validate the generic aspect of our approach, it will be necessary in future work to increase the size and the diversity of our corpus, for instance by taking TV shows into account. This will also lead us to extend our parameter set.
Studying the interactions between speakers, given their roles, will help to better characterize the interaction sequences, to make high-level events emerge, and to find patterns thanks to which clustering will be possible. These encouraging results open a way to applications in audiovisual content structuring.

References

[1] R. Barzilay, M. Collins, J. Hirschberg, and S. Whittaker. The rules behind roles: Identifying speaker role in radio broadcasts. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence. AAAI Press / The MIT Press, 2000.
[2] B. Bigot and I. Ferrané. From audio content analysis to conversational speech detection and characterization. In ACM SIGIR Workshop: Searching Spontaneous Conversational Speech (SSCS), Singapore, pages 62-65.
[3] B. Bigot, I. Ferrané, and Z. A. A. Ibrahim. Towards the detection and the characterization of conversational speech zones in audiovisual documents. In International Workshop on Content-Based Multimedia Indexing (CBMI). IEEE.
[4] R. Cai, L. Lu, and A. Hanjalic. Unsupervised content discovery in composite audio. In MULTIMEDIA '05: Proceedings of the 13th Annual ACM International Conference on Multimedia, 2005.
[5] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification (2nd Edition). Wiley-Interscience.
[6] E. El Khoury, C. Senac, and R. André-Obrecht. Speaker diarization: Towards a more robust and portable system. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Honolulu, Hawaii, USA, April 2007. IEEE.
[7] EPAC. The EPAC Project.
[8] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In Second International Conference on Knowledge Discovery and Data Mining. AAAI Press, 1996.
[9] S. Favre, A. Vinciarelli, and A. Dielmann. Automatic role recognition in multiparty recordings using social networks and probabilistic sequential models. In ACM International Conference on Multimedia, Beijing, October 2009.
[10] R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179-188, 1936.
[11] S. Galliano, G. Gravier, and L. Chaubard. The ESTER 2 evaluation campaign for the rich transcription of French radio broadcasts. In INTERSPEECH 2009, pages 6-10, Brighton, UK, 2009.
[12] L. Lamel and J. Gauvain. Alternate phone models for conversational speech. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), volume 1, 2005.
[13] B. Li, J. H. Errico, H. Pan, and I. Sezan. Bridging the semantic gap in sports video retrieval and summarization. Journal of Visual Communication and Image Representation, 15(3), March 2004.
[14] Y. Liu. Initial study on automatic identification of speaker role in broadcast news speech. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 81-84, New York City, USA, 2006. Association for Computational Linguistics.
[15] A. Stolcke, E. Shriberg, D. Hakkani-Tür, G. Tür, Z. Rivlin, and K. Sönmez. Combining words and speech prosody for automatic topic segmentation. In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, pages 61-64, 1999.
[16] A. Vinciarelli. Speakers' role recognition in multiparty audio recordings using social network analysis and duration distribution modeling. IEEE Transactions on Multimedia, 9(6), October 2007.
[17] R. Zhao and W. Grosky. Narrowing the semantic gap - improved text-based web document retrieval using visual features. IEEE Transactions on Multimedia, 4(2), 2002.
More informationCSL465/603 - Machine Learning
CSL465/603 - Machine Learning Fall 2016 Narayanan C Krishnan ckn@iitrpr.ac.in Introduction CSL465/603 - Machine Learning 1 Administrative Trivia Course Structure 3-0-2 Lecture Timings Monday 9.55-10.45am
More informationProduct Feature-based Ratings foropinionsummarization of E-Commerce Feedback Comments
Product Feature-based Ratings foropinionsummarization of E-Commerce Feedback Comments Vijayshri Ramkrishna Ingale PG Student, Department of Computer Engineering JSPM s Imperial College of Engineering &
More informationOn-Line Data Analytics
International Journal of Computer Applications in Engineering Sciences [VOL I, ISSUE III, SEPTEMBER 2011] [ISSN: 2231-4946] On-Line Data Analytics Yugandhar Vemulapalli #, Devarapalli Raghu *, Raja Jacob
More informationMulti-Lingual Text Leveling
Multi-Lingual Text Leveling Salim Roukos, Jerome Quin, and Todd Ward IBM T. J. Watson Research Center, Yorktown Heights, NY 10598 {roukos,jlquinn,tward}@us.ibm.com Abstract. Determining the language proficiency
More informationEvaluation of Usage Patterns for Web-based Educational Systems using Web Mining
Evaluation of Usage Patterns for Web-based Educational Systems using Web Mining Dave Donnellan, School of Computer Applications Dublin City University Dublin 9 Ireland daviddonnellan@eircom.net Claus Pahl
More informationEvaluation of Usage Patterns for Web-based Educational Systems using Web Mining
Evaluation of Usage Patterns for Web-based Educational Systems using Web Mining Dave Donnellan, School of Computer Applications Dublin City University Dublin 9 Ireland daviddonnellan@eircom.net Claus Pahl
More informationMeta Comments for Summarizing Meeting Speech
Meta Comments for Summarizing Meeting Speech Gabriel Murray 1 and Steve Renals 2 1 University of British Columbia, Vancouver, Canada gabrielm@cs.ubc.ca 2 University of Edinburgh, Edinburgh, Scotland s.renals@ed.ac.uk
More informationLearning From the Past with Experiment Databases
Learning From the Past with Experiment Databases Joaquin Vanschoren 1, Bernhard Pfahringer 2, and Geoff Holmes 2 1 Computer Science Dept., K.U.Leuven, Leuven, Belgium 2 Computer Science Dept., University
More informationThe stages of event extraction
The stages of event extraction David Ahn Intelligent Systems Lab Amsterdam University of Amsterdam ahn@science.uva.nl Abstract Event detection and recognition is a complex task consisting of multiple sub-tasks
More informationAUTOMATIC DETECTION OF PROLONGED FRICATIVE PHONEMES WITH THE HIDDEN MARKOV MODELS APPROACH 1. INTRODUCTION
JOURNAL OF MEDICAL INFORMATICS & TECHNOLOGIES Vol. 11/2007, ISSN 1642-6037 Marek WIŚNIEWSKI *, Wiesława KUNISZYK-JÓŹKOWIAK *, Elżbieta SMOŁKA *, Waldemar SUSZYŃSKI * HMM, recognition, speech, disorders
More informationTwitter Sentiment Classification on Sanders Data using Hybrid Approach
IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 17, Issue 4, Ver. I (July Aug. 2015), PP 118-123 www.iosrjournals.org Twitter Sentiment Classification on Sanders
More informationCourse Outline. Course Grading. Where to go for help. Academic Integrity. EE-589 Introduction to Neural Networks NN 1 EE
EE-589 Introduction to Neural Assistant Prof. Dr. Turgay IBRIKCI Room # 305 (322) 338 6868 / 139 Wensdays 9:00-12:00 Course Outline The course is divided in two parts: theory and practice. 1. Theory covers
More informationChinese Language Parsing with Maximum-Entropy-Inspired Parser
Chinese Language Parsing with Maximum-Entropy-Inspired Parser Heng Lian Brown University Abstract The Chinese language has many special characteristics that make parsing difficult. The performance of state-of-the-art
More informationAffective Classification of Generic Audio Clips using Regression Models
Affective Classification of Generic Audio Clips using Regression Models Nikolaos Malandrakis 1, Shiva Sundaram, Alexandros Potamianos 3 1 Signal Analysis and Interpretation Laboratory (SAIL), USC, Los
More informationPRAAT ON THE WEB AN UPGRADE OF PRAAT FOR SEMI-AUTOMATIC SPEECH ANNOTATION
PRAAT ON THE WEB AN UPGRADE OF PRAAT FOR SEMI-AUTOMATIC SPEECH ANNOTATION SUMMARY 1. Motivation 2. Praat Software & Format 3. Extended Praat 4. Prosody Tagger 5. Demo 6. Conclusions What s the story behind?
More informationQuickStroke: An Incremental On-line Chinese Handwriting Recognition System
QuickStroke: An Incremental On-line Chinese Handwriting Recognition System Nada P. Matić John C. Platt Λ Tony Wang y Synaptics, Inc. 2381 Bering Drive San Jose, CA 95131, USA Abstract This paper presents
More informationMining Association Rules in Student s Assessment Data
www.ijcsi.org 211 Mining Association Rules in Student s Assessment Data Dr. Varun Kumar 1, Anupama Chadha 2 1 Department of Computer Science and Engineering, MVN University Palwal, Haryana, India 2 Anupama
More informationA NOVEL SCHEME FOR SPEAKER RECOGNITION USING A PHONETICALLY-AWARE DEEP NEURAL NETWORK. Yun Lei Nicolas Scheffer Luciana Ferrer Mitchell McLaren
A NOVEL SCHEME FOR SPEAKER RECOGNITION USING A PHONETICALLY-AWARE DEEP NEURAL NETWORK Yun Lei Nicolas Scheffer Luciana Ferrer Mitchell McLaren Speech Technology and Research Laboratory, SRI International,
More informationThe University of Amsterdam s Concept Detection System at ImageCLEF 2011
The University of Amsterdam s Concept Detection System at ImageCLEF 2011 Koen E. A. van de Sande and Cees G. M. Snoek Intelligent Systems Lab Amsterdam, University of Amsterdam Software available from:
More informationRule Learning With Negation: Issues Regarding Effectiveness
Rule Learning With Negation: Issues Regarding Effectiveness S. Chua, F. Coenen, G. Malcolm University of Liverpool Department of Computer Science, Ashton Building, Ashton Street, L69 3BX Liverpool, United
More informationRole of Pausing in Text-to-Speech Synthesis for Simultaneous Interpretation
Role of Pausing in Text-to-Speech Synthesis for Simultaneous Interpretation Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Alistair Conkie AT&T abs - Research 180 Park Avenue, Florham Park,
More informationReducing Features to Improve Bug Prediction
Reducing Features to Improve Bug Prediction Shivkumar Shivaji, E. James Whitehead, Jr., Ram Akella University of California Santa Cruz {shiv,ejw,ram}@soe.ucsc.edu Sunghun Kim Hong Kong University of Science
More information*Net Perceptions, Inc West 78th Street Suite 300 Minneapolis, MN
From: AAAI Technical Report WS-98-08. Compilation copyright 1998, AAAI (www.aaai.org). All rights reserved. Recommender Systems: A GroupLens Perspective Joseph A. Konstan *t, John Riedl *t, AI Borchers,
More informationDeep Neural Network Language Models
Deep Neural Network Language Models Ebru Arısoy, Tara N. Sainath, Brian Kingsbury, Bhuvana Ramabhadran IBM T.J. Watson Research Center Yorktown Heights, NY, 10598, USA {earisoy, tsainath, bedk, bhuvana}@us.ibm.com
More informationDisambiguation of Thai Personal Name from Online News Articles
Disambiguation of Thai Personal Name from Online News Articles Phaisarn Sutheebanjard Graduate School of Information Technology Siam University Bangkok, Thailand mr.phaisarn@gmail.com Abstract Since online
More informationBODY LANGUAGE ANIMATION SYNTHESIS FROM PROSODY AN HONORS THESIS SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE OF STANFORD UNIVERSITY
BODY LANGUAGE ANIMATION SYNTHESIS FROM PROSODY AN HONORS THESIS SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE OF STANFORD UNIVERSITY Sergey Levine Principal Adviser: Vladlen Koltun Secondary Adviser:
More informationPostprint.
http://www.diva-portal.org Postprint This is the accepted version of a paper presented at CLEF 2013 Conference and Labs of the Evaluation Forum Information Access Evaluation meets Multilinguality, Multimodality,
More informationLearning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models
Learning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models Stephan Gouws and GJ van Rooyen MIH Medialab, Stellenbosch University SOUTH AFRICA {stephan,gvrooyen}@ml.sun.ac.za
More informationDialog Act Classification Using N-Gram Algorithms
Dialog Act Classification Using N-Gram Algorithms Max Louwerse and Scott Crossley Institute for Intelligent Systems University of Memphis {max, scrossley } @ mail.psyc.memphis.edu Abstract Speech act classification
More informationData Fusion Models in WSNs: Comparison and Analysis
Proceedings of 2014 Zone 1 Conference of the American Society for Engineering Education (ASEE Zone 1) Data Fusion s in WSNs: Comparison and Analysis Marwah M Almasri, and Khaled M Elleithy, Senior Member,
More informationMulti-modal Sensing and Analysis of Poster Conversations toward Smart Posterboard
Multi-modal Sensing and Analysis of Poster Conversations toward Smart Posterboard Tatsuya Kawahara Kyoto University, Academic Center for Computing and Media Studies Sakyo-ku, Kyoto 606-8501, Japan http://www.ar.media.kyoto-u.ac.jp/crest/
More informationMining Student Evolution Using Associative Classification and Clustering
Mining Student Evolution Using Associative Classification and Clustering 19 Mining Student Evolution Using Associative Classification and Clustering Kifaya S. Qaddoum, Faculty of Information, Technology
More informationVariations of the Similarity Function of TextRank for Automated Summarization
Variations of the Similarity Function of TextRank for Automated Summarization Federico Barrios 1, Federico López 1, Luis Argerich 1, Rosita Wachenchauzer 12 1 Facultad de Ingeniería, Universidad de Buenos
More informationTime series prediction
Chapter 13 Time series prediction Amaury Lendasse, Timo Honkela, Federico Pouzols, Antti Sorjamaa, Yoan Miche, Qi Yu, Eric Severin, Mark van Heeswijk, Erkki Oja, Francesco Corona, Elia Liitiäinen, Zhanxing
More informationBUILDING CONTEXT-DEPENDENT DNN ACOUSTIC MODELS USING KULLBACK-LEIBLER DIVERGENCE-BASED STATE TYING
BUILDING CONTEXT-DEPENDENT DNN ACOUSTIC MODELS USING KULLBACK-LEIBLER DIVERGENCE-BASED STATE TYING Gábor Gosztolya 1, Tamás Grósz 1, László Tóth 1, David Imseng 2 1 MTA-SZTE Research Group on Artificial
More informationCross Language Information Retrieval
Cross Language Information Retrieval RAFFAELLA BERNARDI UNIVERSITÀ DEGLI STUDI DI TRENTO P.ZZA VENEZIA, ROOM: 2.05, E-MAIL: BERNARDI@DISI.UNITN.IT Contents 1 Acknowledgment.............................................
More informationA cognitive perspective on pair programming
Association for Information Systems AIS Electronic Library (AISeL) AMCIS 2006 Proceedings Americas Conference on Information Systems (AMCIS) December 2006 A cognitive perspective on pair programming Radhika
More informationEvolutive Neural Net Fuzzy Filtering: Basic Description
Journal of Intelligent Learning Systems and Applications, 2010, 2: 12-18 doi:10.4236/jilsa.2010.21002 Published Online February 2010 (http://www.scirp.org/journal/jilsa) Evolutive Neural Net Fuzzy Filtering:
More informationSemi-supervised methods of text processing, and an application to medical concept extraction. Yacine Jernite Text-as-Data series September 17.
Semi-supervised methods of text processing, and an application to medical concept extraction Yacine Jernite Text-as-Data series September 17. 2015 What do we want from text? 1. Extract information 2. Link
More informationLearning Methods for Fuzzy Systems
Learning Methods for Fuzzy Systems Rudolf Kruse and Andreas Nürnberger Department of Computer Science, University of Magdeburg Universitätsplatz, D-396 Magdeburg, Germany Phone : +49.39.67.876, Fax : +49.39.67.8
More informationMulti-View Features in a DNN-CRF Model for Improved Sentence Unit Detection on English Broadcast News
Multi-View Features in a DNN-CRF Model for Improved Sentence Unit Detection on English Broadcast News Guangpu Huang, Chenglin Xu, Xiong Xiao, Lei Xie, Eng Siong Chng, Haizhou Li Temasek Laboratories@NTU,
More informationIEEE Proof Print Version
IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING 1 Automatic Intonation Recognition for the Prosodic Assessment of Language-Impaired Children Fabien Ringeval, Julie Demouy, György Szaszák, Mohamed
More informationComparison of EM and Two-Step Cluster Method for Mixed Data: An Application
International Journal of Medical Science and Clinical Inventions 4(3): 2768-2773, 2017 DOI:10.18535/ijmsci/ v4i3.8 ICV 2015: 52.82 e-issn: 2348-991X, p-issn: 2454-9576 2017, IJMSCI Research Article Comparison
More informationUsing Web Searches on Important Words to Create Background Sets for LSI Classification
Using Web Searches on Important Words to Create Background Sets for LSI Classification Sarah Zelikovitz and Marina Kogan College of Staten Island of CUNY 2800 Victory Blvd Staten Island, NY 11314 Abstract
More informationBAUM-WELCH TRAINING FOR SEGMENT-BASED SPEECH RECOGNITION. Han Shu, I. Lee Hetherington, and James Glass
BAUM-WELCH TRAINING FOR SEGMENT-BASED SPEECH RECOGNITION Han Shu, I. Lee Hetherington, and James Glass Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge,
More informationThe NICT/ATR speech synthesis system for the Blizzard Challenge 2008
The NICT/ATR speech synthesis system for the Blizzard Challenge 2008 Ranniery Maia 1,2, Jinfu Ni 1,2, Shinsuke Sakai 1,2, Tomoki Toda 1,3, Keiichi Tokuda 1,4 Tohru Shimizu 1,2, Satoshi Nakamura 1,2 1 National
More informationPREDICTING SPEECH RECOGNITION CONFIDENCE USING DEEP LEARNING WITH WORD IDENTITY AND SCORE FEATURES
PREDICTING SPEECH RECOGNITION CONFIDENCE USING DEEP LEARNING WITH WORD IDENTITY AND SCORE FEATURES Po-Sen Huang, Kshitiz Kumar, Chaojun Liu, Yifan Gong, Li Deng Department of Electrical and Computer Engineering,
More informationTHE ROLE OF DECISION TREES IN NATURAL LANGUAGE PROCESSING
SISOM & ACOUSTICS 2015, Bucharest 21-22 May THE ROLE OF DECISION TREES IN NATURAL LANGUAGE PROCESSING MarilenaăLAZ R 1, Diana MILITARU 2 1 Military Equipment and Technologies Research Agency, Bucharest,
More informationThe Karlsruhe Institute of Technology Translation Systems for the WMT 2011
The Karlsruhe Institute of Technology Translation Systems for the WMT 2011 Teresa Herrmann, Mohammed Mediani, Jan Niehues and Alex Waibel Karlsruhe Institute of Technology Karlsruhe, Germany firstname.lastname@kit.edu
More informationarxiv: v2 [cs.cv] 30 Mar 2017
Domain Adaptation for Visual Applications: A Comprehensive Survey Gabriela Csurka arxiv:1702.05374v2 [cs.cv] 30 Mar 2017 Abstract The aim of this paper 1 is to give an overview of domain adaptation and
More informationBYLINE [Heng Ji, Computer Science Department, New York University,
INFORMATION EXTRACTION BYLINE [Heng Ji, Computer Science Department, New York University, hengji@cs.nyu.edu] SYNONYMS NONE DEFINITION Information Extraction (IE) is a task of extracting pre-specified types
More informationTRANSFER LEARNING IN MIR: SHARING LEARNED LATENT REPRESENTATIONS FOR MUSIC AUDIO CLASSIFICATION AND SIMILARITY
TRANSFER LEARNING IN MIR: SHARING LEARNED LATENT REPRESENTATIONS FOR MUSIC AUDIO CLASSIFICATION AND SIMILARITY Philippe Hamel, Matthew E. P. Davies, Kazuyoshi Yoshii and Masataka Goto National Institute
More informationUnsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model
Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model Xinying Song, Xiaodong He, Jianfeng Gao, Li Deng Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A.
More informationThe taming of the data:
The taming of the data: Using text mining in building a corpus for diachronic analysis Stefania Degaetano-Ortlieb, Hannah Kermes, Ashraf Khamis, Jörg Knappen, Noam Ordan and Elke Teich Background Big data
More informationThe 9 th International Scientific Conference elearning and software for Education Bucharest, April 25-26, / X
The 9 th International Scientific Conference elearning and software for Education Bucharest, April 25-26, 2013 10.12753/2066-026X-13-154 DATA MINING SOLUTIONS FOR DETERMINING STUDENT'S PROFILE Adela BÂRA,
More informationMatching Similarity for Keyword-Based Clustering
Matching Similarity for Keyword-Based Clustering Mohammad Rezaei and Pasi Fränti University of Eastern Finland {rezaei,franti}@cs.uef.fi Abstract. Semantic clustering of objects such as documents, web
More information(Sub)Gradient Descent
(Sub)Gradient Descent CMSC 422 MARINE CARPUAT marine@cs.umd.edu Figures credit: Piyush Rai Logistics Midterm is on Thursday 3/24 during class time closed book/internet/etc, one page of notes. will include
More informationMandarin Lexical Tone Recognition: The Gating Paradigm
Kansas Working Papers in Linguistics, Vol. 0 (008), p. 8 Abstract Mandarin Lexical Tone Recognition: The Gating Paradigm Yuwen Lai and Jie Zhang University of Kansas Research on spoken word recognition
More informationKnowledge Transfer in Deep Convolutional Neural Nets
Knowledge Transfer in Deep Convolutional Neural Nets Steven Gutstein, Olac Fuentes and Eric Freudenthal Computer Science Department University of Texas at El Paso El Paso, Texas, 79968, U.S.A. Abstract
More informationSpecification and Evaluation of Machine Translation Toy Systems - Criteria for laboratory assignments
Specification and Evaluation of Machine Translation Toy Systems - Criteria for laboratory assignments Cristina Vertan, Walther v. Hahn University of Hamburg, Natural Language Systems Division Hamburg,
More information