Selection of Software Estimation Models Based on Analysis of Randomization and Spread Parameters in Neural Networks


Cuauhtémoc López-Martín, Arturo Chavoya, and María Elena Meda-Campaña
Information Systems Department, CUCEA, Guadalajara University, Jalisco, Mexico
cuauhtemoc@cucea.udg.mx, achavoya@cucea.udg.mx, emeda@cucea.udg.mx

Abstract - Neural networks (NN) have proved useful for estimating software development effort. A NN can be classified according to its architecture; the feedforward neural network (FFNN) and the general regression neural network (GRNN) are two such architectures. A FFNN uses randomization during training, whereas a GRNN uses a spread parameter for the same goal. Both the randomization and the spread parameter influence the accuracy of the models when they are used to estimate the development effort of software projects. Hence, in this study an analysis of accuracies is carried out based on executions of NN involving random numbers and spread values. The study used two separate samples, one for training the models and the other for validating them (219 and 132 projects, respectively). All projects were developed applying development practices based on the Personal Software Process (PSP). The results of this study suggest that an analysis of the random and spread parameters should be considered in both the training and validation processes when selecting a suitable neural network model.

Keywords: software development effort estimation; neural networks; randomization; spread parameter; statistical regression

1 Introduction

An inadequate estimation of the development effort of software projects can lead to poor planning, low profitability and, consequently, products of poor quality [9]. There are several techniques for estimating development effort, which can be classified into intuition-based and model-based.
The former are partly based on non-mechanical and unconscious processes; the means of deriving an estimate are not explicit and therefore not repeatable [12]. Model-based techniques, in turn, can be classified into statistical techniques and computational intelligence techniques. Fuzzy logic, genetic algorithms, genetic programming, and neural networks belong to the computational intelligence techniques. Neural networks have been applied in several fields such as accounting, finance, health, medicine, engineering, manufacturing, and marketing [10]. Regarding software development effort estimation, the feedforward network is the neural network most commonly used in the effort estimation field [7]. When neural networks have been applied, the following weaknesses have been reported [7]:

1. Some characteristics, such as sample size or number of variables, are not clearly reported.
2. The statistical techniques have not been optimally used.
3. The determination of the parameters of the neural networks is not clearly described.
4. Results obtained from the model building process are not validated on a new data set that was not used for building the models.

Each of these four problems has been addressed in this study as follows:

1. Two data samples were used, one comprising 219 projects developed by 71 persons from 2005 to 2009, and the other comprising 132 projects developed by 38 persons. Both samples were developed based on the same characteristics of the experiment design (described in section 2). The dependent variable is the development effort, whereas the independent variables are related to size and people factors, which are described in section 1.1.
2. The multiple linear regression equation is generated from a global analysis (based on the coefficient of determination) as well as from an individual analysis of its parameters (section 4) to select the significant independent variables that explain the development effort (dependent variable).
This practice has been suggested in [1] and [10].
3. The GRNN contains a parameter named SPREAD which influences the GRNN accuracy. Accuracy values are analyzed for several SPREAD values (section 5). In addition, a FFNN involves randomization during training; an analysis of executions is done in section 6.
4. The analysis of the models is based upon the two following main stages of using a prediction model [4]: (1) model adequacy checking, or model verification (estimation stage), which determines whether the model is adequate to describe the observed (actual) data; if so, then (2) the estimation model is validated using new data, that is, the prediction stage (sections 5 and 6).

Data for this study were obtained by means of the application of a disciplined software development process: the Personal

Software Process (PSP), whose practices and methods have been used by thousands of software engineers for delivering quality products on predictable schedules [5].

1.1 Data description of software projects

Source lines of code (LOC) remain in favor in many models [14]. There are two measures of source code size: physical source lines and logical source statements. The count of physical lines gives the size in terms of the physical length of the code as it appears when printed [11]. In this study, two of the independent variables are New and Changed (N&C) code as well as Reused code, and both were counted as physical lines of code (LOC). N&C is composed of added and modified code. The added code is the LOC written during the current programming process, while the modified code is the LOC changed in the base program when modifying a previously developed program. The base program is the total LOC of the previous project, while the reused code is the LOC of previously developed programs that are used without any modification. A coding standard should establish a consistent set of coding practices that is used as a criterion when judging the quality of the produced code. Hence, it is necessary to always use the same coding and counting standards. The software projects of this study followed these two guidelines. After product size, people factors (such as experience with applications, platforms, languages and tools) have the strongest influence in determining the amount of effort required to develop a software product [2]. Programming language experience, measured in months, is used as the third independent variable in this study. Because the projects of this study were developed inside an academic environment, the effort was measured in minutes, as in [16].

1.2 Accuracy criterion

There are several criteria to evaluate the accuracy of estimation models.
A common criterion for the evaluation of prediction models has been the Magnitude of Relative Error (MRE). In several papers, a MMRE ≤ 0.25 has been considered acceptable. The accuracy criterion used for evaluating the models of this study is the Magnitude of Error Relative to the estimate, or MER, defined as follows:

    MER_i = |Actual Effort_i - Estimated Effort_i| / Estimated Effort_i

The MER value is calculated for each observation i whose effort is estimated. The aggregation of MER over multiple observations (N) can be achieved through the mean (MMER) as follows:

    MMER = (1/N) * sum_{i=1}^{N} MER_i

The accuracy of an estimation technique is inversely proportional to its MMER. MMER gave better results than MMRE in [15] for selecting the best model; this fact is the reason for using MMER in this study.

2 Experimental design

The experiment was done inside a controlled environment having the following characteristics:

1. All of the developers were experienced software development practitioners in the enterprises where they were working.
2. All developers were studying a postgraduate program related to computer science.
3. Each developer wrote seven project assignments; however, only four of them were selected per developer. The first three programs were not considered because they had differences in their process phases and in their logs, whereas the last four programs were based on the same logs and on the following phases: plan, design, design review, code, code review, compile, testing and postmortem.
4. Each developer selected his/her own imperative programming language, whose code standard had the following characteristics: each compiler directive, variable declaration, constant definition, delimiter, assignment statement, and flow control statement was written on its own line of code.
5.
Developers had already received at least one formal course on the object-oriented programming language that they selected to use throughout the assignments, and they had good programming experience in that language. The sample of this study only involved developers whose programs were coded in C++ or Java.
6. Because this study was an experiment, with the aim of reducing bias we did not inform the developers of our experimental goal.
7. Developers filled out a spreadsheet for each task and submitted it electronically for examination.
8. Each course group had no more than fifteen developers.
9. Since a coding standard should establish a consistent set of coding practices that is used as a criterion when judging the quality of the produced code [16], it is necessary to always use the same coding and counting standards. The programs developed in this study followed these guidelines. All of them coincided with the counting standard depicted in Table 1.
10. Developers were constantly supervised and advised about the process.
11. The code written in each program was designed by the developers to be reused in subsequent programs.
12. The kinds of programs developed had a complexity similar to those suggested in [16].
13. Data used in this study came from those developers whose data for all seven exercises were correct, complete, and consistent.
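The MER and MMER criterion of section 1.2 can be sketched as follows. This is a minimal illustration; the effort values below are hypothetical, not data from the study's samples:

```python
# Magnitude of Error Relative to the estimate (MER) and its mean (MMER),
# as defined in section 1.2.

def mer(actual, estimated):
    """MER_i = |Actual Effort_i - Estimated Effort_i| / Estimated Effort_i."""
    return abs(actual - estimated) / estimated

def mmer(actuals, estimateds):
    """MMER = (1/N) * sum of MER_i over the N observations."""
    pairs = list(zip(actuals, estimateds))
    return sum(mer(a, e) for a, e in pairs) / len(pairs)

# Hypothetical efforts in minutes (not data from the study):
actual = [300, 250, 420]
estimated = [280, 260, 400]
print(round(mmer(actual, estimated), 4))  # -> 0.0533
```

Since accuracy is inversely proportional to MMER, the model with the lowest MMER would be preferred under this criterion.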

Table 1. Counting standard

    Count type                   Type
    Physical/logical             Physical

    Statement type               Included
    Executable                   Yes
    Nonexecutable:
      Declarations               Yes (one per text line)
      Compiler directives        Yes (one per text line)
      Comments and blank lines   No
      Delimiters: { and }        Yes

3 Neural networks

An artificial neural network, or simply a neural network (NN), is a technique of computing and signal processing that is inspired by the processing done by a network of biological neurons [13]. The basis for the construction of a neural network is the artificial neuron, which implements a mathematical model of a biological neuron. There is a variety of tasks that a neural network can be trained to perform; the most common are pattern association, pattern recognition, function approximation, automatic control, filtering and beam-forming. The neuron model and the architecture of a neural network describe how a network transforms its input into an output. Two or more neurons can be combined in a layer, and a particular network may contain one or more such layers [6]. Two kinds of neural networks are briefly described in the following two sections.

3.1 Feedforward neural network (FFNN)

The input to an artificial neuron is a vector of numeric values x = {x_1, x_2, ..., x_j, ..., x_m}. The neuron receives the vector and perceives each value, or component of the vector, with a particular independent sensitivity called a weight, w = {w_1, w_2, ..., w_j, ..., w_m}. Upon receiving the input vector, the neuron first calculates its internal state v, and then its output value y. The internal state v of the neuron is calculated as the inner product of the input vector and the weight vector, plus a numerical value b called the bias, as follows:

    v = x · w + b = sum_{j=1}^{m} x_j w_j + b

This function is also known as the transfer function. The output of the neuron is a function of its internal state, y = Φ(v). This function is also known as the activation function.
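The neuron computation just described — the internal state v = sum_{j=1}^{m} x_j w_j + b followed by the activation y = Φ(v) — can be sketched as follows. This is a minimal sketch; the logistic function is used here only as one example of an activation, and the input, weight and bias values are arbitrary:

```python
import math

def neuron_output(x, w, b):
    """Internal state v = sum of x_j * w_j + b, then y = phi(v);
    here phi is the logistic function, which maps v into (0, 1)."""
    v = sum(xj * wj for xj, wj in zip(x, w)) + b
    return 1.0 / (1.0 + math.exp(-v))  # y = phi(v)

y = neuron_output(x=[0.5, -1.0, 2.0], w=[0.4, 0.3, 0.1], b=0.2)
print(y)  # v = 0.3, so y = 1 / (1 + e^-0.3) ≈ 0.5744
```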
The main task of the activation function is to scale all possible values of the internal state into a desired interval of output values, for instance [0, 1] or (-1, 1). A feedforward network consists of layers of neurons: an input layer, an output layer and optionally one or more hidden layers between them. After a network receives its input vector, the layers of neurons process the signal one after another, until the output layer emits an output vector as the response. Neurons in the same layer process the signal in parallel. In a feedforward network the signals between neurons always flow from the input layer toward the output layer. A neural network learns by adjusting its parameters, which are the values of the bias and weights in its neurons. Some neural networks learn constantly during their application, but most of them have two distinct periods: a training period and an application period. During the training period a network processes inputs and adjusts its parameters, guided by some learning algorithm, in order to improve its performance. Once the performance is acceptably accurate, the training period is completed. The parameters of the network are then fixed to the learned values, and the network starts its period of application for the intended task. In the present work, a feedforward neural network with one hidden layer is applied for function approximation.

3.2 General regression neural network (GRNN)

The architecture of a GRNN is the following [3]: input units provide all the X_i variables to all neurons on the second layer. Pattern units receive as input the outputs from the set of input neurons. When a new vector X is entered into the network, it is subtracted from the stored vector representing each cluster center. Either the squares or the absolute values of the differences are summed and fed into a nonlinear activation function.
The activation function normally used is the exponential function. The pattern units' output is passed on to the summation units, which perform a dot product between a weight vector and a vector composed of the signals from the pattern units. The summation unit that generates an estimate of f(X)K sums the outputs of the pattern units weighted by the number of observations each cluster center represents. The summation unit that estimates Y f(X)K multiplies each value from a pattern unit by the sum of the samples Y_j associated with its cluster center X_i. The output unit merely divides Y f(X)K by f(X)K to yield the desired estimate of Y. When the estimation of a vector Y is desired, each component is estimated using one extra summation unit, which uses as its multipliers sums of the samples of that component of Y associated with each cluster center X_i.

4 Significant variables from statistical regression analysis

From a sample of 219 projects, the following multiple linear regression equation considering New and Changed

(N&C), Reused code and Programming Language Experience (PLE) was generated:

    Effort = (1.1025 * N&C) + (… * Reused) + (… * PLE)

This equation has a coefficient of determination r² of …, which corresponds to an acceptable value in software estimation according to [16]. An ANOVA for this equation showed a statistically significant relationship between the variables at the 99% confidence level. To determine whether the model could be simplified, an analysis of the parameters of the multiple linear regression was done. In the results of this analysis (Table 2), the highest p-value among the independent variables was …, belonging to Reused code. Since this p-value was less than 0.05, Reused code is statistically significant at the 95% confidence level. Consequently, the independent variable Reused code was not removed, and this variable had to be considered in the evaluation.

Table 2. Individual analysis of parameters

    Parameter   Estimate   Standard error   t-statistic   p-value
    Constant
    N&C
    Reused
    PLE

5 Analysis of GRNN spread parameter

In the GRNN, a parameter named SPREAD was empirically changed until a suitable value was obtained. If the spread parameter is small, the GRNN function is very steep, so that the neuron with the weight vector closest to the input will have a much larger output than the other neurons, and the GRNN tends to respond with the target vector associated with the nearest input vector. As the spread parameter becomes larger, the function slope of the GRNN becomes smoother and several neurons can respond to an input vector; the network then acts as if it were taking a weighted average between the target vectors whose input vectors are closest to the new input vector. As the spread parameter becomes larger still, more and more neurons contribute to the average, with the result that the network function becomes smoother [6]. The values used for SPREAD were 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20, 25, 30, 35, and 40.
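The GRNN estimate described above — a kernel-weighted average of the stored targets, whose steepness is governed by the spread — can be sketched as follows. This is a minimal sketch with hypothetical project data; the study itself used the MatLab Neural Network Toolbox [6], and grnn_predict is an illustrative helper, not code from the study:

```python
import math

def grnn_predict(x, train_x, train_y, spread):
    """Specht-style GRNN estimate: a kernel-weighted average of the
    stored targets. A small spread makes the response steep (the nearest
    stored vector dominates); a large spread smooths the estimate over
    many pattern units."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                        / (2.0 * spread ** 2))
               for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Hypothetical projects: (N&C, Reused, PLE) -> effort in minutes
train_x = [(100, 20, 12), (200, 0, 6), (150, 50, 24)]
train_y = [300, 520, 380]

for spread in (5, 13, 40):  # a few of the values swept in Table 3
    print(spread, round(grnn_predict((160, 30, 12), train_x, train_y, spread), 1))
```

With a small spread the prediction collapses to the effort of the nearest stored project; as the spread grows, the prediction moves toward a weighted average over all stored projects, mirroring the over-fitting versus smoothing trade-off analyzed in Table 3.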
To select a suitable GRNN based on its spread value, it is necessary to know its behavior when the GRNN is applied to a new dataset; that is, a low spread value could over-fit the network, so that when the GRNN is applied to new data it could obtain a larger (worse) MMER instead of a better one. Table 3 shows the MMER values in both the verification and the validation stages as the spread value is increased. As the spread value increases, the MMER in the validation stage improves until the spread value reaches 13 (where MMER has its best value, 0.23); from a spread value of 25 onward, the MMER gets worse. Hence, we considered as the suitable GRNN the one having a spread value equal to 13.

Table 3. Analysis of MMER by stage based on GRNN spread value

    Stage                         MMER by SPREAD value (5-40)
    Verification (219 projects)
    Validation (132 projects)

6 Analysis of FFNN randomization

A feedforward network with one layer of hidden neurons is sufficient to approximate any function with a finite number of discontinuities on any given interval [13]. Three neurons were used in the input layer of the network: one receives the N&C size, the second receives the number of reused lines of code, and the last receives the developer's programming language experience in months. The output layer consists of only one neuron, indicating the estimated effort. The set of 219 software projects was used to train the network. This group of projects was randomly separated into three subgroups: training, validation and testing. The training group contained 60% of the projects; the input-output pairs of data for these projects were used by the network to adjust its parameters. The next 20% of the data were used to validate the results and identify the point at which the training should stop.
The remaining 20% of the data were randomly chosen to be used as testing data, to make sure that the network performed well with data that were not present during parameter adjustment. These percentages were chosen as suggested in [32]. The number of neurons in the hidden layer was optimized empirically: 1, 2, 3, 4, 5, 10, 15, 20, 25 and 30 neurons were used for training the network. Ten executions were done for each number of neurons because this kind of network involves a random process. The optimized Levenberg-Marquardt algorithm [8] was used to train the network. Table 4 presents the MMER obtained by execution.
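The randomized 60/20/20 partition and the repeated executions described above can be sketched as follows. This is a minimal sketch using only the standard library; split_projects is a hypothetical helper, and the actual network training in the study used the Levenberg-Marquardt algorithm in MatLab:

```python
import random

def split_projects(projects, seed):
    """Randomly partition projects into 60% training, 20% validation and
    20% testing, as in section 6; a different seed mimics the randomization
    that makes each execution of the FFNN differ."""
    rng = random.Random(seed)
    shuffled = projects[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.6)
    n_val = int(n * 0.2)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

projects = list(range(219))  # stand-ins for the 219 training-sample projects
for seed in range(3):        # three of the ten executions per configuration
    train, val, test = split_projects(projects, seed)
    print(len(train), len(val), len(test))  # prints 131 43 45 for each seed
```

Because each execution reshuffles the data (and, in the real network, re-initializes the weights), the resulting MMER varies between executions, which is exactly the dispersion analyzed in Tables 4 and 6.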

Table 4. MMER by execution having different numbers of neurons in the hidden layer

    Neurons per hidden layer   Executions 1-10   Best MMER

Table 4 shows a MMER = 0.24 using from 2 to 30 neurons. Considering that more neurons mean more computation, to select the final number of neurons to be used in this study we proceeded to analyze the frequency of MMER. Table 5 shows that the higher the number of neurons, the higher the MMER dispersion.

Table 6. Frequency of MMER by number of neurons in the hidden layer

    MMER   Number of neurons in the hidden layer   Total of executions

Based on the data from Table 6, we selected as the suitable neural network the one with three neurons in the hidden layer, since it had the highest frequency (seven times) of its best MMER value of …. This trained FFNN was then applied to the other data set of 132 projects, obtaining a MMER of ….

7 Conclusions and future research

This research has analyzed the effect that randomization and the spread parameter have on the selection of the best neural network model. Accuracy was measured based on the Mean Magnitude of Error Relative to the estimate, or MMER. Two kinds of neural networks were analyzed. The randomization involved in a FFNN showed that the higher the number of neurons, the higher the MMER dispersion. Regarding the GRNN spread parameter, our analysis showed that to select a suitable GRNN it is necessary to know its behavior when the GRNN is applied to a new dataset; it is not sufficient to know only the accuracy of the GRNN when it is trained. This analysis was carried out within a software development estimation context, based upon projects developed in a controlled environment and following a disciplined process.
Future work is related to the analysis of the relationship between the statistical characteristics of the data and the accuracy of estimation models.

8 Acknowledgement

The authors of this paper would like to thank CUCEA of Guadalajara University, Jalisco, México, the Programa de Mejoramiento del Profesorado (PROMEP), as well as the Consejo Nacional de Ciencia y Tecnología (Conacyt).

9 References

[1] B. A. Kitchenham, E. Mendes and G. H. Travassos, "Cross versus Within-Company Cost Estimation Studies: A Systematic Review", IEEE Transactions on Software Engineering, Vol. 33, No. 5, 2007.
[2] B. Boehm, C. Abts, A. W. Brown, S. Chulani, B. K. Clark, E. Horowitz, R. Madachy, D. Reifer and B. Steece, COCOMO II, Prentice Hall, 2000.
[3] D. F. Specht, "A General Regression Neural Network", IEEE Transactions on Neural Networks, Vol. 7, No. 3.
[4] D. Montgomery and E. Peck, Introduction to Linear Regression Analysis, John Wiley, 2001.
[5] D. Rombach, J. Münch, A. Ocampo, W. S. Humphrey and D. Burton, "Teaching disciplined software development", Journal of Systems and Software, Elsevier, 2008.
[6] H. Demuth, M. Beale and M. Hagan, MatLab Neural Network Toolbox 6, User's Guide.
[7] H. Park and S. Baek, "An empirical validation of a neural network model for software effort estimation", Expert Systems with Applications, Elsevier, Vol. 35, 2008.
[8] L. Finschi, An Implementation of the Levenberg-Marquardt Algorithm, Eidgenössische Technische Hochschule Zürich, 1996.
[9] M. Jørgensen, "A Preliminary Theory of Judgment-based Project Software Effort Predictions", IRNOP VIII Project Research Conference, ed. by Lixiong Ou and Rodney Turner, Beijing, Publishing House of Electronic Industry, 2006.
[10] M. Paliwal and U. A. Kumar, "Neural networks and statistical techniques: A review of applications", Expert Systems with Applications, Vol. 36.
[11] R. E. Park,
Software Size Measurement: A Framework for Counting Source Statements, Software Engineering Institute, Carnegie Mellon University.
[12] S. Grimstad and M. Jørgensen, "Inconsistency of expert judgment-based estimates of software development effort",

Journal of Systems and Software, Elsevier, Vol. 80, 2007.
[13] S. Haykin, Neural Networks: A Comprehensive Foundation, Second edition, Prentice Hall.
[14] S. G. MacDonell, "Software source code sizing using fuzzy logic modelling", Elsevier, Volume 45, Issue 7, 2003.
[15] T. Foss, E. Stensrud, B. Kitchenham and I. Myrtveit, "A Simulation Study of the Model Evaluation Criterion MMRE", IEEE Transactions on Software Engineering, Vol. 29, No. 11, 2003.
[16] W. Humphrey, A Discipline for Software Engineering, Addison Wesley.


Australian Journal of Basic and Applied Sciences AENSI Journals Australian Journal of Basic and Applied Sciences ISSN:1991-8178 Journal home page: www.ajbasweb.com Feature Selection Technique Using Principal Component Analysis For Improving Fuzzy C-Mean

More information

Grade 6: Correlated to AGS Basic Math Skills

Grade 6: Correlated to AGS Basic Math Skills Grade 6: Correlated to AGS Basic Math Skills Grade 6: Standard 1 Number Sense Students compare and order positive and negative integers, decimals, fractions, and mixed numbers. They find multiples and

More information

Speech Emotion Recognition Using Support Vector Machine

Speech Emotion Recognition Using Support Vector Machine Speech Emotion Recognition Using Support Vector Machine Yixiong Pan, Peipei Shen and Liping Shen Department of Computer Technology Shanghai JiaoTong University, Shanghai, China panyixiong@sjtu.edu.cn,

More information

AUTOMATIC DETECTION OF PROLONGED FRICATIVE PHONEMES WITH THE HIDDEN MARKOV MODELS APPROACH 1. INTRODUCTION

AUTOMATIC DETECTION OF PROLONGED FRICATIVE PHONEMES WITH THE HIDDEN MARKOV MODELS APPROACH 1. INTRODUCTION JOURNAL OF MEDICAL INFORMATICS & TECHNOLOGIES Vol. 11/2007, ISSN 1642-6037 Marek WIŚNIEWSKI *, Wiesława KUNISZYK-JÓŹKOWIAK *, Elżbieta SMOŁKA *, Waldemar SUSZYŃSKI * HMM, recognition, speech, disorders

More information

Mathematics. Mathematics

Mathematics. Mathematics Mathematics Program Description Successful completion of this major will assure competence in mathematics through differential and integral calculus, providing an adequate background for employment in

More information

Using the Attribute Hierarchy Method to Make Diagnostic Inferences about Examinees Cognitive Skills in Algebra on the SAT

Using the Attribute Hierarchy Method to Make Diagnostic Inferences about Examinees Cognitive Skills in Algebra on the SAT The Journal of Technology, Learning, and Assessment Volume 6, Number 6 February 2008 Using the Attribute Hierarchy Method to Make Diagnostic Inferences about Examinees Cognitive Skills in Algebra on the

More information

System Implementation for SemEval-2017 Task 4 Subtask A Based on Interpolated Deep Neural Networks

System Implementation for SemEval-2017 Task 4 Subtask A Based on Interpolated Deep Neural Networks System Implementation for SemEval-2017 Task 4 Subtask A Based on Interpolated Deep Neural Networks 1 Tzu-Hsuan Yang, 2 Tzu-Hsuan Tseng, and 3 Chia-Ping Chen Department of Computer Science and Engineering

More information

The Use of Statistical, Computational and Modelling Tools in Higher Learning Institutions: A Case Study of the University of Dodoma

The Use of Statistical, Computational and Modelling Tools in Higher Learning Institutions: A Case Study of the University of Dodoma International Journal of Computer Applications (975 8887) The Use of Statistical, Computational and Modelling Tools in Higher Learning Institutions: A Case Study of the University of Dodoma Gilbert M.

More information

Knowledge-Based - Systems

Knowledge-Based - Systems Knowledge-Based - Systems ; Rajendra Arvind Akerkar Chairman, Technomathematics Research Foundation and Senior Researcher, Western Norway Research institute Priti Srinivas Sajja Sardar Patel University

More information

Reinforcement Learning by Comparing Immediate Reward

Reinforcement Learning by Comparing Immediate Reward Reinforcement Learning by Comparing Immediate Reward Punit Pandey DeepshikhaPandey Dr. Shishir Kumar Abstract This paper introduces an approach to Reinforcement Learning Algorithm by comparing their immediate

More information

Deploying Agile Practices in Organizations: A Case Study

Deploying Agile Practices in Organizations: A Case Study Copyright: EuroSPI 2005, Will be presented at 9-11 November, Budapest, Hungary Deploying Agile Practices in Organizations: A Case Study Minna Pikkarainen 1, Outi Salo 1, and Jari Still 2 1 VTT Technical

More information

A Case Study: News Classification Based on Term Frequency

A Case Study: News Classification Based on Term Frequency A Case Study: News Classification Based on Term Frequency Petr Kroha Faculty of Computer Science University of Technology 09107 Chemnitz Germany kroha@informatik.tu-chemnitz.de Ricardo Baeza-Yates Center

More information

Detailed course syllabus

Detailed course syllabus Detailed course syllabus 1. Linear regression model. Ordinary least squares method. This introductory class covers basic definitions of econometrics, econometric model, and economic data. Classification

More information

On the Formation of Phoneme Categories in DNN Acoustic Models

On the Formation of Phoneme Categories in DNN Acoustic Models On the Formation of Phoneme Categories in DNN Acoustic Models Tasha Nagamine Department of Electrical Engineering, Columbia University T. Nagamine Motivation Large performance gap between humans and state-

More information

Knowledge Transfer in Deep Convolutional Neural Nets

Knowledge Transfer in Deep Convolutional Neural Nets Knowledge Transfer in Deep Convolutional Neural Nets Steven Gutstein, Olac Fuentes and Eric Freudenthal Computer Science Department University of Texas at El Paso El Paso, Texas, 79968, U.S.A. Abstract

More information

Learning From the Past with Experiment Databases

Learning From the Past with Experiment Databases Learning From the Past with Experiment Databases Joaquin Vanschoren 1, Bernhard Pfahringer 2, and Geoff Holmes 2 1 Computer Science Dept., K.U.Leuven, Leuven, Belgium 2 Computer Science Dept., University

More information

CS Machine Learning

CS Machine Learning CS 478 - Machine Learning Projects Data Representation Basic testing and evaluation schemes CS 478 Data and Testing 1 Programming Issues l Program in any platform you want l Realize that you will be doing

More information

ACTL5103 Stochastic Modelling For Actuaries. Course Outline Semester 2, 2014

ACTL5103 Stochastic Modelling For Actuaries. Course Outline Semester 2, 2014 UNSW Australia Business School School of Risk and Actuarial Studies ACTL5103 Stochastic Modelling For Actuaries Course Outline Semester 2, 2014 Part A: Course-Specific Information Please consult Part B

More information

An OO Framework for building Intelligence and Learning properties in Software Agents

An OO Framework for building Intelligence and Learning properties in Software Agents An OO Framework for building Intelligence and Learning properties in Software Agents José A. R. P. Sardinha, Ruy L. Milidiú, Carlos J. P. Lucena, Patrick Paranhos Abstract Software agents are defined as

More information

Lahore University of Management Sciences. FINN 321 Econometrics Fall Semester 2017

Lahore University of Management Sciences. FINN 321 Econometrics Fall Semester 2017 Instructor Syed Zahid Ali Room No. 247 Economics Wing First Floor Office Hours Email szahid@lums.edu.pk Telephone Ext. 8074 Secretary/TA TA Office Hours Course URL (if any) Suraj.lums.edu.pk FINN 321 Econometrics

More information

A study of speaker adaptation for DNN-based speech synthesis

A study of speaker adaptation for DNN-based speech synthesis A study of speaker adaptation for DNN-based speech synthesis Zhizheng Wu, Pawel Swietojanski, Christophe Veaux, Steve Renals, Simon King The Centre for Speech Technology Research (CSTR) University of Edinburgh,

More information

COMPUTER-ASSISTED INDEPENDENT STUDY IN MULTIVARIATE CALCULUS

COMPUTER-ASSISTED INDEPENDENT STUDY IN MULTIVARIATE CALCULUS COMPUTER-ASSISTED INDEPENDENT STUDY IN MULTIVARIATE CALCULUS L. Descalço 1, Paula Carvalho 1, J.P. Cruz 1, Paula Oliveira 1, Dina Seabra 2 1 Departamento de Matemática, Universidade de Aveiro (PORTUGAL)

More information

SARDNET: A Self-Organizing Feature Map for Sequences

SARDNET: A Self-Organizing Feature Map for Sequences SARDNET: A Self-Organizing Feature Map for Sequences Daniel L. James and Risto Miikkulainen Department of Computer Sciences The University of Texas at Austin Austin, TX 78712 dljames,risto~cs.utexas.edu

More information

Chapters 1-5 Cumulative Assessment AP Statistics November 2008 Gillespie, Block 4

Chapters 1-5 Cumulative Assessment AP Statistics November 2008 Gillespie, Block 4 Chapters 1-5 Cumulative Assessment AP Statistics Name: November 2008 Gillespie, Block 4 Part I: Multiple Choice This portion of the test will determine 60% of your overall test grade. Each question is

More information

Modeling function word errors in DNN-HMM based LVCSR systems

Modeling function word errors in DNN-HMM based LVCSR systems Modeling function word errors in DNN-HMM based LVCSR systems Melvin Jose Johnson Premkumar, Ankur Bapna and Sree Avinash Parchuri Department of Computer Science Department of Electrical Engineering Stanford

More information

Mandarin Lexical Tone Recognition: The Gating Paradigm

Mandarin Lexical Tone Recognition: The Gating Paradigm Kansas Working Papers in Linguistics, Vol. 0 (008), p. 8 Abstract Mandarin Lexical Tone Recognition: The Gating Paradigm Yuwen Lai and Jie Zhang University of Kansas Research on spoken word recognition

More information

AGS THE GREAT REVIEW GAME FOR PRE-ALGEBRA (CD) CORRELATED TO CALIFORNIA CONTENT STANDARDS

AGS THE GREAT REVIEW GAME FOR PRE-ALGEBRA (CD) CORRELATED TO CALIFORNIA CONTENT STANDARDS AGS THE GREAT REVIEW GAME FOR PRE-ALGEBRA (CD) CORRELATED TO CALIFORNIA CONTENT STANDARDS 1 CALIFORNIA CONTENT STANDARDS: Chapter 1 ALGEBRA AND WHOLE NUMBERS Algebra and Functions 1.4 Students use algebraic

More information

TIMSS ADVANCED 2015 USER GUIDE FOR THE INTERNATIONAL DATABASE. Pierre Foy

TIMSS ADVANCED 2015 USER GUIDE FOR THE INTERNATIONAL DATABASE. Pierre Foy TIMSS ADVANCED 2015 USER GUIDE FOR THE INTERNATIONAL DATABASE Pierre Foy TIMSS Advanced 2015 orks User Guide for the International Database Pierre Foy Contributors: Victoria A.S. Centurino, Kerry E. Cotter,

More information

Modeling function word errors in DNN-HMM based LVCSR systems

Modeling function word errors in DNN-HMM based LVCSR systems Modeling function word errors in DNN-HMM based LVCSR systems Melvin Jose Johnson Premkumar, Ankur Bapna and Sree Avinash Parchuri Department of Computer Science Department of Electrical Engineering Stanford

More information

SINGLE DOCUMENT AUTOMATIC TEXT SUMMARIZATION USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY (TF-IDF)

SINGLE DOCUMENT AUTOMATIC TEXT SUMMARIZATION USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY (TF-IDF) SINGLE DOCUMENT AUTOMATIC TEXT SUMMARIZATION USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY (TF-IDF) Hans Christian 1 ; Mikhael Pramodana Agus 2 ; Derwin Suhartono 3 1,2,3 Computer Science Department,

More information

ENME 605 Advanced Control Systems, Fall 2015 Department of Mechanical Engineering

ENME 605 Advanced Control Systems, Fall 2015 Department of Mechanical Engineering ENME 605 Advanced Control Systems, Fall 2015 Department of Mechanical Engineering Lecture Details Instructor Course Objectives Tuesday and Thursday, 4:00 pm to 5:15 pm Information Technology and Engineering

More information

Assignment 1: Predicting Amazon Review Ratings

Assignment 1: Predicting Amazon Review Ratings Assignment 1: Predicting Amazon Review Ratings 1 Dataset Analysis Richard Park r2park@acsmail.ucsd.edu February 23, 2015 The dataset selected for this assignment comes from the set of Amazon reviews for

More information

Lecture 2: Quantifiers and Approximation

Lecture 2: Quantifiers and Approximation Lecture 2: Quantifiers and Approximation Case study: Most vs More than half Jakub Szymanik Outline Number Sense Approximate Number Sense Approximating most Superlative Meaning of most What About Counting?

More information

Circuit Simulators: A Revolutionary E-Learning Platform

Circuit Simulators: A Revolutionary E-Learning Platform Circuit Simulators: A Revolutionary E-Learning Platform Mahi Itagi Padre Conceicao College of Engineering, Verna, Goa, India. itagimahi@gmail.com Akhil Deshpande Gogte Institute of Technology, Udyambag,

More information

Analysis of Emotion Recognition System through Speech Signal Using KNN & GMM Classifier

Analysis of Emotion Recognition System through Speech Signal Using KNN & GMM Classifier IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p- ISSN: 2278-8735.Volume 10, Issue 2, Ver.1 (Mar - Apr.2015), PP 55-61 www.iosrjournals.org Analysis of Emotion

More information

Malicious User Suppression for Cooperative Spectrum Sensing in Cognitive Radio Networks using Dixon s Outlier Detection Method

Malicious User Suppression for Cooperative Spectrum Sensing in Cognitive Radio Networks using Dixon s Outlier Detection Method Malicious User Suppression for Cooperative Spectrum Sensing in Cognitive Radio Networks using Dixon s Outlier Detection Method Sanket S. Kalamkar and Adrish Banerjee Department of Electrical Engineering

More information

Analyzing the Usage of IT in SMEs

Analyzing the Usage of IT in SMEs IBIMA Publishing Communications of the IBIMA http://www.ibimapublishing.com/journals/cibima/cibima.html Vol. 2010 (2010), Article ID 208609, 10 pages DOI: 10.5171/2010.208609 Analyzing the Usage of IT

More information

Cal s Dinner Card Deals

Cal s Dinner Card Deals Cal s Dinner Card Deals Overview: In this lesson students compare three linear functions in the context of Dinner Card Deals. Students are required to interpret a graph for each Dinner Card Deal to help

More information

School Competition and Efficiency with Publicly Funded Catholic Schools David Card, Martin D. Dooley, and A. Abigail Payne

School Competition and Efficiency with Publicly Funded Catholic Schools David Card, Martin D. Dooley, and A. Abigail Payne School Competition and Efficiency with Publicly Funded Catholic Schools David Card, Martin D. Dooley, and A. Abigail Payne Web Appendix See paper for references to Appendix Appendix 1: Multiple Schools

More information

Calibration of Confidence Measures in Speech Recognition

Calibration of Confidence Measures in Speech Recognition Submitted to IEEE Trans on Audio, Speech, and Language, July 2010 1 Calibration of Confidence Measures in Speech Recognition Dong Yu, Senior Member, IEEE, Jinyu Li, Member, IEEE, Li Deng, Fellow, IEEE

More information

Introduction to Ensemble Learning Featuring Successes in the Netflix Prize Competition

Introduction to Ensemble Learning Featuring Successes in the Netflix Prize Competition Introduction to Ensemble Learning Featuring Successes in the Netflix Prize Competition Todd Holloway Two Lecture Series for B551 November 20 & 27, 2007 Indiana University Outline Introduction Bias and

More information

PH.D. IN COMPUTER SCIENCE PROGRAM (POST M.S.)

PH.D. IN COMPUTER SCIENCE PROGRAM (POST M.S.) PH.D. IN COMPUTER SCIENCE PROGRAM (POST M.S.) OVERVIEW ADMISSION REQUIREMENTS PROGRAM REQUIREMENTS OVERVIEW FOR THE PH.D. IN COMPUTER SCIENCE Overview The doctoral program is designed for those students

More information

Computerized Adaptive Psychological Testing A Personalisation Perspective

Computerized Adaptive Psychological Testing A Personalisation Perspective Psychology and the internet: An European Perspective Computerized Adaptive Psychological Testing A Personalisation Perspective Mykola Pechenizkiy mpechen@cc.jyu.fi Introduction Mixed Model of IRT and ES

More information

Algebra 2- Semester 2 Review

Algebra 2- Semester 2 Review Name Block Date Algebra 2- Semester 2 Review Non-Calculator 5.4 1. Consider the function f x 1 x 2. a) Describe the transformation of the graph of y 1 x. b) Identify the asymptotes. c) What is the domain

More information

Mining Association Rules in Student s Assessment Data

Mining Association Rules in Student s Assessment Data www.ijcsi.org 211 Mining Association Rules in Student s Assessment Data Dr. Varun Kumar 1, Anupama Chadha 2 1 Department of Computer Science and Engineering, MVN University Palwal, Haryana, India 2 Anupama

More information

have to be modeled) or isolated words. Output of the system is a grapheme-tophoneme conversion system which takes as its input the spelling of words,

have to be modeled) or isolated words. Output of the system is a grapheme-tophoneme conversion system which takes as its input the spelling of words, A Language-Independent, Data-Oriented Architecture for Grapheme-to-Phoneme Conversion Walter Daelemans and Antal van den Bosch Proceedings ESCA-IEEE speech synthesis conference, New York, September 1994

More information

Implementing a tool to Support KAOS-Beta Process Model Using EPF

Implementing a tool to Support KAOS-Beta Process Model Using EPF Implementing a tool to Support KAOS-Beta Process Model Using EPF Malihe Tabatabaie Malihe.Tabatabaie@cs.york.ac.uk Department of Computer Science The University of York United Kingdom Eclipse Process Framework

More information

Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for

Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email Marilyn A. Walker Jeanne C. Fromer Shrikanth Narayanan walker@research.att.com jeannie@ai.mit.edu shri@research.att.com

More information

Montana Content Standards for Mathematics Grade 3. Montana Content Standards for Mathematical Practices and Mathematics Content Adopted November 2011

Montana Content Standards for Mathematics Grade 3. Montana Content Standards for Mathematical Practices and Mathematics Content Adopted November 2011 Montana Content Standards for Mathematics Grade 3 Montana Content Standards for Mathematical Practices and Mathematics Content Adopted November 2011 Contents Standards for Mathematical Practice: Grade

More information

A cognitive perspective on pair programming

A cognitive perspective on pair programming Association for Information Systems AIS Electronic Library (AISeL) AMCIS 2006 Proceedings Americas Conference on Information Systems (AMCIS) December 2006 A cognitive perspective on pair programming Radhika

More information

A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING

A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING A GENERIC SPLIT PROCESS MODEL FOR ASSET MANAGEMENT DECISION-MAKING Yong Sun, a * Colin Fidge b and Lin Ma a a CRC for Integrated Engineering Asset Management, School of Engineering Systems, Queensland

More information

Dublin City Schools Mathematics Graded Course of Study GRADE 4

Dublin City Schools Mathematics Graded Course of Study GRADE 4 I. Content Standard: Number, Number Sense and Operations Standard Students demonstrate number sense, including an understanding of number systems and reasonable estimates using paper and pencil, technology-supported

More information

Speaker Identification by Comparison of Smart Methods. Abstract

Speaker Identification by Comparison of Smart Methods. Abstract Journal of mathematics and computer science 10 (2014), 61-71 Speaker Identification by Comparison of Smart Methods Ali Mahdavi Meimand Amin Asadi Majid Mohamadi Department of Electrical Department of Computer

More information

Word Segmentation of Off-line Handwritten Documents

Word Segmentation of Off-line Handwritten Documents Word Segmentation of Off-line Handwritten Documents Chen Huang and Sargur N. Srihari {chuang5, srihari}@cedar.buffalo.edu Center of Excellence for Document Analysis and Recognition (CEDAR), Department

More information

An Empirical and Computational Test of Linguistic Relativity

An Empirical and Computational Test of Linguistic Relativity An Empirical and Computational Test of Linguistic Relativity Kathleen M. Eberhard* (eberhard.1@nd.edu) Matthias Scheutz** (mscheutz@cse.nd.edu) Michael Heilman** (mheilman@nd.edu) *Department of Psychology,

More information

Time series prediction

Time series prediction Chapter 13 Time series prediction Amaury Lendasse, Timo Honkela, Federico Pouzols, Antti Sorjamaa, Yoan Miche, Qi Yu, Eric Severin, Mark van Heeswijk, Erkki Oja, Francesco Corona, Elia Liitiäinen, Zhanxing

More information

D Road Maps 6. A Guide to Learning System Dynamics. System Dynamics in Education Project

D Road Maps 6. A Guide to Learning System Dynamics. System Dynamics in Education Project D-4506-5 1 Road Maps 6 A Guide to Learning System Dynamics System Dynamics in Education Project 2 A Guide to Learning System Dynamics D-4506-5 Road Maps 6 System Dynamics in Education Project System Dynamics

More information

PhD in Computer Science. Introduction. Dr. Roberto Rosas Romero Program Coordinator Phone: +52 (222) Ext:

PhD in Computer Science. Introduction. Dr. Roberto Rosas Romero Program Coordinator Phone: +52 (222) Ext: PhD in Computer Science Dr. Roberto Rosas Romero Program Coordinator Phone: +52 (222) 229 2677 Ext: 2677 e-mail: roberto.rosas@udlap.mx Introduction Interaction between computer science researchers and

More information

Model Ensemble for Click Prediction in Bing Search Ads

Model Ensemble for Click Prediction in Bing Search Ads Model Ensemble for Click Prediction in Bing Search Ads Xiaoliang Ling Microsoft Bing xiaoling@microsoft.com Hucheng Zhou Microsoft Research huzho@microsoft.com Weiwei Deng Microsoft Bing dedeng@microsoft.com

More information

Improving software testing course experience with pair testing pattern. Iyad Alazzam* and Mohammed Akour

Improving software testing course experience with pair testing pattern. Iyad Alazzam* and Mohammed Akour 244 Int. J. Teaching and Case Studies, Vol. 6, No. 3, 2015 Improving software testing course experience with pair testing pattern Iyad lazzam* and Mohammed kour Department of Computer Information Systems,

More information

A Reinforcement Learning Variant for Control Scheduling

A Reinforcement Learning Variant for Control Scheduling A Reinforcement Learning Variant for Control Scheduling Aloke Guha Honeywell Sensor and System Development Center 3660 Technology Drive Minneapolis MN 55417 Abstract We present an algorithm based on reinforcement

More information

Design Of An Automatic Speaker Recognition System Using MFCC, Vector Quantization And LBG Algorithm

Design Of An Automatic Speaker Recognition System Using MFCC, Vector Quantization And LBG Algorithm Design Of An Automatic Speaker Recognition System Using MFCC, Vector Quantization And LBG Algorithm Prof. Ch.Srinivasa Kumar Prof. and Head of department. Electronics and communication Nalanda Institute

More information

Towards a Collaboration Framework for Selection of ICT Tools

Towards a Collaboration Framework for Selection of ICT Tools Towards a Collaboration Framework for Selection of ICT Tools Deepak Sahni, Jan Van den Bergh, and Karin Coninx Hasselt University - transnationale Universiteit Limburg Expertise Centre for Digital Media

More information

Data Fusion Through Statistical Matching

Data Fusion Through Statistical Matching A research and education initiative at the MIT Sloan School of Management Data Fusion Through Statistical Matching Paper 185 Peter Van Der Puttan Joost N. Kok Amar Gupta January 2002 For more information,

More information

Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling

Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling Notebook for PAN at CLEF 2013 Andrés Alfonso Caurcel Díaz 1 and José María Gómez Hidalgo 2 1 Universidad

More information

What s in a Step? Toward General, Abstract Representations of Tutoring System Log Data

What s in a Step? Toward General, Abstract Representations of Tutoring System Log Data What s in a Step? Toward General, Abstract Representations of Tutoring System Log Data Kurt VanLehn 1, Kenneth R. Koedinger 2, Alida Skogsholm 2, Adaeze Nwaigwe 2, Robert G.M. Hausmann 1, Anders Weinstein

More information

Math-U-See Correlation with the Common Core State Standards for Mathematical Content for Third Grade

Math-U-See Correlation with the Common Core State Standards for Mathematical Content for Third Grade Math-U-See Correlation with the Common Core State Standards for Mathematical Content for Third Grade The third grade standards primarily address multiplication and division, which are covered in Math-U-See

More information

AP Calculus AB. Nevada Academic Standards that are assessable at the local level only.

AP Calculus AB. Nevada Academic Standards that are assessable at the local level only. Calculus AB Priority Keys Aligned with Nevada Standards MA I MI L S MA represents a Major content area. Any concept labeled MA is something of central importance to the entire class/curriculum; it is a

More information