DNN Low Level Reinitialization: A Method for Enhancing Learning in Deep Neural Networks through Knowledge Transfer

Size: px
Start display at page:

Download "DNN Low Level Reinitialization: A Method for Enhancing Learning in Deep Neural Networks through Knowledge Transfer"

Transcription

1 DNN Low Level Reinitialization: A Method for Enhancing Learning in Deep Neural Networks through Knowledge Transfer Lyndon White ( ) Index Terms Deep Belief Networks, Deep Neural Networks, Neural Networks, Knowledge Transfer, Image Recognition, Digit Recognition, Handwriting Recognition, Representation Learning. Abstract It is a common problem for there to be a shortage of training data for the target domain (e.g. letter recognition), but plenty of data for a related domain (e.g. digit recognition). This paper presents a novel approach, Deep Neural Network Low Level Reinitialization for making use of auxiliary, out-of-domain. unlabelled training data to enhance the performance of a deep neural network in cases such as this. This new method makes use of high quality features learnt in the related domain to aid training and classification. The application of this approach is shown here for digit recognition, where improvement is found over merely using the limited target domain data alone.

2 Lyndon White 32B Pollard Street Glendalough, WA 6016 W/Prof John Dell Faculty of Engineering, Computing and Mathematics The University of Western Australia 35 Stirling Highway Crawley, WA 6009 Dear Professor Dell, I submit to you this paper entitled DNN Low Level Reinitialization: A Method for Enhancing Learning in Deep Neural Networks through Knowledge Transfer, together with its copious appendices, in partial fulfillment of the requirements for the award of Bachelor of Engineering. Yours Sincerely Lyndon White October 26, 2014

3 iii Acknowledgments I would like to express my sincere gratitude toward my supervisor, Dr Roberto Togneri, for his ongoing support and advice throughout this project, and for offering the chance to research in this fascinating area. In particular, for his prompt responses to s and for helping me out with meetings on short notice. I must give credit to those who provided the hardware for this project. The final collection of data alone involved training 98,630 neural networks. This was only possible because of the vast amount of computing power I was given access to by the UWA Signals Processing Lab, and by the University Computer Club (UCC). UCC provided numerous services, beyond just the computational power, that made the completion of this project much smoother. ivec@uwa provided me with access to their high resolution tiled display. I was able to get more data analysis done in 3 days on this tremendous monitor than I had done in 3 weeks on a desktop PC. The ivec visualization lab is a fantastic resource to have access to for anyone doing a project involving a large amount of data. I acknowledge the great introduction I had to the field of machine learning via the online, archived copy, of Geoffrey Hinton s course Neural Networks for Machine Learning. This course remains freely available online from It is approximately equivalent in content, to a 6 point unit at UWA, and I strongly recommend it to anyone interested in learning more about neural networks. I am grateful for the support of the Stack-Exchange online communities, in particular TEX.SE ( for invaluable tips and tricks that have seen this work through to its final presented form. I would like to show my appreciation for my peers completing their projects simultaneously with me. In particular: Sam Moore, David Gow, Rowan Ashwin and Varun Gobole. I have been in good company throughout the year. I would like to highlight Varun, who is the only person did not give me a blank look when I explained my project to him, but rather asked for more details. I must also thank my close friend Roland Kerr, for the sharing of his experiences from completing his honors project over the last 18 months and the constant advice and assistance which has stemmed from him. I would like to thank all my friends and family who have kept me happy and well. Most of all, my very special thanks to my beautiful wife Isobel; whom has kept me sane-ish through another year, Who looked after me through long days, and has actual grammatical skills. My love is for her always.

4 iv CONTENTS 1 Introduction The Deep Neural Network Low Level Reinitialization Algorithm Related Work Improving knn and SVM Accuracy by Training on Auxiliary Data Sources Zero Shot Learning One Shot Learning Domain Adaptation on Amazon Product Reviews Multimodal Deep Learning Background Deep Neural Networks Deep Belief Networks Other Deep Neural Networks The DNN Low Level Reinitialization Algorithm Method Transfer of Features for Better Structure Theoretical Justification for Reinitializing Layers The Importance of Backpropagation Empirical Evaluation Methods Experimental Setup and Evaluation The Control Experiments Empirical Results Improvement Frequency Expected Improvement: The Requirement for Sufficient Target Domain Training Data Performance in Deeper and Wider Topologies DNN LLR acts as a Superior Regularizer Best and Worst Domain Transitions The Consequences of Adding a Target Dataset Pretraining Step to the Reinitialization Process Conclusion Further Work Applications Closing Remarks References 25

5 v Appendix A: Nomenclature 28 A.1 Abbreviations A.2 Terms Appendix B: Detailed Experimental Setup 29 B.1 Source and Target Domain Datasets B.2 Experimental Parameters B.3 Incremental Training and Evaluation B.4 Evaluation B.5 MNIST Subdivisions Used Appendix C: The Transformation of Subdatasets such that the Whole Dataset is Standardized 35 C.1 Motivation C.2 Derivation of a Method C.3 Conclusion Appendix D: On the Performance of the Linear Classifier Control 38 Appendix E: DBN Reuse Is it necessary to reinitialize the bottom layer. 39 E.1 Method E.2 Results E.3 Conclusion Appendix F: msda Reinitialization 43 F.1 Introduction Appendix G: Cross Re-representation Based Techniques 48 G.1 Introduction G.2 Experimental Setup G.3 DBN Re-representation G.4 msda Re-representation G.5 Conclusion Appendix H: The YADLF Framework 54 H.1 Features Appendix I: Data Analysis and Presentation Tools 56

6 1 1 INTRODUCTION Primates, including humans, are known to excel at learning to do new tasks that are similar to tasks they already know[1], their brains are made up of multiple layers of linked neurons[2]. Deep Neural Networks (DNNs 1 ) is a machine learning model which imitates these structures. Such machine learning models learn to solve problems within a particular domain particular types of problems or tasks with a particular skill set. It is expected that these deep artificial neural networks will be able to transfer knowledge from one domain to a related one, just as humans can[3][4][5]. Learning to read numbers should help to learn to read letters etc. It is often the case that there is plenty of training data for one domain, but little for another domain. For example, large corpora of training data have been collected for use in training machine learners to recognize handwritten digits. This was collected to train machine classifiers for sorting mail by postcode. Much less data is available for handwritten letters. Due to this lack of data, many optical character recognition systems offer recognition of all handwritten or typed digits, but only of typed text. A new model called Deep Neural Network Low Level Reinitialization (DNN LLR) was developed to help in these cases. DNN LLR is useful where there is limited training data for the targeted task, but plenty for a related task. The training data for the related task domain is the additional source dataset for the DNN LLR model. The limited training data for the task actually being solved forms the target dataset. DNN LLR makes use of the auxiliary source data, together with the limited quantity of target data, to solve problems in the target domain. Additional source data is even more plentiful for DNN LLR than for some other knowledge transfer algorithms. DNN LLR does not require labelled source domain training cases. This is a highly desirable trait in machine learning algorithms to be able to utilize unlabelled data[6]. There are huge quantities of publicly available unlabelled data, such as the over 97 million appropriately creative-commons licensed images on Flickr[7]. Contrast this to the the 14 million labelled images available from ImageNet[8], the largest comparable labelled dataset[9]. Being able to make use of knowledge from so many large and readily available sources makes DNN LLR a particularly useful learning algorithm. 1.1 The Deep Neural Network Low Level Reinitialization Algorithm The Deep Neural Network Low Level Reinitialization (DNN LLR) algorithm improves performance via the transfer of high level abstract knowledge. Low level learning about the source domain inputs is discarded. The higher level relationships however are maintained. This can be explained through an analogy of sports learning. When teaching children sports it is important to help them transfer tactical solutions between different games[10]. Low level skills like dribbling and kicking are not as transferable as higher level skills such as tactics and decision-making[10]. Difficulties in transferring knowledge are 1. While each abbreviation is introduced in turn, for the readers reference, a summary of abbreviation and terminology has been prepared and can be found in section A.

7 2 attributed to students being put off by the low level differences, such as using a stick instead of kicking[10]. In a neural network the capacity to forget the distracting low level differences exists. This information is stored in the lowest layer of the network. Thus it can be removed, by reinitializing the bottom layer while preserving the cross-applicable strategic knowledge stored in higher layers DNN LLR as a Curriculum Learner This research presents work toward one of Yoshua Bengio s Open Questions in deep neural architectures: Is a curriculum needed to learn the kinds of high-level abstractions that humans take years or decades to learn? [11] The DNN LLR method allows for a training curriculum to be created. This curriculum contains two courses. First a course containing the source dataset followed by a second course containing the target dataset. Using this curriculum results in a better high-level abstractions than is found on merely training on the target dataset Other Related Models During the course of this research several other models with similar goals to DNN LLR were investigated. Consideration was given to reinitializing (or not reinitializing) the other layers; and to transferring knowledge by a transferring a feature detecting model for simple feature classification. Summaries of the most interesting results from these other investigations can be found in appendices E, F and G. DNN LLR was the most promising of the algorithms investigated. 2 RELATED WORK 2.1 Improving knn and SVM Accuracy by Training on Auxiliary Data Sources The idea that out-of-domain data could be used to improve learning accuracy is not new. The work of [12] presents a method for making use of out-of-domain data when training Support Vector Machines (SVMs), and k-nearest-neighbors (knn) models. SVMs, and knn models are quite different machine learning algorithms compared to neural networks[13][14]. Thus the algorithm in that paper cannot be applied to neural networks, but it was suggested that such algorithms could be found[12]. The goals and reasoning behind improving SVM accuracy in this way are very similar to those for DNN LLR. The source dataset, containing related auxiliary data, provides additional information which helps to solve the target domain problem[12]. This source domain data is used to provide additional structure to help in the classifications. In knn this takes the form of additional neighbors[12]. In a SVM this takes the form of additional potential support vectors[12]. In DNN LLR this takes the form of additional feature detectors. Common between all three algorithms is that the extra training from the source dataset provides additional tools to describe the input target domain cases, thus allowing for easier classification.

8 3 2.2 Zero Shot Learning Zero shot learning algorithms are focused on learning to accomplish a task without complete[15], or in some cases any[16], target domain data. Zero shot learning algorithms are applied in a variety of cases: from mapping between brain patterns and words being thought of[15] (trained only with mappings for some words), through to learning to recognize objects trained only using textual descriptions of the objects[16]. DNN LLR has a lot in common with many zero shot learning algorithms in that it seeks to make use of common abstract structures within the greater domain that the source and target domains are part of. Unlike a zero shot learning algorithm DNN LLR does require some target domain data for retraining. 2.3 One Shot Learning One shot learning algorithms use a very small number of target domain training cases[16] to learn classifications (and other functions). Often just a single target domain example per output classification is required[17]. In [17] the task was to classify a letter-like symbol into one of 20 classes, of which there was only a single training case for each. While the work of [17] does not use neural networks, some notions are very similar to that used by DNN LLR. In [17], one shot learning was accomplished by pretraining the classifier to be able to dissect the symbols into strokes. These strokes were learnt from a large number of different alphabets[17] a related auxiliary source. This allowed the classifier to learn much faster from the single target cases. The high quality abstract representation allowed it to relate to the test data much better. DNN LLR seeks to do similarly: to use high quality representations based on abstract features learnt from outside the target domain training set. In [17], the features learnt from the source dataset were hand selected they were pen strokes. DNN LLR learns the abstract features from the source domain without guidance. Dependance on hand-engineered features adds human effort to the process, and relies on experts being consciously aware of the best features[18]. Learning the features, as done in DNN LLR, bypasses these issues. DNN LLR does however require significantly more than one target domain training case per class. 2.4 Domain Adaptation on Amazon Product Reviews In [19] and [20], a domain adaptation technique was applied to Amazon Product Reviews. The goal of this work was to take textual reviews of Amazon products and learn to predict the rating they gave. Further domain adaptation techniques were used to learn from reviews of one product area (e.g. Electronic Devices) and apply that knowledge to predict the rating associated with reviews from a different product area (e.g. DVDs). These works used Stacked Denoising Autoencoders (SDAs) and Marginalizing Stacked Denoising Autoencoders (msdas) respectively. SDAs and msdas are closely related models to the Deep Belief Networks[21] which form the basis of DNN LLR. In these works, the autoencoders were used to get a representation of their inputs from the source domain training data. These representations were then used to train a support vector machine to perform the final classification.

9 4 No target domain retraining data was allowed. This approach was possible because the source and target domains were very close they had the same inputs (text reviews) and outputs (positive or negative score). The difference was only in the product area (DVDs or Electronics etc). Such an approach is not possible in the cases being considered for DNN LLR, as there are different output classes between the source and target domains[16]. A method based around learning a linear classifier based on a improved representation from a DBN or a msda was investigated as part of this project. Performance gain from transfer was found to be much less than in DNN LLR. A summary of the better results can be found in section G. 2.5 Multimodal Deep Learning Novel work has been done on multimodal deep learning for video and audio recognition, using a variation on a DBN architecture called a bimodal deep autoencoder[22]. In [22], the goal was to use both audio and video of speech to recognize spoken letters and digits. The supplementary data comes from the additional source. Multiple different training datasets were used, of both audio only and video and audio. Most interesting was the use of the TIMIT dataset[23]. The TIMIT dataset is not a dataset of spoken letters or numbers, it is a corpus of spoken English sentences. The use of TIMIT is thus knowledge transfer from a related domain, like in DNN LLR. The TIMIT data is clearly not within the target domain of the problem, though it is related. The paper does not comment further on its use beyond stating it was used for unsupervised audio feature pretraining[22]. Its impact on the learner is expected to be significant. TIMIT contains 6300 training cases of sentences[23]. Combined, the other audio datasets used for training had just 2638 training cases of spoken letters and numbers. Thus most of the audio knowledge in the model came from source domain of spoken sentences, rather than from the spoken letter/number target domain. Transfer learning from a related domain is demonstrated in this paper. The use of out-of-domain data was not the focus of this work. Surprisingly, the paper does not go into any further detail about precisely how or why this was done, nor its effects[22]. It seems likely that it was similar to the methods discussed in section E. 3 BACKGROUND 3.1 Deep Neural Networks A neural network has a number of layers of neurons. Raw data is fed into the bottom layer, is processed through a number of intermediate hidden layers, and the final layer produces the desired output. A key parameter of the neural network is how many hidden layers to have and how many neurons to have in each i.e. how deep and wide to make the network. The number of neurons in the input and output layers is fixed by the problem domain (e.g. 784 pixels input, 10 output classifications). Conventionally every neuron s output is an input to each neuron in the layer above it is layer-wise fully connected[24]. These parameters, the connectedness and the hidden layers sizing define the neural network s topology.

10 5 Output Layer s m ( ) s m ( ) s m ( ) s m ( ) s m ( ) Softmax Output: Vector of probabilities for each class 1 σ( ) σ( ) σ( ) σ( ) σ( ) 3rd Order relationships between pixels. Eg Relationships between line features, such as relative position of corners. Hidden Layers σ( ) σ( ) σ( ) σ( ) σ( ) 1 1 Relationships between relationships between pixels Eg: Relationships between lines, line features such as corners, intersections, relative positions σ( ) σ( ) σ( ) σ( ) σ( ) Relationships between pixels Eg: Lines 1 Input Layer x 1 x 2 x 3 x 4 x 5 Vector of pixel intensities Figure 1: A fairly typical DNN, for image classification. Each node is a neuron, each edge (connecting arrow) between neurons has an associated weight, including the bias neuron (shown dashed) that always outputs 1. The examples on the right are an analogy for the logic which may be occurring on those layers. For readability, only 5 neurons have been shown per layer. A traditional neural net has only one or two hidden layers, neural networks with more are called deep neural networks (DNNs). These have the potential to generalize better than shallow neural networks[25]. While a sufficiently wide shallow neural net can achieve any task 2 [26][27], a deep neural network can be a more efficient solution[3]. This is the premise on which deep neural architectures are based. A deep neural network is shown in figure 1. Its inputs are pixel intensities, and the output is a vector of classification probabilities. Each neuron in the hidden layer has an output value, which is determined by applying the activation function σ to a weighted sum of its inputs. This 2. The Universal Approximation Theorem[26][27] states that a sufficiently wide shallow neural net can approximate any function to arbitrary accuracy. It does however assume that ideal weights and biases can be discovered. It makes no requirement that existing (or any) algorithms can train a network to have those ideal weights. Merely that such weights exist to approximate any function.

11 6 activation function allows each neuron to make a fuzzy binary 3 decision based on its inputs. The decision is a value between 0 and 1, which is output by the neuron. In these experiments used to validate DNN LLR the sigmoid function (see figure 2) was used in the hidden layers. The output layer neurons have a softmax activation function[28] 4. This gives a discrete probability of the input belonging to a particular output class. Training the neural network is the process of determining the correct weights for each input to each neuron. This can be done with the backpropagation algorithm[29] Backpropagation Backpropagation[29] is the most well known algorithm for training neural networks. The rules of multivariate calculus are used to determine the slope of the error surface, with respect to the inter-neuron weights. Once the error/weight derivatives are calculated, the weights are adjusted using gradient descent down the error slope. That is for some learning rate ε, changes are made to each weight w ij, by w ij = ε E w ij, and similarly for the biases. The learning rate ε is how much significance is placed on each training case. Thus the neural net being trained moves through the weight-space, towards values with low error. Figure 2: The Sigmoid Activation Function 1 σ(t) =. For a neuron with a vector of inputs x, with weights w and biases b, this becomes 1 + e t element-wise h = σ( w x + b) 1 = ( ) 1 + e w x+ b Deep Neural Networks Once trained appropriately, each neuron acts as a feature detector for a relationship between its inputs[31]. The deeper the network, the more abstract the relationships which can be described[3]. Consider identifying the number 7 (as shown in figure 3). It could be described by first identifying lines of pixels, and then recognizing a 7 if two lines meet in the upper right corner of the image; or by identifying all possible combinations of pixels that make up a 7. The latter example is a shallow architecture, whereas the former is deep. This allows for a Figure 3: Two examples of the digit Seven from MNIST[30]. It is easier to describe their similarities in terms of lines and corners than in terms of pixels. 3. Some specialized neural networks do not use 0-1 bounded outputs, such as the linear neuron which outputs the linear weighted sum of its inputs. 4. The softmax neuron is another specialized activation function for K neuron in the layer, with w k and b k being the weights and biases of the kth neuron. The output for the jth neuron is is y j = s m( w k, x, j) = e w j x+ b j k<k ew k x+ b k [28]. The effect of using this activation function is that each neuron is given a discrete probability of the output being in the class that that neuron reflects. The total probability for the output will total to 1 across all K neurons.

12 7 more compact neural network, with more abstract descriptions allowing better generalizations. Furthermore, in a deep architecture these feature detectors can be reused [3]; for example detecting a line at the top of the image is also used for recognizing a 5. However there are issues training deep neural networks directly with backpropagation, which can be overcome by using a Deep Belief Network (DBN)[32]. 3.2 Deep Belief Networks In 2006, a new method for training deep neural networks was devised[32]. This method functions by first training a deep belief network (DBN) to learn the structure of the domain s input elements. This allowed knowledge to be gained from unlabelled domain data which is much more available then labelled data. As discussed above, there are many more images of letters, than there are images of letters paired with a digital label saying which letter they are. The DBN technique also allowed the training of deeper networks[32]. The deep belief network is a stack of Restricted Boltzmann Machines (RBMs). It can be used to initialize a deep neural network[25]. The algorithm for training a DBN is known as greedy layer-wise training[32], or greedy layer-wise pretraining when it is used to initialize a deep neural network[25]. This refers to it greedily training each layer without considering the larger network Restricted Boltzmann Machines A Restricted Boltzmann Machine (RBM) is a generative, stochastic, energy based model for learning the probability distributions of its inputs[34]. As a generative model it is unsupervised it is trained to recreate its input. The nodes in a RBM are in two layers the visible layer (input/output) and the hidden layer. RBMs are trained using Contrastive Divergence[35], learning a weight matrix between the layers, which allows each layer to be used to reconstruct the other. The hidden layer can encode the input, and from this encoding the most likely input (visible layer values) can be reconstructed. This means that through learning the best values for the weights, the hidden layer is being trained to encode the most important features of the input layer. Depending on the distribution expected in the visible and hidden layers different variations of the RBM are used. For the Bernoulli-Bernoulli and Gaussian-Bernoulli RBM[31] used in this research, the) probability distribution of the hidden layer ( h) given the visible layer ( x) is P ( h x) = σ ( b + W x [36][37]. ) The similarity in the form of P ( h x) = σ ( b + W x to the neural network activation function: ) h = σ ( b + W x is important it is why a DBN can be used to initialize a DNN. The difference is the change from a probability vector for a Bernoulli distribution, to a vector of values between 0 and Generative Model Greedy Layer-Wise Pretraining (Unsupervised) A deep belief network is equivalent to a stack of RBMs[32]. Each layer is trained to reconstruct the layer below (see figure 4). As discussed above, the hidden layer of the RBM is a set of

13 8 Output Layer W 4 BB-RBM Hidden Layer 3 DBN Hidden Layer 3 Hidden Layer 3 W 3 W T 3 W 3 W 3 Sample BB-RBM from Hidden Layer 2 Visible Layer Hidden Layer 2 Hidden Layer 2 W 2 W T 2 W 2 Sample GB-RBM from Hidden Layer 1 Visible Layer Hidden Layer 1 Hidden Layer 1 W 1 W T 1 W 1 Training cases Visible Layer Visible Layer Visible Layer Greedy Layer-Wise Training Figure 4: Left: Greedy layer-wise pretraining: each RBM is trained fully then the next is trained to learn the hidden layer of the layer below. Center: The RBMs are combined to form a DBN the input distribution can be sampled by randomly initializing the top layer, then running the top two layers as a RBM until it reaches equilibrium, then generating all the layers below[32]. Right: A output layer is appended to initialize a DNN using the DBN. (Diagram adapted from[33].) DBN DNN feature detectors for the visible layer. This means the third layer reconstructs the second layer, which reconstructs the input. Each layer is more abstract which is desired for a deep neural network[38]. This greedy layer-wise pretraining is used to learn from the supplementary source dataset in the DNN LLR algorithm DBN to DNN Backpropagation (Supervised) Once the DBN has been trained, it can be used to initialize a deep neural network, by discarding the generative biases, appending an output layer, then training with back-propagation[33][31] (see figure 4). This makes intuitive sense: in a DBN each layer must encode the data required to reconstruct the layer below thus it must detect the most important features. The weight matrix for the added output layer establishes the relationships between the features in the DBN top layer, and a particular output. The back-propagation trains this top level, and also

14 9 adjusts the other levels[39]. As the normal DBN greedy layer-wise pretraining algorithm only considers the layer below, this global training algorithm can be useful to facilitate better overall performance[32]. A DNN is initialized using these stacked feature detectors (the DBN) and trained to create a function approximator (such as a classifier). Backpropagation is used during DNN LLR, to refit the DBN trained on the source dataset to be a classifier that functions on the target domain. 3.3 Other Deep Neural Networks Not all deep neural networks are initialized with a deep belief network. By a strict definition: a deep neural network is any neural network with more than 2 hidden layers. These can be trained directly with backpropagation, however performance is generally worse[25]. There are also other methods that can be used to create useful features in deeper nets, such as convolutional neural networks, which forces structure into the neural network to give rise to translational invariance[24]. There are also other methods of training a DBN, including some supervised methods which can be used to output a classification[32][40]. Through this paper, the phrase deep neural network (DNN) refers to a neural network which has been initialized with an unsupervised DBN and fine-tuned with backpropagation, unless specifically stated otherwise. 4 THE DNN LOW LEVEL REINITIALIZATION ALGORITHM 4.1 Method Greedly Layerwise Train Reinitialize Bottom Layer Append Output Layer Backpropagation Train Evaluate Source Unlabelled Training Dataset Target Labelled Training Dataset Target Labelled Test Dataset Figure 5: Block diagram showing the training process of a deep neural net, using the DNN LLR Algorithm The DNN LLR algorithm is performed as shown in the block diagram in figure 5. DBNs are not the only generative model that could be used in step 1, another approach based on using msdas is discussed in section F Reinitialization In the second step of the DNN LLR algorithm, the lowest level RBM is reinitialized. In this step all the weights and biases in that RBM are reset to small random values. Equivalently the process could be described as resetting all the weights and biases leading from the input layer to the lowest hidden layer. The reinitialized values were Gaussian distributed with mean 0.00

15 10 and standard deviation This reinitialization allows new weights to be learnt more easily as the neuron connection weights will no longer be set to any large values. Large values would otherwise take many iterations of gradient descent to change. This requirement has been confirmed empirically. Results can be found in section E. 4.2 Transfer of Features for Better Structure It is suggested that this initializing a DNN with a DBN puts the neural net into a state such as it learns it is better able to generalize from its training data to different test data[25]. The DBN pretraining causes useful feature detectors to be created, and learning an output mapping based on these features is more generalizable than directly training a DNN with backpropagation alone. This idea of more abstract features makes intuitive sense based on the example of how to recognize the number 7 given in section In the work on one-shot learning discussed in section 2.3, significant benefits were found by having the classifier able to reason in terms of high level features[17]. It was also shown that these high level features do not have to be based only on the target domain. Any additional features learnt by the DBN, not needed in the final DNN, will simply be ignored during final supervised training[25]. Further, by transferring the feature detectors from another, related domain, new knowledge is added, which may repair flaws in the target training dataset. For example, if the source training dataset contained 7s with the top line having angle of ±10, and the smaller target training dataset only had contained 5s with the top line having an angle of ±5, then when the ±10 top-line feature detector is transferred the final network will be able to recognize 5s with a ±10 top line even though no such 5s existed in either training dataset. Thus new knowledge is added which allows the network to be overcome limitations in its target training dataset. Small datasets are more likely to have such flaws and this leads to poor generalization. However, some desirable features for the target domain will not be found, since the feature detectors are coming from the source. As the target dataset is smaller than the source, it is expected that there will be fewer desirable features missed, compared to those gained by transfer. This does depend on the closeness 5 of the domains and on the relative sizes of the source and target datasets. These expectations are confirmed by the results found. 4.3 Theoretical Justification for Reinitializing Layers A DBN is a set of feature detectors. Those features are tied to features in the input space, but the direct ties are released by reinitializing the lowest layer. 5. Closeness of domains is a rather difficult to define concept. It is hard to describe what makes two domains particularly good (or bad) for knowledge transfer. See

16 11 The features are not, in the DBNs considered, tied to the output space (classification). The DBNs are trained unsupervised without using output labels 6. The features only become tied to the output space once the output layer is appended and used to train the DNN. Thus after the low-level reinitialization and appending of the output layer, the neural net contains knowledge in the form of feature detectors which are not closely tied to its inputs, nor to its outputs. Thus it is largely independent of the specifics of the domain, and therefore transferable. The bottom and top layers are responsible for this transfer. During the backpropagation training of the DNN, new relationships are learnt. The output layer maps between the abstract features and the output classification probabilities. The bottom layer learns to link the raw pixel values to the most useful features described by the central layers. Simultaneously the central features are adjusted to be most applicable to the new domain. 4.4 The Importance of Backpropagation It should be noted that the final stage of training is to train the Deep Neural Net with backpropagation on the target data, without first pretraining it using greedy layer-wise training on the target. This is because backpropagation tunes the whole network; whereas its name suggests, greedy layer-wise training trains each layer greedily without taking into account the optimal solution for the whole network. The bottom layer which has now been reinitialize, must be trained to utilize the feature detectors maintained in the higher layers, for new inputs. Greedy layer-wise training would result in the bottom layer modeling the new (target) input domain, and then as it is applied upwards that model would be placed over the maintained relationships in higher layers destroying a lot of the information that DNN LLR is designed to keep. These results have been confirmed experimentally (see section 6.7). Experimentation was performed on the introduction of a greedy layer-wise training step using the target training data after the reinitialization. Results are presented in section 6.7. It confirmed the expectation of decreased performance. Using backpropagation alone is a more suitable algorithm for the retraining. Backpropagation will retrain the bottom layer based on the state of the whole network. The change in weights in the layers trained with back-propagation are based on both its inputs, and on the calculated error vectors of the relationships above. Thus the bottom layer is trained to create new connections between the inputs and the existing maintained knowledge. It also fine-tunes the transferred relationships for the new domain and for the desired output, and trains the output layer[25]. Sufficient training data is needed however, to allow the bottom layer to make these new connections. 6. A number of alternative supervised fine-tuning algorithms exist for deep belief networks[31][32] other than converting to a DNN with the appending of an output layer. They are shown to produce significant improvement, over purely unsupervised pretraining [31]. These supervised DBN training algorithms would tie the feature detectors to the output classification, so would not be suitable for use with DNN LLR without adaptation being made to the algorithm.

17 12 5 EMPIRICAL EVALUATION METHODS 5.1 Experimental Setup and Evaluation To evaluate DNN LLR, as well as other knowledge transfer techniques, a multitude of experiments were carried out. Each experiment involved training a model with a particular quantity of target training data and a fixed amount of source data. The quantity of target data was varied to investigate how well the algorithm performed at each stage. All target and source datasets were created by partitioning the MNIST[30] dataset into subdatasets (the source and target datasets) with 5 different classes in each. The MNIST Dataset is a collection of labelled greyscale images of handwritten digits[30]. It is large enough that splitting it as discussed still leaves viable training and evaluation datasets. One such partitioning is shown in figure such divisions for source and target datasets were evaluated to test the algorithms. A full list of the target and source domains/datasets evaluated together with their performance is can be found in table 2. As well as the 40 different domain transitions, four different network topologies were evaluated in order to highlight how well the algorithm scales. Figure 6: A small sample of the partitioned MNIST dataset. In this example is partitioned in to 2 domains, either of which could be the source or target. Using the full dataset which has its samples shown on the left as source data, to aid in training of a classifier to solve the target domain problem of recognizing the digits like those on the right was one of the highest performing transitions with DNN LLR. In the empirical evaluations a variety of metrics were considered to highlight its strengths and weaknesses. They are detailed in each experiment as evaluated in section 6. The full details of these methods and the rationale behind their use may be found in section B. All experimental results included here were carried out on software developed for this project (detailed in section H). 5.2 The Control Experiments To assess the advantages gained through the use of the supplementary source data and the knowledge transfer models, control experiments were carried out. Two types of models were trained for use as a control. A Deep Neural Net (DNN) and a Linear Classifier were trained on the target dataset alone. These provide a baseline to compare the knowledge transfer methods against.

18 DNN Control Greedly Layerwise Train Append Softmax Output Layer Backpropagation Train Evaluate Target Unlabelled Training Dataset Target Labelled Training Dataset Target Labelled Test Dataset Figure 7: Block diagram showing the training process for the Control DNN The Deep Neural Network Control experiment is a conventional approach to classifying digits. It is a DNN trained only using the target domain data. In the control, a deep neural net was trained as shown in figure 7. Throughout this process all the same standard techniques, such as early-stopping and L2 weight decay (see section B) were applied as in the knowledge transfer experiments. A DNN Control model was trained for each domain transition, at each training set size increment, for each topology matching the cases for experimental models (such as DNN LLR) being investigated. The corresponding control experiments are used for comparison throughout the rest of this paper Linear Classifier Control Backpropagation Train Evaluate Target Labelled Training Dataset Target Labelled Test Dataset Figure 8: Block diagram showing the training process for the Control Linear Classifier A secondary control was also trained. This one is referred to as a linear classifier. This Linear Classifier Control is a neural network with no hidden layers and a softmax output layer. It is the same for all topologies, as it is not, by its very definition, scalable in that way. As it does not have any hidden layers it does not meet the requirements of the universal approximation theorem[27][26]. It is only able to solve linearly separable problems. This means there are significant numbers of input images that it can not correctly classify even when trained ideally.

19 14 As shown in figure 8, it was trained in a similar fashion to the DNN Control but without any pretraining. There was no need, or capacity, for a DBN to be used to initialize the weights[25]. This control case allows for additional validation of the results. 6 EMPIRICAL RESULTS 6.1 Improvement Frequency The key question in evaluating a technique that may gain a improvement in this way, is how often that improvement will be realized. In the case of domains as close as different subsets of the classes of handwritten digits, improvement is very likely. The additional training time taken to train on the extra related data (the source dataset) is not significant, particularly since the DBN training algorithm is quite fast[32]. The free improvement gained by making use of the free source data with DNN LLR is shown in table 1. Hidden Layer Sizes: Target Set Size 50, 200 Portion of Transitions Improved 50, 50, 200 Portion of Transitions Improved 100, 100, 400 Portion of Transitions Improved 100, 100, 100, 400 Portion of Transitions Improved % 82.05% 76.92% 46.15% % 76.92% 56.41% 46.15% % 64.10% 20.51% 46.15% % 61.54% 15.38% 46.15% % 74.36% 23.08% 58.97% % 97.44% 38.46% 51.28% % % 82.05% 61.54% % 94.87% 76.92% 76.92% % 94.87% 69.23% 82.05% % 87.18% 71.79% 69.23% % 89.74% 51.28% 64.10% Table 1: The portion of domain transitions where using DNN LLR to learn from the source and the target domain, results in a lower error rate when evaluated on target domain test data, than the DNN Control (which is trained on the target domain training data alone). A result of 0% would indicate that for that quantity of target data, and that topology, no DNN LLR transfer cases did better than the DNN Control. Conversely a result of 100% indicates that in all source-target domain transitions investigated, the neural net trained with DNN LLR performed better than the control.

20 Expected Improvement: Figure 9: The improvement of DNN LLR vs DNN Control Case Control Error Rate DNN LLR Error Rate Improvement =, mean Control Error Rate over all domain transitions, shown for all four topologies considered. A single standard deviation is shown around the means. This plot corresponds to table 10 The second question is how much improvement in classification accuracy is expected to be gained by using DNN LLR. The absolute performance of the DNN Control and of the DNN LLR algorithm are shown in figure 11. The mean relative difference in error rate to the control (i.e. improvement) considered when trained with 2000 target domain cases is shown in table 2. This is close to the point where DNN LLR provides maximum benefit, as can be seen in figure 9 and table 10. While what improvement can be gained is determined by which domains are being transferred between, there are some trends based on the neural network topology and the quantity of target domain training data used. There was a greater spread of results for neural networks with the hidden layers sized [100,100,400] and [100,100,100,400], and the average improvements were smaller. The use of DNN LLR helps to get the neural network to a high standard of performance earlier than not making use of the additional data. This is shown in figure 11. The initial performance gain is quiet large, but its lead over the control shrinks as more target data is added. As the target dataset set size becomes large, performance is very similar with or without using DNN LLR; confirming the expectation that there would be less gain in transferring feature detectors for datasets of similar size. For the wider networks evaluated, the average gain when the source and target datasets were the same size was actually marginally worse though with a high degree of variance depending on the particular transition being considered. This is as expected given enough data for the target task, there is no need to transfer knowledge from elsewhere. With plenty of training data high quality background knowledge can be extracted from the target dataset alone. In general it can be seen that the utilization of the extra source domain data better initializes the network for learning, resulting in the early improvements.

21 16 Hidden Layer Sizes: 50, , 50, , 100, 100, 100, Mean , 400 Transition: Improvement Improvement Improvement Improvement Improvement to % 83.16% 70.62% 81.07% 76.52% to % 71.54% 69.01% 25.08% 57.85% to % 59.83% 56.94% 49.14% 54.17% to % 66.92% 50.90% 66.56% 53.00% to % 78.98% 72.09% % 52.72% to % 67.38% 60.02% 19.37% 52.50% to % 73.59% 74.68% 21.26% 51.14% to % 54.55% 58.56% 25.19% 51.10% to % 69.27% 50.82% 38.43% 48.09% to % 57.30% 66.92% 1.16% 46.46% to % 64.03% 36.71% 60.24% 45.49% to % 62.11% 35.15% 19.45% 41.39% to % 74.67% 50.16% % 40.16% to % 61.74% 36.27% 3.84% 38.79% to % 58.54% 41.42% -8.52% 37.91% to % 58.93% 26.18% 11.37% 37.36% to % 58.45% 38.46% -7.50% 36.94% to % 60.05% 51.80% % 34.90% to % 69.78% 35.10% % 34.08% to % 45.67% 22.70% 18.98% 32.50% to % 39.32% 18.18% 40.53% 32.46% to % 50.22% 34.60% 3.56% 28.73% to % 28.99% 45.88% 6.71% 27.88% to % 41.24% 25.66% 31.15% 27.66% to % 53.11% 11.95% 10.53% 26.28% to % 47.53% 29.49% 20.81% 23.93% to % 64.72% 19.24% % 23.49% to % 42.55% 12.73% -3.29% 23.48% to % 43.16% 4.30% -8.06% 20.64% to % 33.52% % 25.49% 20.29% to % 34.40% 16.63% -0.07% 12.65% to % 36.94% 9.67% 5.68% 8.37% to % 46.01% 8.51% % 8.04% to % 19.31% -9.98% % 0.53% to % 52.07% -6.83% % -0.71% to % 50.78% % 29.74% -3.02% to % 28.32% % 15.21% -4.74% to % 17.86% % % % to % 10.70% % % % Table 2: The improvement found for each transition by using DNN LLR over the DNN Control Case. Improvement = Control Error Rate DNN LLR Error Rate, at 2000 training cases, across the 4 different topologies. For compactness, the transition has been written as: <digits in source> to <digits in Control Error Rate target>.

22 17 Hidden Layer Sizes: 50, , 50, , 100, , 100, 100, 400 Target Set Size (Portion of Source Set Size) 50 (0.22%) 100 (0.44%) 150 (0.67%) 200 (0.89%) 250 (1.11%) 300 (1.33%) 500 (2.22%) 1000 (4.44%) 2000 (8.88%) 4000 (17.75%) 8000 (35.51%) (71.02%) (100.0%) Mean (std. dev.) Im- prove- ment % (19.29%) % (36.12%) % (42.7%) % (44.69%) % (51.84%) -19.8% (43.69%) % (45.53%) 12.8% (31.06%) 37.0% (24.01%) 31.86% (29.21%) 27.61% (22.24%) 14.22% (39.64%) 10.59% (33.36%) Mean (std. dev.) Im- prove- ment 5.17% (8.71%) 8.92% (10.95%) 6.98% (16.47%) 4.76% (16.24%) 3.94% (17.38%) 7.13% (20.64%) 7.33% (20.36%) 46.29% (18.6%) 52.24% (17.31%) 45.08% (20.12%) 31.07% (17.18%) 19.98% (19.73%) 18.09% (20.44%) Mean (std. dev.) Im- prove- ment 2.86% (6.31%) 1.41% (11.25%) % (21.47%) % (26.6%) % (22.58%) % (36.13%) % (35.06%) % (39.17%) 23.63% (41.77%) 17.02% (46.5%) 4.01% (52.18%) -1.57% (65.44%) -6.8% (56.48%) Mean (std. dev.) Im- prove- ment 0.63% (3.95%) 0.61% (7.24%) -2.44% (12.22%) -3.73% (13.83%) -2.66% (13.77%) -3.39% (15.37%) -4.59% (20.16%) -9.8% (28.15%) 5.74% (33.73%) 14.13% (42.96%) 10.23% (42.42%) -0.1% (48.42%) -2.63% (55.7%) Figure 10: The Mean and Standard Deviation of the Improvement of DNN LLR over the DNN Control Case. Improvement = Control Error Rate DNN LLR Error Rate. Control Error Rate A 0% Improvement occurs when the DNN LLR model does exactly as well as the DNN Control. A 50% Improvement occurs when the DNN LLR model has half the error rate of the DNN Control. This table corresponds to figure 9. Figure 11: Mean performance of the Control and the DBN LLR algorithms across all domain transitions.

23 The Requirement for Sufficient Target Domain Training Data DNN LLR, unlike single shot learning algorithms, requires a moderate quantity of target domain training data to provide benefit. A sharp jump in frequency of improvement (table 1), and quality of improvement (figure 11) is seen when the quantity of source data exceeds 1000 or 2000 cases (depending on topology). The exception to this is the [50,50,200] topology, and all topologies for extremely small quantities of target data, which always have a high likelihood of the DNN LLR doing better than the DNN Control (table 1). However, while improvement is likely in these cases, it is very small in magnitude (see figure 11), it certainly does not exceed the Linear Classifier Control (see section D). This delay during which the DNN LLR model is out-performed by the control can be linked to the quantity of target training data required to learn useful weights in the bottom reset layer. The exact quantity of data required to reach this point is thus expected to be linked to the learning rate (see section 3.1.1), and to the width of the bottom layer. Verification of the exact effects of adjusting the learning rate remains as future work in this area (see 7.1.1). The increased delay for wider experiments was seen in evaluation. 6.4 Performance in Deeper and Wider Topologies It can be seen that the deeper networks perform better under DNN LLR, than their shallower counterparts. The network with hidden layers sized [50,50,200] saw better gains than the network with [50,200]. The network with hidden layers [100,100,100,400] performed better than that with [100,100,400]. This can be attributed to the deeper networks having higher quality features available for transfer. Their higher level features are more abstract and are thus more able to be generalized to the new domain. Conversely the gain in wider networks is lesser, and later, than in narrower topologies. As discussed above the reason for the delay is the additional training data required to retrain the larger bottom layer. Simultaneously the extra data that is needed to retrain that layer, is also available to the control to directly improve its performance. Thus once the DNN LLR experiment begins functioning, it must be compared to a well trained neural network, thus gains are harder to get. The wider network also has more capacity to discover good features from the limited target dataset in the Control DBN pretraining due to having more neurons. DNN LLR improvement is worse in wider networks, and better in deeper networks. 6.5 DNN LLR acts as a Superior Regularizer Regularization algorithms are a techniques which reduce overfitting. Overfitting is when the model learns based on coincidence (i.e sampling noise) in the training data. Overfitting is directly opposed to good generalization when the neural network is keying off of facts that are only true in the training data it is not going to generalize well to real world (or test) data. There are several algorithms being used in these experiments (including the controls) to reduce overfitting. In particular, early-stopping[41] and L2 weight decay [42][43]. The use of DBNs to

24 19 initialize DNNs is itself believed to act as a regularizer[25]. Initializing the DNN using DNN LLR is superior as a regularizer to initializing it with DBN training on the target data (as is done in the DNN Control). If the neural network generalizes very well then it will perform as well, or better, on the test dataset than it does on the training dataset. The portion of all experiments where such occurs across all domain transitions, all topologies, all training set sizes is shown in table 3. These results support the work of [25], as the DNN Control outperforms the Linear Classifier Control. The use of DNN LLR significant outperform the DNN Control in this regard. This does not, itself, mean that it performs better on a absolute Algorithm Portion with test error training error DNN LLR 30.2% DNN Control 20.5% Linear Classifier Control 8.7% Table 3: A comparison of the performance of test error vs training error with the various algorithms, across all experiments. scale. The results shown in table 3 are comparing the model s test dataset performance against its own training dataset performance. It does however indicated that a notable improvement in generalization capacity of the network can be gained by using DNN LLR. 6.6 Best and Worst Domain Transitions It is expected that some transitions consistently do better than others. This expectation is confirmed in the results shown in table 4. It was also seen in the results shown in table 2, at training set size Here a transition from A to B refers to transferring knowledge from source dataset containing the digits in A, to the target domain problem of recognizing the digits in B. For brevity here, target and source domains will be expressed as a string of digits representing those that they contain. So is the domain containing the digits 0, 1, 2, 3, and 4. The best performance on a given domain was fairly consistent across DNN topology and training set size. Most of the transitions that are the best for that configuration are also the best for multiple other configurations. Experiments found this to be even more true for the worst transitions. For the majority of configurations when applying DNN LLR the worse transition was to This makes sense as few features are common between the domains, so the suitable feature detectors could not be transferred. Which transitions do well compared to others is determined almost exclusively by the information contained in their training datasets. The same transitions are expected to do well in all configurations because their source training dataset contains notions that are particularly reusable in the target domain. This is further confirmed by transitions that contain similar but slightly different items in their domains; such as to and to The strong performance of these overlapping transitions is indicative of common knowledge in the sources that is transferable to the targets. However it is also worth noting that these transfers are not symmetric. It might be expected that if a particular transition does well (or poorly) then the reverse transition will also do well (or poorly). However it can be noted in the results in table 4 that

25 20 no such flipped duplicates occur in the very best or very worst cases. For example the worst transition was with hidden layers sized [100, 100, 400] and training cases, the transition to performs 382% worse than the control, the reverse transition to has a 29% improvement over the control. The reason for this asymmetry in performance is the asymmetry in the algorithm. DNN LLR is not a symmetrical algorithm. As discussed earlier, and as can be seen in figure 5, a different learning algorithm is applied to the source and to the target. The source dataset is learnt from using greedy layer-wise training[32], this learns feature detectors for reconstruction. The target training dataset is learnt from using backpropagation[29], learning ideal for the features to get a good classification. These are very different algorithms in purpose and implementation. Extensive discussion of the differences of what is learnt by the DBN pretraining, and by the traditional backpropagation fine-tuning method can be found in [5], with examples from the MNIST dataset used in this experiment. Hidden Layer Sizes: Target Set Size (Portion of Source Set Size) 1000 (4.44%) 2000 (8.88%) 4000 (17.75%) 8000 (35.51%) (71.02%) (100.0%) 50, 200 Best to to to to to to , 50, 200 Best to to to to to to , 100, 400 Best to to to to to to , 100, 100, 400 Best to to to to to to Hidden Layer Sizes: Target Set Size (Portion of Source Set Size) 1000 (4.44%) 2000 (8.88%) 4000 (17.75%) 8000 (35.51%) (71.02%) (100.0%) 50, 200 Worst to to to to to to , 50, 200 Worst to to to to to to , 100, 400 Worst to to to to to to , 100, 100, 400 Worst Transition Transition Transition Transition Transition Transition Transition Transition to to to to to to Table 4: The Best (left) and Worse (right) Transitions for a given Neural Net Topology, at varying quantities of target domain training data. The cells have been colorized to make reoccurring transitions more apparent. For compactness, the transition has been written as: <digits in source> to <digits in target>

26 The Consequences of Adding a Target Dataset Pretraining Step to the Reinitialization Process Greedly Layerwise Train Reinitialize Bottom Layer Greedly Layerwise Train Append Output Layer Backpropagation Train Evaluate Source Unlabelled Training Dataset Target Unlabelled Training Dataset Target Labelled Training Dataset Target Labelled Test Dataset Figure 12: Block diagram showing the training process of a deep neural net, using the DNN LLR algorithm with an extra step of pretraining on the target dataset added. Figure 13: The improvement of DNN LLR without (top) and with (bottom) Target Domain Pretraining. As in figure 9: Control Error Rate DNN LLR Error Rate Improvement =, Control Error Rate averaged over all domain transitions, shown for all four topologies considered. A single standard deviation is shown around the means. As discussed in section 4.4, investigations were carried out on the effect of adding a pretraining step, to use the DBN training method with the target as well as the source training data as shown in figure 12. The mean improvement of the algorithms is shown in figure 13. While applying the extra target pretraining initially helps for very low quantities of total target training data, it is not enough of a performance increase to allow it to it exceed the Linear Classifier Control (not shown). For larger quantities of target training data adding the target pretraining step causes the model to do worse than the than the control. This indicates that there is conflict between the features which would be learnt from the target dataset and those from the source dataset. Indicating a different approach to the target domain problem is used by networks trained with DNN LLR. These results support the expectation that the greedy layer-wise method is too destructive on the structures desired to transfer. Backpropagation is better for its capacity to adjust the whole network, particularly the reinitialized bottom layer

27 22 (together with the just appended output layer), to re-purpose the structures. Thus DNN LLR does not have the target pretraining step in the final algorithm. 7 CONCLUSION 7.1 Further Work Verification of Results These results are promising and have had some validation performed on them. They are the results of considering large numbers of different domain transitions (see section B.5), across the 4 different topologies, and 13 differing quantities of training data. However, other neural network parameters (see section B.2.2) have not had extensive variational testing applied to them. Also, while many different transitions were considered, each was only evaluated once for each set of parameters. Additional repetitions would increase confidence of the individual results, rather than the overarching trends Defining Similarity and Designing a Heuristic for Predicting Wow Well DNN LLR will Perform As was discussed in section 6.6, some domain transitions consistently function better than others. This is because the feature detectors transferred are more cross applicable. Deep analysis of which transitions are successful and exactly why has not been carried out. If this could be roughly determined without going to the full extent of training DNN LLR and a Control, then it would enhance the utility of the algorithm significantly. It will allow the source dataset to be selected optimally. Some research on defining similarities has been conducted to other learning models, and has resulted in superior, more transfer-aware models[44]. A similar set of research could be done for DNN LLR, potentially even making use of the already collected results Performance Loss in Wider Networks As discussed in section 6.4, performance is worse in wider networks. Due to the heavily increased computational time in training wider networks, very wide networks have not been evaluated. The current cutting edge DNNs for MNIST have hidden layers sized around [500,500,2000] 7. Further investigation on the performance of DNN LLR at similar sizes is necessary. Adaptations such as Convolutional Deep Neural Nets[45] which have been designed for better scaling to wider networks, may be required to see continued performance gain Mixing Source and Target Datasets In the section 6.7 and section E, an additional step of DBN pretraining on the target dataset after the source dataset pretraining was completed. 7. Comparing performance: Best DNN LLR, hidden layers [100,100,100,400], source: 02589, target: 13467, source training cases: 22530, target training cases: 22530, target validation cases 5000, Error Rate 1.79%. vs Wake-Sleep DBN[32] with [500,500,2000], target: , training cases 54216[30], error rate 1.25%

28 23 It may be expected that a better result would be seen by mixing the target and source datasets, followed by a single run of mixed pretraining. This expectation is supported by Hinton s recommendations on how to train RBMs (and thus DBNs) using minibatch[42]. When the domain contains multiple different output classes, it is best to have them dispersed throughout the training set[42]. This could have resulted in less destruction of data, and creation of feature detectors based on the features from all data, and thus a better result. However, this is difficult to test empirically due to the limited capacity to reuse the trained networks by adding more training data. Most of the experimental techniques allowing for faster experimentation discussed in section B.3 are not applicable. As such, the additional computational time required for such investigations put it beyond the scope of this project. It remains an interesting area for future research. 7.2 Applications The algorithm here has only been shown on digit recognition, however it is expected to work in other deep learning application areas. It would be beneficial in any area with limited training data for the task at hand, but where related training data is available Applications in Natural Language Processing A lot of interesting work has recently been done on natural language processing using neural networks. Such as [46], where shallow neural networks were trained to parse English and Chinese. A deep neural network could be used instead gaining the benefits ascribed to deep neural networks[3]. Further, DNN LLR could be employed to use the Chinese language dataset be used to improve the learning of the English language, or vice-versa. It has been shown that there are structural similarities in English and Chinese that can be accessed through machine learning techniques[47]. Thus the transfer techniques of DNN LLR could be expected to function to produce superior results. 7.3 Closing Remarks A method has been presented to allow for cross domain knowledge transfer in deep neural architectures. DNN Low Level Reinitialization, allows the transfer of high-level strategic knowledge, while limiting the transfer of non-applicable knowledge of the source domain. It provides significant enhancement over conventional deep neural networks for generalization. It is shown here to improve recognition of MNIST sub-domains, however it has applications in many areas. The experiments performed show it works better in deeper neural networks and worse in wider networks. Its scalability to very wide networks is thus questionable. It provides significant improvement in the narrower and faster training networks. In suitable cases, as discussed, DNN LLR is a viable method to improve performance though the use of supplementary data. In cases when the applicability is less certain, it may be worth

29 training DNN LLR along side a traditional model, and performing evaluation to find the preferred. With the right choice of source domain, it is believed that any target domain will receive a performance boost from using this knowledge transfer technique. The DNN LLR algorithm demonstrates that knowledge can be transferred between deep neural networks and provides one method to do so. 24

30 25 REFERENCES [1] D. A. Braun, C. Mehring, and D. M. Wolpert, Structure learning in action, Behavioural brain research, vol. 206, no. 2, pp , [2] T. Serre, G. Kreiman, M. Kouh, C. Cadieu, U. Knoblich, and T. Poggio, A quantitative theory of immediate visual recognition, Progress in brain research, vol. 165, pp , [3] Y. Bengio, Learning deep architectures for AI. Now Publishers Inc., 2009, vol. 2, no. 1, ch. 2 Theoretical Advantages of Deep Architectures, pp [4] Y. Bengio, A. Courville, and P. Vincent, Representation learning: A review and new perspectives, [Online]. Available: [5] D. Erhan, P.-A. Manzagol, Y. Bengio, S. Bengio, and P. Vincent, The difficulty of training deep architectures and the effect of unsupervised pre-training, in International Conference on Artificial Intelligence and Statistics, 2009, pp [6] Y. Bengio, Learning deep architectures for AI. Now Publishers Inc., 2009, vol. 2, no. 1, ch. 1 Introduction, pp [7] Flickr, Creative commons, Website, October [Online]. Available: [8] P. Stanford Vision Lab, Stanford University, Imagenet, Website, October [Online]. Available: image-net.org/ [9] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, Imagenet: A large-scale hierarchical image database, in Computer Vision and Pattern Recognition, CVPR IEEE Conference on. IEEE, 2009, pp [10] L. M. G. López, O. R. C. Jordán, D. Penney, and T. Chandler, The role of transfer in games teaching: Implications for the development of the sports curriculum, European Physical Education Review, vol. 15, no. 1, pp , [Online]. Available: [11] Y. Bengio, Learning deep architectures for AI. Now Publishers Inc., 2009, vol. 2, no. 1, ch. 9 Looking Forward, pp [12] P. Wu and T. G. Dietterich, Improving svm accuracy by training on auxiliary data sources, in Proceedings of the twenty-first international conference on Machine learning. ACM, 2004, p [13] N. S. Altman, An introduction to kernel and nearest-neighbor nonparametric regression, The American Statistician, vol. 46, no. 3, pp , [Online]. Available: [14] C. Cortes and V. Vapnik, Support-vector networks, Machine learning, vol. 20, no. 3, pp , 1995, what this paper calls a Support Vector Network, has now come to be commonly refered to as a Support Vector Machine. [15] M. Palatucci, D. Pomerleau, G. E. Hinton, and T. M. Mitchell, Zero-shot learning with semantic output codes, in Advances in neural information processing systems, 2009, pp [16] R. Socher, M. Ganjoo, C. D. Manning, and A. Ng, Zero-shot learning through cross-modal transfer, in Advances in Neural Information Processing Systems, 2013, pp [17] B. M. Lake, R. Salakhutdinov, J. Gross, and J. B. Tenenbaum, One shot learning of simple visual concepts, in Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 2011, pp [18] P. Tokarczyk, J. Wegner, S. Walk, and K. Schindler, Beyond hand-crafted features in remote sensing, ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 1, no. 1, pp , [19] X. Glorot, A. Bordes, and Y. Bengio, Domain adaptation for large-scale sentiment classification: A deep learning approach, in Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011, pp [20] M. Chen, Z. Xu, K. Weinberger, and F. Sha, Marginalized denoising autoencoders for domain adaptation, arxiv preprint arxiv: , [21] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, Extracting and composing robust features with denoising autoencoders, in Proceedings of the 25th international conference on Machine learning. ACM, 2008, pp [22] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, Multimodal deep learning, in Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011, pp [Online]. Available: http: //machinelearning.wustl.edu/mlpapers/paper_files/icml2011ngiam_399.pdf [23] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, and N. L. Dahlgren, DARPA TIMIT acoustic phonetic continuous speech corpus CDROM, [Online]. Available: [24] Y. LeCun and Y. Bengio, Convolutional networks for images, speech, and time series, The handbook of brain theory and neural networks, vol. 3361, [Online]. Available: [25] D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio, Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research, vol. 11, pp , [26] G. Cybenko, Approximation by superpositions of a sigmoidal function, Mathematics of control, signals and systems, vol. 2, no. 4, pp , 1989.

31 [27] K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural networks, vol. 4, no. 2, pp , [28] J. S. Bridle, Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters, in Advances in Neural Information Processing Systems 2, D. Touretzky, Ed. Morgan-Kaufmann, 1990, pp [Online]. Available: training-stochastic-model-recognition-algorithms-as-networks-can-lead-to-maximum-mutual-information-estimation-of-paramet pdf [29] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Nature, vol. 323, no. 6088, pp , [30] C. J. B. Yann LeCun, Corinna Cortes, The mnist database of handwritten digits, November [Online]. Available: [31] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, Greedy layer-wise training of deep networks, Advances in neural information processing systems, vol. 19, p. 153, [Online]. Available: greedy-layer-wise-training-of-deep-networks.pdf [32] G. E. Hinton, S. Osindero, and Y. Teh, A fast learning algorithm for deep belief nets, Neural computation, vol. 18, no. 7, pp , [33] G. Hinton, Recent developments in deep learning, Lecture, The University of British Columbia, [Online]. Available: [34] Y. Freund and D. Haussler, Unsupervised learning of distributions of binary vectors using 2-layer networks, in Advances in Neural Information Processing Systems 4, J. Moody, S. Hanson, and R. Lippmann, Eds. Morgan-Kaufmann, 1992, pp [Online]. Available: unsupervised-learning-of-distributions-of-binary-vectors-using-2-layer-networks.pdf [35] G. E. Hinton, Training products of experts by minimizing contrastive divergence, Neural computation, vol. 14, no. 8, pp , [36] J. Melchior, Learning natural image statistics with gaussian-binary restricted boltzmann machines, Master s thesis, University of Bochum, Germany, [Online]. Available: janmelchior.pdf [37] Y. Bengio, Learning deep architectures for AI. Now Publishers Inc., 2009, vol. 2, no. 1, ch. 5 Energy-Based Models and Boltzmann Machines, pp [38], Learning deep architectures for AI. Now Publishers Inc., 2009, vol. 2, no. 1, ch. 6 Greedy Layer-Wise Training of Deep Architecture, pp [39] Y. LeCun and M. Ranzato, Deep learning tutorial, ICML, [Online]. Available: talks/lecun-ranzato-icml2013.pdf [40] D. P. F. B. J. Hinton, G. E. and R. Neal, The wake-sleep algorithm for unsupervised neural networks. Science, vol. 268, pp , [41] L. Prechelt, Early stopping-but when? in Neural Networks: Tricks of the trade. Springer, 1998, pp [42] G. Hinton, A practical guide to training restricted boltzmann machines, Momentum, vol. 9, no. 1, [Online]. Available: [43] G. E. Hinton and D. Van Camp, Keeping the neural networks simple by minimizing the description length of the weights, in Proceedings of the sixth annual conference on Computational learning theory. ACM, 1993, pp [44] M. T. Rosenstein, Z. Marx, L. P. Kaelbling, and T. G. Dietterich, To transfer or not to transfer, in NIPS 2005 Workshop on Transfer Learning, vol. 898, [45] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng, Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations, in Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009, pp [46] D. Chen and C. D. Manning, A fast and accurate dependency parser using neural networks. [47] W. Y. Zou, R. Socher, D. M. Cer, and C. D. Manning, Bilingual word embeddings for phrase-based machine translation. in EMNLP, 2013, pp [48] Y. Bengio, Learning deep architectures for AI. Now Publishers Inc., 2009, vol. 2, no. 1, ch. 4 Neural Networks for Deep Architectures, pp [49] M. Chen, K. Weinberger, F. Sha, and Y. Bengio, Marginalized denoising auto-encoders for nonlinear representations, in Proceedings of The 31st International Conference on Machine Learning, 2014, pp [Online]. Available: 26

32 [50] Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, Efficient backprop, in Neural networks: Tricks of the trade. Springer, 2012, pp [51] G. Van Rossum, Python 2.7 documentation, [52] T. E. Oliphant, Python for scientific computing, Computing in Science & Engineering, vol. 9, no. 3, pp , [Online]. Available: [53] E. Jones, T. Oliphant, P. Peterson et al., SciPy: Open source scientific tools for Python, [Online]. Available: [54] S. Behnel, R. Bradshaw, C. Citro, L. Dalcin, D. Seljebotn, and K. Smith, Cython: The best of both worlds, Computing in Science Engineering, vol. 13, no. 2, pp , [55] F. Perez and B. E. Granger, Ipython: A system for interactive scientific computing, Computing in Science & Engineering, vol. 9, no. 3, pp , [Online]. Available: [56] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, Scikit-learn: Machine learning in Python, Journal of Machine Learning Research, vol. 12, pp , [57] R. Bilina and S. Lawford, Python for unified research in econometrics and statistics, Econometric Reviews, vol. 31, no. 5, pp , [58] W. McKinney, Data structures for statistical computing in python, in Proceedings of the 9th Python in Science Conference, S. van der Walt and J. Millman, Eds., 2010, pp [59] J. D. Hunter, Matplotlib: A 2d graphics environment, Computing In Science & Engineering, vol. 9, no. 3, pp , [60] M. Waskom, Seaborn 0.4.0, Software, [Online]. Available: [61] T. Tantau, The TikZ and PGF Packages. [Online]. Available: [62] M. Ettrich et al., The lyx document processor, [63] H. Hagen, Luatex: Howling to the moon, Communications of the Tex Users Group Tugboat, p. 152,

33 28 APPENDIX A NOMENCLATURE A.1 Abbreviations All abbreviations and terms have been defined inline, above, however the following is provided as a quick reference for the reader. DBN: Deep Belief Network. A stacked generative model normally based on RBMs[32]. DNN: Deep Neural Network. A neural network containing two or more hidden layers[48]. DNN LLR: Deep Neural Network Low Level Reinitialization. The knowledge transfer algorithm which is the subject of this paper. knn: k-nearest Neighbors. A machine learning model[13]. SDA: Stacked Denoising Autoencoder. A stacked generative model, based on denoising autoencoders[21]. msda: Marginalizing Denoising Autoencoder: A stacked generative model, based on marginalizing denoising autoencoders[49]. SVM: Support Vector Machine. A machine learning model[14]. RBM: Restricted Boltzmann Machine. An unsupervised learning algorithm that learns to regenerate its input[35] A.2 Terms Dataset: A collection of data for training or evaluating a learning model. It may be labelled or unlabelled. Domain: A general class of problems requiring a single skill set. Linear Classifier: A Neural Network without any hidden layers. It is capable only of solving a particular linearly separable class of problems. MNIST: A machine learning dataset containing greyscale images of handwritten digits[30]. Neural Network: A class of machine learners based on the architecture of the human brain[29]. Target Domain/Dataset: The domain of the problem for which the learning model is being trained to solve, and the training dataset contained information about it. Source Domain/Dataset: A domain of problems, and a dataset for training a model to learn about it, which is not the same domain/dataset as the target domain/dataset but from which it is desirable to transfer knowledge from.

34 29 APPENDIX B DETAILED EXPERIMENTAL SETUP B.1 Source and Target Domain Datasets DNN LLR is an algorithm for transferring knowledge learnt from one domain to another. It can be used to improve performance when there is additional data from another domain available. In the case of the domains used for the empirical evaluation of the algorithm, partitions of MNIST[30] were used. The target domain task was to recognize 5 particular handwritten digits. The target domain training data was the portion of the MNIST training set corresponding to those digits. A variety of quantities of that training data was made available incrementally to allow for evaluation at the different sizes (see section B.3). The source domain problem used was the classification of the remaining 5 digits. To be precise, since the source domain is used the train a DBN, the source domain task is to be able to regenerate 5 digits based. The source domain training data was thus chosen to be the elements from the training dataset corresponding to the 5 digits not with the target domain 8. Across all experiments, the quantity of source data was kept constant. This matches the real world use-case, of trying to take advantage of an existing dataset: as this dataset already exists, its size is fixed. Conversely the quantity of target domain data was varied, to provide insight on how the algorithm performs dependent on this. In a real world application, more target data can be collected, if necessary to put the algorithm into its ideal condition. B.1.1 Target Training Set Quantities The following differing quantities (Value in brackets is target training data were used for evaluation: 50 (0.22%) 100 (0.44%) 150 (0.67%) 200 (0.89%) 250 (1.11%) 300 (1.33%) 500 (2.22%) 1000 (4.44%) 2000 (8.88%) 4000 (17.75%) 8000 (35.51%) (71.02%) (100.0%) target training quantity quantity training quantity 100%) of Up to 300 elements the quantity variations were linearly spaced with a net trained and evaluated every 50. The reasoning behind this linear sizing was to investigate the very small amounts of 8. To be precise, depending on the exact classification in the source domain, as the quantity of MNIST training data for each digit is different, for some source domains the dataset difference by up to 3 training cases to each side of

35 30 data.for ,000 target training cases, the quantity was doubled at each step, to consider the full results. Finally the full target training set of 22,530 elements were used. B.2 Experimental Parameters Throughout all experiments consistent algorithm parameters and techniques were used. B.2.1 Dataset Preparation The input training data was standardized, to be input feature-wise 9 zero mean and unit variance. In the training of a neural net with backpropagation this is useful to gain improved learning by not saturating the neural activation function[50]. An alternative would be to initialize the bias and weight values to achieve the same affect as can be done for RBMs[42]. However a much simpler implementation of the Gaussian-Bernoulli RBM is possible with pre-standardised inputs[36][37]. The scaling and shift factor of the transformation used in the standardization for the target domain training data is stored and used on the validation and test data (see section B.4.1). B.2.2 Greedy Layer-wise Training Parameters The Deep Belief Networks used were trained as described above. As the dataset was standardized (see section B.2.1), the bottom layer RBM, needs to be a Gaussian-Bernoulli RBM[31] in order to accept these Gaussian distributed input features. The remaining layers were Bernoulli-Bernoulli RBMs [37]. The weights and biases were initialized using random values from a Gaussian distribution with mean 0 and standard deviation This, together with standardizing the inputs, helps to avoid the neuron activations saturating, which would decrease the rate of learning[42]. A learning rate of is used throughout the greedy layer-wise training. It is very small, this improves the stability of the Gaussian-Bernoulli bottom layer[42]. With higher learning rates preliminary experiments showed severe numerical instability. It is possible to use a higher learning rate for the Bernoulli-Bernoulli layers, but that was not done in these experiments. It is expected that doing such would result in similar improvements in both the control and the DNN LLR experiments. A L2 Weight Decay cost coefficient of was used. This is slightly larger than the suggested as a starting point in [42]. It was found to give an improvement learning during preliminary investigations. The Contrastive Divergence[35] training algorithm was used to train all layers, with a single step of sampling (CD-1), as described above. In the bottom Gaussian-Bernoulli layer, a mean-field method was used. In this case the reconstructed input is found by v = E(P (v h)) rather than sampling v P (v h). Preliminary investigative experiments found that this improved the learning for the GB-RBM. It also improved stability. This is because it removes the spread in values that are reconstructed. 9. Input feature-wise i.e. pixel-wise. for MNIST input features are pixels. Higher level features are learnt in a DNN.

36 31 B.2.3 Backpropagation Parameters The softmax output layer in all experiments had 10 neurons, based on the original labels. Though the final target dataset only had 5 distinct labels, the sparse output representation containing 5 extra values which were always zero does not affect the training. Even the smallest training set is enough to bias those neurons to be always off. The weights and biases were normally initialized using the values from a DBN as discussed above. In the case of the linear classifier control (see section 5.2.2), and in the reinitialized bottom layer in DNN LLR: the weights and biases were again initialized using mean 0, standard deviation 0.01 Gaussian distribution. During backpropagation, a learning rate of 0.01 was used. This aligned with the order or magnitude increase from the greedy layer-wise pretraining learning rate suggested in [31]. As in the DBN pretraining, a L2 Weight Decay cost coefficient of was used. B.3 Incremental Training and Evaluation The models were trained and evaluated incrementally on more target training data. This allowed more data to be collected, and the change in particular networks to be viewed as more target training data was made available. The training increments were standardized such that at time of evaluation, and at time of DBN training completing, the learning algorithm has had a full set of standardized inputs. that is to say, the inputs appeared approximately Gaussian distributed with zero mean and unit variance. Derivation of the method for doing this can be found in section C. B.3.1 Neural Network Reuse in Experimental Cases In the case of the DBN LLR experimental cases, as the DBN stage is trained only on the fixed sized source dataset, the addition of more target training data did not require repeating the DBN training stage. Further, the deep neural net could be reused, since training was done in minibatch the addition of another increment of target training data is just further minibatch to process. It should be kept in mind that the early stopping rollback (see section B.3.3) must be reverted prior to the next increment being trained, as otherwise the full amount of training data is not used. B.3.2 Reuse of DBN in Control Case The DNN cannot be used in the control case, as now more training data is available for use in DBN pretraining. To allow that to be done, the DBN trained on the last increment is further trained on the new training data, and is then used to initialize a fresh feed-forward deep neural network. This is then trained, without early-stopping, on the target training data from earlier training increments (but see below, early stopping is applied in post-processing), before being trained as with the experimental case on the latest training data increment, using early stopping.

37 32 B.3.3 Early Stopping For each increment of new training data, early-stopping was used [41]. Early stopping was used to help prevent over-fitting. A validation data containing 5,000 instances of the target dataset was used. The viability of such a large validation dataset when the training dataset is very small is questionable in this context. However this same advantage was extended to both the Control and the Experimental cases. Each element in the new training set increment was passed though the network once. Rather than the other option of cycling though the set until an early-stopping criteria occurred. After each mini-batch was processed, the error rate on the validation data was checked. At the end of the training increment, the neural network was rolled back to its best state, prior to evaluation. As early stopping was only used on the last increment of training data, that denies the opportunity to roll back to the states found in earlier increments. However this effect can achieved in post-processing. Rolling back to the best neural network state found in the previous increment, results in the score of the previous training increment being used if it performs better in validation than than that found this increment does. As the validation scores were recording during the experiments, this score substitution can be implemented. It was done so in the evaluations 10. B.4 Evaluation B.4.1 Test Dataset The MNIST Dataset has a separate test set[30]. The different authors wrote the digits in the test set, to the authors used in the training set. In all experiments performed this separation has been maintained. The test set for the experiments is the appropriate target domain half of the full test set. Depending on the exact digits in the target domain, this will have close to 5000 test cases. The same test set is used throughout the experiments not matter the size of the target dataset. Using a larger test set does not benefit the learner, it does however allow better evaluation of how well the classifier would perform the real world. The test dataset, was not standardized with itself, or with the training sets. It was however transformed using the shift and scaling factor that was found to be needed to standardize the training set. This is a viable method in training real world classifiers to transform incoming input data as it is easy to store what final transformation had been done to the training data, and reuse that. Doing this ensures the evaluation input features are similar to the training input features in value. 10. As a interesting aside, during analysis, doing post-processing to early-stopping, while monotonically improving results with more training data was not the case for individual neural nets for a particular dataset, it monotonicity was almost entirely maintained in the averages across all datasets. This is as expected of the early-stopping algorithm. The validation error approximates the test error.

38 33 B.4.2 Training Error Rate The training error rate was also recorded. As the training set was fairly large (22530 elements), training error rate was found by evaluating a sample of the last training increment. For each 5000 T increment of additional target training data of size T, a sample containing 4 T target training cases was evaluated 11. Over the range of training increments considered this was a bit greater than 10%. B.4.3 Error Rate Function The error rate for evaluation and early-stopping validation was determined with a winner-takesall approach. In winner-takes-all an error occurs when the output does not assign the highest probability to the correct output class. This approach was chosen for its real world applicability. A output is correct if it is correct, and incorrect otherwise there is no middle ground of almost correct. This contrasts with a sum-of-squared-errors or similar method where fractional errors could occur if the output was fractionally correct As is usual for a softmax output layer during backpropagation a entropy based error rate was used[28] such that the error derivative remained the usual difference between the labels and actual outputs. B.5 MNIST Subdivisions Used 44 subdivisions of MNIST were used to assess and ensure the method worked predictably across various areas. They consisted of 6 chosen as interesting, and their reverses: [0, 1, 2, 3, 4] [5, 6, 7, 8, 9] [0, 1, 2, 4, 8] [3, 5, 6, 7, 9] [0, 2, 3, 5, 7] [1, 4, 6, 8, 9] [0, 2, 4, 6, 8] [1, 3, 5, 7, 9] [1, 2, 3, 4, 5] [0, 6, 7, 8, 9] [0, 2, 3, 5, 7] [1, 4, 6, 8, 9] [0, 3, 6, 8, 9] [1, 2, 4, 5, 7] and 16 chosen randomly, as well as their reverses. [0, 2, 5, 8, 9] [1, 3, 4, 6, 7] [0, 1, 2, 3, 8] [4, 5, 6, 7, 9] [2, 4, 5, 8, 9] [0, 1, 3, 6, 7] [0, 1, 4, 6, 9] [2, 3, 5, 7, 8] [0, 1, 5, 6, 7] [2, 3, 4, 8, 9] [4, 5, 6, 7, 8] [0, 1, 2, 3, 9] [2, 3, 4, 5, 6] [0, 1, 7, 8, 9] [1, 2, 3, 4, 8] [0, 5, 6, 7, 9] 11. This seemingly unintuitive sampling frequency is the result of a convenient programming idiom to ensure sample contains a equal number of each class (ie digit). While also ensuring a suitable large (but not too large) absolute quantity of test data was used for evaluation.

39 34 [1, 4, 5, 6, 9] [0, 2, 3, 7, 8] [1, 2, 4, 5, 8] [0, 3, 6, 7, 9] [0, 1, 6, 8, 9] [2, 3, 4, 5, 7] [2, 3, 6, 7, 9] [0, 1, 4, 5, 8] [1, 3, 7, 8, 9] [0, 2, 4, 5, 6] [0, 1, 3, 5, 7] [2, 4, 6, 8, 9] [1, 2, 6, 7, 9] [0, 3, 4, 5, 8] [0, 2, 3, 6, 7] [1, 4, 5, 8, 9] B.5.1 Non-convergent Experiments Of those 44 experiments 5 failed to converge during the back-propagation stage, for both the DNN Control and the DNN LLR experiments. Hitting floating point under/overflow. Those results have been discarded from the evaluations. They do not effect the evaluation of the technique as the instability occurs both in the control and in the experiment, resulting in neither getting accuracy above the 20% guess rate (i.e. a 0.8 error rate). [2, 3, 4, 8, 9] [0, 1, 5, 6, 7] [2, 4, 6, 8, 9] [0, 1, 3, 5, 7] [5, 6, 7, 8, 9] [0, 1, 2, 3, 4] [0, 1, 7, 8, 9] [2, 3, 4, 5, 6] [0, 3, 6, 7, 9] [1, 2, 4, 5, 8] It is however very interesting to note that same 5 transitions failed in all 4 topologies, and for both the control and the experiment. It indicates that the target domain training set at its smallest size, after its preprocessing, causes this error.

40 35 APPENDIX C THE TRANSFORMATION OF SUBDATASETS SUCH THAT THE WHOLE DATASET IS STANDARDIZED C.1 Motivation A machine learning algorithm learns from a dataset. This dataset has two components: The test dataset which emulates real world data that might be input to network this is used for evaluating the performance of the network at the end. the total training dataset. The Total Training Dataset is divided into two portions: the training set and the validation set. The learning algorithm is performed using training data from the training set. Periodically (e.g. once every minibatch), it is evaluated against the validation set in order to check if it is over fitting (see section B.3.3). The validation set is a practice test set, to see how well the machine learning algorithm is generalizing. Many machine learning algorithms, including backpropagation[50], perform better if the training set has been feature-wise standardized so that each feature has zero mean, and having a unit standard deviation. Some algorithms, such as non-variance learning GB-RBMs, require this. For this reason, the training set is standardized. After standardizing the training set, the parameters of the transformation are stored, and are then used to transform the validation and test datasets. Note that this does not mean the test and validation sets have zero mean and unit variance transforming them just keeps them similar in magnitude to the training data. For evaluating how well the algorithms perform, with varying quantities of target domain data, they need to be trained on training sets of various sizes. These training sets still need to be standardized, to ensure training algorithms perform optimally thus keeping results relevant. Ideally to minimize training time, the net would first be trained on the first raining subset, then would be evaluated against the test set. Then it would be further trained on the next training subset, and again evaluated on the test set; and so on. However over the course of this training, before it is evaluated it should been presented with a training set which is overall standardized. If this isn t possible them it may mean training from scratch for each training data size. C.2 Derivation of a Method C.2.1 Introduction and Definitions Consider a dataset X = A B X, A, and B are standardized versions of X, A, and B respectively. Scaling and shifting must be done to A and B such the for X = A B, µ X = 0 σ X = 1 Note: A = A and B = B and X = X as it is just a transform of the data. x = x µ X σ X

41 36 µ X = x X x X σ 2 X = 1 X (x µ X ) 2 x X C.2.2 RTP Scaling A and B to have Zero Mean causes their Union to have zero mean x A µ A + µ B = x x B + x A B A µ A + B µ B = A X µ A + B X µ B = x X x x X x X A X µ A + B X µ B = 0 = µ X = µ X C.2.3 RTP Scaling A and B to have zero mean and unit variance causes there Union to have unit variance. 1 σ A = (x µ A ) A 2 σ B = 1 B A σ 2 A + B σ 2 B = x A x A (x µ B ) 2 x B (x µ A ) 2 + x B (x µ B ) 2 As µ X = µ A = µ B = 0: A σa 2 + B σ2 B = (x 0) 2 + (x 0) 2 x A x B A σa 2 + B σ2 B = (x ) 2 x X Since X σx 2 = x X (x ) 2 As B = X A A σ 2 A + B σ2 B = X σ2 X A σ 2 A + ( X A )σ2 B = X σ2 X

42 37 Which with σ 2 A = σ2 B = 1 A (1) + ( X A )(1) = X σ 2 X X 1 = X σ 2 X σ 2 X = 1 C.3 Conclusion The union of two datasets which as standardized, to have zero mean and unit variance, does itself have zero mean and unit variance also. Ergo to have a the method for transforming subdatasets such that there union is standardized to zero mean and unit variance, is simply to standardize the subdatasets.

43 38 APPENDIX D ON THE PERFORMANCE OF THE LINEAR CLASSIFIER CONTROL It can be observed that the Linear Classifier Control has particularly good comparative performance for small quantities of training data. This may seem counter intuitive, as the Linear Classifier can only classify linearly separable inputs. Thus can not ever correctly recognize all of the test cases. However it trains very quickly. It is a simple model and so can learn quickly. It almost instantly fits to its training data. It does also seem to over-fit to its training data as seen in section 6.5. Fitting quickly to the training data does result in good early performance. The mean performance with both Training and Test Error rates of the algorithms is shown in figure 14. It can be seen that the Linear Classifier immediately fits well to its test dataset and its able to classify them correctly. In fact as the test dataset grows Linear Classifier the Training Error Rate increases. This is because as more elements are added to the training set (which is evaluated to get the training error), fewer of them are linearly separable in the input space. Conversely for DNN LLR and the Control DNN, the high training error rate is indicative of the network having not yet converged to a solution. One Figure 14: Mean Training Data Error Rate (top) and Test Data Error Rate (bottom) of the algorithms across all target domains and all topologies. Note that this plot is clipped at 4000 target training cases to highlight the areas where the linear classifier outperforms the more powerful models. way to encourage this would be to use a higher learning rate for smaller training set sizes. More advanced approaches along these lines form the basis for adaptive learning rate algorithms[50].

44 39 APPENDIX E DBN REUSE IS IT NECESSARY TO REINITIALIZE THE BOTTOM LAYER. E.1 Method Greedly Layerwise Train Append Softmax Output Layer Backpropagation Train Evaluate Source Unlabelled Training Dataset Target Labelled Training Dataset Target Labelled Test Dataset Greedly Layerwise Train Greedly Layerwise Train Append Softmax Output Layer Backpropagation Train Evaluate Source Unlabelled Training Dataset Target Unlabelled Training Dataset Target Labelled Training Dataset Target Labelled Test Dataset Figure 15: Block diagram showing the training process for DBN Reuse (top), and DBN Reuse with Target Pretraining (bottom).. Reinitializing the bottom layer is all about untying the higher level structures from the low level details. The neural network is forced to forget the superficial differences between source and target domains, and remember only the abstract features. The results of experiments confirm the necessity of resetting these bottom layers. A method based on directly Reusing the DBN without resetting layers was investigated 12 to confirm this. Both with and without the step of having pretraining on target data. The training procedure is shown in figure 15. Notice that DBN Reuse is identical to DNN LLR with the resetting of the bottom layer removed. Notice also DBN Reuse with Target Pretraining is the same as the Control, but with a earlier step of Pretraining the DBN on the Source Dataset. It is also the same as DNN LLR with Target Pretraining with the resetting of the bottom layer removed. Both ultimately prove inferior to the Control and to DNN LLR. Evaluation was carried out as described in B. 12. Infact every permutation of resetting layers was investigated, from reset [0,0,0,0] which is DBN Reuse, to reset [1,0,0,0] which is DNN LLR, to reset [1,1,1,1] which is to reset everything and train a deep network on target data only with backpropagation only. All were found to be inferior to DNN LLR.

45 40 E.2 Results A plot of the results are shown in figure 16. It is outperformed by the Control, and thus even more outperformed by DNN LLR. Adding the step of pretraining with the target dataset, causes the performance increased, to the extent that for small quantities of target domain data it outperforms the Control. It does not however outperform the Linear Classifier Control. The functionality of DBN Reuse in this way, is to get whatever structure it can into the deep network to allow it to function. This is however not enough to exceed the performance of a simpler model (details on why the linear classifier control performs well can be can be found in section D).

46 Figure 16: Performance of the algorithms of DBN Reuse (ie directly reusing the DBN), Mean across all domain transitions show. DNN LLR and the control cases shown for contrast. 41

47 42 E.3 Conclusion It can be seen from the results that it is indeed necessary to reinitialize the bottom layer. DNN LLR is a superior method to simply reusing the DBN. However this results would need to be confirmed for a mixed target and source used in pretraining (as discussed in 7.1.4)

48 43 APPENDIX F MSDA REINITIALIZATION F.1 Introduction F.1.1 msda In section it is discussed that a Deep Neural Network can be initialized using a Deep Belief Network. Other stack autoencoder models can be used instead: such as Stacked Denoising Autoencoders (SDA) [25]. A denoising autoencoder is a shallow (single hidden layer) neural network, trained by inputting a noisy images and asked to reconstruct the original image[21]. The noise used is normally uniform distributed deletion noise, where each pixel has a fixed chance of being turned off set to zero[21]. These are stacked up in a similar way to the stacking of RBMs for DBN initialization. A Marginalizing Stacked Denoising Autoencoder, is very similar to a SDA (and DBN). It is a stack of marginalizing denoising autoencoders, with outputs passed through a non-linearity, such as the sigmoid function. A marginalizing denoising autoencoder (mda) is a particular approximation of a conventional denoising autoencoder with linear activation function[20]. This approximation allows the ideal weights to be solved using least squares regression, which can be done by simple linear algebra[49]. This makes training a mda incredibly fast on modern vectorizing CPUs 1/5th the time taken to train a comparable denoising encoder [49]. The mdas then have the non-linearity applied, such as a sigmoid function, and are stacked up using the same layer-wise approach seen in DBNs and SDAs[49]. F.1.2 msda for initializing of DNN Little work has been done up until now on using the weights from a msda to initialize a DNN, however in principle it should be almost no different to doing so with a SDA. There is a additional restriction that all layers in a msda must have the same width as the input layers. This restriction does make the backpropagation stage much slower. F.1.3 Experiments were Terminated Early Experiments were terminated after running for over 1 month. Results were not sufficiently promising. The results presented here are from these incomplete experiments. While they are incomplete they highlight interesting differences between how DBN Initialized DNNs differ from msda initialized DNNs. F.1.4 Experimental Procedure Numerous experiments were carried out, of resetting various combinations of layers. Though the experiment a msda noise probability of 0.5 was used and L2 weight decay of In other steps the evaluation was carried out very similarly to as detailed in B.

49 44 Figure 17: The mean performance of a number of different topologies of msda Initialized DNN. The Asterisk Marked layers are reinitialized. (left) Reinitializing the first layer ths are similar to DNN LLR. (right) reinitialize the high level layers, thus beign the opposite of DNN LLR. F.1.5 Results The results shown in table 17 results can be contrasted with those from a DBN based DNN shown in table 17. Quite the opposite of the DBN based DNN, it is seen that deleting the top layer gets the best gain. It can be noticed that the deeper the msda based DBN the more harm is caused by reinitializing the bottom layer.

50 45 Figure 18: For contrast to table 17, the corresponding layer resets sare shown here for DBNs. (left) the DNN LLR. Right: DBN based DNNs with the higher layers reinitialized. F.1.6 msda DNN performs worse than DBN DNN Considering only the control data, we see that the DBN based DNN outperforms the msda DNN. It can be seen that the deeper msda does worse than the shallower ones. This however can be partially attributed to lack of training data to properly train such a deep model. Another issue may have been the noise probability used, being constant no matter the number of layers. Some of my preliminary experiments suggested that it should be scaled with the number of layers. In any case the msda however does perform very well at its designed task, the removal of noise, as can be seen in 19. The input layer is noisy, the output of he first layer clearer, the output

51 46 of second layer is almost ideal, the output of the 3rd is a bit abstract, and the 4th layer output is almost meaningless. It has removed all the noise that is not common to all numbers by this point. This would explain why resetting the higher layers results in a performance jump. If the msda pretraining causes the higher layers to destroy most image features, then resetting these layers will be beneficial. Rather than the performance jump being attributed to the transfer of knowledge from another domain. Further investigation in to this would be beneficial. Figure 19: Output from a msda removing noise. Top is input, each successive layer the output of a deeper msda than the first. The oval shaped burr in the background is the result of the standardization of the inputs. F.1.7 Recommendations into the study of knowledge storage in msdas, SDAs and DBNs These results are all about considering the transfer of knowledge, rather then explicitly about discovering where knowledge is stored within the network. They hint that DBNs and msdas store important knowledge in different layers of the network. This is not surprising given that

52 the denoising autoencoder and the restricted Boltzmann machine are very different algorithms, albeit with the same purpose (to recreate their input). A better study into where knowledge is stored in the various stacked generative models, would perform stacked generative pretraining trained on a single domain, followed by layer reinitialization, followed by backpropagation. This would allow for knowledge to be better isolated and would have benefits in design of techniques for transfer like DNN LLR. 47

53 48 APPENDIX G CROSS RE-REPRESENTATION BASED TECHNIQUES G.1 Introduction While this paper has focused on techniques based on direct transfer of feature detectors from models trained on the source into those for final use in the target, many other techniques exist. One such technique involves using a DBN or similar structure to create a representation or encoding of its input data. This encoding is a non-linearly derived set of features. These features can then be used to classify the input using a linear classifier. The feature detectors used to create the encoding can be devised based on training data from a different domain than the final evaluation. This is what was done in the work of [19] and [20] discussed in section 2.4. In [19] and [20] it was possible to reuse the linear classifier as well as the feature encoder between source and target. In the cases being considered here output domain is different. Thus a new linear classifier must be trained, learning from a source feature based encoding of the target data. This is reasonable to do since linear classifiers can be trained very quickly, requiring much training less data than deeper network (as found in section E). In the aforementioned works, the final classifier was a support vector machine (SVM). In the research presented in this appendix a neural network based linear classifier similar to that used for the Linear Classifier Control was used. This is very similar to the Linear Classifier Control discussed insection Training a neural network without any hidden layers, taking as input the a encoding based on the top layer of a DBN, is almost identical to using a DBN to initialize a DNN (see section 3.2.3). It is strictly identical to appending the top layer and training with backpropagation, while locking the weights that came from the DBN. This method is used when training a DNN with very few labelled training cases[39]. G.2 Experimental Setup G.2.1 Experimental Parameters The parameters for the linear classifier (neural net) and the DBN were as described in section B. the msda L2 weight decay coefficient was The noise probability for the msda was evaluated across the values of 0.5, 0.2, 0.05, 0.0. G.2.2 Cross Re-representation Models The Cross Re-representation models were was trained as follows: 1) The Generative Model (msda/dbn) was trained on the unlabelled source dataset. 2) The target training data inputs were passed though the Generative Model to get representations 3) a Linear Classifier was trained to map from the representations, to there classifications.

54 49 G.2.3 Control Re-representation Models The Control Re-representations were trained as follows: 1) The Generative Model (msda/dbn) was trained on the unlabelled target dataset. 2) The target training data inputs were passed though the Generative Model to get representations 3) a Linear Classifier was trained to map from the representations, to there classifications. G.3 DBN Re-representation Results on using a DBN for getting a representation that could then be fed to a linear classifier were extremely poor. It may be that the networks were simply not wide enough to capture sufficient quantity of features that would make it possible for the linear classifier to use. However networks of this width were used in the lower layers of the DNN Control, and DBNs for reprepressentation. The best results found from a knowledge transfer perspective are shown in table 20. As can be seen the methods only gain over the control for very small quantities of target domain retraining data. Even in those cases they do not perform well enough for the mean the exceed the mean of the Linear Classifier Control. This method does not seem viable for domain transfer, or even for use in normal classification.

55 50 Figure 20: The mean performance of the single hidden layer reprepressentation technique. (left) Absolute mean performance across all transitions. Also shown for comparison, is a DNN Control case, with hidden layers sized [100,100,100,400]. (Right) Improvement over the Re-representational Control (a single standard deviation shown) G.4 msda Re-representation Results on using msdas for re-representation were also not promising as a knowledge transfer technique. Consistently exceeding the control happened in very few cases. One of the few functional cases was a single hidden layer msda (so not truly a deep network, by any description. In fact technically it was just a mda) base network, with noise probability 0.00, or 0.05.

56 It might be expected that with zero noise probability and a hidden layer the same size as the input layer, then the msda would just learn the identity function. This is not the case, as if it had then the Control Re-representation performance would have been the same as that of the Linear Classifier Control. It was not. As a transfer technique this msda, worked well for very small quantities of training data. However unlike the other models discussed that perform well for small quantities of training data, it outperforms the Linear Classifier Control. The Re-representation Control also outperforms the Linear Classifier Control. Results are shown in table 21. The deeper msdas with zero noise performed on a absolute scale similarly, but marginally worse than the single hidden layer plotted. The deeper msdas did not perform as well for knowledge transfer. 51

57 52 Figure 21: The mean performance of the single hidden layer reprepressentation technique. (left) Absolute mean performance across all transitions. Also shown for comparison, is a DNN Control case, with hidden layers sizes [100,100,100,400]. (Right) Improvement over the Rerepresentational Control (a single standard deviation shown) G.5 Conclusion Representational based techniques are not as promising for transfer of knowledge with retraining, as they are for the direct domain adaptation case discussed in section 2.4. For non-knowledge transfer purposes using a single layer msda to get a representation that is feed to a linear classifier is is a very promising technique particularly for small datasets. Further

Python Machine Learning

Python Machine Learning Python Machine Learning Unlock deeper insights into machine learning with this vital guide to cuttingedge predictive analytics Sebastian Raschka [ PUBLISHING 1 open source I community experience distilled

More information

Lecture 1: Machine Learning Basics

Lecture 1: Machine Learning Basics 1/69 Lecture 1: Machine Learning Basics Ali Harakeh University of Waterloo WAVE Lab ali.harakeh@uwaterloo.ca May 1, 2017 2/69 Overview 1 Learning Algorithms 2 Capacity, Overfitting, and Underfitting 3

More information

QuickStroke: An Incremental On-line Chinese Handwriting Recognition System

QuickStroke: An Incremental On-line Chinese Handwriting Recognition System QuickStroke: An Incremental On-line Chinese Handwriting Recognition System Nada P. Matić John C. Platt Λ Tony Wang y Synaptics, Inc. 2381 Bering Drive San Jose, CA 95131, USA Abstract This paper presents

More information

Artificial Neural Networks written examination

Artificial Neural Networks written examination 1 (8) Institutionen för informationsteknologi Olle Gällmo Universitetsadjunkt Adress: Lägerhyddsvägen 2 Box 337 751 05 Uppsala Artificial Neural Networks written examination Monday, May 15, 2006 9 00-14

More information

Knowledge Transfer in Deep Convolutional Neural Nets

Knowledge Transfer in Deep Convolutional Neural Nets Knowledge Transfer in Deep Convolutional Neural Nets Steven Gutstein, Olac Fuentes and Eric Freudenthal Computer Science Department University of Texas at El Paso El Paso, Texas, 79968, U.S.A. Abstract

More information

Module 12. Machine Learning. Version 2 CSE IIT, Kharagpur

Module 12. Machine Learning. Version 2 CSE IIT, Kharagpur Module 12 Machine Learning 12.1 Instructional Objective The students should understand the concept of learning systems Students should learn about different aspects of a learning system Students should

More information

OPTIMIZATINON OF TRAINING SETS FOR HEBBIAN-LEARNING- BASED CLASSIFIERS

OPTIMIZATINON OF TRAINING SETS FOR HEBBIAN-LEARNING- BASED CLASSIFIERS OPTIMIZATINON OF TRAINING SETS FOR HEBBIAN-LEARNING- BASED CLASSIFIERS Václav Kocian, Eva Volná, Michal Janošek, Martin Kotyrba University of Ostrava Department of Informatics and Computers Dvořákova 7,

More information

A Neural Network GUI Tested on Text-To-Phoneme Mapping

A Neural Network GUI Tested on Text-To-Phoneme Mapping A Neural Network GUI Tested on Text-To-Phoneme Mapping MAARTEN TROMPPER Universiteit Utrecht m.f.a.trompper@students.uu.nl Abstract Text-to-phoneme (T2P) mapping is a necessary step in any speech synthesis

More information

arxiv: v1 [cs.lg] 15 Jun 2015

arxiv: v1 [cs.lg] 15 Jun 2015 Dual Memory Architectures for Fast Deep Learning of Stream Data via an Online-Incremental-Transfer Strategy arxiv:1506.04477v1 [cs.lg] 15 Jun 2015 Sang-Woo Lee Min-Oh Heo School of Computer Science and

More information

Calibration of Confidence Measures in Speech Recognition

Calibration of Confidence Measures in Speech Recognition Submitted to IEEE Trans on Audio, Speech, and Language, July 2010 1 Calibration of Confidence Measures in Speech Recognition Dong Yu, Senior Member, IEEE, Jinyu Li, Member, IEEE, Li Deng, Fellow, IEEE

More information

Assignment 1: Predicting Amazon Review Ratings

Assignment 1: Predicting Amazon Review Ratings Assignment 1: Predicting Amazon Review Ratings 1 Dataset Analysis Richard Park r2park@acsmail.ucsd.edu February 23, 2015 The dataset selected for this assignment comes from the set of Amazon reviews for

More information

Human Emotion Recognition From Speech

Human Emotion Recognition From Speech RESEARCH ARTICLE OPEN ACCESS Human Emotion Recognition From Speech Miss. Aparna P. Wanare*, Prof. Shankar N. Dandare *(Department of Electronics & Telecommunication Engineering, Sant Gadge Baba Amravati

More information

Software Maintenance

Software Maintenance 1 What is Software Maintenance? Software Maintenance is a very broad activity that includes error corrections, enhancements of capabilities, deletion of obsolete capabilities, and optimization. 2 Categories

More information

(Sub)Gradient Descent

(Sub)Gradient Descent (Sub)Gradient Descent CMSC 422 MARINE CARPUAT marine@cs.umd.edu Figures credit: Piyush Rai Logistics Midterm is on Thursday 3/24 during class time closed book/internet/etc, one page of notes. will include

More information

Autoregressive product of multi-frame predictions can improve the accuracy of hybrid models

Autoregressive product of multi-frame predictions can improve the accuracy of hybrid models Autoregressive product of multi-frame predictions can improve the accuracy of hybrid models Navdeep Jaitly 1, Vincent Vanhoucke 2, Geoffrey Hinton 1,2 1 University of Toronto 2 Google Inc. ndjaitly@cs.toronto.edu,

More information

System Implementation for SemEval-2017 Task 4 Subtask A Based on Interpolated Deep Neural Networks

System Implementation for SemEval-2017 Task 4 Subtask A Based on Interpolated Deep Neural Networks System Implementation for SemEval-2017 Task 4 Subtask A Based on Interpolated Deep Neural Networks 1 Tzu-Hsuan Yang, 2 Tzu-Hsuan Tseng, and 3 Chia-Ping Chen Department of Computer Science and Engineering

More information

Modeling function word errors in DNN-HMM based LVCSR systems

Modeling function word errors in DNN-HMM based LVCSR systems Modeling function word errors in DNN-HMM based LVCSR systems Melvin Jose Johnson Premkumar, Ankur Bapna and Sree Avinash Parchuri Department of Computer Science Department of Electrical Engineering Stanford

More information

A Review: Speech Recognition with Deep Learning Methods

A Review: Speech Recognition with Deep Learning Methods Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 4, Issue. 5, May 2015, pg.1017

More information

Modeling function word errors in DNN-HMM based LVCSR systems

Modeling function word errors in DNN-HMM based LVCSR systems Modeling function word errors in DNN-HMM based LVCSR systems Melvin Jose Johnson Premkumar, Ankur Bapna and Sree Avinash Parchuri Department of Computer Science Department of Electrical Engineering Stanford

More information

A study of speaker adaptation for DNN-based speech synthesis

A study of speaker adaptation for DNN-based speech synthesis A study of speaker adaptation for DNN-based speech synthesis Zhizheng Wu, Pawel Swietojanski, Christophe Veaux, Steve Renals, Simon King The Centre for Speech Technology Research (CSTR) University of Edinburgh,

More information

A Simple VQA Model with a Few Tricks and Image Features from Bottom-up Attention

A Simple VQA Model with a Few Tricks and Image Features from Bottom-up Attention A Simple VQA Model with a Few Tricks and Image Features from Bottom-up Attention Damien Teney 1, Peter Anderson 2*, David Golub 4*, Po-Sen Huang 3, Lei Zhang 3, Xiaodong He 3, Anton van den Hengel 1 1

More information

HIERARCHICAL DEEP LEARNING ARCHITECTURE FOR 10K OBJECTS CLASSIFICATION

HIERARCHICAL DEEP LEARNING ARCHITECTURE FOR 10K OBJECTS CLASSIFICATION HIERARCHICAL DEEP LEARNING ARCHITECTURE FOR 10K OBJECTS CLASSIFICATION Atul Laxman Katole 1, Krishna Prasad Yellapragada 1, Amish Kumar Bedi 1, Sehaj Singh Kalra 1 and Mynepalli Siva Chaitanya 1 1 Samsung

More information

Word Segmentation of Off-line Handwritten Documents

Word Segmentation of Off-line Handwritten Documents Word Segmentation of Off-line Handwritten Documents Chen Huang and Sargur N. Srihari {chuang5, srihari}@cedar.buffalo.edu Center of Excellence for Document Analysis and Recognition (CEDAR), Department

More information

BUILDING CONTEXT-DEPENDENT DNN ACOUSTIC MODELS USING KULLBACK-LEIBLER DIVERGENCE-BASED STATE TYING

BUILDING CONTEXT-DEPENDENT DNN ACOUSTIC MODELS USING KULLBACK-LEIBLER DIVERGENCE-BASED STATE TYING BUILDING CONTEXT-DEPENDENT DNN ACOUSTIC MODELS USING KULLBACK-LEIBLER DIVERGENCE-BASED STATE TYING Gábor Gosztolya 1, Tamás Grósz 1, László Tóth 1, David Imseng 2 1 MTA-SZTE Research Group on Artificial

More information

Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model

Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model Xinying Song, Xiaodong He, Jianfeng Gao, Li Deng Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A.

More information

OCR for Arabic using SIFT Descriptors With Online Failure Prediction

OCR for Arabic using SIFT Descriptors With Online Failure Prediction OCR for Arabic using SIFT Descriptors With Online Failure Prediction Andrey Stolyarenko, Nachum Dershowitz The Blavatnik School of Computer Science Tel Aviv University Tel Aviv, Israel Email: stloyare@tau.ac.il,

More information

arxiv: v2 [cs.cv] 30 Mar 2017

arxiv: v2 [cs.cv] 30 Mar 2017 Domain Adaptation for Visual Applications: A Comprehensive Survey Gabriela Csurka arxiv:1702.05374v2 [cs.cv] 30 Mar 2017 Abstract The aim of this paper 1 is to give an overview of domain adaptation and

More information

Learning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models

Learning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models Learning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models Stephan Gouws and GJ van Rooyen MIH Medialab, Stellenbosch University SOUTH AFRICA {stephan,gvrooyen}@ml.sun.ac.za

More information

Softprop: Softmax Neural Network Backpropagation Learning

Softprop: Softmax Neural Network Backpropagation Learning Softprop: Softmax Neural Networ Bacpropagation Learning Michael Rimer Computer Science Department Brigham Young University Provo, UT 84602, USA E-mail: mrimer@axon.cs.byu.edu Tony Martinez Computer Science

More information

A New Perspective on Combining GMM and DNN Frameworks for Speaker Adaptation

A New Perspective on Combining GMM and DNN Frameworks for Speaker Adaptation A New Perspective on Combining GMM and DNN Frameworks for Speaker Adaptation SLSP-2016 October 11-12 Natalia Tomashenko 1,2,3 natalia.tomashenko@univ-lemans.fr Yuri Khokhlov 3 khokhlov@speechpro.com Yannick

More information

Model Ensemble for Click Prediction in Bing Search Ads

Model Ensemble for Click Prediction in Bing Search Ads Model Ensemble for Click Prediction in Bing Search Ads Xiaoliang Ling Microsoft Bing xiaoling@microsoft.com Hucheng Zhou Microsoft Research huzho@microsoft.com Weiwei Deng Microsoft Bing dedeng@microsoft.com

More information

AGS THE GREAT REVIEW GAME FOR PRE-ALGEBRA (CD) CORRELATED TO CALIFORNIA CONTENT STANDARDS

AGS THE GREAT REVIEW GAME FOR PRE-ALGEBRA (CD) CORRELATED TO CALIFORNIA CONTENT STANDARDS AGS THE GREAT REVIEW GAME FOR PRE-ALGEBRA (CD) CORRELATED TO CALIFORNIA CONTENT STANDARDS 1 CALIFORNIA CONTENT STANDARDS: Chapter 1 ALGEBRA AND WHOLE NUMBERS Algebra and Functions 1.4 Students use algebraic

More information

Deep search. Enhancing a search bar using machine learning. Ilgün Ilgün & Cedric Reichenbach

Deep search. Enhancing a search bar using machine learning. Ilgün Ilgün & Cedric Reichenbach #BaselOne7 Deep search Enhancing a search bar using machine learning Ilgün Ilgün & Cedric Reichenbach We are not researchers Outline I. Periscope: A search tool II. Goals III. Deep learning IV. Applying

More information

INPE São José dos Campos

INPE São José dos Campos INPE-5479 PRE/1778 MONLINEAR ASPECTS OF DATA INTEGRATION FOR LAND COVER CLASSIFICATION IN A NEDRAL NETWORK ENVIRONNENT Maria Suelena S. Barros Valter Rodrigues INPE São José dos Campos 1993 SECRETARIA

More information

A Case Study: News Classification Based on Term Frequency

A Case Study: News Classification Based on Term Frequency A Case Study: News Classification Based on Term Frequency Petr Kroha Faculty of Computer Science University of Technology 09107 Chemnitz Germany kroha@informatik.tu-chemnitz.de Ricardo Baeza-Yates Center

More information

An Online Handwriting Recognition System For Turkish

An Online Handwriting Recognition System For Turkish An Online Handwriting Recognition System For Turkish Esra Vural, Hakan Erdogan, Kemal Oflazer, Berrin Yanikoglu Sabanci University, Tuzla, Istanbul, Turkey 34956 ABSTRACT Despite recent developments in

More information

SARDNET: A Self-Organizing Feature Map for Sequences

SARDNET: A Self-Organizing Feature Map for Sequences SARDNET: A Self-Organizing Feature Map for Sequences Daniel L. James and Risto Miikkulainen Department of Computer Sciences The University of Texas at Austin Austin, TX 78712 dljames,risto~cs.utexas.edu

More information

Course Outline. Course Grading. Where to go for help. Academic Integrity. EE-589 Introduction to Neural Networks NN 1 EE

Course Outline. Course Grading. Where to go for help. Academic Integrity. EE-589 Introduction to Neural Networks NN 1 EE EE-589 Introduction to Neural Assistant Prof. Dr. Turgay IBRIKCI Room # 305 (322) 338 6868 / 139 Wensdays 9:00-12:00 Course Outline The course is divided in two parts: theory and practice. 1. Theory covers

More information

Introduction to Ensemble Learning Featuring Successes in the Netflix Prize Competition

Introduction to Ensemble Learning Featuring Successes in the Netflix Prize Competition Introduction to Ensemble Learning Featuring Successes in the Netflix Prize Competition Todd Holloway Two Lecture Series for B551 November 20 & 27, 2007 Indiana University Outline Introduction Bias and

More information

Learning From the Past with Experiment Databases

Learning From the Past with Experiment Databases Learning From the Past with Experiment Databases Joaquin Vanschoren 1, Bernhard Pfahringer 2, and Geoff Holmes 2 1 Computer Science Dept., K.U.Leuven, Leuven, Belgium 2 Computer Science Dept., University

More information

CS Machine Learning

CS Machine Learning CS 478 - Machine Learning Projects Data Representation Basic testing and evaluation schemes CS 478 Data and Testing 1 Programming Issues l Program in any platform you want l Realize that you will be doing

More information

THE PENNSYLVANIA STATE UNIVERSITY SCHREYER HONORS COLLEGE DEPARTMENT OF MATHEMATICS ASSESSING THE EFFECTIVENESS OF MULTIPLE CHOICE MATH TESTS

THE PENNSYLVANIA STATE UNIVERSITY SCHREYER HONORS COLLEGE DEPARTMENT OF MATHEMATICS ASSESSING THE EFFECTIVENESS OF MULTIPLE CHOICE MATH TESTS THE PENNSYLVANIA STATE UNIVERSITY SCHREYER HONORS COLLEGE DEPARTMENT OF MATHEMATICS ASSESSING THE EFFECTIVENESS OF MULTIPLE CHOICE MATH TESTS ELIZABETH ANNE SOMERS Spring 2011 A thesis submitted in partial

More information

Axiom 2013 Team Description Paper

Axiom 2013 Team Description Paper Axiom 2013 Team Description Paper Mohammad Ghazanfari, S Omid Shirkhorshidi, Farbod Samsamipour, Hossein Rahmatizadeh Zagheli, Mohammad Mahdavi, Payam Mohajeri, S Abbas Alamolhoda Robotics Scientific Association

More information

University of Groningen. Systemen, planning, netwerken Bosman, Aart

University of Groningen. Systemen, planning, netwerken Bosman, Aart University of Groningen Systemen, planning, netwerken Bosman, Aart IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document

More information

Learning Methods in Multilingual Speech Recognition

Learning Methods in Multilingual Speech Recognition Learning Methods in Multilingual Speech Recognition Hui Lin Department of Electrical Engineering University of Washington Seattle, WA 98125 linhui@u.washington.edu Li Deng, Jasha Droppo, Dong Yu, and Alex

More information

Machine Learning and Data Mining. Ensembles of Learners. Prof. Alexander Ihler

Machine Learning and Data Mining. Ensembles of Learners. Prof. Alexander Ihler Machine Learning and Data Mining Ensembles of Learners Prof. Alexander Ihler Ensemble methods Why learn one classifier when you can learn many? Ensemble: combine many predictors (Weighted) combina

More information

Evolutive Neural Net Fuzzy Filtering: Basic Description

Evolutive Neural Net Fuzzy Filtering: Basic Description Journal of Intelligent Learning Systems and Applications, 2010, 2: 12-18 doi:10.4236/jilsa.2010.21002 Published Online February 2010 (http://www.scirp.org/journal/jilsa) Evolutive Neural Net Fuzzy Filtering:

More information

Switchboard Language Model Improvement with Conversational Data from Gigaword

Switchboard Language Model Improvement with Conversational Data from Gigaword Katholieke Universiteit Leuven Faculty of Engineering Master in Artificial Intelligence (MAI) Speech and Language Technology (SLT) Switchboard Language Model Improvement with Conversational Data from Gigaword

More information

Rule Learning With Negation: Issues Regarding Effectiveness

Rule Learning With Negation: Issues Regarding Effectiveness Rule Learning With Negation: Issues Regarding Effectiveness S. Chua, F. Coenen, G. Malcolm University of Liverpool Department of Computer Science, Ashton Building, Ashton Street, L69 3BX Liverpool, United

More information

Generative models and adversarial training

Generative models and adversarial training Day 4 Lecture 1 Generative models and adversarial training Kevin McGuinness kevin.mcguinness@dcu.ie Research Fellow Insight Centre for Data Analytics Dublin City University What is a generative model?

More information

AUTOMATIC DETECTION OF PROLONGED FRICATIVE PHONEMES WITH THE HIDDEN MARKOV MODELS APPROACH 1. INTRODUCTION

AUTOMATIC DETECTION OF PROLONGED FRICATIVE PHONEMES WITH THE HIDDEN MARKOV MODELS APPROACH 1. INTRODUCTION JOURNAL OF MEDICAL INFORMATICS & TECHNOLOGIES Vol. 11/2007, ISSN 1642-6037 Marek WIŚNIEWSKI *, Wiesława KUNISZYK-JÓŹKOWIAK *, Elżbieta SMOŁKA *, Waldemar SUSZYŃSKI * HMM, recognition, speech, disorders

More information

Bootstrapping Personal Gesture Shortcuts with the Wisdom of the Crowd and Handwriting Recognition

Bootstrapping Personal Gesture Shortcuts with the Wisdom of the Crowd and Handwriting Recognition Bootstrapping Personal Gesture Shortcuts with the Wisdom of the Crowd and Handwriting Recognition Tom Y. Ouyang * MIT CSAIL ouyang@csail.mit.edu Yang Li Google Research yangli@acm.org ABSTRACT Personal

More information

The Good Judgment Project: A large scale test of different methods of combining expert predictions

The Good Judgment Project: A large scale test of different methods of combining expert predictions The Good Judgment Project: A large scale test of different methods of combining expert predictions Lyle Ungar, Barb Mellors, Jon Baron, Phil Tetlock, Jaime Ramos, Sam Swift The University of Pennsylvania

More information

Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling

Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling Experiments with SMS Translation and Stochastic Gradient Descent in Spanish Text Author Profiling Notebook for PAN at CLEF 2013 Andrés Alfonso Caurcel Díaz 1 and José María Gómez Hidalgo 2 1 Universidad

More information

Radius STEM Readiness TM

Radius STEM Readiness TM Curriculum Guide Radius STEM Readiness TM While today s teens are surrounded by technology, we face a stark and imminent shortage of graduates pursuing careers in Science, Technology, Engineering, and

More information

Learning Methods for Fuzzy Systems

Learning Methods for Fuzzy Systems Learning Methods for Fuzzy Systems Rudolf Kruse and Andreas Nürnberger Department of Computer Science, University of Magdeburg Universitätsplatz, D-396 Magdeburg, Germany Phone : +49.39.67.876, Fax : +49.39.67.8

More information

Semi-Supervised GMM and DNN Acoustic Model Training with Multi-system Combination and Confidence Re-calibration

Semi-Supervised GMM and DNN Acoustic Model Training with Multi-system Combination and Confidence Re-calibration INTERSPEECH 2013 Semi-Supervised GMM and DNN Acoustic Model Training with Multi-system Combination and Confidence Re-calibration Yan Huang, Dong Yu, Yifan Gong, and Chaojun Liu Microsoft Corporation, One

More information

Measurement. When Smaller Is Better. Activity:

Measurement. When Smaller Is Better. Activity: Measurement Activity: TEKS: When Smaller Is Better (6.8) Measurement. The student solves application problems involving estimation and measurement of length, area, time, temperature, volume, weight, and

More information

Semi-supervised methods of text processing, and an application to medical concept extraction. Yacine Jernite Text-as-Data series September 17.

Semi-supervised methods of text processing, and an application to medical concept extraction. Yacine Jernite Text-as-Data series September 17. Semi-supervised methods of text processing, and an application to medical concept extraction Yacine Jernite Text-as-Data series September 17. 2015 What do we want from text? 1. Extract information 2. Link

More information

Forget catastrophic forgetting: AI that learns after deployment

Forget catastrophic forgetting: AI that learns after deployment Forget catastrophic forgetting: AI that learns after deployment Anatoly Gorshechnikov CTO, Neurala 1 Neurala at a glance Programming neural networks on GPUs since circa 2 B.C. Founded in 2006 expecting

More information

Deep Neural Network Language Models

Deep Neural Network Language Models Deep Neural Network Language Models Ebru Arısoy, Tara N. Sainath, Brian Kingsbury, Bhuvana Ramabhadran IBM T.J. Watson Research Center Yorktown Heights, NY, 10598, USA {earisoy, tsainath, bedk, bhuvana}@us.ibm.com

More information

Analysis of Hybrid Soft and Hard Computing Techniques for Forex Monitoring Systems

Analysis of Hybrid Soft and Hard Computing Techniques for Forex Monitoring Systems Analysis of Hybrid Soft and Hard Computing Techniques for Forex Monitoring Systems Ajith Abraham School of Business Systems, Monash University, Clayton, Victoria 3800, Australia. Email: ajith.abraham@ieee.org

More information

Linking Task: Identifying authors and book titles in verbose queries

Linking Task: Identifying authors and book titles in verbose queries Linking Task: Identifying authors and book titles in verbose queries Anaïs Ollagnier, Sébastien Fournier, and Patrice Bellot Aix-Marseille University, CNRS, ENSAM, University of Toulon, LSIS UMR 7296,

More information

CS 1103 Computer Science I Honors. Fall Instructor Muller. Syllabus

CS 1103 Computer Science I Honors. Fall Instructor Muller. Syllabus CS 1103 Computer Science I Honors Fall 2016 Instructor Muller Syllabus Welcome to CS1103. This course is an introduction to the art and science of computer programming and to some of the fundamental concepts

More information

South Carolina English Language Arts

South Carolina English Language Arts South Carolina English Language Arts A S O F J U N E 2 0, 2 0 1 0, T H I S S TAT E H A D A D O P T E D T H E CO M M O N CO R E S TAT E S TA N DA R D S. DOCUMENTS REVIEWED South Carolina Academic Content

More information

A Deep Bag-of-Features Model for Music Auto-Tagging

A Deep Bag-of-Features Model for Music Auto-Tagging 1 A Deep Bag-of-Features Model for Music Auto-Tagging Juhan Nam, Member, IEEE, Jorge Herrera, and Kyogu Lee, Senior Member, IEEE latter is often referred to as music annotation and retrieval, or simply

More information

Grade 6: Correlated to AGS Basic Math Skills

Grade 6: Correlated to AGS Basic Math Skills Grade 6: Correlated to AGS Basic Math Skills Grade 6: Standard 1 Number Sense Students compare and order positive and negative integers, decimals, fractions, and mixed numbers. They find multiples and

More information

Speech Recognition at ICSI: Broadcast News and beyond

Speech Recognition at ICSI: Broadcast News and beyond Speech Recognition at ICSI: Broadcast News and beyond Dan Ellis International Computer Science Institute, Berkeley CA Outline 1 2 3 The DARPA Broadcast News task Aspects of ICSI

More information

A student diagnosing and evaluation system for laboratory-based academic exercises

A student diagnosing and evaluation system for laboratory-based academic exercises A student diagnosing and evaluation system for laboratory-based academic exercises Maria Samarakou, Emmanouil Fylladitakis and Pantelis Prentakis Technological Educational Institute (T.E.I.) of Athens

More information

Multivariate k-nearest Neighbor Regression for Time Series data -

Multivariate k-nearest Neighbor Regression for Time Series data - Multivariate k-nearest Neighbor Regression for Time Series data - a novel Algorithm for Forecasting UK Electricity Demand ISF 2013, Seoul, Korea Fahad H. Al-Qahtani Dr. Sven F. Crone Management Science,

More information

have to be modeled) or isolated words. Output of the system is a grapheme-tophoneme conversion system which takes as its input the spelling of words,

have to be modeled) or isolated words. Output of the system is a grapheme-tophoneme conversion system which takes as its input the spelling of words, A Language-Independent, Data-Oriented Architecture for Grapheme-to-Phoneme Conversion Walter Daelemans and Antal van den Bosch Proceedings ESCA-IEEE speech synthesis conference, New York, September 1994

More information

Lecture 1: Basic Concepts of Machine Learning

Lecture 1: Basic Concepts of Machine Learning Lecture 1: Basic Concepts of Machine Learning Cognitive Systems - Machine Learning Ute Schmid (lecture) Johannes Rabold (practice) Based on slides prepared March 2005 by Maximilian Röglinger, updated 2010

More information

Numeracy Medium term plan: Summer Term Level 2C/2B Year 2 Level 2A/3C

Numeracy Medium term plan: Summer Term Level 2C/2B Year 2 Level 2A/3C Numeracy Medium term plan: Summer Term Level 2C/2B Year 2 Level 2A/3C Using and applying mathematics objectives (Problem solving, Communicating and Reasoning) Select the maths to use in some classroom

More information

Using focal point learning to improve human machine tacit coordination

Using focal point learning to improve human machine tacit coordination DOI 10.1007/s10458-010-9126-5 Using focal point learning to improve human machine tacit coordination InonZuckerman SaritKraus Jeffrey S. Rosenschein The Author(s) 2010 Abstract We consider an automated

More information

Probabilistic Latent Semantic Analysis

Probabilistic Latent Semantic Analysis Probabilistic Latent Semantic Analysis Thomas Hofmann Presentation by Ioannis Pavlopoulos & Andreas Damianou for the course of Data Mining & Exploration 1 Outline Latent Semantic Analysis o Need o Overview

More information

GACE Computer Science Assessment Test at a Glance

GACE Computer Science Assessment Test at a Glance GACE Computer Science Assessment Test at a Glance Updated May 2017 See the GACE Computer Science Assessment Study Companion for practice questions and preparation resources. Assessment Name Computer Science

More information

Truth Inference in Crowdsourcing: Is the Problem Solved?

Truth Inference in Crowdsourcing: Is the Problem Solved? Truth Inference in Crowdsourcing: Is the Problem Solved? Yudian Zheng, Guoliang Li #, Yuanbing Li #, Caihua Shan, Reynold Cheng # Department of Computer Science, Tsinghua University Department of Computer

More information

Analysis of Emotion Recognition System through Speech Signal Using KNN & GMM Classifier

Analysis of Emotion Recognition System through Speech Signal Using KNN & GMM Classifier IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p- ISSN: 2278-8735.Volume 10, Issue 2, Ver.1 (Mar - Apr.2015), PP 55-61 www.iosrjournals.org Analysis of Emotion

More information

Online Updating of Word Representations for Part-of-Speech Tagging

Online Updating of Word Representations for Part-of-Speech Tagging Online Updating of Word Representations for Part-of-Speech Tagging Wenpeng Yin LMU Munich wenpeng@cis.lmu.de Tobias Schnabel Cornell University tbs49@cornell.edu Hinrich Schütze LMU Munich inquiries@cislmu.org

More information

Exploration. CS : Deep Reinforcement Learning Sergey Levine

Exploration. CS : Deep Reinforcement Learning Sergey Levine Exploration CS 294-112: Deep Reinforcement Learning Sergey Levine Class Notes 1. Homework 4 due on Wednesday 2. Project proposal feedback sent Today s Lecture 1. What is exploration? Why is it a problem?

More information

Reinforcement Learning by Comparing Immediate Reward

Reinforcement Learning by Comparing Immediate Reward Reinforcement Learning by Comparing Immediate Reward Punit Pandey DeepshikhaPandey Dr. Shishir Kumar Abstract This paper introduces an approach to Reinforcement Learning Algorithm by comparing their immediate

More information

School Size and the Quality of Teaching and Learning

School Size and the Quality of Teaching and Learning School Size and the Quality of Teaching and Learning An Analysis of Relationships between School Size and Assessments of Factors Related to the Quality of Teaching and Learning in Primary Schools Undertaken

More information

Circuit Simulators: A Revolutionary E-Learning Platform

Circuit Simulators: A Revolutionary E-Learning Platform Circuit Simulators: A Revolutionary E-Learning Platform Mahi Itagi Padre Conceicao College of Engineering, Verna, Goa, India. itagimahi@gmail.com Akhil Deshpande Gogte Institute of Technology, Udyambag,

More information

Discriminative Learning of Beam-Search Heuristics for Planning

Discriminative Learning of Beam-Search Heuristics for Planning Discriminative Learning of Beam-Search Heuristics for Planning Yuehua Xu School of EECS Oregon State University Corvallis,OR 97331 xuyu@eecs.oregonstate.edu Alan Fern School of EECS Oregon State University

More information

Essentials of Ability Testing. Joni Lakin Assistant Professor Educational Foundations, Leadership, and Technology

Essentials of Ability Testing. Joni Lakin Assistant Professor Educational Foundations, Leadership, and Technology Essentials of Ability Testing Joni Lakin Assistant Professor Educational Foundations, Leadership, and Technology Basic Topics Why do we administer ability tests? What do ability tests measure? How are

More information

COMPUTER-ASSISTED INDEPENDENT STUDY IN MULTIVARIATE CALCULUS

COMPUTER-ASSISTED INDEPENDENT STUDY IN MULTIVARIATE CALCULUS COMPUTER-ASSISTED INDEPENDENT STUDY IN MULTIVARIATE CALCULUS L. Descalço 1, Paula Carvalho 1, J.P. Cruz 1, Paula Oliveira 1, Dina Seabra 2 1 Departamento de Matemática, Universidade de Aveiro (PORTUGAL)

More information

TRANSFER LEARNING OF WEAKLY LABELLED AUDIO. Aleksandr Diment, Tuomas Virtanen

TRANSFER LEARNING OF WEAKLY LABELLED AUDIO. Aleksandr Diment, Tuomas Virtanen TRANSFER LEARNING OF WEAKLY LABELLED AUDIO Aleksandr Diment, Tuomas Virtanen Tampere University of Technology Laboratory of Signal Processing Korkeakoulunkatu 1, 33720, Tampere, Finland firstname.lastname@tut.fi

More information

Phonetic- and Speaker-Discriminant Features for Speaker Recognition. Research Project

Phonetic- and Speaker-Discriminant Features for Speaker Recognition. Research Project Phonetic- and Speaker-Discriminant Features for Speaker Recognition by Lara Stoll Research Project Submitted to the Department of Electrical Engineering and Computer Sciences, University of California

More information

arxiv: v1 [cs.cv] 10 May 2017

arxiv: v1 [cs.cv] 10 May 2017 Inferring and Executing Programs for Visual Reasoning Justin Johnson 1 Bharath Hariharan 2 Laurens van der Maaten 2 Judy Hoffman 1 Li Fei-Fei 1 C. Lawrence Zitnick 2 Ross Girshick 2 1 Stanford University

More information

Why Did My Detector Do That?!

Why Did My Detector Do That?! Why Did My Detector Do That?! Predicting Keystroke-Dynamics Error Rates Kevin Killourhy and Roy Maxion Dependable Systems Laboratory Computer Science Department Carnegie Mellon University 5000 Forbes Ave,

More information

Evolution of Symbolisation in Chimpanzees and Neural Nets

Evolution of Symbolisation in Chimpanzees and Neural Nets Evolution of Symbolisation in Chimpanzees and Neural Nets Angelo Cangelosi Centre for Neural and Adaptive Systems University of Plymouth (UK) a.cangelosi@plymouth.ac.uk Introduction Animal communication

More information

FUZZY EXPERT. Dr. Kasim M. Al-Aubidy. Philadelphia University. Computer Eng. Dept February 2002 University of Damascus-Syria

FUZZY EXPERT. Dr. Kasim M. Al-Aubidy. Philadelphia University. Computer Eng. Dept February 2002 University of Damascus-Syria FUZZY EXPERT SYSTEMS 16-18 18 February 2002 University of Damascus-Syria Dr. Kasim M. Al-Aubidy Computer Eng. Dept. Philadelphia University What is Expert Systems? ES are computer programs that emulate

More information

Rule Learning with Negation: Issues Regarding Effectiveness

Rule Learning with Negation: Issues Regarding Effectiveness Rule Learning with Negation: Issues Regarding Effectiveness Stephanie Chua, Frans Coenen, and Grant Malcolm University of Liverpool Department of Computer Science, Ashton Building, Ashton Street, L69 3BX

More information

Robust Speech Recognition using DNN-HMM Acoustic Model Combining Noise-aware training with Spectral Subtraction

Robust Speech Recognition using DNN-HMM Acoustic Model Combining Noise-aware training with Spectral Subtraction INTERSPEECH 2015 Robust Speech Recognition using DNN-HMM Acoustic Model Combining Noise-aware training with Spectral Subtraction Akihiro Abe, Kazumasa Yamamoto, Seiichi Nakagawa Department of Computer

More information

Notes on The Sciences of the Artificial Adapted from a shorter document written for course (Deciding What to Design) 1

Notes on The Sciences of the Artificial Adapted from a shorter document written for course (Deciding What to Design) 1 Notes on The Sciences of the Artificial Adapted from a shorter document written for course 17-652 (Deciding What to Design) 1 Ali Almossawi December 29, 2005 1 Introduction The Sciences of the Artificial

More information

Class-Discriminative Weighted Distortion Measure for VQ-Based Speaker Identification

Class-Discriminative Weighted Distortion Measure for VQ-Based Speaker Identification Class-Discriminative Weighted Distortion Measure for VQ-Based Speaker Identification Tomi Kinnunen and Ismo Kärkkäinen University of Joensuu, Department of Computer Science, P.O. Box 111, 80101 JOENSUU,

More information

ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY DOWNLOAD EBOOK : ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY PDF

ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY DOWNLOAD EBOOK : ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY PDF Read Online and Download Ebook ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY DOWNLOAD EBOOK : ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY PDF Click link bellow and free register to download

More information

Grade 2: Using a Number Line to Order and Compare Numbers Place Value Horizontal Content Strand

Grade 2: Using a Number Line to Order and Compare Numbers Place Value Horizontal Content Strand Grade 2: Using a Number Line to Order and Compare Numbers Place Value Horizontal Content Strand Texas Essential Knowledge and Skills (TEKS): (2.1) Number, operation, and quantitative reasoning. The student

More information

Twitter Sentiment Classification on Sanders Data using Hybrid Approach

Twitter Sentiment Classification on Sanders Data using Hybrid Approach IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 17, Issue 4, Ver. I (July Aug. 2015), PP 118-123 www.iosrjournals.org Twitter Sentiment Classification on Sanders

More information