
Maturaarbeit, Oktober 2016

A neural network learns to play Mortal Kombat 3

Author, class: Carlo Hartmann, M4a
Supervising teacher: Andreas Umbach


Contents

1 Abstract
2 Foreword
   2.1 Motivation
   2.2 Acknowledgment
3 Introduction
4 Neural Network
   4.1 Neuron
   4.2 The topology of a neural network
   4.3 Purpose of Learning Algorithms
5 Learning Algorithms
   5.1 Supervised Learning
      5.1.1 The Classical Perceptron
      5.1.2 The modern Perceptron
      5.1.3 Linear separability
      5.1.4 Error surface
      5.1.5 Perceptron learning
   5.2 Unsupervised Learning
      5.2.1 Clusters and the clustering problem
      5.2.2 Competitive learning
6 Image Recognition
   6.1 OpenCV
      6.1.1 Template matching
      6.1.2 Difference
7 Own project
   7.1 Mortal Kombat 3
   7.2 Observation of the actions
   7.3 Losing control over the computer
   7.4 Two different approaches
      7.4.1 SOM
      7.4.2 Supervised neural network
      7.4.3 Structure of the neural network
      7.4.4 The results of the network
8 Discussion

Chapter 1
Abstract

The purpose of this project is to get a better understanding of neural networks and to create a neural network that is able to play Mortal Kombat 3. In the first part of my project I created several programs that could perform basic tasks: taking screenshots, starting the emulator, comparing two images and constructing a basic neural network with neurons and layers. Building on this knowledge about neural networks, I then created a network that was able to play the game to a certain extent.

Chapter 2
Foreword

2.1 Motivation

I have always been fascinated by the human brain. What is the other person thinking, and why? How will he react to this? How is he able to learn so well? These are questions I have asked myself for a long time. Neural networks were created to address such questions: by simulating a learning process we are able to get a better understanding of the human brain. This is exactly why I chose this subject. The idea of creating a neural network that could learn to play a video game was not my own. In 2015, a famous YouTuber called Sethbling created a neural network that was able to play Super Mario World. I wanted to create a neural network that could do the same, but doing the exact same thing with Mario would not have felt like my own work, so I chose to do my paper about another classic: Mortal Kombat 3.

2.2 Acknowledgment

I want to thank my supervisor Andreas Umbach for helping me with this project. Every time I was stuck, he gave me new ideas for approaching the subject and led me in the right direction past several obstacles during this project.

Chapter 3
Introduction

The goal of this project is to understand how neural networks are built and how they are capable of learning. This acquired knowledge was then used to create a neural network that is able to play Mortal Kombat 3. The network should be able to learn on its own. The program is written in Java, since this is the language I am most familiar with and have used before. Java is also a very handy programming language with access to a lot of different libraries. As a development environment I used Eclipse, and the game ran in a Super NES emulator.

Chapter 4
Neural Network

4.1 Neuron

The most basic component of the human brain is the neuron. The brain consists of billions of neurons, whose purpose is to process information. Even though their task is so important, the structure of each neuron is rather simple. As shown in Fig. 4.1, the biological neuron is composed of a nucleus, dendrites, a cell body and an axon. The axon serves as a conductor and transmits signals to the dendrites of a different neuron at an intersection called a synapse. [Soares, F. M. / Souza, A. M. F. (2016)]

The neurons used in artificial neural networks are modeled after their biological counterpart. Just like the biological neuron, the artificial ones have an input and an output component. This is visualized in Fig. 4.2. Every neuron has a set number of inputs, and each input has a specific weight value. The weight values are the components that determine what output will result at the end; altering those values results in a different output. During the learning process, the learning algorithms change these values. What the algorithms do exactly will be explained in chapter 5. The body of the neuron has two functions implemented in it: the integration function (also referred to as summation) and the activation function. The integration function is needed because we use a primitive activation function with only one parameter: it reduces the n possible arguments to a single value that the activation function then uses as its argument. This results in an output.

Figure 4.1: Biological neuron.

The output is

either just the result of the activation function or, if the output feeds into another layer, it sometimes also has a specific weight value. What layers are exactly will be explained in the next section. [Rojas, R. (1996)]

Figure 4.2: Artificial neuron.
Figure 4.3: Basic structure of layers.
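The neuron just described, with weighted inputs, an integration step and an activation step, can be sketched in a few lines of Java. The class and method names here are my own illustration, not from any particular library, and a simple threshold is used as the activation function:

```java
/** A minimal artificial neuron: weighted sum (integration) followed by an activation function. */
class Neuron {
    double[] weights;
    double bias;

    Neuron(double[] weights, double bias) {
        this.weights = weights;
        this.bias = bias;
    }

    /** Integration (summation) function: reduces the n inputs to a single value. */
    double integrate(double[] inputs) {
        double sum = bias;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * inputs[i];
        }
        return sum;
    }

    /** Activation function: here a simple threshold at 0. */
    double activate(double[] inputs) {
        return integrate(inputs) >= 0 ? 1.0 : 0.0;
    }
}
```

With weights (1, 1) and bias -1.5, this single neuron already behaves like an AND gate: only the input (1, 1) pushes the sum above zero.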

4.2 The topology of a neural network

Layers define the capabilities of a network: the more layers the network has, the more information can be processed. A neural network is always separated into several layers, and every network consists of at least an input layer and an output layer. The input layer is the first layer in every network; it receives the incoming information and processes it first. The output layer is the last layer of a network. It receives values from either the input layer or a hidden layer, processes them one last time, and has a direct influence on the outside world. Hidden layers are the body of the network. Their number can vary from 0 to as many layers as you want, and every added layer enhances the network's capacity to represent more complex knowledge. Figure 4.3 shows the structure of a basic neural network with one hidden layer. [Soares, F. M. / Souza, A. M. F. (2016)]

4.3 Purpose of Learning Algorithms

Learning algorithms are responsible for the actual learning process that networks want to achieve. As stated in section 4.1, learning algorithms optimize the weight values to enhance the knowledge of the network. Every input has a specific weight value that determines what output will result at the end, and every single weight in the whole network has an influence and can change the whole outcome.
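The layer structure described above can be sketched as a feed-forward pass. This is my own minimal illustration, with one weight matrix per layer and a hard threshold as activation:

```java
/** Minimal feed-forward pass through fully connected layers (illustrative sketch). */
class Network {
    double[][][] layerWeights; // one weight matrix per layer: [layer][neuron][input]

    Network(double[][][] layerWeights) {
        this.layerWeights = layerWeights;
    }

    /** Propagates the input through every layer in turn. */
    double[] feedForward(double[] input) {
        double[] signal = input;
        for (double[][] layer : layerWeights) {
            double[] next = new double[layer.length];
            for (int n = 0; n < layer.length; n++) {
                double sum = 0.0;
                for (int i = 0; i < signal.length; i++) {
                    sum += layer[n][i] * signal[i];
                }
                next[n] = sum >= 0 ? 1.0 : 0.0; // threshold activation
            }
            signal = next; // the output of one layer is the input of the next
        }
        return signal;
    }
}
```

Adding a hidden layer is just a matter of adding another weight matrix to the array.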

Chapter 5
Learning Algorithms

People do not learn everything in the same way. An example is how people learn vocabulary: they see the word in their language and they see what it means in a different one. That means they know what result is expected. Another example is how they learned to walk: they had no idea how to move their own body to achieve it, but they tried over and over again, failed many times, and gradually learned it. Learning algorithms are likewise separated into different types. There are two big classifications: supervised learning and unsupervised learning. [Soares, F. M. / Souza, A. M. F. (2016)]

5.1 Supervised Learning

"A learning algorithm is an adaptive method by which a network of computing units self-organizes to implement the desired behavior." (Rojas, 1996: p. 77). This is how Rojas described the behavior of a learning algorithm. In supervised learning this self-organization is achieved by presenting examples of desired input-output pairs. The network is then able to adjust the weights between the neurons in order to ensure that a specific input results in a specific output. For such an algorithm to learn, a large amount of data is necessary; insufficient data renders the approach inoperative. How such a scenario can be avoided will be explained in section 5.2. How a learning algorithm processes input-output examples is shown in Fig. 5.1. An input is fed into the network, the network processes the information and gives an output, and that output is then compared to the expected output. If the two outputs are identical, the network is not changed; if they differ, the network parameters are adjusted. This process is repeated thousands of times. After a certain number of iterations, the network's output should converge to the desired output, independently of the input. [Rojas, R. (1996)]

5.1.1 The Classical Perceptron

Perceptrons were a big step for neural networks.
In 1958, Rosenblatt, an American psychologist, proposed the concept of the perceptron. The innovative part of it was the introduction of numerical weights and a special interconnection pattern. The classical perceptron, as proposed by Rosenblatt, is shown in Fig. 5.2. While the perceptron used nowadays works differently from what was originally proposed, the concept remains the same. The classical perceptron has a projection area, sometimes labeled retina. This retina sends binary values to a layer of computing units. The connections between the retina and the first layer of computing units are

Figure 5.1: Learning process.
Figure 5.2: The classical perceptron [after Rosenblatt 1958].

Figure 5.3: Predicates and weights of a perceptron.

deterministic and non-adaptive. This means that they are not weighted and will not be changed in the process of learning. Connections are selected stochastically. This was done so that the model is biologically plausible, since the goal of a neural network is to simulate the processes in a human brain. The whole idea behind the system is to train it so that it is able to recognize certain input patterns in the connection area, which in turn leads to the appropriate path through the connections to the output layer. In this model the learning algorithm must derive suitable weights. [Rojas, R. (1996)]

5.1.2 The modern Perceptron

Minsky and Papert saw big potential in Rosenblatt's system. They took the essential features of his system to study its computational capabilities. Their new perceptron is a simplification of Rosenblatt's classical perceptron. For practical reasons I will from now on refer to Rosenblatt's perceptron as the classical perceptron and to Minsky and Papert's perceptron simply as the perceptron. The perceptron also has a retina of pixels with binary values onto which patterns are projected. Some pixels are connected to so-called predicates, logic elements that compute a single bit from their input. Those predicates then transmit their binary values to a weighted threshold element. That threshold element is responsible for the final decision in, e.g., a recognition problem (see Fig. 5.3).

"A simple perceptron is a computing unit with threshold $\theta$ which, when receiving the $n$ real inputs $x_1, x_2, \ldots, x_n$ through edges with the associated weights $w_1, w_2, \ldots, w_n$, outputs 1 if the inequality $\sum_{i=1}^{n} w_i x_i \geq \theta$ holds and otherwise 0." (Rojas, 1996: p. 60)

This is the first definition that was officially given to the perceptron. Before this, a threshold element was associated with either a whole set of predicates or a network of computing elements.
What constitutes a network of computing elements exceeds the scope of my matura thesis. This definition by Rojas refers to a perceptron

as an isolated threshold element which computes its output without any delay. The perceptron also separates its input space into two half-spaces, outputting either 0 or 1. [Rojas, R. (1996)] [Berger, C. (2016)]

Bias

The bias is an additional input that is added to a perceptron with a fixed input value and its own weight. Simply put, the bias is the output that the perceptron gives when there is zero input. It increases the capacity of the neural network to solve problems. The bias isn't essential, but it can be a very useful tool. Its influence is best shown with the example of an AND gate built from a perceptron. An AND gate returns 1 (true) only if both inputs are true. A perceptron is used with two inputs $x_1$ and $x_2$. In addition to those two inputs there is the bias, which has a fixed input value of +1. The setup of the perceptron is shown in Fig. 5.4. The weight of the bias is in this example $-30$; this number varies from situation to situation, but in this scenario it needs to be $-30$ to work out. First the inputs go through the summation function $s = w \cdot x$, where $w$ is the weight vector and $x$ the input vector. With input weights of 20 for both inputs (consistent with the values in the table below), this gives for our scenario:

$s = 20x_1 + 20x_2 - 30$

For the activation function we use the sigmoid function, chosen for its capability of simulating the processing in the human brain. A property of the sigmoid function is that already around $+4$ and $-4$ its value is practically $1$ or $0$. This is the equation of the function:

$g(s) = \dfrac{1}{1 + e^{-s}}$

The output of the perceptron is:

$h_\theta(x) = g(s)$

If the numbers are put into a table:

$x_1$ | $x_2$ | $h_\theta(x)$
0 | 0 | $g(-30) \approx 0$
0 | 1 | $g(-10) \approx 0$
1 | 0 | $g(-10) \approx 0$
1 | 1 | $g(10) \approx 1$

With the bias the AND gate works perfectly; creating an AND gate without a bias is unnecessarily complicated. The most important question is how to find the value for the bias. In simple examples like this it is easiest to do it by intuition. In more complicated networks with hidden layers the learning algorithm finds the bias for us: it treats the bias as a normal input whose weight is adjusted in the same way as all the other weights. [Berger, C. (2016)]

Figure 5.4: Example of an AND gate with a perceptron.

5.1.3 Linear separability

"Two sets $A$ and $B$ of points in an $n$-dimensional space are called absolutely linearly separable if $n+1$ real numbers $w_1, \ldots, w_{n+1}$ exist such that every point $(x_1, x_2, \ldots, x_n) \in A$ satisfies $\sum_{i=1}^{n} w_i x_i > w_{n+1}$ and every point $(x_1, x_2, \ldots, x_n) \in B$ satisfies $\sum_{i=1}^{n} w_i x_i < w_{n+1}$." (Rojas, 1996: p. 80)

The learning algorithm tries to separate the input data into two different sets. This is where linear separability comes in. As the definition above states, two sets of points in an $n$-dimensional space can only be called linearly separable if they meet certain requirements. The threshold is important in this case because $w_{n+1}$ is $-\theta$ (the negative of the threshold). The explanation of why $w_{n+1}$ needs to be $-\theta$ exceeds the scope of this introduction. With the help of the summation function and $w_{n+1}$ we
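The table above can be reproduced in a few lines of Java. This is my own sketch of the example, with the weights and bias taken from the AND-gate setup:

```java
/** Sigmoid perceptron implementing the AND gate from the example above. */
class SigmoidAnd {
    static final double W1 = 20.0, W2 = 20.0, BIAS = -30.0;

    /** The sigmoid activation function g(s) = 1 / (1 + e^(-s)). */
    static double sigmoid(double s) {
        return 1.0 / (1.0 + Math.exp(-s));
    }

    /** h(x) = g(20*x1 + 20*x2 - 30) */
    static double output(double x1, double x2) {
        return sigmoid(W1 * x1 + W2 * x2 + BIAS);
    }
}
```

Evaluating `output` for all four input combinations gives values very close to 0 except for (1, 1), which is very close to 1, exactly as in the table.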

Figure 5.5: Error function for the AND gate with weights between 0.5 and 1.5.

can cleanly separate all points. If the summation $\sum_{i=1}^{n} w_i x_i$ is bigger than $\theta$, the points are part of set A; if it is smaller, they belong to set B. [Rojas, R. (1996)]

5.1.4 Error surface

The error is technically just the number of incorrectly classified points, and the objective of a learning algorithm is to minimize it. There are several valid approaches to achieve this. One simple way is a so-called greedy algorithm that computes the local error of a perceptron with a given weight vector, decides in which direction the weight vector needs to be updated, and then selects new weights in that search direction. The error function is visualized in Fig. 5.5 to give a better understanding of how it works. The plot was created by first fixing a threshold $\theta$ for an AND gate and then computing the error for all weights $w_1$ and $w_2$ between 0.5 and 1.5. The error is calculated by comparing the value of the output with the expected value. It is clearly visible that there are regions where the error function returns 2 or 1; what we want is the region where it is 0, which in this function is a triangle. How this surface looks from above and how the right weight is found is illustrated in Fig. 5.6. The solution is not always found right away; the weight vector is adjusted slowly: it starts at $w_0$, is updated twice through $w_1$ and $w_2$, and finally reaches the solution $w$. [Rojas, R. (1996)]

5.1.5 Perceptron learning

Figure 5.7 is a flowchart for this specific learning algorithm. For this algorithm there is a training set that consists of two sets, P and N, in an n-dimensional extended input space. The task of the algorithm is to find a weight vector $w$ that separates those two sets.
This algorithm only changes a weight vector if a vector in either P or N was not classified

Figure 5.6: Iteration steps to the region of minimal error.

correctly. The flowchart does not have an end, for two reasons. The first is that the algorithm shouldn't stop as long as there is anything to learn: learning is an ongoing process that shouldn't stop as long as there is data to work with. The second reason is that it shouldn't be possible to exit the algorithm: the last node doesn't need to make any more decisions, since the only possible answer to its if-clause is yes. This results in an endless loop of learning. [Rojas, R. (1996)]

5.2 Unsupervised Learning

The possibilities with supervised learning are immense, but in some scenarios it fails. This is where unsupervised learning comes in. The big difference between supervised and unsupervised learning is that unsupervised learning does not need an expected output to compare with. Instead, it decides what output would be best for a given input and reorganizes the network accordingly. There are two main classes of unsupervised learning: reinforcement learning and competitive learning. In the first method the algorithm reinforces the weights of the network in such a way as to enhance the reproducibility of the desired output; one of the best-known examples of this method is Hebbian learning. Competitive learning, as the name suggests, works in such a way that the elements of the network compete against each other for the right to provide the output associated with an input vector. The unit that wins is called the BMU (best matching unit). This matura paper will explain competitive learning in depth. [Rojas, R. (1996)] [Soares, F. M. / Souza, A. M. F. (2016)]

5.2.1 Clusters and the clustering problem

Clusters are the key to the concept of competitive learning. The learning algorithm is able to find the best output by organizing the input data into so-called clusters. What the clustering looks like is shown in the two figures 5.8a and 5.8b. In Fig.
5.8a there are two sets of input vectors that have been put into a coordinate system. What

start: the weight vector $w_0$ is generated randomly; set $t = 0$
1. A vector $x \in P \cup N$ is selected randomly.
2. If $x \in P$ and $w_t \cdot x > 0$: x is classified correctly, go back to step 1.
3. If $x \in P$ and $w_t \cdot x \leq 0$: set $w_{t+1} = w_t + x$ and $t = t + 1$, go back to step 1.
4. If $x \in N$ and $w_t \cdot x < 0$: x is classified correctly, go back to step 1.
5. If $x \in N$ and $w_t \cdot x \geq 0$: set $w_{t+1} = w_t - x$ and $t = t + 1$, go back to step 1.

Figure 5.7: Flowchart of the perceptron learning algorithm.
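As a sketch, the flowchart in Fig. 5.7 translates to the following Java (my own illustration; the endless loop is replaced by a fixed iteration budget, and P and N are passed in as plain arrays):

```java
import java.util.Random;

/** Perceptron learning on two point sets P and N in extended input space (illustrative sketch). */
class PerceptronLearning {
    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    /** Runs the update rule from the flowchart for a fixed number of steps. */
    static double[] train(double[][] P, double[][] N, int steps, long seed) {
        Random rnd = new Random(seed);
        double[] w = new double[P[0].length]; // start from the zero vector for reproducibility
        for (int t = 0; t < steps; t++) {
            boolean fromP = rnd.nextBoolean();
            double[] x = fromP ? P[rnd.nextInt(P.length)] : N[rnd.nextInt(N.length)];
            if (fromP && dot(w, x) <= 0) {          // x in P misclassified: add it
                for (int i = 0; i < w.length; i++) w[i] += x[i];
            } else if (!fromP && dot(w, x) >= 0) {  // x in N misclassified: subtract it
                for (int i = 0; i < w.length; i++) w[i] -= x[i];
            }
        }
        return w;
    }
}
```

On a linearly separable toy set, the returned $w$ ends up classifying every point of P positively and every point of N negatively.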

(a) (b)
Figure 5.8: Left: Two sets of vectors P and N. Right: Weight vectors for the clusters.

the clustering now does is that it looks at the vectors and tries to approximate them with a weight vector. The weight vectors that result from this setup are then visible in Fig. 5.8b. Each weight vector is represented by one computing unit, which means the number of clusters is predefined. This predefinition results in some problems. Unsupervised learning is mainly used because we do not actually know the whole structure of the data, so estimating the number of clusters needed can be hard. This applies especially when dealing with multidimensional data sets with an unknown deep structure. So the question now is: "If the number and distribution of clusters is unknown, how can we decide how many computing units and thus how many representative weight vectors we should use?" (Rojas, 1996: p. 103) [Rojas, R. (1996)]

5.2.2 Competitive learning

To get a better understanding of the subject, a network is created for Fig. 5.8 as an example. Since there are 3 clusters A, B and C in Fig. 5.8, the network that processes the problem also needs 3 units. The concept of competitive learning dictates that only one of the units is allowed to actually fire a 1, which makes it necessary for the units to communicate with each other. Figure 5.9 shows a possible network that would be able to process the problem in Fig. 5.8. In it, each unit computes its weighted input, but in the end only the best matching unit is allowed to fire a 1; the other units are prevented from giving any output. The connections between the single units are also visualized in Fig. 5.9. This setup can also be viewed as multiple perceptrons with variable thresholds: in each computation the thresholds are updated to ensure that only one unit is able to fire.
The following learning algorithm is a possible way to identify the clusters of input vectors. First, some definitions: $X = (x_1, x_2, \ldots, x_l)$ is a set of normalized input vectors in $n$-dimensional space which we want to classify into $k$ different clusters. The network itself consists of $k$ units, each with $n$ inputs and a threshold of zero. Using a threshold of zero does not lose us any generality. The algorithm stops after a predetermined number of steps. What it does is attract the weight vectors in the direction of the clusters in the input space. Normalizing after we substitute $w_m$ with $w_m + x_j$ is a really important step: it prevents one vector from becoming so big that it would win every competition. That would result in several so-called dead units, since during the algorithm only one unit is updated at a time.

Figure 5.9: A network of three competing units with connections between each unit. [Rojas, R. (1996)]

5.2.3 Kohonen network

Kohonen networks, also called self-organizing maps, are a kind of network architecture that uses unsupervised learning. The architecture was first created by the Finnish professor Teuvo Kohonen in the early 80s. It works in a similar way to traditional competitive neural networks: in the algorithm the BMU is also calculated and then updated accordingly, but there is one difference, namely the concept of neighborhood neurons. A Kohonen network also updates the neurons that are nearest to the BMU. These neighborhood neurons are not changed as much as the BMU itself. As a result, neurons that are close to each other give a similar output, which creates a map of the data in which the single clusters are visible. This is why it is also called a self-organizing map, or SOM for short. Figure 5.11 is a visualization of the mapping process: the blue cluster stands for the input data and the grid for the neurons in the network. The yellow marker indicates the nearest neuron calculated by the algorithm, which is then moved in; the adjacent neurons are also moved. After repeating this process many times, the network looks like the last portion of the illustration. [Rojas, R. (1996)]
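A single update step of such a self-organizing map could look like this in Java. This is my own simplified sketch: the neurons form a 1-D chain, and the direct neighbors of the BMU move only half as far; a real SOM would typically use a decaying neighborhood radius and learning rate:

```java
/** One training step of a tiny 1-D self-organizing map (illustrative sketch). */
class Som {
    double[][] neurons; // neurons[i] is the weight vector of neuron i

    Som(double[][] neurons) {
        this.neurons = neurons;
    }

    static double dist2(double[] a, double[] b) {
        double d = 0.0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }

    /** Finds the best matching unit and pulls it (and its direct neighbors, more weakly) toward x. */
    void trainStep(double[] x, double rate) {
        int bmu = 0;
        for (int i = 1; i < neurons.length; i++) {
            if (dist2(neurons[i], x) < dist2(neurons[bmu], x)) bmu = i;
        }
        for (int i = Math.max(0, bmu - 1); i <= Math.min(neurons.length - 1, bmu + 1); i++) {
            double r = (i == bmu) ? rate : rate / 2; // neighbors move only half as far
            for (int j = 0; j < x.length; j++) {
                neurons[i][j] += r * (x[j] - neurons[i][j]);
            }
        }
    }
}
```

Because neighbors are dragged along with the winner, nearby neurons end up responding to similar inputs, which is exactly the map-forming effect described above.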

start: normalized weight vectors $w_1, \ldots, w_k$ are generated randomly
1. Select a vector $x_j \in X$ randomly.
2. Compute $x_j \cdot w_i$ for $i = 1, \ldots, k$.
3. Select $w_m$ such that $w_m \cdot x_j \geq w_i \cdot x_j$ for all $i = 1, \ldots, k$.
4. Substitute $w_m$ with $w_m + x_j$, normalize, and go back to step 1.

Figure 5.10: Flowchart of the competitive learning algorithm.

Figure 5.11: Illustration of the training of a self-organizing map.
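The loop in Fig. 5.10 can be sketched in Java as follows (an illustrative translation; the number of steps is fixed in advance, as described in the text):

```java
import java.util.Random;

/** Competitive learning: k normalized weight vectors compete for normalized inputs (sketch). */
class CompetitiveLearning {
    static void normalize(double[] v) {
        double len = 0.0;
        for (double c : v) len += c * c;
        len = Math.sqrt(len);
        if (len > 0) for (int i = 0; i < v.length; i++) v[i] /= len;
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    /** Runs the loop from the flowchart: pick x, find the BMU, move it toward x, renormalize. */
    static double[][] train(double[][] X, double[][] w, int steps, long seed) {
        Random rnd = new Random(seed);
        for (int t = 0; t < steps; t++) {
            double[] x = X[rnd.nextInt(X.length)];
            int m = 0;
            for (int i = 1; i < w.length; i++) {
                if (dot(w[i], x) > dot(w[m], x)) m = i; // best matching unit
            }
            for (int j = 0; j < x.length; j++) w[m][j] += x[j];
            normalize(w[m]); // prevents one unit from winning every competition
        }
        return w;
    }
}
```

On two well-separated normalized inputs, each weight vector is pulled onto one of them while staying unit length, which is the clustering behavior shown in Fig. 5.8b.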

Chapter 6
Image Recognition

Image recognition has been a big part of my project. Getting the information out of the emulator turned out to be a harder task than I first anticipated, and I tried several approaches to solve this problem. The cleanest way to do this would have been by using an XML file. Since that in itself could have been a separate matura paper, I decided to search for something easier to use. My supervisor then suggested OpenCV, a programming library that is widely used for image recognition and processing. What I used and how it works is explained in section 6.1.

6.1 OpenCV

OpenCV is an open-source computer vision and machine learning software library. It was developed to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. There are over 2500 optimized algorithms implemented in the library that can be freely modified. Originally the library was developed only for the programming languages C++ and C. Later it was implemented for Python, and finally Java became one of the languages that can make use of the library. As Java support was only added recently, information on how to use the library is scarce. After some research I found two possible ways to use OpenCV in my program: template matching and difference. [OpenCV (2012)]

Before going into the two methods I used, a basic understanding of how computers see an image is needed. A computer is not able to see as humans do; everything it can process is just numbers. Every pixel in a picture has a specific number associated with it, and a program on our computer visualizes those numbers so that humans are able to see and understand them. An example of how the computer would see a car is shown in Fig. 6.1. In other words, a picture is just a grid or matrix of numbers.

6.1.1 Template matching

This method uses two different pictures: a template and a picture we want the template to match with.
The concept is pretty simple. It takes the template and slides it over the picture. Then, with one of the two main matching methods used for this, it is possible to find the best match. The first method is TM_CCOEFF (here $T'$ and $I'$ denote the mean-shifted template and image patch):

$R(x, y) = \sum_{x', y'} \left( T'(x', y') \cdot I'(x + x', y + y') \right)$

Figure 6.1: The grid of numbers that the computer is able to understand.

In its normalized form, this method gives back a number between $-1$ and $1$: $1$ means that there is a perfect match, $-1$ means that there is a perfect mismatch, and $0$ means that there is no correlation at all. The second method is TM_SQDIFF, which looks like this:

$R(x, y) = \sum_{x', y'} \left( T(x', y') - I(x + x', y + y') \right)^2$

There are only two differences between the first method and this one: instead of a multiplication there is a subtraction, and the whole term is squared. As a result, this method gives back a number between 0 and a very large number, with 0 being a perfect match. Thanks to the square it is easier to identify a really bad match, since the worse the match is, the faster the number grows. [OpenCV (2011)]
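To illustrate the TM_SQDIFF formula without pulling in the OpenCV native library, here is a pure-Java sketch of it over grayscale pixel grids (my own illustration, not OpenCV code):

```java
/** Template matching with the squared-difference measure over grayscale pixel grids (sketch). */
class TemplateMatch {
    /** R(x, y) = sum over the template of (T - I)^2 at offset (x, y). */
    static long sqDiff(int[][] image, int[][] templ, int x, int y) {
        long r = 0;
        for (int ty = 0; ty < templ.length; ty++) {
            for (int tx = 0; tx < templ[0].length; tx++) {
                long d = templ[ty][tx] - image[y + ty][x + tx];
                r += d * d;
            }
        }
        return r;
    }

    /** Slides the template over the image and returns the offset {x, y} with the smallest R. */
    static int[] bestMatch(int[][] image, int[][] templ) {
        int[] best = {0, 0};
        long bestScore = Long.MAX_VALUE;
        for (int y = 0; y + templ.length <= image.length; y++) {
            for (int x = 0; x + templ[0].length <= image[0].length; x++) {
                long r = sqDiff(image, templ, x, y);
                if (r < bestScore) { bestScore = r; best = new int[]{x, y}; }
            }
        }
        return best;
    }
}
```

Wherever the template is contained exactly in the image, R drops to 0 at that offset, which `bestMatch` then returns.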

Figure 6.2: absdiff() used on Mortal Kombat 3.

6.1.2 Difference

The second method that has been used works in a similar but simpler way. It also takes two images and compares them, but it is not actually looking for a specific match; it looks for a difference between the images. The method is called absdiff() and is described as returning the absolute value of the differences between two arrays. What this function returns is an array marking the points that differ. Visualizing it yields an image like the one shown in Fig. 6.2.
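The per-element operation behind absdiff() is simple enough to sketch in plain Java over grayscale pixel grids (my own illustration of the idea, not the OpenCV implementation):

```java
/** Per-element absolute difference of two equally sized grayscale pixel grids (sketch). */
class Diff {
    static int[][] absdiff(int[][] a, int[][] b) {
        int[][] out = new int[a.length][a[0].length];
        for (int y = 0; y < a.length; y++) {
            for (int x = 0; x < a[0].length; x++) {
                out[y][x] = Math.abs(a[y][x] - b[y][x]); // 0 where nothing changed
            }
        }
        return out;
    }
}
```

Pixels that stayed the same become 0 (black); anything that moved shows up twice, once at its old and once at its new position, which is exactly the doubled-character effect visible in Fig. 6.2.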

Chapter 7
Own project

7.1 Mortal Kombat 3

Mortal Kombat 3 is the third installment of the popular series Mortal Kombat. It is a fighting game that lets players choose from a cast of different characters and fight against an opponent. The game was first published in arcades in 1995 but was soon ported to three home consoles: the Genesis, the Super NES and the Sony PlayStation. Only the PlayStation version was identical to the arcade version, though, due to a deal that Sony made with Midway, the developers of the game. Because of that, the versions on each platform, even though they carry the same name, vary a little. The game itself is best known for its brutality, especially its finishing moves. The complexity of the game is also admired by a lot of people around the world.

7.2 Observation of the actions

The first thing that had to be figured out was how the information could be extracted from the emulator, a Super NES emulator. Template matching, which was explained in section 6.1.1, was the first method I tried. A template of one of the characters, Shang Tsung, was created and then searched for in a screenshot of the game. Locating the character was no problem while most of its body was visible. The problems came up in situations where the character was, for example, doing a roll, or where its body was obstructed by the enemy character: the program was not able to find the character and seemed to choose a random location on the screen. One possible way to solve this would have been to make a template of each possible move the character could do and work with those. The downside of this method is that the characters would have needed to be predefined, which would take away the necessity of a neural network. Because of this the second method was tried.

Figure 7.1: Template matching used on Mortal Kombat 3.

The second method was a much better fit for a neural network, since it only gives back an array of numbers that indicates that something changed. Nothing is predefined, so the neural network needs to find out by itself what everything means. It does have one weakness: it doesn't just show what is new in the second image, it shows the absolute difference. This is also visible in the previously shown Fig. 6.2, where there seem to be four characters in the picture. This happens because the area where a character originally stood has also changed, so that section is seen as a difference between the images. It is definitely a weakness of the method, but it shouldn't affect the neural network to a big extent.

7.3 Losing control over the computer

The second thing that was needed was a way for the neural network to interact with the emulator. That is done using the so-called Robot class in Java, with which a Java program is able to control the computer. Thanks to this class it is possible to start everything without anyone interacting with the computer, which helped to create a controlled environment without any human interaction. The downside is that any control over the computer is lost for a while, since every time someone tries to move the mouse or press a button, the program overwrites the action. Because of this an emergency stop was implemented: when the Esc key is pressed, the program stops interacting with the computer.

7.4 Two different approaches

My initial plan for my project was to use an unsupervised learning algorithm. For this I used the Kohonen algorithm that was explained earlier.

7.4.1 SOM

The thought behind the use of the SOM algorithm was that it might be able to cluster the single moves of the opponent, so that the network could react to each in a specific way. The problem

behind this concept is that a person would need to intervene far too much to actually be able to call it unsupervised learning. The method was able to cluster all the information given to the neural network by looking at the similarities between the screenshots that had been taken, and in the end the program did have several clusters designated to each move of the opponent. One issue was that the number of input neurons would need to be adjusted each time the opponent changed, so that there would not be any dead neurons. The difficult part for the SOM algorithm was finding the right output: all the algorithm can do in this scenario is cluster the moves of the opponent, but knowing how to react is another matter. One idea was to use the health bar as a desired output: the neural network would then try moves out and find a way to either hurt the opponent or avoid getting hurt itself. It was probably a possible way to solve the problem, but since a desired output had been introduced, the neural network had turned into a supervised one. So it was decided to go all the way and implement supervised learning.

7.4.2 Supervised neural network

The neural network was split into two parts. The input layer was unsupervised: it clustered the input data so that each enemy movement would be designated to one neuron. The output layer was supervised. First, data had to be gathered for the network. To do this, a program was created that would supervise a game of Mortal Kombat 3, using two classes that come with Java: the Robot class and the KeyListener. The Robot class had already been used to control the computer for the emulator, as explained in section 7.3; it was also used to take screenshots while playing the game. The KeyListener, as the name suggests, records the keys the player presses.
With this, the data needed for the network could be gathered by playing the game for several hours. The player does not need to be the best player in the world, since a rough idea of what the network needs to do should suffice. With the gathered data, information could now be fed into the network. The input layer clusters all the data into a specific number of clusters. These then feed into a hidden layer with the same number of neurons; this hidden layer acts as the input layer for the supervised part of the neural network. The supervised learning algorithm was then applied using the clusters and the keys that had been pressed in each situation. The adjustments needed to be small to ensure that they do not constantly overwrite each other.

Structure of the neural network

As already implied, the neural network has an input layer, a hidden layer and an output layer. Every layer has a specific number of neurons that needs to be set before facing a specific opponent. The input layer and the hidden layer have exactly the same number of neurons, since the hidden layer just receives the clusters of the input layer and acts as the input layer for the supervised learning algorithm. The output layer always has the same number of neurons, since there is only a fixed set of keys to press. Figure 7.2 shows the structure of the neural network used in the project and the connections between each neuron.
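The two-part structure just described can be made concrete with a small sketch. It is a simplification under my own assumptions (squared-distance winner selection, prototypes seeded by the caller, a simple delta rule on the output weights), not the thesis code:

```java
// Minimal sketch of the hybrid network: an unsupervised input layer that
// assigns each screen-difference vector to a cluster, a one-hot hidden layer
// of the same size, and a supervised output layer trained from recorded keys.
public class HybridNetwork {
    final double[][] prototypes;    // input layer: one prototype per cluster
    final double[][] outputWeights; // hidden -> output connections
    final double learningRate = 0.1; // small, so updates don't overwrite each other

    HybridNetwork(double[][] initialPrototypes, int outputs) {
        prototypes = initialPrototypes; // e.g. seeded from the first screenshots
        outputWeights = new double[outputs][initialPrototypes.length];
    }

    // Unsupervised part: the winning cluster is the nearest prototype, which
    // is optionally pulled slightly towards the input (competitive learning).
    int cluster(double[] input, boolean learn) {
        int winner = 0;
        double best = Double.MAX_VALUE;
        for (int i = 0; i < prototypes.length; i++) {
            double d = 0;
            for (int j = 0; j < input.length; j++)
                d += (input[j] - prototypes[i][j]) * (input[j] - prototypes[i][j]);
            if (d < best) { best = d; winner = i; }
        }
        if (learn)
            for (int j = 0; j < input.length; j++)
                prototypes[winner][j] += learningRate * (input[j] - prototypes[winner][j]);
        return winner;
    }

    // Supervised part: nudge the weights from the winning hidden neuron
    // towards the keys that were actually pressed in this situation.
    void train(double[] input, boolean[] keysPressed) {
        int h = cluster(input, true);
        for (int o = 0; o < outputWeights.length; o++) {
            double target = keysPressed[o] ? 1 : 0;
            outputWeights[o][h] += learningRate * (target - outputWeights[o][h]);
        }
    }

    // Playing: pick the output (key) with the strongest connection
    // to the winning hidden neuron.
    int react(double[] input) {
        int h = cluster(input, false);
        int bestKey = 0;
        for (int o = 1; o < outputWeights.length; o++)
            if (outputWeights[o][h] > outputWeights[bestKey][h]) bestKey = o;
        return bestKey;
    }
}
```

Training on recorded (screenshot difference, pressed keys) pairs makes each cluster of opponent behaviour vote for the key that was most often pressed in that situation, which is exactly why the resulting player is predictable: one cluster always maps to the same key.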

Figure 7.2: Structure of the neural network that was used.

The results of the network

The program was to some extent able to compete with its opponent, but it had difficulties in certain scenarios. Especially when the whole environment changed drastically, for example when the players moved quickly in one direction or overlapped each other, the network was not able to react accordingly. Another shortcoming was that the program always reacted in the same way to certain moves of the opponent, which made it very predictable. Against the computer this was not a problem, but a real human could exploit this weakness.
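These failure cases are easier to understand with the difference-based input from section 7.2 in mind: when everything on screen moves, almost every pixel differs. A minimal sketch of that preprocessing (the grayscale conversion and all names are my own illustrative assumptions, not the project's code):

```java
import java.awt.image.BufferedImage;

// Hypothetical sketch of the absolute-difference preprocessing: two
// screenshots are compared pixel by pixel and the result is flattened into
// the array of numbers fed to the network. Note that a region a character
// has left shows up as a difference too, the weakness noted in section 7.2.
public class ScreenDifference {

    // Per-pixel absolute difference of the grayscale intensities of two
    // equally sized images, flattened row by row.
    static int[] difference(BufferedImage a, BufferedImage b) {
        int w = a.getWidth(), h = a.getHeight();
        int[] diff = new int[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int ga = gray(a.getRGB(x, y));
                int gb = gray(b.getRGB(x, y));
                diff[y * w + x] = Math.abs(ga - gb);
            }
        }
        return diff;
    }

    // Average of the red, green and blue channels of a packed RGB pixel.
    static int gray(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        return (r + g + b) / 3;
    }
}
```

A fully static frame yields an all-zero array, while fast movement makes most entries non-zero, which matches the network's difficulty with drastic scene changes.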

Chapter 8

Discussion

Even though the program had some weaknesses, it still achieved some victories against its opponents. By feeding it a large amount of information about the game, it was possible to train a neural network that could compete with mediocre players. The program was also able to process changes on the screen and act accordingly. Two big weaknesses remain that could still be improved considerably. The first is that the program only learns from the input data that has been fed to it; during the game it cannot confirm whether what it is doing is right or wrong. Solving this would require a different approach, e.g. neuroevolution. The second weakness is the image processing. There is still a lot of potential to improve it, so that the program no longer has problems processing the game.

Bibliography

[Rojas, R. (1996)] Neural Networks: A Systematic Introduction. Berlin: Springer-Verlag.

[Soares, F. M. / Souza, A. M. F. (2016)] Neural Network Programming with Java. Birmingham: Packt Publishing.

[Berger, C. (2016)] Perceptrons: the most basic form of a neural network. Retrieved October 10, 2016.

[OpenCV (2012)] OpenCV: Open Source Computer Vision. Retrieved October 10, 2016.

[OpenCV (2011)] OpenCV: Template Matching. Retrieved October 12, 2016, template matching/template matching.html

I hereby declare that this matura thesis is my original work and that I did not use any unauthorized help to create it. All information from sources, aids and webpages has been depicted truthfully and cited.


WHAT ARE VIRTUAL MANIPULATIVES? by SCOTT PIERSON AA, Community College of the Air Force, 1992 BS, Eastern Connecticut State University, 2010 A VIRTUAL MANIPULATIVES PROJECT SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR TECHNOLOGY

More information

Measurement. When Smaller Is Better. Activity:

Measurement. When Smaller Is Better. Activity: Measurement Activity: TEKS: When Smaller Is Better (6.8) Measurement. The student solves application problems involving estimation and measurement of length, area, time, temperature, volume, weight, and

More information

Deep search. Enhancing a search bar using machine learning. Ilgün Ilgün & Cedric Reichenbach

Deep search. Enhancing a search bar using machine learning. Ilgün Ilgün & Cedric Reichenbach #BaselOne7 Deep search Enhancing a search bar using machine learning Ilgün Ilgün & Cedric Reichenbach We are not researchers Outline I. Periscope: A search tool II. Goals III. Deep learning IV. Applying

More information

Iterative Cross-Training: An Algorithm for Learning from Unlabeled Web Pages

Iterative Cross-Training: An Algorithm for Learning from Unlabeled Web Pages Iterative Cross-Training: An Algorithm for Learning from Unlabeled Web Pages Nuanwan Soonthornphisaj 1 and Boonserm Kijsirikul 2 Machine Intelligence and Knowledge Discovery Laboratory Department of Computer

More information

Simple Random Sample (SRS) & Voluntary Response Sample: Examples: A Voluntary Response Sample: Examples: Systematic Sample Best Used When

Simple Random Sample (SRS) & Voluntary Response Sample: Examples: A Voluntary Response Sample: Examples: Systematic Sample Best Used When Simple Random Sample (SRS) & Voluntary Response Sample: In statistics, a simple random sample is a group of people who have been chosen at random from the general population. A simple random sample is

More information

Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model

Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model Unsupervised Learning of Word Semantic Embedding using the Deep Structured Semantic Model Xinying Song, Xiaodong He, Jianfeng Gao, Li Deng Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A.

More information

Calculators in a Middle School Mathematics Classroom: Helpful or Harmful?

Calculators in a Middle School Mathematics Classroom: Helpful or Harmful? University of Nebraska - Lincoln DigitalCommons@University of Nebraska - Lincoln Action Research Projects Math in the Middle Institute Partnership 7-2008 Calculators in a Middle School Mathematics Classroom:

More information

Introduction to Causal Inference. Problem Set 1. Required Problems

Introduction to Causal Inference. Problem Set 1. Required Problems Introduction to Causal Inference Problem Set 1 Professor: Teppei Yamamoto Due Friday, July 15 (at beginning of class) Only the required problems are due on the above date. The optional problems will not

More information

Word Segmentation of Off-line Handwritten Documents

Word Segmentation of Off-line Handwritten Documents Word Segmentation of Off-line Handwritten Documents Chen Huang and Sargur N. Srihari {chuang5, srihari}@cedar.buffalo.edu Center of Excellence for Document Analysis and Recognition (CEDAR), Department

More information

What's My Value? Using "Manipulatives" and Writing to Explain Place Value. by Amanda Donovan, 2016 CTI Fellow David Cox Road Elementary School

What's My Value? Using Manipulatives and Writing to Explain Place Value. by Amanda Donovan, 2016 CTI Fellow David Cox Road Elementary School What's My Value? Using "Manipulatives" and Writing to Explain Place Value by Amanda Donovan, 2016 CTI Fellow David Cox Road Elementary School This curriculum unit is recommended for: Second and Third Grade

More information

DegreeWorks Advisor Reference Guide

DegreeWorks Advisor Reference Guide DegreeWorks Advisor Reference Guide Table of Contents 1. DegreeWorks Basics... 2 Overview... 2 Application Features... 3 Getting Started... 4 DegreeWorks Basics FAQs... 10 2. What-If Audits... 12 Overview...

More information

*** * * * COUNCIL * * CONSEIL OFEUROPE * * * DE L'EUROPE. Proceedings of the 9th Symposium on Legal Data Processing in Europe

*** * * * COUNCIL * * CONSEIL OFEUROPE * * * DE L'EUROPE. Proceedings of the 9th Symposium on Legal Data Processing in Europe *** * * * COUNCIL * * CONSEIL OFEUROPE * * * DE L'EUROPE Proceedings of the 9th Symposium on Legal Data Processing in Europe Bonn, 10-12 October 1989 Systems based on artificial intelligence in the legal

More information

Mathematics process categories

Mathematics process categories Mathematics process categories All of the UK curricula define multiple categories of mathematical proficiency that require students to be able to use and apply mathematics, beyond simple recall of facts

More information