UNIVERSITY OF OSLO. Faculty of Mathematics and Natural Sciences


Page 1 of 7

UNIVERSITY OF OSLO
Faculty of Mathematics and Natural Sciences

Exam in INF3490/4490 Biologically Inspired Computing
Day of exam: December 9th, 2015
Exam hours: 09:00-13:00
This examination paper consists of 7 pages.
Appendices: 1
Permitted materials: None

Make sure that your copy of this examination paper is complete before answering. The exam text consists of problems 1-35 (multiple choice questions), to be answered on the form that is enclosed in the appendix, and problems 36-39, which are answered on the usual sheets (in English or Norwegian). Problems 1-35 have a total weight of 70%, while problems 36-39 have a weight of 30%.

About problems 1-35: Each problem consists of a topic in the left column and a number of statements, each indicated by a capital letter. Problems are answered by marking true statements with a clear cross (X) in the corresponding row and column of the attached form, and leaving false statements unmarked. Each problem has a variable number of true statements, but there is always at least one true and one false statement for each problem. 0.5 points are given for each marked true statement and for each false statement left unmarked, resulting in a score ranging from 0 to 60. You can use the right column of the text as a draft. The form in the appendix is the one to be handed in (remember to include your candidate number).

Problem 1: Biologically inspired computation is appropriate for
- Optimization
- Modelling
- Safety critical systems
- Simulation

Problem 2: Exhaustive search
- Not guaranteed to find the optimal solution
- Tests all possible solutions and picks the best
- Relevant for continuous problems by using approximation
- Most relevant for large search problems

Page 2 of 7

Problem 3: Which of the following are discrete optimization problems?
- Travelling salesman problem
- Robot control
- Chess playing program
- Prediction of stock prices

Problem 4: Gradient ascent
- The direction of the move is towards a larger value
- Relevant for discrete optimization
- Is not guaranteed to find the optimal solution
- The ascent continues until the gradient is very small

Problem 5: Exploration in search is
- Concerned with improving the current best solution by local search
- Combined with exploitation in evolutionary algorithms
- Often resulting in getting stuck in local optima
- Concerned with global search

Problem 6: What controls the search in simulated annealing?
- Time
- Temperature
- Initial solution
- Final solution

Problem 7: Evolutionary algorithm: Initialization
- Individuals are normally generated randomly
- Is concerned with generating candidate solutions
- Mutation of candidates normally also takes place during the initialization
- Heuristics for generating candidates can be applied

Problem 8: Evolutionary algorithm: Variation operators
- Is a selection operator
- Act on population level
- Act on individual level
- Are crossover and mutation

Problem 9: Evolutionary algorithm: Recombination
- Also known as crossover
- Combines elements of two or more genotypes
- Also known as mutation
- Also known as representation
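Problem 6 above turns on the role of the temperature parameter. As a minimal illustrative sketch (the toy objective, cooling rate and all names below are my own choices, not part of the exam), the temperature decides how often worsening moves are accepted, and cooling it gradually shifts the search from exploration to exploitation:

```python
import math
import random

def simulated_annealing(objective, start, neighbour,
                        t_start=10.0, t_end=1e-3, alpha=0.95):
    """Minimise `objective` from `start`; the temperature t controls
    how often a worsening candidate is accepted."""
    current, current_cost = start, objective(start)
    t = t_start
    while t > t_end:
        candidate = neighbour(current)
        candidate_cost = objective(candidate)
        delta = candidate_cost - current_cost
        # Always accept improvements; accept a worse candidate with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, candidate_cost
        t *= alpha  # geometric cooling schedule
    return current, current_cost

# Toy usage: minimise f(x) = x^2 starting from x = 5.
random.seed(1)
best, cost = simulated_annealing(lambda x: x * x, start=5.0,
                                 neighbour=lambda x: x + random.uniform(-1, 1))
```

Note that neither time, the initial solution nor the final solution steers the acceptance test; only the temperature does.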

Page 3 of 7

Problem 10: Evolutionary algorithm: Survivor selection
- Is often stochastic
- Also known as replacement
- Can be fitness based
- Can be age based

Problem 11: Evolutionary algorithm: Termination condition
- Several termination criteria can be combined
- Determines when to compute the fitness for a population
- Is checked in every generation
- Should be avoided to get faster evolution

Problem 12: Permutation representation
- Is used for problems where each variable can only appear once
- Bit-flip mutation is applicable
- A mutation operator that swaps at least two values is applicable
- Is used for problems where each variable can appear multiple times

Problem 13: Tree representation
- Is used in Genetic Programming
- Mutation results in replacing a randomly chosen subtree by a randomly generated tree
- Not suited for representing computer programs
- Is used in Genetic Algorithms

Problem 14: Selection pressure
- Should be high to avoid premature convergence
- The higher the pressure, the harder it is for the fittest solutions to survive
- Fitness-proportionate selection avoids selection pressure
- Rank-based selection can adjust and control the pressure

Problem 15: Rank based selection
- Uses relative rather than absolute fitness
- Uses absolute rather than relative fitness
- Results in less control of the selection pressure than fitness-proportionate selection
- Ranking can be either linear or non-linear

Problem 16: Multimodality
- In crowding, offspring compete with their nearest parent
- In fitness sharing, the fitness decreases if there are many candidates in a niche
- The problem has only one locally optimal solution
- Periodic migration is not relevant in the island model
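The swap operator of Problem 12 is easy to illustrate. A minimal sketch (the function name and example tour are illustrative): swapping two positions preserves the permutation property, whereas bit-flip mutation would duplicate or lose values:

```python
import random

def swap_mutation(perm, rng=random):
    """Swap two distinct positions of a permutation: every value
    still appears exactly once afterwards."""
    child = list(perm)
    i, j = rng.sample(range(len(child)), 2)  # two distinct indices
    child[i], child[j] = child[j], child[i]
    return child

tour = [0, 1, 2, 3, 4]
mutated = swap_mutation(tour, random.Random(0))
print(sorted(mutated) == sorted(tour))  # → True: still a valid permutation
```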

Page 4 of 7

Problem 17: Simple Genetic Algorithm (GA)
- Children compete with parents in survival selection
- Both crossover and mutation are applied in each generation
- The whole population is replaced with the resulting offspring
- Uses real-valued representation

Problem 18: Evolution Strategies (ES)
- (µ,λ): Select survivors among parents and offspring
- (µ+λ): Select survivors among parents and offspring
- (µ-λ): Select survivors among offspring only
- (µ:λ): Select survivors among offspring only

Problem 19: What is most important to be concerned with in the evolution of repetitive problems?
- Do multiple runs until a good solution is found
- Execute one run until the solution is good enough
- Get a reasonably good solution every time
- Get a very good result just once

Problem 20: What are normally the two best measurement units for an evolutionary algorithm?
- Number of evaluations
- Elapsed time
- CPU time
- Number of generations

Problem 21: Multiobjective optimisation problems (MOPs)
- The travelling salesman problem is an example of a MOP
- Concurrent optimisation of n possibly conflicting objectives
- The Pareto front represents the best solutions found
- The Pareto front consists of dominated solutions

Problem 22: Learning in neural networks
- Learning takes place in the neurons
- An error is computed on axon outputs in the human brain
- Learning takes place in the connections between neurons
- Weights in a perceptron represent the strengths of synapses

Problem 23: Supervised learning
- Desired outputs are not included
- Desired outputs are included
- The error between desired outputs and actual outputs is computed during training
- The multi-layer perceptron is trained by supervised learning

Page 5 of 7

Problem 24: Artificial neural networks
- Are trained by adjusting the network size
- Are trained by adjusting weights
- The weights are either all positive or all negative
- The learning rate controls the amount of weight change

Problem 25: Why use a Multi Layer Perceptron instead of a single layer perceptron?
- Faster learning
- Easier programming
- Can solve more complex problems
- Can learn multiple decision boundaries

Problem 26: When can the weights be adjusted in a multilayer perceptron?
- In the forward pass
- In the backward pass
- In both forward and backward passes
- After computing output values of each training vector

Problem 27: The activation function in a multilayer perceptron
- Does thresholding to 0 or 1
- Is used to compute the output value of a node
- Is used for initialization of the network
- Makes it possible to train non-linear decision boundaries

Problem 28: Cartesian Genetic Programming
- Is more restricted than general Genetic Programming
- In evolving circuits, the genes determine the function and inputs of each node
- The levels-back parameter decides the number of columns in the node array
- The problem of bloat is larger than for general Genetic Programming

Problem 29: Swarm intelligence
- Global behaviour appears as a result of centralized control
- In Particle Swarm Optimization, velocity and position of particles are updated
- Communication through the environment is called stigmergy
- The probability of choosing a new edge in ant colony optimization is proportional to the pheromone level of the edge

Page 6 of 7

Problem 30: Support vector machines
- Only data vectors defining the margins are needed to represent the support vectors
- Can only classify linearly separable data
- Map inputs into a higher-dimensional space
- Margins can be increased by using soft margins

Problem 31: Ensemble learning
- Multiple classifiers are trained to be slightly different
- Only the best classifier is applied after training
- Training vectors can be assigned weights during training
- All training vectors available should be used for training each classifier

Problem 32: Principal component analysis
- Performs mapping to higher dimensions
- Can be applied for feature extraction
- Components represent the directions along which the data has the most variation
- Is a non-linear transformation

Problem 33: Unsupervised learning
- Can be used for training with data sets containing only inputs
- No specific error function is used for training
- Self organizing maps increase the dimensions in the data
- A multi-layer perceptron can be trained with unsupervised learning

Problem 34: K-means clustering
- Need to know the number of clusters in advance
- Need to know which cluster a data point belongs to
- Each cluster center is moved most in the beginning
- The method always results in the global optimal solution

Problem 35: Reinforcement learning
- The algorithm is told when the answer is wrong, and how to correct it
- Is training using rewards
- A policy defines how actions are chosen
- A discount factor is used to discount future rewards
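Several of Problem 34's statements can be checked on a toy example. A minimal 1-D k-means sketch (all names and the data below are illustrative, not from the exam): the number of clusters k must be given up front, and the centre updates are largest in the first iterations, shrinking as the assignments settle:

```python
import random

def kmeans(points, k, iters=10, seed=0):
    """Plain k-means on 1-D data with k chosen in advance."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Update step: move each centre to the mean of its cluster.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Two well-separated 1-D blobs: the centres settle near 0.05 and 10.0.
data = [0.1, -0.2, 0.3, 0.0, 9.8, 10.1, 10.2, 9.9]
print(kmeans(data, k=2))
```

Because the update step only moves centres to local cluster means, a bad initialisation can leave the method in a local optimum, which is exactly why the "always results in the global optimal solution" statement is a trap.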

Page 7 of 7

Problem 36 (8%)
a) Briefly explain the evolutionary algorithm terms chromosome, gene, locus and allele by including a figure of a chromosome.
b) Explain briefly what a genotype and a phenotype are, and give an example of each of them.

Problem 37 (5%)
In a population of three individuals, the individuals have fitness 2, 3 and 5, respectively. What is the probability of selecting each of them when using a roulette wheel?

Answer: Total fitness = 2 + 3 + 5 = 10; thus the selection probability of each individual is its fitness divided by 10, giving 0.2, 0.3 and 0.5.
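The worked answer to Problem 37 can be sketched in code (the helper names below are illustrative): each individual's slice of the wheel is its fitness divided by the total fitness, and a spin picks the individual whose cumulative slice contains the random point:

```python
import random

def roulette_probabilities(fitnesses):
    """Fitness-proportionate selection probabilities."""
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

def roulette_select(fitnesses, rng=random):
    """Spin the wheel once; return the index of the chosen individual."""
    total = sum(fitnesses)
    r = rng.uniform(0, total)
    cumulative = 0.0
    for i, f in enumerate(fitnesses):
        cumulative += f
        if r <= cumulative:
            return i
    return len(fitnesses) - 1  # guard against floating-point edge cases

print(roulette_probabilities([2, 3, 5]))  # → [0.2, 0.3, 0.5]
```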

Page 8 of 7

Problem 38 (9%)
a) Show how the following multi-layer perceptron realizes a XOR function by computing the output of each node and putting the results into a table. Each perceptron accepts inputs being 0 or 1 and contains a threshold activation function.

Answer (inputs A and B, hidden nodes C and D, output node E; BT: before threshold, AT: after threshold):

A B | C (BT)  | D (BT)  | C (AT) | D (AT) | E (BT)   | E (AT)
0 0 | -0.5    | -1.5    | 0      | 0      | -0.5     | 0
0 1 | 1-0.5   | 1-1.5   | 1      | 0      | 1-0.5    | 1
1 0 | 1-0.5   | 1-1.5   | 1      | 0      | 1-0.5    | 1
1 1 | 2-0.5   | 2-1.5   | 1      | 1      | 1-1-0.5  | 0

b) What values should the weights in the output layer have to make an inverted XOR function (XNOR)?

Answer: All output layer weights must be negated (including the one to the bias): Weight DE = 1, Weight CE = -1 and Weight bias = 0.5.

Problem 39 (8%)
List and explain, with one sentence each, up to four of the ethical recommendations for commercial robots that the EURON Roboethics Atelier came up with.

Answer:
- Safety: There must be mechanisms (or opportunities for an operator) to control and limit a robot's autonomy.
- Security: There must be a password or other keys to avoid inappropriate and illegal use of a robot.
- Traceability: Similarly to aircraft, robots should have a "black box" to record and document their own behaviour.
- Identifiability: Robots should have serial numbers and registration numbers, similar to cars.
- Privacy: Software and hardware should be used to encrypt and password-protect sensitive data that the robot needs to save.
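The table in Problem 38a can be verified mechanically. A sketch, assuming hidden nodes C (bias -0.5) and D (bias -1.5) and output node E as read off the answer table; the original network figure is not reproduced here, so these labels and weights are inferred from the table rather than quoted from it:

```python
def step(x):
    """Threshold activation: 1 if the weighted sum is positive, else 0."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    """Two-layer threshold network realizing XOR."""
    c = step(a + b - 0.5)   # OR-like hidden node, bias -0.5
    d = step(a + b - 1.5)   # AND-like hidden node, bias -1.5
    return step(c - d - 0.5)  # output: C AND NOT D, i.e. XOR

def xnor_net(a, b):
    """Problem 38b: negate all output-layer weights, including the bias."""
    c = step(a + b - 0.5)
    d = step(a + b - 1.5)
    return step(-c + d + 0.5)

# Print the truth table rows: A, B, XOR, XNOR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b), xnor_net(a, b))
```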

Appendix, page 9
INF3490/INF4490: Answers to problems 1-35 for candidate no.:
Problem 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35

Appendix, page 10
INF3490/INF4490: Answers to problems 1-35 for candidate no.:
Problem
1 Ο Ο Ο
2 Ο Ο
3 Ο Ο
4 Ο Ο Ο
5 Ο Ο
6 Ο
7 Ο Ο Ο
8 Ο Ο
9 Ο Ο
10 Ο Ο Ο
11 Ο Ο
12 Ο Ο
13 Ο Ο
14 Ο
15 Ο Ο
16 Ο Ο
17 Ο Ο
18 Ο
19 Ο
20 Ο Ο
21 Ο Ο
22 Ο Ο
23 Ο Ο Ο
24 Ο Ο
25 Ο Ο
26 Ο Ο
27 Ο Ο
28 Ο Ο
29 Ο Ο Ο
30 Ο Ο Ο
31 Ο Ο
32 Ο Ο
33 Ο Ο
34 Ο Ο
35 Ο Ο Ο