1. Neural networks are also referred to as (multiple answers)
A) Neurocomputers B) Connectionist networks C) Parallel distributed processors D) ANNs

2. The property that permits a developing nervous system to adapt to its surrounding environment is
A) Nonlinearity B) Plasticity C) Interaction D) Transparency

3. At the synaptic stages, that is from receptors to bipolar cells and from bipolar cells to ganglion cells, the specialized laterally connected neurons are called, respectively,
A) Amacrine & horizontal cells B) Horizontal & ganglion cells C) Vertical & amacrine cells D) Horizontal & amacrine cells

4. The energetic efficiency of the brain is approximately
A) 10^-16 J per operation per second B) 10^-6 J per operation per second C) 10^-16 J per second D) 10^-16 J per operation E) 10^-6 J per operation

5. The axon of a neuron is characterized by
A) Low electrical resistance & very large capacitance B) High electrical resistance & large capacitance C) High electrical resistance & low capacitance D) Low electrical resistance & low capacitance

6. The value of the Heaviside function for v < 0 is
A) 1 B) 0 C) -1 D) v

7. The sigmoid function is
A) Linear B) Nonlinear C) Piecewise linear D) A combination of linear & nonlinear

8. Fan-in is
A) Synaptic convergence B) Synaptic divergence C) Fan-out D) Transmittance

9. In a feedback system, AB is referred to as the
A) Closed-loop operator B) Open-loop operator C) Time delay D) Architectural operator

10. A 10-4-4-3-2 network has how many hidden layers?
A) 4 B) 5 C) 3 D) 2

11. Items to be categorized as separate classes should be given widely ______ representations in the network.
A) Different B) Random C) Complex D) Same

12. The founder of artificial neural networks is
A) McCulloch B) Rosenblatt C) Hebb D) Gabor

13. Vapnik & coworkers invented ______ in 1990.
A) Analog VLSI and neural systems B) Reinforcement learning C) Sigmoid belief networks D) Support vector machines

14. A neuron j receives inputs from four other neurons whose activity levels are 10, -20, 4, and 2. The respective synaptic weights of neuron j are 0.8, 0.2, -1.0, and 0.9. Calculate the output of the neuron when it is linear, assuming that the applied bias is zero.
A) 1 B) 1.8 C) -1.8 D) 0
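A quick check of question 14, assuming the standard linear-combiner model y = sum_j w_j x_j with zero bias:

    y = (0.8)(10) + (0.2)(-20) + (-1.0)(4) + (0.9)(2)
      = 8 - 4 - 4 + 1.8
      = 1.8

which matches option B.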

15. The Widrow-Hoff rule is
A) Error-correction learning B) Boltzmann learning C) Hebbian learning D) Competitive learning

16. Δw_kj(n) = η y_k(n) x_j(n) is the weight-update formula according to the
A) Error-correction learning B) Boltzmann learning C) Hebbian hypothesis D) Competitive learning E) Covariance hypothesis
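Question 16's update multiplies presynaptic and postsynaptic activities by a learning rate η. A minimal NumPy sketch of such an activity-product update (the function name hebbian_step and the toy sizes are illustrative, not from the source):

    import numpy as np

    def hebbian_step(w, x, y, eta=0.01):
        # delta_w_kj = eta * y_k * x_j, i.e. an outer product of activities.
        return w + eta * np.outer(y, x)

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(2, 3))   # 2 postsynaptic neurons, 3 presynaptic inputs
    x = np.array([1.0, 0.5, -0.2])           # presynaptic activities
    y = w @ x                                # linear postsynaptic activities
    w = hebbian_step(w, x, y)                # correlated activity strengthens the weights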

17. Reinforcement learning is closely related to
A) Parallel programming B) Branch & bound C) Greedy programming D) Dynamic programming E) Divide & conquer

18. Neurodynamic programming is
A) Learning without a teacher B) Learning with a teacher C) Supervised learning D) Reinforcement learning

For questions 19-23, associate each keyword with one of the learning tasks
A) Pattern association B) Pattern recognition C) Function approximation D) Control E) Filtering

19. Classification
20. Smoothing
21. Memorized pattern
22. MIMO
23. Jacobian matrix

24. Weight, height, age, and number of teeth are chosen as features to determine the wool yield of a flock of sheep. This yields a feature space of
A) 3-D B) 4-D C) 5-D D) 16-D

25. What is a hyperplane?
A) A fast jet B) A planar (flat) surface in high-dimensional space C) Any high-dimensional surface

26. Why are linearly separable problems of interest to neural network researchers?
A) Because they are the only class of problems that a network can solve successfully
B) Because they are the only mathematical functions that are continuous
C) Because they are the only mathematical functions you can draw
D) Because they are the only class of problems a perceptron can solve successfully

27. A multi-layer perceptron differs from the single-layer perceptron in that it has more layers of perceptron-like units.
A) True - there are no other differences
B) True - but there are other differences as well, which are at least as important
C) False - "layers" refers to a mathematical effect and is not used in its usual sense here

28. How can network learning be explained in terms of the error function?
A) It can't - it's irrelevant
B) The network learns by altering its weights to reduce the error each time
C) The network reduces the error by altering the target patterns each time

29. The sigmoid function is
A) S-shaped B) Z-shaped C) A step function D) U-shaped

30. What is a statistically optimal classifier?
A) A classifier which calculates the nearest neighbour for a given test example
B) A classifier that gives the lowest probability of making classification errors
C) A classifier that minimizes the sum squared error on the training set

31. Which algorithm can be adapted to learning both with and without a teacher?
A) Nearest neighbour B) Boltzmann learning rule C) k-nearest neighbour rule D) Hebbian learning

32. w(n+1) = w(n) - H^-1(n) g(n) is the update formula for which unconstrained-optimization technique used in adaptive filtering?
A) Steepest descent B) Newton's method C) Gauss-Newton method D) Linear least-squares filter
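Question 32's update subtracts the inverse Hessian times the gradient from the current weights. A minimal sketch, using a quadratic cost purely as an illustration (newton_step, A, and b are assumed names, not from the source):

    import numpy as np

    def newton_step(w, grad, hess):
        # w(n+1) = w(n) - H^-1(n) g(n); solving the linear system avoids an explicit inverse.
        return w - np.linalg.solve(hess, grad)

    # Illustration: quadratic cost E(w) = 0.5 w^T A w - b^T w, gradient A w - b, Hessian A.
    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    w = np.zeros(2)
    w = newton_step(w, A @ w - b, A)   # one step lands on the minimizer A^-1 b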

33. The virtues of the LMS algorithm are (multiple answers)
A) Simplicity B) Model independence C) Optimality in accordance with the minimax criterion D) Slow rate of convergence E) Sensitivity to variations in the eigenstructure

34. Which is true?
A) The perceptron convergence algorithm is nonparametric while the Bayes classifier is parametric
B) The perceptron convergence algorithm is parametric while the Bayes classifier is nonparametric
C) Both the perceptron convergence algorithm and the Bayes classifier are nonparametric
D) Both the perceptron convergence algorithm and the Bayes classifier are parametric

35. Back-propagation is computationally faster in
A) Sequential mode B) Batch mode C) Both D) Neither

36. The techniques for maximizing the information content in the examples provided for training the network are (multiple answers)
A) The use of an example that results in the largest training error B) Randomization C) An emphasizing scheme D) Recursion

37. The target values in the back-propagation algorithm should normally be
A) Fixed B) Highly variable C) Within the range of +ε or -ε of a value D) Within the range of +ε only

38. The initialization of synaptic weights & thresholds for a back-propagation algorithm should be with
A) High values
B) Low values
C) Some values high & some correlated
D) Values somewhere between the low & high extremes

39. Which is not true with respect to the learning rate in a back-propagation algorithm?
A) All neurons should ideally learn at the same rate
B) The learning-rate parameter η should be assigned a higher value in the last layer than in the front layers
C) The learning rate should be inversely proportional to the square root of the number of synaptic connections to the neuron
D) Neurons with many inputs should have a smaller learning rate than neurons with few inputs

40. Subsampling in convolutional networks
A) Reduces the sensitivity of the feature map's output to shifts & distortions
B) Increases the sensitivity of the feature map's output to shifts & distortions
C) Does not affect the sensitivity of the feature map's output to shifts & distortions
D) Does not help the feature map's output with respect to shifts & distortions

41. The area of computer science dealing with neural networks, AI, fuzzy set theory, and regression is called
A) Software computing B) Hard computing C) Soft computing D) Pervasive computing

42. The membership of a set is defined in terms of a membership function that gives each element's participation in the set as a value between 0 and 1, rather than a binary value. This concept is a key feature of
A) Regression & optimization B) Parallel & distributed networks C) Artificial neural networks D) Fuzzy set theory

43. The important problem cited in every model of the neural network is referred to as the
A) XOR problem B) OR problem C) NOR problem D) Logic problem

44. The XOR problem is not solvable using a single perceptron because
A) The outputs cannot be linearly classified into two classes
B) The single perceptron only deals with two outputs
C) The single perceptron does not handle such types of problems
D) The outputs cannot be nonlinearly classified into two classes

45. In reference to the network-pruning technique, which one of these is false?
A) The weights of the network's neurons are reduced by using penalty terms for weak neurons
B) The network can be divided into important & unimportant neurons
C) The weights of some neurons keep increasing while those of others keep decreasing
D) Weight decay & weight elimination are correct forms of complexity regularization

46. Find the true statement:
A) From one iteration to the next, no learning-rate parameter should be allowed to change
B) When the derivative of the cost function with respect to a synaptic weight has the same algebraic sign for several consecutive iterations of the algorithm, the learning-rate parameter for that particular weight should be decreased
C) When the derivative of the cost function with respect to a synaptic weight alternates in algebraic sign for several consecutive iterations of the algorithm, the learning-rate parameter for that particular weight should be increased
D) Every adjustable network parameter of the cost function should have its own individual learning-rate parameter
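For question 46, a minimal sketch of a per-weight, sign-based learning-rate heuristic in the spirit of Jacobs (1988), against which the statements above can be checked (adapt_rates, up, and down are illustrative names, not from the source):

    import numpy as np

    def adapt_rates(rates, grad, prev_grad, up=1.1, down=0.5):
        # Grow a weight's rate while its gradient keeps the same sign;
        # shrink it when the sign alternates between iterations.
        same_sign = np.sign(grad) == np.sign(prev_grad)
        return np.where(same_sign, rates * up, rates * down)

    rates = np.full(3, 0.01)
    prev_grad = np.array([0.4, -0.2, 0.1])
    grad = np.array([0.3, 0.5, 0.2])
    rates = adapt_rates(rates, grad, prev_grad)   # -> [0.011, 0.005, 0.011]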

47. The split parameter r for cross-validation data lies in the range 0 to 1. As per popular studies, the optimal value of r is
A) 0.5 B) 0.1 C) 0.4 D) 0.2

48. The boundary condition for the OR function can be
A) y + x + 1 = 0 B) y + x - 1 = 0 C) y = 0 D) x = 0
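A quick check for question 48, taking the convention that a point is classified as 1 when a*x + b*y + c >= 0 (the helper classifies_or is illustrative, not from the source):

    def classifies_or(a, b, c):
        # True if the half-plane a*x + b*y + c >= 0 reproduces OR on all four binary inputs.
        points = [(0, 0), (0, 1), (1, 0), (1, 1)]
        return all((a * x + b * y + c >= 0) == bool(x or y) for x, y in points)

    print(classifies_or(1, 1, 1))    # x + y + 1 = 0: False, it also fires on (0, 0)
    print(classifies_or(1, 1, -1))   # x + y - 1 = 0: True under the >= convention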