DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
S.R.M INSTITUTE OF SCIENCE AND TECHNOLOGY

SUBJECT : ARTIFICIAL NEURAL NETWORKS
SUB.CODE : CS306
CLASS : III YEAR CSE

QUESTION BANK

UNIT-I

1. Define ANN and neural computing.
2. Distinguish between supervised and unsupervised learning.
3. Draw the basic topologies for (a) nonrecurrent and (b) recurrent networks and distinguish between them.
4. Give some examples of nonrecurrent and recurrent ANNs. Specify the learning law used by each ANN.
5. Define adaptive system and generalization.
6. Mention the characteristics of problems suitable for ANNs.
7. List some applications of ANNs.
8. What are the design parameters of an ANN?
9. List the three classifications of ANNs based on their functions and explain each in brief.
10. Define learning and learning law.
11. Distinguish between learning and training.
12. How can you measure the similarity of two patterns in the input space?
13. A two-layer network is to have four inputs and six outputs. The range of the outputs is to be continuous between 0 and 1. What can you tell about the network architecture? Specifically:
(a) How many neurons are required in each layer?
(b) What are the dimensions of the first-layer and second-layer weight matrices? (Assume the hidden layer has 5 neurons.)
(c) What kinds of transfer functions can be used in each layer?
14. Mention the linear and nonlinear activation functions used in artificial neural networks.

UNIT-I

1. Write the differences between conventional computers and ANNs.
2. Explain in detail how weights are adjusted in the different types of learning laws (both supervised and unsupervised).
3. Write short notes on the following:
(a) Learning rate parameter
(b) Momentum
(c) Stability
(d) Convergence
(e) Generalization
4. (a) Write the advantages and disadvantages of artificial neural networks.
(b) What are the design steps to be followed for applying an ANN to your problem?
5. (a) What are the relevant computational properties of the human brain?
(b) Write short notes on neural approaches to computation.
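As a study aid for Question 13 above, here is a minimal NumPy sketch of that network: four inputs, five hidden neurons, six outputs, and a logistic-sigmoid output layer so each output stays continuous in (0, 1). The random weight values and the tanh hidden layer are illustrative choices only, not the unique answer.

Illustrative sketch (Python):

import numpy as np

def logsig(x):
    # Logistic sigmoid: squashes activations into (0, 1),
    # matching the required continuous output range.
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 4))   # first-layer weight matrix: 5 hidden x 4 inputs
b1 = np.zeros(5)
W2 = rng.standard_normal((6, 5))   # second-layer weight matrix: 6 outputs x 5 hidden
b2 = np.zeros(6)

x = rng.standard_normal(4)         # one 4-dimensional input pattern
h = np.tanh(W1 @ x + b1)           # hidden layer: any differentiable squashing function works
y = logsig(W2 @ h + b2)            # output layer: logsig keeps each output in (0, 1)
print(y.shape)                     # -> (6,)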

UNIT-II

1. Compare a physical (biological) neuron with an artificial neuron.
2. What is meant by weight or connection strength?
3. Draw the model of a single artificial neuron and derive its output.
4. Draw the model of the MP (McCulloch-Pitts) neuron and state its characteristics.
5. What are the two approaches to adding a bias input?
6. Distinguish between linearly separable and nonlinearly separable problems. Give examples.
7. State the perceptron convergence theorem.
8. What is the XOR problem?
9. What is a perceptron? Write the differences between the Single-Layer Perceptron (SLP) and the Multilayer Perceptron (MLP).
10. Define the minimum disturbance principle.
11. Consider a 4-input, 1-output parity detector. The output is 1 if the number of inputs that are 1 is even; otherwise, it is 0. Is this problem linearly separable? Justify your answer.
12. What is the α-LMS algorithm?
13. Draw the ADALINE implementation of the AND and OR functions.

UNIT-II

1. Draw the structure of a biological neuron and explain it in detail.
2. (a) Explain the three basic neurons which are used to develop complex ANNs.
(b) Write the differences between the MP neuron, the WLIC-T neuron and the perceptron.
3. (a) Write short notes on:
(i) The sigmoid squashing function
(ii) Extensions to the sigmoid
(b) Develop simple ANNs to implement the three-input AND, OR and XOR functions using MP neurons.
4. State and prove the perceptron convergence theorem.
5. (a) Draw the architecture of a single-layer perceptron (SLP) and explain its operation. Mention its advantages and disadvantages.
(b) Draw the architecture of a multilayer perceptron (MLP) and explain its operation. Mention its advantages and disadvantages.
6. Explain why the XOR problem cannot be solved by a single-layer perceptron and how it is solved by a multilayer perceptron.
7. Explain ADALINE and MADALINE. List some applications.
8. (a) Distinguish between the perceptron learning law and the LMS learning law.
(b) Give the output of the network given below for the input [1 1 1]^T.
9. (a) Explain the logic functions performed by the networks with MP neurons given below.
(b) Design ANNs using MP neurons to realize the following logic functions, using ±1 for the weights:
s(a1, a2, a3) =
s(a1, a2, a3) =
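For Question 3(b) above, the sketch below shows the three-input AND and OR functions realized with McCulloch-Pitts neurons; the unit weights and the thresholds of 3 and 1 are the standard textbook choices. XOR is deliberately absent: it is not linearly separable, so no single MP neuron computes it and a second layer is needed, as Question 6 above explores.

Illustrative sketch (Python):

import numpy as np

def mp_neuron(x, w, theta):
    # McCulloch-Pitts neuron: fixed weights, hard threshold, binary output.
    return int(np.dot(x, w) >= theta)

# Three-input AND fires only when all inputs are on (threshold 3);
# three-input OR fires when at least one input is on (threshold 1).
w = np.ones(3)
for x in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    print(x, "AND:", mp_neuron(x, w, 3), "OR:", mp_neuron(x, w, 1))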

UNIT-III

1. What is meant by a mapping problem and a mapping network?
2. What is a linear associative network?
3. Distinguish between nearest-neighbour recall and interpolative recall.
4. Mention the desirable properties of a pattern associator.
5. Distinguish between autocorrelator and heterocorrelator structures.
6. Define Hebbian synapse.
7. List some issues that must be considered when designing a feedforward net for a specific application.
8. Draw the overall feedforward-net-based strategy (implementation and training).
9. List the roles of the hidden layers in a multilayer feedforward network.
10. What is the GDR? Write the weight-update equations for the hidden-layer and output-layer weights.
11. Draw the flow chart of the overall GDR procedure.
12. Draw the layered feedforward architecture.
13. Draw the feedforward architecture for an ANN-based compressor.
14. Distinguish between pattern mode and batch mode.
15. What are local minima and global minima?
16. Explain how the network training time and accuracy are influenced by the size of the hidden layer.
17. List some applications of the BPN.
18. What are the two types of signals identified in the back-propagation network?
19. Why are the layers in the Bidirectional Associative Memory called the x and y layers?

UNIT-III

1. Explain Hebbian learning.
2. Draw the architecture of the Back-Propagation Network (BPN) and explain it in detail.
3. Derive the GDR for an MLFF network.
4. (a) Explain the significance of adding momentum to the training procedure.
(b) Write the algorithm of the generalized delta rule (back-propagation algorithm).
5. Draw the architecture of the Bidirectional Associative Memory (BAM) and explain it in detail.
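The sketch below is a compact, pattern-mode version of the GDR weight-update equations asked for in Question 10 above, with the momentum term of Question 4(b), trained on XOR from Unit II. The network shape (4 hidden units), learning rate, momentum factor, epoch count, and random seed are illustrative choices, not values fixed by the syllabus.

Illustrative sketch (Python):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, W2):
    # Biases are folded in as constant +1 inputs to each layer.
    h = np.append(sigmoid(W1 @ np.append(x, 1.0)), 1.0)
    return sigmoid(W2 @ h)

def gdr_step(x, t, W1, W2, state, eta=0.5, alpha=0.9):
    # One pattern-mode GDR step with momentum (alpha) for one hidden layer.
    x = np.append(x, 1.0)
    h = np.append(sigmoid(W1 @ x), 1.0)
    y = sigmoid(W2 @ h)
    delta_o = (t - y) * y * (1.0 - y)                             # output-layer error signal
    delta_h = (W2[:, :-1].T @ delta_o) * h[:-1] * (1.0 - h[:-1])  # hidden-layer error signal
    dW2 = eta * np.outer(delta_o, h) + alpha * state["dW2"]       # output weight update
    dW1 = eta * np.outer(delta_h, x) + alpha * state["dW1"]       # hidden weight update
    state["dW1"], state["dW2"] = dW1, dW2                         # remember updates for momentum
    return W1 + dW1, W2 + dW2

rng = np.random.default_rng(1)
W1 = rng.uniform(-1.0, 1.0, (4, 3))   # 4 hidden neurons, 2 inputs + bias
W2 = rng.uniform(-1.0, 1.0, (1, 5))   # 1 output, 4 hidden + bias
state = {"dW1": 0.0, "dW2": 0.0}
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])
for _ in range(5000):                  # pattern mode: update after every pattern
    for x, t in zip(X, T):
        W1, W2 = gdr_step(x, t, W1, W2, state)
for x in X:
    print(x, forward(x, W1, W2))       # outputs should approach 0, 1, 1, 0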

UNIT-IV

1. What do you mean by weight space in feedforward neural networks?
2. How can you perform a search over the weight space?
3. How will you determine the characteristics of a training algorithm?
4. What are the effects of the error surface on training algorithms?
5. What is premature saturation in the error surface?
6. What is a saddle point in the error surface?
7. What are the two types of transformations that result in symmetries in weight space? Explain in brief.
8. What is meant by generalization?
9. What are ontogenic neural networks? Mention their advantages.
10. Distinguish between constructive and destructive methods for network topology modification.
11. Write the differences between the Cascade Correlation (CC) network and the layered feedforward network.
12. Write the quickprop weight-correction algorithm for the Cascade Correlation network.
13. Define residual output error.
14. Define pruning.
15. Write the applications of the Cascade Correlation network.
16. How will you identify superfluous neurons in the hidden layer?
17. What do you mean by network inversion?
18. Write the differences between heteroassociative memories and interpolative associative memories.
19. Write the differences between autoassociative and heteroassociative memories.

UNIT-IV

1. Explain generalization.
2. What are the major features of the Cascade Correlation network? Draw the architecture of a Cascade Correlation network and explain it in detail.
3. Explain how the size of a feedforward network can be minimized.
4. Explain the stochastic optimization methods for weight determination.
5. (a) Explain the methods for network topology determination.
(b) What are the costs associated with weights, and how are they minimized?
6. Draw the architecture of the Cascade Correlation network and explain it in detail.
7. Explain the method of pruning by weight decay to minimize neural network size.
8. Explain in detail how superfluous neurons are determined and the network is pruned.
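Question 7 above asks about pruning by weight decay; the sketch below shows one common form of the scheme, assuming a generic task-error gradient `grad` computed elsewhere (a hypothetical placeholder here). Each update shrinks all weights multiplicatively, which is equivalent to adding an L2 penalty on the weights, and connections whose weights fade below a threshold are then removed.

Illustrative sketch (Python):

import numpy as np

def decay_and_prune(W, grad, eta=0.1, decay=1e-3, prune_below=1e-2):
    # Gradient step on the task error, then multiplicative weight decay:
    # weights the task does not actively sustain shrink toward zero.
    W = (1.0 - decay) * (W - eta * grad)
    # Prune connections whose weights have decayed away.
    W[np.abs(W) < prune_below] = 0.0
    return W

W = np.random.default_rng(0).standard_normal((6, 5))
grad = np.zeros_like(W)     # placeholder gradient, for illustrating the call only
W = decay_and_prune(W, grad)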

UNIT-V

1. What is a competitive learning network? Give examples.
2. What is a self-organizing network? Give examples.
3. Define the term clustering in ANNs.
4. What is the c-means algorithm?
5. How will you measure clustering similarity?
6. What is the on-centre off-surround technique?
7. Describe the features of the ART network.
8. Write the differences between ART 1 and ART 2.
9. What is meant by the stability-plasticity dilemma in the ART network?
10. What is the 2/3 rule in ART?
11. What are the two subsystems in the ART network?
12. What are the applications of ART?
13. What are the two processes involved in RBF network design?
14. List some applications of the RBF network.
15. What are the basic computational needs for hardware implementation of ANNs?

UNIT-V

1. Explain the architecture and components of the Competitive Learning Neural Network with a neat diagram.
2. Explain the clustering method Learning Vector Quantization.
3. Draw the architecture of the SOM and explain it in detail.
4. Explain the SOM algorithm.
5. Draw the architecture of the ART1 network and explain it in detail.
6. Explain the ART1 algorithm.
7. Draw the architecture of the RBF network and explain it in detail.
8. Explain the Time Delay Neural Network.
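Questions 3 and 4 above ask for the SOM architecture and algorithm; below is a compact sketch of the algorithm for a one-dimensional map. Each step selects the best-matching unit (competition) and pulls it and its map neighbours toward the input (cooperation). The map size, linear decay schedules for the learning rate and neighbourhood width, and the random data are illustrative choices, one common setup among several.

Illustrative sketch (Python):

import numpy as np

def train_som(data, m=10, epochs=50, eta0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    W = rng.uniform(data.min(), data.max(), (m, data.shape[1]))
    for e in range(epochs):
        eta = eta0 * (1.0 - e / epochs)                # decaying learning rate
        sigma = max(sigma0 * (1.0 - e / epochs), 0.5)  # shrinking neighbourhood width
        for x in rng.permutation(data):
            bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))   # competition: best-matching unit
            d = np.abs(np.arange(m) - bmu)                        # distance along the 1-D map
            h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))            # neighbourhood function
            W += eta * h[:, None] * (x - W)                       # cooperative weight update
    return W

# Example: map 2-D points onto a 1-D chain of 10 prototype vectors.
data = np.random.default_rng(1).uniform(0.0, 1.0, (200, 2))
print(np.round(train_som(data), 2))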