Deep Learning Nanodegree Syllabus


Build Deep Learning Networks Today

Congratulations on considering the Deep Learning Nanodegree program!

Before You Start

Educational Objectives: Become an expert in neural networks, and learn to implement them in Keras and TensorFlow. Build convolutional networks for image recognition, recurrent networks for sequence generation, generative adversarial networks for image generation, and more.

Prerequisite Knowledge: Make sure to set aside adequate time on your calendar for focused work. In order to succeed in this program, we recommend having intermediate experience with Python, including NumPy and pandas, and basic knowledge of machine learning. You'll also need to be familiar with algebra, calculus (multivariable derivatives), and linear algebra (matrix multiplication). If you'd like to refresh your skills for this program, we suggest the AI with Python Nanodegree program.

Contact Info: While going through the program, if you have questions about anything, you can reach us at deeplearning-support@udacity.com.

Nanodegree Program Info

The Deep Learning Nanodegree program offers you a solid introduction to the world of artificial intelligence. In this program, you'll master fundamentals that will enable you to go further in the field, launch or advance a career, and join the next generation of deep learning talent that will help define a beneficial new AI-powered future for our world. You will study cutting-edge topics such as Neural Networks, Convolutional Networks, Recurrent Neural Networks, Generative Adversarial Networks, and Deep Reinforcement Learning, and build projects in Keras and NumPy, in addition to TensorFlow. You'll learn from authorities such as Sebastian Thrun, Ian Goodfellow, and Andrew Trask, and participate in our Experts-in-Residence program, where you'll gain exclusive insights from working professionals in the field. For anyone interested in this transformational technology, this program is an ideal point of entry.

The program comprises 5 courses and 5 projects. Each project you build will be an opportunity to prove your skills and demonstrate what you've learned in your lessons. This is a term-based program that requires students to keep pace with their peers. The program is delivered in 1 term spread over 4 months. On average, students will need to spend about 12-15 hours per week in order to complete all required coursework, including lecture and project time.

Length of Program: 4 months
Frequency of Classes: Term-based
Number of Reviewed Projects: 5
Instructional Tools Available: Video lectures, personalized project reviews, text instructions, quizzes, in-classroom mentorship

Projects

Building a project is one of the best ways both to test the skills you've acquired and to demonstrate your newfound abilities to future employers. Throughout this Nanodegree program, you'll have the opportunity to prove your skills by building the following projects:

Your First Neural Network
Dog-Breed Classifier
Generate TV Scripts
Generate Faces
Teach a Quadcopter How to Fly

In the sections below, you'll find a detailed description of each project along with the course material that presents the skills required to complete the project.

Project 1: Your First Neural Network

Learn neural network basics, and build your first network with Python and NumPy. Use modern deep learning frameworks (Keras, TensorFlow) to build multi-layer neural networks and analyze real data. In this project, you will build and train a neural network from scratch to predict the number of bikeshare users on a given day.

Supporting Lesson Content: Neural Networks

INTRODUCTION TO NEURAL NETWORKS: In this lesson, you will learn solid foundations of deep learning and neural networks. You'll also implement gradient descent and backpropagation in Python right here in the classroom.
IMPLEMENTING GRADIENT DESCENT: Mat will introduce you to a different error function and guide you through implementing gradient descent using NumPy matrix multiplication (see the sketch after this list).
TRAINING NEURAL NETWORKS: Now that you know what neural networks are, in this lesson you will learn several techniques to improve their training.
SENTIMENT ANALYSIS: In this lesson, Andrew Trask, the author of Grokking Deep Learning, will walk you through using neural networks for sentiment analysis.
KERAS: In this section, you'll get a hands-on introduction to Keras. You'll learn to apply it to analyze movie reviews.
TENSORFLOW: In this section, you'll get a hands-on introduction to TensorFlow, Google's deep learning framework, and you'll be able to apply it to an image dataset.
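To give a flavor of the gradient-descent lesson above, here is a minimal sketch of a single gradient-descent update for one sigmoid unit in plain NumPy. The function names, the toy data, and the squared-error loss are illustrative assumptions, not the course's actual classroom code.

```python
import numpy as np

def sigmoid(x):
    # Logistic activation for the single output unit
    return 1 / (1 + np.exp(-x))

def gradient_descent_step(X, y, weights, learn_rate=0.01):
    """One batch weight update for a single sigmoid output unit (illustrative)."""
    output = sigmoid(X @ weights)                  # forward pass via matrix multiplication
    error = y - output                             # prediction error
    error_term = error * output * (1 - output)     # gradient of squared error w.r.t. the unit's input
    weights += learn_rate * (X.T @ error_term) / X.shape[0]
    return weights

# Toy usage on random data (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(3)
for _ in range(1000):
    w = gradient_descent_step(X, y, w)
```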

Project 2: Dog Breed Classifier

In this project, you will learn how to build a pipeline that can be used within a web or mobile app to process real-world, user-supplied images. Given an image of a dog, your algorithm will identify an estimate of the canine's breed. If supplied an image of a human, the code will identify the resembling dog breed. Along with exploring state-of-the-art CNN models for classification, you will make important design decisions about the user experience for your app.

Supporting Lesson Content: Convolutional Neural Networks

CLOUD COMPUTING: Take advantage of Amazon's GPUs to train your neural network faster. In this lesson, you'll set up an instance on AWS and train a neural network on a GPU.
CONVOLUTIONAL NEURAL NETWORK: Alexis explains the theory behind Convolutional Neural Networks and how they help us dramatically improve performance in image classification.
CNNs IN TENSORFLOW: In this lesson, you'll walk through an example Convolutional Neural Network (CNN) in TensorFlow. You'll study the line-by-line breakdown of the code, and you can download the code and run it yourself.
WEIGHT INITIALIZATION: In this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution, which allows it to reach the best solution more quickly.
AUTOENCODERS: Autoencoders are neural networks used for data compression, image denoising, and dimensionality reduction. Here, you'll build autoencoders using TensorFlow.
TRANSFER LEARNING IN TENSORFLOW: In practice, most people don't train their own networks on massive datasets. In this lesson, you'll learn how to use a pretrained network on a new problem with transfer learning (a sketch follows this list).
DEEP LEARNING FOR CANCER DETECTION: In this lesson, Sebastian Thrun teaches us about his groundbreaking work on detecting skin cancer with convolutional neural networks.
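As a companion to the transfer-learning lesson, here is a hedged sketch of reusing a pretrained CNN as a frozen feature extractor for breed classification in tf.keras. The choice of VGG16, the 133-class output, and the classifier head are assumptions for illustration; the course's own notebook may be structured differently.

```python
import tensorflow as tf

NUM_BREEDS = 133  # assumption: number of dog-breed classes in the dataset

# Pretrained convolutional base, frozen so only the new head is trained
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(NUM_BREEDS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)
```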

Project 3: Generate TV Scripts

In this project, you will build your own recurrent networks and long short-term memory networks with Keras and TensorFlow. You'll perform sentiment analysis and generate new text, and use recurrent networks to generate new text from TV scripts.

Supporting Lesson Content: Recurrent Neural Networks

RECURRENT NEURAL NETWORKS: Ortal will introduce Recurrent Neural Networks (RNNs), which are machine learning models that are able to recognize and act on sequences of inputs.
LONG SHORT-TERM MEMORY NETWORK: Luis explains Long Short-Term Memory networks (LSTMs) and similar architectures, which have the benefit of preserving long-term memory.
IMPLEMENTATION OF RNN AND LSTM: In this lesson, you'll implement recurrent and LSTM networks yourself (a sketch of a character-level generator follows this list).
HYPERPARAMETERS: In this lesson, we'll look at a number of different hyperparameters that are important for our deep learning work. We'll discuss starting values and intuitions for tuning each hyperparameter.
EMBEDDINGS AND WORD2VEC: In this lesson, you'll learn about embeddings in neural networks by implementing the word2vec model.
SENTIMENT PREDICTION RNN: In this lesson, you'll learn to implement a recurrent neural network for predicting sentiment. This is intended to give you more experience building RNNs.
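To make the RNN/LSTM lessons concrete, here is a hedged sketch of a character-level LSTM text generator in tf.keras. The tiny placeholder corpus, sequence length, and layer sizes are all illustrative assumptions; the TV-script project itself works with word-level tokens and its own preprocessing.

```python
import numpy as np
import tensorflow as tf

text = "example corpus of tv script text ..."   # placeholder; use the real scripts here
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

SEQ_LEN = 20
encoded = np.array([char_to_idx[c] for c in text])
# Each training example is SEQ_LEN characters; the target is the next character
X = np.array([encoded[i:i + SEQ_LEN] for i in range(len(encoded) - SEQ_LEN)])
y = encoded[SEQ_LEN:]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 64),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(X, y, batch_size=128, epochs=20)
```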

Project 4: Generate Faces

Learn to understand and implement the DCGAN model to simulate realistic images, with Ian Goodfellow, the inventor of GANs (generative adversarial networks). Then, apply what you've learned to build a pair of multi-layer neural networks and make them compete against each other in order to generate realistic faces.

Supporting Lesson Content: Generative Adversarial Networks

GENERATIVE ADVERSARIAL NETWORK: Ian Goodfellow, the inventor of GANs, introduces you to these exciting models. You'll also implement your own GAN on the MNIST dataset (a sketch of the two competing networks follows this list).
DEEP CONVOLUTIONAL GANs: In this lesson, you'll implement a Deep Convolutional GAN to generate complex color images of house numbers.
GENERATE FACES: Compete two neural networks against each other to generate realistic faces.
SEMI-SUPERVISED LEARNING: Ian Goodfellow leads you through a semi-supervised GAN model, a classifier that can learn from mostly unlabeled data.
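The GAN lessons revolve around two networks competing: a generator that produces fake images and a discriminator that tries to tell real from fake. Below is a hedged, dense-layer sketch of that setup in tf.keras; the latent size and layer widths are assumptions, and the course's DCGAN uses convolutional layers and a custom training loop instead.

```python
import tensorflow as tf

LATENT_DIM = 100  # assumption: size of the random noise vector fed to the generator

# Generator: noise vector -> flattened 28x28 "image"
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(LATENT_DIM,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="tanh"),
])

# Discriminator: flattened image -> probability that the image is real
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: freeze the discriminator so generator updates try to fool a fixed critic
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

# Training would alternate: fit the discriminator on real vs. generated batches,
# then fit `gan` on noise with "real" labels to update the generator.
```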

Project 5: Train a Quadcopter to Fly

In this project, you will design an agent that can fly a quadcopter, and then train it using a reinforcement learning algorithm of your choice. You will apply the techniques you have learned in this module to find out what works best, but you will also have the freedom to come up with innovative ideas and test them on your own. The project is divided into 4 sections that cover different aspects of getting the quadcopter to fly, such as taking off, hovering, landing, and so on.

Supporting Lesson Content: Reinforcement Learning

WELCOME TO RL: The basics of reinforcement learning and OpenAI Gym.
THE RL FRAMEWORK: THE PROBLEM: Learn how to define Markov Decision Processes to solve real-world problems.
THE RL FRAMEWORK: THE SOLUTION: Learn about policies and value functions. Derive the Bellman equations.
DYNAMIC PROGRAMMING: Write your own implementations of iterative policy evaluation, policy improvement, policy iteration, and value iteration.
MONTE CARLO METHODS: Implement classic Monte Carlo prediction and control methods. Learn about greedy and epsilon-greedy policies. Explore solutions to the exploration-exploitation dilemma.
TEMPORAL-DIFFERENCE METHODS: Learn the difference between the Sarsa, Q-Learning, and Expected Sarsa algorithms (a tabular Q-learning sketch follows this list).
RL IN CONTINUOUS SPACES: Learn how to adapt traditional algorithms to work with continuous spaces.
DEEP Q-LEARNING: Extend value-based reinforcement learning methods to complex problems using deep neural networks.
POLICY GRADIENTS: Policy-based methods try to directly optimize for the optimal policy. Learn how they work, and why they are important, especially for domains with continuous action spaces.
ACTOR-CRITIC METHODS: Learn how to combine value-based and policy-based methods, bringing together the best of both worlds, to solve challenging reinforcement learning problems.
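The temporal-difference and Deep Q-Learning lessons both build on the same value-update rule. Here is a hedged sketch of tabular Q-learning on a made-up 1-D corridor environment; the environment, hyperparameters, and reward scheme are assumptions for illustration, and the quadcopter project itself uses a continuous state/action space with a deep network rather than a table.

```python
import numpy as np

N_STATES, N_ACTIONS = 6, 2        # states 0..5; actions: 0 = move left, 1 = move right
GOAL = N_STATES - 1               # reaching the last state yields reward +1
alpha, gamma, epsilon = 0.1, 0.99, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def choose_action(s):
    # Epsilon-greedy with random tie-breaking (an all-zero row would otherwise always pick "left")
    if rng.random() < epsilon or Q[s, 0] == Q[s, 1]:
        return int(rng.integers(N_ACTIONS))
    return int(Q[s].argmax())

for episode in range(500):
    s = 0
    while s != GOAL:
        a = choose_action(s)
        s_next = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: bootstrap from the best value of the next state
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # greedy policy; should prefer "right" (1) in every non-goal state
```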