Deep Learning and AI

UNIT I        The Neural Network        [12 Hours]


Building Intelligent Machines – The Limits of Traditional Computer Programs – The Mechanics of Machine Learning – The Neuron – Expressing Linear Perceptrons as Neurons – Feed-Forward Neural Networks – Linear Neurons and Their Limitations – Sigmoid, Tanh, and ReLU Neurons – Training Feed-Forward Neural Networks: The Fast-Food Problem – Gradient Descent – The Delta Rule and Learning Rates – Gradient Descent with Sigmoidal Neurons – The Backpropagation Algorithm – Test Sets, Validation Sets, and Overfitting – Preventing Overfitting in Deep Neural Networks
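As a rough illustration of the delta rule and gradient descent with a sigmoidal neuron covered in this unit, the sketch below trains a single sigmoid neuron in pure Python. The function names and the toy AND dataset are illustrative choices, not drawn from the prescribed texts:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, epochs=10000, lr=0.5):
    """Train one sigmoid neuron with gradient descent on squared error.

    samples: list of ((x1, x2), target) pairs.
    Returns the learned weights and bias (w1, w2, b).
    """
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = sigmoid(w1 * x1 + w2 * x2 + b)
            # Delta rule: dE/dw = (y - t) * y * (1 - y) * x for squared error
            delta = (y - t) * y * (1.0 - y)
            w1 -= lr * delta * x1
            w2 -= lr * delta * x2
            b -= lr * delta
    return w1, w2, b

# Logical AND is linearly separable, so a single neuron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_neuron(data)
```

A single linear neuron could not learn a non-separable function such as XOR, which motivates the multi-layer feed-forward networks this unit goes on to cover.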

UNIT II       Implementing Neural Networks in TensorFlow        [12 Hours]


What Is TensorFlow? – How Does TensorFlow Compare to Alternatives? – Installing TensorFlow – Creating and Manipulating TensorFlow Variables – TensorFlow Operations – Placeholder Tensors – Sessions in TensorFlow – Beyond Gradient Descent: The Challenges with Gradient Descent – Local Minima in the Error Surfaces of Deep Networks – Model Identifiability – Momentum-Based Optimization – A Brief View of Second-Order Methods – Learning Rate Adaptation
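To give a feel for the momentum-based optimization topic in this unit, here is a minimal pure-Python sketch (the function name and the toy quadratic objective are illustrative assumptions, not from the texts). The velocity term accumulates past gradients, which helps the optimizer move steadily through flat regions and shallow local minima of the error surface:

```python
def minimize_with_momentum(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with momentum on a 1-D objective.

    grad: function returning the gradient at a point.
    beta: momentum coefficient; beta=0 recovers plain gradient descent.
    """
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)  # velocity blends old direction with new gradient
        x = x + v                    # take the step along the velocity
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = minimize_with_momentum(lambda x: 2.0 * (x - 3.0), x0=-5.0)
```

Learning-rate adaptation methods discussed in this unit (e.g., per-parameter schemes) refine this same update loop rather than replace it.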

UNIT III      Convolutional Neural Networks        [12 Hours]


Neurons in Human Vision – The Shortcomings of Feature Selection – Filters and Feature Maps – Max Pooling – Full Architectural Description of Convolutional Networks – Accelerating Training with Batch Normalization – Building a Convolutional Network for CIFAR-10 – Visualizing Learning in Convolutional Networks – Embedding and Representation Learning: Learning Lower-Dimensional Representations – Principal Component Analysis – Motivating the Autoencoder Architecture – Implementing an Autoencoder in TensorFlow
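The max pooling operation from this unit can be sketched in a few lines of pure Python (the function name and the sample feature map are illustrative, not from the texts). Each output cell keeps the strongest activation in its 2x2 window, halving each spatial dimension of the feature map:

```python
def max_pool_2x2(feature_map):
    """2x2 max pooling with stride 2 over a 2-D feature map (list of rows)."""
    rows, cols = len(feature_map), len(feature_map[0])
    return [
        [max(feature_map[i][j], feature_map[i][j + 1],
             feature_map[i + 1][j], feature_map[i + 1][j + 1])
         for j in range(0, cols, 2)]
        for i in range(0, rows, 2)
    ]

fmap = [
    [1, 3, 2, 0],
    [4, 2, 1, 5],
    [0, 1, 9, 2],
    [6, 3, 1, 1],
]
pooled = max_pool_2x2(fmap)  # → [[4, 5], [6, 9]]
```

In a real convolutional network this runs independently over every channel of the feature-map volume, trading spatial resolution for some translation invariance.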

UNIT IV       Introduction to AI & ML        [12 Hours]

What is Artificial Intelligence? – The Turing Test – Heuristics – Knowledge Representation – Expert Systems – Major Parts of AI – Introduction to Machine Learning: What is Machine Learning? – Types of Machine Learning Algorithms – Feature Engineering, Selection and Extraction – Dimensionality Reduction – Working with Datasets – The Bias-Variance Tradeoff
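The "Working with Datasets" topic in this unit typically involves splitting data into train, validation, and test portions. A minimal pure-Python sketch follows (the function name, fractions, and seed are illustrative assumptions, not from the texts):

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle a dataset and split it into train / validation / test parts.

    The validation set is used to tune hyperparameters; the test set stays
    held out until the end to estimate generalization error.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

data = list(range(100))
train, val, test = split_dataset(data)  # 70 / 15 / 15 split
```

Comparing training error against validation error on such a split is also the standard way to observe the bias-variance tradeoff this unit closes with.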

UNIT V        Deep Learning AI        [12 Hours]


Keras and the XOR function – What is Deep Learning? – What are Perceptrons? – The Anatomy of an Artificial Neural Network – The Loss Function Hyperparameter – The Optimizer Hyperparameter – What is Backward Error Propagation? – What is a Multi-Layer Perceptron? – The Convolutional Layer – The ReLU Activation Function – Deep Learning: RNNs and LSTMs: What is an RNN? – What Is an LSTM? – Working with TensorFlow and LSTMs
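The XOR function that opens this unit is the classic example of why a single perceptron is not enough: XOR is not linearly separable, but a two-layer network computes it easily. The pure-Python sketch below uses hand-set weights rather than Keras training, so the weight values and function names are illustrative, not from the texts:

```python
def step(z):
    """Threshold activation of a classic perceptron unit."""
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    """XOR via a 2-2-1 multi-layer perceptron with hand-chosen weights.

    Hidden unit h1 computes OR, h2 computes NAND; the output unit
    ANDs them together, which yields exactly XOR.
    """
    h1 = step(x1 + x2 - 0.5)    # OR of the inputs
    h2 = step(-x1 - x2 + 1.5)   # NAND of the inputs
    return step(h1 + h2 - 1.5)  # AND of the hidden activations

# XOR truth table over all four input pairs: expect 0, 1, 1, 0.
results = [xor_mlp(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

A Keras treatment of the same problem would learn equivalent weights with a loss function, an optimizer, and backward error propagation, the hyperparameters this unit examines one by one.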

TEXT(S)

  1. Nikhil Buduma, “Fundamentals of Deep Learning – Designing Next-Generation Machine Intelligence Algorithms”, O’Reilly, 2017, First Edition, ISBN: 978-1-491-92561-4
  2. Oswald Campesato, “Artificial Intelligence, Machine Learning, and Deep Learning”, Mercury Learning and Information, 2020, ISBN: 978-1-68392-467-8

REFERENCE MATERIALS

  1. Goodfellow, I., Bengio, Y., and Courville, A., “Deep Learning”, MIT Press, 2016.
  2. Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola, “Dive into Deep Learning – Release 0.17.0”, Amazon Science, First Edition, 2021, ISBN 1544361378

E-RESOURCES

  1. https://www.youtube.com/watch?v=jTzJ9zjC8nU
  2. https://www.youtube.com/watch?v=tXVNS-V39A0
  3. https://www.youtube.com/watch?v=YRhxdVk_sIs
  4. https://www.youtube.com/watch?v=ad79nYk2keg
  5. https://www.youtube.com/watch?v=e59u5YyuEfQ