- Teacher: DR.SUMATHI -

The Deep Learning course provides a comprehensive introduction to the principles, architectures, and applications of modern deep learning techniques. It is designed to equip undergraduate students in Information Science & Engineering with both the theoretical foundations and the conceptual understanding required to analyze and design deep neural network models.
The course begins with the foundations of neural networks, covering essential components such as network architectures, activation functions, loss functions, hyperparameters, and training methodologies. It introduces the historical evolution of deep learning and discusses the challenges that motivated its development. Students gain exposure to classical and shallow models, hierarchical feature learning, and pretrained models.
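As a small illustration of the components listed above, the sketch below defines a tiny network with an activation function, a loss function, a learning-rate hyperparameter, and a basic training loop. PyTorch is an assumed framework choice (the course does not prescribe one), and the data is random and purely illustrative.

    # Minimal sketch: architecture, activation, loss, hyperparameter, training loop.
    # Assumes PyTorch; the data is random toy data.
    import torch
    import torch.nn as nn

    model = nn.Sequential(          # network architecture
        nn.Linear(4, 16),
        nn.ReLU(),                  # activation function
        nn.Linear(16, 1),
    )
    loss_fn = nn.MSELoss()          # loss function
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # learning rate: a hyperparameter

    x = torch.randn(32, 4)          # toy inputs
    y = torch.randn(32, 1)          # toy targets

    for epoch in range(100):        # training loop
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()             # backpropagation computes gradients
        optimizer.step()            # gradient-descent update of the weights
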
A significant portion of the course focuses on unsupervised learning techniques, including Autoencoders and Restricted Boltzmann Machines (RBMs). Students learn the structure, working principles, and applications of autoencoders for tasks such as sparse feature learning and outlier detection, as well as the theoretical background and stacking of RBMs.
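To make the autoencoder idea concrete, the sketch below pairs an encoder that compresses a flattened input with a decoder that reconstructs it; a large reconstruction error can flag an outlier. This is a minimal illustration assuming PyTorch and 784-dimensional (e.g. 28x28) inputs; the layer sizes are placeholders, not part of the syllabus.

    # Minimal autoencoder sketch (assumed PyTorch; sizes are illustrative).
    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, input_dim=784, code_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, code_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(code_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim), nn.Sigmoid(),
            )

        def forward(self, x):
            code = self.encoder(x)           # compressed representation
            return self.decoder(code)        # reconstruction of the input

    model = Autoencoder()
    x = torch.rand(16, 784)                  # toy batch
    loss = nn.functional.mse_loss(model(x), x)  # reconstruction error; high values suggest outliers
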
The course further explores advanced deep learning architectures, including Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). Topics include CNN training and landmark architectures such as AlexNet, VGG, GoogLeNet, and ResNet, along with their applications in computer vision. RNNs and their gated variants, LSTM and GRU, together with Echo State Networks, are studied for sequential data modeling.
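The sketch below gives minimal examples of both families, assuming PyTorch: a small CNN for 28x28 grayscale images and an LSTM applied to a batch of sequences. The shapes and layer sizes are toy placeholders and are not meant to reproduce AlexNet, VGG, GoogLeNet, or ResNet.

    # Minimal CNN and LSTM sketch (assumed PyTorch; shapes are illustrative).
    import torch
    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolution extracts local features
        nn.ReLU(),
        nn.MaxPool2d(2),                            # pooling halves the spatial size
        nn.Flatten(),
        nn.Linear(8 * 14 * 14, 10),                 # classifier head for 10 classes
    )
    images = torch.randn(4, 1, 28, 28)
    logits = cnn(images)                            # shape: (4, 10)

    lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
    sequence = torch.randn(4, 20, 16)               # batch of 20-step sequences
    outputs, (h_n, c_n) = lstm(sequence)            # hidden state carries context across steps
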
Additionally, the course introduces Deep Reinforcement Learning and Generative Adversarial Networks (GANs), providing insights into learning through interaction, adversarial training, image generation, and conditional GANs. The limitations of neural networks are also discussed, along with emerging research directions such as one-shot learning and energy-efficient learning.
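As a pointer to what adversarial training looks like in practice, the sketch below pits a generator against a discriminator for one illustrative update each. It assumes PyTorch; the network sizes, learning rates, and "real" data batch are toy placeholders.

    # Minimal GAN sketch (assumed PyTorch; all sizes and data are toy placeholders).
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
    D = nn.Sequential(nn.Linear(784, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    real = torch.rand(8, 784)                       # stand-in for a batch of real images
    fake = G(torch.randn(8, 16))                    # generator maps noise to fake samples

    # Discriminator step: push real scores toward 1 and fake scores toward 0.
    d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(8, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
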
The final unit emphasizes regularization and optimization techniques essential for building robust and efficient deep learning models. Students learn regularization methods including dropout, early stopping, parameter sharing, ensemble methods, and sparse representations, along with the challenges of optimization and its fundamental algorithms.
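Two of these regularization methods are easy to show in a few lines: the sketch below adds dropout inside a small model and applies a simple early-stopping rule based on a held-out validation loss. It assumes PyTorch; the data, patience value, and layer sizes are illustrative placeholders.

    # Minimal dropout + early-stopping sketch (assumed PyTorch; toy data).
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(20, 64), nn.ReLU(),
        nn.Dropout(p=0.5),                  # randomly zeroes activations during training
        nn.Linear(64, 1),
    )
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    x_train, y_train = torch.randn(128, 20), torch.randn(128, 1)
    x_val, y_val = torch.randn(32, 20), torch.randn(32, 1)

    best_val, patience, bad_epochs = float("inf"), 5, 0
    for epoch in range(200):
        model.train()                       # enables dropout
        optimizer.zero_grad()
        loss_fn(model(x_train), y_train).backward()
        optimizer.step()

        model.eval()                        # disables dropout for evaluation
        with torch.no_grad():
            val_loss = loss_fn(model(x_val), y_val).item()
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
        if bad_epochs >= patience:          # early stopping: halt once validation stops improving
            break
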
By the end of the course, students will be able to analyze deep learning models, understand their applications across domains, and appreciate the need for optimization and regularization in real-world systems, thereby preparing them for advanced study or practical implementation in artificial intelligence and machine learning.
