COMP_SCI 449: Deep Learning



Prerequisites

Doctoral Student Standing or completion of CS 349

Description

Deep learning is a branch of machine learning based on algorithms that try to model high-level abstract representations of data by using multiple processing layers with complex structures. Some representations make it easier to learn tasks (e.g., face recognition or spoken word recognition) from examples. One of the promises of deep learning is replacing handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.

In this course students will study deep learning architectures such as autoencoders, convolutional deep neural networks, and recurrent neural networks. They will read original research papers that describe the algorithms and how they have been applied to fields like computer vision, automatic speech recognition, and audio event recognition.

For projects, students can work on their own or in groups (recommended) to write a codebase that reproduces a landmark research paper. This course is aimed at advanced undergraduates, masters, and PhD students.

  • Formerly COMP_SCI 396/496 (last offered in Spring 2022).
  • This course satisfies the Technical Elective requirement.

REQUIRED TEXTBOOK: None.

REFERENCE TEXTBOOKS: The Deep Learning Book (https://www.deeplearningbook.org), excerpts from Tom Mitchell’s Machine Learning, and academic papers published in the field.

COURSE COORDINATOR: Prof. Bryan Pardo

COURSE INSTRUCTOR: Prof. Bryan Pardo or Zach Wood-Doughty

COURSE GOALS: The goal of this course is to familiarize graduate students (and advanced undergraduates) with the current state-of-the-art in machine perception of images and sound using Deep Learning architectures.

DETAILED COURSE TOPICS:

What follows is an example syllabus. As topics of current interest in the field shift, course content will vary to reflect research trends.

Week 1: Perceptrons & gradient descent
Week 2: Multilayer Perceptrons & nonlinear activation functions
Week 3: Convolutional networks + residual networks
Week 4: Embeddings, latent spaces & control
Week 5: Recurrent networks
Week 6: LSTMs and GRUs
Week 7: Attention Networks
Week 8: Auto Encoders
Week 9: Variational Auto Encoders (VAEs)
Week 10: Generative Adversarial Networks
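As a flavor of the Week 1 material, the sketch below trains a single perceptron-style unit (a logistic unit) with batch gradient descent on a toy linearly separable problem. It is an illustrative example only, not course material; the data, function names, and hyperparameters are all assumptions made for the sketch.

```python
# Minimal sketch of Week 1's topic: a single logistic ("perceptron-style")
# unit trained by batch gradient descent on a toy linearly separable task.
# All names, data, and hyperparameters here are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_unit(X, y, lr=1.0, epochs=5000):
    """Train weights w and bias b with gradient descent on cross-entropy loss."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)            # forward pass: predicted probabilities
        grad = p - y                      # dLoss/dlogits for cross-entropy
        w -= lr * (X.T @ grad) / len(y)   # gradient step on weights
        b -= lr * grad.mean()             # gradient step on bias
    return w, b

# Toy data: AND-like labels, which are linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w, b = train_unit(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds)  # the unit should recover the labels [0 0 0 1]
```

A multilayer perceptron (Week 2) extends this by stacking such units and applying nonlinear activations between layers, which is what lets the network fit data that is not linearly separable.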

ASSIGNMENTS:

Homework assignments (30%)
Readings (30%)
Final projects (30%)
Class participation (10%)

COURSE OBJECTIVES: Students who complete this course should:

  • Have an understanding of the current state of the art in Deep Learning.
  • Be able to distill large amounts of research into coherent summaries.
  • Be able to think critically about work in the field.
  • Be able to reproduce an existing research project.