Linear Classifiers & Neural Networks

Lecture 02: Image Classification

This lecture introduces the core computer vision task of image classification, then explains a naive approach: the k-nearest-neighbor classifier.
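
The k-nearest-neighbor idea can be sketched in a few lines. This is a minimal illustrative implementation (the function name and the use of L2 distance are our choices), not the course's reference code:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by a majority vote among its k nearest
    training points under L2 distance. Minimal sketch for illustration."""
    preds = []
    for x in X_test:
        dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))  # distance to every training point
        nearest = np.argsort(dists)[:k]                    # indices of the k closest
        votes = y_train[nearest]
        preds.append(np.bincount(votes).argmax())          # majority label
    return np.array(preds)
```

Note that there is no training step at all: the "model" memorizes the training set, which is exactly why this approach is naive and slow at test time.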

Lecture 03: Linear Classifier & Regularization

This lecture introduces linear classifier and the concept of regularization.

  • Linear classifier: An image classification method that learns its parameters from the training data
  • Regularization: A method that lets us express a preference among candidate models and helps prevent overfitting
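
Both ideas fit in a few lines of code. The sketch below (function names are ours) computes the per-class scores of a linear classifier and an L2 regularization penalty, which prefers smaller weights:

```python
import numpy as np

def scores(W, b, X):
    # Linear classifier: one score per class, s = W x + b for each row of X
    return X @ W.T + b

def l2_reg(W, lam):
    # L2 regularization: penalize large weights, expressing a preference
    # for "simpler" models; lam controls the strength of that preference
    return lam * np.sum(W * W)
```

In training, the regularization term is simply added to the data loss, so the optimizer trades off fitting the data against keeping the weights small.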

Lecture 04: Optimization

In the previous lecture, we learned about the loss function, which tells us how well our model is currently performing. In this lecture, we introduce “optimization”: the process of using the computed loss to improve the model.
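
The workhorse optimization algorithm here is gradient descent: repeatedly step the weights in the direction that decreases the loss. A minimal sketch (names are ours; `grad_fn` stands in for whatever computes the gradient of the loss):

```python
def gradient_descent(grad_fn, w0, lr=0.1, steps=100):
    """Vanilla gradient descent: repeatedly step opposite the gradient.
    grad_fn(w) returns dL/dw; lr is the learning rate (step size)."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w
```

For example, minimizing L(w) = (w - 3)^2, whose gradient is 2(w - 3), converges to w ≈ 3 from any starting point with a small enough learning rate.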

Lecture 05: Neural Networks

In this lecture, we introduce:

  1. Feature Transform
  2. Neural Network
  3. Space Warping
  4. Universal Approximation

We can’t control our data distribution, but we can apply a feature transform so the data are distributed in a way that is easy to classify. Neural networks give us a way to tackle the problems we found with linear classifiers. Space warping and universal approximation give us intuition about why and how neural networks work.

Lecture 06: Backpropagation

Backpropagation gives us an efficient and modular way to compute the gradient of the loss.
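
The "modular" part means each node in the computation graph only needs its local derivative; the chain rule stitches them together. A worked sketch for the small expression f = (x·y + z)², done by hand (variable names are ours):

```python
def f_and_grads(x, y, z):
    # Forward pass: build the graph node by node
    q = x * y       # multiply node
    s = q + z       # add node
    f = s ** 2      # square node
    # Backward pass: apply the chain rule in reverse topological order
    ds = 2 * s      # df/ds, local gradient of the square node
    dq = ds         # ds/dq = 1, the add node routes the gradient through
    dz = ds         # ds/dz = 1
    dx = dq * y     # dq/dx = y, the multiply node swaps its inputs
    dy = dq * x     # dq/dy = x
    return f, (dx, dy, dz)
```

Each backward line uses only values already computed in the forward pass, which is exactly why backprop is efficient.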


Convolutional Network

Lecture 07: Convolutional Network

Convolutional networks (CNNs) resolve a problem with the neural networks we learned about in the previous lecture: those networks flatten the image into a vector of pixels and do not respect the spatial structure of the image.
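
The core operation is sliding a small filter over the image, so nearby pixels are processed together. A naive single-channel sketch (names are ours; like most deep-learning libraries, this is technically cross-correlation rather than a flipped-kernel convolution):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2D convolution: slide the kernel over the image
    and take a dot product at each position. Illustrative sketch only."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Local dot product between the kernel and an image patch
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out
```

Because the same kernel is reused at every position, a conv layer has far fewer parameters than a fully connected layer over the same image.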

Lecture 08: CNN Architectures

By walking through models from the ImageNet classification challenges, this lecture teaches common rules and methods for designing CNN architectures.

Lecture 10: Training Neural Network I

In this lecture, we talk in detail about the choices made before training the network: activation functions, data preprocessing, weight initialization, and regularization.
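
One widely used weight-initialization scheme for ReLU networks is Kaiming (He) initialization: scale random Gaussian weights by sqrt(2 / fan_in) so that activation variance stays roughly constant from layer to layer. A minimal sketch (function name is ours):

```python
import numpy as np

def kaiming_init(fan_in, fan_out, seed=None):
    """He/Kaiming initialization for ReLU layers: Gaussian weights
    scaled by sqrt(2 / fan_in) to preserve activation variance."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)
```

With a bad scale the activations shrink toward zero or blow up as depth grows, which is why initialization matters so much before any training happens.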

Lecture 11: Training Neural Network II

  1. Learning Rate Schedule
  2. Tips and Tricks for Choosing Hyperparameters
  3. Model Ensembles
  4. Transfer Learning
  5. Distributed Training
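
Two common learning-rate schedules from the list above can be sketched directly (function names are ours; the specific `drop`/`every` defaults are illustrative):

```python
import math

def step_decay(lr0, epoch, drop=0.1, every=30):
    # Step schedule: multiply the learning rate by `drop` every `every` epochs
    return lr0 * (drop ** (epoch // every))

def cosine_schedule(lr0, epoch, total_epochs):
    # Cosine schedule: smoothly anneal the learning rate from lr0 down to 0
    return 0.5 * lr0 * (1 + math.cos(math.pi * epoch / total_epochs))
```

The cosine schedule avoids the sudden drops of the step schedule and needs fewer hyperparameters: just the initial rate and the total number of epochs.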

RNN and Transformers

Lecture 12: Recurrent Neural Network

This lecture introduces the recurrent neural network (RNN), a kind of neural network that can handle tasks where the context and order of the data matter, such as speech understanding or stock price prediction over time.
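
The key mechanism is a hidden state that is carried across time steps, so earlier inputs influence how later ones are processed. A vanilla RNN sketch (names are ours):

```python
import numpy as np

def rnn_step(x, h_prev, Wxh, Whh, b):
    # One vanilla RNN step: the new hidden state mixes the current input
    # with the previous hidden state, so input order matters
    return np.tanh(x @ Wxh + h_prev @ Whh + b)

def rnn_forward(xs, h0, Wxh, Whh, b):
    # Apply the same step (same weights) over the whole sequence,
    # carrying the hidden state forward
    h = h0
    for x in xs:
        h = rnn_step(x, h, Wxh, Whh, b)
    return h
```

Because the weights are shared across time steps, the same small network can process sequences of any length.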

Lecture 13: Transformers

In this lecture we start from the concept of “attention”, which resolves the bottleneck problem in sequence-to-sequence models. We then extend it into a new kind of layer, the “attention layer”, which is a crucial part of the “transformer”.
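
The standard form is scaled dot-product attention: each query scores all keys, the scores are softmaxed into weights, and the output is the weighted average of the values. A minimal single-head sketch (names are ours):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends to every key
    and returns a weighted average of the values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # similarity, scaled by sqrt(d)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax rows
    return weights @ V, weights
```

Unlike an RNN, nothing here depends on sequence position, which is why transformers add positional encodings separately.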


Other Computer Vision Tasks

Lecture 14: Visualizing and Understanding

In this lecture we try to figure out what is actually happening inside neural networks, and how to use these findings to create interesting applications.

Lecture 15: Object Detection

This lecture introduces object detection, a computer vision task that detects multiple objects in images and encloses them with bounding boxes.
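
A metric that comes up constantly when working with bounding boxes is intersection over union (IoU), used to decide whether a predicted box matches a ground-truth box. A sketch assuming `(x1, y1, x2, y2)` corner format (the function name and format convention are ours):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2). Returns a value in [0, 1]."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

IoU is scale-invariant, which makes it a fairer match criterion than raw pixel distance between box corners.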

Lecture 16: Image Segmentation

This lecture introduces a new computer vision task, “image segmentation”, where we want to classify each pixel of the image into a category.

Lecture 17: 3D Vision

In this lecture we focus on two tasks:

  1. Predicting 3D shapes from a single image
  2. Processing 3D input data

Lecture 18: Videos

This lecture introduces computer vision tasks involving videos.

Lecture 19: Generative Models I

In this lecture we first explain what generative models are, then introduce the “Autoregressive Model” and the “Variational Autoencoder”.

Lecture 20: Generative Models II (GANs)

This lecture introduces Generative Adversarial Networks (GANs)

Lecture 21: Reinforcement Learning

This lecture introduces reinforcement learning