Advanced Computer Vision with Deep Learning | SGInnovate
January 31, 2019 to February 1, 2019

Location

BASH, Level 3,
79 Ayer Rajah Crescent
Singapore 139955

Price

Early Bird Module 2 (Ticket Inclusive of G.S.T) - $1524.75
Module 2 (Ticket Inclusive of G.S.T) - $1605.00

Advanced Computer Vision with Deep Learning

Organised by SGInnovate and Red Dragon AI

Together with Red Dragon AI, we at SGInnovate are pleased to present the Deep Learning Developer Series. This workshop is the second instalment of the series. In this module, we go beyond the basic skills learned in Module 1, such as Convolutional Neural Networks, and expand your ability to build modern image networks using a variety of architectures and for applications beyond simple classification.

To understand the current state-of-the-art technologies, we will review the history of ImageNet-winning models and focus on the Inception and Residual models. We will also look at cutting-edge models such as NASNet and AmoebaNet, show how they differ, and see how the field has moved beyond hand-engineered models.

One key skill you will acquire is how to use these modern architectures as feature extractors and then apply them to create applications such as image search and similarity comparison.
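
As a flavour of what this looks like in code, here is a minimal sketch of the feature-extractor idea, assuming Keras with a pretrained ResNet50 and placeholder image file names (not part of the course materials):

    import numpy as np
    from tensorflow.keras.applications import ResNet50
    from tensorflow.keras.applications.resnet50 import preprocess_input
    from tensorflow.keras.preprocessing import image

    # Pretrained ImageNet model with the classification head removed;
    # global average pooling gives one 2048-d feature vector per image.
    extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

    def embed(path):
        img = image.load_img(path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        return extractor.predict(x)[0]

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Placeholder file names -- substitute your own images.
    print(cosine_similarity(embed("query.jpg"), embed("candidate.jpg")))

A full image-search system simply indexes many such vectors and returns the ones nearest to a query vector.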

Start here

    • Have an interest in Deep Learning?
    • Join us if you are able to read and follow code
    • Module 1 is compulsory before you take the advanced modules
    • Modules 2 and 3 each require Module 1
    • Module 4 requires Module 1, plus either Module 2 or Module 3
    • Module 5 requires Modules 1, 2 and 3
    • Attain a “Deep Learning Specialist” certification when you complete all five modules

In this module you will also discover how to perform tasks such as object detection, and learn how models like YOLO go beyond classifying images to detecting where multiple objects are located in an image.
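
YOLO itself is covered in the workshop; purely to illustrate what a detection model returns (boxes, class labels and scores for multiple objects in a single image), here is a hedged sketch that uses TorchVision's pretrained Faster R-CNN detector as a stand-in for YOLO, assuming a recent torchvision version and a placeholder image file:

    import torch
    import torchvision
    from torchvision import transforms
    from PIL import Image

    # Pretrained detector from torchvision (a stand-in for YOLO here).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    # Placeholder image path -- substitute your own photo.
    img = transforms.ToTensor()(Image.open("street.jpg").convert("RGB"))

    with torch.no_grad():
        # The model takes a list of image tensors and returns one dict per
        # image with 'boxes' (x1, y1, x2, y2), 'labels' and 'scores'.
        detections = model([img])[0]

    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if score > 0.8:
            print(int(label), [round(v, 1) for v in box.tolist()], round(float(score), 2))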

You will also learn about image segmentation and classifying at the pixel level. This involves architectures like U-Nets and DenseNets, and you will learn how they are used in a variety of image segmentation tasks, from perception for self-driving cars to medical image analysis.
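
To make the idea of skip connections concrete, here is a deliberately tiny U-Net-style sketch in Keras (two levels deep, binary masks, all sizes illustrative only), not the exact architectures used in the workshop:

    from tensorflow.keras import layers, Model

    inputs = layers.Input(shape=(128, 128, 3))

    # Encoder: convolve, remember the activations, then downsample.
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = layers.Conv2D(64, 3, activation="relu", padding="same")(p2)

    # Decoder: upsample and concatenate the matching encoder output
    # (the skip connection), so fine spatial detail is preserved.
    u2 = layers.UpSampling2D()(b)
    c3 = layers.Conv2D(32, 3, activation="relu", padding="same")(layers.concatenate([u2, c2]))
    u1 = layers.UpSampling2D()(c3)
    c4 = layers.Conv2D(16, 3, activation="relu", padding="same")(layers.concatenate([u1, c1]))

    # One sigmoid output per pixel: a binary segmentation mask.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)

    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")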

Building on the tools taught in the first module, we will go beyond TensorFlow and Keras to introduce PyTorch and TorchVision, which are widely used for research in computer vision and for cutting-edge architectures.
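
As a first taste of the PyTorch side, the sketch below (a pretrained ResNet-18 from TorchVision, standard ImageNet preprocessing, a placeholder image file) shows how little code is needed to run a pretrained classifier:

    import torch
    from torchvision import models, transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    model = models.resnet18(pretrained=True)
    model.eval()

    # Placeholder image path -- substitute your own file.
    x = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    print("top class index:", int(probs.argmax()), "probability:", float(probs.max()))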

As with all the modules, you will have the opportunity to build multiple models yourself. Most importantly, as part of your main project, you will be challenged to use your newly learned skills in an application that relates to your field of work or interest.

Beyond giving you an understanding of what can be done in cutting-edge computer vision and how it is done, the goal of the workshop is to arm you with deep learning computer vision skills so that you can apply them in your own area of work or projects.

This workshop is eligible for funding support. For more details, please refer to the "Pricing" tab above.

In the course participants will: 

  • Learn about advanced classification and object detection
  • Get an introduction to PyTorch and TorchVision
  • Acquire skills to create applications like image search and similarity comparisons
  • Learn about image segmentation and classifying at the pixel level with architectures like U-Nets and DenseNets, and how they are used in a variety of image segmentation tasks

Mandatory Prerequisites:

  • Completion of Module 1 of the Deep Learning Developer Series

Interested but unable to make it on this date? Leave your details below and we will contact you for the next run.

Day 1
08:45am – 09:00am: Registration
09:00am – 10:30am: Convolutional Neural Network Recap Part 1
Frameworks: TensorFlow, Keras

  • Convolution Math in layers
  • Pooling and Strides
  • AlexNet
  • Building CNN networks
  • Calculating the parameters and shapes of various networks (see the worked example after this list)
  • Tuning CNNs
  • VGG Network
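
As a taste of the parameter arithmetic covered in the recap, here is a small worked example (the layer sizes are arbitrary and purely illustrative):

    from tensorflow.keras import layers, Model

    # A 3x3 convolution mapping 64 input channels to 128 output channels:
    # parameters = (3 * 3 * 64 + 1) * 128 = 73,856 (the +1 is one bias per filter).
    inputs = layers.Input(shape=(32, 32, 64))
    outputs = layers.Conv2D(128, kernel_size=3, padding="same")(inputs)
    Model(inputs, outputs).summary()   # reports 73,856 trainable parameters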

10:30am – 10:45am: Tea Break
10:45am – 12:30pm: Convolutional Neural Network Recap Part 2
Frameworks: TensorFlow, Keras

12:30pm – 1:30pm: Lunch
1:30pm – 2:15pm: Convolutional Neural Network Recap Part 3
Frameworks: TensorFlow, Keras

2:15pm – 3:15pm: Intermediate CNNs Part 1
Frameworks: TensorFlow, Keras, PyTorch

  • Modern Convolutional Nets
  • Transfer Learning with CNNs and Finetuning
  • Inception architectures
  • Residual Networks
  • ImageNet history and applications
  • Building a classifier using transfer learning (a minimal sketch follows this list)
  • Kaggle competition for images part 1
  • Start personal project 1
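
The transfer-learning pattern above, sketched minimally with Keras (a frozen pretrained MobileNetV2 base plus a small new classification head; the input size and 10-class output are placeholders, not part of the course materials):

    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import MobileNetV2

    # Pretrained ImageNet base, used as a frozen feature extractor;
    # fine-tuning would later unfreeze some of these layers.
    base = MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(10, activation="softmax"),   # placeholder: 10 target classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=5)  # with your own dataset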

3:15pm – 3:30pm: Tea Break
3:30pm – 6:30pm: Intermediate CNNs Part 2
Frameworks: TensorFlow, Keras, PyTorch

6:30pm – 6:45pm: Closing Comments and Questions

Day 2
08:45am – 09:00am: Registration
09:00am – 10:30am: Autoencoders and Object Detection Part 1
Frameworks: TensorFlow, Keras, PyTorch

  • Autoencoders (see the sketch after this list)
  • Repurposing CNN models
  • Object Detection
  • YOLO
  • Build an Image search system
  • Continue personal project 1
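
For the autoencoder item above, a minimal fully connected autoencoder in Keras (a 32-dimensional bottleneck on flattened 28x28 inputs; every size here is illustrative only):

    from tensorflow.keras import layers, Model

    inputs = layers.Input(shape=(784,))                          # e.g. a flattened 28x28 image
    encoded = layers.Dense(32, activation="relu")(inputs)        # compressed representation
    decoded = layers.Dense(784, activation="sigmoid")(encoded)   # reconstruction

    autoencoder = Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
    # autoencoder.fit(x_train, x_train, epochs=10)  # trained to reproduce its own input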

10:30am – 10:45am: Tea Break
10:45am – 12:45pm: Autoencoders and Object Detection Part 2
Frameworks: TensorFlow, Keras, PyTorch

12:45pm – 1:45pm: Lunch
1:45pm – 3:45pm: CNN Segmentation Part 1
Frameworks: TensorFlow, Keras, PyTorch

  • Image Search
  • Segmentation Networks
  • U-Net and Skip connections architectures
  • Batch Normalization

3:45pm – 4:00pm: Tea Break
4:00pm – 5:30pm: CNN Segmentation Part 2
Frameworks: TensorFlow, Keras, PyTorch

5:30pm – 6:00pm: Closing Comments and Questions

Online Content

  • Building CNNs from scratch
  • Building Autoencoders
  • Understanding object detection and localisation models
  • Style Transfer
  • Fast Style Transfer

Funding Support

This workshop is eligible for CITREP+ funding.

CITREP+ is a programme under the TechSkills Accelerator (TeSA) – an initiative of SkillsFuture, driven by the Infocomm Media Development Authority (IMDA).



*Please see ‘Guide for CITREP+ funding eligibility and self-application process’ below for more information. 

Funding Amount: 

  • For professionals, CITREP+ covers up to 90% of your nett payable course fee, depending on eligibility

Please note: funding is capped at $3,000 per course application

  • For eligible students and full-time National Servicemen (NSF), CITREP+ covers up to 100% of your nett payable course fee

Please note: funding is capped at $2,500 per course application

Funding Eligibility: 

  • Singaporean / PR
  • Meets course admission criteria
  • Sponsoring organisation must be registered or incorporated in Singapore (only for individuals sponsored by organisations)

Please note: 

  • Employees of local government agencies and Institutes of Higher Learning (IHLs) will qualify for CITREP+ under the self-sponsored category
  • Sponsoring SME organisations that wish to apply for up to 90% funding support for the course must meet the SME status as defined here

Claim Conditions: 

  • Meet the minimum attendance (75%)
  • Complete and pass all assessments and / or projects

Guide for CITREP+ funding eligibility and self-application process:

For more information on CITREP+ eligibility criteria and the application procedure, please click here

For enquiries, please send an email to [email protected]


Dr Martin Andrews
Martin has over 20 years’ experience in Machine Learning, which he has used to solve problems in financial modelling and to create AI automation for companies. His current area of focus and speciality is natural language processing and understanding. In 2017, Google appointed Martin as one of the first 12 Google Developer Experts for Machine Learning. Martin is also one of the co-founders of Red Dragon AI.

 


Sam Witteveen
Sam has used Machine Learning and Deep Learning in building multiple tech start-ups, including a children’s educational app provider with over 4 million users worldwide. His current focus is AI for conversational agents that allow humans to interact with computers more easily and quickly. In 2017, Google appointed Sam as one of the first 12 Google Developer Experts for Machine Learning in the world. Sam is also one of the co-founders of Red Dragon AI.

Topics: Artificial Intelligence / Deep Learning / Machine Learning / Robotics, Talent