Advanced Computer Vision with Deep Learning | SGInnovate
22 August 2019
23 August 2019

Location

BASH, Level 3,
79 Ayer Rajah Crescent, via Lift Lobby C
Singapore 139955

Price

Early Bird Module 2 [Ends on 21 July 2019] (Ticket inclusive of GST) - $1,524.75
Module 2 (Ticket inclusive of GST) - $1,605

Advanced Computer Vision with Deep Learning

Organised by SGInnovate and Red Dragon AI

Together with Red Dragon AI, SGInnovate is pleased to present the second module of the Deep Learning Developer Series. In this module, we go beyond the basic skills taught in Module One, such as building Convolutional Neural Networks (CNNs), expanding your ability to build modern image networks for applications beyond simple classification.

About the Deep Learning Developer Series:
The Deep Learning Developer Series is a hands-on and cutting-edge series targeted at Developers and Data Scientists who are looking to build Artificial Intelligence (AI) applications for real-world usage. It is an expanded curriculum that breaks away from the regular eight-week long full-time course structure and allows for modular customisation according to your own pace and preference. In every module, you will have the opportunity to build your Deep Learning models as part of your main project. You will also be challenged to use your new skills in an application that relates to your field of work or interest.

Start here

    • Module 1: Have an interest in Deep Learning? Join us if you are able to read and follow code. This module is compulsory before you take the advanced modules
    • Module 2: You will need to take Module 1 before this module
    • Module 3: You will need to take Module 1 before this module
    • Module 4: You will need to take Module 1, and Module 2 or 3, before this module
    • Module 5: You will need to take Modules 1, 2 and 3 before this module
    • Attain a “Deep Learning Specialist” certification when you complete all five modules

About this module:
Building on the learnings from the first module, we will be going beyond just using TensorFlow and Keras. PyTorch and TorchVision, which are often used for research in Computer Vision and cutting-edge architectures, will be introduced.
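As a taste of the PyTorch style introduced in this module, here is a minimal sketch of a small convolutional network defined as an `nn.Module` (a generic illustration with made-up layer sizes, not the course's own code):

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # One conv block: 3-channel input -> 16 feature maps, then 2x downsampling
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # After pooling, a 32x32 input becomes 16 channels of 16x16
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
out = model(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
print(out.shape)  # torch.Size([1, 10])
```

Unlike Keras's `Sequential`-first style, PyTorch makes the forward pass an ordinary Python method, which is one reason it is popular for research code.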

To understand the current state-of-the-art technologies, we will review the history of ImageNet winning models and focus on Inception and Residual models. We will also look at some of the newer models such as NASNet and AmoebaNet, and explore how the field has gone beyond hand-engineered models.

One critical skill that you will acquire is how to use these modern architectures as feature extractors and apply them to create applications like image search and similarity comparisons. You will also discover how to perform tasks such as object detection and learn how models (like YOLO) can detect multiple objects in an image.
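The feature-extractor idea behind image search can be sketched without any deep learning library: embed each image with a pretrained CNN (the embeddings below are made-up stand-ins for what the layer before the classifier head would produce), then rank by cosine similarity:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over product of norms
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical CNN embeddings for an indexed image collection
index = {
    "cat_1.jpg": [0.9, 0.1, 0.0],
    "cat_2.jpg": [0.8, 0.2, 0.1],
    "car_1.jpg": [0.0, 0.1, 0.9],
}

query = [0.85, 0.15, 0.05]  # embedding of the query image
ranked = sorted(index, key=lambda k: cosine_similarity(query, index[k]), reverse=True)
print(ranked[0])  # cat_1.jpg — the most similar indexed image
```

In a real system the vectors would be hundreds or thousands of dimensions long, and an approximate nearest-neighbour index would replace the brute-force sort.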

You will also learn about image segmentation and classification at the pixel level. This will involve using architectures like U-Nets and DenseNets. Furthermore, you will learn how they are used in a variety of image segmentation tasks from perception for self-driving cars to medical image analysis.

As with the other Deep Learning Developer modules, you will have the opportunity to build multiple models yourself.

This workshop is eligible for funding support. For more details, please refer to the "Pricing" tab above.

In this course, participants will learn:

  • Advanced classification and object detection
  • An introduction to PyTorch and TorchVision
  • Skills to create applications like image search and similarity comparisons
  • Image segmentation and classification at the pixel level with architectures like U-Nets and DenseNets, and how they are used in a variety of image segmentation tasks

Recommended Prerequisites:

Completion of Module One of the Deep Learning Developer Series.
Interested but unable to make it on this date? Leave your details below and we will contact you for the next run.

Day 1 

08:45AM – 09:00AM: Registration
09:00AM – 10:30AM: Convolutional Neural Networks (CNNs) Recap Part One
Frameworks: TensorFlow, Keras

  • Convolution math in layers
  • Pooling and strides
  • AlexNet
  • Building CNN networks
  • Calculating the parameters and shapes of various networks
  • Tuning CNNs
  • The VGG network
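The shape and parameter arithmetic in the list above follows a simple formula. A sketch, using AlexNet's first layer as a worked example (96 filters of 11x11 at stride 4 on a 227x227 RGB input):

```python
def conv2d_output_shape(h, w, kernel, stride=1, padding=0):
    # Standard conv output formula: floor((size + 2*pad - kernel) / stride) + 1
    out_h = (h + 2 * padding - kernel) // stride + 1
    out_w = (w + 2 * padding - kernel) // stride + 1
    return out_h, out_w

def conv2d_params(in_ch, out_ch, kernel):
    # Each filter has in_ch * k * k weights plus one bias term
    return out_ch * (in_ch * kernel * kernel + 1)

print(conv2d_output_shape(227, 227, 11, stride=4))  # (55, 55)
print(conv2d_params(3, 96, 11))                     # 34944
```

The same two functions let you walk any network layer by layer and total up its parameter count.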

10:30AM – 10:45AM: Tea Break
10:45AM – 12:30PM: CNNs Recap Part Two
Frameworks: TensorFlow, Keras

12:30PM – 1:30PM: Lunch
1:30PM – 3:30PM: Intermediate CNNs Part One
Frameworks: TensorFlow, Keras, PyTorch

  • Modern Convolutional Nets
  • Transfer learning with CNNs and fine-tuning
  • Inception architectures
  • Residual networks
  • ImageNet history and applications
  • Building a classifier using transfer learning
  • Kaggle competition for images Part One
  • Start Personal Project One

3:30PM – 3:45PM: Tea Break
3:45PM – 5:30PM: Intermediate CNNs Part Two
Frameworks: TensorFlow, Keras, PyTorch

5:30PM – 6:00PM: Closing comments and questions

Day 2 

8:45AM – 9:00AM: Registration
9:00AM – 10:30AM: CNN Architecture Part One
Frameworks: TensorFlow, Keras, PyTorch

  • Autoencoders
  • Repurposing CNN models
  • Object detection
  • YOLO
  • Build an image search system
  • Continue Personal Project One

10:30AM – 10:45AM: Tea Break
10:45AM – 12:45PM: CNN Architecture Part Two
Frameworks: TensorFlow, Keras, PyTorch

12:45PM – 1:45PM: Lunch
1:45PM – 3:45PM: CNN Segmentation Part One
Frameworks: TensorFlow, Keras, PyTorch

  • Image search
  • Segmentation networks
  • U-Net and skip-connection architectures
  • Batch normalisation
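The skip connections that define U-Net-style segmentation networks can be sketched in a toy block: downsample, upsample, then concatenate the upsampled features with the full-resolution input before predicting per-pixel logits (a minimal illustration, not a full U-Net):

```python
import torch
import torch.nn as nn

class TinyUNetBlock(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: conv then 2x downsampling
        self.down = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Decoder: learned 2x upsampling back to the input resolution
        self.up = nn.ConvTranspose2d(8, 8, kernel_size=2, stride=2)
        # Head sees upsampled features concatenated with the original input
        self.out = nn.Conv2d(8 + 3, 1, kernel_size=1)

    def forward(self, x):
        skip = x                          # keep full-resolution features
        x = self.down(x)
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)   # the skip connection
        return self.out(x)                # one logit per pixel

seg = TinyUNetBlock()
mask = seg(torch.randn(1, 3, 64, 64))
print(mask.shape)  # torch.Size([1, 1, 64, 64])
```

A real U-Net repeats this down/up/concatenate pattern at several scales, which is what lets it localise segmentation boundaries precisely.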

3:45PM – 4:00PM: Tea Break
4:00PM – 5:30PM: Facial Recognition
Frameworks: TensorFlow

5:30PM – 6:00PM: Closing comments and questions

Participants will be given two weeks to complete the online learning and an individual project.

Online Learning

  • Building CNNs from scratch
  • Building autoencoders
  • Understanding object detection and localisation models
  • Style transfer
  • Fast style transfer

Assessments:

Participants must fulfil the criteria stated below to pass the course.

1. Online Tests: Participants are required to score more than 75%

2. Project: Participants are required to present and pass a project that demonstrates the following:

  • The ability to use or create a data processing pipeline that gets data into the correct format for a Deep Learning model
  • The ability to create a model from scratch, or use transfer learning, to build a Deep Learning model
  • The ability to train their model and obtain results
  • The ability to evaluate their model on held-out data

Funding Support 

CITREP+ is a programme under the TechSkills Accelerator (TeSA) – an initiative of SkillsFuture, driven by Infocomm Media Development Authority (IMDA).


*Please see the section below on ‘Guide for CITREP+ funding eligibility and self-application process’.

Funding Amount: 

  • CITREP+ covers up to 90% of the nett payable course fee for eligible professionals

Please note: funding is capped at $3,000 per course application

  • CITREP+ covers up to 100% funding of your nett payable course fee for eligible students/full-time National Servicemen (NSF)

Please note: funding is capped at $2,500 per course application

Funding Criteria: 

  • Singaporean / PR
  • Meets course admission criteria
  • Sponsoring organisations must be registered or incorporated in Singapore (only for individuals sponsored by organisations)

Please note: 

  • Employees of local government agencies and Institutes of Higher Learning (IHLs) will qualify for CITREP+ under the “Individuals / Self-Sponsored” category
  • Sponsoring SMEs who wish to apply for up to 90% funding support for the course must meet the SME status as defined here

Claim Conditions: 

  • Meet the minimum attendance (75%)
  • Complete and pass all assessments and / or projects

Guide for CITREP+ funding eligibility and self-application process:

For more information on CITREP+ eligibility criteria and application procedure, please click here

In partnership with employers to support employability:

 

This event is co-organised with e2i. e2i administers and acts on behalf of WSG in providing funding to support Singaporeans in enhancing employment and employability, and in the collection, use, processing and/or disclosure of Personal Data, such as NRIC and other national identification documents and numbers, for the purposes of grant administration, validating programme outcomes, fulfilling audit/legal/reporting requirements and analysis of data and statistics and formulating and reviewing of relevant employment or social welfare policies.

 

For enquiries, please send an email to [email protected]



Dr Martin Andrews
Martin has over 20 years’ experience in Machine Learning, which he has used to solve problems in financial modelling and to create AI automation for companies. His current focus and speciality is natural language processing and understanding. In 2017, Google appointed Martin as one of the first 12 Google Developer Experts for Machine Learning. Martin is also one of the co-founders of Red Dragon AI.
 



Sam Witteveen
Sam has used Machine Learning and Deep Learning in building multiple tech start-ups, including a children’s educational app provider with over 4 million users worldwide. His current focus is AI for conversational agents that let humans interact with computers more easily and quickly. In 2017, Google appointed Sam as one of the first 12 Google Developer Experts for Machine Learning in the world. Sam is also one of the co-founders of Red Dragon AI.

Topics: Artificial Intelligence / Deep Learning / Machine Learning / Robotics
