Advanced Computer Vision with Deep Learning | SGInnovate

DATE: TBC

Location

BASH, LEVEL 3, 79 AYER RAJAH CRESCENT SINGAPORE 139955

Advanced Computer Vision with Deep Learning

Organised by SGInnovate and Red Dragon AI

Together with Red Dragon AI, SGInnovate is pleased to present the second module of the Deep Learning Developer Series. In this module, we go beyond the basic skills taught in Module 1, such as Convolutional Neural Networks (CNNs), and expand your ability to build modern image networks using a variety of architectures and for applications beyond simple classification.

About the Deep Learning Developer Series:

The Deep Learning Developer Series is a hands-on, cutting-edge series targeted at developers and data scientists who are looking to build Artificial Intelligence (AI) applications for real-world usage. It is an expanded curriculum that breaks away from the regular eight-week-long full-time course structure and allows for modular customisation according to your own pace and preference. In every module, you will have the opportunity to build your own Deep Learning models as part of your main project. You will also be challenged to use your new skills in an application that relates to your field of work or interest.

Start here

    • Have an interest in Deep Learning?
    • Join us if you are able to read and follow code
    • Module 1 is compulsory before taking the advanced modules
    • Modules 2 and 3 each require Module 1
    • Module 4 requires Module 1 and either Module 2 or Module 3
    • Module 5 requires Modules 1, 2 and 3
    • Attain a “Deep Learning Specialist” certification when you complete all five modules

About this module:

Building on the learnings from the first module, we will be going beyond just using TensorFlow and Keras. PyTorch and TorchVision, which are often used for research in computer vision and cutting-edge architectures, will be introduced.
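
To give a flavour of what the PyTorch and TorchVision sessions involve, here is a minimal sketch (not course material) of loading a pretrained TorchVision model and running it on a single image; the image path is a placeholder.

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Load a pretrained ResNet-50 and switch it to inference mode
    model = models.resnet50(pretrained=True)
    model.eval()

    # Standard ImageNet preprocessing
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("example.jpg")       # placeholder image path
    batch = preprocess(img).unsqueeze(0)  # add a batch dimension

    with torch.no_grad():
        logits = model(batch)
    print(logits.argmax(dim=1))           # predicted ImageNet class index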

To understand the current state-of-the-art technologies, we will review the history of ImageNet winning models and focus on Inception and Residual models. We will also look at some of the newer models such as NASNet and AmoebaNet, and explore how the field has gone beyond hand-engineered models.

One key skill that you will acquire is how to use these modern architectures as feature extractors and apply them to create applications like image search and similarity comparisons. You will also tackle tasks such as object detection and learn how models like YOLO are able to detect multiple objects in an image.
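
As a rough illustration of the feature-extractor idea, the sketch below (assuming Keras; the image paths are placeholders) uses a pretrained network with its classification head removed to embed two images and compare them with cosine similarity.

    import numpy as np
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
    from tensorflow.keras.preprocessing import image

    # Pretrained backbone without the classifier; global average pooling
    # turns the final feature maps into a single vector per image.
    extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

    def embed(path):
        img = image.load_img(path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        return extractor.predict(x)[0]

    a = embed("query.jpg")       # placeholder paths
    b = embed("candidate.jpg")

    # Cosine similarity: closer to 1 means the images look more alike to the network
    similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    print(similarity)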

You will also learn about image segmentation and classification at the pixel level. This will involve using architectures like U-Nets and DenseNets. Furthermore, you will learn how they are used in a variety of image segmentation tasks, from perception for self-driving cars to medical image analysis.
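
To make the skip-connection idea behind U-Nets concrete, here is a deliberately tiny sketch (one downsampling and one upsampling step, assuming Keras; a real U-Net repeats this pattern several times).

    from tensorflow.keras import layers, Model

    inp = layers.Input(shape=(128, 128, 3))

    # Encoder: extract features, then downsample
    e1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(e1)
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)

    # Decoder: upsample and concatenate the encoder features (the skip connection)
    u1 = layers.UpSampling2D()(b)
    c1 = layers.Concatenate()([u1, e1])
    d1 = layers.Conv2D(32, 3, padding="same", activation="relu")(c1)

    # One output value per pixel, e.g. a foreground/background mask
    out = layers.Conv2D(1, 1, activation="sigmoid")(d1)

    model = Model(inp, out)
    model.summary()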

As with the other Deep Learning Developer modules, you will have the opportunity to build multiple models yourself.

This workshop is eligible for funding support. For more details, please refer to the "Pricing" tab above.

In this course, participants will learn:

  • About advanced classification and object detection
  • An introduction to PyTorch and TorchVision
  • Skills to create applications like image search and similarity comparisons
  • About image segmentation and classification at the pixel level with architectures like U-Nets and DenseNets, and how they are used in a variety of image segmentation tasks

Recommended Prerequisites:

  • Must have attended Module 1: Deep Learning Jump-start Workshop
  • Ability to read and follow code; we will send out some videos before the course begins to help specifically with Python syntax
  • Attendees MUST bring their own laptops

Interested but unable to make it on this date? Leave your details below and we will contact you for the next run.

Day 1

08:45am – 09:00am: Registration
09:00am – 10:30am: Convolutional Neural Networks (CNNs) Recap Part 1
Frameworks: TensorFlow, Keras

  • Convolution math in layers
  • Pooling and Strides
  • AlexNet
  • Building CNN networks (see the sketch after this list)
  • Calculating the parameters and shapes of various networks
  • Tuning CNNs
  • VGG networks
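
The sketch below is a minimal illustration of the "Building CNN networks" and parameter-counting items above (assuming Keras; it is not the course code).

    from tensorflow.keras import layers, models

    # A small CNN in the spirit of the classic architectures listed above
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

    # Prints each layer's output shape and parameter count;
    # e.g. the first Conv2D has (3*3*3 + 1) * 32 = 896 parameters.
    model.summary()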

10:30am – 10:45am: Tea Break
10:45am – 12:30pm: CNNs Recap Part 2
Frameworks: TensorFlow, Keras

12:30pm – 1:30pm: Lunch
1:30pm – 2:15pm: CNNs Recap Part 3
Frameworks: TensorFlow, Keras

2:15pm – 3:15pm: Intermediate CNNs Part 1
Frameworks: TensorFlow, Keras, PyTorch

  • Modern Convolutional Nets
  • Transfer learning with CNNs and fine-tuning (see the sketch after this list)
  • Inception architectures
  • Residual networks
  • ImageNet history and applications
  • Building a classifier using transfer learning
  • Kaggle competition for images part 1
  • Start personal project 1
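
As a minimal sketch of transfer learning and fine-tuning (assuming Keras and a hypothetical 5-class dataset):

    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import MobileNetV2

    # Pretrained backbone, frozen so that only the new head trains at first
    base = MobileNetV2(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False

    # New classification head for a hypothetical 5-class problem
    model = models.Sequential([
        base,
        layers.Dense(128, activation="relu"),
        layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Fine-tuning: once the head has converged, unfreeze the backbone
    # (base.trainable = True) and continue training with a much smaller learning rate.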

3:15pm – 3:30pm: Tea Break
3:30pm – 5:30pm: Intermediate CNNs Part 2
Frameworks: TensorFlow, Keras, PyTorch

5:30pm – 5:45pm: Closing Comments and Questions

Day 2

08:45am – 09:00am: Registration
09:00am – 10:30am: CNN Architecture Part 1
Frameworks: TensorFlow, Keras, PyTorch

  • Autoencoders (see the sketch after this list)
  • Repurposing CNN models
  • Object detection 
  • YOLO
  • Build an image search system
  • Continue personal project 1
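
A minimal convolutional autoencoder sketch for the "Autoencoders" item above (assuming Keras and 28x28 greyscale inputs such as MNIST):

    from tensorflow.keras import layers, Model

    inp = layers.Input(shape=(28, 28, 1))

    # Encoder: compress the image into a small feature map
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D()(x)                                   # 14x14
    encoded = layers.Conv2D(8, 3, activation="relu", padding="same")(x)

    # Decoder: reconstruct the original image from the compressed code
    x = layers.UpSampling2D()(encoded)                             # back to 28x28
    decoded = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

    autoencoder = Model(inp, decoded)
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
    # Trained to reproduce its own input: autoencoder.fit(x_train, x_train, ...)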

10:30am – 10:45am: Tea Break
10:45am – 12:45pm: CNN Architecture Part 2
Frameworks: TensorFlow, Keras, PyTorch 

12:45pm – 1:45pm: Lunch
1:45pm – 3:45pm: CNNs Segmentation Part 1
Frameworks: TensorFlow, Keras, PyTorch 

  • Image search
  • Segmentation networks
  • U-Net and skip-connection architectures
  • Batch normalisation 

3:45pm – 4:00pm: Tea Break
4:00pm – 5:30pm: CNNs Segmentation Part 2
Frameworks: TensorFlow, Keras, PyTorch 

5:30pm – 6:00pm: Closing comments and questions

Participants will be given two weeks to complete the online learning and an individual project.

Online Learning 

  • Building CNNs from scratch
  • Building autoencoders
  • Understanding object detection and location models
  • Style transfer
  • Fast style transfer

Assessments
Participants must fulfil the criteria stated below to pass the course.

1.    Online Tests: Participants are required to score more than 75%

2.    Project: Participants are required to present, and achieve a pass on, a project that demonstrates the following:

  • The ability to use or create a data processing pipeline that gets data in the correct format for running in a Deep Learning model
  • The ability to create a model from scratch or use transfer learning to create a Deep Learning model
  • The ability to train that model and get results
  • The ability to evaluate the model on held out data

Funding Support

This workshop was successfully endorsed for April 2018 to March 2019, and the CITREP+ funding application is in progress. Indicate your interest and we will contact you when registration opens.

CITREP+ is a programme under the TechSkills Accelerator (TeSA) – an initiative of SkillsFuture, driven by Infocomm Media Development Authority (IMDA).


*Please see the section below on ‘Guide for CITREP+ funding eligibility and self-application process’

Funding Amount: 

  • CITREP+ covers up to 90% of the nett payable course fee for eligible professionals

Please note: funding is capped at $3,000 per course application

  • CITREP+ covers up to 100% of the nett payable course fee for eligible students / full-time National Servicemen (NSF)

Please note: funding is capped at $2,500 per course application

Funding Eligibility: 

  • Singaporean / PR
  • Meets course admission criteria
  • Sponsoring organisation must be registered or incorporated in Singapore (only for individuals sponsored by organisations)

Please note: 

  • Employees of local government agencies and Institutes of Higher Learning (IHLs) will qualify for CITREP+ under the self-sponsored category
  • Sponsoring SME organisations that wish to apply for up to 90% funding support for the course must meet the SME status as defined here

Claim Conditions: 

  • Meet the minimum attendance (75%)
  • Complete and pass all assessments and / or projects

Guide for CITREP+ funding eligibility and self-application process:

For more information on CITREP+ eligibility criteria and application procedure, please click here


For enquiries, please send an email to [email protected]


Dr Martin Andrews
Martin has over 20 years’ experience in Machine Learning, which he has used to solve problems in financial modelling and to create AI automation for companies. His current area of focus and speciality is natural language processing and understanding. In 2017, Google appointed Martin as one of the first 12 Google Developer Experts for Machine Learning. Martin is also one of the co-founders of Red Dragon AI.


Sam Witteveen
Sam has used Machine Learning and Deep Learning in building multiple tech start-ups, including a children’s educational app provider with over 4 million users worldwide. His current focus is AI for conversational agents that allow humans to interact with computers more easily and quickly. In 2017, Google appointed Sam as one of the first 12 Google Developer Experts for Machine Learning in the world. Sam is also one of the co-founders of Red Dragon AI.

Topics: Artificial Intelligence / Deep Learning / Machine Learning / Robotics
