Fundamentals of Deep Learning for Multi-GPUs | SGInnovate
11 March 2019

Location

Suntec Singapore Convention and Exhibition Centre
Singapore

Price

Tutorial Pass - $250

Fundamentals of Deep Learning for Multi-GPUs

Presented by SGInnovate, NVIDIA & NSCC

SGInnovate, together with the NVIDIA Deep Learning Institute (DLI) and the National Supercomputing Centre (NSCC), is proud to bring you Fundamentals of Deep Learning for Multi-GPUs.

Led by seasoned instructors, this workshop teaches the latest deep learning techniques for designing and deploying neural network-powered machine learning across a variety of application domains.

This workshop teaches you to apply techniques for training deep neural networks on multiple GPUs, shortening the training time required for data-intensive applications. You will work with widely used deep learning tools, frameworks, and workflows by performing neural network training on a fully configured GPU-accelerated workstation in the cloud. The workshop starts with a single linear neuron, defining the loss function and the optimisation logic for gradient descent. It then shows how to transform a single-GPU implementation into a Horovod multi-GPU implementation, reducing the complexity of writing efficient distributed software, and concludes with techniques for improving the overall performance of the whole pipeline.

In the course, participants will learn:

  • Various approaches to multi-GPU training
  • Algorithmic and engineering challenges in the large-scale training of neural networks

Click here for more SGInnovate – NVIDIA Training Programmes.

08:45am – 09:00am: Registration
09:00am – 09:45am: Theory of Data Parallelism
Understand the issues with sequential, single-threaded data processing and how parallel processing speeds up applications.

  • Issues with sequential processing
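As a taste of the idea behind data parallelism (this sketch is illustrative, not part of the course materials): split a minibatch across workers, let each compute the gradient on its shard, and average. For equal-sized shards, the average of the per-shard gradients equals the full-batch gradient, which is why the parallel version trains the same model.

```python
# Illustrative sketch: data parallelism for a 1-D linear model w*x.
# Averaging per-shard gradients reproduces the full-batch gradient.

def gradient(w, xs, ys):
    """Mean gradient of the squared error 0.5*(w*x - y)**2 over a batch."""
    return sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.5

# Full batch processed sequentially on one worker
g_full = gradient(w, xs, ys)

# Data-parallel: two workers each process half the minibatch
g0 = gradient(w, xs[:2], ys[:2])
g1 = gradient(w, xs[2:], ys[2:])
g_avg = (g0 + g1) / 2  # the "all-reduce" (averaging) step

print(g_full, g_avg)  # identical when shards are equal-sized
```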

09:45am – 10:00am: Tea Break
10:00am – 12:00pm: Introduction to Multi-GPU Training
Define a simple neural network and a cost function, then iteratively calculate the gradient of the cost function and update the model parameters using the Stochastic Gradient Descent (SGD) optimisation algorithm.

  • Overview of loss functions, gradient descent, and SGD
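The loop described above can be sketched in a few lines (a minimal illustration, not the course notebook): fit a single linear neuron y = w*x + b with a squared-error loss, sampling one example per step and following the negative gradient.

```python
# Minimal SGD sketch for a single linear neuron y = w*x + b
# trained on data generated from y = 3x + 1 (assumed toy target).
import random

random.seed(0)
data = [(x, 3.0 * x + 1.0) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]

w, b, lr = 0.0, 0.0, 0.05  # parameters and learning rate

for step in range(2000):
    x, y = random.choice(data)  # sample one example (the "stochastic" part)
    y_hat = w * x + b           # forward pass
    err = y_hat - y             # dLoss/dy_hat for loss = 0.5 * err**2
    w -= lr * err * x           # gradient step on the weight
    b -= lr * err               # gradient step on the bias

print(round(w, 2), round(b, 2))  # should approach w=3.0, b=1.0
```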

12:00pm – 01:00pm: Lunch
01:00pm – 03:00pm: Algorithmic Challenges to Multi-GPU Training
Learn to transform a single-GPU implementation into a Horovod multi-GPU implementation to reduce the complexity of writing efficient distributed software. Understand the data loading, augmentation, and training logic using the AlexNet model.

  • Data parallelism
  • Large minibatch and its impact on accuracy
  • Gradient exchange

03:00pm – 03:15pm: Tea Break

03:15pm – 05:15pm: Engineering Challenges to Multi-GPU Training
Understand the data input pipeline, communication, and reference architecture, and take a deeper dive into job scheduling.

  • Keeping up with the GPU
  • Job Scheduling
  • Overview of the wider AI system design
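The "keeping up with the GPU" problem comes down to overlapping data loading with computation so the accelerator never idles. A hypothetical pure-Python illustration (not workshop code) of the same idea behind prefetching in input pipelines: a background producer thread fills a bounded queue while the consumer, standing in for the GPU, processes batches.

```python
# Toy producer/consumer pipeline: loading of batch i+1 overlaps
# compute on batch i, the core idea behind input-pipeline prefetching.
import queue
import threading
import time

def load_batch(i):
    time.sleep(0.01)          # simulated I/O + augmentation cost
    return [float(i)] * 4     # a tiny fake batch

def producer(q, n_batches):
    for i in range(n_batches):
        q.put(load_batch(i))  # blocks when the prefetch buffer is full
    q.put(None)               # sentinel: no more data

q = queue.Queue(maxsize=2)    # prefetch depth of 2 batches
threading.Thread(target=producer, args=(q, 5), daemon=True).start()

results = []
while (batch := q.get()) is not None:
    time.sleep(0.01)          # simulated "GPU" compute on the batch
    results.append(sum(batch))  # meanwhile, the next batch is loading

print(results)
```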

05:15pm – 05:30pm: Closing Comments and Questions
A quick overview of the next steps you could take to build and deploy your own applications, followed by Q&A.

  • Wrap-up with the potential next steps and Q&A
  • Workshop Survey

SGD$250/pax

This workshop is organised in conjunction with SCAsia 2019.

Kindly register under “Tutorials” in Registration Type.

After clicking “Attending” under “Tutorial Pass at SGD$250/pass (11 Mar)”, please select “Fundamentals – Multi GPU”.

Topics: Artificial Intelligence / Deep Learning / Machine Learning / Robotics
