A Brief Introduction to ML Security: Robustness Testing
Presented by SGInnovate and Div0
Machine Learning (ML) systems are developed and deployed in a wide variety of environments, which exposes them to risks that can eventually lead to system failure. ML systems are not only susceptible to manipulation and extraction attacks, but can also fail to function as intended because they cannot handle unexpected input. To mitigate such risks, robustness testing identifies and evaluates the vulnerabilities of ML models so that developers can catch these weaknesses early and put preventive measures in place. This talk introduces robustness testing and showcases tools that developers can adopt as part of their workflow to keep ML systems running smoothly and securely.
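As a flavour of what robustness testing looks like in practice, here is a minimal, hypothetical sketch (not the tooling covered in the talk): perturb each input with small random noise and measure how often the model's prediction stays the same. The "model" below is a toy linear classifier standing in for a trained ML system; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for a trained classifier: sign of a fixed linear score.
    w = np.array([0.8, -0.5, 0.3])
    return int(x @ w > 0)

def robustness_score(inputs, epsilon=0.05, trials=100):
    """Fraction of perturbed inputs whose prediction is unchanged.

    epsilon: maximum per-feature perturbation magnitude (assumed budget).
    trials:  number of random perturbations sampled per input.
    """
    stable = 0
    total = 0
    for x in inputs:
        baseline = model(x)
        for _ in range(trials):
            noise = rng.uniform(-epsilon, epsilon, size=x.shape)
            stable += int(model(x + noise) == baseline)
            total += 1
    return stable / total

inputs = rng.normal(size=(20, 3))
score = robustness_score(inputs)
print(f"prediction stability under ±0.05 noise: {score:.2f}")
```

A low stability score flags inputs near the decision boundary, where small, possibly adversarial perturbations flip the model's output; real robustness-testing tools apply the same idea with stronger, gradient-based attacks.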
Date: 28 June 2022, Tuesday
Time: 7:00pm - 9:00pm (Singapore Time / UTC +8)
6:45pm – 7:00pm: Registration and Networking
7:00pm – 7:15pm: Introduction and Welcome
7:15pm – 8:45pm: Presentation on A Brief Introduction to ML Security: Robustness Testing with
- Kew Wai Marn, AI Engineer, AI Singapore
Kew Wai Marn, AI Engineer, AI Singapore
Wai Marn is an AI Engineer on the SecureAI team at AISG, with a background in Physics. The SecureAI team is currently exploring tools for testing the robustness of ML models and incorporating best practices into the ML development process.