Industrial Computer Vision systems rely on Neural Networks, including in production. If we're expected to poke our code until it breaks, why should deep learning models get a free pass? We'll look at different ways to poke our models and improve them, from the point of view of a practitioner who has access to the model.

Attacks keep getting more sophisticated, but that doesn't mean practitioners can't improve their models with the resources they have: from very basic techniques applicable from day one to full-blown adversarial training.
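The abstract doesn't name the day-one techniques it has in mind; as one hedged illustration, a very basic starting point is measuring how accuracy degrades under simple input corruptions. A minimal sketch, assuming a PyTorch image classifier (the names `model`, `images`, `labels` and `noise_std` are placeholders, not from the talk):

    import torch

    def corrupted_accuracy(model, images, labels, noise_std=0.1):
        """Accuracy on inputs perturbed with additive Gaussian noise."""
        model.eval()
        with torch.no_grad():
            # Corrupt the batch, then clip back to the valid [0, 1] pixel range.
            noisy = (images + noise_std * torch.randn_like(images)).clamp(0, 1)
            preds = model(noisy).argmax(dim=1)
        return (preds == labels).float().mean().item()

Comparing this number against clean accuracy, across a few corruption types and severities, already gives a first robustness profile without any attack tooling.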

This is directly relevant to many domains that make use of Computer Vision solutions, from e-commerce to healthcare and autonomous systems. During the talk we will see:

  • ways in which our (vision) models can fail, whether accidentally or through deliberate attack,
  • how to uncover weaknesses in our (vision) models,
  • a range of practical techniques (from basic to more complex) to robustify our vision models with adversarial samples before letting them run in production (see the sketch after this list).
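
One standard way to generate such adversarial samples is the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015). A minimal sketch in PyTorch, again with placeholder names (`model`, `x`, `y`, `epsilon`) rather than anything prescribed by the talk:

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, epsilon=8 / 255):
        """Perturb x one step along the sign of the loss gradient."""
        x_adv = x.clone().detach().requires_grad_(True)
        # Gradient of the classification loss with respect to the input pixels.
        F.cross_entropy(model(x_adv), y).backward()
        # Step in the direction that increases the loss, then clip
        # back to the valid [0, 1] image range.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

Samples crafted this way can be used both to probe a trained model for weaknesses and, mixed into the training batches, as a simple form of adversarial training.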

Irina Vidal Migallón

Affiliation: Siemens Mobility GmbH

Irina is an Electrical & Biomedical Engineer who specialised in Machine Learning & Vision. Seasoned in different industries, from optical biopsy systems in France to surgical planning tools and Augmented Reality apps in the Berlin start-up scene, she now works in Siemens Mobility's growing CV & AI team. Even more than waking up Skynet, she's interested in the limits of Natural Intelligence and its decisions over our data.

Visit the speaker at: Twitter | GitHub