
Next workshops

Topic: Interpreting Machine Learning
Date: May 3, 2019
Location: Idiap Research Institute (Room 106), Martigny, Switzerland
Registration: https://www.idiap.ch/workshop/valais-wallis-ai-workshop/registration

Program:

08:45 - 09:00 Welcome Coffee
09:00 - 10:00

Keynote speech: Prof. Carlos Andrés Pena, HEIG-VD: Methods for Rule and Knowledge Extraction from Deep Neural Networks

Abstract: Artificial deep neural networks are a powerful tool, able to extract information from large datasets and, using this acquired knowledge, make accurate predictions on previously unseen data. As a result, they are applied in a wide variety of domains, from genomics to autonomous driving and from speech recognition to gaming. Many areas where neural-network-based solutions are applied require a validation, or at least some explanation, of how the system makes its decisions. This is especially true in the medical domain, where such decisions can contribute to the survival or death of a patient. Unfortunately, the very large number of parameters in deep neural networks is extremely challenging for explanation methods to cope with, and these networks remain for the most part black boxes. There is therefore a real need for accurate explanation methods that scale to this large number of parameters and provide useful information to a potential user. Our research aims at providing tools and methods to improve the interpretability of deep neural networks.
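As a minimal illustration of what an explanation method can look like (a sketch with made-up random weights, not the speaker's method), the gradient of a network's output with respect to its input assigns a basic saliency score to each input feature:

```python
import numpy as np

# Toy two-layer network: relu(x @ W1) @ W2. Weights are illustrative only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def forward(x):
    h = np.maximum(x @ W1, 0.0)   # hidden activations after ReLU
    return h @ W2                 # scalar network output

def saliency(x):
    """Gradient of the output w.r.t. the input: a basic saliency score."""
    mask = (x @ W1 > 0).astype(float)   # ReLU derivative (0 or 1 per hidden unit)
    # Chain rule: d(out)/dx_i = sum_j mask_j * W1[i, j] * W2[j]
    return (W1 * mask) @ W2             # shape (4, 1): one score per input feature

x = np.array([1.0, -0.5, 2.0, 0.3])
s = saliency(x).ravel()
print(s.shape)  # (4,)
```

Large saliency magnitudes mark the input features the network's decision is most sensitive to; the difficulty the abstract points at is making such explanations informative when the network has millions of parameters rather than a handful.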

10:00 - 10:15 Hannah Muckenhirn, Idiap Research Institute: Visualizing and understanding raw speech modeling with convolutional neural networks
10:15 - 10:30 Mara Graziani, HES-SO Valais-Wallis: Concept Measures to Explain Deep Learning Predictions in Medical Imaging
10:30 - 10:45 Suraj Srinivas, Idiap Research Institute: What do neural network saliency maps encode?
10:45 - 11:00 Dr Vincent Andrearczyk, HES-SO Valais-Wallis: Transparency of rotation-equivariant CNNs via local geometric priors
11:00 - 11:30 Coffee
11:30 - 11:45 Dr Sylvain Calinon, Idiap Research Institute: Interpretable models of robot motion learned from few demonstrations
11:45 - 12:00 Xavier Ouvrard, University of Geneva / CERN: The HyperBagGraph DataEdron: An Enriched Browsing Experience of Scientific Publication Databases
12:00 - 12:15 Seyed Moosavi, Signal Processing Laboratory 4 (LTS4), EPFL: Improving robustness to build more interpretable classifiers
12:15 - 12:30 Sooho Kim, University of Geneva: Interpretation of an End-to-End One-Dimensional Convolutional Neural Network for Fault Diagnosis on a Planetary Gearbox
12:30 - 14:00 Lunch