Topic : Real and artificial neural processing
Date : April 19 2018
Location : Idiap Research Institute, Martigny
Registration : https://www.idiap.ch/workshop/aiws-apr-2018/registration
- 8h30-9h00 Coffee
- 9h00-10h00 Keynote speech - Jean-Pascal Pfister (ETHZ/Unibe) : The Neural Particle Filter
Abstract : The brain is able to perform remarkable computations such as extracting the voice of a person talking in a noisy crowd or tracking the position of a pedestrian crossing the road. Even though we perform these computations every day in a seemingly effortless way, this ongoing feature-extraction task is far from trivial. It can be formalised as a filtering problem, where the aim is to infer the state of a dynamically changing hidden variable given noisy observations. A well-known solution to this problem, for linear hidden dynamics, is the Kalman filter. It is, however, unclear how to reliably and efficiently perform inference for real-world tasks, which are highly nonlinear and high-dimensional. Furthermore, it is even less clear how such nonlinear filtering may be implemented in neural tissue. We recently proposed a neural network model (the Neural Particle Filter) that performs this nonlinear filtering task [1,2] and derived an online learning rule which becomes Hebbian in the limit of small observation noise [1,3]. Since this filter is based on unweighted particles (unlike the bootstrap particle filter, which relies on weighted particles), we showed that it overcomes the known curse of dimensionality of particle filters.
[1] Kutschireiter, A., Surace, S. C., Sprekeler, H., & Pfister, J.-P. (2017). Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception. Scientific Reports, 7(1), 8722.
[2] Surace, S. C., Kutschireiter, A., & Pfister, J.-P. (2017). How to avoid the curse of dimensionality: scalability of particle filters with and without importance weights. SIAM Review, in press. arXiv:1703.07879
[3] Surace, S. C., & Pfister, J.-P. (2016). Online maximum likelihood estimation of the parameters of partially observed diffusion processes. arXiv:1611.00170
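To illustrate the filtering problem discussed in the abstract, here is a minimal bootstrap particle filter for a toy 1-D nonlinear model; this is the weighted-particle baseline the Neural Particle Filter is contrasted with, not the talk's method itself, and the dynamics, noise levels and particle count are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles=500,
                 f=np.tanh, obs_noise=0.5, proc_noise=0.3):
    """Bootstrap particle filter for x_t = f(x_{t-1}) + process noise,
    y_t = x_t + observation noise. Returns posterior-mean estimates."""
    particles = rng.standard_normal(n_particles)
    estimates = []
    for y in observations:
        # propagate particles through the (nonlinear) hidden dynamics
        particles = f(particles) + proc_noise * rng.standard_normal(n_particles)
        # weight each particle by its observation likelihood
        w = np.exp(-0.5 * ((y - particles) / obs_noise) ** 2)
        w /= w.sum()
        estimates.append(np.dot(w, particles))
        # resample so the next step again starts from unweighted particles
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

# simulate a short hidden trajectory and its noisy observations
x, xs, ys = 0.0, [], []
for _ in range(50):
    x = np.tanh(x) + 0.3 * rng.standard_normal()
    xs.append(x)
    ys.append(x + 0.5 * rng.standard_normal())

est = bootstrap_pf(ys)
```

The resampling step is what makes weighted particle filters scale poorly with dimension; the Neural Particle Filter avoids importance weights altogether.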
About the speaker :
Trained as a physicist, Jean-Pascal Pfister completed his PhD in 2006 at EPFL with Wulfram Gerstner, where he developed several biological learning models. During his post-doc in Cambridge (UK) with Máté Lengyel and Peter Dayan (UCL), he focused on a Bayesian perspective of short-term plasticity. Then, as a group leader at the University of Bern, and during his sabbatical at Harvard with Haim Sompolinsky, Jean-Pascal worked on statistical learning. Now, as an SNF Professor jointly affiliated with the Institute of Neuroinformatics (University of Zurich / ETH Zurich) and with the Department of Physiology (University of Bern), he investigates how neural networks can implement nonlinear Bayesian filtering.
- 10h00-10h15 Coffee
- 10h15-10h25 Juan Otalora (HES-SO Valais-Wallis) : Learning Gleason Patterns using GANs
Abstract : Histopathology image analysis is the gold standard for diagnosis in many diseases. High-quality whole-slide images are now available to researchers, but in many cases they lack the annotated data needed to train powerful discriminative deep learning models. Pathological analysis of prostate cancer in whole-slide images follows a morphological pattern system for glands and cells known as the Gleason grading system. In this talk, we will show our current work on modeling, in an unsupervised manner, the morphological changes from a healthy gland to a high cancer grade using generative adversarial networks, and show their trade-offs against more standard unsupervised features such as autoencoders.
- 10h25-10h35 Tatjana Chavdarova (Idiap) : SGAN: An Alternative Training of Generative Adversarial Networks
Abstract : Generative Adversarial Networks (GANs) are an impressively powerful generative model based on deep learning. The quality of the samples produced by this algorithm has led to its application in a wide range of computer vision problems. In spite of this success, GANs have gained a reputation for being notoriously difficult to train.
We consider an alternative training procedure, named SGAN, in which the final pair of networks is pitted against an ensemble of adversarial networks whose statistical independence is carefully maintained. This approach aims at increasing the chances of a successful unsupervised training and at improving the performance of the resulting generator, in terms of how well the modeled distribution covers the targeted one. The experimental evaluation also indicates improved stability throughout convergence and a faster convergence rate.
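The ensemble idea can be sketched on toy 1-D data, under our reading of the abstract: K independent (generator, discriminator) pairs are trained in isolation, while a final generator receives gradients from every discriminator but never influences their updates (preserving their independence). Network sizes, data, learning rates and K are arbitrary choices for this sketch, not the paper's settings:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def mlp():  # tiny network for 1-D toy data
    return nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

K, steps, bs = 3, 200, 64
real = lambda: torch.randn(bs, 1) * 0.5 + 2.0   # toy target distribution
noise = lambda: torch.randn(bs, 1)
bce = nn.BCEWithLogitsLoss()

# K independent (G_k, D_k) pairs, each trained as a standard GAN
pairs = [(mlp(), mlp()) for _ in range(K)]
opts = [(torch.optim.Adam(g.parameters(), 1e-3),
         torch.optim.Adam(d.parameters(), 1e-3)) for g, d in pairs]

# the final generator is trained against the whole ensemble, but its
# samples are never used to update any D_k
G0 = mlp()
opt0 = torch.optim.Adam(G0.parameters(), 1e-3)

for _ in range(steps):
    for (g, d), (og, od) in zip(pairs, opts):
        # standard GAN updates, isolated within each pair
        fake = g(noise())
        d_loss = (bce(d(real()), torch.ones(bs, 1)) +
                  bce(d(fake.detach()), torch.zeros(bs, 1)))
        od.zero_grad(); d_loss.backward(); od.step()
        g_loss = bce(d(g(noise())), torch.ones(bs, 1))
        og.zero_grad(); g_loss.backward(); og.step()
    # G0 aggregates gradients from every discriminator in the ensemble
    fake0 = G0(noise())
    loss0 = sum(bce(d(fake0), torch.ones(bs, 1)) for _, d in pairs) / K
    opt0.zero_grad(); loss0.backward(); opt0.step()
```

Because the ensemble discriminators never see G0's samples, a collapse of G0 cannot corrupt the training signal it receives.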
- 10h35-11h00 Dr. Vincent Andrearczyk (HES-SO Valais-Wallis) : Dynamic texture analysis with deep learning on three orthogonal planes
Abstract : Dynamic Textures (DTs) are sequences of images of moving scenes that exhibit certain stationarity properties in time, such as smoke, vegetation and fire. The analysis of DTs is important for recognition, segmentation, synthesis and retrieval in a range of applications including surveillance, medical imaging and remote sensing. Convolutional Neural Networks (CNNs) have recently proven to be well suited for texture analysis, with a design similar to dense filter banks. The repetitivity of DTs in space and time allows us to treat them as volumes and to analyze regularly sampled spatial and temporal slices. We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their predictions in a late-fusion approach to obtain a competitive DT classifier trained end-to-end.
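The slicing-and-fusion scheme above can be sketched as follows; the per-plane CNN is replaced by a random stand-in classifier here, and the volume size, sampling stride and class count are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
video = rng.random((32, 64, 64))      # a dynamic texture as a volume: (T, H, W)

# regularly sampled slices on the three orthogonal planes
xy_frames = [video[t] for t in range(0, 32, 8)]          # spatial appearance
xt_slices = [video[:, y, :] for y in range(0, 64, 8)]    # motion along x over time
yt_slices = [video[:, :, x] for x in range(0, 64, 8)]    # motion along y over time

def cnn_predict(img, n_classes=5):
    """Stand-in for a trained per-plane CNN; returns a softmax over classes."""
    logits = rng.random(n_classes)
    return np.exp(logits) / np.exp(logits).sum()

# late fusion: average the per-slice class posteriors from all three planes
preds = [cnn_predict(s) for s in xy_frames + xt_slices + yt_slices]
fused = np.mean(preds, axis=0)
label = fused.argmax()
```

Averaging softmax outputs keeps the fused prediction a valid class posterior while letting the spatial and temporal planes vote jointly.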
- 11h00-11h10 Subhadeep Dey (Idiap) : End-to-end approach for recognizing speakers from audio
Abstract : We will present novel ideas for building successful end-to-end speaker recognition based on deep learning. The analysed approach aims to model both the speaker and the phonetic information of a speech utterance through specific hidden representations of a deep neural network. The performance of this new approach will be measured on a standard task (RSR 2015) and compared to conventional speaker recognition systems. A large relative improvement of about 50% in equal error rate has been observed for a fixed-phrase condition.
- 11h10-11h20 TBD
- 11h20-11h45 Dr. Mateusz Kozinski (EPFL) : Learning to Segment 3D Linear Structures Using Only 2D Annotations
Abstract : We propose a loss function for training a Deep Neural Network (DNN) to segment volumetric data that accommodates ground-truth annotations of 2D projections of the training volumes, instead of annotations of the 3D volumes themselves. In consequence, we significantly decrease the amount of annotation needed for a given training set. We apply the proposed loss to train DNNs for segmentation of vascular and neural networks in microscopy images and demonstrate only a marginal accuracy loss associated with the significant reduction in annotation effort. The lower labor cost of deploying DNNs brought by our method can contribute to a wide adoption of these techniques for the analysis of 3D images of linear structures.
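One way such a projection-supervised loss can be sketched (the paper's actual formulation may differ) is to compare max-intensity projections of the predicted 3D probability volume against binary 2D annotations of each view; the volume size and annotation pattern below are arbitrary choices for this sketch:

```python
import torch

def projection_loss(pred_vol, ann_xy, ann_xz, ann_yz):
    """Compare max-intensity projections of a predicted 3D probability
    volume (D, H, W) against binary 2D annotations of those projections.
    A structure visible in a projection must be explained by at least one
    voxel along the projection ray, which max() encodes directly."""
    bce = torch.nn.functional.binary_cross_entropy
    loss = bce(pred_vol.max(dim=0).values, ann_xy)    # project along depth
    loss = loss + bce(pred_vol.max(dim=1).values, ann_xz)  # along height
    loss = loss + bce(pred_vol.max(dim=2).values, ann_yz)  # along width
    return loss

# toy usage: a random sigmoid-like "network output" and empty annotations
torch.manual_seed(0)
pred = torch.rand(8, 16, 16, requires_grad=True)
ann_xy, ann_xz, ann_yz = torch.zeros(16, 16), torch.zeros(8, 16), torch.zeros(8, 16)
ann_xy[4, :] = 1.0                    # a line visible only in the top view
loss = projection_loss(pred, ann_xy, ann_xz, ann_yz)
loss.backward()                       # gradients flow back into the 3D volume
```

Since max() is (sub)differentiable, the 2D supervision back-propagates into the full 3D prediction without any 3D labels.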
- 11h45-12h30 Lunch
- 12h30-14h00 Business Ideas
Abstract : Self-employment as a career option. Get tips and tricks from successful startup founders. More info can be found here: Business Ideas
Topic : Small data: challenges and opportunities
Date : November 10 2017
Location : Idiap Research Institute, Martigny
Registration : http://www.idiap.ch/workshop/small-data/registration
Idiap website : http://www.idiap.ch/workshop/small-data/
Data-driven approaches are increasingly used in support of scientific, technological and economical activities.
Yet, in many situations the amount of data is drastically limited by time and cost constraints, feasibility, or intrinsic rarity. This notably includes:
- Ecological and medical studies, where observations of rare species/pathologies are scarce
- Catastrophe modelling (related to waste storage, climate evolution, financial crises, etc.) where samples are small or even void due, e.g., to extreme conditions or large time-scales
- Resource-intensive experiments performed on natural and artificial systems, e.g. in protein synthesis, in robot calibration, or in hi-fidelity simulation-based design
In this second Valais/Wallis Artificial Intelligence (AI) Workshop, we will focus on several facets of these challenging topics through presentations from leading Swiss-based researchers who have to cope with small data, with motivating applications in environment and geosciences, medicine, energy engineering, robotics and beyond. In this context, general and domain-specific topics in the visualization and extraction of relevant features from small and big data will also be discussed.
Topic : Reproducibility in research
Date : March 24 2017
Location : Sierre Techno-Pôle (Room Electra - TP10)
- 8h45-9h00 Café-croissant
- 9h00-10h00 Keynote (45+15)
Pierre Vandergheynst — EPFL VP for Education
Reproducibility and Open Science @ EPFL
- 10h00-10h40 André Anjos : Reproducible Research with Bob and the BEAT Platform (35+5)
BEAT platform: https://www.beat-eu.org/platform/
- 10h40-11h00 Pavel Korshunov : Running state of the art speaker verification and attack detection experiments on BEAT (15+5)
- 11h00-11h30 Break
- 11h30-12h10 Henning Müller : VISCERAL and Evaluation-as-a-Service (35+5)
- 12h10-12h30 Adrien Depeursinge : A 3D Riesz-Covariance Texture Model for Precision Medicine: Validation on Lung Adenocarcinoma in CT and Open-Access Radiomics Web Platform (15+5)
PET/CT Radiomics Web Service: https://radiomics.hevs.ch
- 12h30-12h50 Guillaume Heusch : Making experiments on remote heart-rate measurement reproducible (15+5)
- 12h50-13h10 Manfredo Atzori : Reproducibility of signals in Electromyography & Research Data Sharing (15+5)
- 13h10 Lunch