HES-SO Valais

Next workshops

Topic : Real and artificial neural processing
Date : April 19, 2018
Location : Idiap Research Institute, Martigny
Registration : https://www.idiap.ch/workshop/aiws-apr-2018/registration

Program :

  1. 8h30-9h00 Coffee
  2. 9h00-10h00 Keynote speech - Jean-Pascal Pfister (ETHZ/Unibe) : The Neural Particle Filter
    Abstract : The brain is able to perform remarkable computations such as extracting the voice of a person talking in a noisy crowd or tracking the position of a pedestrian crossing the road. Even though we perform these computations every day in a seemingly effortless way, this ongoing feature extraction task is far from trivial. It can be formalised as a filtering problem, where the aim is to infer the state of a dynamically changing hidden variable given noisy observations. A well-known solution to this problem, for linear hidden dynamics, is the Kalman filter. It is however unclear how to reliably and efficiently perform inference for real-world tasks, which are highly nonlinear and high dimensional. Furthermore, it is even less clear how this nonlinear filtering may be implemented in neural tissue. We recently proposed a neural network model (the Neural Particle Filter) that performs this nonlinear filtering task [1,2] and derived an online learning rule which becomes Hebbian in the limit of small observation noise [1,3]. Since this filter is based on unweighted particles (unlike the bootstrap particle filter, which relies on weighted particles), we showed that it overcomes the known curse of dimensionality of particle filters [2]. A minimal illustrative sketch of unweighted particle filtering is given after the speaker biography below.
    [1] Kutschireiter, A., Surace, S. C., Sprekeler, H., & Pfister, J.-P. (2017). Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception. Scientific Reports, 7(1), 8722.
    [2] Surace, S. C., Kutschireiter, A., & Pfister, J.-P. (2017). How to avoid the curse of dimensionality: scalability of particle filters with and without importance weights. SIAM Review, in press. arXiv:1703.07879
    [3] Surace, S. C., & Pfister, J.-P. (2016). Online Maximum Likelihood Estimation of the Parameters of Partially Observed Diffusion Processes. arXiv:1611.00170
    About the speaker :
    Trained as a physicist, Jean-Pascal Pfister completed his PhD in 2006 at EPFL with Wulfram Gerstner, where he developed several biological learning models. During his post-doc in Cambridge (UK) with Máté Lengyel and Peter Dayan (UCL), he focused on a Bayesian perspective of short-term plasticity. Then, as a group leader at the University of Bern, as well as during his sabbatical at Harvard with Haim Sompolinsky, Jean-Pascal worked on statistical learning. Now, as an SNF Professor jointly affiliated with the Institute of Neuroinformatics (University of Zurich / ETH Zurich) and with the Department of Physiology (University of Bern), he investigates how neural networks can implement nonlinear Bayesian filtering.
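    To make the contrast with weighted particle filters concrete, here is a minimal, purely illustrative Python sketch of filtering with unweighted particles. The dynamics f, the fixed gain and all constants are assumptions chosen for illustration; in the actual Neural Particle Filter the gain is learned online [1,3].

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):  # assumed nonlinear hidden dynamics (illustrative only)
        return -x + np.sin(x)

    n_particles, dt, sigma_x, sigma_y = 100, 0.01, 0.5, 0.2
    gain = 1.0  # fixed correction gain; learned online in [1,3]
    particles = rng.normal(size=n_particles)
    x_true = 0.0

    for _ in range(1000):
        # simulate the hidden state and a noisy observation of it
        x_true += f(x_true) * dt + sigma_x * np.sqrt(dt) * rng.normal()
        y = x_true + sigma_y * rng.normal()
        # every particle follows the dynamics plus an observation-driven
        # correction; there are no importance weights and no resampling,
        # which is what avoids the weight degeneracy behind the curse of
        # dimensionality of bootstrap particle filters [2]
        particles += (f(particles) + gain * (y - particles)) * dt \
            + sigma_x * np.sqrt(dt) * rng.normal(size=n_particles)

    print("estimate:", particles.mean(), "truth:", x_true)
    ```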
  3. 10h00-10h15 Coffee
  4. 10h15-10h25 Juan Otalora (HES-SO Valais-Wallis) : Learning Gleason Patterns using GANs
    Abstract : Histopathology image analysis is the gold standard for diagnosis in many diseases. High-quality whole-slide images are now available to researchers, but in many cases the annotated data needed to train powerful discriminative deep learning models is lacking. The pathological analysis of prostate cancer in whole-slide images follows a morphological pattern system for glands and cells known as the Gleason grading system. In this talk, we will show our current work on modeling, in an unsupervised manner, the morphological changes from a healthy gland to a high cancer grade using generative adversarial networks, and show their trade-offs against more standard unsupervised features such as autoencoders. A generic GAN training sketch is given below for readers unfamiliar with the technique.
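    For readers unfamiliar with GANs, here is a generic training step in PyTorch; the tiny fully connected networks and all hyper-parameters are illustrative assumptions, not the models used in the talk.

    ```python
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64
    G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def gan_step(real):  # real: (batch, data_dim) tensor of training samples
        batch = real.size(0)
        fake = G(torch.randn(batch, latent_dim))
        # discriminator update: push real samples toward 1, fakes toward 0
        loss_d = bce(D(real), torch.ones(batch, 1)) + \
                 bce(D(fake.detach()), torch.zeros(batch, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # generator update: make the discriminator output 1 on fake samples
        loss_g = bce(D(fake), torch.ones(batch, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    gan_step(torch.randn(32, data_dim))  # stand-in for a batch of image patches
    ```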
  5. 10h25-10h35 Tatjana Chavdarova (Idiap) : SGAN: An Alternative Training of Generative Adversarial Networks
    Abstract : Generative Adversarial Networks (GANs) are an impressively powerful generative model based on deep learning. The quality of the samples produced by this algorithm has led to its application in a wide range of computer vision problems. In spite of this success, GANs have gained a reputation for being notoriously difficult to train.
    We consider an alternative training procedure, named SGAN, where the final pair of networks is pitted against an ensemble of adversarial networks whose statistical independence is carefully maintained. Such an approach aims at increasing the chances of a successful unsupervised training and at improving the performance of the produced generator, in terms of how well the modeled distribution covers the targeted one. The experimental evaluation also indicates improved stability throughout convergence and a faster convergence rate. A schematic sketch of the ensemble idea is given below.
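    A schematic Python sketch of a generator receiving an averaged adversarial signal from an ensemble of discriminators; each discriminator is assumed to be trained elsewhere against its own generator. The actual SGAN pairing scheme and the way statistical independence is maintained follow the paper, not this sketch.

    ```python
    import torch
    import torch.nn as nn

    latent_dim, data_dim, n_adv = 16, 64, 5

    def mlp(n_in, n_out):  # tiny illustrative network
        return nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(), nn.Linear(128, n_out))

    G = mlp(latent_dim, data_dim)                        # the final generator
    ensemble = [mlp(data_dim, 1) for _ in range(n_adv)]  # independent adversaries,
                                                         # each assumed trained elsewhere
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def generator_step(batch=32):
        fake = G(torch.randn(batch, latent_dim))
        # average the adversarial losses over the ensemble, so the generator
        # cannot exploit the weaknesses of any single discriminator
        loss = sum(bce(D(fake), torch.ones(batch, 1)) for D in ensemble) / n_adv
        opt_g.zero_grad(); loss.backward(); opt_g.step()

    generator_step()
    ```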
  6. 10h35-11h00 Dr. Vincent Andrearczyk (HES-SO Valais-Wallis) : Dynamic texture analysis with deep learning on three orthogonal planes
    Abstract : Dynamic Textures (DTs) are sequences of images of moving scenes that exhibit certain stationarity properties in time, such as smoke, vegetation and fire. The analysis of DTs is important for recognition, segmentation, synthesis and retrieval, in a range of applications including surveillance, medical imaging and remote sensing. Convolutional Neural Networks (CNNs) have recently proven to be well suited for texture analysis, with a design similar to dense filter banks. The repetitivity of DTs in space and time allows us to consider them as volumes and to analyze regularly sampled spatial and temporal slices. We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their predictions in a late fusion approach to obtain a competitive DT classifier trained end-to-end. The slicing and fusion scheme is sketched below.
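    A minimal sketch of the three-orthogonal-planes idea: slice the DT volume along each axis, classify the 2D slices, and late-fuse the predictions. The classifiers cnn_xy, cnn_xt and cnn_yt stand for trained 2D CNNs returning class probabilities and are assumptions of this sketch, as is the sampling stride.

    ```python
    import numpy as np

    def classify_dt(volume, cnn_xy, cnn_xt, cnn_yt, stride=8):
        # volume: (T, H, W) grayscale dynamic texture sequence
        T, H, W = volume.shape
        preds = []
        preds += [cnn_xy(volume[t]) for t in range(0, T, stride)]        # spatial xy frames
        preds += [cnn_xt(volume[:, y, :]) for y in range(0, H, stride)]  # temporal xt slices
        preds += [cnn_yt(volume[:, :, x]) for x in range(0, W, stride)]  # temporal yt slices
        # late fusion: average the per-slice class probabilities
        return int(np.mean(preds, axis=0).argmax())

    dummy = lambda img: np.array([0.3, 0.7])  # stand-in for a trained 2D CNN
    print(classify_dt(np.random.rand(50, 48, 48), dummy, dummy, dummy))
    ```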
  7. 11h00-11h10 Subhadeep Dey (Idiap) : End-to-end approach for recognizing speakers from audio
    Abstract : We will present novel ideas for building end-to-end speaker recognition based on deep learning. The analysed approach aims to model both the speaker and the phonetic information of a speech utterance through specific hidden representations of a deep neural network. The performance of this new approach will be measured on a standard task (RSR 2015) and compared to conventional speaker recognition systems. A large relative improvement of about 50% in equal error rate has been observed for a fixed-phrase condition. A minimal computation of the equal error rate is sketched below.
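    The equal error rate (EER) quoted above is the operating point at which the false acceptance and false rejection rates are equal. A minimal computation from verification scores, with illustrative synthetic data rather than RSR 2015 scores:

    ```python
    import numpy as np

    def eer(genuine, impostor):
        # sweep every score as a decision threshold and return the point
        # where false rejection and false acceptance rates are closest
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        best_gap, best_eer = np.inf, 0.0
        for t in thresholds:
            frr = np.mean(genuine < t)    # genuine trials wrongly rejected
            far = np.mean(impostor >= t)  # impostor trials wrongly accepted
            if abs(far - frr) < best_gap:
                best_gap, best_eer = abs(far - frr), (far + frr) / 2
        return best_eer

    rng = np.random.default_rng(0)
    print(eer(rng.normal(2.0, 1.0, 1000), rng.normal(0.0, 1.0, 1000)))
    ```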
  8. 11h10-11h20 TBD
  9. 11h20-11h45 Dr. Mateusz Kozinski (EPFL) : Learning to Segment 3D Linear Structures Using Only 2D Annotations
    Abstract : We propose a loss function for training a Deep Neural Network (DNN) to segment volumetric data that accommodates ground truth annotations of 2D projections of the training volumes, instead of annotations of the 3D volumes themselves. As a consequence, we significantly decrease the amount of annotation needed for a given training set. We apply the proposed loss to train DNNs for segmentation of vascular and neural networks in microscopy images, and demonstrate only a marginal accuracy loss associated with the significant reduction of the annotation effort. The lower labor cost of deploying DNNs brought in by our method can contribute to a wider adoption of these techniques for the analysis of 3D images of linear structures. A minimal sketch of such a projection-based loss is given below.
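    A minimal sketch of the idea: supervise a 3D prediction by comparing its projections with the 2D annotations. The use of max-projections and binary cross-entropy here is an illustrative assumption, not necessarily the paper's exact loss.

    ```python
    import torch
    import torch.nn.functional as F

    def projection_loss(pred_vol, ann_xy, ann_xz, ann_yz):
        # pred_vol: (D, H, W) predicted foreground probabilities in [0, 1]
        # ann_*:    binary 2D ground-truth annotations of the three projections
        proj_xy = pred_vol.max(dim=0).values  # project along depth
        proj_xz = pred_vol.max(dim=1).values  # project along height
        proj_yz = pred_vol.max(dim=2).values  # project along width
        return (F.binary_cross_entropy(proj_xy, ann_xy)
                + F.binary_cross_entropy(proj_xz, ann_xz)
                + F.binary_cross_entropy(proj_yz, ann_yz))

    loss = projection_loss(torch.rand(32, 64, 64),
                           (torch.rand(64, 64) > 0.5).float(),
                           (torch.rand(32, 64) > 0.5).float(),
                           (torch.rand(32, 64) > 0.5).float())
    ```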
  10. 11h45-12h30 Lunch
  11. 12h30-14h00 Business Ideas
    Abstract : Self-employment as a career option. Get tips and tricks from successful startup founders. More information can be found on the Business Ideas page.

Access :