The deep learning revolution has brought us virtual assistants that understand what we want, on-demand translation, and computer vision systems that allow self-driving cars to see the world around them. However, training deep neural networks to solve new tasks requires an enormous amount of annotated training data, which can be expensive and difficult to obtain.
Talk #1 - Henrik Pedersen, Alexandra Institute
To make deep learning more accessible, researchers from the Alexandra Institute are developing systems that help developers train and refine their deep networks on synthetic images, thus avoiding the need for manual annotation. In his talk, Henrik Pedersen presents recent results from such diverse areas as robot navigation and augmented reality.
Henrik Pedersen is Head of the Visual Computing Lab at the Alexandra Institute and holds a PhD in medical image analysis. Over his career, Henrik has held various academic positions covering research and teaching in computer vision and deep learning. His interests lie in exploring deep learning techniques for object detection and recognition, using photorealistic, synthetic images for training.
Talk #2 - Mads Kristensen, Enversion
How do you do deep learning with a sample size of n=1? And how can you train a neural network capable of revealing pathological changes in MRI scans of devastating diseases such as Parkinson's disease and multiple sclerosis using synthetic training data?
Mads is a biomedical engineer currently working at the Aarhus-based company Enversion. He previously worked at the Centre for Advanced Imaging, University of Queensland, Australia, where he mainly worked with deep convolutional neural networks on volumetric MRI data.
Join us for great talks and networking!
Hosted by the Aarhus AI Meetup group and InfinIT.