If you would like to give a talk to Cambridge students interested in artificial intelligence, please email dl543@cam.ac.uk.
Previous talks:

FiveAI Talk | Machines that see: what AI can and can’t do
*** TALK ABSTRACT ***
AI vision now works, with applications in medicine, commerce, security and gaming. But how can we trust the judgement of machines that see? AI vision is being entrusted with critical tasks: from access control by face recognition, to diagnosis of disease from medical scans, to hand-eye coordination for surgical and nuclear-decommissioning robots, and now to taking control of motor vehicles. How sure can we be that the AI will make good visual judgements and decisions?
*** SPEAKER BIO ***
Professor Andrew Blake is a highly esteemed academic and one of the most influential researchers in computer vision. He is currently the Chairman of the Samsung AI Research Centre and a consultant and Scientific Adviser to FiveAI. He has written several of the seminal books on computer vision, and was formerly Research Director at The Alan Turing Institute; before that he was a Microsoft Distinguished Scientist and Laboratory Director of Microsoft Research Cambridge, England. Andrew trained in mathematics and electrical engineering in Cambridge, and studied for a doctorate in artificial intelligence in Edinburgh. He was an academic for 18 years, latterly at Oxford University, where he was a pioneer in the development of the theory and algorithms underlying much of today’s field of computer vision.
Samsung AI Talk | Generating Realistic Speech-Driven Facial Animation
CuAI (previously CUMIN) are excited to be hosting Dr Stavros Petridis, a research scientist at the Samsung AI Centre in Cambridge and a research fellow at Imperial College London.
*** ABSTRACT ***
Speech-driven facial animation is the process of automatically synthesizing a talking character from speech signals. However, the majority of previous work has focused on generating accurate lip movements and has neglected the importance of generating facial expressions. This talk will show how realistic talking heads can be generated from a still image of a person and an audio clip containing speech. The proposed method, based on generative adversarial networks, generates videos whose lip movements are in sync with the audio and which show natural facial expressions such as blinks and eyebrow movements. Several applications of the proposed approach will be presented. Finally, it will be shown how this method can be used for self-supervised learning and for solving the inverse problem, i.e., generating speech from video.
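For readers curious what such a model might look like in code, below is a minimal, illustrative PyTorch sketch of the kind of conditional generator a system like this might use: a network that maps an identity-image encoding, a window of audio features and a noise vector to a single video frame, with a discriminator (not shown) scoring realism and audio-visual sync during adversarial training. All names, sizes and architectural details are assumptions for exposition, not the method presented in the talk.

# Illustrative sketch only: a conditional-GAN generator in the spirit of
# speech-driven facial animation. Shapes and architecture are assumptions.
import torch
import torch.nn as nn

class FrameGenerator(nn.Module):
    """Generates one video frame from an identity-image encoding,
    a window of audio features, and a noise vector."""
    def __init__(self, img_dim=256, audio_dim=128, noise_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + audio_dim + noise_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3),  # a 64x64 RGB frame, flattened
            nn.Tanh(),                    # pixel values in [-1, 1]
        )

    def forward(self, img_emb, audio_emb, noise):
        z = torch.cat([img_emb, audio_emb, noise], dim=-1)
        return self.net(z).view(-1, 3, 64, 64)

# One frame per audio window; stepping the audio window along the clip
# yields the full talking-head video.
g = FrameGenerator()
frame = g(torch.randn(1, 256), torch.randn(1, 128), torch.randn(1, 64))
print(frame.shape)  # torch.Size([1, 3, 64, 64])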
*** SPEAKER BIO ***
Stavros is a research scientist at the Samsung AI Centre in Cambridge and a research fellow at the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London. He studied electrical and computer engineering at the Aristotle University of Thessaloniki, Greece, and completed an MSc in Advanced Computing and a PhD in Computer Science at Imperial College London. His main research interest is audio-visual understanding of human behaviour, and he has worked on a wide range of applications including audio-visual speech recognition, emotion recognition, speech-driven facial animation, age/gender recognition, face re-identification and nonlinguistic vocalisation (e.g. laughter) recognition.
DeepMind Talk | Grounded Language Learning in Virtual Environments
In collaboration with the Cambridge University Language Technology Lab, CuAI (previously CUMIN) invites you to a talk given by DeepMind Research Scientists Dr. Stephen Clark and Dr. Felix Hill. Join us in room LT2 in the Engineering Department at 4pm on February 20th. As spaces are limited, make sure to come a few minutes early to ensure you get a place!
*** ABSTRACT ***
Natural Language Processing is currently dominated by the application of text-based language models such as BERT and GPT-2. One feature of these models is that they rely entirely on the statistics of text, without making any connection to the world, which raises the interesting question of whether such models could ever properly “understand” language. One way in which these models can be “grounded” is to connect them to images or videos, for example by conditioning the language models on visual input and using them for captioning.
In this talk we extend the grounding idea to a simulated virtual world: an environment which an agent can perceive and interact with. More specifically, a neural-network-based agent is trained, using distributed deep reinforcement learning, to associate words and phrases with things that it learns to see and do in the virtual world. The world is 3D, built in Unity, and contains recognisable objects, including some from the ShapeNet repository of assets.
One of the difficulties in training such networks is that they have a tendency to overfit to their training data, so first we’ll demonstrate how the interactive, first-person perspective of an agent provides it with a particular inductive bias that helps it to generalize to out-of-distribution settings. Another difficulty is providing the agent with enough linguistic experience so that it can learn to handle the variety and noise in natural language. One way to increase the agent’s linguistic knowledge is to provide it with BERT embeddings, and we’ll show how an agent endowed with BERT representations can achieve substantial (zero-shot) transfer from template-based language to noisy natural instructions given by humans with access to the agent’s world.
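As a rough illustration of that last idea, the sketch below (PyTorch, with the Hugging Face transformers library) encodes an instruction with a frozen pretrained BERT model and feeds the resulting embedding, together with a visual observation vector, into a small policy network. The policy architecture, the dimensions and the example instruction are assumptions for exposition, not DeepMind’s actual agent.

# Illustrative sketch only: conditioning a policy on frozen BERT
# embeddings of a language instruction.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()  # frozen language encoder; only the policy head is trained

class InstructionPolicy(nn.Module):
    def __init__(self, lang_dim=768, obs_dim=128, n_actions=8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(lang_dim + obs_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),  # logits over discrete actions
        )

    def forward(self, lang_emb, obs_emb):
        return self.head(torch.cat([lang_emb, obs_emb], dim=-1))

instruction = "pick up the red ladder"  # hypothetical instruction
with torch.no_grad():
    tokens = tokenizer(instruction, return_tensors="pt")
    # Mean-pool the token embeddings into one instruction vector.
    lang_emb = bert(**tokens).last_hidden_state.mean(dim=1)

policy = InstructionPolicy()
logits = policy(lang_emb, torch.randn(1, 128))  # dummy visual observation
print(logits.shape)  # torch.Size([1, 8])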
*** SPEAKER BIOS ***
Dr. Stephen Clark is a Research Scientist at DeepMind and an Honorary Professor at Queen Mary University of London. He has previously worked in multiple UK Universities, including the University of Edinburgh, the University of Oxford and the University of Cambridge.
Dr. Felix Hill is a Research Scientist at DeepMind. He holds a PhD in Computational Linguistics from the University of Cambridge and a Master of Mathematics from the University of Oxford.
ML in cancer diagnosis: from pipe dream to democratising cancer diagnosis | Pandu Raharja-Liu, CTO, Panakeia
Pandu Raharja-Liu, Co-Founder and CTO of Panakeia – an exciting healthcare startup based in London – is coming to the Engineering Department to give a talk on democratising cancer diagnosis with ML.
Come along if you’re interested in the technological challenges of using ML in healthcare, or in the entrepreneurial side of running a startup. Panakeia is under two years old: founded through Entrepreneur First, it has already secured seed funding and is moving into a new office in Shoreditch. They’re working with hospitals in the UK and around the world to revolutionize cancer diagnosis.
Panakeia is hiring for machine learning scientist, machine learning engineer and software engineer positions, so if you’re interested, send your CV to pandu@panakeia.ai.
Conversational AI: Past, Present and Future | Prof Steve Young, Apple Siri
We have the pleasure of hosting Professor Steve Young for a talk on the past, present and future of conversational AI.
Voice-based human-machine interaction, now rebranded as Conversational AI, has a long history. However, as in many areas of applied machine learning, neural networks and end-to-end training dominate current research. This talk will review the main developments in the history of conversational AI and point out some of the issues which have been side-lined by current research but which remain important for the future.
Steve Young is Emeritus Professor of Information Engineering at Cambridge University and a founder of VocalIQ, now part of Apple. His main research interests lie in the area of statistical spoken language systems, including speech recognition, speech synthesis and dialogue management. He has more than 300 publications in the field, and he is the recipient of a number of awards including an IEEE Signal Processing Society Technical Achievement Award, a EURASIP Technical Achievement Award, the IEEE James L Flanagan Speech and Audio Processing Award, and an ISCA Medal for Scientific Achievement. He is a Fellow of the IEEE, ISCA, the IET and the Royal Academy of Engineering.
As spaces are limited, make sure to come a few minutes early to ensure you get a place!
How AI solutions can triumph over the human mind | Aleksi Tukiainen from PROWLER.io (7pm, 22nd Jan, LT6 in CUED)
We are very excited to be kicking off this term’s events with a talk from Aleksi Tukiainen, a CUED alumnus who co-founded PROWLER.io during his fourth year at Cambridge.