
CSER Talk | What do AI developers owe global south citizens?


We hosted our last talk of the term with CSER research associate Cecil Abungu, who spoke to us about the ethical obligations of AI developers in an excellent and thoughtful talk.


Many tools that use machine learning and deep learning are developed by actors in global north countries and used in global south countries, and we can plausibly predict that this will only grow in the future. These tools can often lead to unfair and unjust outcomes, and it is therefore crucial that societies are able to audit and evaluate them. In this talk, Cecil Abungu argued that AI developers have an ethical obligation to try to create tools which can be audited and evaluated, and to help global south countries build the field required to successfully do so.


Cecil holds an undergraduate law degree from Strathmore Law School in Nairobi and a Master’s degree in law from Harvard Law School. He is an Open Philanthropy research grantee and a research fellow at the Legal Priorities Project. At the Centre for the Study of Existential Risk (CSER), Cecil works on a project touching on how AI could lead to extreme inequality and power concentration, and another connected to mapping adaptive governance regimes for changes in AI capabilities.

Workshop | GPT-3 and Codex


We hosted our second workshop of term on OpenAI’s GPT-3 and Codex, looking at how these models work, and their possible applications.

Many thanks to the Department of Engineering!

GPT-3 is a text generator that made headlines a year ago by producing essays and news articles indistinguishable from human-written ones.

Codex is similar, but speaks programming languages fluently: it can take a single comment and generate an entire function or more (prompts like “a function that adds the odd numbers in a list” or “a webpage with a big red button in the center, that gives you a random cat gif when pressed” can actually work!).
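To make that concrete, here is a sketch of the kind of completion a code model like Codex might produce for the first prompt above. This is our own illustration, not actual Codex output; the function name is invented:

```python
# Prompt given to the model:
# a function that adds the odd numbers in a list

# Plausible completion:
def sum_of_odds(numbers):
    """Return the sum of the odd numbers in `numbers`."""
    return sum(n for n in numbers if n % 2 != 0)

print(sum_of_odds([1, 2, 3, 4, 5]))  # prints 9
```

The striking part is that the model goes from a one-line natural-language comment to working code, docstring included.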

During our workshop, we:
– Tried working with Codex on programming challenges
– Got creative using GPT-3 to solve real-world tasks
– Learned about how such models work, and important limitations
– Brainstormed applications (and perhaps invented the next Google) to win valuable prizes such as dark chocolate!

Panakeia Talk | Digital precision cancer diagnosis: understanding cancer better using computer vision


We were fortunate to host Panakeia’s CTO, Pandu Raharja-Liu, on 30th November, who spoke to us about the challenges of precision cancer diagnosis using computer vision in a fascinating talk.

Event poster.


In this lecture, we will discuss the need to better understand cancer in modern diagnosis and treatment, and how computer vision is accelerating innovation in the area. As one of the deadliest diseases known to humanity, cancer has been at the centre of medical research and innovation for the past decades. Most modern cancer treatments are designed to work very well on some populations but not on others, an approach commonly referred to as “precision medicine”. This means that understanding cancer better allows for more effective treatment and therefore improves patient survival. So far, the identification of patients suitable for a certain treatment has relied on costly and time-consuming molecular tests (e.g. genetic sequencing). We present a set of computer vision methods that can provide the same molecular information without the standard molecular tests. This allows patients to receive treatment quickly by reducing the waiting time for cancer tests from weeks to minutes. The solution has the potential to revolutionise our understanding of how cancer grows, spreads and responds to treatment, potentially allowing us to tackle the disease much more efficiently.

Microsoft Research Talk | An Introduction to Dataset Bias


In collaboration with Cambridge AI in Medicine, we hosted Daniela Massiceti from Microsoft Research on 22nd November, who discussed the importance of good data for AI. High-quality datasets are essential for producing functional machine learning systems, and Dr Daniela Massiceti addressed the importance of this with a special focus on the effects of dataset bias. A recording of the talk is available here.

Event poster for Dr Daniela Massiceti’s talk.


Large datasets are a fundamental part of current machine learning (ML) pipelines; however, biases often find their way into these datasets and can have serious consequences if left unaddressed. This talk will dive right in with an introduction to dataset bias and the types of bias that are commonly encountered in ML systems. Next, we’ll unpick some examples of what can happen when it’s left unaddressed, and finally we’ll look at an overview of approaches that can be used to tackle it. Join in to learn more about dataset bias and building robust and equitable ML systems.
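As a small illustration of our own (not from the talk), one of the most common dataset biases, class imbalance, can be surfaced with a few lines of Python before any model is trained:

```python
from collections import Counter

def label_distribution(labels):
    """Return each label's share of the dataset, largest first."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.most_common()}

# A toy, deliberately imbalanced label set
labels = ["cat"] * 90 + ["dog"] * 10
print(label_distribution(labels))  # {'cat': 0.9, 'dog': 0.1}
```

A classifier trained on such data can score 90% accuracy by always predicting “cat”, which is exactly the kind of failure a quick audit like this helps catch early.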


Daniela is a machine learning (ML) researcher at Microsoft Research Cambridge (UK) where she works on ML systems that learn and evolve with human input (“human-in-the-loop”). Her main research directions lie, firstly, in making models robust to the real-world training data they are provided and, secondly, in making models more human-explainable. Advances in these directions will enable completely personalised tools in many spheres – from assistive tools for people who are blind/low-vision, to AI-assisted diagnostic tools for doctors. Prior to joining MSR, she did a PhD in computer vision at the University of Oxford, a Master’s in Neuroscience also at Oxford, and a Bachelor’s in Electrical and Computer Engineering at the University of Cape Town. She is passionate about diversity and inclusion in machine learning and is an organiser of the Deep Learning Indaba.

DeepMind Talk | Petar Veličković and Mateusz Malinowski


We were delighted to host our first talk of Michaelmas term alongside MLinPL, welcoming Petar Veličković and Mateusz Malinowski. The event was held in the Cambridge Union’s Debating Chamber.

Both speakers are distinguished scientists with a history of high-profile papers, and senior researchers at DeepMind.

Petar Veličković’s talk was titled “Everything is connected: deep learning on graphs”, and was an entry-level, bird’s-eye view of GNNs and their applications. Mateusz Malinowski’s talk was titled “Holistic Vision: Reasoning, Interactions, Multimodality”, discussing the role of reasoning in modern computer vision.

Petar Veličković speaking to us at the Cambridge Union.


Petar Veličković is a Staff Research Scientist at DeepMind, and an Affiliated Lecturer at the University of Cambridge. He holds a PhD in Computer Science from the University of Cambridge (Trinity College), obtained under the supervision of Pietro Liò. His research concerns geometric deep learning—devising neural network architectures that respect the invariances and symmetries in data (a topic he’s co-written a proto-book about). Within this area, Petar focuses on graph representation learning and its applications in algorithmic reasoning and computational biology. He has published relevant research in these areas at both machine learning venues (NeurIPS, ICLR, ICML-W) and biomedical venues and journals (Bioinformatics, PLOS One, JCB, PervasiveHealth). In particular, he is the first author of Graph Attention Networks—a popular convolutional layer for graphs—and Deep Graph Infomax—a scalable local/global unsupervised learning pipeline for graphs (featured in ZDNet). Further, his research has been used to substantially improve the travel-time predictions in Google Maps (covered by outlets including CNBC, Engadget, VentureBeat, CNET, The Verge and ZDNet).

Mateusz Malinowski is a Research Scientist at DeepMind. His work concerns computer vision, natural language understanding, reasoning and scalable training. His main contributions are foundational work and methods for answering questions about images, and a scalable alternative to the backpropagation training mechanism. Mateusz received his PhD from the Max Planck Institute for Informatics and has received multiple awards for his contributions to computer vision.

Toyota Research Talk | Data efficient reinforcement learning


We hugely enjoyed welcoming Rowan McAllister from UC Berkeley and Toyota Research, who gave an illuminating talk on current advances in data efficient RL.

Event poster.


Data-efficiency is useful in robotic learning, where real-world data can be expensive and time-consuming to acquire. Probabilistic dynamics models can help accelerate learning by mitigating overfitting and providing richer supervision signals than model-free control methods. An additional benefit of probabilistic models is their ability to detect out-of-distribution events, useful in certain safety-critical settings where control should not deviate from demonstration data. This talk investigates how deep probabilistic models can help learn safe control quickly, whether learning from scratch or from imitation data.
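To give a flavour of the out-of-distribution detection idea, here is a toy sketch of our own (not from the talk): a bootstrap ensemble of simple models stands in for a probabilistic dynamics model, and disagreement between ensemble members serves as a cheap out-of-distribution signal:

```python
import random
import statistics

random.seed(0)

# Training inputs observed only on [0, 1], with a noisy linear response
xs = [random.uniform(0, 1) for _ in range(50)]
ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]

def fit_line(px, py):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    mx, my = statistics.fmean(px), statistics.fmean(py)
    b = sum((x - mx) * (y - my) for x, y in zip(px, py)) / sum(
        (x - mx) ** 2 for x in px
    )
    return my - b * mx, b

# Bootstrap ensemble: each member is fit to a resample of the data
ensemble = []
for _ in range(10):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    ensemble.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))

def disagreement(query):
    """Std-dev across ensemble predictions at `query` -- grows far
    from the training data, flagging out-of-distribution inputs."""
    preds = [a + b * query for a, b in ensemble]
    return statistics.stdev(preds)

print(disagreement(0.5))   # small: inside the training range
print(disagreement(10.0))  # much larger: far outside it
```

The same principle, at much greater scale and with deep models, is what lets a learned controller recognise when it has strayed from the demonstration data and fall back to safe behaviour.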


Rowan McAllister is a research scientist at Toyota Research Institute. His research is concerned with probabilistic modelling for data-efficient learning of control, often with autonomous vehicle applications in mind. Rowan received a PhD from Cambridge in 2017 and was a postdoctoral scholar at UC Berkeley until 2020.

Ethics Talk | Big tech and its influence on academic research


We hugely enjoyed welcoming Mohamed Abdalla from the University of Toronto for our first talk of Lent, titled ‘AI Ethics: Big tech and its influence on academic research’. Mohamed discussed Big Tech’s close association with academia and drew comparisons with Big Tobacco’s past relationship with medical research. All of this came at a time of unease in the community, highlighted by Timnit Gebru’s disputed resignation from Google, and issues popularised by the Netflix documentary The Social Dilemma. It was a hugely informative talk!

Event poster.

With the increasing adoption of automated algorithms in every part of our lives, greater attention has been dedicated to studying the societal effects of these algorithms. The first portion of the talk will highlight concepts covered in the machine learning fairness literature. The rest of the talk will explore how Big Tech can actively distort the ongoing “Ethics of AI” academic landscape to suit its needs. By comparing the well-studied actions of another industry, that of Big Tobacco, to the current actions of Big Tech, we see similar strategies employed by both industries to sway and influence academic and public discourse. We argue that it is vital, particularly for universities and other institutions of higher learning, to discuss the appropriateness and the tradeoffs of accepting funding from Big Tech, and what limitations or conditions should be put in place.

Mohamed Abdalla is a PhD student in the Natural Language Processing Group (Department of Computer Science) at the University of Toronto and a Vanier scholar, advised by Professor Frank Rudzicz and Professor Graeme Hirst. He holds affiliations with: i) Vector Institute for Artificial Intelligence, ii) Centre for Ethics, iii) ICES (formerly known as the Institute for Clinical and Evaluative Sciences).

Google Talk | Better, faster, stronger: 10 years of computer vision and deep learning

It was a great pleasure to welcome Andreas Steiner from Google to deliver his talk: “Better, faster, stronger: 10 years of computer vision and deep learning” on the 2nd of February. Andreas discussed the history of computer vision, and provided some insight into Google’s latest advances in the field using transformers.


The field of computer vision was revolutionized in 2012 with the advent of deep learning. We’re in the year 2021, and the latest results look poised to shake up the field again: non-convolutional architectures produce state-of-the-art results, and leveraging large amounts of weakly labelled data leads to impressive results on data never seen during training. This talk will start from the very basics of using ML for image classification, dive into transfer learning, and finally focus on recent research results, such as vision transformers and contrastive learning from text and images.


Andreas is a software engineer at Google Zurich. He holds a medical degree from the Université de Lausanne and a Master’s in bioelectronics from ETH Zurich. He’s worked as a civil servant in Tanzania and as a doctoral student at the Swiss Tropical and Public Health Institute. He’s been in his current position at Google for 6 years.

Microsoft Research Talk | Bringing Intelligence to the End User

We were delighted to welcome Dr Carina Negreanu and Professor Andy Gordon from Microsoft Research to deliver their talk: “Bringing Intelligence to the End User” on 23rd November. Our speakers discussed how AI could revolutionise the way Excel users interact with spreadsheets, the goals of Microsoft’s Calc Intelligence project, and their experiences working for one of the world’s leading industrial research labs.
Calc Intelligence aims to bring intelligence to end-user programming, and in particular to spreadsheets. The spreadsheet has continually evolved to remain at the forefront of productivity tools and work practices for over forty years. For example, today’s spreadsheets embrace collaboration, serve as databases, are mobile, and encompass AI-powered interaction via natural language. However, the soul of the spreadsheet remains the grid, and its formulas. Indeed, spreadsheets are the world’s most widely-used programming technology – but they also embody apparently-fundamental limitations. We are working on foundational ideas that will take a qualitative step forward, to extend dramatically the reach of what end users can do with spreadsheets. In this talk we will give an overview of Calc Intelligence (focusing on our recent ML publications) and share career experiences from two researchers at different career stages (principal researcher and post-doc researcher).
Speaker Bios
Dr Carina Negreanu completed a PhD in modified gravity at the Cavendish Laboratory, University of Cambridge in 2018, before embarking on a 1-year AI Residency with Microsoft Research. She now works full-time as an ML postdoc researcher in the Calc Intelligence group at Microsoft.
Professor Andy Gordon is a Senior Principal Research Manager at Microsoft Research, Cambridge. His main project is Calc Intelligence, bringing intelligence to end-user programming, especially spreadsheets. As a part-time position, he also holds the Chair in Computer Security in the School of Informatics in the University of Edinburgh. He convenes the University of Edinburgh’s Microsoft Research Joint Initiative in Informatics and participates in the Data Science PhD programme and the Cyber Security & Privacy Research Network.

Five AI Talk | Machines that see: what AI can and can’t do


CUMIN hosted a talk by Professor Andrew Blake on Thursday 5 March, entitled “Machines that see: what AI can do, and what it can’t do yet” covering the cutting edge and future of computer vision and scene understanding.

Be sure to like our Facebook page to keep up to date with our future events!