We hosted our last talk of the term with CSER research associate Cecil Abungu, who gave an excellent and thoughtful talk on the ethical obligations of AI developers.

ABSTRACT

Many tools that use machine learning and deep learning are developed by actors in global north countries and used in global south countries, and we can plausibly predict that this will only grow in the future. These tools can often lead to unfair and unjust outcomes, so it is crucial that societies are able to audit and evaluate them. In this talk, Cecil Abungu will argue that AI developers have an ethical obligation to try to create tools which can be audited and evaluated, and to help global south countries build the field required to successfully do so.

SPEAKER BIO

Cecil holds an undergraduate law degree from Strathmore Law School in Nairobi and a Master's degree in law from Harvard Law School. He is an Open Philanthropy research grantee and a research fellow at the Legal Priorities Project. At the Centre for the Study of Existential Risk (CSER), Cecil works on a project examining how AI could lead to extreme inequality and power concentration, and another on mapping adaptive governance regimes for changes in AI capabilities.