Lecture series: The Ethics and Social Implications of AI
Wed 5.00-6.30 pm, Hilary weeks 2-8, starting 23rd January
Lecture Room, Radcliffe Humanities
This course will investigate the moral and social implications of Artificial Intelligence. It combines lectures, discussions and talks by experts in the area, and will identify and examine the most relevant and important issues dominating the debate around ethics in AI. Topics covered include bias in decision-making algorithms, the question of accountability for autonomous and semi-autonomous AI, the morality of military and healthcare AI, the difficulties associated with the governance of AI, and possible future scenarios that could arise as AI progresses.
We strongly encourage interested students of any subject area to apply and contribute to the interdisciplinary discussions that will form part of this lecture series.
There will be weekly reading assignments, with the option for students to write a blog post or give a short presentation on a given topic. Students who regularly and actively participate in the course will receive a certificate of attendance issued by OxAI.
Applications are now closed. Applicants will receive an email by Sunday 20.01.
1. Introduction
This session outlines the main topics of the course and lays the groundwork for later discussions. It centers on the following questions: What is the ethics of AI and why do we need it? What AI-related challenges are we likely to encounter within the next decades? How could we overcome them?
2. Alignment Problem
The alignment problem is often phrased as the problem of ensuring that AI behaves as we want, that AI behaves ethically, or that the actions of AI are aligned with our values. But we don’t know how to codify our values. In fact, we disagree about what they are. Is it possible to program computers to engage in the same kind of moral reasoning as humans do? Can AI help us to figure out what is right to do? These questions are at the intersection of computer science, ethics and moral psychology.
3. Algorithmic Bias
Machine learning algorithms are often considered superior to human decision-making, not least because they are more efficient and provide a seemingly objective evaluation of data. Questions discussed in this session include: What is algorithmic bias and how does it arise? What are its implications and dangers for society? How do we best respond to the threat of bias?
4. Accountability and Moral Responsibility
Traditionally, most decisions that affect our lives are made by humans, either directly or indirectly through institutions or other bodies. Usually, we hold humans accountable for the consequences of these decisions. The situation becomes much more complicated, however, when it is AI systems that make the decisions. Who is accountable then? This session will focus on applications of AI such as self-driving cars and military AI, and we will discuss questions revolving around the accountability and moral responsibility of AI systems and those who deploy them.
5. Ownership and Governance
Artificial intelligence has the potential to radically reshape the world as we know it. This lecture deals with the question of how we can ensure that the development and deployment of AI remain safe and beneficial to society. Privacy violations, job displacement and growing power imbalances between nations and corporations are some of the many problems that may arise in the near and medium term.
6. Living with AI
AI has many areas of application. But in which areas of life is AI really desirable? What should our lives alongside AI look like? We need to know what we want AI for in order to develop it accordingly and to prevent negative outcomes. This session discusses the various ways AI may become part of our lives in the future and evaluates whether these scenarios are desirable.
7. Future Scenarios
What are possible future scenarios for AI? What does the future look like on large time-scales and in the case of an artificial general intelligence?