
AI for Society: Bridging Technology and Ethics

Artificial intelligence is changing our society at unprecedented speed and with uncertain outcomes. Prof. Abraham Bernstein explains what it takes to develop Responsible AI. Text: Victoria Watts, Photo: Sophie Stieger

How do you, as a computer scientist, define Responsible AI?

Hm. This is very difficult to do without context. How about we revisit a precise definition at the end of the conversation when we have set the stage?

OK, so let's start from a different angle: How can AI contribute to improving society?

Echo chambers are a good example. Echo chambers are places – today typically online – where people interact and receive confirmation of their own opinions without being challenged. This can drive societal polarization and is widely regarded as an undesirable development. One question we ask ourselves at the Digital Society Initiative (DSI) is therefore: How can AI help people escape these echo chambers and expose them to the full diversity of opinions?

Isn't it manipulation if AI defines what people see or read?

No. Any media outlet selects what it wants to “print”. Regulatory efforts often require these outlets – especially public broadcasters – to represent a variety of viewpoints as the basis for a balanced political discourse. My goal is to ensure that people are exposed to balanced opinions and, hopefully, come to acknowledge that other opinions exist and are valid. In an experiment, we found that people who consume news from diverse sources are more likely to accept other opinions, even if they do not agree with them. Therefore, Responsible AI should promote openness to and tolerance of different opinions. In this way, we create a space of differing viewpoints in which political dialogue and compromise become possible.

On the other hand, we often hear that AI is a danger to democracy.

Yes and no. Within the DSI’s Democracy Community, we look at how to improve and strengthen direct democracy. A project supported by the Swiss National Science Foundation (SNSF), for example, aims to make it easier for citizens to participate in consultation processes. Often, people have valid causes but struggle to move them through the democratic process. AI can help citizens bring their causes forward in a way that meets formal requirements, increasing the likelihood of implementation and thereby strengthening the democratic decision-making process. If AI can support them in doing this, all the better!

What is the biggest challenge in the field of Responsible AI?

Economic goals and responsibility are often seen as being at odds with each other. But there are good examples showing that a responsible implementation of AI can benefit long-term economic success. Another big challenge is how quickly and widely these technologies can spread: the economics of digital products such as AI make them accessible to millions within a short period of time.

How can we steer AI towards responsible applications?

Creating an artifact or product that affects the lives of millions of people comes with responsibility. While developing the technology, we must consider both its benefits and potential harms. With each iteration, we must ask ourselves: How will the product be used? Is it aligned with society’s normative goals? Is it the right solution from an economic perspective?

What role does the Digital Society Initiative play in this discussion?

The digital transformation and artificial intelligence are not purely technical phenomena. They are multidisciplinary and develop through continuous interaction between research and practice. To ensure we design, develop and implement AI in a responsible manner, we need to approach the digital transformation in its entirety, realizing its opportunities while mitigating possible risks. This requires a multidisciplinary approach, and that is exactly what we do at the DSI.

How does this multidisciplinary approach look in real life?

The approaches and the questions that researchers from various disciplines ask are very different. Philosophers may approach the topic from a normative perspective, asking what a machine should be allowed to do. Social scientists may examine how people interact with the machine, and economists focus on whether a solution is efficient. Engineers typically want to build something new and, hence, want to understand the requirements for a tool and how these can be implemented technically. So, whatever setting we are looking at, we need to consider its economic, normative, societal, and technical aspects together.

Some people fear loss of intellectual capacities through AI. Do you agree?

Marshall McLuhan introduced the idea in the 1960s that “every extension is also an amputation”: every tool I use to make something easier or better also causes me to lose the ability to accomplish the goal without the tool. This is not a problem in every case, but it can become one if we lose a skill that is a prerequisite for developing other capabilities, or if the skill is important in itself.

Returning to my first question: How do you define Responsible AI?

To summarize what we have talked about: when creating digital artifacts, and AI in particular, we need to continuously ask ourselves three questions: What is the world like today? What could the world be like? And what should the world be like? Responsible AI requires a rigorous answer to these questions. To achieve this, we must integrate insights from the social sciences, normative considerations, economic perspectives, and technical possibilities.

Takeaways

  • To ensure we design, develop and implement AI in a responsible manner, we need to approach the digital transformation in its entirety.
  • Whatever new AI model we develop, we need to consider its economic, normative, societal, and technical aspects.
  • Responsible AI promotes availability of different opinions. Acknowledging different viewpoints creates a space in which political dialogue and compromise become possible.

Abraham Bernstein is a professor of dynamic and distributed information systems at the Department of Informatics of UZH and Director of the Digital Society Initiative (DSI). Bernstein received his PhD from MIT, served as an expert on AI regulation for the Council of Europe, and presides over NRP77 of the Swiss National Science Foundation.


Source: Oec. Magazin issue #23

Are you interested in AI?

Abraham Bernstein is a lecturer in the "CAS in Generative AI", offered by the Department of Informatics and the Department of Computational Linguistics. In 10 course days, the program enables practitioners with a background in computer science to recognize the potential and limitations of generative AI and to apply this knowledge in their work. The CAS is open to people with a university degree and knowledge of computer science. More information on the "CAS in Generative AI" can be found here.
