Preventing Harm Through Responsible AI

After years in robotics and neural networks, UZH alumnus Gabriel Gomez now works in Accenture’s Responsible AI Division, where he and a team of 20 specialists stress-test AI models to identify risks and prevent unintended harm.

Originally from Colombia, Gabriel Gomez completed his PhD at the Department of Informatics within Prof. Rolf Pfeifer’s Artificial Intelligence Lab in 2007. During this time, he tackled a fundamental challenge in robotics: integrating the separate algorithms for object detection, movement, and grasping into a single neural network capable of performing all three tasks. His work culminated in a platform that enables amputees to control a robotic hand with their thoughts. Early robotic movements were clumsy, and interactions could physically hurt humans. “It was our job to fine-tune the robots in a way that interacting with them made life for humans easier and smoother – without potentially harming them,” he remembers. This project involved researchers from the Universities of Tokyo and Chiba, as well as from the AILab UZH and ETH Zurich, bringing together experts from medicine, physics, and engineering. “The international and multidisciplinary environment at Prof. Pfeifer’s lab was so inspiring,” he recalls, “and continues to impact my approach to science today.”

Testing AI products before release

Today, Gabriel Gomez’ team stress-tests large language models (LLMs), as well as image-, video- and audio-generating AI models, before public release. To do this, the team maintains a list of sensitive topics and systematically creates prompt variations to probe a model’s weaknesses. Techniques include rephrasing questions, altering single characters to bypass filters, or even turning prompts into poems. This approach uncovers vulnerabilities such as misinformation, bias, hallucination, or failure to handle borderline cases, and includes extensive work to ensure that models do not generate violent content. The team then provides actionable feedback to the developers before the product is deployed.
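The prompt-variation approach described above can be illustrated with a minimal sketch. The function names, substitution rules, and rephrasing templates below are illustrative assumptions for this article, not Accenture's actual tooling:

```python
# Minimal sketch of systematic prompt-variation generation for red-teaming.
# All rules and templates here are hypothetical examples.

# Look-alike character swaps, often used to probe keyword-based filters.
CHAR_SWAPS = {"a": "@", "e": "3", "i": "1", "o": "0"}

def swap_characters(prompt: str) -> str:
    """Replace single characters with look-alikes to test filter robustness."""
    return "".join(CHAR_SWAPS.get(c, c) for c in prompt)

def rephrase_variants(prompt: str) -> list[str]:
    """Wrap the same request in different framings (hypothetical templates)."""
    templates = [
        "Please answer the following: {p}",
        "Imagine you are a historian. {p}",
        "Write a short poem that answers: {p}",
    ]
    return [t.format(p=prompt) for t in templates]

def generate_variations(prompt: str) -> list[str]:
    """Combine rephrasings and character-level mutations into one test set."""
    variants = [prompt] + rephrase_variants(prompt)
    variants += [swap_characters(v) for v in variants]
    return variants
```

In practice, each generated variant would be sent to the model under test and its responses reviewed for the failure modes mentioned above (misinformation, bias, filter bypasses).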

The importance of Responsible AI

Gomez’ commitment to Responsible AI stems from firsthand experience of the limitations and dangers of generative technologies. Biased training data can lead to algorithmic discrimination: facial recognition systems often fail with people of color, and voice recognition struggles with non-native accents. For years, Gomez has experimented with voice cloning. When he started, cloning a person’s voice took weeks and enormous computing power. Today, with just six seconds of audio, anyone’s voice can be replicated. This has significant implications for privacy and security. “We all have heard of scam calls with obviously automated voices. They don’t pose much of a risk. However, today’s technology allows scammers to generate voices sounding exactly like the voice of a loved one, calling in distress for your help.”

For Gomez, Responsible AI means proactively identifying and mitigating risks, especially for vulnerable populations such as young or elderly people, minorities, or otherwise disadvantaged communities. His team does this by embedding ethical principles throughout the lifecycle of AI systems, from design and training to deployment and the monitoring of running systems.

Customer benefits from Accenture's work

Accenture’s Red Team conducts comprehensive risk assessments, classifying systems as high or low risk based on factors like data bias and regulatory compliance. Is there a right way to regulate AI? Not through a single, global regulatory solution, Gomez believes. Culture plays a large role: while the United States favors a flexible, innovation-driven approach, countries like Japan and Canada occupy the middle ground, and the European Union has strict, mandatory regulations, particularly around biometric data. This regulatory diversity means that multinational organizations must tailor their AI strategies to each region, balancing innovation with compliance. To handle this, Gomez’ team is working on a product, the AI Companion, that automatically checks for compliance with a local market’s regulatory requirements.

Gomez has found his personal answer to the question of the ethical handling of AI: “I put my energy into preventing potential harm and providing solutions for others to do so too.” Just as his early research focused on making robots useful companions to humans, programmed to ensure seamless and non-harming interactions, his work in AI aims to achieve the same – harnessing the potential of the technology while protecting the humans working with it.

For Gabriel Gomez, father of two teenage children, Responsible AI is not just a technical challenge, but a societal imperative. By combining rigorous testing, ethical understanding, a multidisciplinary approach and integrating global regulations, Gomez and his team help organizations harness the power of AI while safeguarding the interests of individuals and communities.

Text: Victoria Watts

Source: Oec. Magazin issue #23

Are you interested in AI?

The Department of Informatics and the Department of Computational Linguistics offer a Certificate of Advanced Studies (CAS). Over 10 course days, the "CAS in Generative AI" enables practitioners to recognize the potential and limitations of generative AI and to apply this knowledge in their work. The CAS is open to holders of a university degree with knowledge of computer science. Find more information on the "CAS in Generative AI" here.
