How AI Learns From Us and From Each Other
AI is getting smarter, but can it learn to cooperate? Prof. Giorgia Ramponi and her team are studying how machines can collaborate safely and effectively with people and with each other. Text: Giorgia Ramponi
Artificial intelligence is becoming an integral part of our daily lives, from self-driving cars that merge smoothly into traffic, to trading algorithms that balance energy markets, to virtual assistants that coordinate our schedules. None of these systems acts alone. They must share information, anticipate others’ actions, and make decisions that benefit everyone involved. Prof. Giorgia Ramponi’s team studies how to teach such agents to learn from people and from each other so that their behavior remains safe, efficient, and aligned with human intentions.
Learning from humans
The approach they build on, called reinforcement learning, is similar to training a pet: the AI tries different actions, learns from the outcomes, and gradually discovers which choices lead to better results. But defining what constitutes a “reward” in real-life contexts, such as comfort, fairness, or trustworthiness, can be challenging. To address this, they explore ways in which people can guide the learning process directly, either by providing examples (“do it like this”) or by expressing preferences (“I prefer this result to that one”). Large language models – like today’s chatbots – are trained in a similar way through human feedback.
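The trial-and-error idea can be made concrete with a toy sketch. This is not the team’s method, just a minimal illustration of reinforcement learning in its simplest form: an agent with two actions, a hidden reward the agent must discover, and a running estimate of each action’s value. The task, action names, and reward probabilities are invented for illustration.

```python
import random

random.seed(0)

# Hidden "environment": action 1 pays off more often, but the agent
# does not know this in advance and must discover it by trying.
def reward(action):
    return 1.0 if action == 1 and random.random() < 0.8 else 0.0

values = [0.0, 0.0]   # the agent's running value estimate per action
counts = [0, 0]       # how often each action has been tried

for step in range(1000):
    # Explore occasionally; otherwise exploit the best-looking action.
    if random.random() < 0.1:
        action = random.choice([0, 1])
    else:
        action = max((0, 1), key=lambda a: values[a])
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # incremental mean

preferred = max((0, 1), key=lambda a: values[a])
print(preferred)  # the action the agent learned to favor
```

The hard part in practice, as the article notes, is that real tasks rarely come with a neat numeric `reward` function; human examples and preferences stand in for it.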
A key direction of their research is to integrate multiple types of feedback, ranging from demonstrations to expert corrections, to make AI learning richer and more adaptive. One of their recent methods first teaches a robot the basics through demonstrations, before fine-tuning its behavior using simple human feedback. This approach could power household robots that learn to perform delicate tasks safely, autonomous vehicles that can adapt to local driving habits, and surgical robots that can improve their precision with expert guidance. It paves the way for systems that keep improving through small, meaningful human interactions rather than rigid programming.
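A hypothetical sketch of this two-stage idea, with made-up states and actions (the article does not specify the algorithm’s details): first imitate what a demonstrator did most often in each situation, then let a single human preference override the imitated choice.

```python
from collections import Counter

# Stage 1: imitation — in each state, copy the demonstrator's most
# frequent action. States and actions here are invented examples.
demos = [
    ("cup_near_edge", "slow_down"),
    ("cup_near_edge", "slow_down"),
    ("clear_table", "normal_speed"),
]
policy = {}
for state in {s for s, _ in demos}:
    tally = Counter(a for s, a in demos if s == state)
    policy[state] = tally.most_common(1)[0][0]

# Stage 2: fine-tuning — a human compares two behaviours in a state,
# and the preferred one replaces the imitated choice.
def apply_preference(state, preferred, rejected):
    if policy.get(state) == rejected:
        policy[state] = preferred

apply_preference("clear_table", preferred="slow_down", rejected="normal_speed")
print(policy)
```

The design point is that demonstrations give broad coverage cheaply, while a few targeted preferences correct the cases the demonstrations got wrong.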
Teaching AI to collaborate
When learning involves many agents, like cars coordinating at an intersection or algorithms balancing a market, the challenge becomes one of teamwork. Each AI must predict how the others will act and adjust its behavior accordingly so that they can all work together smoothly. Economists describe this balance point as a Nash equilibrium: a state in which no individual can improve their situation by acting alone. Prof. Ramponi’s research shows that simple imitation cannot always achieve this balance, so they developed a method that only asks for expert guidance when needed, allowing agents to learn cooperative strategies much faster. They also examined how large language models behave in cooperative or competitive games, finding that stronger reasoning sometimes led to less cooperation. This highlights an important insight: intelligence alone does not guarantee collaboration; it must be intentionally cultivated.
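The Nash equilibrium idea can be checked mechanically in a tiny game. The payoffs below are invented for illustration: two cars meet at an intersection and each chooses to "go" or "yield". A pair of actions is an equilibrium when neither driver can gain by changing their own action alone.

```python
import itertools

# (row action, column action) -> (row payoff, column payoff)
payoffs = {
    ("go", "go"): (-5, -5),      # crash: bad for both
    ("go", "yield"): (2, 1),
    ("yield", "go"): (1, 2),
    ("yield", "yield"): (0, 0),  # both wait: safe but wasteful
}
actions = ["go", "yield"]

def is_nash(a_row, a_col):
    u_row, u_col = payoffs[(a_row, a_col)]
    # Neither player can do better by deviating unilaterally.
    row_ok = all(payoffs[(d, a_col)][0] <= u_row for d in actions)
    col_ok = all(payoffs[(a_row, d)][1] <= u_col for d in actions)
    return row_ok and col_ok

equilibria = [p for p in itertools.product(actions, actions) if is_nash(*p)]
print(equilibria)  # the stable outcomes: exactly one car goes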
Across all these projects, the researchers’ goal is to connect human intuition with machine learning. By combining demonstrations, preferences, and interaction, they aim to create AI that not only performs well, but also collaborates effectively and responsibly.
Giorgia Ramponi is an assistant professor of artificial intelligence for cyber-physical systems at the Department of Informatics, UZH.
Source: Oec. Mag. #24
Are you interested in AI?
Our programmes “CAS in Game Changer AI” and “CAS in Generative AI” reveal the true potential of these emerging technologies and show you new ways to use them in your business.