Five principles for the sensible use of AI
AI is more than just technology - it is changing the way we think, work and make decisions. These five principles show how we can deal with artificial intelligence consciously and responsibly. Text: Cindy Candrian
AI is more than a tool, it is a co-worker
Artificial intelligence does not work like traditional software that stubbornly follows rules. It behaves more like a new team member: creative, surprising, sometimes flawed, but often inspiring. Anyone who only sees AI as a passive tool is not exploiting its potential. Instead, it is worth working with AI like a co-worker - with clear tasks, feedback and critical thinking. A simple but effective method is to give the AI a role for a specific task: should it advise, analyze or creatively develop ideas? This makes the framework for collaboration clear - just like with human colleagues. And here too, the best results are achieved where leadership, communication and responsibility come together.
Active collaboration instead of passive delegation
What happens when people not only use AI, but actively integrate it into their own work? A large-scale study by Harvard University involving 776 experienced business professionals shows that people who used AI deliberately achieved better results than traditional human teams. The best results came where humans and AI actively worked together. However, this is only possible if people do not passively delegate tasks to the AI, but consciously steer it, question its suggestions and contribute their own expertise. The result is more than just efficiency: the study showed that people began to think in a more interdisciplinary way, adopt new perspectives and develop creative solutions that would have been inconceivable on their own. This so-called "co-intelligence" means that technology does not replace our thinking, but expands it.
The responsibility remains with humans
As helpful as AI can be, the responsibility for what it produces always remains with us humans. It processes what we tell it to and provides suggestions based on its training data. This makes it all the more important that we actively incorporate our ethical and moral standards into the work process with AI. This starts with the input: we can use targeted prompts to control which perspectives, values and groups are taken into account. Nor does it end with the output: whatever the AI produces must also be critically examined. Although modern systems contain protective mechanisms against misuse, these can often be circumvented, for example through clever prompts or by exploiting vulnerabilities. AI can manipulate if we ask it to. But the decision to do so is always made by humans. Taking responsibility therefore means: steering consciously, setting clear boundaries and acting ethically.
Learning is mandatory - for everyone
If you want to use AI responsibly, you need to understand how it works - and that means trying it out for yourself, gaining experience and learning continuously. Only those who actively work with AI will recognize when it is useful and when it reaches its limits. This starts in everyday life: involve AI in as many tasks as possible in order to develop a feel for what it can and cannot do. But targeted upskilling is also crucial at team level. Organizations that want to use AI sustainably need to empower their employees - with workshops, learning spaces or experimental formats. This is not just about technology, but also about critical thinking, reflective application and an understanding of the role people play in interaction with AI. Working responsibly with AI therefore also means learning, trying things out and developing specific skills - both individually and organizationally.
Stay flexible - because AI is constantly changing
The development of AI is rapid - what is new today may be obsolete tomorrow. We should therefore consider every system we use to be the worst we will ever use - not because it is bad, but because its successors will be even more powerful. New models bring new opportunities, but also new risks. If you want to work responsibly with AI, you have to be prepared to question routines, learn new things and take on new challenges. Flexibility and a willingness to learn are no longer nice extras, but core competencies. Responsible use of AI also means evolving with it - remaining open to change and not standing still when the technology moves on.
Cindy Candrian is co-founder of Delta Labs, a Swiss AI company that supports companies from strategy to implementation of AI. She studied banking and finance at the University of Zurich and holds a doctorate in business administration.
Source: Oec. Magazin issue #23
Are you interested in AI?
The Department of Informatics and the Department of Computational Linguistics offer a Certificate of Advanced Studies (CAS). Over 10 course days, the "CAS in Generative AI" enables practitioners to recognize the potential and limitations of generative AI and to apply this knowledge in their work. The CAS is open to people with a university degree who have knowledge of computer science. More information on the "CAS in Generative AI" can be found here.
AI is also a topic of the following modules:
- Executive MBA: Artificial Intelligence & Machine Learning
- CAS in General Management / CAS in Leadership / DAS in Unternehmensführung: AI for Business
- Urban & Real Estate Management: Course Practical AI for Real Estate
- CAS in Medical Leadership: Artificial Intelligence & Machine Learning
- Finance: AI-driven Innovations in Finance