  • Four Reasons why Hyping AI is an Ethical Problem

    Hyping AI creates ethical challenges on top of the existing ones. Here is how: 1. AI hype does not question the very purpose of AI. 2. AI hype is linked to misleading promises. 3. AI hype directs energy at something that is barely tangible. 4. AI hype exaggerates the capabilities of AI when effectively humans…

    Read more

  • Fake it till You Make it: AI and Hype

    The Algo 2020 conference invited me on a panel discussion titled “Fake it till you make it – AI and Hype”. My 4 key points: 1. AI hype does not question the very purpose of AI. 2. AI hype is linked to misleading promises. 3. AI hype directs energy at something that is barely tangible.…

    Read more

  • There is no Responsible Tech without Accountability

    There is a divide between those working on Responsible Tech inside companies and those criticizing from the outside. We need to bridge the two worlds, which requires more open-mindedness and the willingness to overcome potential prejudices. The back and forth between ‘ethics washing’ and ‘ethics bashing’ is taking up too much space.

    Read more

  • Ethics in the Tech Industry: What makes it so Distinctive?

    Kate O’Neill is a global thought leader, author, keynote speaker, strategic advisor, and “tech humanist”. We talked about connecting the dots between AI ethics, privacy, climate change, CSR, ESG, contact tracing, carbon offsetting and much more, including plenty of laughter.

    Read more

  • Business is there to Make Life Better. But How?

    As part of his series “Interviews with global leaders in the field of Artificial Intelligence”, I spoke with Johan Steyn about AI ethics, privacy, contact tracing, business ethics, CSR, etc. – live from my kitchen table.

    Read more

  • Ethical Debates sparked by COVID: Thoughts at the UNESCO Forum

    The UNESCO Forum invited me as a speaker to share my thoughts on the Covid-19 crisis. The pandemic has sparked fundamental ethical debates. Think of the terrifying reports from hospitals in Italy in spring 2020. Intensive care units were overrun with patients. There were not enough ventilators. And suddenly we asked ourselves: What is the value…

    Read more

  • Facial Recognition: Accuracy is not the Point

    Facial recognition is flawed—but should we reject it because it’s inaccurate, or because it’s immoral? This post argues why moral arguments matter more than statistics when it comes to protecting our faces, our privacy, and our civil rights.

    Read more

  • AI and Sustainability: a Solution or Part of the Problem?

    Environmental sustainability is one of the most promising domains to deploy ‘AI for Good’. The environment is an excellent use case for collecting and analyzing data that help us to better understand and address key environmental challenges. In contrast to the use of AI in ‘human settings’, you typically don’t run into problems of privacy…

    Read more

  • Linking Digitalization to Ethics: a Simple Outline of Some Foundations

    It shouldn’t take a scandal of the dimensions of Facebook/Cambridge Analytica to make it clear that we must not use technology blindly without asking ourselves some ethical questions, but incidents like these certainly help to raise awareness on an ever broader scale. Yet, despite an increasing number of articles calling for integrating ethics…

    Read more

  • Why AI really needs Social Scientists

    OpenAI states that, in order to ensure a rigorous design and implementation of this experiment, they need social scientists from a variety of disciplines. The title immediately caught my attention given that the kind of “AI ethics” I am dealing with hinges on an interdisciplinary approach to AI. So, I sat down and spent a…

    Read more

  • A secret AI study. A biometric orb. A new internet ID.

    After researchers sparked outrage with a secret AI experiment on Reddit, the platform considers adopting Sam Altman’s World ID system via a device called the Orb. But is biometric identity the solution, or just another dubious business model?

    Read more

  • How AI Hijacks Human Connection

    AI doesn’t just train on academic or artistic content. Increasingly, it feeds on blogs, guides, and independent journalism – any content that shows human care, credibility, or craft. Summarized and displayed in search results, this content becomes invisible at the source. Welcome to a world where creators are reduced to training fodder.

    Read more

  • The Myth of AI Democratization

    Some still believe that training large language models (LLMs) on copyrighted content is a form of “democratizing knowledge.” But when you look closely at how these models actually handle the material they ingest, the picture looks a lot less heroic – and a lot more extractive.

    Read more

  • Meta’s Silent Swallowing of My Academic Legacy

    My academic legacy? A monograph, a handful of articles—and now a starring role in training Meta’s Llama 3. No royalties. No citations. Just silent swallowing by a machine. A story of vanishing recognition in the age of AI.

    Read more

  • Klarna’s AI Whiplash: From Job Cuts to Human Epiphanies

    From “AI can do all jobs” to “Humans are invaluable!”: Klarna’s AI journey is a masterclass in hype whiplash. But behind the cringe, the CEO’s rhetoric surfaces real ethical tensions. What happens when honesty about AI and jobs is no longer whispered in executive suites – but shouted?

    Read more

  • Keeping AI Weird (for Safety Reasons)

    AI makes mistakes differently from humans. And that’s a good thing. This post explores why we shouldn’t train machines to fail like humans and why weirdness might be an important safety feature of AI.

    Read more

  • AI: Lessons from Business Ethics

    When it comes to business ethics, AI companies ignore the most basic concepts linked to accountability, supply chain responsibility and product safety. Yes, AI companies create groundbreaking innovation. But that comes with the responsibility to ensure that what they do serves humanity, not the other way around.

    Read more

  • AI as an Inevitable Necessity? Let’s Not Go There

    AI as an inevitable constraint? No. In conversation with Nina Benoit about liberalized markets, the failed promise of democratization, and why AI needs standards rather than special treatment – just like any other technology.

    Read more

  • AI in publishing: Bridging gaps, upholding values

    AI in publishing was the topic of a debate organized by Wiley. How should publishers respond to the advances of Big Tech? How can AI make research more accessible? And do we really need to reinvent the wheel when talking about accountability in the age of AI?

    Read more

  • Podcasts – My Thoughts on Air

    Hosts from all over the world invite me to share my thoughts on ethics, artificial intelligence, data protection, sustainability or my personal career. Podcasts are a great opportunity to present my views and convictions in a structured and understandable manner. Every single one of these conversations has been an eye-opener for me as well.

    Read more

  • Is there Business Ethics in Clubhouse?

    What can AI ethics learn from business ethics? What’s the ethics of Clubhouse, if any? Is the Robinhood app undermining free will? And how can tech companies create an ethical business culture? Listen to my thoughts in this interview.

    Read more

  • AI is a Tool, not a Right. It’s not an End in Itself

    “We might trust machines more than people when we communicate with them, but this is dangerous because behind every machine there are the people who create it”. Just one of my statements from my lively talk with Kimberly Misquitta from Indian chatbot company Engati.

    Read more

  • On Teaching Artificial Intelligence & Ethics

    The Montreal AI Ethics Institute interviewed me, along with my ForHumanity colleagues Merve Hickok and Ryan Carrier, about our thoughts on teaching AI and ethics. I recommend keeping AI ethics as applied as possible and inspiring people to think about what that means for their own work experience.

    Read more
