When was the first time you found yourself watching content without realizing it was AI-generated?
In the latest instalment of our Data Insiders podcast, we explore this question with acclaimed journalist Kaius Niemi and architecture expert Thomas Rosqvist.
Even for astute media consumers, it has become increasingly challenging to distinguish between real and fabricated content. In recent years, we’ve seen several examples of how effectively AI can be used to spread disinformation, with far-reaching consequences.
However, the threat extends beyond politics: it should also keep companies and organizations on their toes. From smear campaigns to scams targeting employees, safeguarding against these risks is a significant struggle.
To discuss these challenges, we invited two experts to share their thoughts on the Data Insiders podcast. Kaius Niemi is the chair of Finnish Reporters Without Borders and the former editor-in-chief of Helsingin Sanomat, the biggest subscription newspaper in the Nordics. He is joined by Thomas Rosqvist, the Head of Architecture Advisory at Tietoevry Create.
Drawing from their different backgrounds, our guests paint a nuanced picture of these strange times where a deepfake of Joe Biden might call you to influence your voting preferences.
While many key players in global politics agree that AI should be regulated, opinions on how this should be achieved are divided at best.
Niemi points out that China’s motivations are state-driven, while the US has a market-oriented stance. Europe, on the other hand, is proposing rights-based models. These clashing interests are hardly surprising – and they prove how difficult it can be to find common ground when the issues know no borders. But as AI tools become more sophisticated, we might not have the luxury of time.
Likewise, most questions around AI and disinformation are at once technical and social challenges. Rosqvist shares an example: when it comes to identifying fake content online, no consensus on the best standard exists. Tools like Meta’s Stable Signature can create invisible watermarks to verify authenticity – but they are not bulletproof, and they can only be effective when publishers and content platforms embrace them.
Concerning as these developments are, the race is far from lost. Rosqvist and Niemi present ways in which we as communities can take responsibility while legislation catches up.
Education is directly linked to how resilient people are against fake news. Here, the Nordic countries are leading the pack: according to the European Media Literacy Index, Finland has held the top spot for years. Perhaps more than ever, the insights from our progressive Nordic school systems could make an impact worldwide.
Rethinking structures from the ground up is essential for companies as well. Rosqvist suggests that a strong culture can make employees less susceptible to outside attempts at influencing them. Niemi calls for response strategies, employee education, and transparency towards stakeholders.
This transparency might just be our guiding star in broader terms too – paving the way for a future where we no longer have to guess whether something is AI-generated or not.
Interested in learning more about this topic? Listen to the full conversation on our Data Insiders podcast below!
Data changes the world – but does your company take full advantage of it? Data Insiders is a podcast where we seek answers to one question: how can data help us all do better business? The podcast addresses the trends and phenomena around this hot topic in an understandable and interesting way. Together with our guests, we share knowledge, offer collegial support and reveal the truth behind hype and buzzwords.