Introduction
This article focuses on parliamentary reporting and Artificial Intelligence (AI). It offers some adjusted perspectives on deepfakes and their quality in a political context, on the forthcoming EU AI Act, and on a promising newcomer for parliaments: AI-based automatic speech recognition (ASR).
I emphasised the concerns surrounding AI in an earlier Tiro article (Eras, 2023), such as the danger that deepfakes pose to official parliamentary video reports. I concluded as follows: deepfakes may be fake, but they have real consequences and therefore pose a real threat to video reporting. I also argued, reassuringly for some of us, that Hansard remains superior to the official video report precisely because of the video report's reliability issues. Nevertheless, deepfakes can still be seen as problematic.
Disinformation
For starters, there were, and are, alarming reports in the media that deepfakes can have a negative and unwanted impact on elections and their results. There have been suggestions of an "infocalypse" of disinformation (Schick, 2020), with a surge expected in 2024: the year of some 140 elections around the world, including those for the European Parliament in June and the presidential elections in the USA in November. In reality, none of this seems to have taken place, certainly not at a disruptive scale. There is certainly misinformation and false information, sometimes produced with the help of generative AI, but deepfakes have played little part in it; that is the informed opinion of Professor Ciaran Martin of Oxford University, a former head of the British National Cyber Security Centre (Martin, 2024).
The internet is certainly full of deepfakes. However, most of them are pornographic, not political, in nature. A 2019 study (The Byte, 2019) and a 2023 study (Security Heroes, 2023) found that 96% and 98% respectively of deepfake videos online are pornographic. That is a serious problem in itself, and one that can certainly have political implications. However, the number of political deepfakes does not seem significant, and no examples of deepfakes in official parliamentary video reporting have surfaced yet. Even so, it would be naive to wait until this happens before identifying the dangers.
Threats to quality and credibility
There are still more perspectives to visit. Let's return to deepfakes in politics. In my humble opinion, the reason for the limited use and impact of deepfakes in political communication is twofold: in general, deepfakes are characterised by poor quality and a lack of credibility. I illustrate this with a deepfake image that circulated during the House of Representatives elections in the Netherlands in 2023. It shows Frans Timmermans, a former European Commissioner and the driving force behind the European Green Deal, in a luxury plane. The framing is: while European citizens are asked to fly less for the climate, the so-called Climate Pope still enjoys a private jet. Let's take a closer look.

The two lenses of the eyeglasses differ in size, and there is a strange shadow behind the left lens. Timmermans appears to have a sixth finger. On the left we see an orange with a strange white peel, in the middle a floating glass without a stem, and on the right some strange-looking crackers. Based on this picture and others like it, my assumption remains that deepfakes are characterised by poor quality and a lack of credibility. However, because of the newsworthiness of politics and the speed of news dissemination, these factors may matter less when it comes to a deepfake of an official parliamentary video. That will certainly be the case if the fake video cannot be debunked quickly, with all the consequences that entails for the credibility of official parliamentary video reporting.
EU AI Act
More adjustments in perspective are needed, I think. This impression of the situation as "not too bad" is reflected in the AI Act introduced by the European Union in 2024. This law, which aims to regulate the introduction and use of all forms of artificial intelligence, surprisingly considers deepfakes a form of AI with "limited or low risk". Rather than banning deepfakes, the law merely requires that creators reveal the artificial origins of a deepfake and disclose the techniques used. Unsurprisingly, they tend not to, which is why the picture of Frans Timmermans alongside this article carries no copyright notice.
It is striking that, in the Act, deepfakes are not classed as unacceptable or even high-risk. This contrasts with general public opinion about AI and generative AI, which appears to be increasingly sceptical, according to the Edelman Trust Barometer. In its data, Edelman (2024) also sees general scepticism among technology employees. Some of this scepticism can be explained with the Gartner Hype Cycle, a graphical representation developed by the American research firm Gartner (2024). Based on its data, Gartner states that new technologies mature in five phases: technology trigger; peak of inflated expectations; trough of disillusionment; slope of enlightenment; and plateau of productivity. My view is that, with regard to AI and generative AI specifically, the hype and the expectation of quick wins are now fading fast and we are indeed heading towards the so-called trough of disillusionment.
A strong and promising newcomer
However, general suspicion towards big AI or generative AI does not mean that specific tools are not used effectively here and there. In parliamentary reporting, for example, alternatives to the large, expensive ASR platforms that require extensive customisation have been available for some time. Take Whisper, an ASR application introduced in 2022 by OpenAI (also known for ChatGPT). The sales pitch includes: high recognition accuracy; the removal of repeated words and filler words; the correction of sentences; fast delivery; and cheap subscriptions. The Scottish Parliament's Official Report, for example, told me that they are enthusiastic about working with Whisper as an aid in writing Hansard.
Because of this enthusiasm, I initiated ECPRD (European Centre for Parliamentary Research and Documentation) request 5795 into the use of generative AI tools for parliamentary reporting. The questions were, in short, whether AI tools were used in parliamentary reporting and whether AI guidelines were in place. The result (with a 53% response rate) was that 51% of the responding parliaments use AI tools and an impressive 27% use Whisper. In addition, 43% of the responding parliaments had AI guidelines in place.
The results of the survey show that Whisper is a promising and popular AI tool for ASR tasks in parliaments. However, it should be noted that the chance of success increases significantly if a parliament has permanent developers who can run Whisper locally, tweak it, and train it on the parliament's own data (see Kerr, 2025, in this issue). Encouraged by others' examples, the Parliamentary Reporting Office of the Netherlands has planned a small-scale proof of concept with Whisper for 2025.
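For readers curious what "running Whisper locally" involves in practice, the open-source Whisper package can be driven with only a few lines of Python. The sketch below is illustrative, not any parliament's actual pipeline: the function name, file path and model choice are my own assumptions, and it presumes the openai-whisper package and ffmpeg are installed.

```python
def transcribe_debate(audio_path: str, model_name: str = "medium") -> str:
    """Transcribe a plenary recording locally with OpenAI's open-source Whisper.

    Everything runs on-premises, so no parliamentary audio leaves the building.
    Assumes `pip install openai-whisper` and ffmpeg are available.
    """
    import whisper  # imported inside the function so this sketch loads without the package

    model = whisper.load_model(model_name)                # downloads the model weights on first use
    result = model.transcribe(audio_path, language="nl")  # "nl" for Dutch plenary debates
    return result["text"]                                 # the raw transcript as a single string


# Hypothetical usage, e.g. on an exported plenary recording:
# transcript = transcribe_debate("plenary_session.mp3")
```

The appeal for parliaments is exactly this simplicity: a local model, a local audio file, and a plain-text transcript that reporters can then edit into the official report.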
Conclusion
There are not many political deepfakes on the internet, and those that exist are generally of poor quality and not credible. Examples of deepfakes in official parliamentary video reporting have not yet surfaced. The European Union's AI Act surprisingly treats deepfakes as a limited risk, while the public is becoming increasingly sceptical about AI in general. Whisper appears to be a promising AI tool for ASR tasks in the field of official parliamentary reporting.
Henk-Jan Eras is a Quality Officer in the House of Representatives of the Netherlands. He is also a member of Tiro’s editorial team.
References
artificialintelligenceact.eu (2024), The EU Artificial Intelligence Act. Up-to-date developments and analyses of the EU AI Act. URL: https://artificialintelligenceact.eu
ECPRD (2024). European Centre for Parliamentary Research and Documentation. URL: https://ecprd.secure.europarl.europa.eu/
Edelman Trust Barometer (2024). Supplemental Report: Insights for the Tech Sector. Top Findings. URL: https://www.edelman.com/sites/g/files/aatuss191/files/2024-03/Trust%20Tech%20Sector%20Top%20Findings.pdf
Eras, H. (2023). Reliability and Parliamentary Reporting. Tiro 2/2023. URL: https://tiro.intersteno.org/2023/12/reliability-and-parliamentary-reporting/
Gartner (2024). Gartner Hype Cycle. Interpreting Technology Hype. URL: https://www.gartner.com/en/research/methodologies/gartner-hype-cycle
Houser, K. (2019). Study: Porn Accounts for 96 Percent of Deepfakes Online. The Byte. URL: https://futurism.com/the-byte/porn-deepfakes-96-percent-online
Kerr, D. (2025). Harnessing Whisper at the Legislative Assembly of British Columbia: A User-Driven Approach to AI-Supported Parliamentary Reporting. Tiro 1/2025. URL: https://tiro.intersteno.org/2025/06/harnessing-whisper-at-the-legislative-assembly-of-british-columbia-a-user-driven-approach-to-ai-supported-parliamentary-reporting/
Comment
An interesting article. As an old-time Westminster Hansard reporter [retired] using pen and notebook I am attracted to the opportunities apparently offered by AI. I hope to read more about this development. Years ago I took part in a Southampton [UK] university experiment designed to examine the possibilities of using computer power to transcribe shorthand notes. Has anyone further information on this? I rather think that it dive-bombed.
Peter Walker