The use of artificial intelligence (AI) can be deemed morally questionable in certain domains, for example because of ethical and legal concerns. While such scepticism is understandable, the advance of AI models appears to be unstoppable, and many benefits have already been identified. Research shows, for example, that AI usage can boost productivity and efficiency and increase job satisfaction (Noy & Zhang, 2023). No wonder more and more organisations and citizens are jumping on the AI bandwagon, not only in education, journalism and academia but also in politics.
In the run-up to the Dutch parliamentary elections of November 2023, two political parties used ChatGPT to modify their election programmes (Timmermans, 2023). In addition, the Dutch Government is investigating whether a virtual policy assistant, Codi, can help civil servants answer the thousands of parliamentary questions (NL AIC, 2023). Politicians have also sought ChatGPT’s assistance in crafting speeches and motions.
The Parliamentary Reporting Office (PRO) of the Dutch House of Representatives of the States General produces edited verbatim reports of the House’s plenary sittings and of other debates and hearings. Working from an audio recording, reporters transcribe the proceedings five minutes at a time. Writing and editing are entirely manual procedures and therefore time-consuming. As is widely recognised, automatic speech recognition (ASR) can ease this process.
However, this article does not primarily address ASR. Instead, it focuses on generative AI tools centred on text, such as ChatGPT. It examines three specific domains in which AI can benefit parliamentary reporters: improving texts, providing feedback and analysing documents.
In addition to the edited verbatim reports, the PRO produces short, non-verbatim web reports of major debates and of the regular Question Hour, when MPs ask Ministers questions. These reports are published shortly after the debate, which adds time pressure to reporting. AI can assist the reporter by generating suggestions to clarify and shorten the text so that it meets the maximum word limit for paragraphs, and by recommending titles. AI can also help to detect grammatical errors and other language inaccuracies. This process will be further facilitated when AI is integrated into Microsoft Word, which reportedly is Microsoft’s aim (Holmes & McLaughlin, 2023).
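Even before asking an AI tool to shorten a paragraph, simple tooling could flag where a rewrite is needed. A minimal sketch, assuming paragraphs are separated by blank lines; the limit of 80 words is a hypothetical value, not the PRO’s actual rule:

```python
# Sketch: flag paragraphs that exceed a maximum word limit.
# The limit of 80 words is a hypothetical example, not the PRO's actual rule.

def over_limit_paragraphs(text: str, max_words: int = 80) -> list[int]:
    """Return indices of paragraphs whose word count exceeds max_words."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return [i for i, p in enumerate(paragraphs)
            if len(p.split()) > max_words]

report = "Short opening paragraph.\n\n" + "word " * 100
print(over_limit_paragraphs(report))  # → [1]: the long second paragraph
```

Flagged paragraphs could then be passed to an AI tool with a request to shorten them.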
When editing, all PRO reporters apply a wide set of rules: we rely, for example, on a carefully curated glossary and on a style guide with agreements on when and when not to correct a text. Fully internalising these conventions can be challenging, especially for newcomers to the organisation. AI could scan an already edited text for passages or word usages that do not conform to our rules, for example after an existing AI model has been taught those rules or a model of our own has been developed. In this way, the editor can quickly rectify any errors and meet the required uniform style of reporting.
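Parts of such a check need no AI model at all. A deliberately simplified sketch, assuming some rules can be expressed as a glossary mapping deprecated spellings to preferred ones; the entries below are invented examples, not the PRO’s actual glossary:

```python
# Sketch: scan an edited text for word usages that deviate from a glossary.
# The glossary entries are invented examples, not the PRO's actual rules.
import re

GLOSSARY = {  # deprecated form -> preferred form (hypothetical)
    "parlementslid": "Kamerlid",
    "OK": "in orde",
}

def glossary_violations(text: str) -> list[tuple[str, str]]:
    """Return (found, preferred) pairs for glossary deviations in text."""
    hits = []
    for bad, good in GLOSSARY.items():
        if re.search(rf"\b{re.escape(bad)}\b", text):
            hits.append((bad, good))
    return hits

print(glossary_violations("Het parlementslid vond het OK."))
# → [('parlementslid', 'Kamerlid'), ('OK', 'in orde')]
```

An AI model taught the full style guide could go further and flag context-dependent issues that a word list cannot capture.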
There is also the possibility of uploading the complete set of editing rules to a tool such as ChatPDF and then asking questions about the document. This would allow reporters to find out quickly whether numbers, for example, are written as words or as digits. Of course, editors can also upload and consult their feedback databases. Thus, AI can fulfil the role of a substitute tutor or virtual trainee, which can be particularly useful on busy days when there are many debates or no colleagues available for assistance.
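Under the hood, such question answering typically combines retrieval of relevant passages with a language model. A retrieval-only sketch, stripped down to word overlap; the rule texts below are invented examples:

```python
# Sketch: retrieve the most relevant editing rule for a reporter's question
# by word overlap. A real tool such as ChatPDF would additionally feed the
# retrieved passage to a language model to phrase an answer.
# The rule texts below are invented examples, not the PRO's actual rules.
import re

RULES = [
    "Numbers up to twenty are written as words; larger numbers as digits.",
    "Titles of bills are capitalised and given in full on first mention.",
]

def tokens(text: str) -> set[str]:
    """Lower-case word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_passage(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the question."""
    q = tokens(question)
    return max(passages, key=lambda p: len(q & tokens(p)))

print(best_passage("Are numbers written as words or digits?", RULES))
```

Word overlap is only a crude stand-in for the semantic search real tools use, but it illustrates why answers are grounded in the uploaded document rather than in the model’s general knowledge.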
The Dutch House of Representatives meets hundreds of times yearly, either in plenary or in committee. Every reporter is thus confronted with an impressive range of debates and topics. Texts should be grammatically correct and their content easily understandable, which requires a strong understanding of each topic and its political intricacies. That is not always easy, particularly when the debate concerns complex pension legislation, for example. In such cases, AI can be used to scan legal texts and policy documents and to answer questions about them, helping the reporter grasp the topic better. After all, understanding the topic makes writing and editing easier.
The three examples above demonstrate that AI can easily be used to help parliamentary reporters in their work and to help them work smarter. However, several pitfalls must be taken into account.
To begin with, one should realise that AI is not really “intelligent”. Large language models, the foundation of tools such as ChatGPT, are also known as “stochastic parrots”: they mimic human speech based on datasets without understanding what they are saying (Bender et al., 2021). Therefore, the reporter must be extra attentive to context and interpretation when editing. This kind of craftsmanship is impossible to outsource to AI (Voutilainen, 2023). Typical government jargon, nonverbal cues or emotions can be understood adequately only by reporters practised in these political texts.
As mentioned, AI tools can make recommendations for grammatical improvements. Grammar tools, such as Grammarly, are becoming more and more sophisticated. However, they are mainly focused on English and are therefore of limited use to non-English-language parliaments. Human intervention remains essential, but parliamentary editors might nevertheless experiment with what grammar tools have to offer.
Secondly, the reporter should take into account any “hallucinations” of AI tools. When trying to understand the context of legislation for debate, it is important not to rely solely on AI-generated conclusions. When hallucinating, AI tools do not differentiate between truth and falsehood. The story about a lawyer who cited fake cases in a court filing after getting “help” from ChatGPT is well known (Weiser & Schweber, 2023). It has also been discovered that political bias in AI models can occur because of the pre-existing political bias present in the input data (Gover, 2023).
Thirdly, using AI raises ethical and legal issues related to privacy violations and copyright infringement. Civil servants, including parliamentary editors, should handle data with care. For instance, when information is entered into ChatGPT, the text is not kept in a secure environment; instead, it is used to train the model further. In theory, these texts could end up with third parties via prompt leaks. The EU AI Act is likely to require more transparency from large language models. For now, however, it is unclear how the AI Act, which is expected to come into force no earlier than 2025, will specifically affect parliamentary reporters. Until then, parliamentary organisations will have to develop their own policies and guidelines on AI use.
AI tools such as ChatGPT offer potential benefits for parliamentary reporters, enhancing text improvement, providing valuable feedback and aiding in document analysis. However, it is important to recognise their limitations. They do not have a deep understanding and can be biased. Additionally, ethical and legal concerns regarding data privacy and copyright need to be considered. Yet parliamentary organisations might find a way to ensure that AI’s benefits are captured while also preserving the insights and context that human reporters offer, particularly in the complex world of politics and legislation.
Gijs Freriks is an editor in the Parliamentary Reporting Office (PRO) of the House of Representatives of the States General, The Netherlands.
- Bender, E., T. Gebru, A. McMillan-Major & S. Shmitchell (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? – FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. URL: https://dl.acm.org/doi/10.1145/3442188.3445922
- Gover, L. (2023). Political Bias in Large Language Models. – The Commons: Puget Sound Journal of Politics, vol. 4, no. 1. URL: https://soundideas.pugetsound.edu/thecommons/vol4/iss1/2/
- Holmes, A. & K. McLaughlin (2023). Ghost Writer: Microsoft Looks to Add OpenAI’s Chatbot Technology to Word, Email. – The Information. URL: https://www.theinformation.com/articles/ghost-writer-microsoft-looks-to-add-openais-chatbot-technology-to-word-email
- NL AIC (2023). Codi, de Virtuele Beleidsassistent. – Nederlandse AI Coalitie. URL: https://nlaic.com/use-cases/codi-de-virtuele-beleidsassistent/
- Noy, S. & W. Zhang (2023). Experimental evidence on the productivity effects of generative artificial intelligence. – Science, vol. 381, no. 6654, pp. 187-192. URL: https://www.science.org/doi/10.1126/science.adh2586
- Timmermans, M. (2023). ChatGPT slaat ook aan bij BBB en Volt in aanloop naar verkiezingscampagne. – Trouw.nl. URL: https://www.trouw.nl/politiek/chatgpt-slaat-ook-aan-bij-bbb-en-volt-in-aanloop-naar-verkiezingscampagne~b9dc1b47/
- Voutilainen, E. (2023). Artificial intelligence suggests using itself for professional transcription. – Tiro 1/2023. URL: https://tiro.intersteno.org/2023/07/artificial-intelligence-suggests-using-itself-for-professional-transcription/
- Weiser, B. & N. Schweber (2023). The ChatGPT Lawyer Explains Himself. – The New York Times. URL: https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html