
Using AI in National Dialogues – Lessons and Reflections

In preparing the spring 2025 Mental resources National Dialogue summary, we experimented with using artificial intelligence. What added value does it bring, and what should be considered when using it? In this blog, Elina Henttonen from Valtaamo Oy shares the lessons and insights gained from this trial.

By Elina Henttonen

AI as a tool for democracy

Artificial intelligence can strengthen democracy in many ways—if it is applied responsibly and transparently. Already today, AI is used to structure material from deliberative citizen discussions, process municipal feedback, and analyse public consultation responses in legislative drafting.

Its power lies in handling, classifying, and summarising large volumes of citizen perspectives and weaving them into comprehensive overviews. At best, AI can improve the knowledge base of decision-making and, in doing so, foster citizen participation. At the same time, however, we must remain mindful of issues such as privacy, the functioning of algorithms, and the responsibility of interpreting results.

Why bring AI into National Dialogues?

The goal of National Dialogues is to strengthen trust, participation, and democracy. A single round of National Dialogues can involve more than a hundred separate dialogues (and we hope many more in the future). Each dialogue generates written records, which can range from a few pages to dozens. Unsurprisingly, such vast datasets are demanding and time-consuming to process.

This is why we wanted to explore how AI could support the analysis and reporting of National Dialogue material, and more broadly, how it could help structure experiential knowledge. Our aim was also practical: to make producing summaries faster and more cost-efficient.

Testing AI in analysis

For the spring 2025 Mental resources dialogues, I tested AI as a support tool for data management and analysis. Having prepared several similar dialogue summaries before, I first created my own “manual” thematization of the 117-dialogue dataset. This gave a solid baseline to compare the AI’s results with my own sense of the material—and to judge how accurately AI captured the essence of it.

I then used two different tools: NotebookLM (based on Google’s Gemini language model) and Skimle, a Finnish tool still in testing. I was curious about how each tool summarised, categorised, and themed the data, and what kinds of questions and perspectives they raised. I also wanted to compare their outputs with each other and with my own analysis. Could AI surface something I had missed? Would it enrich the analysis with new angles—or would it flatten out diverse voices?

What I found was encouraging, though not surprising: AI is effective at structuring large datasets. Its categories differed somewhat from mine, but the substance was largely the same. This confirmed that my own analysis had captured the key themes comprehensively. Yet I had secretly hoped AI would challenge my framing and reveal something entirely new. That did not happen.

Why human work is still essential

Despite its usefulness, AI cannot replace thorough human work. A researcher needs to be familiar with the material to evaluate the AI’s categorisations and interpretations. In my experiment, AI did not hallucinate or produce false content, but without a deep understanding of the dataset, it would have been difficult to judge the significance of its suggestions.

Another feature of AI-generated content is its initial polish: at first glance, it can look impressively smooth—even brilliant. But closer examination often reveals flatness, repetition, overlaps, empty phrases, and a weakening sense of narrative.

It is also the researcher’s responsibility to situate observations in context. In my trial, I had fully anonymised the dataset, so AI had no metadata such as organiser or participant details. While it still produced meaningful classifications, linking insights back to context required a clear system on my part.

The benefits of AI

  • A practical and user-friendly tool for structuring and managing large datasets
  • Speeds up and streamlines data processing
  • Allows versatile exploration through prompts and generates useful visualisations and tables
  • Enables targeted searches (e.g. extracting all mentions of a certain topic without rereading thousands of pages)
  • Acts as a “second reader,” allowing researchers to reflect on and validate their interpretations
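To make the “targeted search” benefit above concrete, here is a minimal, purely illustrative sketch of what such a search does over a set of dialogue records. The function name, record identifiers, and sample texts are invented for this example; real AI tools go further by also matching paraphrases, not just exact words.

```python
def mentions(records, keyword):
    """Collect every line in the dialogue records that mentions a keyword.

    A plain-text stand-in for the kind of targeted search an AI tool
    offers; actual tools also surface paraphrases and related phrasing.
    """
    hits = []
    for doc_id, text in records.items():
        for line in text.splitlines():
            # Case-insensitive match so "Loneliness" and "loneliness" both count
            if keyword.lower() in line.lower():
                hits.append((doc_id, line.strip()))
    return hits

# Hypothetical snippets standing in for real dialogue transcripts
records = {
    "dialogue_001": "We talked about loneliness.\nNature walks help recovery.",
    "dialogue_002": "Loneliness among the young came up repeatedly.",
}
results = mentions(records, "loneliness")
```

Even this simple version shows why the approach scales: the same call works unchanged whether there are two records or several hundred.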

Points to keep in mind

  • AI accelerates processing and provides new ways of examining material, but it cannot replace human judgment, contextual understanding, and narrative building.
  • You need a solid grasp of the dataset to use AI wisely. For very large datasets, this can also be gained by reviewing a representative sample.
  • AI treats all words equally. It cannot interpret context or frameworks unless you provide them. Prompts therefore matter greatly, and experimenting with them is key. Without a good feel for the data, AI may flatten results in ways that go unnoticed.
  • Documenting the stages of AI-assisted analysis is important for transparency.
  • If dialogue transcripts are themselves generated by AI, transcription errors may occur. This raises the risk of AI analysing AI-made mistakes.
  • Privacy and data protection must be carefully addressed: does the material contain personal data or indirect identifiers, and how is it anonymised or pseudonymised? Are local or cloud-based systems being used, and is the chosen tool reliable from a data protection perspective?
  • In some cases, using AI may distance participants and weaken their sense of ownership of the results, especially if trust in AI is low or its use is not clearly explained. Transparency and communication are therefore critical.
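The pseudonymisation mentioned in the privacy point above can be sketched in a few lines. This is a simplified illustration, not the method actually used in the trial: the function, the names, and the `Participant N` labelling scheme are all invented for the example. The key idea is that the same name always maps to the same pseudonym, so the who-said-what structure survives while identities are kept in a separate key held by the researcher.

```python
import re

def pseudonymise(text, names, prefix="Participant"):
    """Replace each known name with a stable pseudonym (e.g. 'Participant 1').

    Returns the cleaned text and the name-to-pseudonym key, which should
    be stored separately from the data given to any AI tool.
    """
    mapping = {name: f"{prefix} {i + 1}" for i, name in enumerate(names)}
    for name, alias in mapping.items():
        # Word boundaries avoid replacing substrings inside other words
        text = re.sub(rf"\b{re.escape(name)}\b", alias, text)
    return text, mapping

# Hypothetical record with invented names
record = "Maija opened the dialogue. Pekka agreed with Maija."
clean, key = pseudonymise(record, ["Maija", "Pekka"])
```

In practice, direct names are only part of the problem: indirect identifiers (workplaces, small localities, unusual roles) also need attention, which is why a human review of the material remains part of the process.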