Citizens need clear, accessible information to make informed decisions, but today’s digital landscape can be overwhelming. This column tests whether an AI chatbot can ease election information overload for 1,500 Californian voters. The chatbot reduces the voters’ perception of the effort needed to get more information, increases voter engagement, and improves the accuracy of their understanding – especially among those who initially know less about the issues. For policymakers, this suggests transparent AI tools can help reduce inequalities in representation, but only if users trust their performance.
In a well-functioning democracy, citizens require clear, accessible information to make informed decisions. However, today’s digital information landscape can overwhelm voters, increasing cognitive costs and leading to reliance on incomplete or biased information, weakening democratic accountability (Matějka and Tabellini 2020).
In recent work (Ash et al. 2025), we argue that large language models (LLMs) and AI-powered chatbots can help address information overload in democracies. Our analysis complements existing evidence demonstrating LLMs’ potential in various fields, such as productivity gains for freelance writers (Noy and Zhang 2023), customer support (Brynjolfsson et al. 2025), and consulting tasks (Dell’Acqua et al. 2023).
To investigate whether LLMs can provide similar productivity benefits to voters navigating dense political information, we developed BallotBot, a custom chatbot designed specifically for California ballot initiatives from the November 2024 election. BallotBot employs retrieval-augmented generation, drawing directly from the official voter guide to simplify and clarify information about three major propositions. To evaluate BallotBot’s effectiveness compared to traditional voter guides, we conducted a pre-registered randomised online experiment. Specifically, we examined whether the chatbot could reduce voters’ information costs and boost overall engagement while providing access to reliable information.
BallotBot offers voters an interactive way to explore information about California’s ballot propositions. Unlike standard chatbots (e.g. ChatGPT or Claude), it uses retrieval-augmented generation, directly incorporating proposition-specific content from California’s official voter guide to improve the accuracy of its responses. We developed three versions of BallotBot, each tailored to one of the three selected ballot initiatives.
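To illustrate what retrieval-augmented generation involves, the sketch below shows a minimal pipeline of this kind: voter-guide passages are ranked against the user’s question and the most relevant excerpts are stitched into the prompt sent to a language model. This is an illustrative sketch, not BallotBot’s actual code; the bag-of-words retrieval, the example passages, and the prompt wording are all assumptions made for exposition.

```python
# Minimal retrieval-augmented generation sketch (illustrative only, not BallotBot's implementation).
# Retrieval here is plain bag-of-words cosine similarity over voter-guide passages;
# the retrieved excerpts are stitched into a prompt that would then be sent to an LLM.
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k guide passages most similar to the question."""
    q = tokenize(question)
    return sorted(passages, key=lambda p: cosine(q, tokenize(p)), reverse=True)[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Ground the model's answer in the retrieved voter-guide excerpts."""
    context = "\n\n".join(retrieve(question, passages))
    return (
        "Answer the voter's question using ONLY the official voter guide excerpts below.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# Hypothetical voter-guide passages for one proposition (placeholder text).
guide = [
    "A YES vote on this measure means the state could issue bonds for new projects.",
    "A NO vote means current funding rules would remain unchanged.",
    "The fiscal impact is estimated at several hundred million dollars per year.",
]
print(build_prompt("What does a yes vote do?", guide))
# The resulting prompt would be passed to an LLM API to generate the grounded answer.
```

Grounding answers in retrieved guide text, rather than relying on the model’s general training data, is what distinguishes this setup from querying an off-the-shelf chatbot.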
After collecting initial demographic information, approximately 1,500 participants recruited through Prolific were randomly assigned to one of two experimental groups, each with access to an informational tool: one group accessed BallotBot, while the other accessed the traditional voter guide.
In the initial phase of the experiment, participants completed an incentivised quiz on one of three California ballot propositions, consisting of basic and in-depth questions. Basic questions were answerable using introductory content from the voter guide, while in-depth questions required deeper reading. Participants earned 20 cents for each correct answer and could use their assigned informational tool (i.e. either BallotBot or the traditional voter guide) during the quiz. Afterwards, a randomly selected subset received feedback on quiz performance to induce varying confidence in the informational tools.
We recorded participants’ probability of providing correct answers, the time spent on each question, and their confidence in their answers.
To measure the perceived cost (effort) associated with obtaining political information, we used an incentive-compatible method by asking participants how much additional compensation they would require (i.e. their ‘bid’) to answer one more quiz question; a lower bid indicated a lower perceived cost. After the quiz, participants had the opportunity to freely interact with their assigned resource, exploring any further questions they had about the proposition. We tracked their engagement by measuring the amount of time they spent interacting with the source. Participants retained access to their assigned resource throughout the following week.
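One standard way to make this kind of bid elicitation incentive-compatible is a Becker–DeGroot–Marschak (BDM) style rule, in which the bid is compared against a randomly drawn price. The sketch below illustrates that logic; the payoff rule, price range, and example values are assumptions for exposition and may differ from the exact protocol used in the experiment.

```python
# Illustrative BDM-style elicitation of the perceived cost of answering one more question.
# The specific payoff rule and price range are assumptions, not the paper's exact design.
import random

def elicit_extra_question(bid_cents: int, max_price_cents: int = 100) -> dict:
    """If a randomly drawn price meets or exceeds the bid, the participant answers
    the extra question and is paid that price; otherwise they skip it.
    Reporting the true reservation price is optimal because the bid never affects
    the price actually paid."""
    price = random.randint(0, max_price_cents)
    answers_extra = bid_cents <= price
    return {
        "bid_cents": bid_cents,
        "drawn_price_cents": price,
        "answers_extra_question": answers_extra,
        "payment_cents": price if answers_extra else 0,
    }

# A lower bid signals a lower perceived effort cost of engaging with the material.
print(elicit_extra_question(bid_cents=35))
```

Under this kind of rule, the bid can be read directly as a money-metric measure of the perceived cost of acquiring further political information.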
In the second wave, conducted one week later, we reassessed participants’ political knowledge using a similar quiz, but this time without access to external resources. We also asked participants to explain their voting intentions through an open-ended question. Finally, in the third wave, conducted after election day, we collected self-reported data on voter turnout and voting behaviour.
First, we find that participants who had access to BallotBot performed significantly better on in-depth questions, with accuracy improving by approximately 18% compared to those using the traditional voter guide (Figure 1, panel a). They also took about 10% less time to answer these challenging questions (panel b). However, participants with BallotBot access did not show improved accuracy on basic questions and actually spent slightly more time answering these simpler questions. BallotBot did not significantly affect overall confidence in answers (basic or in-depth; panel c).
Figure 1 Participants with BallotBot access performed better on in-depth questions. Panels: (a) accuracy (share of correct answers); (b) response time (time to answer questions); (c) confidence (self-reported confidence in answers).
A key finding from our analysis is that BallotBot effectively reduces the perceived effort required to learn about ballot initiatives (Figure 2). Focusing on participants who received feedback, we asked them how much money they would require to answer an additional quiz question, with lower amounts indicating lower perceived effort. Although BallotBot did not significantly lower the average declared amount for all participants, it notably reduced the amount among individuals with less prior knowledge about the propositions and/or lower education (Figure 2, panel b). This result is consistent with recent studies suggesting generative AI tools provide greater benefits to users with lower initial knowledge or skill levels (Brynjolfsson et al. 2025, Cui et al. 2024).
Interestingly, this effect is not observed if respondents do not receive feedback on the number of correct answers, consistent with earlier research highlighting that feedback and transparency are essential for increasing trust and acceptance of AI-driven recommendations (Ahn et al. 2024).
Figure 2 BallotBot reduces perceived effort required to learn about ballot initiatives. Panels: (a) perceived cost; (b) perceived cost, by prior knowledge.
A further finding is that BallotBot boosts curiosity and engagement with ballot initiatives. When given the chance to explore information they judged important for their vote, participants using BallotBot spent, on average, about 70% more time gathering that information (Figure 3, panel a). They were also 75% more likely to access optional bots for additional propositions (panel b). Beyond exploration, BallotBot modestly improved perceived information quality – about a 17% higher likelihood of rating the resource as better than usual (panel c) – but it did not significantly increase mid-week use of the resource (panel d).
Figure 3 BallotBot boosts curiosity and engagement with ballot initiatives. Panels: (a) time spent collecting information; (b) probability of checking other propositions; (c) resource rated better than the usual means of information; (d) used resource during the week.
Next, we find that the effects of BallotBot do not persist over time (Figure 4). After the first wave, participants maintained access to BallotBot or the voter guide, though only around 10% used them during the interim week. One week later, participants completed a second quiz without access to the tool. We observed no significant difference in quiz scores between the BallotBot and voter guide groups. Additionally, we asked participants to write brief statements explaining their voting intentions. AI-based evaluation found no differences in the reasoning quality between the two groups.
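As an illustration of how an LLM-based evaluation of open-ended statements can work, the sketch below builds a grading prompt and parses a numeric score from the grader’s reply. The rubric, the five-point scale, the JSON reply format, and the placeholder `call_llm` step are assumptions for exposition; they are not the evaluation prompt used in the study.

```python
# Schematic LLM-based grading of open-ended voting explanations.
# Rubric, scale, and reply format are illustrative assumptions, not the paper's exact setup.
import json

RUBRIC = (
    "Rate the following voter statement from 1 (no substantive reasoning) to 5 "
    "(clear, factually grounded reasoning about the proposition). "
    'Reply as JSON: {"score": <int>, "justification": "<one sentence>"}'
)

def build_grading_prompt(statement: str) -> str:
    return f"{RUBRIC}\n\nStatement:\n{statement}"

def parse_grade(llm_reply: str) -> int:
    """Extract the numeric score from the grader's JSON reply."""
    return int(json.loads(llm_reply)["score"])

prompt = build_grading_prompt("I support it because the guide says it funds school repairs.")
# reply = call_llm(prompt)   # placeholder: any chat-completion API would be called here
reply = '{"score": 4, "justification": "Cites a concrete, guide-based reason."}'  # mock reply
print(parse_grade(reply))
```

Scoring both treatment groups with the same automated rubric keeps the comparison of reasoning quality consistent across thousands of free-text responses.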
However, descriptive analysis offered further insights. Participants who liked BallotBot and accessed it during the interim week showed improved quiz performance, suggesting enhanced factual understanding. Conversely, participants who accessed the voter guide again produced higher-quality written explanations, indicating better articulation and reasoning. These findings suggest that each tool may support voter learning in different, potentially complementary ways.
Figure 4 Effects of BallotBot do not persist over time. Panels: (a) effect of BallotBot on share of correct answers and quality of written voting motivation; (b) effect of long usage; (c) effect of long usage, voter guide users; (d) effect of long usage, BallotBot users.
Finally, we examined whether BallotBot influenced declared voting behaviour. We found no statistically significant effect on self-reported voter turnout or on the likelihood of voting ‘yes’ on any of the assigned propositions.
Our findings demonstrate that BallotBot reduces the perceived effort required to obtain political information, increases voter engagement, and improves the accuracy of voters’ understanding – especially among those who initially know less about the issues. Importantly, these positive effects emerge only when users can clearly see how well the tool performs, underscoring the need to build users’ trust in AI recommendations.
For policymakers, these results suggest that integrating transparent AI tools into voter education could support voters who stand to gain the most, potentially reducing inequalities in representation. Ensuring users can directly assess the quality and reliability of AI-generated information will be essential for users to fully benefit from these technologies.
We also provide suggestive evidence that AI tools like BallotBot complement, rather than substitute, traditional voter guides. Policymakers should therefore combine AI with existing voter information methods.
Source: cepr.org