Combating AI-Driven Disinformation: A Reward-Based Model for Enhancing Civic Engagement and Election Integrity

International Institute for Middle East and Balkan Studies (IFIMES)[1] from Ljubljana, Slovenia, regularly analyses developments in the Middle East, the Balkans, and around the world. In the text entitled “Combating AI-Driven Disinformation: A Reward-Based Model for Enhancing Civic Engagement and Election Integrity”, Dr. Harvey Dzodin, media commentator, author and former vice president of ABC-TV, writes about a reward-based push-pull model between the public, private, and philanthropic sectors that can effectively promote civil political discourse and fair elections.

● Dr. Harvey Dzodin

 

Combating AI-Driven Disinformation: A Reward-Based Model for Enhancing Civic Engagement and Election Integrity

 

We face an existential crisis of paramount importance to democratic institutions worldwide as malign uses of technology continue to accelerate ahead of beneficial applications. This essay argues that only through adopting a reward-based push-pull model between public, private, and philanthropic sectors can we effectively promote civil political discourse and fair elections. By acknowledging human motivational psychology and leveraging multi-sector collaboration, this approach offers promising strategies to combat increasingly sophisticated AI-generated disinformation campaigns that threaten to undermine democratic processes.

The Accelerating Threat of AI-Generated Disinformation

Where disinformation campaigns once required substantial financial and technical resources, we now live in an age where realistic AI-enabled disinformation is both inexpensive and accessible. Open-source tools like Stable Diffusion for image generation and ElevenLabs for voice synthesis have democratized the creation of convincing fake media, placing powerful capabilities in the hands of actors with limited technical expertise. The civic environment today is saturated with AI-driven fake media capable of creating hyper-realistic but false videos and audio clips of candidates and public officials, accompanied by algorithmically generated propaganda at industrial scale designed to distort democratic processes.

A particularly illustrative case occurred two days before Slovakia's 2023 parliamentary elections, when an AI-generated audio clip falsely depicted a candidate discussing vote-rigging tactics. This incident exemplifies modern election interference, where low-cost, highly realistic fakes can disproportionately influence public perception. The timing was strategic, as the fake circulated during the election blackout period when candidates and parties were prohibited from responding. Although platforms like TikTok and YouTube removed the clip (while Facebook did not), and fact-checkers eventually debunked it, the damage was already done. The party targeted by this deception had been leading in polls before the fake flooded social media but ultimately lost the election.

Recent analysis of this "Slovak case" has complicated the straightforward narrative about deepfakes swinging elections, highlighting additional contextual factors that made the electorate particularly susceptible to pro-Russian disinformation. Beyond the immediate impact on the election outcome, this case raises important questions about the growing use of encrypted messaging applications in influence operations, misinformation effects in low-trust environments, and the role politicians play in amplifying misinformation, including deepfakes.

Media Literacy: Necessary but Insufficient

Media literacy represents the gold standard in combating disinformation, as it equips individuals with the critical thinking skills necessary to distinguish between authentic and fabricated information. Without such literacy, many people lack the awareness to determine whether the information they encounter is genuine or false, and may not even recognize the personal and societal importance of making this distinction.

Finland has emerged as a global leader in this domain, consistently topping the European Media Literacy Index that measures potential disinformation vulnerabilities. The Finnish approach recognizes that media literacy is not a one-time intervention but rather an ongoing process requiring constant attention, similar to a garden that needs regular maintenance. Consequently, Finland has integrated media literacy as a core component of its national educational curriculum, beginning in early childhood and continuing throughout all educational levels.

Finland's comprehensive approach to media literacy is characterized by its integration within the national curriculum, professional development for teachers, and strong methodological approaches to teaching. This model has been identified as one of the most successful in Europe, alongside those implemented in Great Britain, Slovenia, and France. The Finnish approach to media education aims not only to enable students to analyze current information environments but also to envision and work toward desired futures. Finnish media education is particularly distinguished by its emphasis on critical thinking skills, enabling students to analyze authorship, ownership, control, and the ways media texts encode information and achieve their effects.

Many OSCE participating states have led media literacy efforts, with Finland and its Baltic neighbors at the forefront. However, recognizing the political and economic diversity of these states, not all can devote the resources that Finland does to comprehensive media education. For these countries, making information available to voters who can pull it from trusted domestic and international sources may have to suffice.

Despite the demonstrated success of comprehensive media literacy programs, relying solely on media literacy presents limitations. Even when assuming that most citizens aspire to be well-informed, they often lack strong incentives to engage critically with information when compared to malevolent actors motivated by power, status, or financial gain. The contemporary information environment, characterized by an overwhelming volume of both legitimate and false information, creates significant barriers to engagement. The high threshold of effort required to access and utilize fact-checking websites and trusted information dashboards further compounds these challenges.

The Incentive Gap

There's no particular incentive for most people to pay attention to alerts about misinformation when they're already drowning in information, disinformation, and misinformation both online and offline. Even providing the public with trusted dashboards and fact-checking websites requires overcoming a high threshold of action. This fundamental incentive imbalance necessitates innovative approaches that acknowledge human psychology and motivational factors.

A Reward-Based Push-Pull Model

To address these limitations, this essay proposes a reward-based push-pull model that leverages collaboration between public, private, and philanthropic sectors to incentivize civic engagement and disinformation detection. This approach recognizes the dual motivations of human behavior: altruism and self-interest. While virtue may be its own reward for some, others may be more effectively motivated by tangible incentives that both acknowledge their civic contributions and provide practical benefits.


The involvement of the public, private, and philanthropic spheres is critical because we need a model that motivates and rewards ordinary people enough to be proactive: either exposing disinformation themselves, or shunning debunked disinformation, spreading the truth, and participating in democratic processes such as voting.

Three Implementation Approaches

The proposed model involves neutral entities—whether foundations, civic organizations, or media platforms with no stake in the election's outcome but an interest in burnishing their reputations as good corporate citizens—implementing one or more of three potential approaches:
 

  1. Rewarding individuals who expose and publicize deepfakes, similar to the "white-hat hacker" model in cybersecurity
  2. Providing incentives for citizens who demonstrate engagement with fact-checking websites by successfully completing online quizzes about current disinformation
  3. Offering rewards for basic civic engagement, such as voting

These rewards could take various forms, including gift cards, merchandise, entries into prize drawings, digital badges, or public recognition, tailored to different contexts and preferences.

Real-World Evidence and Analogies

While some might express skepticism about incentivizing actions that should be intrinsically motivated, real-world examples demonstrate the effectiveness of such approaches. In a local Canadian election, a $100 gift card incentive may have contributed to boosting voter turnout from 17.5% in 2021 to 67% in 2024. Similarly, a University of Leicester student council election that offered free food vouchers and event tickets reported higher participation rates. Even non-monetary incentives, such as badges or public recognition, have shown positive results in enhancing civic engagement.

These reward-based systems have analogues in other domains of civic participation. In South Africa, the "Going the Extra Mile" program successfully provides digital incentives for community improvement activities, such as neighborhood clean-ups. Participants earn "gems" that can be redeemed at participating retailers who fund the program. Other jurisdictions have implemented loyalty point systems for pro-civic behaviors like using public transportation, with points redeemable at local businesses. These partnerships allow governments to offer rewards without direct expenditure while supporting civic engagement and the local economy.

Additionally, some communities have developed gamified approaches where residents earn digital badges for activities such as attending town hall meetings or contributing to local improvement projects. Online and offline leaderboards recognize achievements, leveraging the power of social recognition to motivate continuing engagement. These examples demonstrate that even non-financial gamified models can effectively encourage civic participation.

The Win-Win Scenario

Where rewards are contributed by private businesses, those businesses receive positive recognition as good corporate citizens, which may in turn increase their profits. This creates a win-win variation on Adam Smith's invisible hand: businesses benefit from the positive publicity while civic engagement increases. To be clear, this model does not advocate buying votes; rather, it encourages broader participation in democratic processes and information verification.

Conclusion

The threat posed by AI-generated disinformation to democratic processes and civil discourse demands innovative solutions that go beyond traditional approaches. While media literacy remains fundamental to building resilient societies, it alone cannot overcome the incentive imbalances that favor producers and disseminators of disinformation over ordinary citizens. The reward-based push-pull model proposed in this essay offers a complementary approach that acknowledges human nature and leverages both intrinsic and extrinsic motivations to promote civic engagement and combat disinformation.

By fostering collaboration between public, private, and philanthropic sectors, this model creates win-win scenarios where businesses receive positive recognition as good corporate citizens, governments achieve higher levels of civic participation without direct expenditure, and citizens receive tangible benefits for their engagement. Most importantly, democratic processes benefit from increased participation and reduced vulnerability to disinformation campaigns.

As AI-generated disinformation continues to evolve in sophistication and scale, our responses must similarly adapt and innovate. The reward-based push-pull model represents one potential path forward, worthy of further exploration, refinement, and experimental implementation across various contexts. The urgency of this challenge demands that we think outside conventional boundaries and embrace approaches that effectively counter the existential threat posed by technology misuse in our civic spaces.

The factual information that the public needs for elections to accurately reflect the will of the people is being threatened like never before. We face successive tsunamis of AI- and human-generated disinformation. Only by adopting innovative approaches that acknowledge basic human motivations can we hope to effectively combat this existential threat to democracy.

About the author: 
Dr. Harvey Dzodin, media commentator and author, is a former vice president of ABC-TV, a former political appointee in the Carter administration, and a Senior Fellow of the Center for China and Globalization.

This paper was written exclusively for IFIMES to accompany a talk delivered on 18 March 2025 at the Hofburg Palace, Vienna, at the OSCE event https://www.osce.org/odihr/shdm_1_2025, organized by the International Institute IFIMES under the Finnish OSCE Chairpersonship, the OSCE Representative on Freedom of the Media, and the OSCE Office for Democratic Institutions and Human Rights (ODIHR). Panel title: Media, Disruptions (Conflicts, Technologies), Truth and Reconciliation.

The article presents the stance of the author and does not necessarily reflect the stance of IFIMES.

Ljubljana/Vienna, 28 April 2025
 

[1] IFIMES - International Institute for Middle East and Balkan Studies, based in Ljubljana, Slovenia, has held special consultative status with the United Nations Economic and Social Council (ECOSOC/UN) in New York since 2018 and is the publisher of the international scientific journal “European Perspectives”, link: https://www.europeanperspectives.org/en