By: Digital Newspaper
Date: June 22, 2025
LONDON / NEW DELHI / WASHINGTON / BRATISLAVA — As more than 60 democracies head toward national elections in 2025 and 2026, a silent revolution is altering the electoral landscape: artificial intelligence. From campaign automation to synthetic media manipulation, AI is redefining how democracies communicate, influence, and protect the vote.
AI-powered platforms are now integral to modern political strategy. In India’s 2024 elections, parties used AI to translate speeches into 14 regional languages within seconds. In the U.S., chatbots crafted targeted campaign messages based on demographic data. Meanwhile, in Slovakia, a deepfake audio clip impersonating a party leader, released days before voting, may have altered the final outcome.
FROM CAMPAIGN TO CONTROVERSY
AI now enters the electoral process on several fronts:
- Microtargeted Messaging: AI tools analyze voter profiles to tailor hyper-specific political messages (a simplified sketch of the segmentation step follows this list). While efficient, the practice has raised concerns about data privacy and psychological manipulation.
- Synthetic Media: Deepfakes, convincing but fabricated video or audio clips, are being used to impersonate politicians or misrepresent their words. In Pakistan, AI-generated videos of Imran Khan circulated widely, though they were labeled as fake.
- Automated Engagement: Chatbots on WhatsApp, Telegram, and Messenger now simulate real political volunteers, answering policy questions and persuading undecided voters.
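For readers curious how the segmentation behind microtargeting works in principle, the following is a minimal, purely illustrative Python sketch. All voter data and message templates are invented for this example, and the templates stand in for text that a deployed system would generate with a language model; this is not any campaign's actual pipeline.

```python
# Purely illustrative: a toy audience-segmentation step of the kind that underlies
# microtargeting. All voter data below is invented, and the message templates stand
# in for text a deployed system would generate with a language model.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical interest scores per voter: [economy, healthcare, climate]
voter_profiles = np.array([
    [0.9, 0.2, 0.1],
    [0.8, 0.3, 0.2],
    [0.1, 0.9, 0.3],
    [0.2, 0.8, 0.4],
    [0.1, 0.2, 0.9],
    [0.3, 0.1, 0.8],
])

# Group voters into three issue-based segments.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voter_profiles)

# Hand-written placeholder messages, keyed by each segment's dominant issue.
templates = {
    "economy": "Our plan cuts everyday costs for working families.",
    "healthcare": "We will protect and expand access to care.",
    "climate": "We are committed to clean energy and green jobs.",
}

issues = ["economy", "healthcare", "climate"]
for cluster in range(3):
    members = voter_profiles[segments == cluster]
    top_issue = issues[int(members.mean(axis=0).argmax())]
    print(f"Segment {cluster}: top issue = {top_issue!r}, message = {templates[top_issue]!r}")
```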
INTERNATIONAL CASE STUDIES
- Slovakia (2023): A viral AI-generated audio clip falsely portraying an opposition leader shook voter confidence. The government later declared a state of disinformation emergency.
- India (2024): Multiple deepfake videos of both BJP and Congress leaders were widely shared. Though authorities acted quickly, they couldn’t contain the initial spread.
- United States (2024 Primaries): AI chatbots were used in targeted states to suppress turnout by giving voters misleading election information. Investigations are ongoing.
RISKS TO DEMOCRACY
Experts warn of three primary risks:
- Voter Manipulation: Targeted disinformation campaigns may nudge voters toward specific ideologies without their awareness.
- Erosion of Trust: The line between real and fake continues to blur. When truth becomes uncertain, public trust erodes.
- Foreign Interference: Nations like Russia and China are suspected of using generative AI to spread political propaganda globally.
“AI doesn’t just automate processes—it automates persuasion,” said Dr. Natalia Grekov, a political technologist at MIT. “That’s the real danger when misused.”
RESPONSES & REGULATION
- European Union: Enacted the Digital Services Act, which holds large platforms accountable for political misinformation, and adopted separate AI Act rules requiring AI-generated content to be labeled.
- United States: Lawmakers have proposed the AI Transparency in Elections Act, which would require AI-generated political content to be clearly labeled.
- United Nations: Convened a global working group in 2025 to draft voluntary guidelines for AI use in democratic processes.
- Tech Companies: Meta, OpenAI, and Google pledged to develop watermarking technologies (one watermark-detection idea is sketched below) and to restrict the use of their tools in election manipulation.
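How might such a watermark be checked? One idea researchers have published for AI-generated text is a statistical "green list" watermark: the generator is nudged toward words selected by a pseudo-random rule, and a detector later tests whether a suspiciously large share of words obeys that rule. The Python sketch below illustrates only the detection side, under an assumed hashing rule; it is a toy, not the scheme any of these companies has actually deployed.

```python
# Toy illustration of statistical text watermarking: a generator biased toward a
# pseudo-random "green list" of words (seeded by the preceding word) leaves a
# fingerprint that a detector can test for. Simplified sketch, not a real product.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Assign roughly half of all words to a 'green list' that depends on the previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    """z-score of the green-word count versus the ~50% expected in unwatermarked text."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    green = sum(is_green(prev, cur) for prev, cur in pairs)
    n = len(pairs)
    expected, std = 0.5 * n, math.sqrt(0.25 * n)
    return (green - expected) / std

# A large z-score (e.g. above 4) would suggest a watermark-aware generator produced
# the text; ordinary human-written text should hover near 0.
print(round(watermark_z_score("the quick brown fox jumps over the lazy dog"), 2))
```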
THE PATH AHEAD
As nations hurtle toward digitally shaped elections, the balance between technological innovation and democratic integrity grows more delicate. Voter literacy, transparency laws, and ethical AI development will determine whether artificial intelligence becomes democracy’s ally—or its undoing.
“Elections are no longer just about people. They are about data, algorithms, and influence,” said EU Commissioner for Digital Rights Helena Schwartz. “And we must act fast to protect the will of the people.”