
Deepfakes have penetrated India’s general election

May 21, 2024
Topic: Digitalisation
Tags: #India, #election, #AI, #tech, #politics
Located: India
By: Hanan Zaffar, Shaheen Abdulla
As India's monumental general election unfolds, the surge of AI presents ethical dilemmas amid growing concerns about misinformation.

The world’s largest elections are being held in India, with close to a billion people voting in seven phases, beginning in April and ending in the first week of June. Capitalising on the monumental event, Divyendra Singh Jadoun (31), founder of the artificial intelligence start-up The Indian Deepfaker, has discovered a niche market for his company.

Different political parties seeking to use AI for campaigning are reaching out to him, but Jadoun says he frequently grapples with ethical dilemmas regarding the needs of his clients.

“We refuse to do 50 per cent of the assignments,” he told FairPlanet.

Artificial Intelligence (AI) has become a new obsession in a heavily polarised election that will decide whether the right-wing ruling Bharatiya Janata Party (BJP) will be returned for a rare third term. The potential of the technology in a country grappling with misinformation has alarmed experts at the Internet Freedom Foundation (IFF), a prominent digital rights organisation based in India.

“The use of deepfakes for disseminating misleading or deceptive narratives about political candidates and affiliated institutions may affect voter perspectives, sentiments, as well as their behaviour,” wrote IFF in an open letter to Indian Parliamentarians.

“Targeted misuse of the technology against candidates, journalists, and other actors who belong to gender minorities, may further deepen inequities in the election.”

Jadoun shares this concern. Although he works for the “two main parties” in India as well as several regional ones, he is troubled by the absence of proper guidelines on AI usage in the country. “I cannot say it is not being used in harmful ways,” he said.

AI is increasingly being utilised for innovative yet potentially risky tactics. For instance, just days before the commencement of the general election, a manipulated video featuring the late Indian politician Harikrishnan Vasanthakumar appeared on social media. In the video, Vasanthakumar is depicted endorsing his son Vijay Vasanth, a sitting Member of Parliament and a renowned film personality who is presently contesting the election in his late father’s former constituency.

“The love and trust that you’ve placed in me, I am proud that you have placed the same faith in my son Vijay Vasanth,” Vasanthakumar’s AI avatar said.

The manipulated video aimed to sway voters in Kanniyakumari, where Vasanthakumar previously held office, in support of his son's candidacy.

Several such AI avatars of deceased politicians have surfaced during the election season, and AI-generated memes created by supporters of political parties are spreading across social media platforms.

Political parties are also creating content with ease using freely available AI tools through their “IT cells,” special units responsible for managing digital campaigns and online propaganda.

In one campaign video on Instagram, Prime Minister Modi’s Bharatiya Janata Party used an old patriotic Hindi song with lyrics altered to praise Modi’s achievements. The new version was rendered in an AI-generated voice of the late legendary singer Mahendra Kapoor.

In another case, a police complaint was filed after a video message was shared on X showing Bollywood star Aamir Khan appearing to taunt Indian Prime Minister Narendra Modi.

The technology is set to impact not only India’s polls but also those of some 25 other countries holding elections this year, including the United States and the United Kingdom.

Many IT and AI companies like Jadoun’s are being roped in by political parties for “impactful” and “easy” campaigning. “We once sent personalised messages from a party leader to one million party workers within hours,” says Jadoun, who calls his work “non-traditional” campaigning.

“The new outreach can cut costs and save time,” Jadoun explains.

The Need for Responsible Use

Amid growing concern, India’s poll monitoring agency, the Election Commission of India (ECI), consulted executives of OpenAI, the creator of ChatGPT, on avoiding misuse of the technology during the high-stakes elections.

In February this year, tech giants including Microsoft, Meta, Google, Amazon, X, OpenAI and TikTok announced a joint agreement to work towards mitigating the risk of AI disrupting elections in 2024.

Last year, as AI-powered tools became popular, Modi himself described the misuse of artificial intelligence to create deepfakes as problematic and asked the media to educate people about it.

“A new crisis is emerging due to deepfakes produced through artificial intelligence. There is a very big section of society which does not have a parallel verification system,” Modi said.

His remarks came after a deepfake video of actress Rashmika Mandanna went viral on social media, forcing India’s Ministry of Electronics and Information Technology to intervene. The video depicted a woman in black entering a lift, her face later morphing into that of Mandanna.

The Indian government warned social media platforms to take such content down within 36 hours, a requirement outlined in the country’s IT Rules (2021). It also urged them to ensure that “reasonable efforts are made to identify misinformation and deepfakes.”

However, the ruling party’s supporters have been accused of using the technology during the election in violation of social media giant Meta’s regulations.

While political parties and their supporters are blatantly using AI, some Indian citizens who have employed it to satirise powerful politicians have faced consequences. In January, a youth in Tamil Nadu was arrested for sharing a satirical AI-generated video on X that humorously critiqued the state’s former chief minister.

“Neither BJP nor any other political groups have taken a step to pledge not to use such content,” Prateek Waghre, Executive Director of IFF, told FairPlanet. 

Ahead of the election, IFF wrote an open letter addressed to “electoral candidates and parliamentary representatives” on the impact of deepfakes on electoral outcomes.

In the letter, IFF requested “all political candidates and affiliated organisations to publicly commit not to use deepfakes technology to create deceptive or misleading synthetic content in the run-up to and during the 2024 general elections.”

IFF was also part of a joint letter from civil society organisations and concerned citizens to the ECI and platform companies, urging them to “add stringent measures regarding generative AI and manipulated media” during the election period.

Waghre said political parties bear greater responsibility here than the Election Commission, which he said also lacks the “technical capacity” to counter the problem.

“Technology and legislation can’t fix something a section of society wants to breach,” Waghre said, calling on political parties to instruct their supporters to refrain from creating deceptive content.

Just before the elections, the Indian government also asked tech companies to obtain explicit approval before launching "unreliable" or "under-tested" generative AI models or tools. It cautioned against AI products generating responses that could compromise the integrity of the electoral process.

But Waghre believes this is insufficient and falls short of a long-term intervention for a very “complex problem.”

A Global Threat

As easily accessible generative AI tools continue to proliferate, the country has emerged as a laboratory for manipulated media, raising concerns about electoral integrity across the globe as more than two dozen countries go to the polls this year. 

“Neither in India, nor elsewhere in the world is there a clear idea of what the measures should be on the misuse of AI during elections,” Joyojeet Pal, an Associate Professor at the School of Information at the University of Michigan, told FairPlanet. 

“The way the overwhelming majority of people refer to AI in electoral settings is in the use of doctored videos, but in practice, AI and its use is much broader than that,” he pointed out.

Pal, an expert on misinformation in India, points out that a video that is not generated by AI but is nonetheless doctored can be extremely powerful, and that there are many instances of such manipulation from past elections.

For instance, just before the 2019 general elections, a photoshopped image purportedly depicted Rahul Gandhi, the President of the main opposition Congress party, standing alongside a suicide bomber. 

“What matters is that they were manipulations, rather than their generation through new tools,” he told FairPlanet. “So sometimes the amount of harm arising out of such manipulations is hard to quantify since the effect is as strong or weak as any other audio-visual material that is used during an election campaign.”

Image by Al Jazeera English.
