AI and deepfakes blur reality in India elections


By News Desk


Published: 16/05/2024

In November last year, Muralikrishnan Chinnadurai was watching a livestream of a Tamil-language event in the UK when he noticed something odd.

A woman introduced as Duwaraka, daughter of Velupillai Prabhakaran, the Tamil Tiger militant chief, was giving a speech.

The problem was that Duwaraka had died more than a decade earlier, in an airstrike in 2009 during the closing days of the Sri Lankan civil war. The then-23-year-old's body was never found.

And now, here she was - seemingly a middle-aged woman - exhorting Tamilians across the world to take forward the political struggle for their freedom.

Mr Chinnadurai, a fact-checker in the southern Indian state of Tamil Nadu, watched closely, noticed glitches in the footage and soon established that the figure had been generated by artificial intelligence (AI).

The potential problems were immediately clear to Mr Chinnadurai: "This is an emotive issue in the state [Tamil Nadu] and with elections around the corner, the misinformation could quickly spread."

As India goes to the polls, it is impossible to avoid the wealth of AI-generated content being created - from campaign videos to personalised audio messages in a range of Indian languages, and even automated calls made to voters in a candidate's voice.

Content creators like Shahid Sheikh have even had fun using AI tools to show Indian politicians in avatars we haven't seen them in before: wearing athleisure, playing music and dancing.

But as the tools get more sophisticated, experts worry about their implications when it comes to making fake news appear real.

"Rumours have always been a part of electioneering. [But] in the age of social media, it can spread like wildfire," says SY Qureshi, the country's former chief election commissioner.

"It can actually set the country on fire."

India's political parties are not the first in the world to take advantage of recent developments in AI. Just over the border in Pakistan, the technology allowed jailed politician Imran Khan to address a rally.

And in India itself, Prime Minister Narendra Modi has already made the most of the emerging technology to campaign effectively - addressing an audience in Hindi that, using the government-created AI tool Bhashini, was translated into Tamil in real time.

But it can also be used to manipulate words and messages.

Last month, two viral videos showed Bollywood stars Ranveer Singh and Aamir Khan campaigning for the opposition Congress party. Both filed police complaints saying these were deepfakes, made without their consent.

Then, on 29 April, Prime Minister Modi raised concerns about AI being used to distort speeches by senior leaders of the ruling party, including himself.

The next day, police arrested two people, one each from the opposition Aam Aadmi Party (AAP) and the Congress party, in connection with a doctored video of Home Minister Amit Shah.

Mr Modi's Bharatiya Janata Party (BJP) has faced similar accusations from opposition leaders in the country.

The problem is - despite the arrests - there is no comprehensive regulation in place, according to experts.

Which means "if you're caught doing something wrong, then there might be a slap on your wrist at best", according to Srinivas Kodali, a data and security researcher.

In the absence of regulation, creators told the OceanNewsUK they have to rely on personal ethics to decide the kind of work they choose to do or not do.

The OceanNewsUK learned that requests from politicians included pornographic imagery and morphed video and audio of their rivals, intended to damage their reputations.

"I was once asked to make an original look like a deepfake because the original video, if shared widely, would make the politician look bad," reveals Divyendra Singh Jadoun.

"So his team wanted me to create a deepfake that they could pass off as the original."

Mr Jadoun, founder of The Indian Deepfaker (TID), which created tools to help people use open source AI software to create campaign material for Indian politicians, insists on putting disclaimers on anything he makes so it is clear it is not real.

But it is still hard to control.

Mr Sheikh, who works with a marketing agency in the eastern state of West Bengal, has seen his work shared without permission or credit by politicians or political pages on social media.

"One politician used an image I created of Mr Modi without context and without mentioning it was created using AI," he says.

And it is now so easy to create a deepfake that anyone can do it.

"What used to take us seven or eight days to create can now be done in three minutes," Mr Jadoun explains. "You just need to have a computer."

Indeed, the OceanNewsUK got a first-hand look at just how easy it is to create a fake phone call between two people - in this case, me and former US president Donald Trump.

Despite the risks, India had initially said it wasn't considering a law for AI. This March, however, it sprang into action after a furore over the response of Google's Gemini chatbot to a query asking: "Is Modi a fascist?"

Rajeev Chandrasekhar, the country's junior information technology minister, said it had violated the country's IT laws.

Since then, the Indian government has asked tech companies to get its explicit permission before publicly launching "unreliable" or "under-tested" generative AI models or tools. It has also warned against responses by these tools that "threaten the integrity of the electoral process".

But it isn't enough: fact-checkers say keeping up with debunking such content is an uphill task, particularly during elections, when misinformation hits a peak.

"Information travels at the speed of 100km per hour," says Mr Chinnadurai, who runs a media watchdog in Tamil Nadu. "The debunked information we disseminate will go at 20km per hour."

And these fakes are even making their way into the mainstream media, says Mr Kodali. Despite this, the "election commission is publicly silent on AI".

"There are no rules at large," Mr Kodali says. "They're letting the tech industry self-regulate instead of coming up with actual regulations."

There isn't a foolproof solution in sight, experts say.

"But [for now] if action is taken against people forwarding fakes, it might scare others against sharing unverified information," says Mr Qureshi.
