Faking in the making of democracy: Deepfakes in elections

While AI start-ups talk about making deepfakes of politicians and the growing demand for AI content, policy, law and cybersecurity experts highlight the threats and the legal ambiguity surrounding deepfakes.


The Bollywood actor Ranveer Singh recently garnered attention not only for being the showstopper on the banks of the River Ganges in Varanasi, but also for criticizing the BJP government and Narendra Modi from the same place while talking to the media. While the first event is true, the second involves a synthetic video, a deepfake, about which the actor has already lodged a complaint. What is more concerning, however, is the rapid spread of the video, which makes deepfakes one of the biggest issues for India's defining event this year: the Lok Sabha elections of 2024.


Ranveer Singh was seen criticising the BJP in a deepfake video.

A few days before Singh’s video surfaced online during the first phase of voting, another video of the actor Aamir Khan went viral, where he appeared to be endorsing Congress. However, before anyone could connect his previous remarks on 'intolerance in India' with this endorsement, it was revealed to be a deepfake. This clarification came later from the actor’s office, along with a complaint to the Cyber Crime Cell of the Mumbai Police. "No one, especially viewers, can distinguish between an original and a deepfake video," says Senthil Nayagam, the founder of Muonium, a Chennai-based startup.


The Chennai-based start-up Muonium created the deepfake video of M. Karunanidhi.

Nayagam's statement is grounded in first-hand experience: he created the deepfake video of the former DMK leader M. Karunanidhi, who served multiple terms as Chief Minister of Tamil Nadu. The video circulated in January, depicting the politician, who passed away in 2018, addressing a rally. "Some representatives of the DMK had approached me last year for such a video. I utilized clips from the 1980s when he was in his 60s, as he looked and sounded far better back then, to create the synthetic video," Mr. Nayagam explains, noting that he charged Rs 7 lakh for the service. "The cost depends on the number of targeted voters and the customizations required by the parties," he adds.

Similar to the deepfake of M. Karunanidhi, even Mahatma Gandhi can be seen soliciting votes for the Congress, as this election has seen multiple deepfakes in circulation, including of deceased politicians. Discussing the ethical considerations surrounding the creation of such videos for start-ups like Muonium, Mr. Nayagam remarks, "We aim to evoke nostalgia by depicting deceased politicians in alignment with their areas of popularity, without attempting to influence people's voting perceptions." He also suggests the possibility of a deepfake video featuring the late Bal Thackeray, the founder of Shiv Sena, potentially refurbishing the tarnished image of the party's factions in Maharashtra.


Senthil Nayagam's studio, where his team works on deepfakes.

Having said that, Mr. Nayagam also opines that members of political parties are 'not easy to deal with.' "They are reluctant to relinquish power and often have egos. It takes a long time to obtain approvals from them," says Mr. Nayagam, who took around two to three hours to create Karunanidhi's video. Owing to these and several other factors, he prefers not to work for political parties and instead focuses on clients in the entertainment industry.

The entry of deepfakes 

Deepfake videos came into the limelight in India mainly after Rashmika Mandanna's deepfake video went viral last year. According to Deep Trace Labs, a deepfake detection technology firm, deepfake images and videos have largely been used in pornography since the technology's emergence in 2017, and 96% of deepfakes are non-consensual pornographic videos. However, deepfakes made their presence felt in Indian politics as early as the 2020 Delhi Assembly elections, when videos of Delhi's BJP chief Manoj Tiwari surfaced online in Hindi, English and Haryanvi, of which only the Hindi version was authentic.


The BJP was at the forefront of deepfakes, with videos of Manoj Tiwari shared during the 2020 Delhi Assembly elections.

But in the run-up to the 2024 elections, synthetic content began reaching voters in many forms: a deepfake clip of KBC, Ashok Gehlot's cloned voice used for sending personalized WhatsApp messages during the Rajasthan Assembly elections, Yogi Adityanath apparently speaking in Odia, and Mamata Banerjee seemingly calling voters to ask for feedback on her work. Deepfakes of every possible kind have been circulating among voters since last year.

“It is a continuation of technological evolution,” says Meghna Bal, director at the Esya Centre, a Delhi-based tech-policy think-tank, about the amalgamation of Artificial Intelligence (AI) and Machine Learning (ML) that has enabled computer systems to create deepfakes. A typical pipeline trains an AI encoder to learn the similarities between two faces and compress their images; each encoded image is then fed into the “wrong” decoder, the one trained on the other face, to swap the faces. This is done on every frame to make the video convincing. This generative-AI production, simply put, involves audio and video cloning.
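The shared-encoder, two-decoder idea described above can be sketched in a few lines. This is a minimal illustration, not a working deepfake system: the weights here are random stand-ins for what training on thousands of frames of each face would produce, and the dimensions are toy-sized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: each "frame" is a flattened 8x8 grayscale face patch.
FRAME_DIM, LATENT_DIM = 64, 16

# One shared encoder learns features common to both faces;
# each face gets its own decoder. Random weights stand in for training.
W_enc = rng.standard_normal((LATENT_DIM, FRAME_DIM)) * 0.1
W_dec_a = rng.standard_normal((FRAME_DIM, LATENT_DIM)) * 0.1  # decoder for face A
W_dec_b = rng.standard_normal((FRAME_DIM, LATENT_DIM)) * 0.1  # decoder for face B

def encode(frame):
    """Compress a frame into the shared latent representation."""
    return np.tanh(W_enc @ frame)

def decode(latent, W_dec):
    """Reconstruct a frame from the latent code with a given decoder."""
    return W_dec @ latent

def face_swap(frames_a):
    """Feed face A's encodings into face B's ("wrong") decoder, frame by frame."""
    return [decode(encode(f), W_dec_b) for f in frames_a]

video_a = [rng.standard_normal(FRAME_DIM) for _ in range(3)]  # 3 frames of face A
swapped = face_swap(video_a)  # every output frame now "wears" face B
```

The key design point is that the encoder is shared: because it must represent both faces in the same latent space, swapping the decoder transfers expression and pose from one face onto the other.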

'Demand for AI-content has increased'

Beyond the elections, the demand for such AI-generated content has been “exponentially increasing”, as the Bengaluru-based AI start-up DaveAI has observed and recorded. “On average, we receive 15-20 requests per month from various clients. The numbers have increased in the last 6 months. This is because there are more live use cases and case studies emerging and every digital leader wants to explore how generative AI can benefit their business,” says Sriram PH, the CEO and co-founder of DaveAI.



The start-up claims to be explicit about its avatars being identifiable as AI avatars and says it is transparent with users about when they are interacting with or consuming AI content. The founder states that their work is always aligned with the legal framework of the client's geography. "However, we focus on data sovereignty and governance in generative AI deployments for our clients," he adds, noting that modern AI technologies have become sophisticated enough that the content they produce, such as deepfakes, appears close to reality.

In political academia, Dr. Abhinaya Ramesh, Professor and Head of the Political Science Department at K.J. Somaiya College of Arts & Commerce in Mumbai, notes that deepfakes have been a topic of discussion since 2018. "But since last year, the discussion has intensified around understanding the various types of deepfakes and their malicious use to tarnish opponents' images," Dr. Ramesh explains, highlighting the anti-democratic ambiance, often termed illiberalism, that such content creates.

Can deepfakes be identified?

As difficult as it may seem to identify a deepfake video, cyber solutions firms like Cyfirma suggest several indicators to look for. "One can try to identify by checking if the eyes seem off with no blinking or strange eye movements," says Kumar Ritesh, the founder of Cyfirma. He advises to "check if the skin looks too smooth or has odd discolorations, look for blurry edges around the face or hair, and notice if something seems 'off' with the lighting or shadows appearing inconsistent." Ritesh adds, "As the technology improves, deepfakes become harder to catch. It's a constant cat-and-mouse game."
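The first tell Ritesh mentions, absent or strange blinking, can be checked mechanically. A common approach tracks the eye-aspect ratio (EAR), which drops sharply when the eye closes. The sketch below assumes the per-frame EAR values have already been extracted by a facial-landmark detector; the 0.2 threshold and two-frame minimum are illustrative choices, not calibrated values.

```python
BLINK_THRESHOLD = 0.2   # an EAR below this counts as a closed eye
MIN_CLOSED_FRAMES = 2   # require the eye to stay closed briefly

def count_blinks(ear_per_frame):
    """Count blinks in a sequence of per-frame eye-aspect ratios."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < BLINK_THRESHOLD:
            closed_run += 1
        else:
            if closed_run >= MIN_CLOSED_FRAMES:
                blinks += 1
            closed_run = 0
    if closed_run >= MIN_CLOSED_FRAMES:  # blink still in progress at clip end
        blinks += 1
    return blinks

# A real face blinks every few seconds; many early deepfakes barely blink.
real_clip = [0.31, 0.30, 0.12, 0.10, 0.29, 0.32, 0.11, 0.09, 0.30]
fake_clip = [0.31, 0.30, 0.31, 0.29, 0.30, 0.32, 0.31, 0.30, 0.31]
print(count_blinks(real_clip), count_blinks(fake_clip))  # prints "2 0"
```

As Ritesh notes, heuristics like this age quickly: newer generators synthesize blinking, so detection tools must keep adding signals.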


Senthil Nayagam's Muonium working on audio cloning processes.

Pankit Desai, the co-founder and CEO of Sequretek, says, “It is important to check for any information that can confirm its authenticity, such as timestamps, metadata, or corroborating accounts from trusted sources.” He adds that if confronted with a video like this, one should “consider the source of the content and the circumstances surrounding it”. “Check if it is from a reputable source and if it aligns with other information you know to be true,” Mr. Desai adds.
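One concrete, if partial, way to act on Desai's advice about corroborating with trusted sources is to compare a cryptographic fingerprint of a downloaded clip against a hash published by the original outlet or the person depicted. The sketch below uses Python's standard library only; the temporary file stands in for real footage, since no actual video is assumed.

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=1 << 16):
    """Hash a media file in chunks so large videos don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a downloaded clip with a temporary file (stand-in for real footage).
with tempfile.NamedTemporaryFile(delete=False, suffix=".mp4") as f:
    f.write(b"fake video bytes")
    clip_path = f.name

digest = sha256_of(clip_path)
os.remove(clip_path)
```

A matching hash only proves the file is byte-identical to what the trusted source published; it says nothing about a clip that was synthetic from the start, which is why Desai pairs it with checking timestamps, metadata, and context.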

“Deepfake audio may be devoid of the typical subtleties seen in real recordings. Keep an ear out for anomalies in tone, pitch, and ambient noise. Red flags can be raised by irregularities in the backdrop, surroundings, or general context, as deepfakes may find it difficult to recreate realistic settings,” says Major Vineet Kumar, who is serving in the Indian Army and is the Global President and founder of CyberPeace, which offers cybersecurity services.

One of the major reasons for the increase in synthetic content, as stated by Meghna Bal, is that "technology has become more accessible." According to a 2022 report by the industry body Internet and Mobile Association of India and the market data analytics firm Kantar, over half of India's 1.4 billion people have internet access. "Socio-economic demographics play a pivotal role in determining who is most likely to believe in such synthetic content," Meghna adds. With such a vast base of internet users, it becomes easy for such videos to spread, which she identifies as a major challenge. "It should be incumbent upon the actors/politicians featured in the videos to clarify," she says.


KT Rama Rao, in the deepfake video, asked the voters to vote for the rival, the Congress.

However, even in cases involving living politicians, such videos often achieve their objectives before they can be rectified and brought to the public's attention. On November 30 last year, voters heading to the legislative elections in Telangana encountered a video of the BRS Leader KT Rama Rao, who was governing the state, apparently urging people to vote in favor of the Congress, the rival party. This incident potentially influenced the election outcome, favoring the Congress party in the state.

'We are an awareness-deficient country'

“We are still an awareness-deficient country. Either our awareness is at extremes or is literally zero. So, the grey zone is reflected in marketing teams, IT cells of political parties,” says Abhivardhan, the Chairperson and Managing Trustee of the Indian Society of Artificial Intelligence and Law. “It is a myth that people have less attention span as our brains are actually designed to read better, watch better, listen better,” he says while talking about the spread of long deepfake videos.



Jaspreet Bindra, who runs The Tech Whisperer, employs numerous digital-age references to illustrate how deepfakes can manipulate public opinion by presenting false information or fabricated statements. "Imagine if your favorite meme page suddenly started spouting political lies, or if you couldn't discern whether your meme was true or not—that's the kind of mind game deepfakes play," he explains. Bindra, also a visiting professor at Ashoka University and an alumnus of Cambridge University, emphasizes, "When we can't even trust the videos we see, how can we trust the people in them?" He adds, "Our media functions like digital detectives, striving to separate truth from lies. However, deepfakes throw a major curveball, significantly complicating their task."

Discussing the potential dangers deepfakes pose to democracy, Bindra asserts, "It opens the door to digital invaders. Foreign hackers might employ deepfakes to interfere with our elections, akin to a cyber invasion that undermines our democracy from behind screens." He underscores that deepfakes are more than just digital pranks; they represent serious threats to democracy. "It's like playing a game where the rules keep changing—and not in our favor," Bindra concludes.

Why is the legal framework for deepfakes unclear?

In the context of deepfakes, as Abhivardhan says, “If we look at Rule 3(1)(b) of the Information Technology Rules, 2021, it is about misinformation. So, when you spread misinformation through deepfake, it is not defined properly as a deepfake is not just about videos. The multi-dimensional part of deepfakes is that it is audio plus visual and could also be equated by textual manipulation to the extent of how it is presented. So, it can be covered under the IT rules,” he says.

For start-ups making such content for their products, it is a consumer-law problem, Abhivardhan opines. “We have consumer law courts for this, but since such videos are political and a public law matter, there is unclarity about redressal,” he says.


Yogi Adityanath's deepfake video showed him speaking in Odia.

While talking to us, Abhivardhan wonders if the Digital India Act will come this year. However, he says that with the IT Rules, one can “curb misinformation”. “With ‘misinformation’ being clear within the context of public law, one can file petitions in the Supreme Court and the High Courts under Articles 32 and 226 of the Indian Constitution respectively.”

Meghna Bal is of the opinion that the “mitigation regulators” need to create incentives for such AI start-ups to make their intentions behind the deepfakes clear to the audience. Calling such deepfake videos a “misuse of technology”, she says that a regulatory framework for deepfakes first needs to clarify and classify between “Generative AI” and “Responsible AI”, and define what “harm” is being generated, before coming up with specific provisions for the regulation of deepfakes.

On the other hand, Bal thinks the existing provisions are adequate to monitor AI start-ups. On February 16, major tech companies signed an accord at the Munich Security Conference to adopt “reasonable precautions” to prevent AI tools from being used to disrupt elections around the world. Moreover, the European Parliament recently approved the Artificial Intelligence Act, which sets transparency requirements, mandates compliance and aims to reduce risks, including by banning the untargeted scraping of facial images from surveillance footage.



Ahead of the elections, the IT Ministry issued an advisory directed at AI companies, stating that if they offer "under-testing/unreliable" AI systems to Indian users, they must seek permission from the Centre and also label the possible "fallibility or unreliability of the output generated." Following backlash, Minister of State for Electronics and IT Rajeev Chandrasekhar clarified that the advisory did not apply to start-ups.

However, defending the advisory, Major Vineet Kumar from CyberPeace, who claims to share the responsibility with the government for advocating "safe and true cyberspace and establishing cyber peace," says that it requires "collective efforts by all to work together to strengthen our defenses against this sneaky phenomenon and maintain the trustworthiness of our internet-based culture in the face of ever-changing technological challenges by emphasizing preventive measures like robust cybersecurity strategy, legal frameworks regulating AI, deepfake detection technologies, user awareness, and more."

Among the platforms, Microsoft has announced digital watermarks for AI-generated content, while Meta announced that political ads must disclose if they used AI. Discussing the same, Pankit Desai from Sequretek suggests that social media platforms should invest in developing and implementing advanced detection algorithms and tools to identify and flag deepfake content on their platforms. "This can include AI-based systems trained to recognize patterns indicative of manipulation. They should enforce clear policies against disseminating deepfakes and AI-generated content designed to deceive or manipulate users. And most importantly, they should implement mechanisms for reporting and removing such content swiftly," he says.


Voters wait in the queue to cast their votes in the first phase of elections in Tripura.

“It demands a sophisticated defence strategy at the intersection of technology, education, legislation, and platform governance. Technological mitigation stands as a linchpin in our arsenal against deepfakes. Leveraging machine learning algorithms, we can develop robust detection tools capable of discerning nuanced anomalies in facial dynamics, vocal patterns, and contextual inconsistencies,” Kumar Ritesh from Cyfirma adds. 

Deepfakes, which have already gained a prominent place in the ongoing elections, have the potential to sway voters' minds in various ways, and experts worry about a bleak future. “Today it is happening in politics. Tomorrow, it will happen in economics. One day, it will happen in the FMCG sector, fashion and entertainment,” Abhivardhan says, adding that while there are copyright and patent issues in entertainment, it is hard to tell what the influence of deepfakes will look like for other industries and businesses.
