Introduction
Deepfake technology marks a significant advancement in Artificial Intelligence (‘AI’) and machine learning, particularly within the field of synthetic media creation. The term ‘deepfake’ is a combination of ‘deep learning’ and ‘fake,’ highlighting the use of deep learning algorithms to generate or alter visual and audio content to make it appear genuine. Such technology allows superimposing existing images and videos onto new content, producing hyper-realistic (but artificial) representations.
This insight examines the multifaceted dimensions of deepfake technology. It then discusses notable cases associated with the technology and addresses the challenges it raises. In doing so, it underscores the profound considerations surrounding the application of deepfake technology and examines its broader societal impacts and ethical implications.
Definition and Origins
The origins of deepfake technology can be traced back to the development of generative models within AI research. A breakthrough came in 2014 when Ian Goodfellow and his team introduced Generative Adversarial Networks (‘GANs’). A GAN consists of two neural networks – (i) a generator and (ii) a discriminator – trained in opposition. The generator’s role is to create synthetic data, while the discriminator assesses the authenticity of the data the generator produces. Over time, the generator refines its output until it produces content so realistic that the discriminator can no longer distinguish real data from synthetic data – at which point the generator has succeeded.
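The adversarial dynamic described above can be illustrated with a toy example. The sketch below is deliberately simplified – the ‘generator’ is a single number and the ‘discriminator’ a distance threshold, not neural networks – but it demonstrates the core loop: the generator keeps adjusting itself until the discriminator can no longer tell its output apart from real data.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "authentic data" distribution: N(5, 1)

def sample_real(n):
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

def discriminator(x, centre):
    # A crude discriminator: call a sample "real" if it falls
    # within 2 units of the centre it learned from the real data.
    return abs(x - centre) < 2.0

def generator(mu, n):
    # The generator's only learnable "parameter" is its mean, mu.
    return [random.gauss(mu, 1.0) for _ in range(n)]

def fool_rate(mu, centre, n=200):
    # Fraction of generated samples the discriminator accepts as real.
    fakes = generator(mu, n)
    return sum(discriminator(x, centre) for x in fakes) / n

# "Train" the discriminator: it learns the centre of the real data.
centre = sum(sample_real(1000)) / 1000

# "Train" the generator by hill-climbing: keep whichever nudge of mu
# fools the discriminator more often.
mu, step = 0.0, 0.25
for _ in range(100):
    up, down = fool_rate(mu + step, centre), fool_rate(mu - step, centre)
    mu = mu + step if up >= down else mu - step

print(round(mu, 2))           # ends up near the real mean of 5
print(fool_rate(mu, centre))  # most fakes now pass as real
```

Real GANs replace the hill-climb with gradient descent over millions of network weights, but the opposition between the two components is the same.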
Deepfake technology, like other innovations, was initially developed for beneficial purposes. However, as the adage goes, ‘with great power comes great responsibility’ – and the power of deepfakes has been wielded irresponsibly, fuelling more than a few major scandals. The technology was originally introduced to reduce human effort: the creative and entertainment industry used it for innovative ways to enhance special effects and screenplay/character development; educational institutions used it to provide immersive, interactive learning experiences; and the healthcare sector intended to use it to simulate patient interactions for training purposes. It is now widely misused – to such an extent that people are losing not only money over it, but also their self-respect and social standing.
Much of the blame for the current state of deepfake technology lies with the democratization of AI tools. Cheap and easily accessible AI tools allow people with minimal technical expertise to create deepfakes, which perpetrators then use not only to spread misinformation but also, most disturbingly, to blackmail women by creating explicit sexual content featuring them without their knowledge or consent. This unfortunate shift from constructive to malicious applications has raised significant ethical, legal, and social concerns about the implications of deepfake technology.
How Deepfakes Work – A Brief Explanation
As stated, deepfakes are created using deep learning techniques, with GANs being the primary tool. The process involves the following steps:
Data Collection: The process of data collection begins with gathering extensive data on the target subject. For a better understanding, let us assume that our target subject is Shah Rukh Khan, who portrays Justice DY Chandrachud, the Chief Justice of India ('CJI'), in his biopic. We will feed the AI with photos, videos, and audio recordings of the CJI and SRK, and such data will serve as the foundation for training the AI model.
Training the Model: The AI model analyses the data to learn the target’s facial features, voice patterns, expressions, and mannerisms. The generator then creates synthetic content based on this data while the discriminator network evaluates its authenticity, and the results are refined until the fake content closely resembles the real content.
Synthesis and Manipulation: Once trained, the AI can generate new content by superimposing the target’s likeness onto another person’s face in a video or modifying existing footage to alter actions or speech.
Post-processing: The synthetic content undergoes refinement to improve its realism. This includes adjusting lighting, shadows, and the synchronization of lip movements with speech to make the deepfake more convincing.
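The four steps above can be summarized as a conceptual pipeline. The sketch below uses plain Python functions and dummy ‘frames’ as stand-ins for the actual neural-network stages; the function names and data shapes are purely illustrative and are not drawn from any real deepfake tool.

```python
# Conceptual stand-ins for the four pipeline stages. Each "frame" is
# just a dict; in a real system these would be images and model weights.

def collect_data(subject):
    # Step 1: gather reference media (photos, video, audio) of the target.
    return [{"subject": subject, "frame": i} for i in range(3)]

def train_model(dataset):
    # Step 2: "learn" the subject's features (here, merely record them).
    return {"learned_subject": dataset[0]["subject"]}

def synthesize(model, source_frames):
    # Step 3: superimpose the learned likeness onto the source footage.
    return [{**f, "face": model["learned_subject"]} for f in source_frames]

def post_process(frames):
    # Step 4: refine lighting, shadows, and lip-sync (modelled as a flag).
    return [{**f, "refined": True} for f in frames]

reference = collect_data("target_person")
model = train_model(reference)
output = post_process(synthesize(model, [{"frame": i} for i in range(2)]))
print(output[0])
```

The point of the sketch is the data flow: reference media in, a trained likeness model in the middle, and refined synthetic frames out.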
Increased computing power, coupled with user-friendly software like FakeApp and DeepFaceLab, has made it easier for individuals to create deepfake content. These tools have reduced the technical barriers, enabling users to produce convincing deepfake videos with little prior expertise.
An Overview of the Worldwide Deepfake Scenario
The emergence of deepfake technology has sparked a global response. The challenges deepfakes pose are universal, necessitating coordinated efforts across nations to mitigate their harmful effects. As governments, legal systems, and international organizations grapple with the technology’s potential risks and misuse, the paragraphs below discuss how major economies worldwide are coping with it.
i. United States
In the United States, both federal and state governments have implemented measures to tackle deepfake-related challenges. On the federal level, one significant step is the introduction of the ‘Deepfakes Accountability Act,’ which aims to require digital watermarks on synthetic media to indicate that it is artificially generated and proposes legal consequences for malicious deepfake usage. The National Defense Authorization Act for Fiscal Year 2020 also included provisions for the Department of Homeland Security to research deepfake detection technologies. At the state level, some states have enacted more direct laws. For instance, California passed legislation banning the distribution of manipulated videos within 60 days of an election and criminalizing non-consensual deepfake pornography. Similarly, Texas has made it illegal to create or distribute deepfakes intended to influence elections. In response to the growing concerns, major tech companies such as Facebook, X (Twitter), and Google have also taken strict steps to combat the spread of deepfakes. For example, Facebook launched the Deepfake Detection Challenge, which aims to promote the development of tools to detect deepfake content.
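Digital watermarking of the kind the proposed act contemplates can take many forms. One simple technique – shown here purely for illustration – is least-significant-bit embedding, where a disclosure label is hidden in the lowest bit of each pixel without visibly changing the image. Real provenance schemes for synthetic media are considerably more robust than this sketch.

```python
MARK = "AI"  # disclosure label to embed in the image

def embed_watermark(pixels, mark):
    # Hide each bit of the label in the least-significant bit of a pixel,
    # changing each affected pixel value by at most 1.
    bits = [int(b) for ch in mark for b in format(ord(ch), "08b")]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def read_watermark(pixels, length):
    # Recover the label by collecting the low bits back into characters.
    bits = [p & 1 for p in pixels[: length * 8]]
    chars = [chr(int("".join(map(str, bits[i:i + 8])), 2))
             for i in range(0, len(bits), 8)]
    return "".join(chars)

image = [200, 13, 255, 90, 120, 33, 18, 64] * 4  # toy 32-pixel image
marked = embed_watermark(image, MARK)
print(read_watermark(marked, len(MARK)))  # -> "AI"
```

A scheme like this is trivially stripped by re-encoding the image, which is why regulators and industry groups are pursuing cryptographically signed provenance metadata rather than pixel-level marks alone.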
ii. European Union
The European Union (‘EU’) is addressing deepfakes within its broader regulatory frameworks. One of the key regulations is the General Data Protection Regulation (‘GDPR’), which can be applied to deepfakes: the unauthorised creation of a deepfake may breach privacy and data protection law by processing personal data without consent. Another significant initiative is the Digital Services Act (‘DSA’), which modernizes the legal framework for digital services. The DSA holds online platforms accountable for hosting illegal content, including deepfakes, and requires transparency in content moderation practices. In addition to these regulations, the European Commission has funded research projects to combat deepfakes, such as the SocialTruth project, which focuses on improving media verification techniques to detect and counter deepfake content.
iii. China
China has proactively attempted to regulate deepfake technology through comprehensive measures enforced by the Cyberspace Administration of China (‘CAC’). The CAC’s regulations on online audio-visual information services require that synthetic media, including deepfakes, be labelled to inform viewers that the content is artificial. These regulations are designed to prevent the misuse of deepfakes in ways that could harm national security or disrupt social order. In addition to setting these rules, the CAC actively monitors online platforms to ensure compliance. Violators face significant penalties, reinforcing the country’s strict approach to managing deepfake technology.
iv. Other Countries
In Australia, the government has introduced the Enhancing Online Safety Act, which is focused on combating image-based abuse. This legislation aims to protect individuals from the unauthorized use of their images, and the government is actively considering updates to address the growing concern of deepfakes specifically.
In the United Kingdom (‘UK’), the Law Commission has proposed legal reforms that would criminalize the creation and distribution of non-consensual deepfake pornography. Additionally, the UK has enacted the Online Safety Act 2023, which holds online platforms accountable for protecting users from harmful content, including deepfakes. The act imposes a duty of care on online platforms towards their users, requiring them to take steps to prevent the spread of malicious or harmful deepfake content.
On a global scale, international organizations such as Interpol and the United Nations have recognized the serious threats posed by deepfakes, particularly in disinformation and cybercrime. These organizations are working to promote international cooperation in developing detection tools and regulatory frameworks to address the challenges posed by deepfakes across borders.
India Chapter – Managing Deepfakes in India
Deepfake technology is being put to similar uses in India as well. While certain sectors continue to explore its legitimate uses, its misuse is also increasing at a steady pace. The table below summarizes the positive and negative applications of deepfake technology in India:
| Positive Applications of Deepfakes | Negative Applications of Deepfakes |
| --- | --- |
| Entertainment Industry: Filmmakers use deepfake technology to enhance visual effects, create digital avatars, and localize content through realistic dubbing. Actors’ faces are seamlessly integrated into dangerous stunt scenes, preserving realism and ensuring safety. | Political Manipulation: Deepfakes have been weaponized in India to spread disinformation, particularly during elections. In one notable case, a political party allegedly circulated a deepfake video of an opponent making inflammatory remarks in order to influence public opinion. |
| Education and Training: Educational institutions are working to integrate deepfakes into interactive learning experiences, especially for subjects that require extensive interaction and engagement, such as history, physics, and geography. Virtual educators use deepfakes to deliver engaging, visually rich lectures, improving learning and holding students’ attention for longer periods. | Non-Consensual Explicit Content: Deepfakes have, unfortunately, emerged as another tool to torment and traumatize Indian women. Merely by superimposing a woman’s photo onto an existing explicit video, perpetrators create fake pornographic videos of women. Such videos are then used to threaten and blackmail them and are often shared on social media and messaging platforms as a revenge mechanism. |
| Startups and Innovation: Several startups, such as Rephrase.ai and SimYog Technologies, are developing AI and deepfake tools to create personalized synthetic videos for marketing and communication purposes, allowing businesses to produce customized messages at scale using digital avatars. | Fraud and Impersonation: Deepfakes are also being used in India to commit financial frauds and scams, wherein perpetrators impersonate high-profile individuals or known persons (such as family members) to extract confidential financial data, which is thereafter extensively misused. |
Managing Deepfakes in India
India’s growing awareness of the potential risks and misuse of deepfakes is prompting calls for legal and technological measures to manage its spread. Legislative reforms, stronger regulations, and collaboration between government bodies and tech platforms are essential to safeguard against the harmful effects of deepfakes. India could benefit from global partnerships and international strategies to regulate this rapidly evolving technology. India’s legal framework addresses aspects of deepfake misuse through various statutes:
1. Information Technology Act, 2000: The Information Technology Act, 2000 (‘IT Act’) is the cornerstone of India’s legal cyber-activity framework. Several provisions within this act can be of help while handling the issues pertaining to deepfakes:
S. 66E (Violation of Privacy) deals with violating privacy by capturing, publishing, or transmitting private images without consent. In the context of deepfakes, this section can be invoked when an individual’s images or videos are manipulated and shared without permission, particularly if the content involves private or intimate imagery.
S. 67 (Obscenity) prohibits the publication or transmission of obscene material electronically. Deepfakes depicting sexually explicit content fall within the scope of this provision, enabling authorities to take action against those who create or disseminate such material.
S. 67A (Sexually Explicit Content) specifically addresses the transmission of material containing sexually explicit acts. This provision imposes stricter penalties and is relevant in cases where deepfakes depict individuals in explicit scenarios without their consent.
S. 69A (Blocking Powers) empowers the government to block public access to information that threatens national security or public order. This section allows authorities to mandate the removal of deepfake content from online platforms. However, the effectiveness of such measures can be limited due to the rapid dissemination and replication of content on the internet.
2. Indian Penal Code (‘IPC’) /Bhartiya Nagarik Sanhita (‘BNS’): The IPC/BNS contains several sections applicable to deepfake-related offences:
S. 499/356(1) (Defamation): Criminalizes actions that harm a person’s reputation through false statements or representations. Deepfakes that portray individuals in a defamatory manner can be prosecuted under this section.
S. 500/356(4) (Punishment for Defamation) prescribes penalties for defamation, including imprisonment and fines. The challenge in applying this section lies in identifying the perpetrators, who often operate anonymously or from different jurisdictions.
S. 503/351(1) (Criminal Intimidation): Pertains to threats causing alarm or compelling a person to do something against their will.
S. 419/319(1) (Cheating by Personation) applies when deepfakes are used to impersonate someone, particularly in fraudulent schemes where the impersonation leads to financial or personal harm.
S. 354C/77 (Voyeurism) and 354D/78 (Stalking) address voyeurism and stalking, respectively. These sections are relevant when deepfakes involve unauthorized use of a woman’s image in a private act or when the creation and dissemination of such content constitute harassment.
Sections on Sexual Offenses: Protect against sexual harassment, assault, and exploitation, relevant when deepfakes are used to create sexually explicit content involving women.
S. 507/351(4) (Criminal Intimidation by Anonymous Communication) and 509/79 (Word, Gesture or Act Intended to Insult the Modesty of a Woman) can also be invoked in cases where deepfakes are used to threaten, harass, or demean individuals.
3. Indecent Representation of Women (Prohibition) Act, 1986, prohibits indecent representation of women through advertisements, publications, and other media. Amendments have extended its scope to cover digital content, making it applicable to deepfakes that depict women in derogatory or sexualized manners without their consent. The act broadly defines ‘indecent representation’ under s. 2(c) as any depiction likely to deprave, corrupt, or injure public morality.
Under this act, producing, distributing, or publishing indecent representations of women is punishable by imprisonment and fines. While the act provides a legal avenue for addressing certain types of deepfake content, enforcement can be challenging due to the anonymous and transnational nature of online activity.
4. Digital Personal Data Protection Act, 2023: The Digital Personal Data Protection Act, 2023, represents a significant advancement in India’s data protection regime. The act emphasizes the rights of individuals over their personal data and imposes obligations on entities that process such data:
Consent Requirement: Entities must obtain explicit consent from individuals before processing their personal data. In the context of deepfakes, using someone’s image or likeness without consent violates this provision.
Data Principal Rights: Individuals can access, correct, and erase personal data. Victims of deepfake misuse can exercise these rights to request the removal of unauthorized content involving their data.
Penalties for Non-Compliance: The act imposes substantial fines for breaches, which can deter entities from misusing personal data to create deepfakes.
While the act primarily targets data fiduciaries—organizations that process personal data—it sets a precedent for recognizing personal images and likenesses as data requiring protection. This framework can be leveraged to hold accountable those who create or distribute deepfakes without consent.
5. Cybersecurity Framework in India: India’s cybersecurity infrastructure plays a crucial role in managing deepfakes:
Indian Computer Emergency Response Team (‘CERT-In’): This national agency responds to cybersecurity incidents. While its mandate does not specifically focus on deepfakes, CERT-In issues advisories and guidelines that can aid in detecting and mitigating malicious activities involving deepfakes.
Cyber Crime Investigation Cells: These units, housed within state police departments, investigate cyber offences. They have the technical expertise and resources to handle cases involving deepfakes, though they often face challenges due to limited manpower and rapidly evolving technology.
Intermediary Guidelines and Digital Media Ethics Code Rules, 2021: These rules impose obligations on social media platforms and intermediaries. Such entities must maintain grievance redressal mechanisms and remove unlawful content upon receiving appropriate orders. For deepfakes, platforms are responsible for removing harmful content when notified, although proactive detection remains challenging.
Recent Cases and Enforcement Challenges
As the mischief progressed, it became all but impossible to believe it could be controlled without the involvement of the courts. With deepfakes having revealed their harmful potential, how could our heroes (not Anil Kapoor and Amitabh Bachchan, but the Delhi High Court itself!) not come to the rescue?
In the case of Anil Kapoor v. Simply Life India and Ors[i], our very own Anil Kapoor filed a case against multiple defendants who had unlawfully been using his personality rights. The Delhi High Court ruled in his favour and granted an ex-parte injunction to protect his personality rights, including his name, image, voice, and persona. The court found that the defendants had misused these elements for commercial gain without his consent, including creating deepfakes, AI-generated content, and selling merchandise bearing his likeness. The court restrained the defendants from further unauthorized usage, ordered the blocking of infringing links, and directed the transfer of squatted domain names like www.anilkapoor.in to the plaintiff. Thus, it sets a strong precedent for protecting celebrities’ rights against modern technological misuse, such as deepfakes and AI-based content creation.
In the case of Amitabh Bachchan v. Rajat Negi and Ors[ii], the Delhi High Court addressed the unauthorized use of the plaintiff’s image and persona through deepfake technology. The court ruled that the defendants’ creation and dissemination of deepfake content, which manipulated the plaintiff’s likeness and image without consent, violated the plaintiff’s personality rights and privacy. The court acknowledged that such use, especially for commercial purposes or in a derogatory manner, infringes upon the right to privacy and publicity. An injunction was issued restraining the defendants from further misuse of the plaintiff’s image and ordering the removal of all deepfake content across platforms, setting a strong legal precedent against the misuse of personality through emerging technologies like AI-generated deepfakes.
Deepfakes and Disruption – Short Stories of Real-Life Disruption Caused by Deepfakes
Deepfake technology first drew major attention in 2017 when Gal Gadot, a well-known actor, had her face digitally placed onto explicit content without her consent. This misuse highlighted the serious potential for harm that deepfakes can cause.
Since then, several significant incidents have occurred:
Jim Acosta Altered Video (2018): A manipulated video made CNN reporter Jim Acosta appear aggressive during a press briefing. The Trump administration circulated the video, which was allegedly used to justify revoking his White House press access.
David Beckham’s Anti-Malaria Campaign (2019): Using deepfake technology, David Beckham appeared to deliver a message in nine different languages to promote malaria awareness. While for a good cause, it raised concerns about how such technology could be misused.
Nancy Pelosi Manipulated Video (2019): A deepfake video falsely showed U.S. House Speaker Nancy Pelosi appearing intoxicated during a speech, aiming to discredit her and mislead the public.
Mark Zuckerberg Deepfake (2019): A fabricated video depicted Facebook CEO Mark Zuckerberg admitting to unethical data practices, demonstrating how deepfakes can spread misinformation and damage reputations.
Volodymyr Zelensky Deepfake (2022): Amid the conflict in Ukraine, a deepfake video showed President Zelensky falsely announcing the surrender of Ukrainian forces, intended to undermine morale and spread false information.
Other notable instances include altered images of Pope Francis wearing unconventional attire, a fake video of former President Donald Trump’s arrest, and a deepfake involving Indian actor Rashmika Mandanna, which raised serious concerns about privacy violations and the need for legal protections.
These examples highlight the growing prevalence of deepfake technology and underscore the urgent need for legal frameworks, ethical guidelines, and public awareness to address their challenges.
Impersonation of the CJI: In a sophisticated fraud scheme, individuals used deepfake technology to impersonate the CJI. They created videos and audio clips that appeared to feature the CJI endorsing certain legal services or investments. Due to the realistic portrayal, victims were convinced of the legitimacy, leading to financial losses amounting to crores of rupees.
Law enforcement agencies launched an investigation involving cyber forensic experts. The challenge lies in tracing the digital footprint of the perpetrators, who often use encrypted communication and operate from multiple jurisdictions. The case underscores the potential for deepfakes to undermine trust in public institutions and the urgency for legal safeguards.
Non-Consensual Explicit Content Involving an Actor: A prominent Indian actor became a victim when her face was superimposed onto sexually explicit videos. These deepfake videos were circulated widely on platforms like WhatsApp, Facebook, and X (Twitter). The actor filed a complaint with the cybercrime division, highlighting the psychological and reputational harm caused.
The authorities faced difficulties in removing the content due to its rapid proliferation and the use of anonymous accounts. The incident sparked public debate on the vulnerabilities faced by women in the digital age and the inadequacy of existing legal remedies to address such technologically advanced forms of harassment.
Political Deepfakes During Elections: During the 2020 Delhi Legislative Assembly elections, a political party allegedly circulated a deepfake video of its leader delivering a speech in different languages, including Haryanvi, which the leader does not speak. While the intent was to reach a wider audience, concerns were raised about the ethical implications and potential for misuse in spreading misinformation.
Moreover, fake videos of politicians making inflammatory statements have been used to incite communal tensions. These instances highlight the role of deepfakes in undermining democratic processes and the need for regulations to prevent electoral manipulation.
Legal Challenges Posed by Deepfake Technology
Violation of Privacy: Deepfakes significantly infringe on an individual’s right to privacy, recognized as a fundamental right by the Supreme Court of India in the landmark Justice K.S. Puttaswamy (Retd.) v. Union of India[iii]. The unauthorized creation and use of a person’s images or likeness without their consent is a clear breach of informational privacy. This violation is especially concerning when deepfakes are used for malicious purposes, such as defamation or exploitation.
The issue of consent: Deepfakes are often created without the knowledge or permission of the individuals involved, leaving them vulnerable to misuse. This lack of consent can lead to significant harm, as the affected individuals may face reputational damage, emotional distress, or financial loss. Moreover, victims of deepfakes often struggle to find adequate legal remedies due to the absence of specific laws addressing deepfake technology.
The issue of cross-border jurisdiction: Perpetrators who create or distribute deepfakes may operate from other countries, making it difficult for victims to pursue legal action or hold the responsible parties accountable. This complicates enforcement and underscores the need for international cooperation in regulating and managing deepfakes.
Defamation and Damage to Reputation: Defamation laws under IPC aim to protect individuals from harm to their reputation caused by false or misleading statements. However, the advent of deepfakes complicates these legal frameworks significantly. One of the primary challenges is the anonymity of offenders. Many creators of deepfakes use advanced technologies to conceal their identities, making it difficult for victims to identify and pursue legal action against them. Victims face substantial hurdles in holding the creators accountable without knowing who is responsible, complicating the defamation process.
Additionally, deepfakes can spread rapidly across digital platforms, often going viral in minutes. This swift dissemination can lead to extensive reputational damage before victims become aware of the harmful content. Furthermore, proving that a deepfake has caused reputational harm presents evidentiary challenges. Legal proceedings require technical expertise to establish the authenticity of the deepfake and judicial understanding of the technology involved. Consequently, this complexity adds a layer of difficulty to legal cases, necessitating the evolution of the judicial system to address the unique challenges posed by deepfake technology effectively.
Hate Speech and Incitement to Violence: Deepfakes have the potential to be powerful tools for spreading hate speech, posing serious risks to social harmony and public safety. One of the most concerning uses of deepfake technology is the fabrication of videos depicting individuals making provocative statements. Such manipulated content can incite violence or create communal disharmony, exacerbating group tensions. This capability highlights the dangers of deepfakes in influencing public perception and inciting harmful actions based on false representations.
In response to the threats posed by deepfakes, legal provisions such as ss. 153A and 295A of the IPC (ss. 196(1)(a) and 299 of the BNS) exist to penalize actions that promote enmity between different communities. However, applying these laws to cases involving deepfakes presents unique challenges. Establishing the creator’s intent is crucial, as it determines whether the content was designed to incite hatred or violence. Additionally, the sheer volume of online content makes effective monitoring and regulation difficult. Social media platforms often struggle to promptly identify and remove deepfake videos, allowing harmful content to circulate and potentially inflict damage before corrective measures can be taken. As a result, both legal frameworks and technological solutions must evolve to address the complex issues surrounding deepfakes and their role in promoting hate speech.
Challenges in Prosecution and Evidence: Legal proceedings involving deepfakes encounter numerous significant obstacles that can hinder the pursuit of justice. One major challenge is the technical complexity of deepfake technology. Judicial authorities may lack the necessary understanding to assess deepfake evidence accurately, which can ultimately affect the fairness of trials. This gap in technical knowledge can lead to misunderstandings of how deepfakes are created and their potential implications, making it difficult for judges and juries to evaluate the evidence presented effectively.
The admissibility of digital evidence in court: While electronic records are generally admissible under the Indian Evidence Act, establishing the authenticity of deepfake content is particularly difficult. The nature of deepfakes complicates the verification process, as victims must prove that the content is fake and that the accused created or distributed it with malicious intent. This burden of proof can significantly strain victims, making it harder for them to seek redress. Additionally, cybercrimes involving deepfakes often necessitate international cooperation for effective resolution. This collaboration can be time-consuming and legally complex, as different jurisdictions may have varying laws and procedures regarding deepfakes, further complicating the pursuit of justice in these cases.
Conclusion
Deepfake technology is a double-edged sword. On the one hand, it offers significant advancements in entertainment, education, and the arts through the creation of realistic digital content. On the other hand, it poses serious problems: deepfakes can produce fake but convincing images or videos of people, leading to privacy violations, the spread of false information, and even threats to democracy by undermining trust in the media.
India's current legal framework is not fully equipped to address these challenges. There is a need for specific legislation to define and criminalize harmful deepfakes clearly. By identifying what malicious deepfakes are and making their creation and sharing illegal, the law can provide a solid foundation for tackling this issue. Additionally, it is important to protect victims by setting up quick ways to remove harmful content and provide legal help to those affected. Enhancing law enforcement capabilities is also crucial; police and agencies need the tools and training to investigate and stop deepfake crimes effectively. Promoting international cooperation is also essential since deepfakes can easily cross borders. Working with other countries can help manage the spread of harmful deepfakes globally.
Other nations have recognized the urgency of the deepfake issue and have started making specific laws. India can learn from these examples to create its own laws that address these new technological challenges while respecting people’s rights.
At the same time, it is important to balance regulation with the right to free speech guaranteed by the Indian Constitution. Any restrictions should be reasonable and not hinder freedom of expression. Laws should allow deepfakes for legitimate purposes like satire, parody, or art. Encouraging watermarks or labels can help people recognize synthetic content without limiting creativity. Educating everyone about deepfakes can improve media literacy, helping people think critically about the content they see. Online platforms should have policies to find and remove harmful deepfakes while respecting users' rights.
By thoughtfully addressing these issues, India can protect its citizens from the dangers of deepfakes while encouraging innovation and freedom of expression.
End Notes
[i] 2023 SCC OnLine Del 6914.
[ii] 2022 SCC OnLine Del 4110.
[iii] (2017)10 SCC 1.
Authored by Pranav Dabas, Advocate at Metalegal Advocates. The views expressed are personal and do not constitute legal opinions.