
Title: Deepfake Videos and the Privacy Concerns for Women in 2023

Updated: Sep 14, 2024




Image Source: India Today


Introduction:

In the current digital era, technology is evolving at an unprecedented rate. One breakthrough that has drawn significant attention is the emergence of Deepfake videos: incredibly realistic videos altered or fabricated by artificial intelligence systems. Deepfake technology has opened exciting new opportunities across a range of industries, but when misused it can be extremely dangerous, especially for women's security and privacy. In this blog, we explore the issues raised by Deepfake videos and highlight the privacy dangers women face in 2023.


Understanding Deepfake Videos:

Deepfake videos use synthetic media techniques to superimpose one person's face onto another person's body or to manipulate a person's facial expressions and gestures. Using advanced algorithms and machine learning, they produce alterations so realistic that they are difficult to distinguish from genuine recordings.


Offences Committed Using Deepfake Videos


IT Act Sections Dealing with Deepfake Videos

Section 66D of the Information Technology Act, 2000 covers cheating by personation using a computer resource. "A person can be imprisoned for up to three years and/or fined up to Rs 1 lakh when a communication device or computer resource is used (with) mala fide (intention) for cheating to personate," explains Chartered Accountant Ankur Agarwal.


Section 66E of the IT Act is also relevant. Deepfake offences violate this section because the person's privacy is violated when their images are captured, published, or transmitted without consent. According to Agarwal, "this offence is punishable with up to three years in prison or a fine of up to Rs 2 lakh."

The dissemination of false or misleading information can sway public opinion, erode public confidence, and affect political outcomes. Such acts may be prosecuted as cyberterrorism offences under Section 66-F of the Information Technology Act, 2000, and under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022.

With this technology, fake images or videos can be produced showing people doing or saying things that never happened, potentially harming their reputations or spreading misleading information. Deepfakes can also be employed maliciously for disinformation campaigns, political propaganda, or non-consensual pornography. When Deepfakes are used to spread misleading information or sway public opinion, the consequences harm both society as a whole and the individuals whose images or likenesses are used without authorization. Such misuse may attract Section 67 of the Information Technology Act, 2000 (punishment for publishing or transmitting obscene material in electronic form), Section 67-A (punishment for publishing or transmitting material containing sexually explicit acts in electronic form), and Section 67-B (punishment for publishing or transmitting material depicting children in sexually explicit acts in electronic form).


The Privacy Concerns for Women:

  1. Non-consensual Exploitation: Deepfake videos have developed into an effective technique for non-consensual exploitation, allowing malevolent actors to overlay women's faces on pornographic or otherwise sensitive material. Perpetrators can produce phony videos that harm a woman's reputation, relationships, and general well-being by using public photographs or social media profiles.

  2. Cyberbullying and Revenge Porn: Intimate photos or videos are shared without permission in revenge porn, which is made easier by Deepfake videos. For the women concerned, this may lead to serious emotional pain, harm to their careers, and social stigma. These videos may also be exploited as a means of cyberbullying, which could result in more victimization and harassment.

  3. Implications on Consent: The increasingly hazy boundary between reality and fiction in Deepfake videos challenges the very idea of consent. Because the authenticity of a video is hard to verify, victims struggle to prove that fabricated footage of them is not real and to defend themselves against unfounded allegations.

  4. Spread of Disinformation: Additionally, Deepfake videos can be used to propagate misinformation or sway public opinion. Perpetrators can spread seeds of discontent and erode confidence in a variety of fields, including politics, media, and entertainment, by distorting videos featuring powerful women.


Tackling the Challenges:

  • Technological Countermeasures: Researchers and technology companies are working to develop sophisticated detection algorithms that can recognize Deepfake videos. These tools are continually refined with the goal of quickly identifying and flagging fabricated content so that platforms can take appropriate action.

  • Legal Frameworks: Governments across the globe are striving to strengthen existing regulations or propose new ones to address Deepfake-related issues. Harsher punishments are being introduced for the non-consensual distribution of revenge porn and Deepfake videos, both to deter future offenders and to protect victims.

  • Media Literacy and Education: It is imperative to support media literacy initiatives that inform people about Deepfake technology and its consequences. By increasing knowledge, people will be better able to recognize Deepfakes, comprehend the possible repercussions, and take precautions against them.
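To make the first countermeasure concrete, here is a toy illustration of one family of detection heuristics: generative upsampling often leaves unusual energy patterns in a frame's frequency spectrum, so comparing each frame's high-frequency energy ratio against the video's median can flag frames for closer review. This is a minimal sketch in Python with NumPy, not a production detector; the function names and the 0.05 deviation threshold are assumptions made for illustration, and real systems use trained classifiers rather than a single spectral statistic.

```python
import numpy as np

def high_freq_energy_ratio(frame, cutoff=0.25):
    """Fraction of spectral energy above a normalized radial frequency cutoff.

    Takes a 2D grayscale frame. Deepfake generation pipelines can leave
    atypical high-frequency artifacts, so frames whose ratio deviates from
    the rest of the video are worth a closer look. Toy heuristic only.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame.astype(float))))
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial frequency, normalized so each axis spans roughly [-0.5, 0.5].
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

def flag_suspicious(frames, threshold=0.05):
    """Return indices of frames whose ratio deviates from the video's median.

    The 0.05 threshold is an illustrative assumption, not a tuned value.
    """
    ratios = np.array([high_freq_energy_ratio(f) for f in frames])
    median = np.median(ratios)
    return [i for i, r in enumerate(ratios) if abs(r - median) > threshold]
```

In practice, per-frame statistics like this are only one weak signal; deployed detectors combine many such cues with machine-learned models trained on known Deepfake datasets.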


Conclusion:

As Deepfake technology develops further, addressing the privacy issues arising from its misuse becomes ever more important. Governments, corporations, and individuals must work together to safeguard women's privacy and welfare in the digital sphere. By putting strong technological protections in place, strengthening legal frameworks, and encouraging media literacy, we can work toward a safer and more secure online environment for women in the constantly changing digital landscape of 2023.


Submitted By: Aditi Singh

Guided By: Jaanvi Sharma

