
Trump Signs ‘Take It Down Act’ to Tackle Revenge Porn and Deepfakes
In a landmark move aimed at curbing online abuse, U.S. President Donald Trump signed the “Take It Down Act” into law on May 19, 2025. The legislation criminalizes the sharing of non-consensual explicit content, including material created with artificial intelligence deepfake technology. The act drew wide support across party lines and was strongly endorsed by First Lady Melania Trump.
Key Features of the Take It Down Act
The new law establishes strict federal penalties for individuals who circulate sexually explicit images or videos without consent. This includes not only real media but also manipulated content created using AI.
Under the act:
- Offenders can face up to two years in prison.
- Harsher sentences apply in cases involving minors.
- Online platforms must remove flagged content within 48 hours.
- The Federal Trade Commission (FTC) will monitor compliance and penalize violators.
This is the first major U.S. law to directly confront the growing threat of AI deepfakes being used for digital exploitation.
Melania Trump’s Advocacy for Online Safety
Melania Trump played a crucial role in rallying support for the legislation. Tying it to her longstanding “Be Best” initiative on child welfare and cyber safety, she described revenge porn and AI-enabled abuse as “heartbreaking.”
During a Capitol Hill discussion before the bill’s passage, she said, “Every young person deserves a safe online space to express themselves freely, without the looming threat of exploitation or harm.”
Her presence alongside the President during the signing underscored the seriousness the administration places on the issue of digital dignity and safety.
Bipartisan Support and Legislative Background
The Take It Down Act was introduced by Senators Ted Cruz (Republican–Texas) and Amy Klobuchar (Democrat–Minnesota). The bill sailed through the U.S. Senate unanimously and passed the House with overwhelming support—409 votes in favor and only two against.
The urgency to act came after a surge in incidents involving deepfake pornography and the failure of state-level laws to consistently offer legal protection to victims.
Tackling the Threat of AI Deepfakes
With the advent of AI tools that can replicate human faces and voices, the misuse of deepfake technology for creating fake explicit content has risen sharply. The new law is designed to:
- Criminalize the dissemination of such content.
- Ensure victims have a legal pathway to demand removal and seek justice.
- Hold platforms accountable if they fail to act promptly.
Law enforcement and privacy advocates have welcomed the bill as a long overdue measure to confront the abuse of emerging technologies.
Enforcement and Concerns Ahead
Online platforms are expected to build takedown mechanisms and response frameworks within a year. The FTC has been tasked with ensuring compliance and investigating complaints.
However, civil liberties groups have raised concerns about potential overreach. They warn that without clear boundaries, enforcement could infringe on free expression or be misused.
Still, most digital rights organizations have praised the law’s intention to protect individuals—especially minors—from technological exploitation.