AI-Generated Nightmares: The Dark Side of TikTok
In the rapidly expanding world of artificial intelligence, a new and troubling phenomenon is emerging: “true crime deepfakes” on social media platforms like TikTok. These AI-generated videos bring the victims of real crimes “to life,” having them narrate their own gruesome deaths in the first person. This disturbing trend raises questions not only about the ethics of such creations but also about the adequacy of current legislation and regulation surrounding AI and deepfakes.
The videos are published by accounts with thousands of followers and recount the stories of real crime victims, often in gruesome detail, with children sometimes playing a central role. Most carry no content warning, leaving viewers unprepared for the shocking material that follows. The trend is one manifestation of a broader fascination with so-called “true crime” content, which spans documentaries, podcasts, and books about real crimes.
While the underlying stories are real, the details are often twisted or misrepresented: victims are depicted with different names, ages, and even skin colors than they actually had. The creators claim to honor the victims, but in practice they dishonor them through factual inaccuracies and inappropriate portrayals. Some speculate that these misrepresentations are deliberate, intended to circumvent TikTok’s strict rules against deepfakes of private individuals and children.
The videos present artificially generated pseudo-realities, sometimes described as hyperrealities. Many of these accounts carry disclaimers stating that they do not use real photos of victims out of “respect for the family.” The claim rings hollow: the videos are designed to shock and to provoke strong emotional reactions in pursuit of views and likes.
Unsurprisingly, this alarming trend raises serious ethical and legal concerns. The videos are made without the consent of the victims’ families and risk retraumatizing their loved ones, victimizing them a second time. Grieving families may also pursue civil lawsuits against the creators, especially when the videos generate revenue. According to experts, defamation suits offer one possible means of curbing the trend; the difficulty is that the individuals depicted are already deceased, and in many jurisdictions defamation claims cannot be brought on behalf of the dead.
We are clearly entering complex territory in which ethical and legal boundaries are constantly tested. Technologies like AI and deepfakes need regulation that preserves human dignity and respect for the dead. Exploiting real people’s tragedies for entertainment is not merely tasteless; it actively harms the families and friends of the victims. It is crucial that we critically examine the ethical implications of these technologies and take the steps necessary to prevent abuse. The trend of AI-generated videos featuring crime victims on TikTok should serve as a wake-up call to the dark side of technological progress.