Hunting Digital Phantoms: The Challenges of Tracking Deepfake-Fueled Cyberbullying

Given the direction AI development is taking, malicious actors have gained considerable momentum. There is something insidious about making it so easy to create synthetic visuals that expose someone to ridicule, humiliation, and damage to their honor and reputation. Let's admit that the consequences of such incidents run deep and are difficult, sometimes impossible, to repair. We are thinking above all of deepfake media, which lend themselves readily to cyber violence, and the growing number of real-world cases is worrying. Children are especially affected.

In this blog post, we'll look at the methods devised to track down malicious AI users, which, ironically, rely on AI software to detect inappropriate deepfake content.

Deepfake Cyberbullying in a Flash

There will always be bullies. Unfortunately, bad intentions know no age, so people can have painful experiences with those in their immediate environment in childhood as well as in adulthood. One of the key differences between bullying in the era before the Internet became a common good and bullying now is that it can go on 24/7. In the past, a distressing situation did not follow you into the yard and the house; now that spatial boundary has been erased.

We're not denying the fun of toying with AI's immeasurable range. However, deepfake generators walk a fine line between entertainment and privacy invasion. As a reminder, deepfake generators use deep learning algorithms that transform visual data based on the sample footage you feed to the AI deepfake tool or app.
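
To make that concrete, here is a minimal Python sketch of the shared-encoder, per-identity-decoder idea behind classic face-swap deepfakes. The layer sizes, module names, and random stand-in frame are illustrative assumptions, not any particular tool's implementation:

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        # Compresses a face crop into a shared "face space" representation.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
                nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
                nn.ReLU(),
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        # Reconstructs one specific identity from the shared representation.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
                nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
                nn.Sigmoid(),
            )
        def forward(self, z):
            return self.net(z)

    # One shared encoder, one decoder per identity. During training each decoder
    # learns to rebuild its own person; the swap routes A through B's decoder.
    encoder = Encoder()
    decoder_a, decoder_b = Decoder(), Decoder()

    face_a = torch.rand(1, 3, 64, 64)        # stand-in for a sample frame of A
    recon_a = decoder_a(encoder(face_a))     # training objective: rebuild A
    swapped = decoder_b(encoder(face_a))     # A's pose/expression rendered as B
    print(recon_a.shape, swapped.shape)      # both: torch.Size([1, 3, 64, 64])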

The dark trend in the mass use of this software lies in the weaponization of AI-altered or AI-generated media containing offensive remarks, mockery, misrepresentation of someone's identity, and so on. The ease and speed with which such material spreads over the internet means inappropriate content quickly reaches the public space.

AI vs. AI

We are witnessing an ironic twist in the virtual world: the same kind of deep learning algorithms that enable explicit deepfake content are also being used to detect harassment. This means that content moderation teams must be reinforced to recognize harmful AI-generated pornographic content that has entered the public space without consent.

Machine learning and deep learning, the heart and soul of artificial intelligence, can be aimed at catching digital offenders. Because these models are trained on datasets that exemplify what they should flag, feeding them a large supply of such examples lets them build a recognizable pattern of abuse, for deepfake vulgarities and textual ones alike. And since AI technology is at its best in automated processes and operations, this idea is already on a development path that is yielding its first results.
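
As a rough illustration of that training process, the Python sketch below fits a simple classifier on labeled examples. The 128-dimensional feature vectors and their real/fake labels are random stand-ins; in an actual pipeline they would come from a network run over genuine and manipulated media:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n, d = 1000, 128                      # 1000 clips, 128-dim features each
    X = rng.normal(size=(n, d))           # stand-in feature vectors
    y = rng.integers(0, 2, size=n)        # 0 = authentic, 1 = deepfake
    X[y == 1] += 0.5                      # give "fakes" a learnable signature

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))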

Even for textual insults and attacks, built-in AI software for content recognition and removal has already been widely developed. The AskFM social network is often credited with the best defense, reportedly removing about two-thirds of negative interactions. Deepfake monitoring, however, is much more complex and requires weighing many variables that must be brought into a winning combination.
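
A toy version of that kind of text filtering can be sketched in a few lines with scikit-learn. The handful of training messages below are invented placeholders; a production system like AskFM's would be trained on enormous labeled datasets:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented stand-in data: 1 = abusive, 0 = benign.
    messages = ["you are pathetic", "nobody likes you", "great game last night",
                "loved your photo", "delete your account, loser"]
    labels = [1, 1, 0, 0, 1]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(messages, labels)

    # Score an incoming message before it is published.
    print(model.predict(["nobody likes your photo"]))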

Monitoring for Deepfakes or Protecting Individual Rights

Using artificial intelligence to monitor online content for abusive deepfakes is a double-edged sword. Whenever AI is in play, privacy can be at risk. How do we juggle the need to protect potential victims of deepfake-fueled cyber violence, especially those from vulnerable groups, while simultaneously protecting the privacy rights of all users?

Passive Surveillance: This would mean that private conversations and all online content posted by individuals are monitored at all times. That feels intrusive, rather like having a voyeur taped to your bedroom window.


Data Protection: The fact that some AI deepfake software may store data on users' interactions raises widespread concern about breached safeguards and the potential misuse of leaked information.


Freedom of Expression: Algorithms are unfortunately blind to nuance, so overly aggressive policing could stifle free expression and discourage people from venturing into legitimate creative activities online.


That's why AI solutions need a well-balanced approach to maintaining cyber wellness. It is indeed difficult to reconcile opposites and constantly toggle between extremes, but we must agree that creative freedom and privacy are equally important.

Challenges of Proper Detection

As we said, machine learning is not capable of learning at the human level, so it suffers from a kind of color blindness when it comes to telling humor from misuse and artistry from insult. Relying solely on AI to resolve these edge cases creates the following risks.

• False Positives: The Wrongfully Accused. AI defense systems are designed to raise a red flag when they encounter deepfake videos and images. But someone may use filters and comic effects that the AI reacts to, triggering a false alarm. A benign depiction could even lead to legal action, which is why it is important to build AI software that recognizes such subtleties.

• False Negatives: Wrongdoers Operate Undetected. The opposite is also possible: skillful deepfakes can slip past the AI scan. Someone who has mastered deepfake technology can use generation methods that go unnoticed by AI guardians, so harmful content keeps being uploaded. The sketch after this list shows how shifting a detector's decision threshold trades one failure mode for the other.
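
Here the detector's scores for authentic and fake media are simulated with two overlapping distributions, an assumption made purely for illustration; a real detector would output such scores per video:

    import numpy as np

    rng = np.random.default_rng(1)
    real_scores = rng.normal(0.3, 0.15, 500)   # scores for authentic media
    fake_scores = rng.normal(0.7, 0.15, 500)   # scores for deepfakes

    # Raising the threshold flags less benign content (fewer false positives)
    # but lets more deepfakes slip through (more false negatives).
    for threshold in (0.4, 0.5, 0.6):
        fp_rate = (real_scores >= threshold).mean()   # benign content flagged
        fn_rate = (fake_scores < threshold).mean()    # deepfakes that slip by
        print(f"threshold {threshold:.1f}: FP {fp_rate:.0%}, FN {fn_rate:.0%}")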

A Multi-layered Approach Ahead

Throughout this blog post, we have raised many questions that yearn for answers, and for intensive work on them, so that we can all be safe in the digital environment. Unfortunately, this environment is still not the ultimate comfort zone. Risks lurk around every corner, and it is clear that the root cause is actually the human factor.

But we still have to fight to reduce the impact of those risks, and that requires a multi-layered approach to building technical solutions that keep digital tools, especially AI deepfake technology, within a positive framework. This should include the continued improvement and refinement of AI algorithms for detecting malicious content. Legal frameworks are equally important, as is raising awareness among people about what constitutes harmful use of artificial intelligence.

We still have a bumpy road ahead of us, on which we must get used to the enormous digital power available to us and learn to wield it in beneficial ways. However, there are big developments underway, and not everything is so gloomy: negative cases are far less frequent than positive ones. It is precisely such positive examples that must serve as fuel to steer users away from misuse.
