Rise in Deepfakes: A Threat to Trust and Security

Deepfake technology has reached new heights, with a doctored clip of Australian mining magnate Andrew “Twiggy” Forrest surfacing on Facebook. The latest in a series of deepfakes, it underscores growing concern over deceptive advertisements and the erosion of trust in media. Cybersecurity firm Cybertrace identified the video, which portrays Forrest endorsing a fake cryptocurrency trading platform. The altered footage aims to lure unsuspecting viewers into signing up for a fraudulent service that promises lucrative earnings for “regular individuals.” The video directs targets to a notorious site called “Quantum AI,” known for its involvement in scams and financial deceit. The prevalence of such deepfakes on social media platforms like Facebook raises serious questions about the adequacy of measures to combat this emerging threat.

In response to the deepfake phenomenon, Facebook’s parent company Meta banned deepfakes in early 2020. However, the continued circulation of doctored clips shows how difficult it is for platforms to combat deceptive content effectively. Andrew Forrest himself has criticized Facebook for its apparent failure to prevent scams; the billionaire is currently pursuing legal action against the social media giant over another crypto advertising scam that allegedly exploited his image. His frustration is palpable: “Facebook does nothing – that’s what I hope the legal actions I started will address, to make social media companies liable for the negligent way they run their ad platforms.”

The alarming spread of counterfeit videos is not limited to Andrew Forrest. MicroStrategy founder Michael Saylor recently revealed that his team is combating roughly 80 counterfeit videos a day, many promoting Bitcoin scams. Prominent figures such as Elon Musk have also been targeted, with doctored videos featuring his likeness circulating on social media. These malicious videos often link to investment schemes, unauthorized products, or unrelated e-commerce sites that vanish after a few days, luring users into fraudulent activity and exposing both their finances and personal information to significant risk.

Deepfake technology is rapidly emerging as a top security threat worldwide. Data from identity-verification firm Sumsub shows a significant increase in the share of fraud cases involving deepfakes in North America between 2022 and Q1 2023: in the United States the figure rose from 0.2% to 2.6%, while Canada saw an even steeper surge from 0.1% to 4.6%. These statistics underscore the need for urgent action and reliable detection technology. Failure to keep pace with evolving deepfake methods leaves both businesses and users vulnerable to exploitation.

The rise of deepfakes threatens not only the individuals targeted by fraudulent schemes but also overall trust in media and online platforms. Social media companies must meet their responsibilities and invest in robust measures to detect and remove deepfakes. Collaboration with cybersecurity firms such as Cybertrace can enable timely identification and takedown of deceptive content, while continued research into deepfake detection is essential to stay ahead of those exploiting AI-generated video for malicious purposes.

In the face of this evolving threat, individuals must remain vigilant and exercise caution with content they encounter online. Verifying sources, cross-referencing information, and watching for red flags go a long way toward avoiding deepfake scams. Ultimately, protecting trust and security requires a collective effort from social media platforms, technology companies, law enforcement agencies, and users alike.