Deepfake videos featuring Apple CEO Tim Cook flooded YouTube recently, coinciding with the company’s “Glowtime” event. These videos, designed to deceive users into investing in cryptocurrencies, were created using advanced AI tools that replicated Cook’s likeness and voice.
The livestreams appeared on a YouTube channel that looked nearly identical to Apple’s official channel, complete with a fake verification badge, making it difficult for viewers to distinguish them from genuine content. Following a wave of reports from concerned users, YouTube quickly took the videos down.
According to reports, during the livestream, the AI-generated Tim Cook seemed to be promoting a “get rich quick” scheme. He said,
“Once you complete your deposit, the system will automatically process it and send back double the amount of cryptocurrency you deposited.”
Tim Cook joins Elon Musk’s bandwagon
This isn’t the first instance of deepfake technology being misused to promote crypto scams, though. AI-generated videos of Elon Musk, for instance, have previously been used in a similar fashion, exploiting his public persona to convince unsuspecting viewers to invest in fraudulent crypto schemes.
Here, it’s worth pointing out that Tesla’s CEO is no stranger to concerns like these. Just recently, he was named in a lawsuit claiming that he artificially inflated the value of Dogecoin, the market’s largest memecoin. While that suit was dismissed soon enough, AI-generated likenesses of Musk continue to be used in real scams.
At the time, AMBCrypto’s report quoted Musk’s lawyers as having said,
“There is nothing unlawful about tweeting words of support for, or funny pictures about, a legitimate cryptocurrency.”
What are social media platforms doing?
Social media platforms like YouTube and Twitter have been actively working to combat such scams. YouTube employs a combination of automated detection systems and manual reviews to flag and remove fake content, while Twitter uses similar tooling to identify suspicious activity and suspend accounts promoting fraudulent schemes. Both platforms also rely heavily on user reports to surface fraudulent content.
The rise of deepfake scams highlights the urgent need for improved digital literacy and security measures. As these fraudulent tactics become more sophisticated, users must remain vigilant and platforms must continue to develop better detection tools to protect their communities from financial harm.
At the same time, governments and regulatory bodies worldwide are also considering new policies and technologies to address the growing risks associated with deepfake media and crypto scams. The ongoing fight against these types of fraud requires a coordinated effort from tech companies, users, and regulators to stay ahead of cybercriminals who continually adapt their tactics to exploit new vulnerabilities.