Meta announced on Friday (August 23) that it had alerted US presidential campaigns to potential hacking threats after detecting an Iran-linked attempt to use the WhatsApp messaging service for malicious purposes.
This disclosure follows similar warnings from tech companies like Google and Microsoft, which have also reported Iranian hacking attempts ahead of the upcoming November election between vice-president Kamala Harris and former president Donald Trump.
According to Meta, WhatsApp accounts associated with an Iranian "threat actor" impersonated technical support for companies such as AOL, Google, Yahoo, and Microsoft.
These accounts sent deceptive messages aimed at tricking recipients into providing sensitive information or access to networks.
Meta stated that this malicious activity originated from Iran and targeted individuals in Israel, Palestine, Iran, the United States, and the UK, focusing on political and diplomatic figures, including those linked to the administrations of president Joe Biden and former president Donald Trump.
Meta confirmed that it had blocked the accounts involved in this social engineering effort and linked the hacking attempts to the same group responsible for other attacks identified by Microsoft and Google.
In light of the increased threat environment surrounding the US election, Meta shared its findings with law enforcement and the presidential campaigns to help them remain vigilant against potential adversarial targeting.
On Monday (August 19), three US intelligence and security agencies said Iran was behind a recent hack targeting Trump's presidential campaign and accused Tehran of seeking to influence the 2024 election.
The statement confirmed a Trump campaign claim from earlier this month, which came after internal communications and a dossier on running mate JD Vance were shared with a US media outlet.
Harris's campaign said on August 13 that it too had been targeted by foreign hackers, but did not indicate which country was believed to be behind the attempt.
OpenAI last week said it took down a cluster of ChatGPT accounts being used to create content for a "covert Iranian influence operation."
The operation used ChatGPT to create articles and short commentary, which was then shared on social media platforms or websites, according to OpenAI.
The main topics were the conflict in Gaza, Israel's presence at the Olympic Games, and the US presidential election.
"They interspersed their political content with comments about fashion and beauty, possibly to appear more authentic or in an attempt to build a following," OpenAI said.
Groups linked to the Iranian government have "laid the groundwork for influence campaigns on trending election-related topics and begun to activate these campaigns in an apparent effort to stir up controversy or sway voters – especially in swing states," said Clint Watts, general manager of the Microsoft Threat Analysis Center.
Earlier this year, Director of National Intelligence Avril Haines warned that Iran was becoming "increasingly aggressive" in its influence efforts, seeking to stoke discord and undermine democratic institutions as it has done in prior election cycles. (AFP)