Dive Brief:
- Generative artificial intelligence tools such as ChatGPT could be aiding the proliferation of more convincing email scams aimed at stealing money from businesses, according to cybersecurity firm Fortra.
- In the first quarter of 2023, threats in corporate inboxes hit new highs, with a quarter of all reported emails classified as malicious or untrustworthy, Fortra said in a recent report. Nearly all of these threats (99%) were classified as impersonation attacks.
- Fraudsters appear to be turning to generative AI to help them craft well-written email messages at scale, without the poor spelling and grammar that have historically been associated with scams, John Wilson, a threat research senior fellow at Fortra, told CFO Dive. Recent evidence also suggests that scammers may be relying on AI to perform language translation, he said.
Dive Insight:
Fortra joins a growing list of organizations reporting an uptick in cybercriminals’ use of social engineering, which refers to manipulation techniques designed to exploit human behavior and error to gain access to valuable information or assets.
“Social engineering has come a long way from your basic Nigerian Prince scam to tactics that are much more difficult to detect,” Verizon said in its 2023 Data Breach Investigations Report.
With business email compromise scams in particular, company employees who perform fund transfer requests tend to be prime targets. Such scams nearly doubled across Verizon’s entire incident dataset and now represent more than 50% of incidents within the social engineering category, according to the company’s report. The median amount stolen through these attacks also increased over the last couple of years to $50,000, it said.
According to Microsoft research released in May, cybercriminals are leveraging residential Internet Protocol (IP) addresses to make their intrusions appear locally generated and to evade security alerts.
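For illustration, the sketch below shows the kind of naive location-based check that locale-matched residential IPs are designed to slip past. The lookup table, helper names, and example addresses are hypothetical assumptions for this example, not Microsoft's or any vendor's actual detection logic.

```python
# Minimal sketch: a naive location-based login check.
# The geolocation table below is a hypothetical stand-in for a
# real geo-IP service; the IP ranges are documentation examples.

# Hypothetical geo-IP lookup: maps an IP prefix to a country code.
GEO_IP_TABLE = {
    "203.0.113.": "US",   # residential ISP range near the victim
    "198.51.100.": "RU",  # datacenter range abroad
}

def geolocate(ip: str) -> str:
    """Return a country code for an IP, or 'UNKNOWN' if unmapped."""
    for prefix, country in GEO_IP_TABLE.items():
        if ip.startswith(prefix):
            return country
    return "UNKNOWN"

def is_suspicious_login(account_home_country: str, login_ip: str) -> bool:
    """Flag a login whose source country differs from the account's usual one."""
    return geolocate(login_ip) != account_home_country

# A login through a foreign datacenter IP trips the alert...
print(is_suspicious_login("US", "198.51.100.7"))   # True
# ...but one routed through a residential IP in the victim's own
# region passes the same check, which is the evasion described above.
print(is_suspicious_login("US", "203.0.113.42"))   # False
```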
The FBI has reported that its Internet Crime Complaint Center received 21,832 complaints involving business email compromise scams last year, with adjusted losses totaling more than $2.7 billion.
Historically, these attacks impersonated an organization’s CEO or some other high-level executive to trick recipients into initiating large financial transactions, according to Fortra. Increasingly, however, threat actors are expanding their target list to include vendors associated with the intended victim.
“By compromising a third party or business partner, the victim organization is prone to highly realistic emails that often contain key insider information, significantly enhancing the legitimacy of an attack,” Fortra’s report said.
Poor grammar and spelling have long been classic warning signs of email scams. But Wilson said Fortra has observed a rise in scam messages that appear to be “wordsmithed.”
The cybersecurity company has also seen an increase in the number of languages used in attempted payroll diversion schemes, which were almost universally conducted in English just two years ago.
“Today we see these same scams attempted in French, Polish, German, Swedish, Dutch, and several other languages,” Wilson said in an email. “While we cannot be certain if generative AI was used to improve the grammar or to perform translation beyond the capabilities of Google Translate on any specific message, the timing and volume of the improved grammar and expanded language coverage would suggest the use of generative AI.”
Security awareness training can be an important tool in combating the threat, according to Fortra. The firm’s report also recommended adding email security layers optimized to detect and respond to advanced email threats.
“Applying algorithms through machine learning that assist in the detection of anomalies and patterns will be increasingly necessary to thoroughly and accurately inspect email,” the company said.
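As a rough illustration of that idea, the sketch below trains an off-the-shelf anomaly detector on simple numeric features of past messages and scores new ones. The feature set, the synthetic data, and the choice of scikit-learn's IsolationForest are assumptions made for this example; they do not reflect Fortra's actual models.

```python
# Minimal sketch: anomaly detection over simple email metadata.
# Features and training data are synthetic; a real system would use
# far richer signals (headers, sender reputation, language cues, etc.).
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour sent (0-23), number of links, reply-to mismatch (0/1)]
normal_traffic = np.array([
    [9, 1, 0], [10, 0, 0], [11, 2, 0], [14, 1, 0],
    [15, 0, 0], [16, 1, 0], [13, 2, 0], [10, 1, 0],
])

# Fit the detector on historical "normal" mail for this mailbox.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_traffic)

# Score incoming messages: 1 = looks normal, -1 = anomalous.
incoming = np.array([
    [10, 1, 0],  # ordinary mid-morning message
    [3, 8, 1],   # 3 a.m., many links, mismatched reply-to address
])
print(detector.predict(incoming))  # e.g. [ 1 -1]
```

The design choice here mirrors the report's point: rather than matching known-bad signatures, the model learns what a mailbox's ordinary traffic looks like and surfaces departures from that pattern.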