• Google has released a scam advisory
  • 'Cloaking' is being used by threat actors
  • AI is helping scammers take advantage of popular events

Google has released a new report outlining the most common techniques threat actors are using against victims, highlighting a practice known as ‘cloaking’ as a way to deceive users into disclosing sensitive information.

The technique uses tools called ‘cloakers’ to show different content to different users based on identifying information such as IP addresses. Often, cloaking will involve showing one version of a landing page or website to search engines and bots, and another version to real human users.
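To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of server-side branching a cloaker relies on. The user-agent tokens and IP prefix below are illustrative assumptions, not details from Google's report:

```python
# Illustrative sketch only: how a server-side "cloaker" might branch on
# request fingerprints. The crawler signatures and IP prefix are hypothetical
# examples, not taken from Google's report.

CRAWLER_UA_TOKENS = ("googlebot", "bingbot", "crawler", "spider")
CRAWLER_IP_PREFIXES = ("66.249.",)  # assumed example of a crawler IP range

def looks_like_reviewer(user_agent: str, ip: str) -> bool:
    """Crude fingerprinting: treat known bot user-agents or IP ranges as reviewers."""
    ua = user_agent.lower()
    return any(tok in ua for tok in CRAWLER_UA_TOKENS) or ip.startswith(CRAWLER_IP_PREFIXES)

def serve_page(user_agent: str, ip: str) -> str:
    if looks_like_reviewer(user_agent, ip):
        # Benign decoy page shown to moderation systems and search-engine bots
        return "<html><body>Harmless landing page</body></html>"
    # Policy-violating content shown only to real human visitors
    return "<html><body>Scareware: 'Your device is infected!'</body></html>"

# A crawler and a regular user receive different content for the same URL
print(serve_page("Mozilla/5.0 (compatible; Googlebot/2.1)", "66.249.66.1"))
print(serve_page("Mozilla/5.0 (Windows NT 10.0)", "203.0.113.7"))
```

Because moderation systems see only the decoy version, the policy-violating page can pass review while still reaching ordinary visitors.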

“Cloaking is specifically designed to prevent moderation systems and teams from reviewing policy-violating content which enables them to deploy the scam directly to users,” Laurie Richardson, Vice President of Trust & Safety at Google, wrote in the report.

Scareware and malware

Cloaking does have some legitimate uses, such as for advertisers who want to prevent their pages from being scraped by bots, or who want to hide their strategies from competitors. However, Google has observed scammers using cloaking tools to redirect users who click an ad to scareware sites.

These sites trick users into believing their device is infected with malware, or that their account has been blocked due to unauthorized activity, and then funnel them to a fake ‘customer support’ page where they are pressured into revealing sensitive information.

“The landing pages often mimic well-known sites and create a sense of urgency to manipulate users into purchasing counterfeit products or unrealistic products,” Google says.

Another technique outlined in the report is the exploitation of major events. Scammers take advantage of significant moments such as elections, sporting fixtures, or humanitarian disasters, and this well-established approach is being bolstered by AI tools that can respond quickly to breaking news and advertise fake products and services.

Elsewhere, Google flagged fake charity scams, which set up legitimate-looking appeals to defraud people hoping to donate to relief efforts. Here too, AI tools are being used to produce huge volumes of content, overwhelming users and deceiving them into clicking malicious links.

"Preventing user harm from malicious scams requires effective cooperation across the online ecosystem," Richardson concluded. "Bad actors are constantly evolving their tactics and techniques...we’re sharpening our detection and enforcement techniques to meet these threats, enhancing our proactive capabilities, and ensuring we have robust and fair policies in place to protect people."
