Website Impersonation Scams Surge, Solutions Fall Short: Study

Website impersonation scams have become a growing problem, and many businesses aren’t happy with the tools they have to address them.

A study released Tuesday by digital risk protection solutions company Memcyco found that nearly three-quarters of businesses have deployed a digital impersonation protection solution to avert online scams, but only 6% of those organizations are satisfied that it protects them and their customers. “That’s really shocking,” Memcyco CMO Eran Tsur told TechNewsWorld.

According to the study, more than two-thirds of businesses (68%) know their websites are being impersonated, and almost half (44%) know this directly impacts their customers. The study is based on a survey of 200 full-time director-to-C-level employees in the security, fraud, digital, and web industries in the United States and the United Kingdom.

“A spoofed website can lead to significant financial losses for customers if they are tricked into providing login credentials or sensitive personal information,” said Matthew Corwin, managing director of Guidepost Solutions, a global security, compliance, and investigations firm.

“Brand reputation can be severely damaged if customers fall victim to scams perpetrated through an impersonated website, eroding trust in the company,” he told TechNewsWorld.

A website impersonation scam can harm more than a company’s reputation. “There can also be direct financial losses from fraud, as well as indirect costs related to remediation, legal fees, and possibly some customer compensation,” Ted Miracco, CEO of Approov Mobile Security, a global mobile application security company, told TechNewsWorld.

Leaning on Customer Reports for Detection

The study also found that the most common way companies become aware of website impersonation attacks, cited by two-thirds (66%) of those surveyed, was incident reports from affected customers. “That’s unbelievable,” Tsur said. “Not only are the deployed solutions not protecting against or preventing these attacks, the organizations don’t have a clue whether these attacks have taken place or not.”

Guidepost Solutions’ Corwin noted that businesses that depend primarily on customer reports to detect impersonation scams might miss out on crucial early warnings and the opportunity to defend against emerging threats proactively. “A reactive approach puts the burden on customers, which can damage customer relationships and trust,” he said.

“Learning about scams from customers means the attack has already impacted individuals, causing harm before mitigation even begins,” added Approov’s Miracco. “Regular scans are the only alternative that might take down fake websites that mimic a brand, but this is challenging, as you have to anticipate events before they occur.”

“Working from customer reports is a reactive approach, not a proactive one,” he said. “I’m not sure an adequate defense exists yet, so users need to be educated and more careful before responding to emails that look legitimate.”

An even more worrying finding of the study is that over 37% of businesses said they first become aware of fake websites only when customers affected by phishing-related scams publicize their experiences on social media, a practice known as “brand shaming.”

The study questioned how much longer businesses can afford to rely on customers as their main source of threat intelligence, given that AI tools and phishing kits are increasingly available off the shelf.

“With these kits, everything is fully automated,” Memcyco’s Tsur observed. “You can launch it and forget it.”

Cybersecurity’s Worst Nightmare

Corwin explained that the accessibility of AI-driven tools and pre-packaged phishing kits means even less technically skilled individuals can execute convincing impersonation attacks. “AI-enhanced phishing tools can mimic legitimate websites more accurately, deceiving even the most vigilant users and amplifying the threat landscape,” he said.

“Often,” he continued, “cybercriminals will also leverage domain names that appear nearly the same as the legitimate address of a company or brand but contain slight variations or errors, known as ‘combosquatting’ or ‘typosquatting.’”

“AI is very dangerous,” added Miracco. “These tools are so easy to use, even for individuals with no technical skills, allowing virtually anyone to create sophisticated phishing campaigns. It’s our worst cybersecurity nightmare come true — hand-delivered by companies that talk about how wonderful AI will be. Sadly, the early adopters of most technologies are bad actors.”

Patrick Harr, CEO of SlashNext, a network security company in Pleasanton, Calif., noted that website impersonations have existed since the web was born.

“These were typically easy to spot by almost any user,” he said. “What has changed recently is two things — phishers are squatting on legitimate domains, and phishers are using phishing kits and AI to generate near-perfect website pages.”

“Without AI computer vision countermeasures, these are very difficult to discern and will make the threat actors more successful, not less,” he maintained.

Strategies To Combat Website Impersonation Scams

Roger Grimes, a defense evangelist for KnowBe4, a security awareness training provider in Clearwater, Fla., recommended that every company sending emails implement DMARC, SPF, and DKIM, which are global anti-phishing standards. “They attempt to defeat malicious emails and links claiming to be from the legitimate sending domain,” he told TechNewsWorld.

“For example,” he explained, “if I get an email claiming to be from Microsoft, the receiver’s email server/client can use DMARC, SPF, and DKIM to see if the email actually originated from Microsoft.”
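
As a rough illustration of what those standards provide, the sketch below uses the open-source dnspython library to check whether a sending domain publishes SPF and DMARC records. The domain and the library choice are assumptions for the example, not part of the study or Grimes’ tooling, and DKIM is omitted because verifying it requires knowing a message’s selector; a real mail server runs these checks automatically on every inbound message.

```python
# Rough sketch: query a domain's published SPF and DMARC policies with dnspython.
# "example.com" is a placeholder; this only confirms the records exist,
# it does not validate an actual email message.
import dns.resolver


def txt_records(name: str) -> list[str]:
    """Return all TXT strings published at a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]


def check_email_auth(domain: str) -> None:
    """Report whether a sending domain publishes SPF and DMARC records."""
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'found' if spf else 'missing'}, "
          f"DMARC {'found' if dmarc else 'missing'}")
    for record in spf + dmarc:
        print("  ", record)


if __name__ == "__main__":
    check_email_auth("example.com")  # placeholder domain
```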

Miracco recommended that company websites ensure all web traffic is encrypted with SSL/TLS certificates to make it harder for attackers to intercept and spoof communications.
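
A minimal way to monitor that recommendation from the outside, sketched here with only the Python standard library and a placeholder hostname, is to connect with full certificate verification and report the negotiated protocol version and how long the certificate remains valid:

```python
# Rough sketch: connect over TLS with full verification and report the
# negotiated protocol and certificate expiry. "example.com" is a placeholder.
import socket
import ssl
import time


def check_tls(hostname: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # verifies the chain and hostname
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            version = tls.version()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    days_left = int((expires - time.time()) // 86400)
    print(f"{hostname}: {version}, certificate expires in {days_left} days")


if __name__ == "__main__":
    check_tls("example.com")  # placeholder domain
```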

Miracco added that mobile applications should implement attestation mechanisms to verify their integrity and ensure that interactions with backend APIs only originate from legitimate, unaltered instances of the app. Companies should also engage threat intelligence services that can monitor for phishing kits, fake domains, and other indicators of impersonation.

To counter tactics like typosquatting, Corwin noted that companies can register obvious variations or likely misspellings of existing domains, including hyphenated names, other popular domain extensions, and characters slightly out of order.
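
As a rough illustration of what that defensive registration list might look like, the sketch below generates hyphenated, transposed, dropped-character, and alternate-extension variants of a placeholder brand name. The specific typo rules and extension list are assumptions for the example, not recommendations from the study.

```python
# Rough sketch: enumerate typosquatting/combosquatting candidates for a brand
# domain so they can be registered defensively or fed to a monitoring service.
def domain_variants(name: str, tld: str = "com",
                    extra_tlds: tuple[str, ...] = ("net", "org", "co", "io")) -> set[str]:
    variants: set[str] = set()

    # Other popular domain extensions.
    for alt in extra_tlds:
        variants.add(f"{name}.{alt}")

    # Adjacent characters slightly out of order (transpositions).
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(f"{swapped}.{tld}")

    # Single characters dropped, a common typo.
    for i in range(len(name)):
        variants.add(f"{name[:i] + name[i + 1:]}.{tld}")

    # Hyphenated forms, e.g. "my-brand" for "mybrand".
    for i in range(1, len(name)):
        variants.add(f"{name[:i]}-{name[i:]}.{tld}")

    variants.discard(f"{name}.{tld}")  # exclude the legitimate domain itself
    return variants


if __name__ == "__main__":
    for candidate in sorted(domain_variants("example")):  # placeholder brand
        print(candidate)
```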

“There are brand monitoring services that will monitor for phishing sites and new domains which contain company intellectual property, and some will even help with automated domain takedown services,” he said. “These may help some companies, but unfortunately, because there are so many potential variations of domain names and current tools make it so easy to create these phishing sites, the risk is likely to persist.”

Miracco added that companies should not only focus on technological defenses but also foster a culture of security awareness among employees and customers.

“Website impersonation scams are a rapidly evolving threat that requires a multi-faceted approach,” he said. “AI has enabled this problem, and hopefully, in the near future, we will be deploying AI-enabled solutions that can preempt users from making costly mistakes with a fake site.”
