
Google and Facebook are failing to take action to remove online scam adverts even after fraud victims report them, raising concerns that the reactive approach to fraudulent content taken by online platforms is not fit for purpose, Which? research has revealed.

The consumer champion’s survey found that a third (34%) of victims who reported an advert that led to a scam on Google said the advert was not taken down by the search engine, while a quarter (26%) of victims who reported an advert on Facebook that resulted in them being scammed said the advert was not removed by the social media site.

Which? believes that the significant flaws with the current reactive approaches taken to tackling online scams make a clear case for online platforms to be given legal responsibility for preventing fake and fraudulent adverts from appearing on their sites. Which? is calling for the government to take the opportunity to include content that leads to online scams in the scope of its proposed Online Safety Bill.

Of those who said they had fallen victim to a scam as a result of an advert on a search engine or social media, more than a quarter (27%) said they had fallen for a fraudulent advert they saw on Facebook, and one in five (19%) said a scam targeted them through Google adverts. Three per cent said they had been tricked by an advert on Twitter.

The survey also highlighted low levels of engagement with the scam reporting processes on online platforms. More than two in five (43%) scam victims conned by an advert they saw online, whether via a search engine or on social media, said they did not report the scam to the platform hosting it.

The biggest reason for not reporting adverts that caused a scam to Facebook was that victims didn’t think the platform would do anything about it or take it down – this was the response from nearly a third (31%) of victims.

For Google, the main reason for not reporting the scam ad was that the victim didn’t know how to do so – this applied to a third (32%) of victims. This backs up the experience of Which?’s researchers, who similarly found it was not immediately clear how to report fraudulent content to Google, and when they did, the process involved navigating five complex pages of information.

Worryingly, over half (51%) of the 1,800 search engine users Which? surveyed said they did not know how to report suspicious ads that appear in their search listings, while over a third (35%) of 1,600 social media users said they didn’t know how to report a suspicious advert seen on social media channels.

Another issue identified by victims that Which? has spoken to is that even if fake and fraudulent adverts are successfully taken down they often pop up again under different names.

One scam victim, Stefan Johansson, who lost £30.50, told Which? he had repeatedly reported a scam retailer operating under the names ‘Swanbrooch’ and ‘Omerga’ to Facebook. He believes the social media site has a ‘scattergun’ approach to removing the ads and says that a week rarely goes by when he doesn’t spot dodgy ads in his newsfeed, posted by what he suspects are unscrupulous companies.

Another victim, Mandy, told Which? she was tricked by a fake Clarks ‘clearance sale’ advert she saw on Facebook. She paid £85 for two pairs of boots, but instead she received a large box containing a pair of cheap sunglasses.

‘I’ve had a lot of back and forth with my bank over the past six months, trying to prove that I didn’t receive what I ordered,’ Mandy said. Facebook has since removed this advert and the advertiser’s account.

The tech giants make significant profits from adverts, including ones that lead to scams. These companies have some of the most sophisticated technology in the world but the evidence suggests they are failing to use it to prevent scammers from abusing the platforms by using fake and fraudulent content on an industrial scale to target victims.

The combination of inaction from online platforms when scam ads are reported, low reporting levels by scam victims, and the ease with which advertisers can post new fraudulent adverts even after the original ad has been removed, suggests that online platforms need to take a far more proactive approach to prevent fraudulent content from reaching potential victims in the first place.

Consumers should also sign up to Which?’s scam alert service in order to familiarise themselves with some of the latest tactics used by fraudsters, particularly given the explosion of scams since the coronavirus crisis. The consumer champion has also launched a Scam Sharing tool to help it gather evidence in its work to protect consumers from fraud. The tool has received more than 2,500 reports since it went live three weeks ago.

Adam French, Consumer Rights Expert at Which?, said: “Our latest research has exposed significant flaws with the reactive approach taken by tech giants including Google and Facebook in response to the reporting of fraudulent content – leaving victims worryingly exposed to scams.

“Which? has launched a free scam alert service to help consumers familiarise themselves with the latest tactics used by fraudsters, but there is no doubt that tech giants, regulators and the government need to go to greater lengths to prevent scams from flourishing.

“Online platforms must be given a legal responsibility to identify, remove and prevent fake and fraudulent content on their sites. The case for including scams in the Online Safety Bill is overwhelming and the government needs to act now.”

