As the world’s largest search engine and a digital advertising behemoth, Google has a lot to answer for when it comes to misleading or false information being spread using its platforms, both through ads and through content that is monetised by way of those ads. More recently, the company has been on a mission to try to set this aright by taking down more of the bad stuff — be it malware-laden sites, get-rich-quick schemes, offensive content, or fake news — and today it’s publishing the latest of its annual “bad ads” reports to chart that progress.
Overall, it appears that Google has been nabbing more violating content than ever before — a result, it says, of new detection techniques and a wider set of guidelines over what is permissible and what is not. “Wider” is the key word here: Google added 28 new policies for advertisers and a further 20 for publishers in 2017 to try to get a better grip on what’s whizzing around its services.
Here are some of the big numbers out of the report:
- In 2017, Google removed 3.2 billion ads that violated its policies around harmful, misleading and offensive content — nearly twice as many as the 1.7 billion it removed in 2016. Google also blocked 320,000 publishers from its ad network (more than three times the 100,000 it blocked a year earlier), along with 90,000 websites and a whopping 700,000 mobile apps — all for violating its content policies.
- Google last year also introduced page-level enforcement — a way of evaluating content not just on an overall site but on specific pages within it, and then removing ads on violating pages. The new process has led to more than 2 million pages each month getting blocked from using Google ads.
- It also broke out how it performed across specific categories of violations. Over 12,000 websites were blocked for scraping and using content from legitimate sites (a rise from 10,000 in 2016). And 7,000 AdWords accounts were suspended for “tabloid cloaking” — presenting websites as news organizations when they are not (this is a big rise: only 1,400 sites were ID’d for tabloid cloaking in 2016).
- Google also removed 130 million ads for malicious activity abuses, such as trying to get around Google’s ad review. And 79 million ads were blocked because clicking on them led to sites with malware, while 400,000 sites containing malware were also removed as part of that process. Google also identified and blocked 66 million “trick to click” ads and 48 million ads that tricked you into downloading software.
- In keeping with that last category of ads that trick users, Google is also making a much stronger effort to go after ads that misrepresent people, products or information to users (examples range from college students posing as lawyers, to fake “official” government seals, to dodgy medical claims and empty promises of discounts or other offers).
- In November 2017, the company expanded and updated what falls under “misrepresentative content,” and it said it combed some 11,000 websites that it suspected of being in violation, eventually blocking 650 sites and terminating 90 publishers. That is actually a lower hit rate than the year before, when it identified 1,200 sites for violations, blocked 340 of them and terminated 200 publishers.
While all of Google’s figures ultimately point to more content being identified and removed, what’s less clear is what proportion this represents of Google’s overall ad inventory and of the total number of pages, sites and apps that run Google ads. As Google’s business continues to grow, and as the number of apps and sites continues to expand — and all of them have — the overall percentages that Google is identifying and shutting down might not be all that different year-to-year. For all we know, the proportion of bad ads that are getting caught might even be decreasing.
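To make that concrete with purely hypothetical numbers: if the total volume of ads Google served also roughly doubled over the same period, then catching 3.2 billion bad ads in 2017 versus 1.7 billion in 2016 would represent essentially the same catch rate, not a better one. Google doesn’t disclose those denominators, so the raw totals alone can’t tell us whether enforcement is actually keeping pace.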
Nevertheless, Google needs to keep working to improve all this — not just because it’s the right thing to do, but because it’s business suicide not to. If quality is overlooked for too long, people will eventually gravitate away and find new, non-Google experiences to occupy their time.
Or, as Scott Spencer, Director of Sustainable Ads at Google, puts it in his blog post: “In order for this ads-supported, free web to work, it needs to be a safe and effective place to learn, create and advertise. Unfortunately, this isn’t always the case. Whether it’s a one-off accident or a coordinated action by scammers trying to make money, a negative experience hurts the entire ecosystem.”
The bad ads report comes in the wake of Google taking a much more proactive stance on tackling harmful content on one of its most popular platforms, YouTube. In February, the company announced that it would be getting more serious about how it evaluated videos posted to the site, penalising creators through a series of “strikes” if they were found to be running afoul of Google’s policies.
The strikes are intended to hit creators where it hurts them most: by curtailing the monetisation and discoverability of their videos.
This week, Google floated a second line of attack to try to raise the level of conversation around questionable content: it plans to post factual context from Wikipedia alongside videos that push conspiracy theories (although it’s not clear how Google will determine which videos are conspiracy theories and which are not).
Whether or not that flies, even as Google gets a grip on its current set of malicious and harmful ad and content types, there will always be more fish to fry when it comes to questionable content. Google’s Spencer says that targets for this year include “several policies to address ads in unregulated or speculative financial products like binary options, cryptocurrency, foreign exchange markets and contracts for difference (or CFDs).”
The company will also put an increased focus on gambling ads, and on working better with organizations that legitimately treat addiction and other problems but risk getting swept up in checks aimed at similar-looking, but ultimately scam, versions of the same services (likely in response to this particular controversy from February).