
Google on how LLMs compare to ML systems for ads safety

Google is working to adopt large language models (LLMs) across its products. One such area is ads safety, with the company detailing those efforts in its 2023 Ads Safety Report.

Google has been using machine learning systems “for years” to enforce ad policies. However, they need to be “extensively” trained on “hundreds of thousands, if not millions of examples of violative content.” 

In comparison, LLMs are “able to rapidly review and interpret content at a high volume, while also capturing important nuances within that content.”

From the report: "Take, for example, our policy against Unreliable Financial Claims which includes ads promoting get-rich-quick schemes. The bad actors behind these types of ads have grown more sophisticated. They adjust their tactics and tailor ads around new financial services or products, such as investment advice or digital currencies, to scam users."

It was harder for the older approach to "differentiate between legitimate and fake services and quickly scale our automated enforcement systems to combat scams." LLMs, by contrast, can quickly recognize new trends, identify the behavior of bad actors, and distinguish "a legitimate business from a get-rich-quick scam."

Overall, Google credits “advanced reasoning capabilities” as making possible “more precise enforcement decisions on some of our more complex policies.”

On the flip side, Google "faced a targeted campaign of ads featuring the likeness of public figures to scam users, often through the use of deepfakes" in late 2023 and into 2024.

"We pinpointed patterns in the bad actors' behavior, trained our automated enforcement models to detect similar ads and began removing them at scale," Google says. "We also updated our misrepresentation policy to better enable us to rapidly suspend the accounts of bad actors."

Other notable stats in Google’s 2023 Ads Safety report include:

  • “In 2023, we blocked or removed over 5.5 billion ads, slightly up from the prior year, and 12.7 million advertiser accounts, nearly double from the previous year.”
  • “Overall, we blocked or removed 206.5 million advertisements for violating our misrepresentation policy, which includes many scam tactics, and 273.4 million advertisements for violating our financial services policy.”
  • “We also blocked or removed over 1 billion advertisements for violating our policy against abusing the ad network, which includes promoting malware.”
  • “In 2023, we blocked or restricted ads from serving on more than 2.1 billion publisher pages, up slightly from 2022.”
  • “We took broader site-level enforcement action on more than 395,000 publisher sites, up markedly from 2022.”
