Is Google using a ChatGPT-like system for spam and AI content detection and ranking websites?

The headline is intentionally misleading – but only insofar as the use of the term “ChatGPT” is concerned.

“ChatGPT-like” immediately lets you, the reader, know the type of technology I’m referring to, instead of describing the system as “a next-generation model like GPT-2 or GPT-3.” (Also, the latter really wouldn’t be as clickable…)

What we will be looking at in this article is an older but highly relevant Google paper from 2020, “Generative Models are Unsupervised Predictors of Page Quality: A Colossal-Scale Study.”

What is the paper about?

Essentially, the authors found that the same classifiers developed to detect AI-generated copy – trained on the output of the very models used to generate it – can also be used successfully to detect low-quality content.

Of course, this leaves us with an important question:

Is this causation (i.e., is the system flagging low-quality content because it is genuinely good at detecting it) or correlation (i.e., is a lot of current spam simply created in a way that better tools could easily get around)?

Before we explore that however, let’s look at some of the authors’ work and their findings.

The setup

For reference, they used the following in their experiment:

Two detection models: OpenAI’s RoBERTa-based GPT-2 detector (a classifier that uses the RoBERTa model, trained on GPT-2 output, to predict whether text is likely AI-generated) and the GLTR model, which also draws on GPT-2 output and operates similarly.

Three datasets: Web500M (a random sample of 500 million English webpages), GPT-2 Output (250k GPT-2 text generations), and Grover-Output (1.2M articles they generated internally using the pre-trained Grover-Base model, which was designed to detect fake news).

The Spam Baseline, a classifier trained on the Enron Spam Email Dataset. They used this classifier to assign a Language Quality (LQ) score: if the model determined that a document is not spam with a probability of 0.2, the LQ score assigned was 0.2 (see the sketch after this list).
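To make that scoring step concrete, here is a minimal sketch. It is not the paper’s actual classifier: a small scikit-learn pipeline with toy training data stands in for the Enron-trained spam model, and all names and examples are illustrative.

```python
# Minimal sketch: turning a spam classifier's output into a Language
# Quality (LQ) score, as the paper describes. The tiny training set and
# scikit-learn pipeline below are illustrative stand-ins, not the
# Enron-trained classifier the authors used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labels: 1 = spam, 0 = not spam.
train_texts = [
    "WIN A FREE PRIZE CLICK NOW",
    "cheap pills best offer buy now",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft for your review",
]
train_labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

def language_quality(document: str) -> float:
    """LQ score = the model's probability that the document is NOT spam.

    predict_proba's columns follow clf.classes_, i.e. [0, 1] here,
    so column 0 is P(not spam).
    """
    return clf.predict_proba([document])[0][0]

print(language_quality("Claim your free prize now, limited offer!"))
print(language_quality("The committee reviewed the proposal on Tuesday."))
```

So a document the classifier is only 20% confident is not spam gets LQ = 0.2 – exactly the mapping described above.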

A note about AI-generated content

Language models have likewise developed over the years. While GPT-3 existed when this paper was written, the detectors they were using were based on GPT-2, which is a significantly inferior model.

GPT-4 is likely just around the corner, and Google’s Sparrow is set for release later this year. This means that not only is the tech getting better on both sides of the battleground (content generators vs. search engines), but combinations will also be easier to pull into play.

Can Google detect content created by either Sparrow or GPT-4? Maybe.

But how about if it was generated with Sparrow and then sent to GPT-4 with a rewrite prompt?

Another factor to keep in mind is that the techniques used in this paper are based on auto-regressive models. Simply put, such a model scores each word by how likely it would have been to predict that word, given the words that preceded it.
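To illustrate, here is a minimal sketch of that kind of per-word scoring in the style of GLTR, assuming the Hugging Face transformers library and the small GPT-2 checkpoint. GLTR’s actual tooling differs, but the core computation – the probability and rank of each actual token under the model, given the preceding tokens – is this.

```python
# Minimal GLTR-style sketch: score each token of a text by how probable
# (and how highly ranked) GPT-2 considered it, given the preceding tokens.
# Raw model output tends to sit in the high-probability, low-rank region;
# human text contains more "surprising" tokens.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_scores(text: str):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                 # (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    targets = ids[0, 1:]
    scores = []
    for i, tok in enumerate(targets):
        p = probs[i, tok].item()
        # Rank 1 = the token the model considered most likely at this position.
        rank = int((probs[i] > p).sum().item()) + 1
        scores.append((tokenizer.decode([int(tok)]), p, rank))
    return scores

for token, p, rank in token_scores("The quick brown fox jumps over the lazy dog."):
    print(f"{token!r:>12}  p={p:.4f}  rank={rank}")
```

A detector built on this idea flags text whose tokens are consistently the model’s own top picks.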

As models develop a higher degree of sophistication and start generating full ideas at a time rather than one word after another, AI detection may slip.

On the other hand, the detection of simply crap content should keep improving – which may mean that the only “low quality” content that wins is AI-generated.
