Vox writes about OpenAI, text detection and errors

Number 310

Subscribe below to join 4,129 other smart people getting “The Cheat Sheet.” New issues every Tuesday and Thursday.

If you enjoy The Cheat Sheet, please consider joining the 18 awesome people who are contributing a few bucks via Patreon. Or join the 39 awesome citizens who are now paying subscribers. Paid subscriptions start at $8/month. Thanks!

Support “The Cheat Sheet”

Vox recently subjected its readers to a story about OpenAI and text detection. They shouldn’t have done that.

It’s not just that the story is terrible, though there’s plenty of that. It’s that it isn’t news (we covered the ChatGPT watermark story a month ago, in Number 308) and it’s completely incorrect.

Okay, it’s horrible.

Just to bring you up to speed: when OpenAI launched ChatGPT, the company promised to watermark its text so it would be easily detectable. It didn’t. It turns out OpenAI built a watermarking system for ChatGPT’s text about two years ago and has refused to release it. You’re caught up.

First of all, Vox completely undermines its own credibility on this issue with this paragraph:

At the moment, it’s hard to catch cheating on essays using AI tools. A few tools advertise that they can verify that text was generated by AI, but they’re not very reliable. Since false accusations of plagiarism against students are a huge problem, these tools would have to be extremely accurate to work at all — and they’re just not.

That’s just not true.

Vox didn’t interview anyone who could speak credibly on the issue of AI text detection. They certainly didn’t quote anyone on the subject. Or anyone at all, really. They just decided to ignore all the research and declare that AI detectors aren’t accurate. It’s so lazy and counterproductive.

However, Vox is right when it explains why OpenAI won’t release its watermarking technology:

If OpenAI—and only OpenAI—were to release a watermarking system for ChatGPT, making it easy to see when generative AI has produced a piece of text, it would have no effect whatsoever on plagiarism in student essays. Word would spread quickly and everyone would simply move on to one of the many AI options available today: Meta’s Llama, Anthropic’s Claude, Google’s Gemini. Plagiarism would continue unabated, and OpenAI would lose a large portion of its user base. So it’s no shock that they would keep their watermarking system a secret.

First off, a big shout-out to Vox for listing all the other places students can go to get their AI plagiarism fixes. Nice. Psst, the Sinaloa cartel is reportedly working with the US Drug Enforcement Administration, so here are a few other places you can go to get your cocaine fix. Come on.

But the point is valid and worth emphasizing. OpenAI/ChatGPT won’t release its watermarking solution because it’s bad for business, which should make it clear exactly what business OpenAI is in. To put it not so subtly: OpenAI is in the cheating business. Or, as Vox put it, if OpenAI text were detectable, the company would “lose a large portion of its user base.” And it would rather hold on to its market share than act with integrity.

We said that a month ago too.

Anyway, I’ll continue here with this from Vox:

I think teachers are definitely capable of finding better ways to assess students

On behalf of all educators everywhere, thank you for your trust, Kelsey Piper, Senior Writer.

And then there’s this:

In the debate about plagiarism in schools, the stakes are low.

Again, thanks, Kelsey. I’m going to disagree with you on this one. I think academic fraud, the whole value proposition of organized education, and actually learning to write are pretty important.

As a reminder, the piece does not cite any educators or any authorities on AI text generation, AI text detection, or academic misconduct. It’s just Kelsey. Jazz hands, everyone.

As a journalist, I find this impossible to defend.

Part

I have said before that if I were to write as much about academic fraud in India as it deserves, I would only write about fraud in India.

The country has a problem, to say the least. In terms of chronic malpractice, India is second only to China, and even that ranking is a guess: I’d bet academic fraud is more common in China. We only know about India’s appalling lack of academic integrity because of its largely free press. But I’m guessing. My point is that India has a big problem, one that undermines the value of all the academic credentials given out there.

For your consideration, here are two headlines on deception in India over the past few days.

From the Wall Street Journal (subscription required):

Gangs make millions helping Indians cheat on exams

Middle-class families pay thousands of dollars to help their children get an unfair advantage on crucial tests

And from Business Insider:

India’s exam system is in chaos, with some families paying thousands of dollars to help their children cheat

First of all, no joke. This is also not the first time we have shared news about criminal gangs running cheating rings in India (see Number 137).

You wonder, or maybe it’s just me wondering, how long India can keep running such a questionable education system while sending its college-bound students to schools in the US, UK, Australia, and Europe. I ask because cheating is a learned behavior. It’s a habit. If students cheated in high school, chances are they will cheat in college too. As the saying goes, no one cheats for the first time in college.

Unfortunately, I think we all know the answer. Foreign schools will continue to accept students from India and other fraud-plagued places as long as the students—read: their parents—write full-tuition checks. Which also means that the schools cashing those checks have significant incentives not to look for, or crack down on, cheating in their own hallways.

The system, I fear, is corrupt on both sides. But I digress. India specifically has a serious problem.

Part

Notes, plural.

OKC

I was in Oklahoma last week to speak with the Oklahoma Association of Testing Personnel. Some photos, courtesy of OATP:

It was a great conference and the hosts were fantastic.

Australia

That same evening I had the honor of participating in a virtual panel in Australia with integrity researchers and practitioners. I was surprised and encouraged that approximately 600 people attended. As I told the audience, Australia is leading the way in recognizing, discussing, and addressing academic fraud.

I will share some brief thoughts on the Australia panel with you shortly.

ICAI

I’m also very happy to share that I will be giving the keynote address at the 2025 conference of the International Center for Academic Integrity (ICAI). That will be in March, in Chicago. Here’s their announcement:

Back-to-School Pitch

Every year around this time, around the time the school year starts, I make a public pitch to support The Cheat Sheet by becoming a paying subscriber or donating a few bucks via Patreon. The Patreon link is right at the top, and I think this is the link you can use to become a paying subscriber:

I’m not sure if it’s the right or best model, but there is absolutely no difference between a paying subscriber and a free subscriber. Everything is the same. The Cheat Sheet is designed to connect and share. To hide parts of it seems contradictory.

So all you get for supporting my work is overwhelming pride. And a thank-you. You can, I’m sure, attach a message to your paid subscription, and I’ll publish it. A paid subscription is $8 a month.

If you can, do it. For those who are already doing it, thank you. And whether you are doing it or not, thank you for all you do to support and share The Cheat Sheet.
