A startling piece on Plagiarism Today reveals that when users run AI-generated text through Grammarly’s “Humanize” rewriter, the company’s Authorship report can still label the result “Typed by a Human.”
That’s right – what looks like a green light for human authorship may actually mask something entirely different. The inconsistencies are trickier than you might think, and frankly, I’ve got opinions.
Consider this: a test essay generated by ChatGPT was judged by Grammarly’s Authorship tool as 0% human-typed but 75% “Rephrased with Grammarly’s AI,” yet it still received the green “Typed by a Human” status.
The problem? Many teachers or reviewers simply see the big label and stop digging. The fine print gets lost.
The article explains that those reviewing these reports must understand what they’re seeing. If they don’t, the system’s credibility takes a hit.
This isn’t just a classroom issue, either. As academic integrity services point out, when tools misidentify or misclassify rewrites and AI-generated text, the risk of unfair accusations or false reassurance skyrockets.
One deep-dive study found that texts mostly edited by AI (rather than fully generated) still carried machine fingerprints, and those edits increasingly evade detection by current systems.
Check out the research on AI-edited text detection, which underscores how even mixing a human draft with AI touch-ups complicates authorship attribution.
From a personal standpoint: I see this uproar over “AI in writing” as less about the tech and more about transparency.
If Grammarly builds a “Humanize” tool that lets AI-generated phrases slide into human-written territory, then educators, creators, and platforms need clearer signals.
The article suggests a “yellow” category for “Edited by AI” might help, and I agree: nuance matters.
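To make that nuance concrete, here’s a minimal sketch in Python. Everything in it is an assumption on my part: the thresholds, the function names, the exact categories. It is not Grammarly’s actual logic; it only illustrates why a binary label hides exactly the case described above, and how a third tier would surface it.

```python
# Purely hypothetical sketch -- NOT Grammarly's real logic or thresholds.
# It shows how a binary status swallows the gray area, and how the
# "Edited by AI" tier the article proposes would expose it.

def binary_status(pct_ai_generated: float) -> str:
    # Binary labeler: anything not majority AI-GENERATED reads as human,
    # even if almost nothing was actually typed by a person.
    return "AI-generated" if pct_ai_generated > 50 else "Typed by a Human"

def three_tier_status(pct_typed: float, pct_ai_rephrased: float) -> str:
    # Adds the proposed "yellow" tier (cutoff values are invented).
    if pct_typed >= 80:
        return "Typed by a Human"   # green
    if pct_ai_rephrased >= 20:
        return "Edited by AI"       # yellow: warrants a closer look
    return "AI-generated"           # red

# The article's test essay: 0% human-typed, 75% AI-rephrased,
# and 0% flagged as outright "AI-generated".
print(binary_status(0))             # -> Typed by a Human (misleading green)
print(three_tier_status(0, 75))     # -> Edited by AI (honest yellow)
```

The point isn’t the specific cutoffs; it’s that a third bucket forces the gray area into view instead of rounding it up to green.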
Worst part? This all comes as many institutions are doubling down on AI-detector tools or automated authorship flags.
But what’s the point of a detector if the pipeline lets AI-written, AI-humanized work pass as entirely human-typed?
To illustrate, a blog post on Originality.ai shows how using rephrase/rewriter features in Grammarly raised AI-detection scores elsewhere – so the laundering isn’t invisible to everyone.
Here’s the practical punch: if you’re a student, writer, or publisher treating Grammarly’s Authorship reports as the gold standard, you might want to pause and ask questions.
How long did it take to write the piece? Was there heavy rewriting by an AI agent?
Did the system track the version history, or did someone paste in a draft later? In many cases the visible “human-typed” status gives false comfort.
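None of those questions requires exotic tooling. Even a crude pass over revision metadata can flag a wholesale paste-in. Here’s a minimal sketch, assuming a hypothetical log of (timestamp, chars_added) events, not any real Grammarly API:

```python
# Hypothetical sketch: flag a draft that arrived as one big paste rather
# than accumulating through normal typing. The log format (timestamp,
# chars_added) and the 0.7 threshold are assumptions for illustration.

from typing import List, Tuple

def looks_pasted(revisions: List[Tuple[float, int]], threshold: float = 0.7) -> bool:
    """True if a single revision contributed most of the final text."""
    total = sum(chars for _, chars in revisions)
    if total == 0:
        return False
    largest = max(chars for _, chars in revisions)
    return largest / total >= threshold

# Organic drafting: many small additions spread over time.
organic = [(0, 120), (60, 95), (130, 140), (200, 80), (260, 110)]
# Suspicious: one 5,000-character insertion dwarfs everything else.
pasted = [(0, 40), (30, 5000), (90, 25)]

print(looks_pasted(organic))  # -> False
print(looks_pasted(pasted))   # -> True
```

A check like this is trivially fooled by a patient cheater, of course, which is why the human-in-the-loop advice below still matters.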
For institutions and educators: it’s time to revisit how authorship tools are used. If a reviewer sees “75% human-typed” and accepts it without also checking draft history or revision time, they’re trusting a system that can be gamed.
Combine the authorship scores with human checks: talk to the writer, review draft logs, ask for outlines. Relying solely on machine labels won’t cut it.
At its core, what this story reveals is a trust gap. Tools like Grammarly aim to support writers – and they’re great at grammar, tone, clarity.
But when they step into the authorship and authenticity game, unintended consequences pop up.
When AI-assisted tools blur the lines of identity and authorship, distinguishing who really wrote what becomes a labyrinth.
I’ll be watching how Grammarly responds to this criticism – will they tweak their category system, improve transparency, offer better audit trails?
Because if they don’t, the “laundering” issue may erode trust far beyond students’ essays.

