Now, researchers at Northeastern University have demonstrated a new kind of creative sleuthing that can detect AI-generated writing by focusing not on what gets written but rather how it gets written – stuff like weird punctuation, strange sentence structures and that unpredictable “I wrote this before I had my coffee” feel.
Their tool reportedly runs on a regular laptop (no giant server bank needed) and reaches 97% accuracy.
The idea is simple but cunning: although AI models can spew out text that reads as recognizably human, they often fail to recreate the randomness people bring to the table.
The synonyms we swap in, the sentences we drag out when our brains overthink an explanation, the punctuation we muddle when we're distracted – these are the writing fingerprints the team tracks.
“We may write informally when texting a friend but more formally in sending an email to our boss,” one of the study’s authors, Sohni Rais, said. That variability? It’s harder for AI to fake.
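To make that "variability" idea concrete, here is a minimal sketch of one crude proxy for it: the variance of sentence lengths in a passage. This is a hypothetical illustration of the general principle, not the Northeastern team's actual metric or feature set.

```python
import re
import statistics

def sentence_length_variance(text: str) -> float:
    """Variance of sentence lengths (in words) -- a crude stand-in for the
    'human variability' described in the article. Hypothetical metric,
    not the researchers' actual method."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.variance(lengths)

# Human writing tends to mix short and long sentences; very uniform
# lengths can be one (weak) hint of machine generation.
print(sentence_length_variance(
    "Short one. Then a much longer, rambling sentence that wanders a while before it finally stops. Okay."
))
```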
This tool isn't only for catching AI text in essays or content farms – it applies far more broadly.
As generative-AI systems become cheaper and more accessible, answering "who actually wrote this?" matters more.
In media, and in the credibility-dependent worlds of education and business, "AI assisted" isn't going to be enough – if we're honest, we might have to say "AI generated" or "human written with AI help."
There’s pressure mounting for transparency. Some recent work has also made the case that generative-AI models should come with detection tools baked in before they’re publicly released.
This is where things get sticky (and interesting): detection isn’t just a tech issue, it’s also about trust.
Previous studies, for example, found that many AI-detection tools were 80% accurate or less, and biased – human writers were too often mislabeled as bots.
That ups the stakes: if you falsely flag a human writer's work as "AI-generated," you could do real reputational harm.
Here's my take: this looks less like a finish line and more like a checkpoint in a broader race. Generative AI is getting better and faster by the week, while detection tools are playing catch-up.
But detection isn't just about reading tea leaves anymore – it's also about process, provenance and context.
The Northeastern tool highlights "stylometric features" – variance in word use, sentences per paragraph, and how far punctuation marks sit from the words they relate to.
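For readers who want a feel for what "stylometric features" can look like in practice, here is a short sketch that computes rough stand-ins for the three features named above: a type-token ratio and word-length spread as proxies for variance in word use, sentences per paragraph, and average words between commas as a proxy for punctuation placement. These are illustrative definitions of my own, not the Northeastern team's.

```python
import re
from statistics import mean, pstdev

def stylometric_features(text: str) -> dict:
    """Rough, illustrative stand-ins for the kinds of stylometric features
    described in the article -- not the researchers' exact definitions."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())

    # 1) Variance in word use: vocabulary diversity and spread of word lengths.
    type_token_ratio = len(set(words)) / len(words) if words else 0.0
    word_length_spread = pstdev([len(w) for w in words]) if words else 0.0

    # 2) Sentences per paragraph.
    sents_per_para = len(sentences) / len(paragraphs) if paragraphs else 0.0

    # 3) Punctuation placement: average number of words between commas,
    #    a crude proxy for how punctuation is distributed through sentences.
    comma_gaps = [len(chunk.split())
                  for s in re.split(r"[.!?]+", text)
                  for chunk in s.split(",") if chunk.strip()]
    avg_comma_gap = mean(comma_gaps) if comma_gaps else 0.0

    return {
        "type_token_ratio": type_token_ratio,
        "word_length_spread": word_length_spread,
        "sentences_per_paragraph": sents_per_para,
        "avg_words_between_commas": avg_comma_gap,
    }

print(stylometric_features(
    "First sentence, with a comma. A second one runs a bit longer.\n\nNew paragraph here!"
))
```

A real detector would feed features like these into a trained classifier; the point of the sketch is simply that the signal comes from how the text is put together, not what it says.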
So if you're a writer, content creator or educator today, maybe ask yourself a few questions. How do you mark work that had AI help?
Do you track how much rewriting or hand editing went into it? Could someone come along and dispute the authenticity of your writing? And if you're generating content for others, is transparency baked in?
In sum: the Northeastern tool is a good start at tipping the balance toward humans in the effort to figure out who, or what, was behind the keyboard.
But the playing field is changing quickly. If I had to predict, the next milestone will be "does the detector know how much AI helped?" rather than a simple "AI vs. human."
That's the part I'm really watching – because when that line gets hazy, well, the rules of the game aren't quite so clear anymore.

