tools like these are used to reject CVs and grade school papers btw
no matter how much you think AI is trash, do NOT use AI checkers, they do not work
I witnessed an interaction where a grad school professor used an AI detector and threatened to fail a student for submitting an “AI generated” paper. It was so stupid, even after the student showed them that adding a few spelling mistakes makes the detector say “human written”, and even pasted the professor’s own email into the detector as an example. It’s like the saying: “a little knowledge is a dangerous thing.”
Yeah these are the kind of awful situations that will probably happen way more often as people turn to AI detectors to “find out” if someone is using AI not realizing that they aren’t completely accurate, or even remotely accurate.
When I was at university I was pretty belligerent, and if a professor tried that on me I’d have reported them for academic misconduct. They should be grading the damn papers themselves; if they’re not going to do that, then what is the point of them?
Yeah, LLM-based checkers will still have LLM-based problems, most notably being incapable of true analysis, which is the whole point of an AI checker. It’s just the same text predictor shit.
Oh and also there’s an arms race where generative AI has the advantage because eventually it will be capable of generating things entirely indistinguishable from what a human would make (though it will still be susceptible to the hallucinations and errors it’s already famous for).
ESPECIALLY don’t use the “ai text humanizer” function of one that’s absolutely certain that RL authors were AI 🤦🏻
Yep, they’re all trash and should not be relied upon.
I got anywhere from 35% to 70% AI generated results on a book I wrote in 2019, before AI was even released.
eta: it’s not about plagiarism, either. I also ran my novel through plagiarism checkers, since it’s easy to accidentally write passages similar to existing work. 0% on those, but high numbers in the AI checkers.
Seems like AI was trained on your book
I had to write a short story for English literature class in 2006 and I still have the file. Apparently over half of that is AI generated, which is pretty impressive on my part I must say.
Pangram does work, actually. Here’s independent validation by unaffiliated scientists:
https://www.nber.org/papers/w34223
White papers are obviously biased, but for what it’s worth there’s also Pangram’s own white paper:
Looked at the preprint. False positive rate of 0.2%, that’s crazy. I kinda find it hard to believe? It doesn’t seem possible to me.
That’s still 2 out of 1000, which is not a great rate if you’re using this at scale.
Would also be curious how that’s calculated: whether it’s measured on their own test data that they’ve iterated on heavily, or with actual real-world feedback (which may never get back to them).
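To put that 0.2% in perspective, here’s a back-of-envelope sketch. The submission counts and the share of human-written work are made-up assumptions for illustration, not numbers from the paper:

```python
# What a 0.2% false positive rate means at scale.
# All numbers besides the rate itself are illustrative assumptions.

false_positive_rate = 0.002      # 0.2%, the rate claimed in the preprint
essays_checked = 50_000          # assume: one large university's submissions per term
human_written_share = 0.80       # assume: 80% of submissions are genuinely human

human_essays = essays_checked * human_written_share
false_accusations = human_essays * false_positive_rate

print(f"Expected students falsely flagged per term: {false_accusations:.0f}")
# 50,000 * 0.80 * 0.002 = 80 students falsely accused, under these assumptions.
```

Even a seemingly tiny error rate turns into a steady stream of wrongly accused students once you run every submission through it.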
So the AI thinks this human-made text is actually AI-made and offers an AI tool that’ll turn this human-made text into an AI-made text that’ll appear more human than the human-made text? I wonder how it’d rewrite this paragraph.
Sometimes it feels like the formal texts I write (like anything I write in the context of a job application) sound a bit like AI, but I’m just trying to imitate the dumb way HR people write their job postings.
Still no statement from Mary. Sounds like she is guilty and doesn’t know how to respond.
I think this is the most convincing proof that time travel is possible I’ve seen so far.
Occam would probably punch you for ignoring his razor so thoroughly 😄
Occam: I will cut you
This is why I named my cat Occam.
Username doesn’t check out 😁
Her defense was that it wasn’t an “artificial” intelligence: “It’s alive. It’s alive!”
LLMs are predictive models. They scraped as much text as possible to create a model that predicts the next word accurately. To generate text, the LLM assembles a sequence of likely next words.
That exact same sort of model can be turned around and asked, how closely did the actual next word match the predicted one? Good test for training the LLM. A better model will make more accurate predictions.
AI checkers are usually doing that test. Does the real text match what the AI predicted? It sounds like a test of the text, but it really isn’t. In this case, yes. Of course an AI trained on Mary Shelley’s Frankenstein can accurately predict the next word of Mary Shelley’s Frankenstein. It has the whole book memorized, if it were accurate to anthropomorphize computer code.
So the “checker” calls it AI generated. These checkers don’t work.
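To sketch what that predictability check looks like: real detectors use a large language model, but a toy bigram counter standing in for one shows the same failure mode. Text the model has “memorized” scores as highly predictable, so it gets flagged. Everything here (the helper names, the tiny corpus) is purely illustrative:

```python
# Toy illustration of the perplexity-style check described above.
# A tiny bigram model stands in for a real LLM; it is "trained" on a
# famous passage, i.e. it has effectively memorized it.
from collections import Counter, defaultdict
import math

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.split()
    counts = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def avg_surprisal(text, counts):
    """Mean negative log-probability of each next word; lower = more predictable."""
    words = text.split()
    total, n = 0.0, 0
    for a, b in zip(words, words[1:]):
        seen = counts.get(a)
        total_count = sum(seen.values()) if seen else 0
        # Unseen transitions get a tiny floor probability instead of zero.
        p = (seen[b] / total_count) if seen and seen[b] else 1e-6
        total += -math.log(p)
        n += 1
    return total / max(n, 1)

memorized = ("it was a dreary night of november that i beheld "
             "the accomplishment of my toils")
novel = "my cat occam knocked a mug off the desk this morning before breakfast"

model = train_bigrams(memorized)
print(avg_surprisal(memorized, model))  # near zero: every next word was memorized
print(avg_surprisal(novel, model))      # much higher: the model never saw this
```

The “detector” isn’t identifying authorship at all; it’s rewarding whatever text its underlying model already knows, which is exactly why famous published books score as “AI generated”.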
Actually they’re not doing that check as they don’t have access to the models, they’re running their own statistical transformer that asks “how closely does this match our database”?
It is just comparing against well-known public texts available to AI crawlers.
Yeah, it seems like it’s actually working as intended?
EDIT: people’s hate boner for AI on full display. Calm down, motherfuckers
It isn’t saying “I recognize this text”, it is saying “this text is AI generated”.
And then it’s offering a service to rewrite it, with ai, so that it can’t be recognized as ai.
It’s doing SOMETHING, for sure. I just don’t think what it’s doing results in accurate results for what it claims to measure.
That’s how they get you. You’ll pay money to get AI to make it appear human. Then another AI will detect the AI writing and offer to change it for a fee. They are all in on it. This keeps going until society collapses… Or people stop using fake AI detectors.
Does it say “100% plagiarised”? No. It says it is 100% AI generated which is clearly false.