In the trainings my company has run with Anthropic, we found that having the bot go out and ingest information burns the most tokens by far.
So, like, our resident senior engineer of dumbfuckery told Claude to go read our entire GitLab and caused a $5,000 ingest event.
I'd expect that if you told Target's AI to read your list of likes and dislikes, stored in some very large public git repo, it would cost them a lot.
Also make sure to tell it to think really hard about it.
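For scale, here's a minimal sketch of the ingest math. The ~4 characters per token and $3 per million input tokens are illustrative round numbers I'm assuming, not figures from the thread or any provider's actual price sheet:

```python
# Rough cost of a "go read the entire repo" request.
# Assumed conversion rates (not from the thread, purely illustrative):
CHARS_PER_TOKEN = 4                  # typical rough ratio for English/code
USD_PER_MILLION_INPUT_TOKENS = 3.00  # made-up example input price

def ingest_cost_usd(repo_bytes: int) -> float:
    """Estimate the cost of feeding repo_bytes of text to a model once."""
    tokens = repo_bytes / CHARS_PER_TOKEN
    return tokens / 1_000_000 * USD_PER_MILLION_INPUT_TOKENS

# A 2 GB codebase, read a single time:
print(f"${ingest_cost_usd(2 * 1024**3):,.0f}")
```

And that's for one pass. An agent re-reading files across many turns multiplies the figure, which is how one "read our GitLab" can snowball into a four-digit bill.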


If you're dumb enough to trust an AI agent at all, that's bad enough. But if you expect one that's provided, owned, and operated by the capitalist company you're shopping at to act in your best interest, that's a special kind of stupid.