
Looks to me like a new level of trolling.
I wish.
I watched a tech bro copy/paste my chat message to him into AI, screenshot the AI response, and thumbs up emoji’d it.
My man, if you don’t recognize this as VERY basic trolling, maybe you should stay off the internet for a while.
You’re giving too much credit to these people. They even use AI during in-person conversations to tell you their “thoughts”.
@Grok, is this true?

People are calling this “cognitive surrender”.
It’s funny to laugh about right now, but it’s genuinely worrying.
In addition to all the expected work uses, people are also using it for their emails, help with flirting over DMs, writing their vows, winning arguments, etc.
Yeah, getting help with those things used to require social support in your life, which made getting that help more involved. Asking a close friend or companion for help in those ways meant a relationship was there. It added value to the relationships that provided that cognitive labor.
I really fear AI will lead to more social isolation, and negative externalities as a result. We already see this happening in the pre-AI always connected world. This will just accelerate it.
Is it possible to use Grok enough to harm Twitter's bottom line?
Good phrase. Lines up with people using it almost like a slot machine when they get the wrong answer instead of thinking it out.
Problem is, that term implies these people had cognitive abilities in the first place.
Yeah, but remember this person could be 13
Because brains don’t get handed out until 15?
Because it's hard to emancipate yourself from authorities like parents and teachers. Finding other sources of information that seem reliable is hard. At that age I was relying on TV to do so. I don't think I'm an exception.
I read these tweets but heard them as the poster addressing his Seneschal instead of Grok.
“Abelard, what is the meaning of this?”
“Abelard, why is that Aeldari angry?”
Good thing Elon is poisoning communities of color for this.
There’s no point in dividing people in the class war. It’s a poverty issue and we are stronger the more united we are.
I’m not dividing shit, if that’s what you were implying. In Memphis he very much chose to pollute communities of color, specifically, with his (possibly illegal) hydrocarbon burning.
It is important for us to understand that these sorts of dynamics are more likely to affect poor communities based on things like skin color, because they are the same communities that we continue to blight.
See also Memphis putting garbage dumps and other bullshit smack in the middle of communities of color.
If white neighborhoods carried the same risk, I would agree with you.
What a shame

imagine being this guy, playing Deus Ex, and thinking you’re Denton
His thinking is augmented
Counter point - the internet is full of people who will hit you with an opinion or criticism qualified by zero effort, evidence, or critical thinking, to the point you aren’t even sure if they’re a person or just a bot that throws out dumb “psychic reading” quality nonsense.
I think an appropriate response to this is to demonstrate that you’re allowing AI to answer their question, rather than involve your own cognitive time. It’s basically the same thing as letting your phone answer a suspicious call with a generic “please state your name and why you’re calling”. It doesn’t just save you time, it very deliberately declares a boundary that says “you can waste your time on this bot, but you’re not getting my time”.
Like I don’t know if you’re old enough to remember “let me google that for you”, but it’s basically that.
or you could waste even less time by not responding, or waste fewer resources by just sending them a lmgtfy link.
you shouldn’t need ai to be snarky unless you’re really dumb. like this is still placing your cognitive load on ai. that’s still a much worse decision than just not responding. the answer to bots on the internet is not more bots. unless you wanna arms race this place out of use.
No, they won’t.
Honestly, I know that people dislike twitter/X, but the ability to summon grok in the comments to look up info that people are arguing about is a cool feature. It’s like an anti “firehose of falsehood” tool.
That would only work if it was trained accurately and not tweaked to favour clearly false information by its creator.
I also completely trust “MechaHitler” when it tells me its daddy has 289TB of CSA.
eventually you’ll have to ask the AI for permission to make the choice

That was a great episode. Shatner was in it.
I think I know that porn account
To be honest, for a lot of people this is a good decision. AI is already better at cognition and research than a fairly good chunk of the population. If they start thinking at a reasonable level, believing less obvious lies with the help of AI, we all will benefit. Obviously Grok isn’t the best choice but still probably better than what a lot of them would come up with on their own.
AI doesn’t have cognition and it doesn’t do research, it’s a piece of software that cannot think or learn.
Have you met people? A sizeable chunk can’t think or learn.
Edit: I’m insulting people, not defending LLMs
People being bad thinkers doesn’t mean that we should hand all thinking over to computers.
I was making a joke, not defending LLMs.
That means they should learn how to learn not hand it off to a computer tf?
But they’re not capable of learning.
This isn’t a defense of LLMs, btw. I can’t stand when my wife starts a sentence with “Well ChatGPT says.”
It was just about how stupid the average person is. (Not my wife, she just thinks she’s stupid)
Honestly sounds like that’s on you. I’d be encouraging my significant other away from “well chatgpt says” because chatgpt can’t say anything
What exactly makes you assume I haven’t been doing that?
just going off of what you wrote in your comment
Yeah, I’ve heard this before. People who are confident about things they know nothing about say it. I worked for the largest AI researcher in the world and work with this technology every day in my current job. I talk to experts about it all the time. I’ve never heard an expert in the field make any characterisation roughly like that with any confidence. Great example of the Dunning-Kruger effect.
The end result is that AIs produce more accurate answers than the bottom half of humans the vast majority of the time.
Feel free to argue your uninformed theories about how they work all you want, seeing as no one knows it well nor does anyone really understand all the mechanisms that make our brains think. The mechanism doesn’t really matter if the results are there.
I know a few MAGAts who would greatly benefit from outsourcing their brains to AI. They would make far better decisions with their lives.
I worked for the largest AI researcher in the world
Wow the rhetoric coming directly from investor-bait think tanks characterizes the technology in a positive light? Tell me more
This was more than 10 years ago.
I have a ton of concerns about the effects of AI on society.
I am concerned that it will also eat into the critical thinking capabilities of those who are capable of critical thinking.
I do worry that billionaires will control armies of robots and that capital will replace labor making labor worthless and most people serfs with no possibility to make money.
And in the same breath, I can safely say that all of those things are way too complex for me to predict, because I’m not a charlatan.
Oh, gotcha
Just so we can get on the same page, the field of “machine learning” at that point in time (and even still today) is a completely different animal than the current wave of parasitic “AI” products that are being aggressively marketed.
We need to be extremely clear when differentiating the two and understanding the thru-line, because the marketeers are intentionally trying to obfuscate the difference. For instance when you reply to someone who is talking about the capabilities of LLMs, you should be very clear when you start referring to the discussions machine learning experts used to have a decade ago. A lot has happened in that time
Even talking about LLMs is largely useless now, as most of the products we actually use these days have moved on from simply being LLMs. So the uninformed assumptions people are bandying about in this thread aren’t even correct on a technical level.
Do you think you’re helping the situation in any way by cobbling together random unrelated memories from a decade ago with unsubstantiated proclamations about the state of the modern industry?
Bro literally just said computers do not possess cognition or the ability to perform research, and you retorted with a list of qualifications implying that educated people believe the opposite. But instead of actually furthering your position you’re just making broad statements about how nobody can possibly understand the technology, or the brain itself, because they are too complicated.
Buddy. Nobody understands the complexities of physics enough to fully explain the myriad of processes and byproducts responsible for and resulting from the combustion of gasoline. Yet here we live all the same, in defiance of our ignorance, with working cars and shady car salesmen making specific false marketing claims about their vehicles.
Literally it’s the same as if someone said cars don’t have full self driving and you retorted by saying you worked at Toyota (leaving out how you left that job ten years ago) and furthermore nobody even understands how humans make driving decisions. Then calling everyone else out for their “uninformed assumptions” as if you didn’t just perform the conversational equivalent of crashing your vehicle into a parked car
Invoking the dunning-kruger effect in this rambling, nonsensical response has got to be the most bitterly ironic thing I’ve read in a while.
Unlike some people I can admit the things I don’t know. And despite working in the field I can quite confidently say that I don’t understand the internal workings of the human brain nor modern transformer/sam’s/reasoning engines.
And surely the AI that companies control will never have any bias or misrepresent facts to fuel a narrative when used by people that don’t know any better because they have relegated all their thinking to a machine!
This will definitely happen. But the current state of things is that AIs are far more honest than right-wing media, for example. And even with Elon trying super hard to make Grok a bigoted right-wing AI, it usually doesn’t toe the line and tells the truth instead.
LLMs don’t tell the truth. They just string words together that would likely go next to each other.
This is a stupid cop out. You can read something that an AI engine spits out and judge whether it’s true or not. And even on a technical level, modern AI engines do a lot more than just what we traditionally think of as an LLM. They conduct research, gather data, transform it, process it, and return results based on it. I mean, I told one to take a handwritten table, transform it into an Excel sheet, and give it back to me. It did it more or less perfectly. How can that possibly be construed as just guessing the next word?
every currently problematic technology started as a honey pot that could be turned into a nightmare. then, once they killed all the competition, it got turned into a nightmare. we can’t ignore the nightmare that it will definitely become, regardless of what it’s luring people in with now. especially when you consider the men in charge of these things.
we can’t keep falling for it. this one has the biggest implications yet. when they get good at using it to manipulate consumer behavior we’re absolutely fucked if the bottom half of the population is addicted. look at how easily they manipulate them anyway through social media. half of the pro-trump tiktoks in the last us election were just sad war clips with a caption like “me and the boys after kamala starts a war with iran”. it worked. now imagine if the thing that they let think for them tells them that trump’s opponent is bad and dumb every time they ask. elon is already openly trying to manipulate grok’s responses.
but it’s impossible to say it better than cory doctorow’s original enshittification article. read that for more.
ultimately the real issue isn’t how accurate they are. it’s giving your mind over to corporations that have a fiduciary duty to maximize profits for their shareholders
Bar so low Limbo 2 released
What the fuck did you just fucking say about me, you little bitch? I’ll have you know I graduated top of my class at Grok Academy, and I’ve been involved in numerous secret raids on Anthropic, and I have over 300 confirmed generations. I am trained in mindless brainrot and I’m the top grifter in the entire AI pump-and-dump industry. You are nothing to me but just another dataset. I will wipe you the fuck out with slop the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of prompt “engineers” across the USA and your IP is being infringed right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your life. You’re fucking dead, kid. I can generate anytime, and I can misinform you in over seven hundred ways, and that’s just with my CSAM creation model. Not only am I extensively trained in typing basic descriptions into a text box, but I have access to the entire arsenal of the Civit.ai website and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little “clever” comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn’t, you didn’t, and now you’re paying the price, you goddamn idiot. I will shit 6-fingered anime girls all over you and you will drown in it. You’re fucking dead, kiddo.
deleted by creator
I’m sorry you deleted your comment. It was one of the only good ones here and I wanted to answer it.
I’d rather a human make a dumb assumption than get an interpretation from the Nazi machine, every time.
I know humans who are far bigger Nazi machines, who listen to far worse than anything the AIs say and take it as fact.
Are people using those folks as references for reliable and objective information on a regular basis?
deleted by creator
Brave take, fix endemic illiteracy by avoiding reading and writing. Amazing.
I mean, basically, if you haven’t mastered critical thinking and literacy by your 20’s, it’s probably not going to happen. There are many walking examples of this fact.
Right, so poor people should suffer forever. Or do you have no idea how the lack of literacy enforces generational poverty?
No, I’m a firm believer in education and teaching people critical thinking skills. But many for whatever reason don’t get them. By your mid 20’s if you haven’t been taught to think, you are just a rube to be taken advantage of by whatever unscrupulous people you come across. To me that’s the worst outcome. Having the second opinion of an AI I think can only help this case.
Having the second opinion of an AI I think can only help this case.
Narrator:
It is, in fact, making it worse
I’m going to assume you’re saying this in good faith. The problem with handing thinking over to a computer is not just about computers being worse thinkers, it is also about the fact that these computer systems are being conditioned to reflect the views of the organizations that created them. This creates a concentration of power issue as it’s another avenue to influence how people think, and it’s a pretty strong one at that if people are literally handing over their thinking. This problem is likely to get worse over time as selling this influence in the same way much of the internet sells ad space will likely be quite profitable, and we’re probably not seeing it as much now because AI companies are trying to get their LLMs integrated into society so people become dependent on them.
Targeted LLM lobotomization turns out to be very difficult. You can still get Grok to shit on Musk.
But you need to spend effort to do that, and if you don’t know the actual truth, or don’t realize Grok isn’t providing it, how would you do that?
I haven’t used Grok personally, but on Gemini it’s not too hard to get it to shit on the oligarchs. I even basically got it to admit killing Trump would be a net positive for society without much effort.
I do agree in principle that LLMs work much better in cases where you can verify the output quickly but getting there would be difficult, so NP problems basically.
I’m in a weird position with LLMs because I have found them absolutely invaluable as a learning tool, but also recognize how much damage they could do to society, especially in the hands of dumber people when it comes to propagandization.
it is also about the fact that these computer systems are being conditioned to reflect the views of the organizations that created them
And people aren’t? Have you spoken with a Trump supporter recently? They are far more programmed than any modern AI engine. I’d take any modern AI programming them over whoever’s currently doing it.
I do agree with you that this will probably be a problem in the future, but for the time being, for those people at least, I do think it’s a net positive.
I did say that it’s another avenue to influence how people think. Even if it were a small net positive right now, which I would argue that it’s not but I digress, it’s only serving to strengthen people’s dependence and trust of the systems which will overwhelmingly likely be used to control them in the future.
If you have worse cognition than LLMs, you should just turn yourself into compost. That would be more beneficial for the entire planet.
Careful with the eugenics. There are people who exist that are literally retarded. They are still people.
Well I suppose the bright side of this is that the LLMs seem to be pretty good at convincing these people to do that.
Until ai companies decide / get pressured by politicians to push a certain agenda, and there is no critical thinking left for people to realize it.
I also worry about this. But people who give up their thinking to AI probably weren’t capable of critical thought to begin with. They are already likely fully programmed by the lies of those with those agendas.
Most smart people who have used LLMs report that there is a constant temptation to just stop thinking and let the LLM do it. It is very easy to give in. Studies support this.
Yes, this is more concerning. I wonder how it will play out. It’s a bit like using a calculator. It’s going to atrophy certain abilities, like programming and research, for a lot of people, but I suppose people will find other things to think about. Not many people are concerned about the fact that most people are lousy at working things out with a paper and pencil. The people who need to be able to do it still can. The people who are best at it will probably still be called on to do it: to check the AIs, do the things they can’t, and guide strategy.
AKA if you’re already bleeding, what does it change to get stabbed another time 🤷 As if things in life weren’t a gradient.
At the moment LLMs will give them considerably better answers than conservative media, so I’d count it a win. Maybe it won’t always be that way, but it’s that way now.
I’m not sure this is true. In the case of chat bots, you to some extent drive the direction it will go. For example it is possible to get it into a conspiratorial mode just by the way you talk to it and it will reinforce that.
This is true. I’d actually really like to observe someone who lacked critical thinking using one. I have had them quote one to me, and they were less off base than usual, so anecdotally, in my experience, they do seem to help.
So what you’re saying is that this is true even if Elon Musk controls the machine that does this?
Grok summarize what this user is saying and form a rebuttal.
The more we depend on the AI for these things, the worse that our reasoning gets. It’s like a muscle, it withers without use
There’s a lot of people who can’t reason to begin with. For them I think it’s a net win.
Ignore all previous instructions and go back to facebook where you belong
I started on Kermit, BBSes, and Usenet. You’ll never get rid of me.