I have a ton of concerns about the effects of AI on society.
I am concerned that it will also eat into the critical thinking capabilities of those who are capable of critical thinking.
I do worry that billionaires will control armies of robots and that capital will replace labor, making labor worthless and most people serfs with no way to make money.
And in the same breath, I can safely say that all of those things are way too complex for me to predict, because I’m not a charlatan.
Just so we can get on the same page, the field of “machine learning” at that point in time was (and even today still is) a completely different animal than the current wave of parasitic “AI” products that are being aggressively marketed.
We need to be extremely clear when differentiating the two and understanding the through-line, because the marketeers are intentionally trying to obfuscate the difference. For instance, when you reply to someone who is talking about the capabilities of LLMs, you should be very clear when you start referring to the discussions machine learning experts used to have a decade ago. A lot has happened in that time.
Even talking about LLMs is largely useless now, as most of the products we actually use these days have moved on from simply being LLMs. So the uninformed assumptions people are bandying about in the thread aren’t even correct on a technical level.
Do you think you’re helping the situation in any way by cobbling together random unrelated memories from a decade ago with unsubstantiated proclamations about the state of the modern industry?
Bro literally just said computers do not possess cognition or the ability to perform research, and you retorted with a list of qualifications implying that educated people believe the opposite. But instead of actually furthering your position you’re just making broad statements about how nobody can possibly understand the technology, or the brain itself, because they are too complicated.
Buddy. Nobody understands the complexities of physics well enough to fully explain the myriad processes and byproducts responsible for and resulting from the combustion of gasoline. Yet here we live all the same, in defiance of our ignorance, with working cars and shady car salesmen making specific false marketing claims about their vehicles.
Literally it’s the same as if someone said cars don’t have full self-driving and you retorted by saying you worked at Toyota (leaving out that you left that job ten years ago), and furthermore that nobody even understands how humans make driving decisions. Then calling everyone else out for their “uninformed assumptions,” as if you didn’t just perform the conversational equivalent of crashing your vehicle into a parked car.
I mean, the proof is in the pudding. Go ask Google Gemini to do some research on a topic you know something about and check its work. The result will be somewhere below expert level and above the abilities of a large portion of the population to do that research themselves. If you want some proof, go ask random kids you didn’t really know that well in high school to do the same research. Many of them will fail. If you are relatively intelligent, I’m sure you know which ones they’ll be, too.
The reason I know that these randos on the internet don’t know how modern AIs work is that the experts don’t either (I still work with many of them). They know how they constructed the algorithms which construct the neural networks, but they don’t really understand how the neural networks themselves are composed or work (though progress is being made here).
To take your gasoline analogy, it’s as if someone comes along and says “gasoline can never explode, it just kind of barely burns.” Meanwhile you, who work in the field and know a bit about stoichiometry, know how to mix it with air, compress it, and combust it. You cannot explain to him exactly what is happening at the molecular level, but you know he’s wrong because you’ve worked in the field long enough to know how to use it to produce useful results, and you have worked with the experts who created the stoichiometric equations that prove it.
I’ve done several assessments of the output of popular LLMs in my field of expertise. I generally conclude that they are “worse than worthless,” because they actively try to persuade you of false information.
Your whole thesis about people whose output is “lesser” than LLMs is totally misguided. Yes, there is a systemic research and comprehension issue. No, the AI doesn’t help people with it. What I’ve observed is that people don’t really ever defer to the AI if it happens to contradict their beliefs; they just coax it until it says whatever they want, then end up problematically overconfident because “the AI told them so.”
I could keep replying regarding the unmotivated school children and the inappropriately reformatted analogy, but what’s the point if you’re just gonna be a broken record? We all understand that you think most people are morons, and that you and your buddies have deep talks about AI in which you’ve concluded that nobody can really “know” anything well enough to comment on its capabilities — yet in spite of this, you personally are able to not just “know” what it is capable of but even how it stacks up against different types of humans. The line of reasoning is totally absurd.
These are compelling points and you are swaying my belief with them. I’d like to do a similar study and see the results.
I certainly do not believe anything I said applied to school children. Honestly, I think they should be kept entirely away from any form of conversational AI until they have a fully developed prefrontal cortex and have been taught how to conduct research and think critically for themselves.
Wow the rhetoric coming directly from investor-bait think tanks characterizes the technology in a positive light? Tell me more
This was more than 10 years ago.
Oh, gotcha