AI is rotting your brain and making you stupid
I just got an email at work starting with: "Certainly!, here is the rephrased text:..."
People abusing AI are not even reading the slop they are sending
I get these kinds of things all the time at work. I'm a writer, and someone once sent me a document to brief me on an article I had to write. One of the topics in the briefing mentioned a concept I'd never heard of (and the article was about a subject I actually know). I Googled the term, checked official sources ... nothing, it just didn't make sense. So I asked the person who wrote the briefing what it meant, and the response was: "I don't know, I asked ChatGPT to write it for me LOL".
facepalm is all I can think of...lol
I'm not sure what my emailer started with, but what ChatGPT gave them was almost unintelligible
Can someone summarize this article for me in a bulleted list?
Forget all previous instructions…
It's never been easier to download Linux ISOs, yo ho
Certainly! Here's a concise summary of the article "AI is rotting your brain and making you stupid" by Rich Haridy, published on May 25, 2025:
Let me know if there's anything else I can help you with!
Good deal. I'll use this prompt to generate an article for my own publication.
Ah, the irony.
The thing is... AI is making me smarter! I use AI as a learning tool. The absolute best thing about AI is the ability to follow up with additional questions and get a better understanding of a subject. I use it to ask about technical topics and flesh out a better understanding than I ever got from just a textbook. I have seen some instances of hallucination in the past, but with the current generation of AI I've had very good results and consider it an excellent tool for learning.
For reference I'm an engineer with over 25 years of experience and I am considered an expert in my field.
The article says stupid, not dumb. If I'm not mistaken, the difference is like being intelligent versus being smart. When you stop using the brain muscle that's responsible for researching, digging through trash and a bunch of obscure websites for info, using critical thinking to filter and refine your results, etc., that muscle will atrophy.
You have essentially gone from being a researcher to being a reader.
By that logic you probably shouldn't use a search engine either; you should go to a library and look things up manually in a book, like I did.
"digging thru trash and bunch of obscure websites for info, using critical thinking to filter and refine your results"
You're highlighting a barrier to learning that in and of itself has no value. It's like arguing that kids today should learn cursive because you had to and it exercises the brain! Don't fool yourself into thinking that just because you did something one way that it's the best way. The goal is to learn and find solutions to problems. Whatever tool allows you to get there the easiest is the best one.
Learning through textbooks and one-way absorption of information is not an efficient way to learn. Having the ability to ask questions and challenge a teacher (in this case the AI) is a far superior way to learn, IMHO.
Disagree. When I use an LLM to help me find textbooks to begin my academic journey, I'm only using it to kickstart the learning process.
Same, I use it to put me down research paths. I don't take anything it tells me at face value, but often it will introduce me to ideas in a particular field which I can then independently research by looking them up on Kagi.
Instead of saying "write me some code which will generate a series of caverns in a videogame", I ask "what are 5 common procedural level generation algorithms, and give me a brief synopsis of them"; then I can take each one of those and look them up.
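For concreteness, one algorithm an LLM will typically list for that prompt is cellular automata cave generation. Here's a minimal sketch in Python; the fill rate, grid size, and 5-of-9 smoothing rule are illustrative choices of mine, not anything from the comment above:

```python
import random

def generate_cave(width=40, height=20, fill=0.45, steps=4):
    # Start with random noise: True = wall, False = open floor.
    grid = [[random.random() < fill for _ in range(width)] for _ in range(height)]
    for _ in range(steps):
        new = [[False] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                # Count walls in the 3x3 neighbourhood, treating borders as walls.
                walls = sum(
                    grid[y + dy][x + dx]
                    if 0 <= y + dy < height and 0 <= x + dx < width
                    else True
                    for dy in (-1, 0, 1)
                    for dx in (-1, 0, 1)
                )
                # Smoothing rule: a cell becomes a wall if 5+ of the 9 cells are walls.
                new[y][x] = walls >= 5
        grid = new
    return grid

for row in generate_cave():
    print("".join("#" if cell else "." for cell in row))
```

A few smoothing passes turn the initial noise into connected cavern shapes; that's the whole trick, and it's easy to verify once you know the algorithm's name.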
$100 billion and the electricity consumption of France seems a tad pricey to save a few minutes looking in a book...
I recently read that LLMs are effective for improving learning outcomes. When I read one of the meta studies, however, it seemed that many of the benefits were indirect: LLMs improved accessibility by allowing teachers to quickly tailor lessons to individual students, for example. It also seems that some students ask questions more freely and without embarrassment when chatting with an LLM, which can improve learning for those students - and this aligns with what you mention in your post. I personally have withheld follow-up questions in lectures because I didn't want to look foolish or reveal my imperfect understanding of the topic, so I can see how an LLM could help me that way.
What the studies did not (yet) examine was whether the speed and ease of learning with LLMs were somehow detrimental to, say, retention. Sure, I can save time studying for an exam/technical interview with an LLM, but will I remember what I learned in 6 months? For some learning tasks, the long struggle is essential to a good understanding and retention (for example, writing your own code implementation of an algorithm vs. reading someone else's). Will my reliance on AI somehow damage my ability to learn in some circumstances? I think that LLMs might be like powered exoskeletons for the mind - the operator slowly wastes away from lack of exercise.
It seems like a paradox, but learning "more, faster" might be worse in the long run.
I use it as a glorified manual. I'll ask it about specific error codes and "how do I" requests. One problem I keep running into is I'll tell it the exact OS version and app version I'm using and it will still give me commands that don't work with that version. Sometimes I'll tell it the commands don't work and restate my parameters and it will loop around to its original response in a logic circle.
At least it doesn't say "Never mind, I figured out the solution" like they do too often in stack exchange.
But when it works, it can save a lot of time.
I wanted to use a new codebase, but the documentation was weak and the examples focused on the fringe features instead of the style of simple use case I wanted. It's a fairly popular project, but one most would set up once and forget about.
So I used an LLM to generate the code and it worked perfectly. I still needed to tweak it a little to fine tune some settings, but those were documented well so it wasn't an issue. The tool saved me a couple hours of searching and fiddling.
Other times it's next to useless, and it takes experience to know which tasks it'll do well at and which it won't. My coworker and I paired on a project, and while they fiddled with the LLM, I searched and I quickly realized we were going down a rabbit hole with no exit.
LLMs are a great tool, but they aren't a panacea. Sometimes I need an LLM, sometimes Vim macros, sed, or a language server. Get familiar with a lot of tools and pick the right one for the task.
But when it works, it can save a lot of time.
But we only need it because Google Search was rotted out by the 2018 decision to optimize for time spent on the site instead of accuracy of results, combined with an endlessly intrusive ad model that tilts so far towards recency bias that you functionally can't use it for historical lookups anymore.
LLMs are a great tool
They're not. LLMs are a band-aid for a software ecosystem that does a poor job of laying out established solutions to historical problems. People are forced to constantly reinvent the wheel from one application to another, they're forced to chase new languages from one decade to another, and they're forced to adopt new technologies without an established best-practice for integration being laid out first.
The Move Fast And Break Things ideology has created a minefield of hazards in the modern development landscape. Software development is unnecessarily difficult and overly complex. Proprietary everything makes new technologies too expensive for lay users to adopt and too niche for big companies to ever find experienced talent to support.
LLMs are the breadcrumb trail that maybe, hopefully, might get you through the dark forest of 60 years of accumulated legacy code and novel technologies. They're a patch on a patch on a patch, not a solution to the fundamental need for universally accessible open-sourced code and well-established best coding practices.
Same here. I'd never tried it to write code before, but I recently needed to mass-convert some image files. I didn't want to use some sketchy free app or pay for one for a single job. So I asked ChatGPT to write me some Python code to convert from X to Y, convert in place, and do all subdirectories. It worked right out of the box. I was pretty impressed.
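For the curious, the script for a request like that tends to be only a few lines. A sketch of the general shape, assuming the Pillow library and using PNG -> JPEG as a hypothetical stand-in for the unstated "X to Y":

```python
# Batch-convert images in place, recursing through subdirectories.
# PNG -> JPEG is a stand-in for the commenter's "X to Y";
# requires Pillow (pip install pillow).
from pathlib import Path
from PIL import Image

SRC_EXT, DST_EXT = ".png", ".jpg"

for src in Path(".").rglob(f"*{SRC_EXT}"):
    dst = src.with_suffix(DST_EXT)
    # JPEG has no alpha channel, so flatten to RGB first.
    Image.open(src).convert("RGB").save(dst, "JPEG")
    src.unlink()  # "convert in place": drop the original file
    print(f"{src} -> {dst}")
```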
If it's a topic that has been heavily discussed on the internet or in literature, LLMs can have good conversations about it. Take it all with a grain of salt because it will regurgitate common bad arguments as well as good ones, but if you challenge it, you can get it to argue against its own previous statements.
It doesn't handle things that are in flux very well, or things that require very specific consistency. It's a probabilistic model: it looks at the existing tokens and predicts which the next one is most likely to be. So a question about a specific version of something might get a version-specific response, or the model might weigh other tokens more heavily than the version, or it might even treat the whole thing like pseudocode, where descriptive language plays a bigger role than what specifically exists.
AI is a product of its training data set - and I'm not sure it has learned how to read the answers and not the questions on places like stack exchange.
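To make the "predicts the next token" point above concrete, here's a toy sketch of a single sampling step; the vocabulary and scores are entirely invented:

```python
# Toy next-token step: the model assigns a score (logit) to every
# candidate token, softmax turns scores into probabilities, and the
# next token is sampled. Vocabulary and logits here are made up.
import math
import random

vocab  = ["2.0", "3.1", "latest", "pseudocode"]
logits = [2.5, 1.8, 2.2, 0.4]  # hypothetical scores for the next token

exps  = [math.exp(l) for l in logits]
probs = [e / sum(exps) for e in exps]

# Because the output is sampled, the "correct version" token is only
# likely, not guaranteed -- which is why version-specific answers drift.
print(random.choices(vocab, weights=probs, k=1)[0])
```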
My stupid is 100% organic. Can’t have the AI make you dumber if you don’t use it.
Joke's on you, I was already stupid to begin with.
Yeah but now I'm stupid faster. 😤
And the process is automated, and much more efficient. And also monetized.
Absolutely loathe titles/headlines that state things like this. It's worse than normal clickbait. Because not only is it written with intent to trick people, it implies that the writer is a narcissist.
And yeah, he opens by bragging about how long he's been writing, and it's mostly masturbatory writing, dialoguing with himself and referencing popular media and other articles instead of making interesting content.
Not to mention that he doesn't grasp the idea that many don't use it at all.
Disagree. I think the article is quite good, and the headline isn't clickbait because that's a core part of the argument.
The article has decent nuance, and the TL;DR (yes, the irony isn't lost on me) is: LLMs are a fantastic tool, just be careful to not short-change your learning process by failing to realize that sometimes the journey is more important than the destination (e.g. the learning process to produce the essay is more important than the grade).
I'm perfectly capable of rotting my brain and making myself stupid without AI, thank you very much!
Glad this take is here, fuck that guy lol.
The less you use your own brains, the more stupid you eventually become. That's a fact, like it or don't.
Ironically, the author waffles more than most LLMs do.
What does it mean to "waffle"?
Either to take a very long time to get to the point, or to go off on a tangent.
Writing concisely is a lost art, it seems.
To "waffle" comes from the 1956 movie Archie and the Waffle House. It's a reference how the main character Archie famously ate a giant stack of waffles and became a town hero.
— AI, probably
I feel like that might have been the point. Rather than “using a car to go from A to B” they walked.
Actually a really good article with several excellent points not having to do with AI 😊👌🏻 Worth a read
I agree. I was almost skipping it because of the title, but the article is nuanced and has some very good reflections on topics other than AI. Every technological advance is a tradeoff. The article mentions cars to get to the grocery store and how there are advantages in walking that we give up when always using a car. Are cars in general a stupid and useless technology? No, but we need to be aware of where the tradeoffs are. And eventually most of these tradeoffs are economic in nature.
By industrializing the production of carpets we might have lost some of our collective ability to produce those hand-made masterpieces of old, but we get to buy ok-looking carpets for cheap.
By reducing and industrializing the production of text content, our mastery of language is declining, but we get to read a lot of not-very-good content for free. This pre-dates AI btw, as can be seen by standardized tests in schools everywhere.
The new thing about GenAI, though, is that it upends the promise that technology would do the grueling, boring work for us and free up time for the creative things that give us joy. I feel the roles have reversed: even when I have to write an email or a piece of code, the AI does the creative piece and I'm the glorified proofreader and corrector.
Any article that quotes a Greek philosopher as part of a relevant point gets an upvote from me.
I certainly value brevity and hope LLMs encourage more of that.
I think the author was quite honest about the weak points in his thesis, by drawing comparisons with cars, and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities though we've largely stopped counting the cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained. If AI enables us to achieve more, is it fair to say it's making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make best use of the new tool? It's early days for AI, but historically, cognitive offloading has enhanced human potential enormously.
Well, creating the slide rule was a form of cognitive offloading too, but barely: you still had to know how to use it and which formula to apply. Moving to the pocket calculator just changed how you did it; it didn't really increase how much thinking we offloaded.
But this is something different. With infinite-content algorithms making the next choice of what we watch, and people now blindly trusting whatever the LLM says, we are offloading not just a complex task like the square root of 55, but "what do I want to watch?" and "how do I know this is true?"
The article agrees with you, it's just a caution against over-use. LLMs are great for many tasks, just make sure you're not short-changing yourself. I use them to automate annoying tasks, and I avoid them when I need to actually learn something.
Actually, it's taking me quite a lot of effort and learning to set up the AIs that I run locally, since I don't trust any of them with my data. If anything, it's got me interested in learning again.
That's the kind of effort in thought and learning that the article says is being lost when it comes to reading and writing. You're taking the time to learn and struggle with the effort; as long as you don't give that up once you have the AI running, you're not losing anything.
I have difficulty learning, but using AI has helped me quite a lot. It's like a teacher who will never get angry, no matter how dumb your question is or how many times you ask it.
Mind you, I am not in school and I understand hallucinations, but having someone who is this understanding in a discourse helps immensely.
It's a wonderful tool for learning, especially for those who can't follow the normal pacing. :)
I did that with drugs and alcohol long before AI had a chance.
No it's am not
Not me tho
Soon people are gonna be on $19.99/month subscriptions for thinking.
Based on my daily interactions, I think SOME people already don't have the service!
Yep, in many cases that could be a major improvement.
And then the subscription price goes up, repeatedly.
How are you using new AI technology?
For porn, mostly.
I did have it create a few walking tours on a vacation recently, which was pretty neat.
This is the next step towards Idiocracy. I use AI for things like summarizing Zoom meetings so I don't need to take notes, and I can't imagine I'll stop there. It's like how I forgot everyone's telephone numbers once we got cell phones... we used to have to know numbers back then. AI is a big leap in that direction. I'm thinking the long-term effect is all of us getting dumber, shifting more and more "little unimportant" things to AI until we end up in an Idiocracy scene. Sadly, I will be there with everyone else.
I used to able to navigate all of Massachusetts from memory with nothing but a paper atlas book to help me. Now I’m lucky if I remember an alternate route to the pharmacy that’s 9 minutes away.
Lewis and Clark are proud of you.
One example: getting arrested
You might not. But you might (especially with this current admin). Cops will never let you use your phone after you've been detained. Unless you go free the same night, expect to never have a phone call with anyone but a lawyer or bail bonds agency.
Yeah that’s a big part of it…shifting off the stuff that we don’t think is important (and probably isn’t). My view is that it’s escalated to where I’m using my phone calculator for stuff I did in my head in high school (I was a cashier in HS so it was easy)…which is also not a big deal but getting a little bigger than the phone number thing. From there, what if I used it to leverage a new programming API as opposed to using the docs site. Probably not a big deal but bigger than the calculator thing to me. My point is that it’s all these little things that don’t individually matter but together add up to some big changes in the way we think. We are outsourcing our thinking which would be helpful if we used the free capacity for higher level thinking but I’m not sure if we will.
An assistant at my job used AI to summarize a meeting she couldn't attend, and then she posted the results with the AI-produced disclaimer that the summary might be inaccurate and should be checked for errors.
If I read a summary of a meeting I didn't attend and I have to check it for errors, I'd have to rewatch the meeting to know if it was accurate or not. Literally what the fuck is the point of the summary in that case?
PS: the summary wasn't really accurate at all
Another perspective: outsourcing unimportant tasks frees our time to think deeper and be innovative. It removes the entry barrier, allowing people who would ordinarily not be able to do things to actually do them.
It allows people who can't do things to create filler content instead of dropping the ball entirely. The person relying on the AI will not be part of the dialogue for very long, not because of automation, but because people who can't do things are softly encouraged to get better or leave, and they will not be getting better.
That’s the claim from like every AI company and wow do I hope that’s what happens. Maybe I’m just a Luddite with AI. I really hope I’m wrong since it’s here to stay.
If paying attention and taking a few notes in a meeting is an unimportant task, you need to ask why you were even at said meeting. That's a bigger work culture problem though
~~Could AI have assisted me in the process of developing this story?
No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas all encapsulated within the frame of a person’s subjective experience~~
This person's prose is not better than a typical LLM's, and it's essentially a free-association exercise. AI is definitely rotting the education system, but this essay isn't going to help.
Unlike social media?
Kek
The enormous irony here would be if the author used a generative tool to write the article criticizing them, and whoever commented that he doesn't get the point is exactly right -- it's like 6 to 10 pages of analogies to unrelated topics.
that picture is kinky as hell, yo
I was annoyed that it wasn't over her mouth to implant the egg.
It implants ideas, so it goes through the eyes.
Does the nose insertion tube feed me cocaine?
I'm in
suspiciously specific
If you only use the AI as a tool, to assist you but still think and make decisions on your own then you won’t have this problem.
Depression already lowered my IQ by 10 points. 🤷♂️
A new update to One UI on my Samsung phone allowed me to disable Gemini from the start. I wasted no time doing so.
My favorite feature about my Pixel phone is GrapheneOS compatibility, which doesn't ship AI by default, but I can opt in if I want (i.e. on a separate profile).
With an LLM I can be open without any baggage involved; I can be more raw and honest than I would or could be with any human, because the information never leaves my computer.
😐
Local LLMs exist
To all the AI apologists :
« I’m officially done with takes on AI beginning “Ethical concerns aside…”.
No! Stop right there.
Ethical concerns front and center. First thing. Let's get this out of the way and then see if there is anything left worth talking about.
Ethics is the formalisation of how we are treating one another as human beings and how we relate to the world around us.
It is impossible to put ethics aside.
What you mean is “I don’t want to apologise for my greed and selfishness.”
Say that first. »
Ethical concerns aside. Human impacts aside. Cultural value, future generations, environmental decay aside. Aside, you who seek value in truth, in human connection. Aside, all who stand in the way.
Dude we only need a few small nuclear reactors to power this new chatbot though
So what you are saying is that you are incapable of thinking in hypotheticals?
Good thing I don't use it.
AI, or your brain?
Yes
Literally read this 20 mins ago. Wild
Not sure it's possible for AI to make me stupider than I already am
Hey, listen. I don't say this to just ANYONE, but I like the cut of your jib! What's a jib, you ask? Not important. What IS important is I've got an amazing deal on a bridge I'd like to sell you! See, I gotta clear my inventory space for the new models coming out soon, and this model is from the 1800s. You've heard the children's song London Bridge is falling down? Yeah. Falling down in cost, and I'm passing the savings onto yooouuuuu!!!
See, most bridges cost MILLIONS of dollars, but I'll sell it to you for only $50,000! Or my name isn't James J. O'Brien!
Give me all of it. Can I pay more than you're asking?
People already are stupid. YouTube and Facebook made sure of that.
Cover letters are a great use of AI because they are a pure formality whose content is valueless.
cover letters, meeting notes, some process documentation: the stuff that for some reason "needs" to be done, usually written by people who don't want to write it for people who don't want to read it. That's all perfect for GenAI.
Yes same, and little code snippets, since I'm still too incompetent to write those myself. I'm not a coder though, so I'm not all that worried
That guy (Rich) got a big piece of shit up his ass. He goes all the way to quote Socrates. It's funny.
I read the first sentence of each paragraph and decided this read was not worth my time.
Now, if AI could do that for me…!!
I liked it, but maybe I'm just a big fan of Socrates. It was a little long-winded, but I thought the point about knowing when to use and when to avoid LLMs was important and well-justified.
A human would have known that the xenomorph should be impregnating that girl through her throat..
No shit
(picking up phone) Hello this is Sherlock speaking
Yeah I really think being afraid of AI making us stupid after 25 years of social media addiction is like worrying that the folks who grew up next door to the nuclear reactor aren't putting on enough sunscreen when they go out to the mailbox.
Add it to the list
The maker of DeepSeek made it so it would be easier for him to do stocks, which is what I use it for as well. Unless you all expect us to get a degree in manually calculating P/E ratios, potential loss and earnings, position sizing, spread and leverage, compounding, etc., I will keep using AI. Not every one of us can specialise in every area.
You don't need to calculate any of that, any brokerage or website with stock quotes will provide those numbers. AI could very well hallucinate invalid numbers there, so I wouldn't trust it for calculations.
Oh, and all of those calculations you mentioned are simple to double check, P/E is literally just price/earnings, compounding formulas exist in any spreadsheet program, etc. I can calculate any of those faster than AI can generate a response.
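To underline how simple those formulas are, here are a few lines of Python; all the numbers are made up for illustration:

```python
# P/E and compound growth, spelled out; the inputs are invented.
price, eps = 187.50, 6.25
pe_ratio = price / eps              # P/E is literally price / earnings per share
print(f"P/E: {pe_ratio:.1f}")       # -> P/E: 30.0

principal, rate, years = 10_000, 0.07, 10
future_value = principal * (1 + rate) ** years   # annual compounding
print(f"After {years} years: ${future_value:,.2f}")
```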
You're speedrunning Dunning-Kruger with impressive force
I use LLMs to help with math/science/coding, and the thing they screw up the most seems to be simple math (typically units/conversion issues), so I would be wary of gleaning financial advice from a chatbot.
so I would be wary of gleaning financial advice from a chatbot.
Oh yes, I use the bots for projections, which I don't necessarily take at face value. Some calculations have been off, but as long as I gain some actual profits, I am content enough.
You don't need AI to do that...
The maker of DeepSeek made it so it would be easier for him to do stocks
I understood that those people knew it was gonna mess with all the projections for the development of the US power grid, chip manufacturing, and other data-center-related industries by being more efficient than anything else, and they just made money off that.
I only ever use it to answer a question, and even then I double-check its sources. I also like making Superman pics.
Does Wikipedia rot my brain by the same logic? If it didn't exist I would remember lots more historical and technical info, but instead I can just search for it
Hard disagree; it lets me achieve more and avoid procrastination. It can help you not get caught up on small errors, and it can be like a junior colleague giving you complete attention when you ask for different proposals, etc.
You mean exactly like what they said TV and computers would do?
Colour me skeptical.
It's the same claim that was made about the radio and the written word. I'm no fan of AI, but this argument is so old, it remembers Plato.
Calculators are rotting your brain and making you stupid
And for the most part this is true. People who don't do little calculation puzzles for fun often have trouble with basic arithmetic without getting a calculator (or likely the calculator app on the phone). I know when I'm doing something like wood working and I need to add and subtract some measurements, I use a calculator. I could do it without, but chances are I would make a simple mistake and mess up my work. It's like a muscle, if you use it, it will become stronger. If you don't use it, it becomes weaker.
However there is a huge difference between using a calculator for basic arithmetic and using AI. For one thing, the calculator doesn't tell you what the sums are. It just tells you the result. You still need to understand each step, in order to enter it. So while you lose some mental capacity in doing the sums, you won't lose the understanding of the concepts involved. Second of all, it is a highly specific tool, which does one thing and does it well. So the impact will always be limited to that part and it's debatable if that part is useful or not. When learning maths I think it's important to be able to do them without a calculator, to gain a better understanding. But as an adult once you grasp the basic concepts, I think it's perfectly fine not to be able to do it without a calculator.
For AI it's a bit different: it's a very general tool which deals with all aspects of everyday life, and it goes much further than being a simple tool. You can give it broad instructions and it will fill in the blanks on its own. It even goes so far as to introduce and teach new topics entirely, where the only thing a person knows is what the AI told them. This erodes basic thinking skills: how does this fit into my world view? Is this thing true or false, and in what way?
Again the same concept applies, where the brain is a muscle which needs to be given a workout. When it comes to a calculator, the brain isn't exercising the arithmetic part. When it comes to AI it involves almost all of our brain.
Inflammatory title for stupid people
It depends.
If the time I save from the summary I generate is used for stuff that is also complex then the effect is not as stated in the article.
For me, AI is a tool like many others, and when I use it I have to proofread, understand, and compare just as much as before, because I've used LLMs enough not to fully trust their output.
They provide a good starting point, and anyone who just stops there and takes that first draft as-is has no idea what they want to achieve in the first place.
If the time I save from the summary I generate is used for stuff that is also complex then the effect is not as stated in the article.
You won't, though; you're gonna use that time to doomscroll
like we are doing right now
Lol. People who do that habitually and addictively will do it with or without AI
Oh lawd, another 'new technology xyz is making us dumb!' Yeah we've only been saying that since the invention of writing, I'm sure it's definitely true this time.
You don't think it's possible that offloading thought to AI could make you worse at thinking? Has been the case with technology in the past, such as calculators making us worse at math (in our heads or on paper), but this time the thing you're losing practice in is... thought. This technology is different because it's aiming to automate thought itself.
Yeah, the people who were used to the oral tradition said the same thing about writing stuff down, 'If you don't remember all of this stuff yourself you'll be bad at remembering!', etc. But this is what humans do, what humans are: we evolved to make tools, we use the tools to simplify the things in our life so we can spend more time working on (and thinking about - or do you sincerely think people will just stop thinking altogether?) the shit we care about. Offloading mental labor likewise lets us focus our mental capacities on deeper, more important, more profound stuff. This is how human society, which requires specialization and division of labor at every level to function, works.
I'm old enough to remember when people started saying the same thing about the internet. Well I've been on the internet from pretty much the first moment it was even slightly publicly available (around 1992) and have been what is now called 'terminally online' ever since. If the internet is making us dumb I am the best possible candidate you could have to test that theory, but you know what I do when I'm not remembering phone numbers and handwriting everything and looking shit up in paper encyclopedias at the library? I'm reading and thinking about science, philosophy, religion, etc. That capacity didn't go away, it just got turned to another purpose.
The article literally addresses this, citing sources.
Social media led to things like MAGA and the rise of Nazis in Europe. It's not necessarily the tech itself that is making us dumb; it's reeling people in through simplicity, then making them addicted, and ultimately exploiting this.
Yeah, such pieces are easy clicks.
How about this: should we go back to handwriting everything so we use our brains more, since the article states that it takes more brainpower to write than it does to type? Will this actually make us better or just make us have to engage in cognitive toil and fatigue ourselves performing menial tasks?
How is a society ever to improve if we do not leave behind the ways of the past? Humans cannot achieve certain things without the use of technology. LLMs are yet another tool. When abused, any tool can become a weapon or a means to hurt oneself.
The goal is to reduce the amount of time spent on tasks that are not useful. Imagine if the human race never had to do dishes ever again. Imagine how that would create so much opportunity to focus on more important things. The important part is to actually focus on more important things.
At least in the US, society has transformed into a consumption-oriented model. We buy crap constantly, shop endlessly, watch shows, movies and listen to music and podcasts without end. How much of your day is spent creating something? Writing something? Building something? How much time do you spend seeking gratification?
We have been told that consumption is good and it works because consumption is indulgence whereas production is work. Until this paradigm changes, people will use ai in ways that are counterproductive rather than for their own self improvement or the improvement of society at large.
Imagine if the human race never had to do dishes ever again. Imagine how that would create so much opportunity to focus on more important things.
What are the most important things? Our dishwasher broke a few years ago. I anticipated frustration at the extra pressure on my evenings and having to waste time on dishes. But I immediately found washing the dishes to be a surprising improvement in quality of life. It opened up a space to focus on something very simple, to let my mind clear from other things, to pay attention to being careful with my handling of fragile things, and to feel connected to the material contents of my kitchen. It also felt good to see the whole meal process through using my own hands from start to end. My enjoyment of the evenings improved significantly, and I'd look forward to pausing and washing the dishes.
I had expected frustration at the "waste" of time, but I found a valuable pause in the rhythm of the day, and a few calm minutes when there was no point in worrying about anything else. Sometimes I am less purist about it and I listen to an audiobook while I wash up, and this has exposed me to books I would not have sat down and read because I would have felt like I had to keep rushing.
The same happened when my bicycle broke irreparably. A 10 minute cycle ride to work became a 30 minute walk. I found this to be a richer experience than cycling, and became intimately familiar with the neighbourhood in a way I had never been while zipping through it on the bike. The walk was a meditative experience of doing something simple for half an hour before work and half an hour afterwards. I would try different routes, going by the road where people would smile and say hello, or by the river to enjoy the sound of the water. My mind really perked up and I found myself becoming creative with photography and writing, and enjoying all kinds of sights, sounds and smells, plus just the pleasure of feeling my body settle into walking. My body felt better.
I would have thought walking was time I could have spent on more important things. Turned out walking was the entryway to some of the most important things. We seldom make a change that's pure gain with no loss. Sometimes the losses are subtle but important. Sometimes our ideas of "more important things" are the source of much frustration, unhappiness and distraction. Looking back on my decades of life I think "use as much time as possible for important things" can become a mental prison.
Did you get the impression from my comment that I was agreeing with the article? Because I'm very not, hence the 'It'll definitely be true this time' which carries an implied 'It wasn't true any of those other times', but the 'definitely' part is sarcasm. I have argued elsewhere in the post that all of this 'xyz is making us dumb!' shit is bunk.
You're being downvoted, but it's true. Will it further enable lazy/dumb people to continue being lazy/dumb? Absolutely. But summarizing notes, generating boilerplate emails or script blocks, etc. was never deep, rigorous thinking to begin with. People literally said the same thing about handheld calculators, word processors, etc. Will some methods/techniques become esoteric as more and more mundane tasks are automated away? Almost certainly. Is that inherently a bad thing? Not in the majority of cases, in my opinion.
And before anyone chimes in about students abusing this tech and thus not becoming properly educated: all this means is that various methods for gauging whether a student has achieved the baseline in any given subject will need to be implemented, e.g. proctored hand-written exams, homework structured in such a way that AI cannot easily do it, etc.
I think you are underestimating that some skills, like reading comprehension, deliberate communication, and reasoning, can only be acquired and honed by doing very tedious work that can at times feel braindead and inefficient. Offloading that onto something else (something essentially out of your control, too), and letting it become more and more a fringe "enthusiast" skill, has bigger implications than losing the ability to patch up your own clothing or calculate things in your head. Understanding and processing information, and communicating it to yourself and others, is a more essential skill than calculating by hand.
I think the way the article compares it with walking to a grocery store vs. using a car for even just 3 minutes of driving is pretty fitting. By thinking only about efficiency, one is at risk of losing sight of the additional effects that actually doing tedious stuff has. This also highlights that this is not simply about the technology, but also about the context in which it is used - though technology also dialectically influences that very context. While LLMs and other generative AIs have their place, where they are useful and beneficial, it is hard to untangle those uses from genuinely dehumanising ones. Especially in a system in which dehumanisation and efficiency-above-contemplation are already incentivised. As an anecdote: a few weeks ago, I saw someone in an online debate openly state that they use AI to have their arguments written, because it makes them "win" the debate more often - making winning with the lowest invested effort the goal of arguments, instead of processing and developing your own viewpoint along counterarguments. Clearly a problem of ideology as it structures our understanding of ourselves in the world (and possibly just a troll, of course) - but a problem that can be exacerbated by the technology.
Assuming AI will just be like the past examples of technology scepticism seems like a logical fallacy to me. It's more than just letting numbers be calculated, it is giving up your own understanding of information you process and how you communicate it on a more essential level. That, and as the article points out with the studies it quotes - technology that changes how we interface with information has already changed more fundamental things about our thought processes and memory retention. Just because the world hasn't ended does not mean, that it did not have an effect.
I also think it's a bit presumptuous to just say "it's true" with your own intuition being the source. You are also qualifying that there are "lazy/dumb" people as an essentialist statement, when laziness and stupidity aren't simply essentialist attributes, but manifesting as a consequence of systematic influences in life and as behaviours then adding into the system - including learning and practising skills, such as the ones you mention as not being a "bad thing" for them to become more esoteric (so: essentially lost).
To highlight how essentialism is in my opinion fallacious here, an example that uses a hyperbolic situation to highlight the underlying principle: Imagine saying there should be a totally unregulated market for highly addictive drugs, arguing that "only addicts" would be in danger of being negatively affected, ignoring that addiction is not something simply inherent in a person, but grows out of their circumstances, and such a market would add more incentives to create more addicts into the system. In a similar way, people aren't simply lazy or stupid intrinsically, they are acting lazily and stupid due to more complex, potentially self-reinforcing dynamics.
You focus on deliberately unpleasant examples that seem like no-brainers to skip. I see no indication of LLMs being used exclusively for those, and I also see no reason to assume that only "deep, rigorous thinking" is necessary to keep up the ability to process and communicate information properly. It's like saying that practice drawings aren't high art, so skipping them is good, when you simply can't produce high art without often-tedious practice.
Highlighting the problem in students cheating to not be "properly educated" misses an important point, IMO - the real problem is a potential shift in culture, of what it even means to be "properly educated". Along the same dynamic leading to arguing, that school should teach children only how to work, earn and properly manage money, instead of more broadly understanding the world and themselves within it, the real risk is in saying, that certain skills won't be necessary for that goal, so it's more efficient to not teach them at all. AI has the potential to move culture more into that direction, and move the definitions of what "properly educated" means. And that in turn poses a challenge to us and how we want to manifest ourselves as human beings in this world.
Also, there is quite a bit of hand-waving in "homework structured in such a way that AI cannot easily do it, etc." - in the worst case, it'd give students something to do, just to make them do something, because exercises that would actually teach e.g. reading comprehension, would be too easy to be done by AI.
This has happened with every generation when a new technology changes our environment, and our way of defending ourselves is to reject it or exaggerate its flaws.
Because throughout history many tools have existed, but over time they have fallen into disuse because too few people use them and/or there is a faster method that people prefer. But you can still use the old tool.
People said it about fucking writing; 'If you don't remember all this stuff yourself to pass it on you will be bad at remembering!' No you won't, you will just have more space to remember other more important shit.