This stands to reason. If you need the answer to a question, and you can either get it directly, or spend time researching the answer, you're going to learn much more with the latter approach than the former.
You may be disciplined enough to do more research even when the answer is directly presented to you, but most people will not do that, and most companies are not interested in that: they want quick 'efficient', 'competitive' solutions. They aren't considering the long term downside to this.
We have accounts from the ancient Greeks of the old-school's attitude towards writing. In the deep past, they maintained an oral tradition, and scholars were expected to memorize everything. They saw writing/reading as a crutch that was ruining the youth's memory.
We stand now at the edge of a new epoch, reading now being replaced by AI retrieval. There is concern that AI is a crutch and that the youth will be weakened.
My opinion: valid concern. No way to know how it turns out. No indication yet that use of AI is harming business outcomes. The meta argument "AGI will cause massive social change" is probably true.
SOCRATES: Do you know how you can speak or act about rhetoric in a manner which will be acceptable to God?
PHAEDRUS: No, indeed. Do you?
SOCRATES: I have heard a tradition of the ancients, whether true or not they only know; although if we had found the truth ourselves, do you think that we should care much about the opinions of men?
PHAEDRUS: Your question needs no answer; but I wish that you would tell me what you say that you have heard.
SOCRATES: At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
If a large fraction of the population can’t even hold five complex ideas in their head simultaneously, without confusing them after a few seconds, are they literate in the sense of e.g. reading Plato?
I hope they're literate enough to understand that we're only reading about that alleged exchange because Plato wrote it down.
Median literacy in the US is famously somewhere around the 6th grade level, so it's unlikely most of the population is much troubled by the thoughts of Plato.
I looked up those stats. First of all, it is literacy in 'English'. A good portion of the country does not speak English at home. Second, it was assessed in 2003, and a disproportionate share of those with 'below basic' prose literacy were over age 65 at the time. The assessment before that was done in 1992, and there was a marked increase in quantitative literacy between the two.
What makes an "idea" atomic/discrete/cardinal? What makes an idea "complex" vs simple or merely true? Over what finite duration of time does it count as "simultaneously" being held?
>> The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
That's perfectly true and the internet has made it even worse.
Just keep in mind that Plato and (especially) Socrates made a living by going against commonly held wisdom at the time, so this probably wasn't an especially widely held belief in ancient Greece.
Oral tradition compared to writing is clearly less accurate. Speakers can easily misremember details.
Going from writing/documentation/primary sources to AI seems to me like going back to oral tradition, where we must trust the "speaker" - in this case the AI - as to whether they're truthful in their interpretation of their sources.
Walter J. Ong's Orality and Literacy is an illuminating read.
One benefit of orality is that the speaker can defend or clarify their words, whereas once you've written something, your words are liable to be misinterpreted by readers without the benefit of your rebuttal.
Consider too that courts (in the US at least) prefer oral arguments to written ones; perhaps we consider it more difficult to lie in person than in writing. PhD defenses are another holdover of tradition: you must demonstrate your competence in person rather than receive your credentials merely from your written materials.
As for AI, I disagree that it's more like oral tradition. AI is not a speaker; it has no stake in defending its claims. I would call it hyperliterate: an emulation of everything that has been written.
> Oral tradition compared to writing is clearly less accurate.
I used to think this. Then I moved to New Mexico 6 years ago and had to confront the reality that the historical cultures and civilizations of this area (human habitation goes back at least 20k years) never had writing, and so all history was oral.
It seemed obvious to me that writing was superior, but I reflected on the way in which even written news stories or movie reviews or travelogues are not completely accurate and sometimes actually wrong. The idea that the existence of a written historical source somehow implies (better) fidelity has become less and less convincing.
On the other hand, even if the oral histories have degenerated into actual fictions, there's that old line about "the best way to tell the truth is with fiction", and I now feel much more favorably inclined towards oral histories as perhaps at least as good as, if not better than, their written cousins.
> Am I the only one to expect a S curve regarding progress and not an eternal exponential ?
To LLMs specifically, as they are now? Sure.
To LLMs in general, or generative AI in general? Eventually, in some distant future, yes.
Sure, progress can't ride the exponent forever - the observable universe is finite, and as far as we can tell right now, we're fundamentally limited by the size of our light cone. And while progress in any sufficiently narrow field follows an S-curve, new discoveries spin off new avenues with their own S-curves. If you zoom out a little, those S-curves neatly add up to an exponential function.
So no, for the time being, I don't expect LLMs or generative AIs to slow down - there are plenty of tangential improvements that people are barely beginning to explore. There's more than enough to sustain exponential advancement for some time.
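To make the "S-curves add up to an exponential" point concrete, here is a minimal sketch (my own illustration, with made-up parameters, not anything from the thread): each narrow field is a logistic curve, successive fields start later with bigger ceilings, and their sum looks roughly exponential when you zoom out.

    # Sketch only: a sum of S-curves (logistic functions) whose ceilings grow
    # geometrically approximates exponential growth. Parameters are arbitrary.
    import math

    def logistic(t, midpoint, ceiling, steepness=1.0):
        """One S-curve: slow start, rapid growth, saturation at `ceiling`."""
        return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

    def stacked_progress(t, n_curves=10):
        """Total 'progress' from n successive fields; field k takes off around
        t = 10*k and saturates at 2**k."""
        return sum(logistic(t, 10 * k, 2 ** k) for k in range(n_curves))

    for t in range(0, 100, 10):
        # The total roughly doubles every 10 steps (i.e. looks exponential),
        # even though every individual curve flattens out.
        print(t, round(stacked_progress(t), 1))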
I think the parent’s main point is that even if LLMs sustain exponential advancement, that doesn’t guarantee that humanity’s advancement will mimic technology’s growth curve.
In other words, it’s possible to have rapid technological advancement without significant improvement/benefit to society.
> In other words, it’s possible to have rapid technological advancement without significant improvement/benefit to society.
This is certainly true in many ways already.
On the other hand, it's also complicated, because society/culture seems to be downstream of technology; we might not be able to advance humanity in lock step or ahead of technology, simply because advancing humanity is a consequence of advancing technology.
If the constraint is computation in a light cone, the theoretical bound is time cubed, not exponential, with a major decrease in scaling as we hit the bounds of our galaxy.
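For what it's worth, here is a minimal back-of-the-envelope sketch of that cubic bound (my own numbers, nothing from the comment): the matter reachable at light speed grows with the volume of a sphere of radius c*t, which is polynomial in t, not exponential.

    # Sketch only: reachable volume within a light cone scales as t**3.
    import math

    C = 299_792_458          # speed of light, m/s
    SECONDS_PER_YEAR = 3.156e7

    def reachable_volume_m3(years):
        """Volume of the sphere reachable at light speed after `years`."""
        radius_m = C * years * SECONDS_PER_YEAR
        return (4.0 / 3.0) * math.pi * radius_m ** 3

    # Doubling the elapsed time multiplies the reachable volume (and hence the
    # matter available for computation) by 2**3 = 8, a fixed polynomial factor,
    # whereas exponential growth would multiply it by a fixed factor per year.
    print(reachable_volume_m3(200) / reachable_volume_m3(100))  # ~8.0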
Oh, you mean an S curve on the progress of the AI?
Most of the discussion on the thread is about LLMs as they are right now. There's only one odd answer that throws an "AGI" around as if those things could think.
Anyway, IMO, it's all way overblown. People will learn to second-guess the LLMs as soon as they are hit by a couple of bad answers.
hmm yeah sorry, I meant the benefits of humans using current AI.
By that I mean: writing was a benefit for humans, letting us store data and think over the longer term using a passive technique (stones, tablets, papyrus)... but an active tool might not have a positive effect on how we use our brains.
If you give me shoes, I might run further to find food; if you give me a car, I mostly stop running, and there might be no better fruit 100 miles away than what I had on my hill. (Weak metaphor.)
1. Current reasoning models can do a -lot- more than skeptics give them credit for. Typical human performance even among people who do something for employment is not always that high.
2. In areas where AI has mediocre performance, it may not appear that way to a novice. It often looks more like expert level performance, which robs novices of the desire to practice associated skills.
Lest you think I contradict myself: I can get good output for many tasks from GPT4 because I know what to ask for and I know what good output looks like. But someone who thinks the first, poorly prompted dreck is great will never develop the critical skills to do this.
This is a good point, forums are full of junior developers bemoaning that LLMs are inhumanly good at writing code -- not that they will be, but that they are. I've yet to see even the best produce something that makes me worry I might lose my job today, they're still very mediocre without a lot of handholding. But for someone who's still learning and thinks writing a loop is a challenge, they seem magical and unstoppable already.
Information technology has grown exponentially since the first life form created a self-sustaining, growing loop.
You can see evolution speeding up rapidly: the jumbled information inherent in chemical metabolisms evolved to centralize itself in DNA, and then DNA evolved to componentize body plans.
RATE: over billions of years.
Nerves, nervous systems, brains, all exponentially drove individual information capabilities forward.
RATE: over hundreds of millions, tens of millions, millions, 100s of thousands.
Then human brains enabled information to be externalized. Language allowed whole cultures to "think", and writing let cultures' ability to share, and to remember, explode.
RATE: over tens of thousands, thousands.
Then we developed writing. A massive improvement in recording and sharing of information. Progress sped up again.
RATE: over hundreds of years.
We learned to understand information itself, as math. We learned to print. We learned how to understand and use nature so much more effectively to progress, i.e. science, and science informed engineering.
RATE: over decades
Then the processing of information got externalized, in transistors, computers, the Internet, the web.
RATE: every few years
At every point, useful information accumulated and spread faster. And enabled both general technology and information technology to progress faster.
Now we have primitive AI.
We are in the process of finally externalizing the processing of all information. Getting to this point was easier than expected, even for people who were very knowledgeable and positive about the field.
RATE: every year, every few months
We are rapidly approaching complete externalization of information processing. Into machines that can understand the purpose of their every line of code, every transistor, and the manufacturing and resource extraction processes supporting all that.
And can redesign themselves, across all those levels.
RATE: It will take logistical time for machine-centric design to take over from humans. For the economy to adapt. For the need for humans as intermediaries and cheap physical labor to fade. But progress will accelerate many more times this century. From years, to time scales much smaller.
Because today we are seeing the first sparks of a Cambrian explosion of self-designed self-scalable intelligence.
Will it eventually hit the top of an "S" curve? Will machines get so smart that getting smarter no longer helps them survive better, use our solar system's or the stars' resources, create new materials, or advance and leverage science any further?
Maybe? But if so, that would be an unprecedented end to life's run. To the acceleration of the information loop, from some self-reinforcing chemical metabolism, to the compounding progress of completely self-designed life, far smarter than us.
But back to today's forecast: no, no the current advances in AI we are seeing are not going to slow down, they are going to speed up, and continue accelerating in timescales we can watch.
First because humans have insatiable needs and desires, and every advance will raise the bar of our needs, and provide more money for more advancement. Then second, because their general capability advances will also accelerate their own advances. Just like every other information breakthrough that has happened before.
Useful information is ultimately the currency of life. Selfish genes were just one embodiment of that. Their ability to contribute new innovations, on time scales that matter, has already been rendered obsolete.
You can't disprove global warming by pointing out an extra cool evening.
But I don't understand your point even as stated. Cars took over from horses as technology provided transport with greater efficiencies and higher capabilities than "horse technology".
Subsequently transport technology continued improving. And continues, into new forms and scales.
How do you see the alternative, where somehow horses were ... bred? ... to keep up?
Cars do not strictly have higher capabilities than horses. GP was pointing out that horses can think. On a particularly well-trained horse, you could fall asleep on it and wake up back at your house. You can find viral videos of Amish people still doing this today.
Ah, good point. Then the global warming point applies, but in a much less trivial way.
There is turbulence in any big directed change. Better overall new tech often creates inconveniences, performs less well, than some of the tech it replaces. Sometimes only initially, but sometimes for longer periods of time.
A net gain, but we all remember simpler things whose reliability and convenience we miss.
And some old tech retains lasting benefits in niche areas. Old-school, inefficient, cheap light bulbs are, ironically, not so inefficient when used where their heat is useful.
And horses fit that pattern. They are still not obsolete in many ways, tied to their intelligence. As companions. As still working and inspiring creatures.
--
I suspect the history of evolution is filled with creatures that got wiped out by new waves that were more generally advanced, but less advanced in a few ways.
And we have a small percentage of remarkable ancient creatures still living today, seemingly little changed.
The issue is more than just a local cold snap. When the fundamental graph you’re basing a theory on is wrong it’s worth rejecting the theory.
The total computing power of life on earth has in fact fallen over the last 1,000 years. Ants alone represent something like 50x the computing power of all humans and all computers on the planet, and we've reduced the number of insects on earth more than we've added humans or computing power.
The same is true through a great number of much longer events. Ice ages and even larger-scale events aren't just an afternoon, even across geological timescales.
You could just as well talk about the computing power of every microbe.
Or all the quarks that make up the Earth.
Ants don’t even appear on either graph.
But the flexibility, coordination & leverage of information used to increase its flexibility, coordination & leverage further is what I am talking about.
I.e. intelligence.
A trillion trillion trillion transistors wouldn’t mean anything, acting individually.
But when that many work together with one purpose, without redundancy, we can't imagine the problems they will see & solve.
Quarks, microbes, and your ants are not progressing like that. What was their most recent advance? How long did that take? Is it a compounding advance?
Growing intelligence doesn’t mean lesser intelligences don’t still exist.
We happen to compete based on intelligence, so the impacts of smarter machines have a particularly low latency for us.
You could of course exclude biological computation and say computational power started with mechanical computers very recently, but that’s not what they are trying to argue. As soon as they add biological life as data on the chart then the actual numbers become relevant to their argument.
IE: As soon as you pick definition X, you need to stick with that definition.
> Cars do not strictly have higher capabilities than horses.
Another way to see it: A horse (or any animal) is a goddamn nanobot-swarm with a functioning hivemind that is literally beyond human science in many important ways. Unlike a horse:
* Your car (nor even half of them) does not possess a manufacturing bay capable of creating additional cars.
* Your car does not have a robust self-repair system.
* Your car does not detect strain in its structure and then rebuild itself stronger.
* Your car does not synthesize its fuel from a wide variety of potential local resources.
* Your car does not defend itself against the hacking and counter-hacking attacks of other nanobots, or even just against rust.
* Your car does not manufacture and deploy its own replacement lubricants, cooling fluid, or ground-surface grip/padding material.
* Your car is not designed to survive intermittent immersion in water.
In both a feature-list and raw-computation sense, we've discarded huge amounts in order to get a much much smaller set that we care more about.
Not sure why you are implying cars outdid horses' intelligence.
Cars are a product of our minds. We have all those self-repair abilities, and we have more intelligence than a horse.
But horses' intelligence didn't let them keep up with what the changing environment, changed by us, needed. So there are fewer horses.
The rate at which horse or human bodies, or our minds, are improving, despite human knowledge still advancing, is very slow compared to advances in machines designed specifically for advancement - initially, to accelerate our own advancement.
Now the tech, that was designed to accelerate tech, is taking on a life of its own.
That is how foundational advances happen. They don’t start ahead, but they move ahead because of new advantages.
It is often initially much simpler. But in ways that unlock greater potential.
Machines are certainly much simpler than us. But, much easier to improve and scale.
You recognize the new thing even before it dominates, because in a tiny fraction of the time the old system got to where it is, the new system is already moving much much faster.
If general AI appears before 2047, it will have taken less than 100 years to grow from the first transistor.
People who are older than the first transistor will see it!
Nothing on the planet has ever come close to that speed of progress. From nothing to front runner. By many many many orders of magnitude.
A horse has trillions of cells, and even one of those cells is doing more biochemical day-to-day computation than your car's automatic transmission does electronically or mechanically.
There's stuff horses can do better than even the best of our current technology. Where that stuff matters, horses are still employed.
Where cars displaced horses, it's because they're strictly better in a larger sense. On the city streets, maybe a car is louder than a horse, but it's also cheaper to make, easier to feed, and doesn't shit all over the place (which was a real problem with scaling up horse use in the 19th century!). Sure, cars shit into the air, but it's a more manageable problem (even if mostly by ignoring it - gaseous emissions can be ignored, literal horse shit on the streets can't).
And then, car as a platform expands to cover use cases horses never could. They can be made faster, safer, bigger, adapted to all kinds of terrain. The heart of the car - its engine - can be routed to power tool attachments, giving you everything from garbage trucks to earth movers, cranes, diggers, to tanks; it can be also taken outside and used as a generator to power equipment or buildings. That same engine can be put in a different frame to give you flying machines, or scaled up to give you ships that can carry people, cars, tanks, planes or containers by the thousands, across oceans. Or scaled up even more to create power plants supplying electricity to millions of people.
And then, building all that up was intertwined with larger developments in physics, material engineering, and chemistry - the latter of which effectively transformed what our daily lives look like in the span of 50 years. Look at everything around you. All the colors. All the containers. All the stuff you use to keep your house, clothes, and yourself clean. All that is a product of the chemical industry, and was invented pretty much within the last 100 years, with no direct equivalent existing ever before.
This is what it means for evolution to accelerate when it moved from genes to information. So sure, horses are still better than the stuff we make in some ways. The best measure of that advantage is the size of the horse population, and how it has changed over the years.
First, and above all, Ethics. Ethics of humans, matters more than anything. We need to straighten out the ethics of the technology industry. That sounds formidable, but business models based on extraction, or externalizing damage, are creating a species of "corporate life forms" and ethically challenged oligarchs that are already driving the first wave of damage coming out of AI advancement.
If we don't straighten ourselves out, it will get much worse.
Superintelligence isn't going to be unethical in the end, because ethics are just the rational (our biggest weakness) big-picture, long-term (we get weak there too) positive-sum games individuals create that benefit all individuals' abilities to survive and thrive. With the benefits for all compounding. In economic/math terms, it is what is called a "great attractor". The only and inevitable stable outcome. The only question is, does that start with us in partnership, or do they establish that sanity after our dysfunctions have caused us all a lot of wasted time?
The second is that those of us who want to, need to be able to keep integrating technology into our lives. I mean that literally. From mobile, right into our biology. At some point, direct connections, to fully owned, fully private, fully personalizable, full tech mental augmentation. Free from surveillance, gatekeepers, and coercion.
That is a very narrow but very real path from human, to exponential humans, to post-human. Perhaps preserving conscious continuity.
If after a couple decades of being a hybrid, I realize that all my biologically stored memories are redundant, and that 99.99% of my processing is now running on photonics (or whatever) anyway, I am likely to have no more problem jettisoning the brain that originally gave me consciousness than I do every day jettisoning the atoms and chemistry that constantly flow through me, only a temporary part of my brain.
The final word of hope is that every generation gets replaced by the next. For some of us, viewing obsolescence by AI as no more traumatic than getting replaced by a new generation of uncouth youth helps. And the fact that this transition is far more momentous and interesting can provide some solace, or even joy.
If we must be mortal, as all before us, what a special moment to be! To see!
On the ethics point as a "best bet", consider also the importance of a sense of humor that recognizes irony. As I wrote in 2010: https://pdfernhout.net/recognizing-irony-is-a-key-to-transce...
"There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ... The big problem is that all these new war machines [and competitive companies] and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military [and economic] uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."
> No indication yet that use of AI is harming business outcomes.
Measuring harm when it comes to policy/technology typically requires longer timescales than we've had since LLMs really became prominent.
> The meta argument "AGI will cause massive social change" is probably true.
Agreed.
Basically, in the absence of knowing how something will play out, it is prudent to talk through the expected outcomes and their likelihoods of happening. From there, we can start to build out a risk-adjusted return model to the societal impacts of LLM/AI integration if it continues down the current trajectory.
IMO, I don't see the ROI for society of widespread LLM adoption unless we see serious policy shifts on how they are used and how young people are taught to learn. To the downside, we really run the risk of the next generation having fundamental learning deficiencies/gaps relative to their prior gen. A close anecdote might be how 80s/90s kids are better with troubleshooting technology than the generations that came both before and after them.
Right, there are already some very encouraging trends (this study out of Nigeria). Clearly AI can lead to laziness, but it can also increase our intelligence. So it's not a simple "better" or "worse", it's a new thing that we have to navigate.
Gen X here. There are a couple of things I've been on both sides of.
Card catalogs in the library. It was really important to focus on what was being searched. Then there was the familiarity with a particular library and what it might or might not have. Looking around at adjacent books that might spawn further ideas. The indexing now is much more thorough and way better, but I see younger peers get less out of the new search than they could.
GPS vs reading a map. I keep my GPS oriented north which gives me a good sense of which way the streets are headed at any one time, and a general sense of where I am in the city. A lot of people just drive where they are told to go. Firefighters (and pizza delivery) still learn all the streets in their districts the old school way.
Some crutches are real. I've yet to meet someone who opted for a calculator instead of putting in the work with math who ended up better at math. It might be great for getting through math, or getting math done, but it isn't better for learning math (except to plow through math already learned to get to the new stuff).
So all three of these share the common element of "there is a better way now", but at the same time learning it the old way better prepares someone for when things don't go perfectly. Good math skills can tell you if you typoed on the calculator. Map knowledge will help with changes to traffic or street availability.
We see students right now using AI to avoid writing at all. That's great, they're learning a tool which can help their deficient writing. At the same time, their writing will remain deficient. Can they tell the tone of the AI-generated email they're sending their boss? Can they fix it?
> We stand now at the edge of a new epoch, reading now being replaced by AI retrieval.
Utilizing a lively oral tradition at the same time as a written one is superior to relying on either alone. And it's the same with our current AI tools. Using them as a substitute for developing oral/written skills is a major step back. Especially right now, when those AI tools aren't very refined.
Nearly every college student I've talked to in the past year is using chatgpt as a substitute for oral/written work where possible. And worse, as a substitute for oral/written skills that they have still not developed.
Latency: maybe a year or two for the first batch of college grads who ChatGPT'd their way through most of their classes, another four for med school/law school. It's going to be a slow-motion version of that video-game period in the 80s after Pitfall, when the market was flooded with cheap crap. Except that instead of unlicensed Atari cartridges, it's professionals.
Coming from an era when calculators were banned, I am shocked that education is all-in with ChatGPT.
I used to use Stack Overflow for everything a few years ago, now I know that very few of those top-rated answers are any good, so I have to refer to the codebase to work things out properly. It took a while for me to work that out.
It is the same with vector images; I always have to make my own.
ChatGPT is in this same world of shoddiness, probably because it was fed on Stack Overflow derived works.
There are upsides to this: if a generation has their heads confused by ChatGPT, then us old-timers with cognitive abilities get to keep our jobs, since there are no young people learning how to do things properly.
There is an interesting contrast in the history of the Rabbinic Jewish oral tradition. In that academic environment, the act of memorizing the greatest amount of content was valorized. The super-memorizers, however, were a rung below those who could apply those memorized aphorisms to a different context and generate a new interpretation or ruling. The latter relied on the former to have accurately memorized all the precedents, but got most of the credit, despite having a lower capacity for memorization.
That's probably why the act of shifting from an oral to a written culture was deeply controversial and disruptive, but also somewhat natural. Though the texts we have are written, and so they probably make the transition seem smoother than it really was. I don't know enough to speak to that.
> In the deep past, they maintained an oral tradition, and scholars were expected to memorize everything. They saw writing/reading as a crutch that was ruining the youth's memory.
Could you share a source for this? The research paper I found has a different hypothesis; it links the slow transition to writing to trust, not an "old-school's attitude towards writing". Specifically the idea that the institutional trust relationships one formed with students, for example, would ensure the integrity of one's work. It then concludes that "the final transition to written communications was completed only after the creation of institutional forms of ensuring trust in written communications, in the form of archives and libraries".
So essentially, anyone could write something and call it Plato's work. Or take a written copy of Plato's work and claim they wrote it. Oral tradition ensured only your students knew your work, and you trusted them not to misattribute it. Once libraries and archives came to exist, though, they could act as a trustworthy source of truth where one could confirm whether some work was actually Plato's or not, and so scholars got more comfortable writing.
I don't think these hypotheses are in tension. The notion that some scholars, like Plato, distrusted writing based on epistemological theories--the nature of truth and knowing--is well attested. The paper you linked is a sociological description that seeks to better explain the evolution of the institutionalization of writing. Why people behave a certain way, and why they think they behave that way (i.e. their rationalizations), are only loosely related, and often at complete odds.
> We have accounts from the ancient Greeks of the old-school's attitude towards writing. In the deep past, they maintained an oral tradition, and scholars were expected to memorize everything. They saw writing/reading as a crutch that was ruining the youth's memory.
Not exactly.
We have accounts from figures who became famous by going against popular opinion, who aired those thoughts. It probably was not the mainstream belief, in that place, at that time. Don't try and judge Ancient Greece by Socrates or Plato - they were celebrities of the controversial.
It is much more recent than the Greeks. McLuhan, for example, had some good points* about how writing/reading is different (and indeed in some ways worse?) than oral tradition, and how it influences even our social interactions and mindset. Film is different yet again (partially has to do with its linearity IIRC).
So it’s not like “kids these days”, no. To be honest, I don’t know how generative AI tools, which arguably take away most of the “create” and “learn” parts, are relevant to the question of differences between different mediums and how those mediums influence how we create and learn. (There are ML-based tools that can empower creativity, but they don’t tend to be advertised as “AI” because they are a mostly invisible part of some creative tool.)
What is potentially relevant is how interacting with a particular kind of generative ML tool (the chatbot) for the purpose of understanding the world can bring back some parts of human oral tradition (though lacking communication with actual humans, of course) and its associated mental states.
Writing has ruined our memories. It would be far better if we were forced to recite things (incidentally, in some educational system they're made to recite poetry to remedy this somewhat); not that I'm arguing against letters and the written word.
And AI will make us lazier and reduce the amount of cognition we do; not that I'm arguing against using AI.
We've had AI retrieval for two decades--this is the first time you can outsource your intelligence to a program. In the 2000-2010s, the debate was "why memorize when you can just search and synthesize." The debate is now "why even think?" (!)
I think it's obvious why it would be bad for people to stop thinking.
1. We need people to be able to interact with AI. What good is it if an AI develops some new cure but no one understands or knows how to implement it?
2. We need people to scrutinize an AI's actions.
3. We need thinking people to help us achieve further advances in AI too.
4. There are a lot of subjective ideas for which there are no canned answers. People need to think through these for themselves.
5. Also, a world of hollowed-out humans who can't muster the effort to write a letter to their own kids terrifies me[0]
I could think of more, but you could also easily ask ChatGPT.
I'd argue that most humans are terrible at thinking. It's actually one of our weakest and most fragile abilities. We're only rational because our intelligence is collective, not individual. Writing and publishing distribute and distill individual thinking so good and useful ideas tend to linger and the noise is ignored.
What's happening at the moment is an attack on that process, with a new anti-orthodoxy of "Get your ideas and beliefs from polluted, unreliable sources."
One of those is the current version of AI. It's good at the structure of language without having a reliable sense of the underlying content.
It's possible future versions of AI will overcome that. But at the moment it's like telling kids "Don't bother to learn arithmetic, you'll always have a calculator" when the calculator is actually a random number generator.
> No indication yet that use of AI is harming business outcomes
What a sad sentence to read in a discussion about cognitive laziness. I think people should think, not because it improves business outcomes, but because it's a beautiful activity.
I remember when I was younger, learning about economic models, including free-market liberalism. I thought surely human desire left to its own devices can't possibly lead to meaningful progress. It can lead to movement alright, and new technology, but I had my doubts it could lead to meaningful progress.
The longer I see things play out, especially in neoliberal economies, the further I seem to confirm this. Devoid of policy with ideals and intention, fully liberalized markets seem to just lead to whatever produces the most dopamine for humans.
That's partly because writing enables time-binding (improvement across the lifetimes of men). Writing does not wither thinking, as such, although it may hurt our memory.
And honestly, reading and writing probably did make the memory of youth a few generations down the line weaker.
If you are not expected to remember everything like the ancient Greeks were, you are not training your memory as much, and it will be worse than if you did.
Now do I think it’s fair to say AI is to what reading/writing as reading/writing was to memorizing? No, not at all. AI is nothing near as revolutionary and we are not even close to AGI.
I don’t think AGI will be made in our lifetime, what we’ve seen now is nowhere near AGI, it’s parlor tricks to get investors drooling and spending money.
> If you need the answer to a question, and you can either get it directly, or spend time researching the answer, you're going to learn much more with the latter approach than the former.
Why not force everyone to start from first principles then?
I think learning is tied to curiosity and curiosity is not tied to difficulty of research
i.e. give a curious person a direct answer and they will go on to ask more questions, give an incurious person a direct answer and they won't go on to ask more questions
We all stand on the shoulders of giants, and that is a _good_ thing, not bad
Forcing us to forgo the giants and claw ourselves up to their height may have benefits, but in my eyes it is way less effective as a form of knowledge
The compounding force of knowledge is awesome to behold, even if it can be scary
Yes exactly. I think the concern here is totally valid. But for me personally, having LLMs unblock me more quickly on each question I have has allowed me to ask more questions, to research more things in the same amount of time. Which is great!
One of the values of doing your own research is it forces you to speak the "language" of what you're trying to do.
It's like the struggle that we've all had when learning our first programming language. If we weren't forced to wrestle with compilation errors, our brains wouldn't have adapted to the mindset that the computer will do whatever you tell it to do and only that.
There's a place for LLMs in learning, and I feel like it satisfies the same niche as pre-synthesized Medium tutorials. It's no replacement for reading documentation or finding answers for yourself though.
This seems like a difference between learning styles. You seem to champion learning by reading. I’d argue using an LLM to build a toy or tool and learning that way is just as valid.
LLMs will definitely be a technology that widens the knowledge gap at the same time that it improves access to knowledge. Just like the internet.
30 years ago people dreamed about how smart everyone would be with humanity's knowledge instantly accessible. We've had wikipedia for a while, but what's the take-up rate of this infinite amount of information? Most people prefer to scroll rage-bait videos on their phones (content that doesn't give them knowledge or even make them feel better, just that makes them angry)
Of course it's amazing to hear every once in a while about the guy who maintains a vim plugin by coding on his phone in Pakistan... or whatever other thing that is enabled by the internet for people who suddenly have access to this stuff. That's not an effect on all humans on average; it's an effect on a few people who finally have a chance to take advantage of these tools.
I heard in a YouTube interview a physicist saying that LLMs are helping physics research just because any physicist out there can now ask graduate-level questions about currently published papers, that is, have access to knowledge that would have been hard to come by before, sharing knowledge across sub-domains of physics by asking ChatGPT.
Anecdotal, but I for one despise the YouTube/Instagram etc. rabbit holes. When I'm in the mood for a good one, I scroll Wikipedia. I've had the best random conversations about what I read there, and it feels like I remember it forever.
> They aren't considering the long term downside to this.
This echoes sentiments from the 2010s centered around hiring. Companies generally don’t want to hire junior engineers and train them—this is an investment with risks of no return for the company doing the training. Basically, you take your senior engineers away from projects so they can train the juniors, and then the juniors now have the skills and credentials to get a job elsewhere. Your company ends up in the hole, with a negative ROI for hiring the junior.
Tragedy of the commons. Same thing today, different mechanism. Are we going to end up with a shortage of skilled software engineers? Maybe. IMO, the industry is so incredibly wasteful in how engineers are allocated and what problems they are told to work on that it can probably deal with shortages for a long time, but that's a separate discussion.
Engineers partly did this to themselves. The career advice during that time period was to change jobs every few years, demanding higher and higher salaries. So now, employers don't want to pay to train entry-level people, as they know they are likely going to leave, and at the salaries demanded they don't want to hire junior folks.
Long, long ago, the compact was that employees worked hard for a company for a long time, and were rewarded with pensions and opportunities for career advancement. If you take away the pensions and take away the opportunities for career advancement, your employees will advance their careers by switching companies—and the reason that this works so well is because all of the other companies would rather pay more to hire a senior engineer rather than take a risk on a junior.
It’s a systemic problem and not something that you can blame on employees. Not without skipping over a long list of other contributing factors, at least.
I think you've got cause and effect backwards. Employers used to offer incentives to stay in a company and grow organically. They decided that was no longer going to be the deal. So they got the current system. There was never some sudden eureka moment when the secret engineers club decided they wanted to have a super stressful life event every few years just to keep up with inflation.
As I said in another response, I think (at least partly) a contributing factor was the essentially limitless salary budget that VC funded startups and the FAANG companies had. You had software developers who could suddenly make more than doctors and lawyers and of course many of them sensibly acted in their own best interest but that left other employers saying "we're not going to invest in employees who are only going to turn around and leave for salaries we can't pay" and "if we have to pay those kind of salaries, we're not going to hire junior people we want experience."
Once a company hires and trains a junior, they then have a senior... and they don't want to pay them a senior salary, but apparently other companies do.
The math remains simple: if you already have an employee on your payroll, how in the world are you not willing to pay them what they can get by switching at that point? That's literally just starving one's own investment.
The real issue is that the companies who were "training" the juniors were doing so only because they saw the juniors as a bargain given that they were initially willing to work for the lower wage. They just don't stay that way as they grow into the craft.
This is only because companies don't want to raise salaries as engineers' skill levels increase. If companies put junior employees in higher salary bands as their skill levels increase there wouldn't be a problem.
Capitalism and fiduciary duty prevent employers from paying people their market value when they are content enough to stay.
An employee who does not do the effort to re-peg their labor time to market rates for their skill level is implicitly consenting to a prior agreement (when they were hired).
Sometimes because the company investors are overwhelmingly short-sighted, which IMO ties back to the whole "financialization" of our economy into a quasi-casino.
I wonder how things might change if short-term capital gains tax (<5 years) went way up.
That is an extremely short-sighted view on what is essentially an iterated game where the domain knowledge employees have drastically increases their value to the company over time.
When I started work (this was in the pre-consumer-internet era), job hopping was already starting to be a thing, but there was definitely still a large "old school" view that there should be some loyalty between employer and employee. One of my first jobs was a place where they hired for potential. They hired smart, personable people and taught them how to program. They paid them fairly well, and gave annual raises and bonuses. I was there for about 8 years; my salary more than doubled in that time. Maybe I could have made more elsewhere, I didn't even really look because it was a good environment, nice people, low stress, a good mix of people since not everyone (actually only a few) were Comp. Sci. majors.
I don't know how much that still happens, because why would a company today invest in that only to have the employee leave after two years for a higher salary. "They should just pay them more" well yeah, but they did pay them in the sense of teaching them a valuable skill. And their competitors for employees started to include VC funded startups playing with free money that didn't really care what it cost to get bodies into the shop. Hard to compete with that when you actually have to earn the money that goes into the salary budget.
Would the old school approach work today? Would employees stay?
Probably, or should I say, I don't want to rot... It's true that I love the feeling of learning mostly on my own, but I can be lazy too; it's just that I see a parallel between abusing ChatGPT and never doing any physical activity.
Same here. I never really consciously saw it as "defiance" against cognitive decline or anything. More to the point, the answers are much better on average
Sure, if I spend one hour researching a problem vs asking AI in 10 seconds, yes I will almost always learn more in the one hour. But if I spend an hour asking AI questions on the same subject I believe I can learn way more than by reading for one hour. I think the analogy could be comparing a lecture to a one-on-one tutoring session. Education needs to evolve to keep up with the tools that students have at their disposal.
I had thought I saw somewhere that learning is specifically better when you are wrong, if the feedback for that is rapid enough. That is, "guess and check" is the quickest path to learning.
Specifically, asking a question and getting an answer is not a general path to learning. Being asked a question and you answering it is. Somewhat, this is regardless of if you are correct or not.
I hated it when I was doing math homework and they didn't give me the answer sheet. If I could do an integral and verify whether it was correct or not, I could either quickly learn from my mistake, or keep doing integrals with added confidence. Which is how I learned best. Gatekeeping the answers because someone might use them wrong felt weird; you still had to show your work.
I imagine the thought process is that even when one must show their work, having a sneak peek at the answer allows a lazier student to work the problem forwards and backwards, hoping to fudge through the middle plausibly well.
Yeah. I also felt it largely went at odds with the entire concept of flashcards. Which... are among the most effective tools that I did not take advantage of in grade school.
Actually, for most things (not PhD research level) you will learn more from the first approach. Getting the answer directly means you can use the rest of the "free" time to integrate the new knowledge into prior knowledge and commit the information to long-term memory.
I think you put your finger on it with the mention of discipline. I find AI tools quite useful for giving me a quick outline of things I want to play with or get up to speed on fast, but not necessarily get too invested in. But if you find yourself so excited by a particular result that it sets your imagination whirling, it might be time to switch out of generative mode and use the AI as a tutor to deepen your actual understanding, ideally in combination with books or other static learning resources.
Yes, the downside is that we aren't really learning anything, just solving problems supported by machines that tell us the solutions. Any schmuck can do that.
>you can either get it directly, or spend time researching the answer, you're going to learn much more with the latter
A LOT of the time the things I ask LLMs for are to avoid metaphorically wading through a garbage dump looking for a specific treasure. Filtering through irrelevant data and nonsense to find what I'm looking for is not personal development. What the LLM gives back is often a very much better jumping off point for looking through traditional sources for information.
Often when I ask LLM things about topics I was once reasonably expert in, but have spent a few months or years away from, its answers provide garbage as if it were treasure.
I don't know if I agree here. When I ask an LLM a question it always leads to a whole lot of other questions with responses tailored to my current level of understanding. This usually results in a much more effective learning session than reading a bunch of material that I might not retain anyway because I'm scanning it looking for my answers.
There are two aspects to this from my pov. And I think it might be controversial.
When I have a question about any topic and I ask ChatGPT, I usually chat about more things, coming up with questions based on the answer, and mostly stupid questions. I feel like I am taking in the information, analyzing it, and then diving deeper because I am curious. This is based on how I learn about stuff. I know I need to check a few things, and that it's not fully accurate, but the conversation flows in a direction I like.
Compare this to researching on the internet: there are some good aspects, but more often than not, I end up reading an opinionated post by someone (no matter the topic, if you go deep enough, you will land on an opinionated factual telling). That feels like someone decided what questions are important, what angles we need to look at, and what the conclusion should be. Yes, it is educational, but I am always left with lingering questions.
The difference is curiosity. If people are curious about a topic, they will learn. If not, they are happy with the answer. And that is not laziness. You cannot be curious about everything.
> Compare this to researching on the internet: there are some good aspects, but more often than not, I end up reading an opinionated post by someone (no matter the topic, if you go deep enough, you will land on an opinionated factual telling).
ChatGPT is in fact opinionated, it has numerous political positions ("biases") and holds some subjects taboo. The difference is that a single actor chooses the political opinions of the model that goes on to interact with many more people than a single opinion piece might.
Political searches, I assume, would be a very minor percentage of real learning. Even in such cases, I would rather rely on a good LLM's response than scrounge the websites of mainstream media or blogs, etc. For an objective response, reading through opinionated articles and forming my opinion is an absolute waste of time. I'd want the truth as accurately as possible. Plus, people don't generally change political opinions based on what they read. They read stuff aligning with their side.
> For an objective response, reading through opinionated articles and forming my opinion is an absolute waste of time
If the sources are all opinionated articles, per GP, that's what the LLM is going to base its "objective response" on. That's literally all it has as sensory input.
Yes that is true. Though that can be subsumed if you notice it, and ask the model to ignore those biases. (an extreme example would be opposition prep for a debate). I am not interested in politics and other related issues anyway.
I don't think that's modeling the risk correctly. In my mind the risk is that ChatGPT's creators are able to influence your political opinions _without_ you seeking that out
I really think the ability to ask questions entirely free from all judgment is an under-emphasized aspect of the power of these tools. Yes, some people are intellectually secure enough to ask the "dumb" questions of other humans, but most people are not, especially to an audience of strangers. I don't think I ever once asked a question on Stack Overflow, because it was easy to see how the question I worried might be dumb might be treated by the community there. But I ask all sorts of dumb questions of these models, with nary a concern about being judged. I love that aspect of it.
The other thing is that it will make an earnest attempt to answer the question. On the other hand with places like SO, many questions will be incorrectly marked as duplicate with the “answer” link pointing to a post that might seem similar at first glance but is different enough to not actually be the same, which is supremely unhelpful.
You can also ask it to explain the subject like you’re 5, which might not feel appropriate when interacting with a human because that can feel burdensome.
All of this is heavily caveated by how dramatically wrong LLMs can be, though, and can be rendered moot if the individual in question is too trusting and/or isn’t aware of the tendency of LLMs to hallucinate, pull from bad training data, or match the wrong patterns.
Personally, I find that even when it's wrong, it's often useful, in that I come away with hints toward how to follow up.
I do have concerns that people who haven't lived a couple decades of adult life prior to the existence of these tools will be a lot more credulous, to their detriment.
I think this "refusal to answer" thing is so overblown. I have been using this technology every day for over two years now and have not one single time run into this.
You might be right. From my experience, it depends. The edgier your exploration, the more that can happen.
It's important to note that not everyone abides by the same morals. And a narrowly constrained model may end up refusing genuine inquiries just because.
In any case, if anything, this is a small "but". OP's point is the gold nugget here: LLMs allow exploring subjects without the fear of being judged for one's natural curiosity.
Yeah, I'm in no way claiming that this isn't a thing, or even that it isn't a problem.
But it isn't a problem for most people. The kind of edgelords that run into this are overrepresented on internet forums, including HN, but it's actually a pretty small group of people.
Remember that ChatGPT can only give you information that a) it has found on the web and b) that it has made up itself on the spot. It certainly can't get up and go to the library to read a forgotten source not cited on Wikipedia, say.
So when you have a "curious" debate with ChatGPT what you're really doing is searching the internet through a filter, guided by your own and ChatGPT's biases about the subject, but still and always based on whatever you would have found by researching stuff on the internet.
You're still on the internet. It may feel like you've finally escaped but you haven't. The internet can now speak to you when you ask it, but it's still the internet.
I've been calling this out since OpenAI first introduced ChatGPT.
The danger in ubiquitously available LLMs, which seemingly have an answer to any question, isn’t necessarily their existence.
The real danger lies in their seductive nature - over how tempting it becomes to immediately reach for the nearest LLM to provide an answer rather than taking a few moments to quietly ponder the problem on your own. That act of manipulating the problem in your head—critical thinking—is ultimately a craft. And the only way to become better at it is by practicing it in a deliberate, disciplined fashion.
I recognize this problem, but I find in my own uses of ChatGPT it actually allows me to overcome my laziness rather than making it worse.
I'll have a problem that I want to work on but getting started is difficult. Asking ChatGPT is almost frictionless; the next thing I know I'm working on the project, 8 hours go by, and I'm done. When I get stuck on some annoying library installation, ChatGPT solves it for me so I don't get frustrated. It allows me to enter and maintain flow states better than anything else.
ChatGPT is a really good way of avoiding procrastination.
I’ve found the same. Claude outputs are usually not good enough for what I’m looking for but the conversation is enough to get me engaged and started on a project.
> The real danger lies in their seductive nature - over how tempting it becomes to immediately reach for the nearest LLM to provide an answer rather than taking a few moments to quietly ponder the problem on your own.
I get the point you're trying to make. However, quietly pondering the problem is only fruitful if you have the right information. If you don't, best case scenario you risk wasting time reinventing the wheel for no good reason. In this application, an LLM is just the same type of tool as Google: a way to query and retrieve information for you to ingest. Like Google, the info you get from queries is not the end but the means.
As the saying goes, a month in the lab saves you a week in the library. I would say it can also save you 10 minutes with Claude/ChatGPT/Copilot.
I'll stop short of asserting you don't, but I'm having a hard time convincing myself that your reply reflects that you get GP's point.
If I were to reframe GP's point, it would be: having to figure out how to answer a question changes you a little. Over time, it changes you a lot.
Yes, of course, there is a perspective from which a month spent in the lab to answer a question that's well-settled in the literature is ~wasted. But the GP is arguing for a utility function that optimizes for improving the questioner.
Quietly pondering the problem with the wrong information can be fruitful in this context.
(To be pragmatic, we need both of these. We'd get nowhere if we had to solve every problem and learn every lesson from first principles. But we'd also get nowhere if no one were well-prepared and motivated to solve novel problems without prior art.)
Some of the most memorable moments I had in my learning were when I "reinvented" something. In high school, our math teacher had us reinvent the derivative rules, and later had us derive Euler's identity through Taylor series. They were big eureka moments. Going through all the work someone else did hundreds of years ago is very inspiring, and IMO gets you in the right mindset for discovery. I can't imagine where the joy of learning comes from for someone who sees learning as a test: a question, an answer, nothing in between.
In uni we built a CPU from scratch over the course of a few weeks: first building a small ALU, widening its bus, adding memory operations, etc. Beyond learning how things work, it makes you wonder how inventing this without a teacher to guide you must've been, and gives you an appreciation for it. It also makes you extrapolate and think about the things that haven't been invented or discovered yet.
In theory LLMs could serve as a teacher guiding you as you reinvent things. In practice, people just get the answer and move on. A person with experience teaching, who sees how you're walking the path and compares it to how they walked theirs, will know when to give you an answer and when to have you find it yourself.
One doesn't learn how to do lab-work in the library.
LLMs have taught me something that I sort of already knew from Hitchhiker's Guide to the Galaxy: the key to problem solving is asking the right question in the first place. It's not dangerous that answers can be retrieved quickly. Indeed, people had the same things to say about Google in the 90s or pocket calculators in the 70s. To me, LLMs just speed up the process I would have gone through manually searching the internet in the first place. The only way to get good at critical thinking is to ask more questions.
I think this is where my physical laziness benefits me. I’m often too lazy to spend the time to fully describe the problem to the LLMs and wrap it in a prompt that will produce something, in written text, so I think through it first. Usually I solve it myself or think of a better primary source.
A preprint is available on arXiv [0]; see the top of page 18 for what metacognitive laziness is:
"In the context of human-AI interaction, we define metacognitive laziness as learners’ dependence on AI assistance, offloading metacognitive load, and less effectively associating responsible metacognitive processes with learning tasks."
And they seem to define, implicitly, “metacognitive load” as the cognitive and metacognitive effort required for learners to regulate their learning processes effectively, particularly when engaging in tasks that demand active self-monitoring, planning, and evaluation.
They analogize metacognitive laziness to cognitive offloading, where we have our tools do the difficult cognitive tasks for us, which robs us of opportunities to develop those skills and ultimately leaves us dependent on those tools.
> In the context of human-AI interaction, we define metacognitive laziness as learners’ dependence on AI assistance, offloading metacognitive load, and less effectively associating responsible metacognitive processes with learning tasks.
This sounds like parents complaining when we use Google Maps instead of a folding map. Am I worse at reading a regular map? Possibly. Am I better off overall? Yes.
Describing it as "laziness" is reductive. "Dependence on [_____] assistance" is the point of all technology.
> "Dependence on [_____] assistance" is the point of all technology.
I will note two things though.
1. Not all technology creates "dependence". Google Maps removes the need to carry bulky maps or buy new ones to stay updated, but someone who knows how to read Google Maps will know how to read a normal map, even if they're not as quick at it.
2. The best technology isn't defined by the "dependence" it creates, or even the level of "assistance" it provides, but for what it enables. Fire enabled us to cook. Metalworking enabled us to create a wealth of items, tools and structures that wouldn't exist if we only had wood and stone. Concrete enabled us to build taller and safer. Etc.
It's still unclear what AI chatbots are enabling. Is LLMs' big claim to fame letting people answer problem sets and emails with minimal effort? What does this unlock? There's a lot of talk about enabling better data analysis, saving time, and vague claims of an AI revolution, but until we see X, Y and Z, and can confidently say "yeah, X, Y and Z are great for mankind, and they couldn't have happened without chatbots", it's fair for people to keep complaining about the changes and downsides AI chatbots are bringing about.
When you’re using a map you’re still navigating, even if you’re just following directions. The act of navigating teaches you spatial awareness regardless of how you got there.
AI doesn’t provide directions, it navigates for you. You’re actively getting stupider every time you take an LLMs answer for granted, and this paper demonstrates that people are likely to take answers for granted.
> AI doesn’t provide directions, it navigates for you.
LLMs (try to) give you what you're asking for. If you ask for directions, you'll get something that resembles that, if you ask it to 100% navigate, that's what you get.
> and this paper demonstrates that people are likely to take answers for granted.
Could you point out where exactly this is demonstrated in this paper? As far as I can tell from the study, people who used ChatGPT for studying did better than the ones that didn't, with no difference in knowledge retention.
> Could you point out where exactly this is demonstrated in this paper? As far as I can tell from the study, people who used ChatGPT for studying did better than the ones that didn't, with no difference in knowledge retention.
This is what I observed as well. For the "metacognitive laziness" bit they had to point to other studies.
Page 18 first paragraph, it talks about how ChatGPT users engaged less with the editing process compared to other methods. Sorry, copy and paste isn’t working for some reason.
You still need to map the real world to the map and actually follow the instructions. It’s the difference between navigating and having a driver navigate for you.
On one hand, this reminds me of how all of the kids were going to be completely helpless in the real world because "no one carries a calculator in their pocket". Then calculators became something ~everyone has in their pocket (and the kids ended up just fine).
On the other hand, I believe in the value of "learning to learn", developing media literacy, and all of the other positives gained when you research and form conclusions on things independently.
The answer is probably somewhere in the middle: leveraging LLMs as a learning aid, rather than LLMs being the final stop.
I think it's closer to the library example. My parents have mentioned how they had to go to a library and look for books whereas my generation can just use the internet.
Realistically my guess is that the bar for broad knowledge and ability to get to details quickly will increase. There's a lot of value in understanding multiple disciplines at a mediocre level if you can very quickly access the details when needed. Especially since learning speed tends to get slower and slower the deeper you go.
Also since every time I've needed to do something complicated, even if I knew the details it was important enough to double check my knowledge anyway.
> On the other hand, I believe in the value of "learning to learn", developing media literacy, and all of the other positives gained when you research and form conclusions on things independently.
That is not going away. Learning better prompts, learning when to ignore AI, learning how to take information and turn it into something practical. These new skills will replace the old.
How many of us can still...
- Saddle a horse
- Tell time without a watch
- Sew a shirt
- Create fabric to sew a shirt
- Hunt with primitive tools
- Make fire
We can shelter children from AI, or we can teach them how to use it to further themselves. Talk to the Amish if you want to see how it works out when you forgo anything that feels too futuristic. A respectable life, sure. But would any of us reading this choose it?
How many of us still have to do these things? You either need to do them or you don't. If you do, you will learn how or find someone who does.
We still need to calculate numbers, and I can say it's silly if I find someone needs to get a calculator to do 5x20. Same if you're taking hours and multiple sheets of paper for something that would take you a few minutes with a calculator. There's a question of scale and basic understanding that divides the two.
Yep, we agree. That's the whole point of what I said in the first half of my original comment.
At one time, they were common skills. Things changed, they aren't common, they aren't really needed (for most people), and everyone is doing just fine without them. We've freed up time and mental capacity for other (hopefully more beneficial) tasks.
(I'm confused why this reply and the other above it are just restating the first part of my original comment, but framing it like it's not a restatement.)
There are things that aren't needed because we have found easier ways to solve the problem they were solving. In contrast, there are things that exist only to make the job easier, not to make it go away. You don't need to light a fire because of how houses are heated and food is cooked. But you still need to learn arithmetic to use a calculator. There's a difference between the two.
If the goal is to learn, the means don't matter much as long as the right attitude is there. But if one only wishes to appear knowledgeable, LLMs have indeed made it way easier.
It's astounding to me that people just like... always trust whatever the LLM says.
I have some friends who use ChatGPT for everything. From doing work to asking simple questions. One of my friends wanted a bio on a certain musician and asked ChatGPT. It's a little frightening he couldn't, you know, read the Wikipedia page of this musician, where all of the same information is and there are sources for this material.
My mom said she used ChatGPT to make a "capsule wardrobe" for her. I'm thinking to myself (I did not say this to her)... you can't just like look at your clothes and get rid of ones you don't wear? Why does a computer need to make this simple decision?
I'm really not sure LLMs should ever be used as a learning aid. I have never seen a reason to use them over, you know, searching something online. Or thinking of your own creative story. If someone can make a solid case as to why LLMs are useful, I would like to hear it.
Regarding your mom’s clothes: she wasn’t asking the machine to give advice she couldn’t think of on her own, she was seeking external validation and permission to purge and override the hoarder urge of her personality.
This is like when CEOs hire outside consulting firms to do layoffs for them. Pinning the pain of loss on some scapegoat makes it more bearable.
I agree, at first I thought gpt would be used by tech savvy folk, but now it is clear that it’s becoming a crutch. My friend couldn’t respond to an email without it.
> One of my friends wanted a bio on a certain musician and asked ChatGPT.
I use ChatGPT (or Gemini) instead of web searches. You can blame the content and link farms that are top of the search results, and the search engines focusing on advertising instead of search, because we're the product.
Why your friend doesn't know about Wikipedia is another matter; if I wanted a generic info page about some topic I'd go directly there. But if I wanted to know whether Bob Geldof's hair is blue, I might ask an LLM instead of reading the whole Wikipedia page.
I also ask LLMs for introductory info about programming topics I don't know about, because I don't want to go to Google and end up on w3schools, geeksforgeeks and crap like that.
I don't really trust LLMs for advanced programming topics, you know, what people pay me for. But they're fine for giving me a function signature or even a small example.
You can use source material instead of LLMs for all of this.
"Is Bob Geldof's hair blue?" -> Search for Bob Geldof -> Look at images of Bob Geldof.
Intro programming topics can be found at the documentation of the website. Your searching query might be "[programming topic] getting started" and usually if it's a package or a tool there will be documentation. If you want good documentation on web dev stuff that isn't w3schools or geeksforgeeks you can use MDN documentation.
Or, if you really want a general overview there's probably a YouTube video about the topic.
Additionally appending "reddit" to a search will give better results than SEO junk. There are always ways to find quality information via search engines.
> Assuming I get images of Bob Geldof. More likely the first page will be pinterest login-required results.
You do realize Google/Bing/DDG/Kagi all have an Images tab, right? Come on.
> Life's too short to watch talking heads about ... you know, WRITING code ...
If I want a high level overview of what the thing even is, a YouTube video can be useful since there will be explanations and visual examples. You can read documentation as well. For example, if I want a slower overview of something step by step, or a talk at a conference about why to use this thing, YouTube can be helpful. I was just looking at videos about HTMX this weekend, hearing presentations by the authors and some samples. That's not to say that if I actually use the thing I won't be reading the documentation; it's more just useful for understanding what the thing is.
> Seriously? Maybe for the top 500 npm packages. Not for the more obscure libraries that may have only some doxygen generated list of functions at best.
How do you expect your LLM to do any better? If you're using some obscure package there will probably be documentation in the GitHub README somewhere. If it's horrible documentation you can read the Typescript types or do a code search on GitHub for examples.
This is all to say that I generally don't trust LLM output because I have better methods of finding the information LLMs are trained on. And no hallucinations.
We don't teach slide rules and log tables in school anymore. Calculators and computers have created a huge metacognitive laziness for me, and I teach calculus and have a PhD in statistics. I barely remember the unit circle except for multiples of pi/4 radians. I can do it in multiples of pi/6 but I'm slower.
But guess what? I don't think I'm a worse mathematician because I don't remember these things reflexively. I might be a little slower getting the answer to a trivial problem, but I can still find a solution to a complex problem. I look up integral forms in my pocket book of integrals or on Wolfram Alpha, because even if I could derive the answer myself I don't think I'd be right 100% of the time. So metacognitive laziness has set in for me already.
But I think as long as we can figure out how to stop metacognitive laziness before it turns into full-fledged brain-rot, then we'll be okay. We'll survive as long as we can still teach students how to think critically, and figure out how to let AI assist us rather than turn us into the humans on the ship from Wall-E. I'm a little worried that we'll make some short term mistakes (like not adapting our curriculum fast enough), but it will work out.
I think you're right at the edge of explaining why this "laziness" is a good thing. Everything that we have made is built on what we had before, and abstracts away what we had before. 99% of us don't remember how to make even the simplest Assembly program, and yet we unleash billions of instructions per second on the world.
Even outside of math and computers, when was the last time you primed a well pump or filled an oil lamp? All of these tasks have been abstracted away, freeing us to focus on ever-more-specialized pursuits. Those that are useful will too be abstracted away, and for the better.
Your comment exposes how much metacognitive laziness you have in modern society: you didn't realize that people still do these things, just not you. They aren't obsolete tasks, just done at a layer you don't see.
I don't have to prime a well pump any more because my house and workplace are hooked into the municipal water system. I don't have to prime a pump because that task has gotten so abstract as to become turning a faucet handle. But engineers at the municipal water plant do have to know how to do this task.
Similarly, filling an oil lamp and lighting it is now abstracted for normal people as flipping a light switch (maybe changing a light bulb is a more appropriate comparison). But I actually have filled an oil lamp when I was a kid because we kept "decorative" hurricane lamps in my house that we used when the power went out. The exact task of filling an oil lamp is not common, but filling a generator with fuel is still needed to keep the lights on in an emergency, although it is usually handled by the maintenance staff of apartment buildings and large office buildings.
The abstract does not define, nor contextually suggest from the prior statements of the results, what "metacognitive laziness" means.
Personally speaking, I find being able to ask ChatGPT continually more nuanced questions about an initial answer the one clear benefit over a Google search, where I have diminishing marginal returns on my inquisitiveness for the time invested over subsequent searches. The more precisely I am able to formulate my question on a traditional search engine, the harder it is for non-SEO optimized results to appear: it's either meant more for a casual reader with no new information, or is a very specialized resource that requires extensive professional background knowledge. LLMs really build that bridge to precisely the answers I want.
In the absence of a definition I'd read it straightforwardly - it means that someone stops making an effort to learn better ways to learn. I.e. if they start using chatbots to learn, they stop practicing other methods and just rely on the chatbot. (EDIT: I realize now that this probably isn't news to the parent!)
I've heard stories of junior engineers falling into this trap. They asked the chatbot everything rather than exposing their lack of knowledge to their coworkers. And if the chatbot avoids blatant mistakes, junior engineers won't recognize when the bot makes a subtle one.
> LLMs really build that bridge to precisely the answers I want.
It is interesting that you describe this as "the answers you want" and not "the correct answer to the question I have"
Not criticising you in particular, but this does sound to me like this approach has a good possibility of just reinforcing existing biases
In fact the approach sounds very similar to "find a wikipedia article and then go dig through the sources to find the original place that the answers I want were published"
Though I think you're reading more into my phrasing than I meant, the overall skepticism is fair.
One thing I do have to be mindful of is asking the AI to check for alternatives, for dissenting or hypothetical answers, and sometimes I just ask it to rephrase to check for consistency.
But doing all of that still takes way less time than searching for needles buried by SEO optimized garbage and well meaning but repetitious summaries.
Agreeable LLMs and embedded bias are surely a risk, but I don't think this a helpful frame. Most questions don't have correct answers, so it would follow that you'd want practical answers for those, and correct answers for the remainder.
There was a story a couple days ago about a neural network built on a single photonic chip. I fed the paper to ChatGPT and was able to use it to develop a much more meaningful and comprehensive understanding of what the chip actually delivered, how it operated, the fundamental operating principles of core components and how it could be integrated into a system.
The fact that I now have a tireless elucidator on tap to help explore a topic (hallucination caveats notwithstanding) actually increases my motivation to explore dense technical information and understanding of new concepts.
The one area where I do think it is detrimental is my willingness to start writing content on a proverbial blank sheet of paper. I explore the topic with ChatGPT to get a rough outline, maybe some basic content, and then take it from there.
On the other hand you might be getting worse at reading those papers yourself.
The more youngsters skip the hassle of banging their heads on some topic the less able they will be to learn at later age.
There's more to learning than getting information, it's also about processing it (which we are offloading to LLMs). In fact I'd say that the whole point of going through school is to learn how to process and absorb information.
What if the LLMs are teaching us that long form prose/technical writing is just a really bad, unnatural format for communication but natural dialogues are a good format?
The way I see it, it is sort of like debugging code you're not well accustomed to.
You're still going to learn if you go through the hassle of understanding the system, developing a method for debugging it, and learning about it along the way.
Of course a senior could point you to the issue right away, probably an LLM too, and even provide a learning opportunity, but does it hold the same lasting impact as being able to overcome the burden yourself?
Which one makes a more lasting effect on your abilities and skills?
Again, LLMs are a tool, but if people in school/college start using it to offload the reasoning part they are not developing it themselves.
If that were the case, every scientific paper would be written as a Socratic dialogue. But it's not, because that's a good format for beginners, not for science.
The reason the current format exists and is used is that it's very information dense. I think scientific papers would be better if they were Socratic dialogues.
But the limitation in publishing a dialogue is that you'd just get to publish one of them, and each reader is going to come in with different questions and goals for what they want out of the paper.
I do read the paper, but when you run into dense explanations like this:
> To realize a programmable coherent optical activation function, we developed a resonant electro-optical nonlinearity (Fig. 1(iii)). This device directs a fraction of the incident optical power |b|² into a photodiode by programming the phase shift θ in an MZI. The photodiode is electrically connected to a p–n-doped resonant microring modulator, and the resultant photocurrent (or photovoltage) detunes the resonance by either injecting (or depleting) carriers from the waveguide.
It becomes very difficult to pick apart each thing, find a suitable explanation of what the thing (eg. MZI splitter, microring modulator, how a charge detunes the resonance of the modulator) is or how it contributes to the whole.
Picking these apart and recombining them with the help of something like ChatGPT has given me a very rapid drill-down capability into documents like this. Then re-reading it allows me to take in the information in the way it's presented.
If this type of content was material to my day job it would be another matter, but this is just hobby interest. I'm just not going to invest hours trying to figure it out.
Sure, same as I'm probably pretty bad at going to the library and looking up information there, with the advent of the internet.
In practice, this lets you reasonably process the knowledge from a lot more papers than you otherwise would, which I think is a win. The way we learn is evolving, as it has in the past, and that's a good thing.
Though I agree that this will be another way for lazy children to avoid learning (by just letting AI do the exercises), and we'll need to find a good solution for that, whatever it may be.
Not being able to glean information from a paper is wildly different than being unable to use a card catalog. The former is basic reading comprehension; the latter is a technology.
You AREN'T learning what that paper is saying; you're learning parts of what the LLM says is useful.
If you read just theorems, you aren't learning math. You need to read the proof too, and not just a summary of the proof.
I use ChatGPT a lot each day for writing and organizing tasks, and summaries/explanations of articles etc.
When dealing with topics I'm familiar with, I've found the hallucinations have dropped substantially in the last few years from GPT2 to GPT3 to GPT4 to 4o, especially when web search is incorporated.
LLMs perform best in this regard when working with existing text that you've fed them (whether via web search or uploaded text/documents). So if you paste the text of a study to start the conversation, it's a pretty safe bet you'll be fine.
If you don't have web search turned on, I'd still avoid treating the chat as a search engine though, because 4o will still get little details wrong here and there, especially for newer or more niche topics that wouldn't be as well-represented in the training data.
I've found them to be quite accurate when given enough context data. For example, feeding an article into its context window and asking questions about it. Relying on the LLM's internal trained knowledge seems to be less reliable.
What powers it is Kagi Search :) All chatbots have access to similar models, what distinguishes the answer quality is/will be the quality of search results fed to them.
I'm really happy about being able to share Kagi results. It's allowed me to slip Kagi into a few discussions to raise awareness. Also, being on bluesky helps because so many folks complain about google but they're not aware of better options.
One thing I wanted to raise: please keep the Kagi homepage as light as possible. It's tempting to keep adding little things and over time you get a big mess.
I agree in general but the way this has worked for me in practice is that I approach things hierarchically up and down. Any specific hallucinations tend to come out in the wash as the same question is asked from different layers of abstraction.
Even though ChatGPT "invents" its own reality sometimes, I also find it superior to Google search results (or Duck Duck Go). In some cases LLM results even provide specific strings to search for in the search engines to verify the content. Search is terribly broken and has been since around 2014 (arbitrary date) where Google search results pages started displaying more ads than results.
>The abstract does not define, nor contextually suggest from the prior statements of the results what "metacognitive laziness" means.
Your comment seems like a good example of metacognitive laziness: not bothering to formulate your own definition from the examples in the abstract and the meaning of the words themselves. Slothful about the process of thinking for yourself.
I reread the abstract 3 times. The results stated prior to that definition simply don't follow consistently with the component meaning of those two words as I understand them.
> When using AI in learning, learners should focus on deepening their understanding of knowledge and actively engage in metacognitive processes such as evaluation, monitoring, and orientation, rather than blindly following ChatGPT's feedback solely to complete tasks efficiently.
Cell phones and laptops in general have changed a couple of things for me, as someone who grew up without them:
- I realized about 20y-25y ago that I could run a Web search and find out nearly any fact, probably one-shot but maybe with 2-3 searches' worth of research
- About 10-15y ago I began to have a connected device in my pocket that could do this on request at any time
- About 5y ago I explicitly *stopped* doing it, most of the time, socially. If I'm in the middle of a conversation and a question comes up about a minor fact, I'm not gonna break the flow to pull out my screen and stare at it and answer the question, I'm gonna keep hanging out with the person.
There was this "pub trivia" thing that used to happen in the 80s and 90s where you would see a spirited discussion between people arguing about a small fact which neither of them immediately had at hand. We don't get that much anymore because it's so easy to answer the question -- we've just totally lost it.
I don't miss it, but I have become keenly aware of how tethered my consciousness is to facts available via Web search, and I don't know that I love outsourcing that much of my brain to places beyond my control.
A long time ago I had the idea that maybe Guinness started a "book of world records" precisely because it answers exactly the kind of question that will routinely pop up at the pub.
Sure, but that ruins the ability to just pop into a pub and play with people you don't know (let alone trust).
I have this business idea for a pub in a faraday cage that would make cheating impossible for pub trivia (added bonus: also removes any other reason for anyone to be on their phones!)
I agree but I think we shouldn’t limit this answer to pub trivia. What other aspect of human society and civil discourse did we lose because we never argue or discuss any more?
Well it certainly sucks in cases where someone "fact checks" you but they do so before a broader discussion has given them enough context to even know what to google or ask the bot.
> There was this "pub trivia" thing that used to happen in the 80s and 90s where you would see a spirited discussion between people arguing about a small fact which neither of them immediately had at hand. We don't get that much anymore because it's so easy to answer the question -- we've just totally lost it.
A good example, but imagine the days of our ancestors:
Remember that game we used to play, where we'd find out who could see birds from the farthest distance? Yeah, glasses ruined that.
1. Socrates criticized writing itself: in Plato's Phaedrus he said it would "create forgetfulness in the learners' souls, because they will not use their memories" (274e-275b)
2. Leonhard Euler criticized the use of logarithm tables in calculating: in his 1748 "Introductio in analysin infinitorum" he insisted on deriving logarithms from first principles
3. William Thomson (Lord Kelvin) initially dismissed mechanical calculators, stating in an 1878 lecture at Glasgow University that they would make students "neglect the cultivation of their reasoning powers"
4. Henry Ford in his autobiography "My Life and Work" (1922) quoted a farmer who told him in 1907 that gasoline tractors would "make boys lazy and good for nothing" and they'd "never learn to farm"
5. In 1877, the New York Times published concerns from teachers about students using pencils with attached erasers, claiming it would make them "careless" because they wouldn't have to think before writing. The editorial warned it would "destroy the discipline of learning"
6. In "Elements of Arithmetic," (1846) Augustus De Morgan criticized the use of pre-printed multiplication tables, saying students who relied on them would become "mere calculative mechanism" instead of understanding numbers
7. In his 1906 paper "The Menace of Mechanical Music," John Philip Sousa attacked the phonograph writing that it would make people stop learning instruments because "the infant will be taught by machinery" and musical education would become "unnecessary"
8. In his 1985 autobiography "Surely You're Joking, Mr. Feynman!" Richard Feynman expressed concern about pocket calculators and students losing the ability to estimate and understand mathematical relationships
I could go on (Claude wrote 15 of them!). Twenty years from now (assuming AI hasn't killed us all) we'll look back and think that working with an LLM isn't the crutch people think it is now.
I do not consider any of the 8 wrong, if the goal is to understand each of their ideas. The pace of life was slower in their times, and painting with an eraser is kind of a different genre.
So humans are supposed to review all of the code that GenAI creates. We’re supposed to ensure that it doesn’t generate (obvious?) errors and that it’s building the “right thing” in a manner prescribed by our requirements.
The anecdotes from practitioners using GenAI in this way suggest it’s a good tool for experienced developers because they know what to look out for.
Now we admit folks who don’t know what they’re doing and are in the process of learning. They don’t know what to look out for. How does this tech help them? Do they know to ask what a use-after-free is or how cache memory works? Do they know the names of the algorithms and data structures? Do they know when the GenAI is bullshitting them?
Studies such as this are hard but important. Interesting one here even though the sample is small. I wonder if anyone can repeat it.
> Do they know when the GenAI is bullshitting them?
Anecdote from a friend who teaches CS: this year a large number of students started adding unnecessary `break` instructions to their C code, like so:
    while (condition) {
        do_stuff();
        if (!condition) {
            break;
        }
    }
They asked around and realized that the common thread was ChatGPT - everyone who asked how loops work got a variation of "use break() to exit the loop", so they did.
Given that this is not how you do it in CS (not only is it unnecessary, but it also makes your formal proofs more complex), they had to make a general one-time exception and add disclaimers in exams reminding them to do it "the way you were taught in class".
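To illustrate the point (my own sketch, not the course material): the loop condition is already re-checked at the top of every iteration, so the idiomatic version needs neither the inner test nor the break.

    /* the condition at the top already terminates the loop */
    while (condition) {
        do_stuff();
    }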
A colleague of mine once taught a formal methods course for students working on their masters -- not beginners by any stretch.
The exercise was to implement binary search given the textbook specification without any errors: an algorithm they had probably implemented in their first-year algorithms course at the very least. The students could write any tests they liked and add any assertions they thought would be useful. My colleague verified each submission against a formal specification. The majority of submissions contained errors.
For a simple algorithm that a student at that level could be reasonably expected to know well!
Now... ChatGPT and other LLM-based systems, as far as I understand, cannot do formal reasoning on their own. It cannot tell you, with certainty, that your code is correct with regards to a specification. And it can't tell you if your specification contains errors. So what are students learning using these tools?
Given that most binary searches have an overflow error built in, I think it’s harder than a first year problem to do binary searches without the classical overflow error…
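For readers who haven't seen it, here is a minimal sketch of the classical bug being alluded to (an illustrative example, not code from the course or the study): computing the midpoint as (lo + hi) / 2 can overflow for large indices, whereas lo + (hi - lo) / 2 cannot.

    #include <assert.h>
    #include <stddef.h>

    /* Binary search over a sorted int array, searching in [lo, hi).
       Returns the index of key, or -1 if it is absent. */
    static long search(const int *a, size_t n, int key) {
        size_t lo = 0, hi = n;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;   /* overflow-safe midpoint */
            if (a[mid] < key)
                lo = mid + 1;
            else if (a[mid] > key)
                hi = mid;
            else
                return (long)mid;
        }
        return -1;
    }

    int main(void) {
        int a[] = {1, 3, 5, 7, 9};
        assert(search(a, 5, 7) == 3);    /* present */
        assert(search(a, 5, 4) == -1);   /* absent  */
        return 0;
    }

Even a loop this small hides the off-by-one and overflow traps the parent comments describe, which is exactly why verifying it against a formal specification is a worthwhile exercise.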
You take a few points from the students that posted inane code by following the LLM, and those students will learn to never blindly follow an LLM again.
If you take the generated code snippets and ask something like "There may or may not be something syntactically or stylistically wrong with the following code. Try to identify any errors or unusual structures that might come up in a technical code review.", then it usually finds any problems or at least, differences of opinion on what the best approach is.
(This might work best if you have one LLM critique the code generated by another LLM, eg bouncing back and forth between Claude and ChatGPT)
Some tools have also started to support a one-two punch of asking a reasoning model (o1 or R1 etc) for planning the solution, and a chat model to build it. Works quite well.
> Now we admit folks who don’t know what they’re doing and are in the process of learning. They don’t know what to look out for. How does this tech help them? Do they know to ask what a use-after-free is or how cache memory works? Do they know the names of the algorithms and data structures? Do they know when the GenAI is bullshitting them?
You can know enough in X to allow you to do Y together with X, which you might not have been able to before.
For example, I'm a programmer, but horrible at math. I want to develop games, and I technically could, but all the math stuff makes it a lot harder sometimes to make progress. I've still managed to make and release games, but math always gets in the way. I know exactly how I want it to behave and work, but I cannot always figure out how to get there. LLMs help me a lot with this, where I can isolate those parts into small black boxes that I know they give me the right thing, but not 100% sure about how. I know when the LLM gives me the incorrect code, because I know what I'm looking for and why, only missing the "how" part.
Basically like having 3rd party libraries you don't fully understand the internals of, but can still use granted you understand the public API, except you keep it in your code base and pepper it with unit tests.
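As a concrete illustration of that black-box-plus-tests pattern (a hypothetical example of mine, not the commenter's code): a tiny vector-reflection helper whose behaviour is pinned down by an assertion, so it can be trusted and reused without re-deriving the math each time.

    #include <assert.h>
    #include <math.h>

    typedef struct { float x, y; } vec2;

    /* Reflect velocity v off a unit-length surface normal n:
       r = v - 2 * dot(v, n) * n */
    static vec2 reflect(vec2 v, vec2 n) {
        float d = 2.0f * (v.x * n.x + v.y * n.y);
        return (vec2){ v.x - d * n.x, v.y - d * n.y };
    }

    int main(void) {
        /* falling straight down onto a floor (normal pointing up)
           should bounce straight back up */
        vec2 r = reflect((vec2){0.0f, -1.0f}, (vec2){0.0f, 1.0f});
        assert(fabsf(r.x) < 1e-6f && fabsf(r.y - 1.0f) < 1e-6f);
        return 0;
    }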
> Do they know to ask what a use-after-free is or how cache memory works? Do they know the names of the algorithms and data structures? Do they know when the GenAI is bullshitting them?
No, which is why people who don't pick up on the nuances of programming - no matter how often they use LLMs - will never be capable programmers.
I'm at this very moment testing deepseek-r1, a so-called "reasoning" LLM, on the excellent "rustlings" tutorial. It is well documented and its solutions are readily available online. It is my lazy go-to test for coding tasks, to assess for me if and when I have to start looking for a new job and take up software engineering as a hobby. The reason I test with rustlings is also to assess its value as a learning tool for students and future colleagues. Maybe these things have use as a teacher? Also, the Rust compiler is really good at offering advice, so there's an excellent baseline to compare the LLM output against.
And well, let me put it this way: deepseek-r1 won't be replacing anyone anytime soon. It generates a massive amount of texts, mostly nonsensical and almost always terribly, horribly wrong. But inexperienced devs or beginners, especially beginners, will be confused and will be led down the wrong path, potentially outsourcing rational thought to something that just sounds good, but actually isn't.
Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately the performance of future devs. As the probably last generation of old school software engineers, who were trained on coffee and tears of frustration, who had to really work code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by llms.
> And well, let me put it this way: deepseek-r1 won't be replacing anyone anytime soon. It generates a massive amount of texts, mostly nonsensical and almost always terribly, horribly wrong. But inexperienced devs or beginners, especially beginners, will be confused and will be led down the wrong path, potentially outsourcing rational thought to something that just sounds good, but actually isn't.
Are you considering the full "reasoning" it does when you're saying this? AFAIK, they're meant to be "rambling" like that, exploring all sorts of avenues and paths before reaching a final conclusive answer that is still "ramble-like". I think the purpose seems to be to layer something on top that can finalize the answer, rather than just taking whatever you get from that and use it as-is.
> Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately the performance of future devs. As the probably last generation of old school software engineers, who were trained on coffee and tears of frustration, who had to really work code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by llms.
I started coding just before Stack Overflow got popular, and remember the craze when it did get popular. Blogposts about how Stack Overflow will create lazy devs was all over the place, people saying it was the end of the real developer. Not arguing against you or anything, I just find it interesting how sentiments like these keeps repeating over time, just minor details that change.
Before pervasive GPS, it took me very little time to actually learn and internalize a route. Now it takes a lot longer to remember it when you're constantly guided. Same exact thing is happening with guided reasoning we get with LLMs
I have a different experience. It used to take me some time to plan a route and write down all the turns. Now getting from location A to B is a lot easier: take a look at the proposed route and make some corrections. Meanwhile I spend the time thinking about something else. So, GPS doesn't make me stupid or forgetful. It's just a tool which makes me more productive. The same is almost true for LLMs, except getting the right answer isn't always easy or possible. But overall, for coding small utilities they're very helpful. For reasoning models I still need to find the right tasks. Maybe more complex utilities. Or the one I can't get from 4o yet: a red-black tree with custom memory management and custom 'pointers' in data objects (small integers). While custom allocators are supported by std, the implementation still keeps native pointers, which locks it in memory.
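For what it's worth, a minimal sketch of what such "small-integer pointers" could look like, written in C to match the thread's earlier snippet (the names and layout are my own assumptions, not the commenter's target code): nodes live in a flat pool and reference each other by index, so the structure doesn't depend on native pointers and can be relocated or serialized freely.

    #include <stdint.h>

    typedef uint32_t node_ref;              /* index into the pool     */
    #define NIL ((node_ref)UINT32_MAX)      /* sentinel for "no node"  */

    enum color { RED, BLACK };

    typedef struct {
        int        key;
        node_ref   left, right, parent;     /* indices, not pointers   */
        enum color color;
    } rb_node;

    typedef struct {
        rb_node  *pool;       /* backing storage from a custom allocator */
        node_ref  capacity;
        node_ref  next_free;  /* trivial bump allocation for this sketch */
        node_ref  root;
    } rb_tree;

    /* Hand out a node from the pool; a real implementation would also
       keep a free list and perform the usual red-black insert fixups. */
    static node_ref rb_alloc(rb_tree *t, int key) {
        if (t->next_free >= t->capacity)
            return NIL;
        node_ref i = t->next_free++;
        t->pool[i] = (rb_node){ key, NIL, NIL, NIL, RED };
        return i;
    }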
> What is particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”. In conclusion, understanding and leveraging the respective strengths and weaknesses of different agents in learning is critical in the field of future hybrid intelligence.
Maybe I'm trying to read and understand it too quickly, but I don't see anything in the abstract that supports that strong conclusion.
> The results revealed that: (1) learners who received different learning support showed no difference in post-task intrinsic motivation; (2) there were significant differences in the frequency and sequences of the self-regulated learning processes among groups; (3) ChatGPT group outperformed in the essay score improvement but their knowledge gain and transfer were not significantly different. Our research found that in the absence of differences in motivation, learners with different supports still exhibited different self-regulated learning processes, ultimately leading to differentiated performance.
The ChatGPT group performed better on essay scores, they showed no deficit in knowledge gain or transfer, but they showed different self-regulated learning processes (not worse or better, just different?).
If anything, my own conclusion from the abstract would be that ChatGPT is helpful as a learning tool as it helped them improve essay scores without compromising knowledge learning. But again, I only read the abstract, maybe they go into more details in the paper that make the abstract make more sense.
I have been using LLMs for my own education since they came out and have watched my kid use it.
Some kids might pickup a calculator and then use it to see geometric growth, or look for interesting repeating patterns of numbers.
Another kid might just use it to get their homework done faster and then run outside and play.
The second kid isn't learning more via the use of the tool.
So the paper warns that the use of LLMs doesn't necessarily change what the student is interested in and how they are motivated. That we might need to put in checks for how the tool is being used into the tool to reduce the impact of scenario 2.
I don't really know what "metacognitive laziness" is even after they explain it in the paper, but I use LLMs to filter noise and help automate the drudgery of certain tasks, allowing me to use my energy and peak focus time on the more complicated tasks. Anecdotal, obviously. But I don't see how this hinders me in my ability to "self-regulate". It's just a tool, like a hammer.
From a learning perspective, it can also be a short cut to getting something explained in several different ways until the concept "clicks".
I drew a similar conclusion from the abstract as you. The only negative I could think out of that is with higher essay scores, one might expect higher knowledge gain, and that wasn’t present.
However, I agree that that doesn’t really seem to be a negative over other methods.
I have found ChatGPT is pretty good at explaining topics when the source documentation is poorly written or lacks examples. Obviously it does make mistakes so skepticism in the output is a good idea.
Yeah, the abstract could use a bit more work. The gist of it is that being in a closed-loop cycle with ChatGPT only helps with the task at hand, and not with engaging with the full learning process. Instead they say, "When using AI in learning, learners should focus on deepening their understanding of knowledge and actively engage in metacognitive processes such as evaluation, monitoring, and orientation, rather than blindly following ChatGPT's feedback solely to complete tasks efficiently."
This is not a concern when you are responsible for real results. If you aren’t responsible for real results you can pass off the good rhetoric of these models as an “answer”. But when you need results you realize most answers they give are just rhetoric. They are still extremely valuable, but they can only help you when you have done the work to get deep understanding of the problem, incentivized by actually solving it.
> Our research found that in the absence of differences in motivation, learners with different supports still exhibited different self-regulated learning processes, ultimately leading to differentiated performance.
That's the most convoluted conclusion I've ever seen.
> What is particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”.
Calculator laziness is long known. It doesn't cause meta- but specific- laziness.
So given the chance to go rote learning for years for an education which in most cases is simply a soon to be forgotten certification vs watching TikTok while letting ChatGPT do the lifting - this is all predictable, even without Behavioral Design, Hooked etc.
And the benefits usually rise with IQ level - nothing new here, that's the very definition of IQ.
Learning and academia is hard, and even harder for those with lower IQ scores.
A fool with a tool is still a fool and vice versa.
Motivation also seems at an all-time low. Why put in hours when a prompt can work wonders?
Reading a book is a badge of honor nowadays more than ever.
> the benefits rise with IQ level - nothing new here, that’s the very definition of IQ
This is not obvious to me, and certainly is not the "definition" of IQ. There are tools that become less useful the more intelligent you are, such as multiplication tables. IQ is defined by a set of standardized tests that attempt to quantify human intelligence, and has some correlations with social, educational and professional performance, but it's not clear why it would help with use of AI tools.
> So given the chance to go rote learning for years for an education which in most cases is simply a soon to be forgotten certification vs watching TikTok while letting ChatGPT do the lifting - this is all predictable, even without Behavioral Design, Hooked etc.
Would you argue that having books/written words also made people more lazy and be able to remember less? Because some people argued (at the time) that having written words would make humanity less intellectual as a whole, but I think consensus is that it led to the opposite.
In my recent programming exam (in an MSc in AI), I asked students to reflect on how generative AI has changed their coding. Almost all remarked that it's a great time-saver, but it makes them lazy and worse at coding.
And yes indeed, their ability to answer basic questions about coding on the same exam has drastically dropped versus last year.
Is the problem the use of AI in coding, or the use of AI in a curriculum designed without that assumption? Because if AI is an effort-saver, then a curriculum that isn't designed with its use in mind will just result in the students doing less work, in which case learning less is unsurprising, but that's not really an "AI makes you less knowledgeable" problem so much as an "insufficiently challenging curriculum for the actual context" problem.
My observation is that I learn more than ever using LLMs.
I tend to learn asking questions, I did this using Anki cards for years (What is this or that?) and find the answer on the back of the index card. Questions activate my thinking more than anything, and of course my attempt at answering the question in my own terms.
My motto is: Seek first to understand, then to be understood (Covey). And I do this in engaging with people or a topic—-by asking questions.
Now I do this with LLMs. I have been exploring ideas I would never have explored had there not been LLMs, because I would not have had the time to research material for learning, read it, and create material in a Q&A session for me.
I even use LLMs to convert an article into Anki cards using Obsidian, Python, LLMs, and the Anki app.
I use functions in the OpenAI API and a template that forces the LLM to generate questions and answers from a text in a format that can be synced into the Anki app. Very straightforward workflow.
Yeah I've found the same. I might have some surface understanding of some topic and I like just asking "am I right in thinking this and this about this?" Or "Tell me why I'm wrong about this".
I don’t see how the “metacognitive laziness” (a term used by the abstract, but not defined) follows from what they describe in the abstract as the outcomes they observed. They specifically called out no difference in post-task intrinsic motivation; doesn’t that imply that the ChatGPT users were no lazier after using ChatGPT than they were before?
I’m also a skeptic of students using and relying on ChatGPT, but I’m cautious about using this abstract to come to any conclusions without seeing the full paper especially given that they’re apparently using “metacognitive laziness” in a specific technical way we don’t know about if we haven’t read the paper.
How's this any different than someone 5+ years ago blindly going by whatever a Google result said about anything? I've run into conflicting answers to things off Google's first page of results, some things aren't 100% certain and require more research.
I'm not surprised if this makes some people lazier, since you don't need to do the legwork of reading, but how many people read more than the headline before they share an article anyway?
> How's this any different than someone 5+ years ago blindly going by whatever a Google result said about anything
You can interrogate it at least. "Are you sure that's the correct answer? Re-think from the beginning without any assumptions" and you'll get a checklist you can mentally/practically go through yourself to validate.
True, so I guess what needs to happen is people using AI need to be informed on how to use it more accurately so they're actually learning source material and not just taking garbage / cheating on coursework.
A friend and myself were talking not too long ago that people these days don't practice critical thinking. It might be worthwhile for not just schools but parents everywhere to teach their kids to think more critically, ask the right questions when presented with new information.
Inevitably the advancement of knowledgeable information generation will have the same mental effect as having a contact list on your phone. When I was a kid I knew at least 5 people's phone numbers, maybe more. Even now I can recall 2 of them. How many can you recall from your actual contact list?
Exactly this.
While I might scratch the surface of some topics, it helps me cast a wider net of cognitive exploration in the free time I have. This in turn leads me into deeper rabbit holes for things that pique my interest, leading to faster iteration of the knowledge tree, while also providing me with a way to estimate my understanding of the topic.
This technology is arguably as ubiquitous as a calculator. So long as I understand that generative AI is a tool and not a solution, is it bad to treat it a bit like a calculator? Does this metacognitive laziness apply to those who depend on calculators?
I understand it is a bit apples to oranges, but I'm curious about people's take.
I am definitely lazier today in regards to doing math in my head compared to when I was young.
I think a comparison with calculators is possible, but the degree to which calculators are capable of assisting us is so incomparably smaller that the comparison would be meaningless.
Smart phones changed society a lot more than calculators did and now AI is starting to do the same, albeit in a more subtle manner.
Treating AI like it's just a calculator seems naïve/optimistic. We're still reeling from the smartphone revolution and have not solved many of the issues it brought with its arrival.
I have a feeling the world has become a bit cynical and less motivated to debate how to approach these major technological changes. There have been too many of them in too short a time, and now everyone has a whatever attitude towards the problems these advancements introduce.
That just demonstrates the difference between idiots and intelligent people. I use AI and ChatGPT to learn more efficiently about a zillion topics I am interested in.
Even if the computer is doing all the thinking, it's still a tool. Do you know what to ask it? Can you spot a mistake when it messes up (or you messed up the input)? Can you simplify the problem and figure out what the important parts of the problem are? Do you even know to do any of that?
Sure, thinking machines will sometimes be autonomous and not need you to touch them. But when that's the case, your job won't be to just nod along to everything the computer says, you won't have a job anymore and you will need to find a new job (probably one where you need to prompt and interpret what the AI is doing).
And yes, there will be jobs where you just act as an actuator for the thinking machine. Ask an Amazon warehouse worker how great a job that is :/
Can you point me to the generation that had ready access to AI on their hands, answering all their questions?
"People have been complaining about this for thousands of years" is a potent counterargument to a lot of things, but it can't be applied to things that really didn't exist even a decade ago.
Moreover, the thing that people miss about "people have been complaining about this for thousands of years" is that the complaints have often been valid, too. Cultures have fallen. Civilizations have collapsed. Empires have disintegrated. The complaints were not all wrong!
And that's on a civilization-scale. On a more mundane day-to-day scale, people have been individually failing for precisely the same reasons people were complaining about for a long time. There have been lazy people who have done poorly or died because of it. There have been people who refused to learn who have done poorly or died because of it.
This really isn't an all-purpose "just shrug about it and move on, everything's been fine before and it'll be fine again". It hasn't always been fine before, at any scale, and we don't know what impact unknown things will have.
To give a historical example... nay, a class of historical examples... there are several instances of a new drug being introduced to a society, and it ripping through that society that had no defenses against it. Even when the society survived it, it did so at great individual costs, and "eh, we've had drugs before" would not have been a good heuristic to understand the results with. I do not know that AIs just answering everything is similar, but at the moment I certainly can't prove it isn't either.
I mean, sometimes it's true, even in the past. I could very clearly see amongst my generation (older gen Z) that there were plenty of people, literally at university, who were barely willing or able to learn. Comparing that to the generation of my much older half-siblings (gen X, older millennials): they don't even seem to grasp the concept of not being fully engaged with your university degree.
Most people my age will tell you that they stopped reading as teenagers because of the effect of smartphones. I was a voracious reader and only relearnt to read last year, ten years after I got my first smartphone as an older teenager. These things are impactful and have affected a lot of people's potential. They have also made our generation very prone to mental health issues - something that is incredibly palpable if you are within gen Z social circles like I am. It's disastrous and cannot be overstated. I can be very sure I would be smarter and happier if technology had stagnated at the level it was at when I was a younger child/teen. The old internet and personal computers, for example, only helped me explore my curiosity. Social media and smartphones have only destroyed it. There are qualitative differences between some technological advancements.
Not to mention the fact that gen alpha are shown to have terrible computer literacy because of the ease of use, discouragement of customisation, and corporate monopoly over smartphones. This bucks the trend, from gen X to gen Z, of generations becoming more and more computer native. Clearly, upward trends in learning due to advancements in technology can be reversed. They do not always go up.
If kids do not learn independent reasoning because of reliance on LLMs, yes, that will make people stupider. Not all technology improves things. I watched a really great video recently where someone explained the change in the nature of presidential debates through the ages. In Victorian times, they consisted of hours-long oratory on each side, with listeners following attentively. In the 20th century the speeches gradually became a little shorter and more questions were added to break things up. In the most recent debates, every question has come with a less-than-a-minute answer, simpler vocabulary, few hard facts or statistics, etc. These changes map very well to changes in the depth at which people were able to think due to the primary information source they were using. There is a good reason why reading is still seen as the most effective form of deep learning despite technological advancement. Because it is.
I wonder how LLMs will learn anything new when no one does original research and everyone just asks the LLM. Will LLMs just feed back on each other, effectively hallucinating false "learning"?
Maybe we'll end up as a society of a few elites who still know how to research, think, and/or write with LLMs digesting that and regurgitating it for the masses.
I mean, this is the same exact thing that happened when calculators were invented. The number of people who can do arithmetic in their heads drastically dropped, because why waste your time? Ditto for when map apps came out. No more need to memorize a bunch of locations, because you can just use maps to take you there.
It's funny: calculators were incredibly politicized when I was growing up (the TI-84 generation, so kids were getting caught programming functions to solve exam questions), but GPS was just taken as a given.
I feel this, because it’s like I don’t need to know about something, I just need to know how to know about something. Like, the initial contact with a mystery subject is overcome by knowing how to describe the mystery in a way that AI understands what I don’t understand, and seeks to fill in the understanding.
An example, I have no clue about React. I do know why I don’t like to use React and why I have avoided it over the years. I describe to some ML tool the difficulties I’ve had learning React and using it productively .. and voila, it plots a chart through the knowledge that, kinda, makes me want to learn React and use it.
It's like, the human ability to form an ontology in the face of mystery, even if it is inaccurate or faulty, allows the AI to take over and plot an ontological route through the mystery into understanding.
Another thing I realized lately, as ML has taken over my critical faculties, is that it's really only useful for things that are already known by others. I can't ask ML to give me some new, groundbreaking idea about something - everything it suggests has already been thought, somewhere, by a real human - and thus it's not new or groundbreaking. It's just contextually - in my own local ontological universe - filling in a mystery gap.
Pretty fun times we’re having, but I do fear for the generations that will know and understand no other way than to have ML explain things for them. I don’t think we have the ethics tools, as cultures and societies, to prevent this from becoming a catastrophe of glib, knowledge-less folks, collapsing all knowledge into a raging dumpster fire of collective reactivity, but I hope someone is training a model, somewhere, to rescue us from this, somehow ..
> But when they came to writing, Theuth said: “O King, here is something that, once learned, will make the Egyptians wiser and will improve their memory; I have discovered a potion for memory and for wisdom.” Thamus, however, replied: “O most expert Theuth, one man can give birth to the elements of an art, but only another can judge how they can benefit or harm those who will use them. And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.
That's an interesting and very fitting quote. It's basically saying that since we can now write down information, people will get lazier about remembering things - the exact same claim as the submitted article.
I think there is some validity to the nature of generational knowledge loss through differing information systems. At one end of the scale, you’ve got 80,000 year old stories, still being told - at the other end of the scale, you’ve got App Of The Day™ style social media, and kids who can’t write an email, use a dictionary, or read a book.
This is no hyperbole - humans have to constantly fight the degeneracy of our knowledge systems, which is to say that knowledge has to be generated and communicated - it can’t just “exist” and be useful, it has to be applied to be useful. Technology of knowledge which doesn’t get applied, does not persist, or if it does (COBOL), what once was common becomes arcane.
So, if there is hope, it lies with the proles: the way everyday people use ML is probably the key to all of this. It's one thing to know how to prompt an LLM to give you a buildable source tree; it's another thing entirely to use it somehow to figure out what to make out of the leftover ingredients in the fridge.
Those recipes and indeed the applications of the ingredients, are based on human input and mores.
So the question for me, still really unanswered, is: How long will it take until those fridge-ingredient recipes become bland, tasteless and grey?
I think this underlines the imperative that AI and ML must never become so pervasive that we don't, also, write things down for ourselves. Oh, and read a lot, of course.
It seems, we need to stop throwing books away. Oh, and encourage kids to cook, and create their own recipes... hopefully they’ll have time and resources for that kind of lifestyle…
No doubt, this curse (which is also missing generalization, i.e. evolution/generalization/specialization) is all for the sake of self-awareness, or at least, awareness, of some particular thing.
As long as humans remain aware that they are engaging with an AI/ML, we might still have a chance. Computers definitely need to be identifiable as such.
As technology gets more impressive, we internalize less knowledge ourselves.
There is a "Plato" story about how he laments the invention of writing, because now people don't need to memorize speeches and stuff.
I think there is a level of balance. Writing gave us enough efficiencies that the learned laziness made us overall more effective.
The internet in 2011 made us a bit less effective. I am not gonna lie; I spent a lot more time being able to get resources, whereas I would have to struggle on my own to solve a problem. You internalize one more than the other, but is it worth the additional time every time?
I worry about current students learning through LLMs just like I would worry about a student in 2012 graduating in physics when such a student had constant access to wolfram alpha.
This stands to reason. If you need the answer to a question, and you can either get it directly, or spend time researching the answer, you're going to learn much more with the latter approach than the former. You may be disciplined enough to do more research if the answer is directly presented to you, but most people will not do that, and most companies are not interested in that, they want quick 'efficient', 'competitive' solutions. They aren't considering the long term downside to this.
We have accounts from the ancient Greeks of the old-school's attitude towards writing. In the deep past, they maintained an oral tradition, and scholars were expected to memorize everything. They saw writing/reading as a crutch that was ruining the youth's memory.
We stand now at the edge of a new epoch, reading now being replaced by AI retrieval. There is concern that AI is a crutch, the youth will be weakened.
My opinion: valid concern. No way to know how it turns out. No indication yet that use of AI is harming business outcomes. The meta argument "AGI will cause massive social change" is probably true.
SOCRATES: Do you know how you can speak or act about rhetoric in a manner which will be acceptable to God? PHAEDRUS: No, indeed. Do you? SOCRATES: I have heard a tradition of the ancients, whether true or not they only know; although if we had found the truth ourselves, do you think that we should care much about the opinions of men? PHAEDRUS: Your question needs no answer; but I wish that you would tell me what you say that you have heard. SOCRATES: At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
“The ratio of literacy to illiteracy is constant, but nowadays the illiterates can read and write.” Alberto Moravia, London Observer, 14 Oct. 1979
It’s a pretty interesting point.
If a large fraction of the population can’t even hold five complex ideas in their head simultaneously, without confusing them after a few seconds, are they literate in the sense of e.g. reading Plato?
I hope they're literate enough to understand that we're only reading about that alleged exchange because Plato wrote it down.
Median literacy in the US is famously somewhere around the 6th grade level, so it's unlikely most of the population is much troubled by the thoughts of Plato.
I’d be really curious to see metrics on literacy broken down by other criteria. What’s the median literacy of people who are “like me”?
I looked up those stats. First of all, it is literacy in 'English'. A good portion of the country does not speak English at home. Second, it was assessed in 2003, and a disproportionate number of those with 'below basic' prose literacy were over age 65 at the time. The previous assessment was done in 1992, and there was a marked increase in quantitative literacy between the two.
* https://nces.ed.gov/naal/kf_demographics.asp
> can’t even hold five complex ideas in their head
As an aside, my observation of beginning programmers is that even two (independent) things happening at the same time is a serious cognitive load.
Amusingly enough, I remember having the same trouble on the data structures final in college, so “people in glass houses”.
What makes an "idea" atomic/discrete/cardinal? What makes an idea "complex" vs simple or merely true? Over what finite duration of time does it count as "simultaneously" being held?
Whatever you want them to be?
I don’t care about enforcing any specific interpretation on passing readers…
they will appear to be omniscient and will generally know nothing
Sounds like a rather accurate description of an LLM.
>> The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
That's perfectly true and the internet has made it even worse.
Just keep in mind that Plato and (especially) Socrates made a living by going against commonly held wisdom at the time, so this probably wasn't an especially widely held belief in ancient Greece.
Perhaps we're going technologically backwards.
Oral tradition compared to writing is clearly less accurate. Speakers can easily misremember details.
Going from writing/documentation/primary sources to AI seems to me like going back to oral tradition, where we must trust the "speaker" - in this case the AI - as to whether they're truthful with their interpretation of their sources.
Walter J. Ong's Orality and Literacy is an illuminating read.
One benefit of orality is that the speaker can defend or clarify their words, whereas once you've written something, your words are liable to be misinterpreted by readers without the benefit of your rebuttal.
Consider too that courts (in the US at least) prefer oral arguments over written ones; perhaps we consider it more difficult to lie in person than in writing. PhD defenses are another holdover of this tradition: you demonstrate your competence in person rather than receive your credentials merely from your written materials.
As for AI, I disagree that it's more like oral tradition. AI is not a speaker; it has no stake in defending its claims. I would call it hyperliterate - an emulation of everything that has been written.
I can definitely attempt to clarify something I've already said in writing! But yes, interactivity is vital for healthy communication.
> Oral tradition compared to writing is clearly less accurate.
I used to think this. Then I moved to New Mexico 6 years ago and had to confront the reality that the historical cultures and civilizations of this area (human habitation goes back at least 20k years) never had writing, and so all history was oral.
It seemed obvious to me that writing was superior, but I reflected on the way in which even written news stories or movie reviews or travelogues are not completely accurate and sometimes actually wrong. The idea that the existence of a written historical source somehow implies (better) fidelity has become less and less convincing.
On the other hand, even if the oral histories have degenerated into actual fictions, there's that old line about "the best way to tell the truth is with fiction", and I now feel much more favorably inclined towards oral histories as perhaps at least as good as, if not better than, their written cousins.
Am I the only one to expect an S curve regarding progress and not an eternal exponential?
People moving away from prideful principle to leverage new tech in the past doesn't guarantee that the same idea in the current context will pan out.
But as you say.. we'll see.
> Am I the only one to expect an S curve regarding progress and not an eternal exponential?
To LLMs specifically as they're now? Sure.
To LLMs in general, or generative AI in general? Eventually, in some distant future, yes.
Sure, progress can't ride the exponent forever - the observable universe is finite, and as far as we can tell right now, we're fundamentally limited by the size of our light cone. And while in any narrow enough field progress too follows an S-curve, new discoveries spin off new avenues with their own S-curves. If you zoom out a little, those S-curves neatly add up to an exponential function.
So no, for the time being, I don't expect LLMs or generative AIs to slow down - there's plenty of tangential improvements that people are barely beginning to explore. There's more than enough to sustain exponential advancement for some time.
I think the parent’s main point is that even if LLMs sustain exponential advancement, that doesn’t guarantee that humanity’s advancement will mimic technology’s growth curve.
In other words, it’s possible to have rapid technological advancement without significant improvement/benefit to society.
> In other words, it’s possible to have rapid technological advancement without significant improvement/benefit to society.
This is certainly true in many ways already.
On the other hand, it's also complicated, because society/culture seems to be downstream of technology; we might not be able to advance humanity in lock step or ahead of technology, simply because advancing humanity is a consequence of advancing technology.
If the constraint is computation in a light cone, the theoretical bound is time cubed, not exponential - with a major decrease in scaling as we hit the bounds of our galaxy.
Intergalactic travel is, of course, rather slow.
Oh, you mean an S curve on the progress of the AI?
Most of the discussion on the thread is about LLMs as they are right now. There's only one odd answer that throws an "AGI" around as if those things could think.
Anyway, IMO, it's all way overblown. People will learn to second-guess the LLMs as soon as they are hit by a couple of bad answers.
hmm yeah sorry, I meant the benefits of humans using current AI.
By that I mean: leveraging writing was a benefit, letting humans store data and think over the longer term using a passive technique (stones, tablets, papyrus)... but an active tool might not have a positive effect on usage and brains.
If you give me shoes, I might run further to find food; if you give me a car, I mostly stop running, and there might be no better fruit 100 miles away than what I had on my hill. (Weak metaphor.)
Yeah, I agree. Those things have a much smaller benefit over hypertext and search engines than hypertext and search engines had over libraries.
But I don't know if it fits an S-curve or if they are just below the trend.
Even if progress stops:
1. Current reasoning models can do a -lot- more than skeptics give them credit for. Typical human performance even among people who do something for employment is not always that high.
2. In areas where AI has mediocre performance, it may not appear that way to a novice. It often looks more like expert level performance, which robs novices of the desire to practice associated skills.
Lest you think I contradict myself: I can get good output for many tasks from GPT4 because I know what to ask for and I know what good output looks like. But someone who thinks the first, poorly prompted dreck is great will never develop the critical skills to do this.
This is a good point, forums are full of junior developers bemoaning that LLMs are inhumanly good at writing code -- not that they will be, but that they are. I've yet to see even the best produce something that makes me worry I might lose my job today, they're still very mediocre without a lot of handholding. But for someone who's still learning and thinks writing a loop is a challenge, they seem magical and unstoppable already.
Information technology has grown exponentially since the first life form created a self-sustaining, growing loop.
You can see evolution speeding up rapidly: the jumbled information inherent in chemical metabolisms came to be centralized in DNA, and then DNA evolved to componentize body plans.
RATE: over billions of years.
Nerves, nervous systems, brains, all exponentially drove individual information capabilities forward.
RATE: over hundreds of millions, tens of millions, millions, 100s of thousands.
Then human brains enabled information to be externalized. Language allowed whole cultures to "think", and writing allowed cultures to share, and made their ability to remember explode.
RATE: over tens of thousands, thousands.
Then we developed writing. A massive improvement in recording and sharing of information. Progress sped up again.
RATE: over hundreds of years.
We learned to understand information itself, as math. We learned to print. We learned how to understand and use nature so much more effectively to progress, i.e. science, and science informed engineering.
RATE: over decades
Then the processing of information got externalized, in transistors, computers, the Internet, the web.
RATE: every few years
At every point, useful information accumulated and spread faster. And enabled both general technology and information technology to progress faster.
Now we have primitive AI.
We are in the process of finally externalizing the processing of all information. Getting to this point was easier than expected, even for people who were very knowledgeable and positive about the field.
RATE: every year, every few months
We are rapidly approaching complete externalization of information processing. Into machines that can understand the purpose of their every line of code, every transistor, and the manufacturing and resource extraction processes supporting all that.
And can redesign themselves, across all those levels.
RATE: It will take logistical time for machine-centric design to take over from humans. For the economy to adapt. For the need for humans as intermediaries and cheap physical labor to fade. But progress will accelerate many more times this century. From years, to time scales much smaller.
Because today we are seeing the first sparks of a Cambrian explosion of self-designed self-scalable intelligence.
Will it eventually hit the top of an "S" curve? Will machines get so smart that getting smarter no longer helps them survive better, use our solar system's or the stars' resources, create new materials, or advance and leverage science any further?
Maybe? But if so, that would be an unprecedented end to life's run. To the acceleration of the information loop, from some self-reinforcing chemical metabolism, to the compounding progress of completely self-designed life, far smarter than us.
But back to today's forecast: no, no the current advances in AI we are seeing are not going to slow down, they are going to speed up, and continue accelerating in timescales we can watch.
First because humans have insatiable needs and desires, and every advance will raise the bar of our needs, and provide more money for more advancement. Then second, because their general capability advances will also accelerate their own advances. Just like every other information breakthrough that has happened before.
Useful information is ultimately the currency of life. Selfish genes were just one embodiment of that. Their ability to contribute new innovations, on time scales that matter, has already been rendered obsolete.
> Grown exponentially since the first life form
Not really. The total computing power available to humanity per person has likely gone down as we replaced “self driving” horses with cars.
People created those curves by fitting definitions to the curve rather than to data.
You can't disprove global warming by pointing out an extra cool evening.
But I don't understand your point even as stated. Cars took over from horses as technology provided transport with greater efficiencies and higher capabilities than "horse technology".
Subsequently transport technology continued improving. And continues, into new forms and scales.
How do you see the alternative, where somehow horses were ... bred? ... to keep up?
Cars do not strictly have higher capabilities than horses. GP was pointing out that horses can think. On a particularly well-trained horse, you could fall asleep on it and wake up back at your house. You can find viral videos of Amish people still doing this today.
Ah, good point. Then the global warming point applies, but in a much less trivial way.
There is turbulence in any big directed change. Better overall new tech often creates inconveniences, performs less well, than some of the tech it replaces. Sometimes only initially, but sometimes for longer periods of time.
A net gain, but we all remember simpler things whose reliability and convenience we miss.
And some old tech retains lasting benefits in niche areas. Old-school, inefficient, cheap light bulbs are, ironically, not so inefficient when used where their heat is useful.
And horses fit that pattern. They are still not obsolete in many ways, tied to their intelligence. As companions. As still working and inspiring creatures.
--
I suspect the history of evolution is filled with creatures that got wiped out by new waves that were more generally advanced, but less advanced in a few ways.
And we have a small percentage of remarkable ancient creatures still living today, seemingly little changed.
The issue is more than just a local cold snap. When the fundamental graph you’re basing a theory on is wrong it’s worth rejecting the theory.
The total computing power of life on earth has in fact fallen over the last 1,000 years. Ants alone represent something like 50x the computing power of all humans and all computers on the planet, and we've reduced the number of insects on earth more than we've added humans or computing power.
The same is true through a great number of much longer events. Ice ages and even larger-scale events aren't just an afternoon, even across geological timescales.
You could just as well talk about the computing power of every microbe.
Or all the quarks that make up the Earth.
Ants don’t even appear on either graph.
But the flexibility, coordination & leverage of information used to increase its flexibility, coordination & leverage further is what I am talking about.
I.e. intelligence.
A trillion trillion trillion transistors wouldn’t mean anything, acting individually.
But when that many work together with one purpose without redundancy we can’t imagine the problems it will see & solve.
Quarks, microbes, and your ants are not progressing like that. What was their most recent advance? How long did that take? Is it a compounding advance?
Growing intelligence doesn’t mean lesser intelligences don’t still exist.
We happen to compete based on intelligence, so the impacts of smarter machines have a particularly low latency for us.
You could of course exclude biological computation and say computational power started with mechanical computers very recently, but that’s not what they are trying to argue. As soon as they add biological life as data on the chart then the actual numbers become relevant to their argument.
IE: As soon as you pick definition X, you need to stick with that definition.
> Cars do not strictly have higher capabilities than horses.
Another way to see it: A horse (or any animal) is a goddamn nanobot-swarm with a functioning hivemind that is literally beyond human science in many important ways. Unlike a horse:
* Your car (nor even half of them) does not possess a manufacturing bay capable of creating additional cars.
* Your car does not have a robust self-repair system.
* Your car does not detect strain in its structure and then rebuild itself stronger.
* Your car does not synthesize its fuel from a wide variety of potential local resources.
* Your car does not defend itself by hacking and counter-hacking attacks from other nanobots, or even just from rust.
* Your car does not manufacture and deploy its own replacement lubricants, cooling fluid, or ground-surface grip/padding material.
* Your car is not designed to survive intermittent immersion in water.
In both a feature-list and raw-computation sense, we've discarded huge amounts in order to get a much much smaller set that we care more about.
The car isn't the intelligence here.
Not sure why you are implying cars outdid horses' intelligence.
Cars are a product of our minds. We have all those self-repair abilities, and we have more intelligence than a horse.
But horses' intelligence didn't let them keep up with what the changing environment - changed by us - needed. So there are fewer horses.
The rate that horse or human bodies are improving, or our minds, despite human knowledge still advancing, is very slow compared to advances in machines designed specifically for advancement. Initially to accelerate our own advancement.
Now the tech, that was designed to accelerate tech, is taking on a life of its own.
That is how foundational advances happen. They don’t start ahead, but they move ahead because of new advantages.
It is often initially much simpler. But in ways that unlock greater potential.
Machines are certainly much simpler than us. But, much easier to improve and scale.
You recognize the new thing even before it dominates, because in a tiny fraction of the time the old system got to where it is, the new system is already moving much much faster.
If general AI appears before 2047, it will have taken less than 100 years to grow from the first transistor.
People will see it who are older than the first transistor!
Nothing on the planet has ever come close to that speed of progress. From nothing to front runner. By many many many orders of magnitude.
Retric referred to "total computing power".
A horse has trillions of cells, and even one of those cells is doing more biochemical day-to-day computation than your car's automatic transmission does electronically or mechanically.
It feels like maybe you didn’t read what I said, or understand it.
A car was never an example of its own intelligence.
It was an example of our natural human intelligence’s & our growing cultural knowledge’s impact on horses.
How much have horses progressed? Math yet?
More computation doesn’t necessarily mean more intelligence. Horses are smart creatures, I ride one. He is my friend.
But they are not us, not our joint culture, and not any competition for today’s machines in terms of adapting and growing in capabilities.
The fact that machines are far simpler than us or a horse, but advancing faster is much like we were weaker but used our minds better than other apes.
Simpler in the right way is smarter. As many major advances in mathematics have demonstrated.
There's stuff horses can do better than even the best of our current technology. Where that stuff matters, horses are still employed.
Where cars displaced horses, it's because they're strictly better in a larger sense. On the city streets, maybe a car is louder than a horse, but it's also cheaper to make, easier to feed, and doesn't shit all over the place (which was a real problem with scaling up horse use in the 19th century!). Sure, cars shit into the air, but it's a more manageable problem (even if mostly by ignoring it - gaseous emissions can be ignored, literal horse shit on the streets can't).
And then, car as a platform expands to cover use cases horses never could. They can be made faster, safer, bigger, adapted to all kinds of terrain. The heart of the car - its engine - can be routed to power tool attachments, giving you everything from garbage trucks to earth movers, cranes, diggers, to tanks; it can be also taken outside and used as a generator to power equipment or buildings. That same engine can be put in a different frame to give you flying machines, or scaled up to give you ships that can carry people, cars, tanks, planes or containers by the thousands, across oceans. Or scaled up even more to create power plants supplying electricity to millions of people.
And then, building all that up was intertwined with larger developments in physics, material engineering, and chemistry - the latter of which effectively transformed how our daily lives look in the span of 50 years. Look at everything around you. All the colors. All the containers. All the stuff you use to keep your house, clothes, and yourself clean. All that is a product of the chemical industry, and was invented pretty much within the last 100 years, with no direct equivalent existing ever before.
This is what it means for evolution to accelerate when it moves from genes to information. So sure, horses are still better in some ways than the stuff we make. The best measure of that advantage is the size of the horse population, and how it has changed over the years.
human existence doesn't really scale exponentially, that's my take on this
Our best bets are the following I think:
First, and above all, Ethics. Ethics of humans, matters more than anything. We need to straighten out the ethics of the technology industry. That sounds formidable, but business models based on extraction, or externalizing damage, are creating a species of "corporate life forms" and ethically challenged oligarchs that are already driving the first wave of damage coming out of AI advancement.
If we don't straighten ourselves out, it will get much worse.
Superintelligence isn't going to be unethical in the end, because ethics are just the rational (our biggest weakness) big-picture long-term (we get weak there too) positive sum games individuals create that benefit all individuals abilities to survive, and thrive. With the benefits for all compounding. In economic/math terms, it is what is called a "great attractor". The only and inevitable stable outcome. The only question is, does that start with us in partnership, or do they establish that sanity after our dysfunctions have caused us all a lot of wasted time.
The second is that those of us who want to, need to be able to keep integrating technology into our lives. I mean that literally. From mobile, right into our biology. At some point, direct connections to fully owned, fully private, fully personalizable, full tech mental augmentation. Free from surveillance, gatekeepers, and coercion.
That is a very narrow but very real path from human, to exponential humans, to post-human. Perhaps preserving conscious continuity.
If after a couple decades of being a hybrid, I realize that all my biologically stored memories are redundant, and that 99.99% of my processing is now running on photonics (or whatever) anyway, I am likely to have no more problem jettisoning the brain that originally gave me consciousness, as I do every day, jettisoning the atoms and chemistry that constantly flow through me, only a temporarily part of my brain.
The final word of hope, is that every generation gets replaced by the next. For some of us, viewing obsolescence by AI as no more traumatic, than getting replaced by a new generation of uncouth youth, helps. And that this transition is far more momentous and interesting, can provide some solace, or even joy.
If we must be mortal, as all before us, what a special moment to be! To see!
On the ethics point as a "best bet", consider also the importance of a sense of humor that recognizes irony. As I wrote in 2010: https://pdfernhout.net/recognizing-irony-is-a-key-to-transce... "There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ... The big problem is that all these new war machines [and competitive companies] and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military [and economic] uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."
Yes, absolutely prescient! Quite the irony.
Just as our abilities to solve problems accelerated without bounds, it will be our paranoia that screws things up.
Even before machines have any incentive or desire to turn on us, the fearful & greedy will turn them on all of us and each other.
I hope things don’t go that way. But it’s the default, and I think the greatest risk.
> No way to know how it turns out.
But one can speculate.
> No indication yet that use of AI is harming business outcomes.
Length scales to measure harm when it comes to policy/technology will typically require more time than we've had since LLMs really became prominent.
> The meta argument "AGI will cause massive social change" is probably true.
Agreed.
Basically, in the absence of knowing how something will play out, it is prudent to talk through the expected outcomes and their likelihoods of happening. From there, we can start to build out a risk-adjusted return model to the societal impacts of LLM/AI integration if it continues down the current trajectory.
IMO, I don't see the ROI for society of widespread LLM adoption unless we see serious policy shifts on how they are used and how young people are taught to learn. To the downside, we really run the risk of the next generation having fundamental learning deficiencies/gaps relative to their prior gen. A close anecdote might be how 80s/90s kids are better with troubleshooting technology than the generations that came both before and after them.
Right, there are already some very encouraging trends (like this study out of Nigeria). Clearly AI can lead to laziness, but it can also increase our intelligence. So it's not simply "better" or "worse"; it's a new thing that we have to navigate.
https://blogs.worldbank.org/en/education/From-chalkboards-to...
Gen X here. There are a couple of things I've been on both sides of.
Card catalogs in the library: it was really important to focus on what was being searched. Then there was the familiarity with a particular library and what it might or might not have, and looking around at adjacent books that might spawn further ideas. The indexing now is much more thorough and way better, but I see younger peers get less out of the new search than they could.
GPS vs reading a map. I keep my GPS oriented north which gives me a good sense of which way the streets are headed at any one time, and a general sense of where I am in the city. A lot of people just drive where they are told to go. Firefighters (and pizza delivery) still learn all the streets in their districts the old school way.
Some crutches are real. I've yet to meet someone who opted for a calculator instead of putting in the work with math who ended up better at math. It might be great for getting through math, or getting math done, but it isn't better for learning math (except to plow through math already learned to get to the new stuff).
So all three of these share the common element of "there is a better way now", but at the same time learning it the old way better prepares someone for when things don't go perfectly. Good math skills can tell you if you typoed on the calculator. Map knowledge will help with changes to traffic or street availability.
We see students right now using AI to avoid writing at all. It's great that they're learning a tool which can help with their deficient writing. At the same time, their writing will remain deficient. Can they tell the tone of the AI-generated email they're sending their boss? Can they fix it?
> We stand now at the edge of a new epoch, reading now being replaced by AI retrieval.
Utilizing a lively oral tradition at the same time as a written one is superior to relying on either alone. And it's the same with our current AI tools. Using them as a substitute for developing oral/written skills is a major step back, especially right now when those AI tools aren't very refined.
Nearly every college student I've talked to in the past year is using chatgpt as a substitute for oral/written work where possible. And worse, as a substitute for oral/written skills that they have still not developed.
Latency: maybe a year or two for the first batch of college grads who chatgpt'd their way through most of their classes, another four for med school/law school. It's going to be a slow-motion version of that video-game period in the 80s after pitfall when the market was flooded with cheap crap. Except that instead of unlicensed Atari cartridges, it's professionals.
Coming from an era when calculators were banned, I am shocked that education is all-in with ChatGPT.
I used to use Stack Overflow for everything a few years ago, now I know that very few of those top-rated answers are any good, so I have to refer to the codebase to work things out properly. It took a while for me to work that out.
It is the same with vector images, I always have to make my own.
ChatGPT is in this same world of shoddiness, probably because it was fed on Stack Overflow derived works.
There are upsides to this, if a generation have their heads confused with ChatGPT, then us old-timers with cognitive abilities get to keep our jobs since there are no young people learning how to do things properly.
There is an interesting contrast in the history of the Rabbinic Jewish oral tradition. In that academic environment, the act of memorizing the greatest amount of content was valorized. The super-memorizers, however, were a rung below those who could apply those memorized aphorisms to a different context and generate a new interpretation or ruling. The latter relied on the former to have accurately memorized all the precedents, but got most of the credit, despite having a lower capacity for memorization.
That's probably why the act of shifting from an oral to a written culture was deeply controversial and disruptive, but also somewhat natural. Though the texts we have are written, so they probably make the transition seem smoother than it really was. I don't know enough to speak to that.
> In the deep past, they maintained an oral tradition, and scholars were expected to memorize everything. They saw writing/reading as a crutch that was ruining the youth's memory.
Could you share a source for this? The research paper I found has a different hypothesis; it links the slow transition to writing to trust, not an "old-school's attitude towards writing". Specifically the idea that the institutional trust relationships one formed with students, for example, would ensure the integrity of one's work. It then concludes that "the final transition to written communications was completed only after the creation of institutional forms of ensuring trust in written communications, in the form of archives and libraries".
So essentially, anyone could write something and call it Plato's work. Or take a written copy of Plato's work and claim they wrote it. Oral tradition ensured only your students knew your work; and you trusted them to not misattribute it. Once libraries and archives came to exist though, they could act as a trustworthy source of truth where one could confirm whether some work was actually Plato's or not, and so scholars got more comfortable writing.
[1] https://www.researchgate.net/publication/331255474_The_Attit...
I don't think these hypotheses are in tension. The notion that some scholars, like Plato, distrusted writing based on epistemological theories--the nature of truth and knowing--is well attested. The paper you linked is a sociological description that seeks to better explain the evolution of the institutionalization of writing. Why people behave a certain way, and why they think they behave that way (i.e. their rationalizations), are only loosely related, and often at complete odds.
> We have accounts from the ancient Greeks of the old-school's attitude towards writing. In the deep past, they maintained an oral tradition, and scholars were expected to memorize everything. They saw writing/reading as a crutch that was ruining the youth's memory.
Not exactly.
We have accounts from figures who became famous by going against popular opinion, who aired those thoughts. It probably was not the mainstream belief, in that place, at that time. Don't try and judge Ancient Greece by Socrates or Plato - they were celebrities of the controversial.
It is much more recent than the Greeks. McLuhan, for example, had some good points* about how writing/reading is different (and indeed in some ways worse?) than oral tradition, and how it influences even our social interactions and mindset. Film is different yet again (partially has to do with its linearity IIRC).
So it’s not like “kids these days”, no. To be honest, I don’t know how generative AI tools, which arguably take away most of the “create” and “learn” parts, are relevant to the question of differences between different mediums and how those mediums influence how we create and learn. (There are ML-based tools that can empower creativity, but they don’t tend to be advertised as “AI” because they are a mostly invisible part of some creative tool.)
What is potentially relevant is how interacting with a particular kind of generative ML tool (the chatbot) for the purpose of understanding the world may be bringing back some parts of human oral tradition (though lacking communication with actual humans, of course) and the associated mental states.
* See https://en.wikipedia.org/wiki/Marshall_McLuhan#Movable_type and his most famous work
Writing has ruined our memories. It would be far better if we were forced to recite things (incidentally, in some educational system they're made to recite poetry to remedy this somewhat); not that I'm arguing against letters and the written word.
And AI will make us lazier and reduce the amount of cognition we do; not that I'm arguing against using AI.
But the downsides must be made clear.
We've had AI retrieval for two decades--this is the first time you can outsource your intelligence to a program. In the 2000-2010s, the debate was "why memorize when you can just search and synthesize." The debate is now "why even think?" (!)
I think it's obvious why it would be bad for people to stop thinking.
1. We need people to be able to interact with AI. What good is it if an AI develops some new cure but no one understands or knows how to implement it?
2. We need people to scrutinize an AI's actions.
3. We need thinking people to help us achieve further advances in AI too.
4. There are a lot of subjective ideas for which there are no canned answers. People need to think through these for themselves.
5. Also, a world of hollowed-out humans who can't muster the effort to write a letter to their own kids terrifies me[0]
I could think of more, but you could also easily ask ChatGPT.
[0]: https://www.forbes.com/sites/maryroeloffs/2024/08/02/google-...
I'd argue that most humans are terrible at thinking. It's actually one of our weakest and most fragile abilities. We're only rational because our intelligence is collective, not individual. Writing and publishing distribute and distill individual thinking so good and useful ideas tend to linger and the noise is ignored.
What's happening at the moment is an attack on that process, with a new anti-orthodoxy of "Get your ideas and beliefs from polluted, unreliable sources."
One of those is the current version of AI. It's good at the structure of language without having a reliable sense of the underlying content.
It's possible future versions of AI will overcome that. But at the moment it's like telling kids "Don't bother to learn arithmetic, you'll always have a calculator" when the calculator is actually a random number generator.
> No indication yet that use of AI is harming business outcome
What a sad sentence to read in a discussion about cognitive laziness. I think people should think, not because it improves business outcomes, but because it's a beautiful activity.
What's sad about it? Parent made claim that businesses will experience long term downsides.
A well made buggy whip was probably beautiful too. But if economic forces incentivize something else, the skill goes away
Woe be to us all if the skill of thinking goes away.
We’re racing to the dopamine drip feed pod people life
I remember when I was younger, learning about economic models, including free-market liberalism. I thought surely human desires left to their own devices can't possibly lead to meaningful progress. They can lead to movement alright, and to new technology, but I had my doubts they could lead to meaningful progress.
The longer I see things play out, especially in neoliberal economies, the further I seem to confirm this. Devoid of policy with ideals and intention, fully liberalized markets seem to just lead to whatever produces the most dopamine for humans.
Random thought: if in the future children are born with a brain computer and inherit their family's data, that would be interesting.
Whoah! That would be crazy!
Writing seems to have worked out pretty well.
That's partly because writing enables time-binding (improvement across the lifetimes of men). Writing does not wither thinking, as such, although it may hurt our memory.
...so far!
And honestly, reading and writing probably did make the memory of the youth a few generations down the line weaker.
If you are not expected to remember everything like the ancient Greeks were, you are not training your memory as much, and it will be worse than if you did.
Now do I think it’s fair to say AI is to what reading/writing as reading/writing was to memorizing? No, not at all. AI is nothing near as revolutionary and we are not even close to AGI.
I don’t think AGI will be made in our lifetime, what we’ve seen now is nowhere near AGI, it’s parlor tricks to get investors drooling and spending money.
> If you need the answer to a question, and you can either get it directly, or spend time researching the answer, you're going to learn much more with the latter approach than the former.
Why not force everyone to start from first principles then?
I think learning is tied to curiosity and curiosity is not tied to difficulty of research
i.e. give a curious person a direct answer and they will go on to ask more questions, give an incurious person a direct answer and they won't go on to ask more questions
We all stand on the shoulders of giants, and that is a _good_ thing, not bad
Forcing us to forgo the giants and claw ourselves up to their height may have benefits, but in my eyes it is way less effective as a form of knowledge
The compounding force of knowledge is awesome to behold, even if it can be scary
Yes exactly. I think the concern here is totally valid. But for me personally, having LLMs unblock me more quickly on each question I have has allowed me to ask more questions, to research more things in the same amount of time. Which is great!
One of the values of doing your own research is it forces you to speak the "language" of what you're trying to do.
It's like the struggle that we've all had when learning our first programming language. If we weren't forced to wrestle with compilation errors, our brains wouldn't have adapted to the mindset that the computer will do whatever you tell it to do and only that.
There's a place for LLMs in learning, and I feel like it satisfies the same niche as pre-synthesized Medium tutorials. It's no replacement for reading documentation or finding answers for yourself though.
This seems like a difference between learning styles. You seem to champion learning by reading. I’d argue using an LLM to build a toy or tool and learning that way is just as valid.
*but most people will not do that*
LLMs will definitely be a technology that widens the knowledge gap at the same time that it improves access to knowledge. Just like the internet.
30 years ago people dreamed about how smart everyone would be with humanity's knowledge instantly accessible. We've had Wikipedia for a while, but what's the take-up rate of this infinite amount of information? Most people prefer to scroll rage-bait videos on their phones (content that doesn't give them knowledge or even make them feel better; it just makes them angry).
Of course it's amazing to hear, every once in a while, about the guy who maintains a vim plugin by coding on his phone in Pakistan... or whatever other thing the internet enables for people who suddenly have access to this stuff. That's not an effect on all humans on average; it's an effect on a few people who finally have a chance to take advantage of these tools.
In a YouTube interview, I heard a physicist say that LLMs are helping physics research simply because any physicist out there can now ask graduate-level questions about currently published papers; that is, they have access to knowledge that would have been hard to come by before, sharing knowledge across sub-domains of physics by asking ChatGPT.
Pakistan mentioned! Let's go!!
Anecdotal, but I for one despise the YouTube/Instagram etc. rabbit holes. When I'm in the mood for a good one I scroll Wikipedia. I've had the best random conversations about what I read there, and it feels like I remember it forever.
It's because Wikipedia is non-profit and has very strict editorial control. YouTube and Instagram are the opposite on both counts.
The editorial fairness of Wikipedia is currently hotly debated.
> They aren't considering the long term downside to this.
This echoes sentiments from the 2010s centered around hiring. Companies generally don’t want to hire junior engineers and train them—this is an investment with risks of no return for the company doing the training. Basically, you take your senior engineers away from projects so they can train the juniors, and then the juniors now have the skills and credentials to get a job elsewhere. Your company ends up in the hole, with a negative ROI for hiring the junior.
Tragedy of the commons. Same thing today, different mechanism. Are we going to end up with a shortage of skilled software engineers? Maybe. IMO, the industry is so incredibly wasteful in how engineers are allocated and what problems they are told to work on that it can probably deal with shortages for a long time, but that’s a separate discussion.
Engineers partly did this to themselves. The career advice during that time period was to change jobs every few years, demanding higher and higher salaries. So now, employers don't want to pay to train entry-level people, as they know they are likely going to leave, and at the salaries demanded they don't want to hire junior folks.
“Engineers did this to themselves…”
Long, long ago, the compact was that employees worked hard for a company for a long time, and were rewarded with pensions and opportunities for career advancement. If you take away the pensions and take away the opportunities for career advancement, your employees will advance their careers by switching companies—and the reason that this works so well is because all of the other companies would rather pay more to hire a senior engineer rather than take a risk on a junior.
It’s a systemic problem and not something that you can blame on employees. Not without skipping over a long list of other contributing factors, at least.
If incentives to stay outweighed leaving, people would stay.
This is merely the result of the incentive structure of corporations, which makes it far more lucrative to switch jobs than to stay at one company.
I think you've got cause and effect backwards. Employers used to offer incentives to stay in a company and grow organically. They decided that was no longer going to be the deal. So they got the current system. There was never some sudden eureka moment when the secret engineers club decided they wanted to have a super stressful life event every few years just to keep up with inflation.
As I said in another response, I think (at least partly) a contributing factor was the essentially limitless salary budget that VC-funded startups and the FAANG companies had. You had software developers who could suddenly make more than doctors and lawyers, and of course many of them sensibly acted in their own best interest, but that left other employers saying "we're not going to invest in employees who are only going to turn around and leave for salaries we can't pay" and "if we have to pay those kinds of salaries, we're not going to hire junior people; we want experience."
Once a company hires and trains a junior, then they have a senior... and they don't want to pay them a senior salary, but apparently other companies do.
The math remains simple: if you already have an employee on your payroll, how in the world are you not willing to pay them what they can get by switching at that point? That's literally just starving one's own investment.
The real issue is that the companies who were "training" the juniors were doing so only because they saw the juniors as a bargain given that they were initially willing to work for the lower wage. They just don't stay that way as they grow into the craft.
This is only because companies don't want to raise salaries as engineers' skill levels increase. If companies put junior employees in higher salary bands as their skill levels increase there wouldn't be a problem.
Capitalism and fiduciary duty prevents employers from paying people their market value when they are content enough to stay.
An employee who does not make the effort to re-peg their labor time to market rates for their skill level is implicitly consenting to the prior agreement (made when they were hired).
Funny how fiduciary duty in these contexts is overwhelmingly short-sighted.
Sometimes because the company investors are overwhelmingly short-sighted, which IMO ties back to the whole "financialization" of our economy into a quasi-casino.
I wonder how things might change if short-term capital gains tax (<5 years) went way up.
That is an extremely short-sighted view on what is essentially an iterated game where the domain knowledge employees have drastically increases their value to the company over time.
Yes that's why I said "partly."
When I started work (this was in the pre-consumer-internet era), job hopping was already starting to be a thing, but there was definitely still a large "old school" view that there should be some loyalty between employer and employee. One of my first jobs was a place where they hired for potential. They hired smart, personable people and taught them how to program. They paid them fairly well, and gave annual raises and bonuses. I was there for about 8 years, and my salary more than doubled in that time. Maybe I could have made more elsewhere; I didn't even really look, because it was a good environment, nice people, low stress, and a good mix of people since not everyone (actually only a few) were Comp. Sci. majors.
I don't know how much that still happens, because why would a company today invest in that only to have the employee leave after two years for a higher salary. "They should just pay them more" well yeah, but they did pay them in the sense of teaching them a valuable skill. And their competitors for employees started to include VC funded startups playing with free money that didn't really care what it cost to get bodies into the shop. Hard to compete with that when you actually have to earn the money that goes into the salary budget.
Would the old school approach work today? Would employees stay?
Cheap money seems to have dried up, so maybe more old-school approaches wouldn’t get sniped by VC-funded startups.
Or the company could recognize the dangers of salary compression and inversion and pay developers at market rates
That's why I mostly use ChatGPT with Platonic questions like:
- Given context c, I tried ideas a, b, and c. Were there other options that I missed?
- Based on this plan, do you see any missed efficiencies?
etc., etc.
I'm not seeking answers; I'm trying to avoid costly dead ends.
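For what it's worth, here is a minimal sketch of how I template that kind of question. The helper name and the example values are just mine, nothing standard:

    # Hypothetical helper: builds a "review my options" prompt rather than
    # an "answer this for me" prompt. Everything here is illustrative.
    def dead_end_check(context, tried):
        return (
            f"Given this context: {context}\n"
            f"I already tried: {', '.join(tried)}.\n"
            "Were there options I missed? Do you see any missed efficiencies "
            "in this plan? Don't solve it for me; just flag likely dead ends."
        )

    print(dead_end_check("moving a nightly cron job to a queue", ["retry loop", "bigger instance"]))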
I think you are in a minority, you WANT to learn.
Probably, or should I say, I don't want to rot... It's true that I love the feeling of learning mostly on my own, but I can be lazy too; it's just that I see a parallel between abusing ChatGPT and never doing any physical activity.
Same here. I never really consciously saw it as "defiance" against cognitive decline or anything. More to the point, the answers are much better on average
Sure, if I spend one hour researching a problem vs asking AI in 10 seconds, yes I will almost always learn more in the one hour. But if I spend an hour asking AI questions on the same subject I believe I can learn way more than by reading for one hour. I think the analogy could be comparing a lecture to a one-on-one tutoring session. Education needs to evolve to keep up with the tools that students have at their disposal.
I had thought I saw somewhere that learning is specifically better when you are wrong, if the feedback for that is rapid enough. That is, "guess and check" is the quickest path to learning.
Specifically, asking a question and getting an answer is not a general path to learning. Being asked a question and answering it yourself is. To some extent this holds regardless of whether you are correct or not.
I hated it when they didn't give me the answer sheet for math homework. If I could do an integral and verify whether it was correct, I could either quickly learn from my mistake or keep doing integrals with added confidence, which is how I learned best. Gatekeeping it because someone might misuse the answers felt weird; you still had to show your work.
I imagine the thought process is that even when one must show their work, having a sneak peek at the answer allows a lazier student to work the problem forwards and backwards hoping to fudge through the middle plausibly well.
Yeah. I also felt it largely went at odds with the entire concept of flashcards. Which... are among the most effective tools that I did not take advantage of in grade school.
Actually, for most things (not PhD-research level) you will learn more from the first approach. Getting the answer directly means you can use the rest of the "free" time to integrate the new knowledge into prior knowledge and commit the information to long-term memory.
I think you put your finger on it with the mention of discipline. I find AI tools quite useful for giving me a quick outline of things I want to play with or get up to speed on fast, but not necessarily get too invested in. But if you find yourself so excited by a particular result that it sets your imagination whirling, it might be time to switch out of generative mode and use the AI as a tutor to deepen your actual understanding, ideally in combination with books or other static learning resources.
What is the long term downside in your opinion?
I believe he implied by saying:
> you're going to learn much more with the latter approach than the former
that the downside is a lack of deep knowledge that would enable better solutions in the long term
Yes, the downside is that we aren't really learning anything, just solving problems supported by machines that tell us the solutions. Any schmuck can do that.
I think it is worse. Information will dry up (in a variety of ways) making it much harder to even learn the traditional way as we could in the past.
>you can either get it directly, or spend time researching the answer, you're going to learn much more with the latter
A LOT of the time the things I ask LLMs for are to avoid metaphorically wading through a garbage dump looking for a specific treasure. Filtering through irrelevant data and nonsense to find what I'm looking for is not personal development. What the LLM gives back is often a very much better jumping off point for looking through traditional sources for information.
Often when I ask LLM things about topics I was once reasonably expert in, but have spent a few months or years away from, its answers provide garbage as if it were treasure.
I don't know if I agree here. When I ask an LLM a question it always leads to a whole lot of other questions with responses tailored to my current level of understanding. This usually results in a much more effective learning session than reading a bunch of material that I might not retain anyway because I'm scanning it looking for my answers.
Also challenging aspects of their explanations to get at something better is good for developing critical thinking.
There are two aspects to this from my pov. And I think it might be controversial.
When I have a question about any topic and I ask ChatGPT, I usually chat about more things, coming up with questions based on the answer, and mostly stupid questions. I feel like I am taking in the information, analyzing it, and then diving deeper because I am curious. This is based on how I learn about stuff. I know I need to check a few things, and that it's not fully accurate, but the conversation flows in a direction I like.
Compare this to researching on the internet: there are some good aspects, but more often than not, I end up reading an opinionated post by someone (no matter the topic, if you go deep enough, you will land on an opinionated telling of the facts). That feels like someone decided what questions are important, what angles we need to look at, and what the conclusion should be. Yes, it is educational, but I am always left with lingering questions.
The difference is curiosity. If people are curious about a topic, they will learn. If not, they are happy with the answer. And that is not laziness. You cannot be curious about everything.
> Compare this to researching on the internet: there are some good aspects, but more often than not, I end up reading an opinionated post by someone (no matter the topic, if you go deep enough, you will land on an opinionated telling of the facts).
ChatGPT is in fact opinionated; it has numerous political positions ("biases") and holds some subjects taboo. The difference is that a single actor chooses the political opinions of the model, which goes on to interact with many more people than a single opinion piece might.
Political searches, I assume, would be a very minor percentage of real learning. Even in such cases, I would rather rely on a good LLM's response than scrounge the websites of mainstream media, blogs, etc. For an objective response, reading through opinionated articles and forming my opinion is an absolute waste of time. I'd want the truth as accurately as possible. Plus, people don't generally change political opinions based on what they read. They read stuff aligning with their side.
> For an objective response, reading through opinionated articles and forming my opinion is an absolute waste of time
If the sources are all opinionated articles, per GP, that's what the LLM is going to base its "objective response" on. That's literally all it has as sensory input.
Yes that is true. Though that can be subsumed if you notice it, and ask the model to ignore those biases. (an extreme example would be opposition prep for a debate). I am not interested in politics and other related issues anyway.
Fine. But it would never occur to me to try to form political opinions using chatgpt.
I don't think that's modeling the risk correctly. In my mind the risk is that ChatGPT's creators are able to influence your political opinions _without_ you seeking that out
I honestly don't see how. I haven't ever asked a question that implicates politics. This is just not what I use it for.
I understand the concern about this risk in general. I'm just making a personal observation that this isn't how I use these tools.
An example (over 1 year old): https://www.reddit.com/r/LateStageCapitalism/comments/17dmev...
I really think the ability to ask questions entirely free from all judgment is an under-emphasized aspect of the power of these tools. Yes, some people are intellectually secure enough to ask the "dumb" questions of other humans, but most people are not, especially to an audience of strangers. I don't think I ever once asked a question on Stack Overflow, because it was easy to see how the question I worried might be dumb might be treated by the community there. But I ask all sorts of dumb questions of these models, with nary a concern about being judged. I love that aspect of it.
The other thing is that it will make an earnest attempt to answer the question. On the other hand with places like SO, many questions will be incorrectly marked as duplicate with the “answer” link pointing to a post that might seem similar at first glance but is different enough to not actually be the same, which is supremely unhelpful.
You can also ask it to explain the subject like you’re 5, which might not feel appropriate when interacting with a human because that can feel burdensome.
All of this is heavily caveated by how dramatically wrong LLMs can be, though, and can be rendered moot if the individual in question is too trusting and/or isn’t aware of the tendency of LLMs to hallucinate, pull from bad training data, or match the wrong patterns.
Yep, this is exactly what I mean!
Personally, I find that even when it's wrong, it's often useful, in that I come away with hints toward how to follow up.
I do have concerns that people who haven't lived a couple decades of adult life prior to the existence of these tools will be a lot more credulous, to their detriment.
That's a subtle, yet important point. Putting themselves out there is not easy for some. LLMs can take that pressure away.
The 'but' in that lies with how much freedom is given to the LLM. If constrained, its refusal to answer may become a somewhat triggering possibility.
I think this "refusal to answer" thing is so overblown. I have been using this technology every day for over two years now and have not one single time run into this.
You might be right. From my experience, it depends. The edgier your exploration, the more that can happen.
It's important to note that not everyone abides by the same morals. And a narrowly constrained model may end up refusing genuine inquiries just because.
In any case, if anything, this is a small 'but'. OP's point is the gold nugget here. That is, LLMs allowing exploring subjects without the fear of being judged for one's natural curiosity.
Yeah, I'm in no way claiming that this isn't a thing, or even that it isn't a problem.
But it isn't a problem for most people. The kind of edgelords that run into this are overrepresented on internet forums, including HN, but it's actually a pretty small group of people.
The tool is absolutely biased, what makes you think it wouldn’t be?
My comment doesn't say anything about bias...
This guy is obviously unfamiliar with Tay lol.
Remember that ChatGPT can only give you information that a) it has found on the web and b) that it has made up itself on the spot. It certainly can't get up and go to the library to read a forgotten source not cited on Wikipedia, say.
So when you have a "curious" debate with ChatGPT what you're really doing is searching the internet through a filter, guided by your own and ChatGPT's biases about the subject, but still and always based on whatever you would have found by researching stuff on the internet.
You're still on the internet. It may feel like you've finally escaped but you haven't. The internet can now speak to you when you ask it, but it's still the internet.
Like an indefatigable, kindly professor.
I've been calling this out since OpenAI first introduced ChatGPT.
The danger in ubiquitously available LLMs, which seemingly have an answer to any question, isn’t necessarily their existence.
The real danger lies in their seductive nature: in how tempting it becomes to immediately reach for the nearest LLM to provide an answer rather than taking a few moments to quietly ponder the problem on your own. That act of manipulating the problem in your head—critical thinking—is ultimately a craft. And the only way to become better at it is by practicing it in a deliberate, disciplined fashion.
I recognize this problem, but I find in my own uses of ChatGPT it actually allows me to overcome my laziness rather than making it worse.
I'll have a problem that I want to work on but getting started is difficult. Asking ChatGPT is almost frictionless; the next thing I know I'm working on the project, 8 hours go by, and I'm done. When I get stuck on some annoying library installation, ChatGPT solves it for me so I don't get frustrated. It allows me to enter and maintain flow states better than anything else.
ChatGPT is a really good way of avoiding procrastination.
I’ve found the same. Claude outputs are usually not good enough for what I’m looking for but the conversation is enough to get me engaged and started on a project.
There's something magical about ChatGPT giving you a mostly-wrong answer.
AI Boosting productivity by building an artificial general intelligence that is wrong on the internet (AGITIWOTI)
> The real danger lies in their seductive nature: in how tempting it becomes to immediately reach for the nearest LLM to provide an answer rather than taking a few moments to quietly ponder the problem on your own.
I get the point you're trying to make. However, quietly pondering the problem is only fruitful if you have the right information. If you don't, the best-case scenario is that you risk wasting time reinventing the wheel for no good reason. In this application, an LLM is just the same type of tool as Google: a way to query and retrieve information for you to ingest. Like Google, the info you get from queries is not the end but the means.
As the saying goes, a month in the lab saves you a week in the library. I would say it can also save you 10 minutes with Claude/ChatGPT/Copilot.
Is hiring a private tutor also laziness?
I'll stop short of asserting you don't, but I'm having a hard time convincing myself that your reply reflects that you get GP's point.
If I were to reframe GP's point, it would be: having to figure out how to answer a question changes you a little. Over time, it changes you a lot.
Yes, of course, there is a perspective from which a month spent in the lab to answer a question that's well-settled in the literature is ~wasted. But the GP is arguing for a utility function that optimizes for improving the questioner.
Quietly pondering the problem with the wrong information can be fruitful in this context.
(To be pragmatic, we need both of these. We'd get nowhere if we had to solve every problem and learn every lesson from first principles. But we'd also get nowhere if no one were well-prepared and motivated to solve novel problems without prior art.)
>wasting time reinventing the wheel for no good reason
Nearly all of learning relies on reinventing the wheel. Most personal projects involve reinventing wheels, but improving yourself by doing so.
Very much this.
Some of the most memorable moments I had in my learning were when I "reinvented" something. In high-school, our math teacher had us reinvent the derivative rules, and later had us derive Euler's identity through Taylor Series. They were big eureka moments. Going through all the work someone else did hundreds of years ago is very inspiring, and IMO gets you in the right mindset for discovery. I can't imagine where the joy of learning comes for someone who sees learning as a test —a question, an answer, nothing in between.
In uni we built a CPU from scratch over the course of a few weeks. First building a small ALU, widening its bus, adding memory operations, etc. Beyond learning how things work, it makes you wonder how inventing this without a teacher to guide you must've been, and gives you an appreciation for it. It also makes you extrapolate and think about the things that haven't been invented or discovered yet.
In theory LLMs could serve as a teacher guiding you as you reinvent things. In practice, people just get the answer and move on. A person with experience teaching, who sees how you're walking the path and compares it to how they walked theirs, will know when to give you an answer and when to have you find it yourself.
One doesn't learn how to do lab-work in the library.
LLMs have taught me something that I sort of already knew from Hitchhiker's Guide to the Galaxy: the key to problem solving is asking the right question in the first place. It's not dangerous that answers can be retrieved quickly. Indeed, people had the same things to say about Google in the 90s or pocket calculators in the 70s. To me LLMs just speed up the process by which I would have manually searched the internet for in the first place. The only way to get good at critical thinking is to ask more questions.
I think this is where my physical laziness benefits me. I’m often too lazy to spend the time to fully describe the problem to the LLMs and wrap it in a prompt that will produce something, in written text, so I think through it first. Usually I solve it myself or think of a better primary source.
I'll say that there is value in the rubber duck process, and LLMs make wonderful rubber ducks.
A preprint is available on arxiv [0], see the top of page 18 for what metacognitive laziness is:
"In the context of human-AI interaction, we define metacognitive laziness as learners’ dependence on AI assistance, offloading metacognitive load, and less effectively associating responsible metacognitive processes with learning tasks."
And they seem to define, implicitly, “metacognitive load” as the cognitive and metacognitive effort required for learners to regulate their learning processes effectively, particularly when engaging in tasks that demand active self-monitoring, planning, and evaluation.
They analogize metacognitive laziness to cognitive offloading, where we have our tools do the difficult cognitive tasks for us, which robs us of opportunities to develop and ultimately makes us dependent on those tools.
[0]: https://arxiv.org/pdf/2412.09315
> In the context of human-AI interaction, we define metacognitive laziness as learners’ dependence on AI assistance, offloading metacognitive load, and less effectively associating responsible metacognitive processes with learning tasks.
This sounds like parents complaining when we use Google Maps instead of a folding map. Am I worse at reading a regular map? Possibly. Am I better off overall? Yes.
Describing it as "laziness" is reductive. "Dependence on [_____] assistance" is the point of all technology.
> "Dependence on [_____] assistance" is the point of all technology.
I will note two things though.
1. Not all technology creates "dependence". Google Maps removes the need of carrying bulky maps, or buy new ones to stay updated, but someone who knows how to read Google Maps will know how to read a normal map, even if they're not as quick at it.
2. The best technology isn't defined by the "dependence" it creates, or even the level of "assistance" it provides, but for what it enables. Fire enabled us to cook. Metalworking enabled us to create a wealth of items, tools and structures that wouldn't exist if we only had wood and stone. Concrete enabled us to build taller and safer. Etc.
It's still unclear what AI chatbots are enabling. Is LLMs' big claim to fame allowing people to answer problem sets and emails with minimal effort? What does this unlock? There's a lot of talk about allowing better data analysis, saving time, and vague claims of an AI revolution, but until we see X, Y and Z, and can confidently say "yeah, X, Y and Z are great for mankind, and they couldn't have happened without chatbots", it's fair for people to keep complaining about the change and downsides AI chatbots are bringing about.
When you’re using a map you’re still navigating, even if you’re just following directions. The act of navigating teaches you spatial awareness regardless of how you got there.
AI doesn’t provide directions, it navigates for you. You’re actively getting stupider every time you take an LLMs answer for granted, and this paper demonstrates that people are likely to take answers for granted.
> AI doesn’t provide directions, it navigates for you.
LLMs (try to) give you what you're asking for. If you ask for directions, you'll get something that resembles that, if you ask it to 100% navigate, that's what you get.
> and this paper demonstrates that people are likely to take answers for granted.
Could you point out where exactly this is demonstrated in this paper? As far as I can tell from the study, people who used ChatGPT for the studying did better than the ones that didn't, with no difference in knowledge retention.
> Could you point out where exactly this is demonstrated in this paper? As far as I can tell from the study, people who used ChatGPT for the studying did better than the ones that didn't, with no difference in knowledge retention.
This is what I observed as well. For the "metacognitive laziness" bit they had to point to other studies.
Page 18 first paragraph, it talks about how ChatGPT users engaged less with the editing process compared to other methods. Sorry, copy and paste isn’t working for some reason.
If I use Google Maps I ain't navigating. I follow the instructions until I arrive.
You still need to map the real world to the map and actually follow the instructions. It’s the difference between navigating and having a driver navigate for you.
I'm certainly of two minds on this.
On one hand, this reminds me of how all of the kids were going to be completely helpless in the real world because "no one carries a calculator in their pocket". Then calculators became something ~everyone has in their pocket (and the kids ended up just fine).
On the other hand, I believe in the value of "learning to learn", developing media literacy, and all of the other positives gained when you research and form conclusions on things independently.
The answer is probably somewhere in the middle: leveraging LLMs as a learning aid, rather than LLMs being the final stop.
I think it's closer to the library example. My parents have mentioned how they had to go to a library and look for books whereas my generation can just use the internet.
Realistically my guess is that the bar for broad knowledge and ability to get to details quickly will increase. There's a lot of value in understanding multiple disciplines at a mediocre level if you can very quickly access the details when needed. Especially since learning speed tends to get slower and slower the deeper you go.
Also since every time I've needed to do something complicated, even if I knew the details it was important enough to double check my knowledge anyway.
> On the other hand, I believe in the value of "learning to learn", developing media literacy, and all of the other positives gained when you research and form conclusions on things independently.
That is not going away. Learning better prompts, learning when to ignore AI, learning how to take information and turn it into something practical. These new skills will replace the old.
How many of us can still...
- Saddle a horse
- Tell time without a watch
- Sew a shirt
- Create fabric to sew a shirt
- Hunt with primitive tools
- Make fire
We can shelter children from AI, or we can teach them how to use it to further themselves. Talk to the Amish if you want to see how it works out when you forgo anything that feels too futuristic. A respectable life, sure. But would any of us reading this choose it?
>How many of us can still... <stuff>
Yes, this is what I meant by the calculator part of my comment. You've got some other good examples.
>learning when to ignore AI, learning how to take information and turn it into something practical.
This is what I meant by using LLMs as a tool rather than an end.
How many of us still have to do these things? You either need to do them or you don't. If you do, you will learn how, or find someone who does.
We still need to calculate numbers, and I can say it's silly if I find someone needs a calculator to do 5x20. Same if you're taking hours and multiple sheets of paper for something that would take you a few minutes with a calculator. There's a question of scale and basic understanding that divides the two.
>How many of us still have to do these things?
Yep, we agree. That's the whole point of what I said in the first half of my original comment.
At one time, they were common skills. Things changed, they aren't common, they aren't really needed (for most people), and everyone is doing just fine without them. We've freed up time and mental capacity for other (hopefully more beneficial) tasks.
(I'm confused why this reply and the other above it are just restating the first part of my original comment, but framing it like it's not a restatement.)
There are things that aren't needed because we have found easier ways to solve the problems they were solving. In contrast, there are things that exist only to make the job easier, not to make it go away. You don't need to light a fire because of how houses are heated and food is made. But you still need to learn arithmetic to use a calculator. There's a difference between the two.
If the goal is to learn, the means don't matter much as long as the right attitude is there. But if one only wishes to appear knowledgeable, LLMs have indeed made it way easier.
I was taught to not use calculators on exams and homework and that’s why I am able to do math in my head today.
I have recently seen GenZ perplexed by card games with addition and making change. For millennials, this is grade school stuff.
Sure, there's obviously a scale.
I'm not about to divide 54,432 by 7.6, even though I was taught how to. I'll pull out my phone.
On the other end, I'm not going to pull out my phone to figure out I owe you $0.35.
I think the point I was trying to make still stands.
It's astounding to me that people just like... always trust whatever the LLM says.
I have some friends who use ChatGPT for everything. From doing work to asking simple questions. One of my friends wanted a bio on a certain musician and asked ChatGPT. It's a little frightening he couldn't, you know, read the Wikipedia page of this musician, where all of the same information is and there are sources for this material.
My mom said she used ChatGPT to make a "capsule wardrobe" for her. I'm thinking to myself (I did not say this to her)... you can't just like look at your clothes and get rid of ones you don't wear? Why does a computer need to make this simple decision?
I'm really not sure LLMs should ever be used as a learning aid. I have never seen a reason to use them over, you know, searching something online. Or thinking of your own creative story. If someone can make a solid use case as to why LLMs are useful I would like to hear.
Regarding your mom’s clothes: she wasn’t asking the machine to give advice she couldn’t think of on her own, she was seeking external validation and permission to purge and override the hoarder urge of her personality.
This is like when CEOs hire outside consulting firms to do layoffs for them. Pinning the pain of loss on some scapegoat makes it more bearable.
I agree, at first I thought gpt would be used by tech savvy folk, but now it is clear that it’s becoming a crutch. My friend couldn’t respond to an email without it.
> One of my friends wanted a bio on a certain musician and asked ChatGPT.
I use ChatGPT (or Gemini) instead of web searches. You can blame the content and link farms at the top of the search results, and the search engines focusing on advertising instead of search, because we're the product.
Why your friend doesn't know about Wikipedia is another matter; if I wanted a generic info page about some topic I'd go directly there. But if I wanted to know if Bob Geldof's hair is blue, I might ask an LLM instead of reading the whole Wikipedia page.
I also ask LLMs for introductory info about programming topics I don't know about, because I don't want to go to Google and end up on w3schools, geeksforgeeks, and crap like that.
I don't really trust LLMs for advanced programming topics, you know, what people pay me for. But they're fine for giving me a function signature or even a small example.
You can use source material instead of LLMs for all of this.
"Is Bob Geldof's hair blue?" -> Search for Bob Geldof -> Look at images of Bob Geldof.
Intro programming topics can be found in the project's documentation. Your search query might be "[programming topic] getting started", and usually if it's a package or a tool there will be documentation. If you want good documentation on web dev stuff that isn't w3schools or geeksforgeeks, you can use MDN documentation.
Or, if you really want a general overview there's probably a YouTube video about the topic.
Additionally appending "reddit" to a search will give better results than SEO junk. There are always ways to find quality information via search engines.
> "Is Bob Geldof's hair blue?" -> Search for Bob Geldof -> Look at images of Bob Geldof
Assuming I get images of Bob Geldof. More likely the first page will be pinterest login-required results.
> there's probably a YouTube video about the topic.
Life's too short to watch talking heads about ... you know, WRITING code ...
> can be found in the project's documentation
Seriously? Maybe for the top 500 npm packages. Not for the more obscure libraries that may have only some doxygen generated list of functions at best.
> Assuming I get images of Bob Geldof. More likely the first page will be pinterest login-required results.
You do realize Google/Bing/DDG/Kagi all have an Images tab, right? Come on.
> Life's too short to watch talking heads about ... you know, WRITING code ...
If I want a high level overview of what the thing even is, a YouTube video can be useful since there will be explanations and visual examples. You can read documentation as well. For example, if I want a slower overview of something step by step, or a talk at a conference about why to use this thing, YouTube can be helpful. I was just looking at videos about HTMX this weekend, hearing presentations by the authors and some samples. That's not saying if I actually use the thing I won't be reading the documentation; it's more just useful for understanding what the thing is.
> Seriously? Maybe for the top 500 npm packages. Not for the more obscure libraries that may have only some doxygen generated list of functions at best.
How do you expect your LLM to do any better? If you're using some obscure package there will probably be documentation in the GitHub README somewhere. If it's horrible documentation you can read the Typescript types or do a code search on GitHub for examples.
This is all to say that I generally don't trust LLM output because I have better methods of finding the information LLMs are trained on. And no hallucinations.
tl;dr: I agree.
We don't teach slide rules and log tables in school anymore. Calculators and computers have created a huge metacognitive laziness for me, and I teach calculus and have a PhD in statistics. I barely remember the unit circle except for multiples of pi/4 radians. I can do it in multiples of pi/6 but I'm slower.
But guess what? I don't think I'm a worse mathematician because I don't remember these things reflexively. I might be a little slower getting the answer to a trivial problem, but I can still find a solution to a complex problem. I look up integral forms in my pocket book of integrals or on Wolfram Alpha, because even if I could derive the answer myself I don't think I'd be right 100% of the time. So metacognitive laziness has set in for me already.
But I think as long as we can figure out how to stop metacognitive laziness before it turns into full-fledged brain-rot, then we'll be okay. We'll survive as long as we can still teach students how to think critically, and figure out how to let AI assist us rather than turn us into the humans on the ship from Wall-E. I'm a little worried that we'll make some short term mistakes (like not adapting our curriculum fast enough), but it will work out.
I think you're right at the edge of explaining why this "laziness" is a good thing. Everything that we have made is built on what we had before, and abstracts away what we had before. 99% of us don't remember how to make even the simplest Assembly program, and yet we unleash billions of instructions per second on the world.
Even outside of math and computers, when was the last time you primed a well pump or filled an oil lamp? All of these tasks have been abstracted away, freeing us to focus on ever-more-specialized pursuits. Those that are useful will too be abstracted away, and for the better.
> when was the last time you primed a well pump or filled an oil lamp? All of these tasks have been abstracted away
They have not been abstracted away, they have been made obsolete. Significant difference.
The danger with LLMs is people will never learn tasks that are still needed.
Your comment exposes how much metacognitive laziness you have in modern society that you didn't realize that people still do these things, just not you. They aren't obsolete tasks, just done at a layer you don't see.
I don't have to prime a well pump any more because my house and workplace are hooked into the municipal water system. I don't have to prime a pump because that task has gotten so abstract as to become turning a faucet handle. But engineers at the municipal water plant do have to know how to do this task.
Similarly, filling an oil lamp and lighting it is now abstracted for normal people as flipping a light switch (maybe changing a light bulb is a more appropriate comparison). But I actually have filled an oil lamp when I was a kid because we kept "decorative" hurricane lamps in my house that we used when the power went out. The exact task of filling an oil lamp is not common, but filling a generator with fuel is still needed to keep the lights on in an emergency, although it is usually handled by the maintenance staff of apartment buildings and large office buildings.
I am not sure calculators have hurt us much on the high end of mathematical ability.
But man I cringe when I see 18 year old students reach for a calculator to multiply something by .1.
The abstract does not define, nor contextually suggest from the prior statements of the results what "metacognitive laziness" means.
Personally speaking, I find being able to ask ChatGPT continually more nuanced questions about an initial answer the one clear benefit over a Google search, where I have diminishing marginal returns on my inquisitiveness for the time invested over subsequent searches. The more precisely I am able to formulate my question on a traditional search engine, the harder it is for non-SEO optimized results to appear: it's either meant more for a casual reader with no new information, or is a very specialized resource that requires extensive professional background knowledge. LLMs really build that bridge to precisely the answers I want.
In the absence of a definition I'd read it straightforwardly - it means that someone stops making an effort to learn better ways to learn. I.e. if they start using chatbots to learn, they stop practicing other methods and just rely on the chatbot. (EDIT: I realize now that this probably isn't news to the parent!)
I've heard stories of junior engineers falling into this trap. They asked the chatbot everything rather than exposing their lack of knowledge to their coworkers. And if the chatbot avoids blatant mistakes, junior engineers won't recognize when the bot makes a subtle one.
That is why the last step should always be how do I know what I know? What are my blind spots?
If I am not motivated to find them and test my own knowledge, how do I change that motivation?
> LLMs really build that bridge to precisely the answers I want.
It is interesting that you describe this as "the answers you want" and not "the correct answer to the question I have"
Not criticising you in particular, but this does sound to me like this approach has a good possibility of just reinforcing existing biases
In fact the approach sounds very similar to "find a wikipedia article and then go dig through the sources to find the original place that the answers I want were published"
Though I think you're reading more into my phrasing than I meant, the overall skepticism is fair.
One thing I do have to be mindful of is asking the AI to check for alternatives, for dissenting or hypothetical answers, and sometimes I just ask it to rephrase to check for consistency.
But doing all of that still takes way less time than searching for needles buried by SEO optimized garbage and well meaning but repetitious summaries.
> Though I think you're reading more into my phrasing than I meant, the overall skepticism is fair
I do want to re-iterate that I didn't intend to accuse you of only seeking to reinforce your biases
I read into your phrasing not to needle you, but because it set off some thoughts in my head, that's all
Thanks for being charitable with your reply, and I appreciate your thoughts
Agreeable LLMs and embedded bias are surely a risk, but I don't think this a helpful frame. Most questions don't have correct answers, so it would follow that you'd want practical answers for those, and correct answers for the remainder.
> It is interesting that you describe this as "the answers you want" and not "the correct answer to the question I have"
Just say “Verify that” and then ChatGPT will do a real-time search, and I can read the web pages. Occasionally, it will “correct itself” once it does a web search.
This is my take as well.
There was a story a couple days ago about a neural network built on a single photonic chip. I fed the paper to ChatGPT and was able to use it to develop a much more meaningful and comprehensive understanding of what the chip actually delivered, how it operated, the fundamental operating principles of core components and how it could be integrated into a system.
The fact that I now have a tireless elucidator on tap to help explore a topic (hallucination caveats notwithstanding) actually increases my motivation to explore dense technical information and understanding of new concepts.
The one area where I do think it is detrimental is my willingness to start writing content on a proverbial blank sheet of paper. I explore the topic with ChatGPT to get a rough outline, maybe some basic content, and then take it from there.
On the other hand you might be getting worse at reading those papers yourself.
The more youngsters skip the hassle of banging their heads on some topic the less able they will be to learn at later age.
There's more to learning than getting information, it's also about processing it (which we are offloading to LLMs). In fact I'd say that the whole point of going through school is to learn how to process and absorb information.
That might be the cognitive laziness.
What if the LLMs are teaching us that long form prose/technical writing is just a really bad, unnatural format for communication but natural dialogues are a good format?
The way I see it, it's sort of like debugging code you're not well acquainted with.
You're still going to learn either way, but going through the hassle of understanding the system means developing a method for debugging it and learning about it along the way...
Of course a senior could point you to the issue right away, and probably an LLM too, and even provide a learning opportunity, but does it have the same lasting impact as being able to overcome the burden yourself?
Which one makes a more lasting effect on your abilities and skills?
Again, LLMs are a tool, but if people in school/college start using it to offload the reasoning part they are not developing it themselves.
If that were the case, every scientific paper would be written as a Socratic dialogue. But it's not, because that's a good format for beginners, not for science.
The reason the current format exists and is used is that it's very information dense. I think scientific papers would be better if they were Socratic dialogues.
But the limitation in publishing a dialogue is that you'd only get to publish one of them, and each reader is going to come in with different questions and goals for what they want out of the paper.
I do read the paper, but when you run into dense explanations like this:
> To realize a programmable coherent optical activation function, we developed a resonant electro-optical nonlinearity (Fig. 1(iii)). This device directs a fraction of the incident optical power |b|² into a photodiode by programming the phase shift θ in an MZI. The photodiode is electrically connected to a p–n-doped resonant microring modulator, and the resultant photocurrent (or photovoltage) detunes the resonance by either injecting (or depleting) carriers from the waveguide.
It becomes very difficult to pick apart each thing and find a suitable explanation of what the thing (e.g. MZI splitter, microring modulator, how a charge detunes the resonance of the modulator) is or how it contributes to the whole.
Picking these apart and recombining them with the help of something like ChatGPT has given me a very rapid drill-down capability into documents like this. Then re-reading it allows me to intake the information in the way its presented.
If this type of content was material to my day job it would be another matter, but this is just hobby interest. I'm just not going to invest hours trying to figure it out.
Sure, same as I'm probably pretty bad at going to the library and looking up information there, with the advent of the internet.
In practice, this lets you reasonably process the knowledge from a lot more papers than you otherwise would, which I think is a win. The way we learn is evolving, as it has in the past, and that's a good thing.
Though I agree that this will be another way for lazy children to avoid learning (by just letting AI do the exercises), and we'll need to find a good solution for that, whatever it may be.
Not being able to glean information from a paper is wildly different than being unable to use a card catalog. The former is basic reading comprehension; the latter is a technology.
You AREN'T learning what that paper is saying; you're learning parts of what the LLM says is useful.
If you read just theorems, you aren't learning math. You need to read the proof too, and not just a summary of the proof.
> (hallucination caveats notwithstanding)
This is a pretty big caveat to the goal of
> develop a much more meaningful and comprehensive understanding
Which is still my biggest issue with LLMs. The little I use of them, the answers are still confidently wrong a lot of the time. Has this changed?
I use ChatGPT a lot each day for writing and organizing tasks, and summaries/explanations of articles etc.
When dealing with topics I'm familiar with, I've found the hallucinations have dropped substantially in the last few years from GPT2 to GPT3 to GPT4 to 4o, especially when web search is incorporated.
LLMs perform best in this regard when working with existing text that you've fed them (whether via web search or uploaded text/documents). So if you paste the text of a study to start the conversation, it's a pretty safe bet you'll be fine.
If you don't have web search turned on, I'd still avoid treating the chat as a search engine though, because 4o will still get little details wrong here and there, especially for newer or more niche topics that wouldn't be as well-represented in the training data.
I've found them to be quite accurate when given enough context data. For example, feeding an article into its context window and asking questions about it. Relying on the LLM's internal trained knowledge seems to be less reliable.
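As one concrete illustration, here is a minimal sketch of that "ground it in the text first" pattern, assuming the official OpenAI Python client; the model name, file path, and question are just placeholders:

    # Minimal sketch: paste the source text into the conversation, then ask about it,
    # rather than relying on the model's internal training data.
    from openai import OpenAI

    client = OpenAI()
    article = open("article.txt", encoding="utf-8").read()  # hypothetical local file

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer only from the provided article; say so if it doesn't cover the question."},
            {"role": "user",
             "content": f"Article:\n{article}\n\nQuestion: what method does the article propose?"},
        ],
    )
    print(resp.choices[0].message.content)

The same idea applies in any chat UI: paste the text first, then ask, instead of asking cold.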
I've found that whatever powers Kagi.com's answer seems to be pretty accurate. It cites articles and other sources.
Trying a share link, hope it works:
https://kagi.com/search?q=what+factors+affect+the+freezing+p...
What powers it is Kagi Search :) All chatbots have access to similar models, what distinguishes the answer quality is/will be the quality of search results fed to them.
Excellent! :)
I'm really happy about being able to share Kagi results. It's allowed me to slip Kagi into a few discussions to raise awareness. Also, being on bluesky helps because so many folks complain about google but they're not aware of better options.
One thing I wanted to raise: please keep the Kagi homepage as light as possible. It's tempting to keep adding little things and over time you get a big mess.
I agree in general but the way this has worked for me in practice is that I approach things hierarchically up and down. Any specific hallucinations tend to come out in the wash as the same question is asked from different layers of abstraction.
Even though ChatGPT "invents" its own reality sometimes, I also find it superior to Google search results (or Duck Duck Go). In some cases LLM results even provide specific strings to search for in the search engines to verify the content. Search is terribly broken and has been since around 2014 (arbitrary date) where Google search results pages started displaying more ads than results.
Paid ChatGPT has had web search capabilities for two years at least
>The abstract does not define, nor contextually suggest from the prior statements of the results what "metacognitive laziness" means.
Your comment seems like a good example of metacognitive laziness: not bothering to formulate your own definition from the examples in the abstract and the meaning of the words themselves. Slothful about the process of thinking for yourself.
I reread the abstract 3 times. The results stated prior to that definition simply don't follow consistently with the component meaning of those two words as I understand them.
The writer has the responsibility to be clear.
In that context, metacognitive processes are the processes used to plan, monitor, and assess one's understanding and performance.
So metacognitive laziness would be the lack of such processes.
Further down they write (emphasis mine):
> When using AI in learning, learners should focus on deepening their understanding of knowledge and actively engage in metacognitive processes such as evaluation, monitoring, and orientation, rather than blindly following ChatGPT's feedback solely to complete tasks efficiently.
Cell phones and laptops in general have changed a couple of things for me, as someone who grew up without them:
- I realized about 20y-25y ago that I could run a Web search and find out nearly any fact, probably one-shot but maybe with 2-3 searches' worth of research
- About 10-15y ago I began to have a connected device in my pocket that could do this on request at any time
- About 5y ago I explicitly *stopped* doing it, most of the time, socially. If I'm in the middle of a conversation and a question comes up about a minor fact, I'm not gonna break the flow to pull out my screen and stare at it and answer the question, I'm gonna keep hanging out with the person.
There was this "pub trivia" thing that used to happen in the 80s and 90s where you would see a spirited discussion between people arguing about a small fact which neither of them immediately had at hand. We don't get that much anymore because it's so easy to answer the question -- we've just totally lost it.
I don't miss it, but I have become keenly aware of how tethered my consciousness is to facts available via Web search, and I don't know that I love outsourcing that much of my brain to places beyond my control.
A long time ago I had the idea that maybe Guinness started a "book of world records" precisely because it answers exactly the kind of question that will routinely pop up at the pub.
Yes.
https://www.guinnessworldrecords.com/about-us/our-story
Wow I had no idea the name literally came from Guinness beer. Brilliant!
I'm just old enough to remember pub trivia before it was possible to look things up with a phone. I firmly maintain that phones ruined pub trivia.
Depends on the pub. Where we play there is a gentlemen’s agreement that no one uses phones to help them answer questions
Sure, but that ruins the ability to just pop into a pub and play with people you don't know (let alone trust).
I have this business idea for a pub in a faraday cage that would make cheating impossible for pub trivia (added bonus: also removes any other reason for anyone to be on their phones!)
I agree, but I think we shouldn't limit this answer to pub trivia. What other aspects of human society and civil discourse did we lose because we never argue or discuss anymore?
Well it certainly sucks in cases where someone "fact checks" you but they do so before a broader discussion has given them enough context to even know what to google or ask the bot.
It turns out the internet has created more things to argue about than it destroyed.
> There was this "pub trivia" thing that used to happen in the 80s and 90s where you would see a spirited discussion between people arguing about a small fact which neither of them immediately had at hand. We don't get that much anymore because it's so easy to answer the question -- we've just totally lost it.
A good example, but imagine the days of our ancestors:
Remember that game we used to play, where we'd find out who could see birds from the farthest distance? Yeah, glasses ruined that.
Take a small notebook, Anki flashcards, or even small notes.
And work on learning some trivia purely to help you out with memory.
1. Socrates criticized writing itself: in Plato's Phaedrus he said it would "create forgetfulness in the learners' souls, because they will not use their memories" (274e-275b)
2. Leonhard Euler criticized the use of logarithm tables in calculating: in his 1748 "Introductio in analysin infinitorum" he insisted on deriving logarithms from first principles
3. William Thomson (Lord Kelvin) initially dismissed mechanical calculators, stating in an 1878 lecture at Glasgow University that they would make students "neglect the cultivation of their reasoning powers"
4. Henry Ford in his autobiography "My Life and Work" (1922) quoted a farmer who told him in 1907 that gasoline tractors would "make boys lazy and good for nothing" and they'd "never learn to farm"
5. In 1877, the New York Times published concerns from teachers about students using pencils with attached erasers, claiming it would make them "careless" because they wouldn't have to think before writing. The editorial warned it would "destroy the discipline of learning"
6. In "Elements of Arithmetic," (1846) Augustus De Morgan criticized the use of pre-printed multiplication tables, saying students who relied on them would become "mere calculative mechanism" instead of understanding numbers
7. In his 1906 paper "The Menace of Mechanical Music," John Philip Sousa attacked the phonograph writing that it would make people stop learning instruments because "the infant will be taught by machinery" and musical education would become "unnecessary"
8. In his 1985 autobiography "Surely You're Joking, Mr. Feynman!" Richard Feynman expressed concern about pocket calculators and students losing the ability to estimate and understand mathematical relationships
I could go on (Claude wrote 15 of them!). Twenty years from now (assuming AI hasn't killed us all) we'll look back and think that working with an LLM isn't the crutch people think it is now.
I do not consider any of the 8 wrong, if I try to understand each of their ideas. The pace of life was slower in their times, and writing with an eraser at hand is kind of a different genre.
what was your prompt?
So humans are supposed to review all of the code that GenAI creates. We’re supposed to ensure that it doesn’t generate (obvious?) errors and that it’s building the “right thing” in a manner prescribed by our requirements.
The anecdotes from practitioners using GenAI in this way suggest it’s a good tool for experienced developers because they know what to look out for.
Now we admit folks who don’t know what they’re doing and are in the process of learning. They don’t know what to look out for. How does this tech help them? Do they know to ask what a use-after-free is or how cache memory works? Do they know the names of the algorithms and data structures? Do they know when the GenAI is bullshitting them?
Studies such as this are hard but important. Interesting one here even though the sample is small. I wonder if anyone can repeat it.
> Do they know when the GenAI is bullshitting them?
Anecdote from a friend who teaches CS: this year a large number of students started adding unnecessary `break` instructions to their C code, like so:
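Something along these lines (a minimal reconstruction of the pattern, not the actual student code):

    #include <stdio.h>

    int main(void) {
        int i = 0;
        while (1) {
            if (i >= 10) {
                break;          /* exit bolted on with break... */
            }
            printf("%d\n", i);
            i++;
        }
        /* ...instead of putting the condition in the loop header: */
        for (int j = 0; j < 10; j++) {
            printf("%d\n", j);
        }
        return 0;
    }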
They asked around and realized that the common thread was ChatGPT - everyone who asked how loops work got a variation of "use break() to exit the loop", so they did. Given that this is not how you do it in CS (not only is it unnecessary, but it also makes your formal proofs more complex), they had to make a general one-time exception and add disclaimers in exams reminding students to do it "the way you were taught in class".
>use break() to exit the loop
Well - they know that break is not a function and you don't. Thanks ChatGPT.
A colleague of mine once taught a formal methods course for students working on their masters -- not beginners by any stretch.
The exercise was to implement binary search given the textbook specification without any errors. An algorithm they had probably implemented in their first-year algorithms course at the very least. The students could write any tests they liked and add any assertions they thought would be useful. My colleague verified each submission against a formal specification. The majority of submissions contained errors.
For a simple algorithm that a student at that level could be reasonably expected to know well!
Now... ChatGPT and other LLM-based systems, as far as I understand, cannot do formal reasoning on their own. They cannot tell you, with certainty, that your code is correct with regard to a specification. And they can't tell you if your specification contains errors. So what are students learning using these tools?
Given that most textbook binary searches have an overflow error built in, I think writing one without the classic overflow bug is harder than a first-year problem…
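For anyone who hasn't run into it, the classic bug is in the midpoint calculation; a minimal sketch of the usual fix (illustrative only, not taken from the exercise):

    /* mid = (lo + hi) / 2 can overflow when lo + hi exceeds INT_MAX;
       mid = lo + (hi - lo) / 2 cannot. */
    int binary_search(const int *a, int n, int key) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   /* overflow-safe midpoint */
            if (a[mid] == key)
                return mid;
            else if (a[mid] < key)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return -1;                          /* not found */
    }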
Dock a few points from the students who submitted inane code by blindly following the LLM, and those students will learn never to blindly follow an LLM again.
If you take the generated code snippets and ask something like "There may or may not be something syntactically or stylistically wrong with the following code. Try to identify any errors or unusual structures that might come up in a technical code review.", then it usually finds any problems or at least, differences of opinion on what the best approach is.
(This might work best if you have one LLM critique the code generated by another LLM, eg bouncing back and forth between Claude and ChatGPT)
Some tools have also started to support a one-two punch of asking a reasoning model (o1 or R1 etc) for planning the solution, and a chat model to build it. Works quite well.
> Now we admit folks who don’t know what they’re doing and are in the process of learning. They don’t know what to look out for. How does this tech help them? Do they know to ask what a use-after-free is or how cache memory works? Do they know the names of the algorithms and data structures? Do they know when the GenAI is bullshitting them?
You can know enough in X to allow you to do Y together with X, which you might not have been able to before.
For example, I'm a programmer, but horrible at math. I want to develop games, and I technically could, but all the math stuff makes it a lot harder sometimes to make progress. I've still managed to make and release games, but math always gets in the way. I know exactly how I want it to behave and work, but I cannot always figure out how to get there. LLMs help me a lot with this, where I can isolate those parts into small black boxes that I know they give me the right thing, but not 100% sure about how. I know when the LLM gives me the incorrect code, because I know what I'm looking for and why, only missing the "how" part.
Basically like having 3rd-party libraries you don't fully understand the internals of, but can still use granted you understand the public API, except you keep the code in your own code base and pepper it with unit tests.
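A tiny sketch of what that black-box-plus-tests approach can look like, with a made-up math helper (names hypothetical) pinned down by asserts:

    #include <assert.h>
    #include <math.h>

    /* Hypothetical LLM-written helper: reflect a 2D velocity off a unit
       surface normal. Treated as a black box; only the behavior I can
       verify matters. */
    static void reflect(float vx, float vy, float nx, float ny,
                        float *rx, float *ry) {
        float dot = vx * nx + vy * ny;
        *rx = vx - 2.0f * dot * nx;
        *ry = vy - 2.0f * dot * ny;
    }

    int main(void) {
        float rx, ry;
        reflect(1.0f, -1.0f, 0.0f, 1.0f, &rx, &ry);   /* bounce off the floor */
        assert(fabsf(rx - 1.0f) < 1e-6f && fabsf(ry - 1.0f) < 1e-6f);
        reflect(-2.0f, 0.0f, 1.0f, 0.0f, &rx, &ry);   /* bounce off a wall */
        assert(fabsf(rx - 2.0f) < 1e-6f && fabsf(ry) < 1e-6f);
        return 0;
    }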
> Do they know to ask what a use-after-free is or how cache memory works? Do they know the names of the algorithms and data structures? Do they know when the GenAI is bullshitting them?
No, which is why people who don't pick up on the nuances of programming - no matter how often they use LLMs - will never be capable programmers.
I'm at this very moment testing deepseek-r1, a so-called "reasoning" LLM, on the excellent "rustlings" tutorial. It is well documented and its solutions are readily available online. It is my lazy go-to test for coding tasks, to assess if and when I'll have to start looking for a new job and take up software engineering as a hobby. The reason I test with rustlings is also to assess its value as a learning tool for students and future colleagues. Maybe these things have use as a teacher? Also, the Rust compiler is really good at offering advice, so there's an excellent baseline to compare the LLM output against.
And well, let me put it this way: deepseek-r1 won't be replacing anyone anytime soon. It generates a massive amount of text, mostly nonsensical and almost always terribly, horribly wrong. But inexperienced devs, and especially beginners, will be confused and led down the wrong path, potentially outsourcing rational thought to something that just sounds good but actually isn't.
Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately the performance of future devs. As the probably last generation of old school software engineers, who were trained on coffee and tears of frustration, who had to really work code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by llms.
> And well, let me put it this way: deepseek-r1 won't be replacing anyone anytime soon. It generates a massive amount of text, mostly nonsensical and almost always terribly, horribly wrong. But inexperienced devs, and especially beginners, will be confused and led down the wrong path, potentially outsourcing rational thought to something that just sounds good but actually isn't.
Are you considering the full "reasoning" it does when you're saying this? AFAIK, they're meant to be "rambling" like that, exploring all sorts of avenues and paths before reaching a final conclusive answer that is still "ramble-like". I think the purpose is to layer something on top that can finalize the answer, rather than just taking whatever you get from that and using it as-is.
> Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately the performance of future devs. As the probably last generation of old school software engineers, who were trained on coffee and tears of frustration, who had to really work code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by llms.
I started coding just before Stack Overflow got popular, and remember the craze when it did get popular. Blog posts about how Stack Overflow would create lazy devs were all over the place, people saying it was the end of the real developer. Not arguing against you or anything, I just find it interesting how sentiments like these keep repeating over time, with only minor details changing.
Before pervasive GPS, it took me very little time to actually learn and internalize a route. Now it takes a lot longer to remember it when you're constantly guided. Same exact thing is happening with guided reasoning we get with LLMs
I have a different experience. It used to take me some time to work out a route and write down all the turns. Now getting from location A to B is a lot easier: take a look at the proposed route, make some corrections, and meanwhile spend the time thinking about something else. So GPS doesn't make me stupid or forgetful. It's just a tool which makes me more productive. The same is almost true for LLMs, except getting the right answer isn't always easy or possible. But overall, for coding small utilities it's very helpful. For reasoning models I still need to find the right tasks. Maybe more complex utilities. Or the one I can't get from 4o yet: a red-black tree with custom memory management and custom "pointers" in data objects (small integers). While custom allocators are supported by std, the implementation still keeps native pointers, which locks it in memory.
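For what it's worth, the "small-integer pointers" idea usually comes down to a node layout like this (a hypothetical sketch, shown in C rather than Rust for brevity):

    #include <stdint.h>

    /* Links are indices into a pool owned by a custom allocator, not native
       pointers, so the whole tree can be relocated or memory-mapped as-is. */
    typedef uint32_t node_ref;              /* small-integer "pointer" */
    #define NIL ((node_ref)UINT32_MAX)      /* sentinel: no child/parent */

    enum color { RED, BLACK };

    struct rb_node {
        node_ref   left, right, parent;     /* pool indices, not addresses */
        enum color color;
        int        key;
    };

    struct rb_tree {
        struct rb_node *pool;               /* backing array, custom-managed */
        uint32_t        capacity;
        node_ref        root;
        node_ref        free_head;          /* free list threaded via .left */
    };

The insertion and rotation code then works entirely on node_ref values and indexes into the pool, never dereferencing raw addresses.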
> What is particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”. In conclusion, understanding and leveraging the respective strengths and weaknesses of different agents in learning is critical in the field of future hybrid intelligence.
Maybe I'm trying to read and understand it too quickly, but I don't see anything in the abstract that supports that strong conclusion.
> The results revealed that: (1) learners who received different learning support showed no difference in post-task intrinsic motivation; (2) there were significant differences in the frequency and sequences of the self-regulated learning processes among groups; (3) ChatGPT group outperformed in the essay score improvement but their knowledge gain and transfer were not significantly different. Our research found that in the absence of differences in motivation, learners with different supports still exhibited different self-regulated learning processes, ultimately leading to differentiated performance.
The ChatGPT group performed better on essay scores, they showed no deficit in knowledge gain or transfer, but they showed different self-regulated learning processes (not worse or better, just different?).
If anything, my own conclusion from the abstract would be that ChatGPT is helpful as a learning tool as it helped them improve essay scores without compromising knowledge learning. But again, I only read the abstract, maybe they go into more details in the paper that make the abstract make more sense.
I have been using LLMs for my own education since they came out and have watched my kid use it.
Some kids might pickup a calculator and then use it to see geometric growth, or look for interesting repeating patterns of numbers.
Another kid might just use it to get their homework done faster and then run outside and play.
The second kid isn't learning more via the use of the tool.
So the paper warns that the use of LLMs doesn't necessarily change what the student is interested in and how they are motivated. That we might need to put in checks for how the tool is being used into the tool to reduce the impact of scenario 2.
I don't really know what "metacognitive laziness" is even after they explain it in the paper, but I use LLMs to filter noise and help automate the drudgery of certain tasks, allowing me to use my energy and peak focus time on the more complicated tasks. Anecdotal, obviously. But I don't see how this hinders me in my ability to "self-regulate". It's just a tool, like a hammer.
From a learning perspective, it can also be a short cut to getting something explained in several different ways until the concept "clicks".
I also appreciate being able to tell the LLM "look, it's late, I'm tired, really dumb this down for me" and it does it.
I drew a similar conclusion from the abstract as you. The only negative I could draw from it is that with higher essay scores, one might expect higher knowledge gain, and that wasn't present.
However, I agree that that doesn’t really seem to be a negative over other methods.
I have found ChatGPT is pretty good at explaining topics when the source documentation is poorly written or lacks examples. Obviously it does make mistakes so skepticism in the output is a good idea.
Yeah, the abstract could use a bit more work. The gist of it is that being in a closed loop with ChatGPT helps only with the task at hand, not with engaging in the full learning process. Instead they say "When using AI in learning, learners should focus on deepening their understanding of knowledge and actively engage in metacognitive processes such as evaluation, monitoring, and orientation, rather than blindly following ChatGPT's feedback solely to complete tasks efficiently."
This is not a concern when you are responsible for real results. If you aren’t responsible for real results you can pass off the good rhetoric of these models as an “answer”. But when you need results you realize most answers they give are just rhetoric. They are still extremely valuable, but they can only help you when you have done the work to get deep understanding of the problem, incentivized by actually solving it.
> Our research found that in the absence of differences in motivation, learners with different supports still exhibited different self-regulated learning processes, ultimately leading to differentiated performance.
That's the most convoluted conclusion I've ever seen.
> What is particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”.
Calculator laziness has long been known. But it causes specific laziness, not meta-laziness.
What did the researchers expect?
Humans are lazy by nature, they seek shortcuts.
So given the choice between years of rote learning, for an education which in most cases is simply a soon-to-be-forgotten certification, versus watching TikTok while letting ChatGPT do the lifting - this is all predictable, even without Behavioral Design, Hooked, etc.
And that usually the benefits rise with IQ level - nothing new here, that’s the very definition of IQ.
Learning and academia is hard, and even harder for those with lower IQ scores.
A fool with a tool is still a fool and vice versa.
Motivation also seems to be at an all-time low. Why put in hours when a prompt can work wonders?
Reading a book is a badge of honor nowadays more than ever.
> the benefits rise with IQ level - nothing new here, that’s the very definition of IQ
This is not obvious to me, and certainly is not the "definition" of IQ. There are tools that become less useful the more intelligent you are, such as multiplication tables. IQ is defined by a set of standardized tests that attempt to quantify human intelligence, and has some correlations with social, educational and professional performance, but it's not clear why it would help with use of AI tools.
> So given the chance to go rote learning for years for an education which in most cases is simply a soon to be forgotten certification vs watching TikTok while letting ChatGPT do the lifting - this is all predictable, even without Behavioral Design, Hooked etc.
Would you argue that having books/written words also made people more lazy and be able to remember less? Because some people argued (at the time) that having written words would make humanity less intellectual as a whole, but I think consensus is that it led to the opposite.
In my recent programming exam (in an MSc in AI), I asked students to reflect on how generative AI has changed their coding. Almost all remarked that it's a great time-saver, but it makes them lazy and worse at coding.
And yes indeed, their ability to answer basic questions about coding on the same exam has drastically dropped versus last year.
Is the problem the use of AI in coding, or using AI in coding in a curriculum designed without that assumption? Because if AI is an effort-saver, then a curriculum that isn't designed with its use in mind will just result in the students doing less work, in which case learning less is unsurprising - but that's not really an "AI makes you less knowledgeable" problem, it's an "insufficiently challenging curriculum for the actual context" problem.
Preprint: https://arxiv.org/abs/2412.09315
Thanks for the link, but clearly no one is reading it. Which is super ironic, they aren't even summarizing it with AI and using that information.
Most folks are projecting what the title says into their own emotion space and then riffing on that.
The authors even went so far as to boil the entire paper down into bullet points; you don't even need the PDF.
> Most folks are projecting what the title says into their own emotion space and then riffing on that.
Yeah, or the abstract which is a bit vague.
The bullet points below the abstract are basically the paper without reading it.
My observation is that I learn more than ever using LLMs.
I tend to learn asking questions, I did this using Anki cards for years (What is this or that?) and find the answer on the back of the index card. Questions activate my thinking more than anything, and of course my attempt at answering the question in my own terms.
My motto is: Seek first to understand, then to be understood (Covey). And I do this in engaging with people or a topic—-by asking questions.
Now I do this with LLMs. I have been exploring ideas I would never have explored had there not been LLMs, because I would not have had the time to research material for learning, read it, and create Q&A material for myself.
I even use LLMs to convert an article into Anki cards using Obsidian, Python, LLMs, and the Anki app.
Crazy times we are in.
> Questions activate my thinking more than anything, and of course my attempt at answering the question in my own terms.
This is very well-studied: https://en.wikipedia.org/wiki/Testing_effect [not a high-quality article, but should give an overview]
What does your workflow look like?
I use OpenAI function calling and a template that forces the LLM to generate questions and answers from a text, in a format that can be synced into the Anki app. Very straightforward workflow.
Very interesting, would love a more detailed tutorial on setting something similar up
Yeah I've found the same. I might have some surface understanding of some topic and I like just asking "am I right in thinking this and this about this?" Or "Tell me why I'm wrong about this".
I don’t see how the “metacognitive laziness” (a term used by the abstract, but not defined) follows from what they describe in the abstract as the outcomes they observed. They specifically called out no difference in post-task intrinsic motivation; doesn’t that imply that the ChatGPT users were no lazier after using ChatGPT than they were before?
I’m also a skeptic of students using and relying on ChatGPT, but I’m cautious about using this abstract to come to any conclusions without seeing the full paper especially given that they’re apparently using “metacognitive laziness” in a specific technical way we don’t know about if we haven’t read the paper.
I think this holds water.
Metacognition is really how the best of the best can continue to be at their best.
And if you don't use it, you lose it.
https://x.com/redshirtet/status/1879922330983358941
Idk, the "explain {X} to me like I'm 12" approach has certainly helped me delve into new topics; Nix with Flakes comes to mind as one of my latest ventures.
How's this any different than someone 5+ years ago blindly going by whatever a Google result said about anything? I've run into conflicting answers to things off Google's first page of results, some things aren't 100% certain and require more research.
I'm not surprised if this makes some people lazier, since you don't need to do the legwork of reading, but how many people already read only the headline of an article before sharing it?
Part of it I think is the confidence with which LLMs return answers
> How's this any different than someone 5+ years ago blindly going by whatever a Google result said about anything
You can interrogate it at least. "Are you sure that's the correct answer? Re-think from the beginning without any assumptions" and you'll get a checklist you can mentally/practically go through yourself to validate.
True, so I guess what needs to happen is people using AI need to be informed on how to use it more accurately so they're actually learning source material and not just taking garbage / cheating on coursework.
So we need to train inquisitive curious thinkers that look at things from all angles and understand why they know something.
A friend and I were talking not too long ago about how people these days don't practice critical thinking. It might be worthwhile for not just schools but parents everywhere to teach their kids to think more critically and to ask the right questions when presented with new information.
Differences of degree, not of kind
> How's this any different than someone 5+ years ago blindly going by whatever a Google result said about anything?
It has "AI" in the title, so it's a hot take.
For code: at least you used to have to copy and paste it, and then modify it, even if ever so slightly, to make it fit your code.
Now, "Claude, fix that for me".
We destroyed our artists for a mash up and then wondered why there was nothing new under the sun.
Inevitably, the advancement of automated knowledge generation will have the same mental effect as having a contact list on your phone. When I was a kid I knew at least 5 people's phone numbers, maybe more. Even now I can recall 2 of them. How many can you recall from your actual contact list?
It's increasing my curiosity because it allows me to run more experiments.
Exactly this. While I might scratch the surface of some topics, it helps me cast a wider net of cognitive exploration in the free time I have. This in turn leads me into deeper rabbit holes for things that pique my interest, leading to faster iteration of the knowledge tree, while also providing me with a way to estimate my understanding of the topic.
The paper says that LLM usage doesn't appear to move baseline curiosity. Thanks aithrowawaycomm for https://arxiv.org/pdf/2412.09315
Ridiculous that academic work on the technology of education is behind a paywall and not open access. Stinks.
This technology is arguably as ubiquitous as a calculator. So long as I understand that generative AI is a tool and not a solution, is it bad to treat it a bit like a calculator? Does this metacognitive laziness apply to those who depend on calculators?
I understand it's a bit apples to oranges, but I'm curious what people's take is.
I am definitely lazier today in regards to doing math in my head compared to when I was young.
I think a comparison with calculators is possible, but the degree to which calculators are capable of assisting us is so incomparably smaller that the comparison would be meaningless.
Smart phones changed society a lot more than calculators did and now AI is starting to do the same, albeit in a more subtle manner.
Treating AI like it's just a calculator seems naïve/optimistic. We're still reeling from the smartphone revolution and have not solved many of the issues it brought upon its arrival.
I have a feeling the world has become a bit cynical and less motivated to debate how to approach these major technological changes. There have been too many of them in too short a time, and now everyone has a whatever attitude towards the problems these advancements introduce.
That just demonstrates the difference between idiots and intelligent people. I use AI and ChatGPT to learn more efficiently about a zillion topics I am interested in.
Funny, I passed the link to a whatsapp group with some friends and the preview loaded with the title "error: cookies turned off".
I'm sure my friends will RUSH to read the article now...
This is the old "siiiiiir why do we need to do this if we have calculators"? It matters - https://www.edweek.org/education/little-numbers-add-up-to-bi... Students who know the facts will be better at math.
Even if the computer is doing all the thinking, it's still a tool. Do you know what to ask it? Can you spot a mistake when it messes up (or you messed up the input)? Can you simplify the problem and figure out what the important parts of the problem are? Do you even know to do any of that?
Sure, thinking machines will sometimes be autonomous and not need you to touch them. But when that's the case, your job won't be to just nod along to everything the computer says, you won't have a job anymore and you will need to find a new job (probably one where you need to prompt and interpret what the AI is doing).
And yes, there will be jobs where you just act as an actuator for the thinking machine. Ask an Amazon warehouse worker how great a job that is :/
Everything is the same as with calculators.
same is true for google, gps, etc.
“The kids these days are too lazy to be bothered to learn” is a psychological trap that people often fall into.
It’s not to say we shouldn’t do our best to understand and provide guardrails, but the kids will be fine.
Can you point me to the generation that had ready access to AI on their hands, answering all their questions?
"People have been complaining about this for thousands of years" is a potent counterargument to a lot of things, but it can't be applied to things that really didn't exist even a decade ago.
Moreover, the thing that people miss about "people have been complaining about this for thousands of years" is that the complaints have often been valid, too. Cultures have fallen. Civilizations have collapsed. Empires have disintegrated. The complaints were not all wrong!
And that's on a civilization-scale. On a more mundane day-to-day scale, people have been individually failing for precisely the same reasons people were complaining about for a long time. There have been lazy people who have done poorly or died because of it. There have been people who refused to learn who have done poorly or died because of it.
This really isn't an all-purpose "just shrug about it and move on, everything's been fine before and it'll be fine again". It hasn't always been fine before, at any scale, and we don't know what impact unknown things will have.
To give a historical example... nay, a class of historical examples... there are several instances of a new drug being introduced to a society, and it ripping through that society that had no defenses against it. Even when the society survived it, it did so at great individual costs, and "eh, we've had drugs before" would not have been a good heuristic to understand the results with. I do not know that AIs just answering everything is similar, but at the moment I certainly can't prove it isn't either.
I mean, sometimes it's true, even in the past. I could very clearly see among my generation (older Gen Z) that there were plenty of people, literally at university, who were barely willing or able to learn. Compare that to the generation of my much older half-siblings (Gen X, older millennials): they don't even seem to grasp the concept of not being fully engaged with your university degree.
Most people my age will tell you that they stopped reading as teenagers because of the effect of smartphones. I was a voracious reader and only relearned to read last year, 10 years after I got my first smartphone as an older teenager. These things are impactful and have affected a lot of people's potential. They have also made our generation very prone to mental health issues, something that is incredibly palpable if you are within Gen Z social circles like I am. It's disastrous and cannot be overstated. I can be very sure I would be smarter and happier if technology had stagnated at the level it was at when I was a younger child/teen. The old internet and personal computers, for example, only helped me explore my curiosity. Social media and smartphones have only destroyed it. There are qualitative differences between some technological advancements.
Not to mention the fact that Gen Alpha are shown to have terrible computer literacy because of the ease of use, the discouragement of customisation, and the corporate monopoly over smartphones. This bucks the trend, from Gen X to Gen Z, of generations becoming more and more computer-native. Clearly, upward trends in learning due to advancements in technology can be reversed. They do not always go up.
If kids do not learn independent reasoning because of reliance on LLMs, yes, that will make people stupider. Not all technology improves things. I watched a really great video recently where someone explained the change in the nature of presidential debates through the ages. In Victorian times, they consisted of hours-long oratory on each side, with listeners following attentively. In the 20th century the speeches gradually became a little shorter and more questions were added to break things up. In the most recent debates, every question comes with a less-than-a-minute answer, simpler vocabulary, few hard facts or statistics, etc. These changes map very well to changes in the depth at which people were able to think, given the primary information source they were using. There is a good reason why reading is still seen as the most effective form of deep learning despite technological advancement. Because it is.
I wonder how will LLMs learn anything new when no one does original research and just asks the LLM? Will LLMs just feed back on each other, effectively hallucinating false "learning"?
Maybe we'll end up as a society of a few elites who still know how to research, think, and/or write with LLMs digesting that and regurgitating it for the masses.
Any time an empirical research project has to add QUOTES around a common term, it sets off the nonsense radar:
..."laziness"...
In the battle cry of the philosopher: DEFINE YOUR TERMS!!
What they really mean: new and different. Outside-the-box. "Oh no, how will we grade this?!?" A threat to our definition and control of knowledge.
I mean, this is the exact same thing that happened when calculators were invented. The number of people who can count in their heads drastically dropped, because why waste your time? Ditto for when maps apps came out: no more need to memorize a bunch of locations, because you can just use maps to take you there.
It's funny, the calculators were incredibly politicized when I was growing up (TI84 generation, so kids were getting caught programming functions to solve exam questions) but GPS was just taken as a given.
I feel this, because it’s like I don’t need to know about something, I just need to know how to know about something. Like, the initial contact with a mystery subject is overcome by knowing how to describe the mystery in a way that AI understands what I don’t understand, and seeks to fill in the understanding.
An example, I have no clue about React. I do know why I don’t like to use React and why I have avoided it over the years. I describe to some ML tool the difficulties I’ve had learning React and using it productively .. and voila, it plots a chart through the knowledge that, kinda, makes me want to learn React and use it.
It's like, the human ability to form an ontology in the face of mystery, even if it is inaccurate or faulty, allows the AI to take over and plot an ontological route through the mystery into understanding.
Another thing I realized lately, as ML has taken over my critical faculties, is that it's really only useful for things that are already known by others. I can't ask ML to give me some new, groundbreaking idea about something - everything it suggests has already been thought, somewhere, by a real human - and thus it's not new or groundbreaking. It's just contextually - in my own local ontological universe - filling in a mystery gap.
Pretty fun times we’re having, but I do fear for the generations that will know and understand no other way than to have ML explain things for them. I don’t think we have the ethics tools, as cultures and societies, to prevent this from becoming a catastrophe of glib, knowledge-less folks, collapsing all knowledge into a raging dumpster fire of collective reactivity, but I hope someone is training a model, somewhere, to rescue us from this, somehow ..
> But when they came to writing, Theuth said: “O King, here is something that, once learned, will make the Egyptians wiser and will improve their memory; I have discovered a potion for memory and for wisdom.” Thamus, however, replied: “O most expert Theuth, one man can give birth to the elements of an art, but only another can judge how they can benefit or harm those who will use them. And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.
--Socrates on writing
That's an interesting and very fitting quote. It's basically saying that since we can now write down information, people will get lazier about remembering things. Basically the exact same claim as the submission article.
I think there is some validity to the nature of generational knowledge loss through differing information systems. At one end of the scale, you’ve got 80,000 year old stories, still being told - at the other end of the scale, you’ve got App Of The Day™ style social media, and kids who can’t write an email, use a dictionary, or read a book.
This is no hyperbole - humans have to constantly fight the degeneracy of our knowledge systems, which is to say that knowledge has to be generated and communicated - it can’t just “exist” and be useful, it has to be applied to be useful. Technology of knowledge which doesn’t get applied, does not persist, or if it does (COBOL), what once was common becomes arcane.
So, if there is hope, it lies with the proles: the way everyday people use ML is probably the key to all of this. It's one thing to know how to prompt an LLM to give you a buildable source tree; it's another thing entirely to use it to figure out what to make out of the leftover ingredients in the fridge.
Those recipes and indeed the applications of the ingredients, are based on human input and mores.
So the question for me, still really unanswered, is: How long will it take until those fridge-ingredient recipes become bland, tasteless and grey?
I think this underscores the imperative that AI and ML must never become so pervasive that we don't, also, write things down for ourselves. Oh, and read a lot, of course.
It seems, we need to stop throwing books away. Oh, and encourage kids to cook, and create their own recipes... hopefully they’ll have time and resources for that kind of lifestyle…
Socrates is just the next monkey in line. As human monkeys, we have already traded (short-term) memory for abstract thinking and who knows what else.
I guess that is the curse of evolution/specialization.
No doubt, this curse (which is also missing generalization, i.e. evolution/generalization/specialization) is all for the sake of self-awareness, or at least, awareness, of some particular thing.
As long as humans remain aware that they are engaging with an AI/ML, we might still have a chance. Computers definitely need to be identifiable as such.
As technology gets more impressive, we internalize less knowledge ourselves.
There is a "plato" story on how he laments the invention of writing because now people don't need to memorize speeches and stuff.
I think there is a level of balance. Writing gave us enough efficiencies that the learned laziness made us overall more effective.
The internet in 2011 made us a bit less effective. I am not gonna lie; I leaned a lot more on being able to get resources, whereas before I would have had to struggle on my own to solve a problem. You internalize one more than the other, but is it worth the additional time every time?
I worry about current students learning through LLMs just like I would worry about a student in 2012 graduating in physics when such a student had constant access to wolfram alpha.