Step into the Massachusetts Institute of Technology (MIT) Media Lab in Cambridge, US, and the future feels a little closer. Glass cabinets display prototypes of weird and wonderful creations, from tiny desktop robots to a surrealist sculpture created by an AI model prompted to design a tea set made from body parts. In the lobby, an AI waste-sorting assistant named Oscar can tell you where to put your used coffee cup. Five floors up, research scientist Nataliya Kosmyna has been working on wearable brain-computer interfaces she hopes will one day enable people who cannot speak, due to neurodegenerative diseases such as amyotrophic lateral sclerosis, to communicate using their minds.
Kosmyna spends a lot of her time reading and analysing people’s brain states. Another project she is working on is a wearable device – one prototype looks like a pair of glasses – that can tell when someone is getting confused or losing focus. Around two years ago, she began receiving out-of-the-blue emails from strangers who reported that they had started using large language models such as ChatGPT and felt their brains had changed as a result. Their memories didn’t seem as good – was that even possible, they asked her? Kosmyna herself had been struck by how quickly people had already begun to rely on generative AI. She noticed colleagues using ChatGPT at work, and the applications she received from researchers hoping to join her team started to look different. Their emails were longer and more formal and, sometimes, when she interviewed candidates on Zoom, she noticed they kept pausing before responding and looking off to the side – were they getting AI to help them, she wondered, shocked. And if they were using AI, how much did they even understand of the answers they were giving?
With some MIT colleagues, Kosmyna set up an experiment that used an electroencephalogram to monitor people’s brain activity while they wrote essays, either with no digital assistance, or with the help of an internet search engine, or ChatGPT. She found that the more external help participants had, the lower their level of brain connectivity, so those who used ChatGPT to write showed significantly less activity in the brain networks associated with cognitive processing, attention and creativity.
In other words, whatever the people using ChatGPT felt was going on inside their brains, the scans showed there wasn’t much happening up there.
The study’s participants, who were all enrolled at MIT or nearby universities, were asked, right after they had handed in their work, if they could recall what they had written. “Barely anyone in the ChatGPT group could give a quote,” Kosmyna says. “That was concerning, because you just wrote it and you do not remember anything.”
Kosmyna is 35, trendily dressed in a blue shirt dress and a big, multicoloured necklace, and she speaks faster than most people can think. As she observes, writing an essay requires skills that are important in our wider lives: the ability to synthesise information, consider competing perspectives and construct an argument. You use these skills in everyday conversations. “How are you going to deal with that? Are you going to be, like, ‘Err … can I just check my phone?’” she says.
The experiment was small (54 participants) and has not yet been peer reviewed. In June, however, Kosmyna posted it online, thinking other researchers might find it interesting, and then she went about her day, unaware that she had just created an international media frenzy.
Alongside requests from journalists, she received more than 4,000 emails from around the world, many from stressed-out teachers who feel their students aren’t learning properly because they are using ChatGPT to do their homework. They worry AI is creating a generation who can produce passable work but don’t have any usable knowledge or understanding of the material.
The fundamental issue, Kosmyna says, is that as soon as a technology becomes available that makes our lives easier, we’re evolutionarily primed to use it. “Our brains love shortcuts, it’s in our nature. But your brain needs friction to learn. It needs to have a challenge.”
If brains need friction but also instinctively avoid it, it’s interesting that the promise of technology has been to create a “frictionless” user experience, to ensure that, as we slide from app to app or screen to screen, we will meet no resistance. The frictionless user experience is why we unthinkingly offload ever more information and work to our digital devices; it’s why internet rabbit holes are so easy to fall down and so hard to climb out of; it’s why generative AI has already integrated itself so completely into most people’s lives.
We know, from our collective experience, that once you become accustomed to the hyperefficient cybersphere, the friction-filled real world feels harder to deal with. So you avoid phone calls, use self-checkouts, order everything from an app; you reach for your phone to do the maths sum you could do in your head, to check a fact before you have to dredge it up from memory, to input your destination on Google Maps and travel from A to B on autopilot. Maybe you stop reading books because maintaining that kind of focus feels like friction; maybe you dream of owning a self-driving car. Is this the dawn of what the writer and education expert Daisy Christodoulou calls a “stupidogenic society”, a parallel to an obesogenic society, in which it is easy to become stupid because machines can think for you?
Human intelligence is too broad and varied to be reduced to words such as “stupid”, but there are worrying signs that all this digital convenience is costing us dearly. Across the economically developed countries of the Organisation for Economic Co-operation and Development (OECD), Pisa scores, which measure 15-year-olds’ performance in reading, maths and science, tended to peak around 2012. And while IQ scores rose globally over the 20th century, perhaps thanks to improved access to education and better nutrition, in many developed countries they now appear to be declining.
Falling test and IQ scores are the subject of hot debate. What is harder to dispute is that, with every technological advance, we deepen our dependence on digital devices and find it harder to work or remember or think or, frankly, function without them. “It’s only software developers and drug dealers who call people users,” Kosmyna mutters at one point, frustrated at AI companies’ determination to push their products on to the public before we fully understand the psychological and cognitive costs.
In the ever-expanding, frictionless online world, you are first and foremost a user: passive, dependent. In the dawning era of AI-generated misinformation and deepfakes, how will we maintain the scepticism and intellectual independence we’ll need? By the time we agree that our minds are no longer our own, that we simply cannot think clearly without tech assistance, how much of us will be left to resist?
Start telling people that you’re worried about what intelligent machines are doing to our brains and there’s a risk that, in the not-too-distant future, everyone will laugh at what a fuddy-duddy you were. Socrates worried that writing would weaken people’s memories and encourage only superficial understanding: not wisdom but “the conceit of wisdom” – an argument that is strikingly similar to many critiques of AI. What happened instead was that writing and the technological advances that followed – the printing press, mass media, the internet era – meant that ever more people had access to ever more information. More people could develop great ideas, and they could share those ideas more easily, and this made us cleverer and more innovative, as individuals and as communities.
After all, writing didn’t only change how we access and retain information; it changed how we think. A person can achieve more complex tasks with a pen and paper to hand than without: most people can’t work out 53,683 divided by 7 in their head but could have a stab at doing long division on paper. I couldn’t have dictated this piece, but writing helped me organise and clarify my thoughts. As humans, we’re very good at what experts call “cognitive offloading”, namely using our physical environment to reduce our mental load, and this in turn helps us achieve more complex cognitive tasks. Imagine how much harder it would be to function each day without a calendar or phone reminders, or without Google to remember everything for you. In the best-case scenario, intelligent people working in partnership with intelligent machines will achieve new intellectual feats and solve tricky problems: we’re already seeing, for instance, how AI can help scientists discover new drugs faster and doctors detect cancer earlier and more efficiently.
The complication is, if technology is truly making us cleverer – turning us into efficient, information-processing machines – why do we spend so much time feeling dumb?
Last year, “brain rot” was named Oxford University Press’s word of the year, a term that captures both the specific feeling of mindlessness that descends when we spend too much time scrolling through rubbish online and the corrosive, aggressively dumb content itself, the nonsense memes and AI garble. When we hold our phones we have, in theory, most of the world’s accumulated knowledge at our fingertips, so why do we spend so much time dragging our eyeballs over dreck?
One issue is that our digital devices have not been designed to help us think more efficiently and clearly; almost everything we encounter online has been designed to capture and monetise our attention. Each time you reach for your phone with the intention of completing a simple, discrete, potentially self-improving task, such as checking the news, your primitive hunter-gatherer brain confronts a multibillion-pound tech industry devoted to throwing you off course and holding your attention, no matter what. To extend Christodoulou’s metaphor, in the same way that one feature of an obesogenic society is food deserts – whole neighbourhoods in which you cannot buy a healthy meal – large parts of the internet are information deserts, in which the only available brain food is junk.
In the late 90s the tech consultant Linda Stone, who was working as a professor at New York University, noticed that her students were using technology very differently from her colleagues at Microsoft, where she also worked. While her Microsoft colleagues were disciplined about working on two screens – one for emails, perhaps, and another for Word, or a spreadsheet – her students seemed to be trying to do 20 things at once. She coined the term “continuous partial attention” to describe the stressful, involuntary state we often find ourselves in when we’re trying to toggle between several cognitively demanding activities, such as responding to emails while on a Zoom call. When I first heard the term I realised that I, like most people I know, live most of my life in a state of continuous partial attention, whether I’m guiltily checking my phone when I’m supposed to be playing with my kids, or incessantly sidetracked by texts and emails when I’m trying to write, or trying to relax while watching Netflix and simultaneously doing an online food shop, still wondering why I feel as chilled-out as an over-microwaved dinner. Digital multitasking makes us feel productive, but this is often illusory. “You have a false sense of being on top of things without ever getting to the bottom of anything,” Stone tells me. It also makes you feel permanently on edge: one study she conducted found that 80% of people experience “screen apnea” when checking their emails: they become so caught up in the endless notifications that they forget to breathe properly. “Your fight or flight system becomes up-regulated, because you’re constantly trying to stay on top of things,” she says, and this hypervigilance has cognitive costs: it makes us more forgetful, worse at making decisions and less attentive.
Continuous partial attention helps explain both brain rot as a mental state – because what is it if not cognitive overwhelm, the point at which you stop resisting the onslaught of digital distraction and allow your brain to rest in the internet’s warm, murky shallows? – and the existence of the online slop itself. After all, what matters to tech companies financially is not that you want to be reading what you’re reading, or that you love what you listen to or what you’re looking at, only that you are unwilling or unable to pull yourself away. This is why streaming services such as Netflix crank out bland, formulaic films that are euphemistically labelled “casual viewing” and are literally designed for viewers who aren’t really watching, and Spotify playlists are filled with generic stock music by fake artists, to provide background music, “Chill Out” or “Party” vibes, for listeners who aren’t really listening. In short, the modern internet doesn’t necessarily make you an idiot, but it definitely primes you to act like one.
It is into this climate that generative AI arrived, with an entirely novel offer. Until recently you could only outsource remembering and some data processing to technology; now you can outsource thinking itself. Given that we spend most of our lives feeling overstimulated and frazzled, it’s little wonder that so many have jumped at the chance to let a computer do more things we would have once done for ourselves – such as write work reports or emails, or plan a holiday. As we transition from the internet era to the AI era, what we’re consuming is not only ever more low-value, ultra-processed information, but more information that is essentially predigested, delivered in a way that is designed to bypass important human functions, such as assessing, filtering and summarising information, or actually considering a problem rather than finessing the first solution presented to us.
Michael Gerlich, head of the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, began studying the impact of generative AI on critical thinking because he noticed the quality of classroom discussions decline. Sometimes he’d set his students a group exercise, and rather than talk to one another they continued to sit in silence, consulting their laptops. He spoke to other lecturers, who had noticed something similar. Gerlich recently conducted a study, involving 666 people of various ages, and found those who used AI more frequently scored lower on critical thinking. (As he notes, to date his work only provides evidence for a correlation between the two: it’s possible that people with lower critical thinking abilities are more likely to trust AI, for example.)
Like many researchers, Gerlich believes that, used in the right way, AI can make us cleverer and more creative – but the way most people use it produces bland, unimaginative, factually questionable work. One concern is the so-called “anchoring effect”. If you pose a question to generative AI, the answer it gives you sets your brain on a certain mental path and makes you less likely to consider alternative approaches. “I always use the example: imagine a candle. Now, AI can help you improve the candle. It will be the brightest ever, burn the longest, be very cheap and amazing looking, but it will never develop to the lightbulb,” he says. To get from the candle to a lightbulb you need a human who is good at critical thinking, someone who might take a chaotic, unstructured, unpredictable approach to problem solving. When, as has happened in many workplaces, companies roll out tools such as the chatbot Copilot without offering decent AI training, they risk producing teams of passable candle-makers in a world that demands high-efficiency lightbulbs.
There is also the bigger issue that adults who use AI as a shortcut have at least benefited from going through the education system in the years before it was possible to get a computer to write your homework for you. One recent British survey found that 92% of university students use AI, and about 20% have used AI to write all or part of an assignment for them. Under these circumstances, how much are they learning? Are schools and universities still equipped to produce creative, original thinkers who will build better, more intelligent societies – or is the education system going to churn out mindless, gullible, AI essay-writing drones?
Some years ago, Matt Miles, a psychology teacher at a high school in Virginia in the US, was sent on a training programme on tech in schools. The teachers were shown a video in which a schoolgirl is caught checking her phone during lessons. In the video, she looks up and says, “You think I’m just on TikTok or playing games. I’m actually in a research room talking to a water researcher from Botswana for a project.”
“It’s laughable. You show it to the kids and they all laugh, right?” Miles says. Alarmed at the disconnect between how policymakers view tech in education and what teachers were seeing in the classroom, in 2017 Miles and his colleague Joe Clement, who teaches economics and government at the same school, published Screen Schooled, a book that argued that technology overuse is making kids dumber. In the years since, smartphones have been banned from their classrooms, but students still work from their laptops. “We had one kid tell us, and I think it was pretty insightful, ‘If you see me on my phone, there’s a 0% chance I’m doing something productive. If you see me on my laptop, there’s a 50% chance,’” Miles says.
Until the pandemic, many teachers were “rightly sceptical” about the benefits of introducing more technology into the classroom, Faith Boninger, a researcher at the University of Colorado, observes, but when lockdowns forced schools to go online, a new normal was created, and ed tech platforms such as Google Workspace for Education, Kahoot! and Zearn became ubiquitous. With the spread of generative AI came new promises that it could revolutionise education and usher in an era of personalised student learning, while also reducing the workload for teachers. But almost all the research that has found benefits to introducing tech in classrooms is funded by the ed tech industry, and most large-scale independent research has found that screen time gets in the way of achievement. A global OECD study found, for instance, that the more students use tech in schools, the worse their results. “There is simply no independent evidence at scale for the effectiveness of these tools … in essence what is happening with these technologies is we’re experimenting on children,” says Wayne Holmes, a professor of critical studies of artificial intelligence and education at University College London. “Most sensible people would not go into a bar and meet somebody who says, ‘Hey, I’ve got this new drug. It’s really good for you’ – and just use it. Generally, we expect our medicines to be rigorously tested, we expect them to be prescribed to us by professionals. But suddenly when we’re talking about ed tech, which apparently is very beneficial for children’s developing brains, we don’t need to do that.”
What worries Miles and Clement is not only that their students are permanently distracted by their devices, but that they will not develop critical thinking skills and deep knowledge when quick answers are only a click away. Where once Clement would ask his class a question such as, “Where do you think the US ranks in terms of GDP per capita?” and guide his students as they puzzled over the solution, now someone will have Googled the answer before he’s even finished his question. They know students use ChatGPT constantly and get annoyed if they aren’t provided with a digital copy of their assignment, because then they must type rather than copy and paste the relevant questions into an AI assistant or the Google search bar. “Being able to Google something and providing the right answer isn’t knowledge,” Clement says. “And having knowledge is incredibly important so that when you hear something that’s questionable or maybe fake, you think, ‘Wait a minute, that contradicts all the knowledge I have that says otherwise, right?’ It’s no wonder there’s a bunch of idiots walking about who think that the Earth is flat. Like, if you read a flat Earth blog, you think, ‘Ah, that makes a lot of sense’ because you don’t have any understanding or knowledge.” The internet is already awash with conspiracy and misinformation, something that will only become worse as AI hallucinates and produces plausible fakes, and he worries that young people are poorly equipped to navigate it.
During the pandemic, Miles says, he found his young son weeping over his school-issued tablet. His son was doing an online maths program and had been tasked with making six using the fewest possible tokens worth one, three and five. He kept suggesting two threes, and the computer kept telling him he was wrong. Miles tried one and five, which the computer accepted. “That’s kind of the nightmare you get with a non-human AI, right?” Miles observes: students often approach topics in unanticipated and interesting ways, but machines struggle to cope with idiosyncrasy. Listening to his story, however, I was struck by a different kind of nightmare. Maybe the dawn of the new golden era of stupidity doesn’t begin when we submit to super-intelligent machines; it starts when we hand over power to dumb ones.