Want to join in? Respond to our weekly writing prompts, open to everyone.
from Nerd for Hire
Pittsburgh is a sports town. Especially when things are going well for the local teams, the rhythm of their seasons directly affects the pace of life in the city. I follow the local teams more closely some seasons than others, but I like that, even when I'm not actively keeping up with them, I know how they're doing by a kind of osmosis just from existing here.
And that's one of the things that's fun about sports, I think—it creates this collective experience that connects people across an entire region, a shared touchstone of identity that unites across other dividing lines. That also makes it useful for storytellers. It can be a setting to bring two disparate people together, or a telling detail that automatically anchors the reader in a specific place.
There are other things that make sports useful for writers, too. If you're looking for ways to build tension, for example, you can mine tons of them from sports. They have a ticking clock, built-in stakes, and natural good guys and bad guys. They're also active. The characters are constantly in motion and interacting with each other and their environment. Of course, writing about sports also has its challenges, which I've written about before on this blog, but it's still a setting and world that I like playing around in with my stories.
I've been in the mental world of sports writing lately because I've been in late-stage edits on a hockey time loop story that I've been fiddling around with for a while. Because that's another cool thing about sports—they don't need to live only in the real, expected world, but can have a role in fantasy worlds, alternate realities, and far futures. You also don't need to write about sports directly to use the ideas behind them as a device in your writing. I wrote up a few sports-themed writing prompts for an event related to the NFL draft that I participated in last week, and I figured since I was already in this headspace it was a good time to share a few of them here.
Sports come with their own unique language and terminology. Some of these are used across various sports—think about broad concepts like scoring points or penalties. But pretty much every sport also has its own set of words that define it. If a quarterback throws a touchdown, you know you're on a football field; if the winger shoots a one-timer on the power play, you're in a hockey arena.
That distinctive language can also be used metaphorically by applying it to other parts of life. We already do this on a daily basis in conversation. Idioms like step up to the plate, out of left field, jump the gun, on the ropes, and par for the course all started with sports before gaining traction as everyday expressions. All of these phrases are so commonly used that they can feel like cliches if you use them in creative work, but there are plenty more terms where those came from that aren't used as often in metaphoric language.
This prompt is going to play with some of them. To do it:
Pick a sport that you're familiar with. This doesn't need to be a professional team sport like football or baseball—it could be anything from racing to golf to bowling to running marathons.
Think of an important relationship in your life, then brainstorm some ways that concepts or terms from your chosen sport could apply in a metaphorical way to aspects of that relationship.
Write a poem or scene incorporating those words or ideas.
While we're thinking about language, let's take a second to play with voice. One voice that's shared by any televised sport is that of the commentator's play-by-play. The pace of this varies depending on what sport it is, from the frenetic high-energy calls of soccer, hockey, or basketball to the more measured and low-key commentary behind a baseball game or golf tournament. But regardless of the sport, these have some things in common. When the athletes are active, the commentary describes what's happening (making use of lots of those unique terms we talked about in the last prompt). During lulls, the commentators fill in the space with context: facts about the players, how the outcome of this match will impact the standings, or historical context on past matches and how this one compares.
This kind of format could be applied to a story, too, and we're going to play with that form in this prompt. To start, pick a moment to focus on. You could use a moment from a work in progress that you want to play with from a different angle, or come up with something new (if you're coming up with a new one, also take a second to think about the character(s) involved and where this moment is taking place).
Once you've figured out those details, write the moment in the style of a sports play-by-play. Try to emulate the idea flow of sports commentary along with the voice, using the lulls between actions in the present scene to fill in any necessary details about the character(s) or the events that led to this moment.
Sports have a way of enduring even while culture around them changes. Some current pro teams were founded in the 19th century, like Sheffield FC in the UK or the Chicago Cubs here in the US. Of course, the sports have also evolved in the hundred-plus years that those teams have existed—if a fan from those early years were teleported to the stands of a modern game, a lot of things would be strange for them, but there'd still be aspects of the game and traditions that they'd recognize.
For this prompt, we're going to jump into the future another 100 years, to 2126, and picture the sports of the future. To start, pick a sport that exists in the present day and brainstorm what it might be like in 2126. What parts of the game do you imagine would stay the same, and what might change? What about the spectator experience—how might the way that fans watch games or engage with the sport evolve in the next century?
Once you've thought about that, imagine that a major championship event is about to happen. Write a scene or poem from a fan's perspective as they prepare to watch it. Think about how the sport plays into the character's identity and day-to-day life, and how the world around the sport has changed, too, as you're writing through the moment.
Now that we've zoomed forward into the future, let's also take a second to linger on those long sports roots that I mentioned in the last prompt. In places where the same teams have played for decades, it's very common for that fandom to get passed down along with other family traditions. And this doesn't just happen with professional teams—the same can happen with college sports, or even the local high school team.
For this prompt, start by picturing a family that has rooted for the same team across at least three generations. This can be your own family or a made-up one, and you don't need to stick to real sports if you don't want to—this could be a fun way for speculative writers to explore a new aspect of a world they've built. Once you've decided on those basic details:
Brainstorm what traditions the family might have related to this sports fandom that span across three generations.
Now, think about how those different generations might do things differently in how they root for or watch their team of choice. These could be universal changes, like the shift from listening to games on the radio to watching on TV, or an individual change, like if one of the family members moved to a different country and now watches games at an expat bar.
For the last step, write a scene or poem that shows members from three generations of the same family watching the same game. They could be watching it together or separately, whatever works best for your characters. In the course of writing it, aim to highlight both the similarities and the differences in their experiences with watching the game.
See similar posts:
#WritingExercises #WritingAdvice #Sports
from SmarterArticles

On a Tuesday morning in a primary school on the outskirts of Melbourne, a nine-year-old is asked to work out, without help, why a character in a short story is lying to his mother. She reads the paragraph twice. She frowns. Then she reaches for the tablet on the desk beside her, not out of defiance, but out of something that looks more like a reflex, the way a left-handed child reaches for a pencil. Her teacher, watching from the back of the room, later describes the gesture as “the most ordinary thing in the world, and the most frightening thing I see all day”. The girl has been using a chatbot to answer comprehension questions since she was seven. When her teacher gently removes the tablet and asks her to try again, the girl sits very still for a long moment, and then she begins to cry. Not because she is upset about the story. Because she does not know where to start.
The teacher who told me this story, and who asked that neither she nor her school be named because the parents in her catchment are already litigious about screen-time policies, says she has been teaching for twenty-two years. She has seen phonics wars, whole-language revivals, iPads promised as the saviour of literacy and then quietly stripped from her classroom, a pandemic, a long tail of pandemic, and the slow arrival of tools she still struggles to describe without sounding apocalyptic or ridiculous. What she has not seen before, she says, is a child who reaches for a machine not to cheat, but because she genuinely does not understand that thinking is something a person can do by herself.
That scene, or some version of it, is the one haunting a quieter argument now running beneath the louder one about AI and work. The loud argument is about jobs: which ones the models will take, which ones they will refashion, whether the productivity dividend will be broadly shared or narrowly hoarded. It is a serious argument, and it is the argument most of the research funding is chasing. But the quieter one, the one that turns up in developmental psychology journals, in Senate committee testimony, in the footnotes of arXiv preprints, is about something else. It is about whether a generation of children is growing up in an environment where the mental work that would have built their minds is being done for them, so reliably and so invisibly, that nobody, not even the children themselves, will be able to tell what has been lost until the loss is structural and the windows for repair have already shut.
In March 2026, a piece called “Adults Lose Skills to AI. Children Never Build Them.” appeared on the Psychology Today site under the byline of a researcher writing in its Algorithmic Mind column. The argument it makes is small and precise, and once you have seen it, the rest of the debate looks blurry. Adults who hand cognitive tasks to AI, the piece says, are offloading skills they already possess. The capacity existed; the neural scaffolding was built; the effortful years of doing the thing for themselves left behind an internal model that persists even when the external crutch is taken away. An accountant who uses a spreadsheet still knows, in some muscle-memory way, how the calculation should go. A journalist who leans on autocomplete still has, somewhere, the instinct for the shape of a sentence. This kind of offloading is what the piece calls atrophy. It is recoverable. Pull the tool away, do the exercise for a while, and the capacity comes back, stiff at first and then easier, like a limb out of a cast.
What happens to children, the piece argues, is not atrophy. It is foreclosure. A child who has never learnt to structure an argument, but who has been using AI to structure arguments since she was seven, is not weakening a capacity she already owns. She is skipping the developmental step at which the capacity would have been assembled in the first place. There is no cast to remove because there is no limb underneath. And because the child has no independent baseline, no memory of a self who used to be able to do this without help, she cannot recognise what is missing. She cannot mourn what she never had. From the inside, foreclosure does not feel like a loss. It feels like the way the world has always been.
This is the framing that the wider AI-and-cognition debate has largely missed, and its usefulness is that it cuts cleanly through a conversation that has been going round in circles since at least the mid-2010s. The calculator analogy, which is the default comfort blanket reached for whenever anyone raises concerns about AI in classrooms, assumes an adult model of cognition: people who already know their times tables can use a calculator without forgetting them, so children who already know how to write can use a chatbot without forgetting how. The problem is that the second clause is doing an enormous amount of quiet work. It presupposes the very thing AI in early education calls into question, which is whether the children in front of the tablet ever acquired the underlying capacity to begin with.
The Psychology Today framing also clarifies why “AI is just the new calculator” has always been the wrong metaphor, even for adults. Calculators replaced a narrow, visible, easily measurable skill: arithmetic drill. You could tell, at a glance, whether a sixteen-year-old could do long division. You could not tell, at a glance, whether a sixteen-year-old could construct an argument, weigh contradictory evidence, or notice when a paragraph did not quite make sense. The cognitive work that large language models absorb is precisely the invisible, foundational, harder-to-assess kind. You do not find out what has been foreclosed until the child is twenty-three, in her first real job, staring at a problem that no prompt will dissolve.
The Psychology Today piece was not written in a vacuum. A few weeks earlier, Fortune had published a story, drawing on testimony the neuroscientist Jared Cooney Horvath gave to the United States Senate Committee on Commerce, Science, and Transportation in January 2026, with a headline sharp enough to survive the algorithmic churn: Gen Z, Horvath told senators, appeared to be the first generation in modern history to test as less cognitively capable than their parents. The follow-up Fortune story in March put a figure on the problem. The United States, the piece argued, had spent around thirty billion dollars since the mid-2000s replacing textbooks with laptops and tablets, and what it had bought for the money was not smarter children. It was the reversal of a century-long trend.
Horvath's headline claim is not, strictly, a claim about AI. It is a claim about screens, edtech, and the accumulated effects of two decades in which classrooms were rebuilt around the assumption that digital tools would make children sharper. What the actual data show, according to his Senate testimony, is something closer to the opposite. He cited the OECD's Programme for International Student Assessment, whose 2022 round, the most recent for which full results are public, recorded what the OECD itself described as an unprecedented drop in fifteen-year-olds' performance: reading down ten score points, mathematics down almost fifteen, compared with the 2018 cycle, with the mathematics decline three times larger than any previous consecutive change and not attributable solely to the pandemic. Science was flat. Reading had been drifting downward for about a decade. These are, by the OECD's own accounting, equivalent to roughly three-quarters of a year of lost learning, across the 81 participating countries and economies, involving around 700,000 children.
It is worth being careful about what Horvath did and did not say. He did not say that AI has broken the minds of Generation Z. The large language models that most worry the developmental psychologists arrived too recently to have shaped the cohorts PISA was measuring. What he said was that the decline began somewhere around 2010, which is the moment smartphones became ambient in teenagers' lives and the moment American school districts started buying laptops by the truckload. The declines, he added, cut across attention, memory, literacy, numeracy, executive function and general IQ. He argued that this is consistent with a structural mismatch between how human cognition develops and how digital platforms are engineered to harvest attention, fragment focus and reward task-switching. He also argued, importantly, that the effects appear to be environmental rather than genetic, and therefore at least in principle reversible.
Taken alone, the Horvath testimony would be a disputable but interesting data point. Taken together with the wider Flynn-effect-reversal literature, it becomes harder to wave away. The Flynn effect, named for the political scientist James Flynn, was the observation that IQ scores rose steadily, by roughly three points per decade, across most of the twentieth century in most of the developed world. It is one of the most replicated findings in psychometrics. What recent work, including the Bratsberg and Rogeberg sibling study in Norway, has found is that this rise began to stall in the 1990s and, in some countries, has reversed. Norway, Denmark, Finland, the United Kingdom and France have all produced cohorts whose measured IQ is lower than their parents'. The Bratsberg and Rogeberg work is particularly hard to explain away because it uses within-family comparisons, which rule out the usual dysgenic stories about immigration or differential fertility. Whatever is causing the reversal is environmental, which means it was built by choices and could be unbuilt by different ones.
This does not mean Horvath's stronger framing is uncontested. Critics point out, fairly, that the skills PISA tests, and the skills IQ tests were built to measure, are not the whole of cognition. Some of what looks like decline may be a genuine loss of older competences while newer ones, digital navigation, rapid information filtering, cross-modal search, are not being captured by instruments designed in the 1960s. Some of it may be a confound with the pandemic. Some of it may be a sampling artefact as participation rates drift. These are real objections. They are also, collectively, not enough to dispose of the trend. The honest reading of the evidence is that something is happening to the cognitive capacities of young people across several developed countries, that it predates generative AI by at least a decade, and that the arrival of generative AI has dropped an accelerant onto whatever fire was already lit.
The reason the Fortune story and the Psychology Today framing matter, and the reason they are more than just another moral panic about screens, is that there is a mechanism. The mechanism is old, well replicated, and wildly inconvenient for anyone who would like to believe that an AI tutor is the same as a human one with lower overheads.
Robert Bjork, the UCLA cognitive psychologist who, with his wife Elizabeth Bjork, spent the better part of four decades mapping how people actually learn, coined the term “desirable difficulties” in 1994. The phrase is counterintuitive by design. What Bjork's work showed, across hundreds of studies in his lab and elsewhere, is that conditions which make learning feel slower and harder in the moment, such as spacing practice sessions out, interleaving different topics, forcing yourself to retrieve an answer before checking it, generating your own examples, produce dramatically better long-term retention and transfer than conditions which make learning feel smooth. The cognitive struggle is not a bug on the way to understanding. It is the thing that builds the understanding. The feeling of effortful recall, the moment when your brain has to fetch something that is almost but not quite there, is, as far as anyone can tell, the moment at which the neural trace is strengthened. Easy learning is forgettable learning. Hard, but achievable, learning is the kind that lasts.
Retrieval practice, the Bjorks' most famous technique, is the clearest illustration. In a now-canonical 2006 study, the memory researchers Henry Roediger and Jeffrey Karpicke showed that students who spent part of their study time testing themselves on the material, rather than simply re-reading it, recalled roughly fifty per cent more of it a week later, even though in the moment the re-readers felt they knew the material better. The test-takers felt worse about their own learning and had actually learnt more. This gap between the feeling of fluency and the reality of competence is, for the Bjorks, the central pedagogical fact of the twentieth century, and it is exactly the fact that AI tools are engineered, by commercial necessity, to flatter.
Now consider what happens when a child faces a writing task and asks a chatbot to help. The child types a prompt. The model returns a draft. The child reads the draft, perhaps edits it, perhaps not, and submits. Somewhere in that loop, the part where the child had to sit with the blank page, feel the discomfort of not knowing where to start, retrieve the half-remembered fragment of an idea, generate a sentence and then judge whether the sentence was any good, has been excised. The child experiences a product. What has been bypassed is the process, and the process is the learning. The writing task, in Bjork's terms, has been stripped of every desirable difficulty that made it pedagogically useful in the first place, and what is left is a performance.
It is tempting to assume this is a problem only for writing. It is not. A preprint posted to arXiv by the Anthropic fellows Judy Hanwen Shen and Alex Tamkin in late January 2026, titled “How AI Impacts Skill Formation” (arXiv:2601.20245), ran a randomised controlled trial with fifty-two professional software engineers who used Python regularly but had not worked with Trio, a library for asynchronous programming. Half used an AI assistant to complete two feature-building tasks. Half did the tasks by hand. Both groups then took a comprehension quiz covering code reading, debugging, conceptual understanding and related competences. The AI-assisted engineers finished the tasks only marginally faster than the controls, but they scored seventeen percentage points lower on the comprehension quiz, fifty per cent versus sixty-seven per cent on average, with the steepest deficit in debugging. The paper's bluntest line is that AI assistance, in this setup, bought almost no productivity and cost a substantial chunk of learning.
The Shen and Tamkin paper is important for two reasons. The first is its methodological cleanness: it is a randomised trial, with adults, in a domain where the outputs can be scored objectively, and it still finds that AI use impairs skill formation. Adults are the easy case, the case the Psychology Today framing says should be recoverable, and the study shows the effect arriving even there. The second reason is the paper's subtler finding, which is that not all AI interactions are equivalent. The authors identify six distinct patterns of how participants used the model, and three of them, broadly, the ones where users asked the AI conceptual questions, asked for explanations of code rather than code itself, or treated the model as a tutor rather than a dictation machine, preserved learning outcomes. The other three did not. The difference is precisely the amount of effortful processing the user still did for themselves. When the AI absorbed the cognitive work, skill formation suffered. When the AI augmented the cognitive work without replacing it, skill formation survived.
This is the mechanism that explains why the child in the Melbourne classroom cried. For her, every piece of writing she had ever done was an interaction pattern in which the model absorbed the cognitive work. The capacity to sit with a blank page and do the effortful retrieval herself had not atrophied; it had never been built. When the scaffold was removed, there was nothing underneath it, because the scaffold, in her experience, was what a paragraph was.
Developmental neuroscience has a concept that makes all of this more alarming than it would otherwise be, and that is the concept of the critical period. The idea, first established in work on the visual cortex by David Hubel and Torsten Wiesel in the 1960s, which won them the Nobel Prize, is that brains are unusually plastic at specific points in development and then harden into something more fixed. If a kitten's eye is sewn shut during the critical period for binocular vision, the animal never develops normal depth perception, even after the eye is opened. The relevant machinery has simply been pruned away. The window closes. The brain moves on.
The critical-period literature has since been extended, with varying degrees of confidence, to language, hearing, phonological discrimination, some aspects of social cognition, and, more cautiously, to higher-order skills like executive function and abstract reasoning. Nobody serious claims that essay writing has a critical period in the Hubel-Wiesel sense. The developmental windows for the cognitive skills most relevant to schoolwork are longer, softer, more “sensitive periods” than hard critical ones, more like doors that gradually narrow than doors that slam. But the general principle holds: the brain you have at thirty is substantially shaped by which circuits got exercised between the ages of four and fourteen, and the circuits that do not get exercised are quietly pruned in favour of the ones that do. The developing brain is ruthless about not maintaining capacity it does not seem to need.
What Psychology Today's March 2026 piece is really proposing, if you follow the logic through, is that the sensitive period for a whole cluster of cognitive capacities, not just reading and writing but the habits of retrieval, argument, patience with uncertainty, willingness to sit inside a problem, is being spent in environments where those capacities are not needed, because something else is doing the work. The child is not lazy. The child is responding, correctly, to the affordances of her environment. If the environment rewards prompting over thinking, the environment will get children who are very good at prompting and have never developed the cognitive muscle for thinking. The pruning is not a moral failure. It is how brains work.
This is the part of the argument where sensible people want to reach for the calculator analogy again, and it is the part where the analogy most obviously breaks. Calculators do not build arguments or interpret metaphors or quietly suggest that your reasoning is unsound. They do one narrow thing. A large language model does the whole general-purpose cognitive stack. The relevant comparison is not “what happened to mental arithmetic when calculators arrived” but “what would happen to reading if, from the age of four, a machine read everything aloud for you, summarised it, and told you what to think about it”. We have reasonable confidence, from decades of reading research, that the answer would not be “children who read as well as their parents, plus more”. It would be children who never acquired the circuitry that reading builds, and who would struggle to acquire it later, because the window would be smaller and the pruning already done.
If foreclosure is the worry, the next question is how you would even know. This is the problem that makes the whole subject genuinely difficult, because the honest answer is: at the moment, you would not. Not in time.
Consider the instruments. PISA runs every three years and publishes results with a lag of about eighteen months. The most recent full cycle for which results exist is 2022. The next, 2025, will tell us something about the cohort of fifteen-year-olds who were twelve or thirteen when GPT-4 arrived, but it will tell us in 2026 or 2027, about a tool that reached maturity in late 2022, so the lag between capacity loss and its measurement is already around five years, and those are the fast instruments. Standardised tests administered in individual countries have their own lags, their own methodological controversies, their own periodic rewritings. IQ testing is rare, expensive and freighted with political baggage. The longitudinal studies that produced the Flynn-effect literature take decades to run and decades more to analyse. None of this machinery is built to detect a capacity collapse in real time.
Worse, the instruments we have are disproportionately good at measuring the things AI is already good at. A child who can prompt a chatbot to write a competent five-paragraph essay will produce a competent five-paragraph essay. The assessment, if it is marking surface features, will record a capable student. What the assessment cannot easily see is whether the child could have produced the essay without the machine, whether she could defend any of its claims under gentle questioning, whether she could identify the one sentence in it that is subtly wrong. The symptoms of foreclosure are, by construction, visible only in the conditions the test is not running. This is not a new problem in education. It is the old problem of fluency illusions, the Bjorks' observation that students routinely mistake the feeling of understanding for actual understanding, applied at population scale and accelerated by tools that are very good at generating the feeling.
There are earlier warning lights, but they are easy to miss. Teachers, if you ask them, will often tell you that something has changed. The sort of story the Melbourne teacher told me turns up in quiet rooms at education conferences more and more often: children who do not know how to begin, children who panic when the Wi-Fi goes down, children who can summarise a text without being able to explain what it meant, children who will tell you the answer is “whatever the AI said” and cannot say more. These are noisy anecdotes, easily dismissed as the usual generational grumbling. But teachers were also the first to notice that reading stamina was collapsing, years before any national test caught it, and the national tests eventually caught up. Anecdote at scale is data with the p-values stripped off.
Better instruments exist in principle. Cognitive load tasks, where a child is asked to reason aloud through a problem without a screen, can distinguish between the child who has internalised the process and the child who has only ever observed it. “Structured desisting” protocols, in which pupils are asked to complete a task the hard way while being observed, expose the difference between performance and competence. Neuropsychological batteries can pick up executive-function deficits that do not show up on content tests. None of these are new. All of them are more expensive, more intrusive and less media-friendly than a headline number. None of them are being rolled out at anything like the scale the problem would justify.
The deeper detection problem is temporal. Cognitive capacities, like compound interest, reveal themselves most obviously in the long run. A child who has not built argumentative stamina at nine may look fine at nine, because nine-year-olds are not asked to sustain long arguments. She may look fine at fourteen, when her assessments reward short-form production at which AI excels. The capacity she is missing only becomes load-bearing at nineteen, when she is asked to write a dissertation, or at twenty-six, when she is asked to lead a meeting nobody in the room quite understands, or at thirty-one, when she is the one expected to notice that a model's output is wrong. By that point, the window she would have needed to build the missing capacity in has long since narrowed, and the environment she is in has no incentive to reopen it.
This is what makes the foreclosure framing morally serious rather than merely alarming. If the worry were “children will do less well on tests next year”, we would notice next year. The worry is that children will do roughly as well on tests next year, and the year after, and the year after that, because the tests measure the thing the machine is doing, and the underlying cognitive formation will show up missing only much later, in contexts nobody is tracking, to people who have no baseline against which to know what they lost.
It is tempting, at this point in an argument of this kind, to reach for the policy conclusion most congenial to the writer's prior commitments. The restrictionists will want phone bans, chatbot bans, a return to pencils. The optimists will want more AI, of a better kind, with better pedagogical design, and will point, correctly, to the Shen and Tamkin finding that some interaction patterns preserve learning. Both of these are reflexes. Neither of them takes the detection problem seriously.
The harder thing to say is that if the Psychology Today framing is right, even approximately, the response has to be architectural rather than prohibitive. You cannot ban children out of the environment they live in. The environment is the internet, and the internet now has generative models woven into most of its surfaces, and that genie is not returning to its bottle. But you can, in the environments you control, engineer deliberate zones of desirable difficulty: places where the cognitive work is protected from outsourcing not because AI is bad, but because the work is the point. Classrooms that do some things on paper, not as a nostalgic gesture but as a cognitive-science intervention. Assessments that measure process, not just product. Homework that cannot be plausibly completed by a chatbot because it requires the child to explain her reasoning in real time, to a human, without a screen. The Danish school reforms Horvath cited in his Senate testimony, which pulled tablets out of early years and reintroduced pencils and books, are not a Luddite gesture. They are a bet that the developmental window matters more than the device.
Architectural responses also mean taking the detection problem as seriously as the problem itself. If we cannot know whether capacities are foreclosing until the cohort in question is adult and the window has shut, then the only responsible posture is to build, now, the instruments we will need then: longitudinal studies that follow today's seven-year-olds through to adulthood with periodic process-oriented assessments, funding for the boring, non-headline-grabbing work of measuring what is actually happening to attention spans and retrieval ability and argumentative stamina, independence for those studies from the platforms that would rather the results were flattering. This is expensive and unsexy and will produce results on a timescale longer than any electoral cycle. It is also the only way to avoid waking up in 2040 with a generation of adults who cannot do things their parents took for granted, and without the data to show how it happened.
What genuine concern looks like, if you take the evidence seriously, is neither the panic of the restrictionists nor the deflection of the optimists. It looks like a grown-up willingness to say that some things children used to do for themselves were not decoration; they were how the child's mind got built. It looks like designing schools and homes and apps on the assumption that effort is not friction to be smoothed away but the scaffolding on which capacity accretes. It looks like accepting that AI is a permanent feature of the adult environment, and therefore that the business of childhood, more urgently than ever, is to build the cognitive machinery the child will need in order to use those tools as an augmentation rather than a replacement. It looks, finally, like humility about what we do not yet know, and a willingness to act under uncertainty, because the alternative, waiting for proof that will only arrive when it is too late to act on, is a kind of negligence we have rehearsed before, with lead paint and with sugar and with tobacco, and which we keep promising ourselves we will not rehearse again.
The teacher in Melbourne told me the girl who cried over the comprehension question eventually, with coaxing, produced three sentences of her own. They were not very good. They were hers. “That's the first time this term she's thought on the page,” the teacher said. “And I had to physically take the tablet away. I had to sit there and wait. And the worst thing is, I kept wanting to give it back to her. Because it felt cruel. Because she was struggling. And the whole point is that she was supposed to be struggling. That was the lesson. That was the only lesson.”
What Psychology Today's March 2026 piece names is the possibility that the struggle, the messy, tearful, unproductive-looking work of a child sitting with a problem she cannot solve yet, is the developmental window. And the window closes in the dark, unremarked, while everyone is congratulating the child on how fluent her outputs have become. You will not notice when it shuts. You will notice, years later, what does not walk through it.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from brendan halpin
When I was a lad, I used to go to punk rock shows at the Jockey Club in Newport, Kentucky. At the time, Newport was an economically depressed, run-down, menacing place. There were dying strip clubs there, and dive bars, and a White Castle that was the second-scariest fast food place I ever set foot in. (The first was the McDonald’s at 40th and Walnut in Philadelphia, where serial killer Gary Heidnik used to find victims and where at least one customer was stabbed by an employee when I lived in the neighborhood.)
The Jockey Club was a dive bar where someone had convinced the crotchety old owner (known only as “Shorty”) to let them book punk rock shows. It was not a nice place. But it was fun and weirdly wholesome. There was sometimes overly enthusiastic moshing (which we called “slam dancing.” This term inspired the title of Wayne Wang’s underrated ’80s film noir Slam Dance, starring Tom Hulce!), but otherwise it was just a bunch of kids hanging around enjoying the music and thinking they were sophisticated as they sipped from bottles of Guinness or oil cans of Foster’s Lager.
The venue made a little money because people would pay to see this kind of music that couldn’t get booked at any other clubs. And people started and joined bands because they knew they’d have a place to play. That’s how you get a scene of independent artists doing their thing without corporate attention or interference.
This isn’t a lightning-in-a-bottle phenomenon. It just requires cheap rents. The recent documentary Secret Mall Apartment shows how a similar art/performance scene grew up in disused warehouses in Providence. And then got displaced by development, which is what’s happened in so many cities.
Cheap rents are in extremely short supply in most major cities in the USA, and art and culture have suffered as a result.
But last night, I went to a pro wrestling show in Elmwood Place, a small municipality northwest of Downtown Cincinnati and got some hope. I pulled up in front of an empty storefront church. You could see the pews through the windows, and the owner had put up a big sign that said, “FOR RENT: RETAIL ONLY.” I passed two more empty storefronts on my way to the venue, which was an unmarked storefront.
I paid ten bucks cash at the door and walked into the venue. Grimy wall-to-wall carpeting covered the floor. The walls were stained enameled cinderblock. There was a tin ceiling that was rusted in spots and had paint peeling pretty much all over. And in the center of the space, a wrestling ring. Oh yeah, and like most indoor athletic facilities, especially carpeted ones, this place had a certain funk in the air—it smelled like feet and shaving cream.
I pulled up a chair in the front row next to a couple of kids who had brought signs. “This,” I thought, “is where the real shit happens.”
And it was! I enjoyed a really fun wrestling show with about 30 other fans, and I couldn’t help thinking of the Jockey Club. Not only because of my physical surroundings, although also that, but because I was watching art that people were making for love.
The gate from this event was probably 300 bucks. They might have cleared a little more than that from concessions, merch, and the 50/50 raffle. Nobody was here trying to make it big—they were just making art for people who loved it.
Now don’t get me wrong—I do believe artists should get paid. But, and I speak from experience as someone who was a professional writer, as soon as money enters the picture, it demands changes and compromises, and while you can still make great art under those circumstances, the lack of money allows you to be weird as hell, to say, yeah, I’m making this thing, and you can like it or not, but it is EXACTLY what I want it to be. It is what I want to put into the world.
Now look—maybe indie wrestling isn’t your thing. (though, if it is, head on over to kayfabe.ink and sign up for my newsletter. I’ll be writing up this very show in the next couple of days!) But somewhere near you (and, admittedly, if you live in a major city, it’s probably not in your city), people are making cool, weird, authentic art on a block where you can’t get a good cup of coffee. It’s not corporate, it’s not capitalist, and most importantly at this point, it’s not fascist, because of course fascism is all about conformity and cruelty.
Find the weirdos and go dig their art. Or, better yet, be one of those weirdos. Go start your own band! Put on a play! Paint something and hang it on the wall! Art makes us human and makes life bearable and meaningful. Go make some!
from Roscoe's Story
In Summary: * Another quiet Sunday in the Roscoe-verse peacefully winds down. The only thing of consequence I did today was install fresh batteries in the wall-thermostat unit so I could fire up the central air conditioner to cut the ridiculously high humidity level and make the air in this joint comfortably breathable. Did this after the wife went down for an afternoon nap. She'll be surprised when she wakes up. Heh.
In about two hours I'll wrap up the night prayers and head to bed early so as to be ready for Monday morning when it arrives.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 231.04 lbs. * bp= 135/78 (71)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 06:00 – biscuits and butter * 10:00 – pizza * 11:30 – 2 peanut-butter cookies * 13:50 – sausages, pickled papaya, white rice * 16:00 – 1 fresh apple * 18:15 – 1 chocolate chip cookie
Activities, Chores, etc.: * 05:30 – bank accounts activity monitored. * 06:50 – read, write, pray, follow news reports from various sources, listen to relaxing music, surf the socials, nap. * 11:15 – tuned into a Formula E Race: The Hankook Berlin E-Prix * 12:00 – now watching LPGA Golf * 14:00 – now PGA Tour Golf: The Cadillac Championship * 17:00 – following news reports from various sources * 18:30 – listening to relaxing music
Chess: * 11:12 – moved in all pending CC games
from Larry's 100
No mainstream artist has captured my fan heart over the last eight years like Spacey Kacey. From classic country revivalism, through excursions into disco diva, cottagecore and electro-pop, I ride with it. Middle of Nowhere packages all the elements into a cohesive LP.
She adds more western touches like pedal steel, cowboy cornpone, and Mariachi to her brew, grounding the album in her East Texas roots. Mimicking that geography, themes explore expanse, isolation and the duality of joy and pain of being alone.
Most will herald a “return,” but for me, I keep riding the Rose Wave.
Buy it.

#KaceyMusgraves #MiddleOfNowhere #LostHighway #CountryMusic #TexasMusic #RoseWave #AlbumReview #Music #Larrys100 #100WordReview #100DaysToOffload
from An Open Letter
I don’t know why today, but I decided that even though it would probably hurt a little bit less if I waited more time, I’m going to throw away the bag of stuff that I kept in the shed from our relationship. I went through everything because it was going to be the last time I was going to see them. And I decided that all of it should be thrown away. I feel guilty for throwing away Lemon, since I spent a lot of nights cuddling Lemon and I feel like the parent of that stuffed animal, but at the same time it is just a stuffed animal and I don’t need to torture myself by humanizing it too much. I also decided to throw away the other presents and stuff that she gave me because I don’t want to carry those memories with me longer than I have to, and I don’t think that I’ll ever be able to separate those memories, which is not a bad thing either. I just don’t want to be constantly reminded of them. I read the birthday card that she gave me. Twice. And then I threw it away without taking a picture. In the card she told me how much she loves me and how she loves to see the passion in my eyes and getting to hear about my perspective on the world and how many things I’ve been able to teach her. She told me that the only thing she wants is to be able to move in with me. I remember in one of the cards that I gave her, I wrote about how I’ve never been religious but I found heaven and it’s me lying awake in bed with her softly snoring on my chest, with me wanting to stay awake as long as possible to just save that memory. And it hurts because for some reason my brain wants to first say that it was wasted on her, something that beautiful, but it was not. I wrote those things because that’s how I felt and that’s what she gave me.
But she also gave me a lot of horrible things; I don’t want to romanticize the relationship. I remember going through the gratitude journal that she gave me and seeing the things that she wrote down and seeing the things that I wrote down, and it feels like I just had such a low bar of expectation that I was trying to find ways to be grateful for the fact that she apologized for something after a lot of explaining on my part, even though there was no behavior to back that up. And it sucks that I felt so unsafe and volatile in that relationship. And it hurts to see the times where she writes how much she loves me and how much she wants to spend her life with me and how I was able to teach her to apologize, but I couldn’t teach her how to actually change her behavior. And I think there’s just so much of a discrepancy between what would be healthy for me in a relationship and what she was able to offer, and that just caused so much friction and eventually the end. But it still hurts to throw away the framed photo of us that she gave me as one of her apologies near the end. She wanted to show me that she was committed and that she did care, and that was her way of showing that she could put in effort. And it was so incredibly sweet of her. She framed the photo of us at the cat café that I took her to as a surprise. And it really hurts because these feelings haven’t had enough time to fade into the background, and with these small little things and these memories I remember how much I loved her. Like it’s such a beautiful feeling to care about someone so much and want to make them happy that you don’t even feel like it’s effort or work at all. It’s something that you want to do and it’s so incredibly rewarding.
I have to kind of force myself to do these creative projects and different artsy things that I like, and I’ve never once had to force myself to think about her or to try and execute these cute dates or things that would make her feel loved. Like wanting to write her cards, or to try to think about ways that I can help her or make her life easier. And it’s just that feeling of loving someone. And God, it hurts to remember that I don’t have that anymore. It’s such a beautiful thing to be able to love someone like that and it’s so incredibly priceless to feel like that’s reciprocated. To think and to feel and to believe that someone sees you and just wants to make you happy and just wants the best for you. And it hurts because I really did feel that, and I don’t think that E is a bad person, and I don’t think that she was intentionally manipulative or aware of the bad things that she was doing, and I really do believe that she loved me. And I know that I loved her. And I know that both of us hurt in different ways and we both have to go through our own journeys, and she is not alone in her path, even though it’s not one that I can relate to. And I know that vice versa is true. But it really does hurt to hold both of those truths together in a way that I don’t feel like I was able to earlier in the breakup. It hurts to understand that someone can love you and you can love them and they can have the best intentions, and at the same time they can still hurt you and be toxic and do all of these things that are not OK. And I know that this vacuum and hole that I’m feeling from losing what was something incredibly beautiful is a necessary pain, because it was beautiful in the same way that a drug is. It’s not sustainable, and it’s something that can be damaging if you tie yourself to it so heavily.
And there were absolutely things that I’m so grateful for and I am glad that I had this relationship, there was a lot of things that I had to learn and be aware of and thankfully because of that relationship I am more suited and positioned to hopefully find a partner where I do feel safe and consistently so. I don’t want to have every week or every other week another big problem or another potential dealbreaker pop-up. I don’t want her to yell at me when I try to voice that something hurts, or have to find out that she was hiding things like exes or talking to people that are showing interest in her. I don’t want to have this jealousy or conflict that isn’t communicated to me about my other friends, even with my attempts to be transparent. I don’t want to feel like there’s a different life that’s being hidden from me, and seeing the differences between her when she’s around me and her when she’s around other people. And I want to know that the big things that hurt me can be remedied, rather than them being disregarded or ignored or minimized.
But I do miss the good. And I know that overall it was a very clear sign that this was not a relationship for me, and I am grateful in a sense, because there were enough explicit things, and enough that pushed me hard enough, to see that I was in the wrong for constantly trying to make it work. And this would have hurt me so much more if the relationship had held more of the things that are incredibly valuable to me, or if it was just that zone of comfortable discomfort. I’m so grateful that it happened when it did and it didn’t last longer, and God forbid into something like marriage or children. And I really do believe that there is some sort of divine planning in my life, or some kind of an overseer that gives me these opportunities and experiences in ways that I truly need, even when I don’t think I do, all while protecting me as much as possible through it. And I will be OK. And I mean that in the sense that in the future I will have a life that will be so beautiful, and it will be filled with the things that I am currently wishing for, like a loving wife that I feel safe with, hopefully children, and I really hope to have Hash for a long time. I will have someone who will love Hash just as much as me, if not more. And he will be so incredibly loved and safe. And I will find someone that matches me in the ways that matter, and someone that will be a great mother to future children. Someone that will be able to give them a childhood not just of love, but of stability. And that is so incredibly important to me. And it’s so important that it’s not worth a wide confidence interval for potential, but rather a narrow necessity.
I firmly and truly believe that my future will be everything that I want, either through divine planning, or through sheer effort and intentionality. I love you man, and I know that there’s a lot of pain and hurt that comes from living life, but I want to remind you that it is worth it.
from Lastige Gevallen in de Rede
It's almost time the moment when I will shine like a star everyone everywhere full of awe at my achievements in every hall, in shops, even on the street I get thunderous ovations a few more nights of sleep and then it happens I'll be on display on every TV channel and go viral across the internet then I'll have made it completely here on earth the delight that I'll then be will be to everyone's taste just a little longer to wait and then it all truly begins which is of course no more than right type a few more little verses in the outermost margin of the page lick three almighty little behinds at length but then the owners will let me through their splendidly gleaming gate toward well-nigh endless veneration then the buzz will never die down again it's almost time then I'll arrive at my very own star but first a few more nights of sleep then a visit to the three most important media moguls to kneel, kiss three pairs of balls and lick like one possessed at the rims of anuses but then I'll be somebody for life then everyone will want me to pay attention to them then I'll also be someone who may demand certain services a little task to prove your present talents it's almost time then someone else will lick at my star.