It's National Poetry Month! Submit your poetry and we'll publish it here on Read Write.as.
from Internetbloggen
When the internet became accessible to a broader audience in the 1990s, a need arose for simpler ways to publish content. Early websites were often static and required technical knowledge to update, but ideas gradually emerged for more personal, continuously updated pages. Out of this, blogs were born: a blend of diary, publishing platform, and public voice, where individuals could share thoughts, links, and stories in a running stream.
At the same time, a practical problem appeared: how do you keep track of all these updates without visiting every site manually? The solution was RSS, a standardized way to distribute content automatically to readers. With RSS, users could subscribe to their favourite blogs and have new posts collected in one place, making the internet both easier to survey and more alive. Together, blogs and RSS laid the foundation for a more dynamic, user-driven web, long before social media took over the stage.
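The subscription mechanism described above rests on a deliberately simple XML format. As a minimal sketch (the feed content below is invented for illustration), an RSS 2.0 document can be parsed into a list of posts with nothing but the Python standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed, as a reader app would fetch it over HTTP.
FEED = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <item>
      <title>First post</title>
      <link>https://example.com/first</link>
      <pubDate>Mon, 06 Sep 2021 16:45:00 GMT</pubDate>
    </item>
    <item>
      <title>Second post</title>
      <link>https://example.com/second</link>
      <pubDate>Tue, 07 Sep 2021 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def list_entries(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in list_entries(FEED):
    print(f"{title}: {link}")
```

A feed reader does essentially this, repeatedly, for every subscribed URL, then merges the results into one chronological list; no platform sits in the middle deciding what the reader sees.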
In the early 2000s, blogs were the backbone of the social internet. Platforms like Tumblr, Blogger, and WordPress made it easy for anyone to publish thoughts, guides, and diary entries. Feed syndication, via formats like RSS and Atom, became a kind of distribution layer on top of this: instead of visiting each blog, you could gather everything in one reader and get updates as they were published. It was a fairly decentralized, user-controlled model.
Then social media arrived and changed the playing field. Platforms like Facebook, Twitter, and later Instagram took over much of what blogs had previously stood for. Publishing shorter content became easier and faster, and algorithms began to decide what we see instead of chronological feeds. In that environment RSS lost its visibility, not because the technology stopped working, but because it did not fit the business models of the major platforms.
But that does not mean blogs and RSS have disappeared. Rather, they have become more niche and at times more professional. Newsletter services like Substack and Ghost effectively build on the same ideas: a direct relationship between writer and reader, without intermediaries. Many of them still offer RSS feeds, even if those feeds are not always prominently advertised.
At the same time, there is a quiet renaissance for RSS among more technically minded users. Tools like Feedly and Inoreader are used to reclaim control over the flow of information in an age when algorithms often feel noisy or manipulative. In a world of "doomscrolling", RSS becomes almost an antidote: you choose exactly what you want to follow, and nothing else.
Blogging itself has changed rather than declined. Much of what would once have been blog posts now appears as long threads on social media, videos on YouTube, or podcasts. The form has shifted, but the drive to publish and share perspectives remains the same.
So the real question is not whether blogs and RSS are on their way out, but whether they have stopped being mainstream. They have gone from being the default for everyone to being tools for those who actively choose a more open, self-controlled internet. And precisely for that reason there is something almost timeless about them. When the pendulum swings away from centralized platforms, interest in open standards and self-owned publishing spaces tends to return.
New services for following blogs are also appearing, such as Blogflock. So blogs and RSS are probably not extinct just yet.
More niche blogging platforms have also emerged. Nouw is a Swedish example; it grew up at a time when blogging was already established but changing. It launched in 2015 as a further development and rebranding of the earlier community Nattstad, with the ambition of creating something more than just a technical tool for writing posts.
Unlike classic blogging platforms, Nouw functioned not only as a place to publish texts but also as a kind of digital magazine. Blogs became part of a larger network where content could be highlighted, curated, and brought to a broader audience. That gave the platform traits of both a social network and a media channel, rather than merely a publishing tool.
The future of blogs and RSS is hard to pin down, but much suggests they will not disappear so much as live on in new forms. As more people tire of algorithm-driven feeds and centralized platforms, interest in more open solutions may rise again, with users themselves deciding what they consume. Technologies like RSS are already in place and are still used behind the scenes in many services, even when it does not show. At the same time, newer ways of publishing content, such as newsletters, podcasts, and independent platforms, may continue to blur the line around what a "blog" really is. Perhaps the blog of the future will be less visible as a term but all the more present as an idea: a direct channel between creator and reader, with no one else deciding what gets through.
from An Open Letter
I did an over-two-hour leg workout with a ton of drop sets and sets to failure, and I feel good. I do believe that I have a life worth living, I would like to experience it, and I'm grateful for all of the additional chances I get to be appreciative of what I have.
from Talk to Fa
She often shares pictures and videos of her daughter. The baby is 8 months old. I get the impression that she is entertained by the baby more than she gently loves her. She is learning to love, to love herself by loving her daughter. The baby is filling the mother's own lack of love. She gave birth to a girl rather than a boy because the girl is the healer for the mother.
from Millennial Survival

Resilience. One word that can determine whether you survive or not. One word that can determine whether you pick up and keep going or gradually fade into the background, no longer relevant to the world around you.
I was reminded of what it means to be resilient recently when I was not selected for a job role, despite being one of the two finalists. I gave it my all, I had great conversations with my interviewers, and I felt good coming out of the final round of interviews. Then I started to notice the signs. Follow-up wasn't as forthcoming as I expected, despite how enthusiastic the organization had been about me. I was told to expect feedback by a certain date; it didn't come. Then I was going to receive it by a slightly later date. It came. I was a strong candidate, the decision was hard, but I wasn't selected. Someone who was closer to the organization's headquarters was. Someone who wouldn't require relocation. I lost the opportunity because my situation was logistically harder for this organization to deal with than the other candidate's was.
The anger set in, as did the frustration, the disappointment, and the questions about what I could have done differently. Rather than getting the chance to make a positive impact within an organization, I was shown the exit. I had little explanation as to why and a lingering feeling that I wasn’t selected because someone didn’t want to deal with the logistics involved with me taking the role.
The response to this kind of situation could become a defining moment in my professional and personal life. Either I double down in my current role and excel where I am, or I disengage, become bitter, and resent that I wasn't going to be where I wanted to be. I made a conscious decision to choose the former. I chose resilience. No organization is perfect; the organization I work in today is far from perfect. Yet if I choose to be resilient, I choose to engage more and to find opportunity in times of setback, because I know I can make the organization better.
I refuse to let a decision made by someone else define my outlook, my attitude, or whether or not I am happy. I choose to be resilient. I choose to move forward.
from SmarterArticles

In January 2026, Kristalina Georgieva, the Managing Director of the International Monetary Fund, stood before an audience at the World Economic Forum in Davos and offered a statistic that landed with the quiet brutality of a footnote in a corporate restructuring memo. The number of translators and interpreters at the IMF, she said, had dropped from 200 to 50. The cause was not a budget crisis or a policy realignment. It was technology. The fund had simply decided that machines could handle most of the work that humans used to do.
Georgieva presented the figure as evidence of a broader transformation. Forty per cent of global jobs, she argued, would be transformed or eliminated by artificial intelligence, with that figure climbing to 60 per cent in advanced economies. But it was the specificity of the translation example that stuck. This was not a hypothetical projection or an economist's forecast. It was a headcount. Real people, with real expertise in the precise rendering of financial policy across languages and cultures, had been replaced by systems that could approximate their output at a fraction of the cost.
The IMF is not alone. Across the global translation industry, now valued at an estimated 31.70 billion US dollars according to Slator's 2025 Language Industry Market Report, a similar pattern is playing out. Large language models and neural machine translation systems have not simply made human translators obsolete. They have restructured the profession from the inside, converting skilled practitioners into quality controllers for text they did not write. The question this raises is not whether AI can translate. It demonstrably can, often to a standard that passes casual inspection. The question is what happens to a profession, and to the cultural knowledge it carries, when the market decides that “good enough” is good enough.
A 2024 survey conducted by the United Kingdom's Society of Authors, which polled 787 of its 12,500 members, found that 36 per cent of translators had already lost work to generative AI. Forty-three per cent reported a decrease in income as a direct result of the technology. Over three-quarters, some 77 per cent, believed that generative AI would negatively affect their future earnings. Eighty-six per cent expressed concern that the use of generative AI devalues human-made creative work. These are not projections. They are reports from working professionals describing what has already happened to their livelihoods.
The income data from individual translators is more granular and more alarming. Brian Merchant, writing in his newsletter Blood in the Machine, documented cases across the profession in mid-2025. One technical translator with 15 years of experience reported earning just 8,000 euros in 2025, down from six figures in previous years. A French-English translator based in Quebec described a 60 per cent income decline in 2024, with projections suggesting an 80 per cent drop from peak earnings by the end of 2025. An Italian-English translator in Rome reported that work requests had ceased entirely for the month of June 2025, after years of working 50 to 60 hours per week. An English-Portuguese translator documented that post-editing rates had collapsed from 0.04 euros to 0.02 euros per source word, halving the already modest compensation for correcting machine output.
In the United States, Andy Benzo, president of the American Translators Association, told CNN in January 2026 that many translators were leaving the profession entirely. Benzo noted that the risks of using AI translation in “high-stakes” fields remain “humongous,” yet the exodus continues regardless. Ian Giles, chair of the Translators Association at the UK's Society of Authors, confirmed the same pattern, noting that translators were seeking retraining “because translation isn't generating the income it previously did.” The exits are not dramatic. There are no picket lines or public protests. People are simply disappearing from a profession that can no longer sustain them.
The scale of this workforce is not trivial. There are approximately 640,000 professional translators globally, and three out of four are freelancers. It is this freelance majority that has borne the brunt of the disruption, lacking the institutional protections and guaranteed workloads that might have cushioned the blow.
A study published in 2025 by Carl Benedikt Frey and Pedro Llanos-Paredes at the Oxford Martin School quantified the scale of displacement with unusual precision. Analysing variation in Google Translate adoption across 695 local labour markets in the United States, the researchers found that a one percentage point increase in the use of Google Translate corresponded to a 0.71 percentage point reduction in translator employment growth. The cumulative effect, they estimated, amounted to more than 28,000 fewer translator positions created over the period from 2010 to 2023. And that figure captures only the impact of a single, relatively crude machine translation tool that preceded the large language model era. The arrival of systems like GPT-4, Claude, and Gemini has accelerated the process enormously, because these models do not just translate. They handle idiomatic expression, register, and contextual nuance at a level that earlier statistical systems could not approach.
In July 2025, Microsoft researchers published a study examining which occupations were most exposed to generative AI capabilities. Translators and interpreters ranked first on the list, with 98 per cent of their work activities overlapping with tasks that AI systems could perform with relatively high completion rates. The study analysed 200,000 real-world conversations between users and Microsoft's Copilot system to arrive at its rankings. The researchers were careful to note that high exposure does not automatically mean elimination. But the practical effect has been unmistakable. Employers have used the availability of AI translation as justification for cutting rates, reducing headcounts, and restructuring workflows around machine output.
The restructuring of translation work follows a pattern that is becoming familiar across AI-affected professions. The human does not vanish. Instead, they are repositioned downstream in the production process, tasked with reviewing and correcting output that a machine generated in seconds. In the translation industry, this workflow is known as Machine Translation Post-Editing, or MTPE, and it has rapidly become the dominant model for commercial translation work.
According to Slator's 2025 survey of the language industry, 60 per cent of all respondents were using machine translation, with adoption reaching 80 per cent among language service providers. Among those using machine translation or large language models, between 90 and 98 per cent performed some level of post-editing on AI-generated content. Eighty-four per cent of language service integrators reported that clients had specifically requested human editing services to review AI-generated translations. The human, in other words, has not been removed from the process. But the nature of their involvement has been fundamentally altered. They are no longer creating. They are correcting.
The compensation reflects this downgrade. Post-editing rates typically fall between 50 and 70 per cent of standard translation rates, with some agencies offering as little as 25 per cent of what a full human translation would command. Industry data from 2025 indicates that MTPE work commands between 0.05 and 0.15 US dollars per word, compared with 0.15 to 0.30 dollars per word for standard human translation. One translator documented by Equal Times, an international labour news platform, described pre-translated segments paying just 30 to 50 per cent of original rates, while fully automated platforms paid up to seven times less than standard. The economic logic is straightforward. If the machine does 80 per cent of the work, the reasoning goes, then the human should be paid for only 20 per cent. What this calculation ignores is that post-editing often requires comparable time and cognitive effort to translation from scratch, because the translator must not only identify errors but also understand the systematic patterns of how the AI fails and where its confidence is misplaced.
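The squeeze described above can be made concrete with back-of-envelope arithmetic. The per-word rates below come from the figures cited in this article; the words-per-hour throughputs are illustrative assumptions, not industry data:

```python
# Back-of-envelope comparison: full human translation vs machine-translation
# post-editing (MTPE). Per-word USD rates are the mid-points of the ranges
# cited in the article; throughput figures are hypothetical assumptions.

def hourly_rate(usd_per_word, words_per_hour):
    """Effective hourly earnings for a given rate and throughput."""
    return usd_per_word * words_per_hour

# Full translation: mid-range of 0.15-0.30 USD/word, assuming an
# illustrative ~400 words/hour of finished output.
full = hourly_rate(0.20, 400)       # about 80 USD/hour

# MTPE at the mid-range of 0.05-0.15 USD/word. If post-editing really
# were twice as fast (800 words/hour, an optimistic assumption),
# earnings would merely break even with full translation:
mtpe_fast = hourly_rate(0.10, 800)  # about 80 USD/hour

# But translators report that post-editing takes comparable time to
# translating from scratch. At the same 400 words/hour, earnings halve:
mtpe_real = hourly_rate(0.10, 400)  # about 40 USD/hour

print(f"full translation : {full:.2f} USD/hour")
print(f"MTPE (2x speed)  : {mtpe_fast:.2f} USD/hour")
print(f"MTPE (same speed): {mtpe_real:.2f} USD/hour")
```

The point of the sketch is that the "pay for 20 per cent of the work" logic only holds if post-editing is proportionally faster, and the testimony gathered above suggests it is not.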
The workflow itself has been transformed in ways that strip autonomy from the translator. Texts no longer arrive as clean source documents to be rendered thoughtfully into a target language. They arrive pre-segmented, with machine-generated suggestions already populating each segment. The translator's task becomes one of triage: deciding which suggestions are acceptable, which need modification, and which must be discarded entirely. Automated platforms distribute this work via alerts that give translators minutes or even seconds to claim individual segments, creating a piecework dynamic more reminiscent of a fulfilment warehouse than a skilled profession. Some platforms threaten automatic disconnection for translators who dispute corrections imposed by quality-assurance algorithms.
Jean-Jacques, a 30-year veteran translator quoted by Equal Times, described the shift bluntly. “It's not really a matter of translating anymore,” he said, “but revising and correcting the segments proposed by the machine.” Another translator, identified as Alina, captured the paradox at the heart of the arrangement. “AI is both a tool and a threat,” she said. “We ourselves are teaching it how to translate, how to improve.” Each correction a post-editor makes feeds back into the training data that will make the next generation of AI translation marginally better, and the human's role marginally less essential.
This dynamic, in which skilled workers are conscripted into training their own replacements, is not unique to translation. It has appeared in content moderation, coding, and legal document review. But in translation, the irony is particularly sharp, because the expertise being extracted is precisely the kind that AI systems struggle most to develop on their own: cultural sensitivity, tonal awareness, and the ability to navigate the space between what a text says and what it means.
The case for human translation has always rested on something more than accuracy. It rests on the claim that translation is an interpretive act, a creative negotiation between two linguistic and cultural systems that requires not just knowledge but judgement. Jhumpa Lahiri, the Pulitzer Prize-winning novelist who has written extensively about translation, describes the process as “a radical act of reshaping text and self.” In her essay collection Translating Myself and Others, published by Princeton University Press in 2022, Lahiri argues that “a translator restores the meaning of a text by means of an elaborate, alchemical process that requires imagination, ingenuity, and freedom.”
This is not the language of quality assurance. It is the language of craft, of a practice that involves the translator's full intellectual and emotional engagement with a text. Emily Wilson, the first woman to translate Homer's Odyssey into English, has spoken repeatedly about the impossibility of separating linguistic from cultural knowledge in translation. The hardest part of translation, she has argued, is not understanding the original but “figuring out how to create it entirely from scratch in a totally different language and culture.” Wilson's translation of the Odyssey was widely praised precisely because it made choices that no algorithm would make: tonal decisions, rhythmic choices, and interpretive framings that reflected not just the Greek text but Wilson's own understanding of what the poem means to contemporary English-speaking readers.
Gregory Rabassa's English translation of Gabriel García Márquez's One Hundred Years of Solitude is perhaps the most celebrated example of translation as creative achievement. García Márquez himself reportedly said that he considered the English translation a work of art in its own right, a remarkable statement from an author about a rendering of his own novel. Edith Grossman, the acclaimed translator of both García Márquez and Cervantes, described Rabassa as "the godfather of us all," crediting him with introducing Latin American literature to the English-speaking world in a way that preserved not just meaning but spirit.
These examples belong to the domain of literary translation, which remains relatively insulated from AI disruption. Literary commissions have continued to flow to human translators, in part because publishers recognise that the qualities that make a literary translation valuable are precisely the qualities that machines lack. But the insulation is narrower than it appears. The vast majority of professional translation work is not literary. It is commercial, legal, technical, medical, and administrative. And it is in these domains that the restructuring has been most severe, not because the cultural stakes are lower, but because the market has decided they are.
Consider the translation of a medical consent form from English into Tagalog for a Filipino patient in a London hospital. The document is not literary. It will never win a prize. But the accuracy of its translation has direct consequences for a person's understanding of what is being done to their body. A machine translation might render the words correctly while missing the pragmatic force of the language: the way a particular phrasing might sound reassuring or threatening, the cultural assumptions embedded in notions of consent, the difference between informing someone and making them feel informed. These are not edge cases. They are the bread and butter of professional translation, and they are the first tasks being handed to machines.
Or consider immigration proceedings, where a mistranslation can determine whether an asylum seeker's testimony is deemed credible. The translator in that context is not merely converting words. They are mediating between legal systems, cultural frameworks of narrative and evidence, and the emotional register of a person recounting traumatic experiences. The difference between “I was afraid” and “I feared for my life” is not a matter of synonymy. It is a matter of legal consequence, and navigating it requires the kind of situated cultural judgement that no statistical model possesses.
The industry's preferred narrative for this transition is “human-AI collaboration.” The framing suggests a partnership: the machine handles the heavy lifting, and the human provides the finishing touch. But the power dynamics of this arrangement are radically asymmetric. The machine sets the terms. The human adjusts.
This is not collaboration in any meaningful sense. It is supervision, and it is supervision of a peculiarly degrading kind, because the supervisor is being paid less than they would earn if they were simply doing the work themselves. The translator who once sat with a source text and crafted a target text from scratch, making hundreds of micro-decisions about register, idiom, rhythm, and cultural resonance, now sits with a machine-generated draft and decides, sentence by sentence, whether it is wrong enough to fix.
The cognitive experience of post-editing is qualitatively different from translation. Several translators have described it as more fatiguing and less satisfying than original translation work. The machine's output creates a kind of gravitational pull. Even when the translator knows a better rendering exists, the effort required to override the machine's suggestion and compose something from scratch can feel disproportionate to the compensation. Over time, this produces a phenomenon that linguists and labour researchers have begun to call “anchoring,” in which the translator's own instincts are gradually subordinated to the machine's defaults. The result is not a blend of human and machine intelligence. It is machine intelligence with a human stamp of approval.
A 2025 survey of translators found that a majority, some 66 per cent, acknowledged that MTPE can be useful but still requires substantial human intervention. Roughly half of respondents refused to offer discounts for post-editing work, arguing that the effort required is routinely underestimated by clients and agencies. Among those who did discount, the most common reduction fell between 10 and 30 per cent, far less than the 50 to 75 per cent cuts that many agencies impose unilaterally.
Rosa, a translator quoted by Equal Times, described the economic logic with characteristic directness. “Profit is the only thing that matters,” she said, “and translation has become like a commodity that they extract from us at the lowest possible price.” The commodity metaphor is precise. What was once a craft, defined by the individual translator's knowledge, taste, and cultural fluency, has been reframed as a raw material to be processed at industrial scale.
There is a version of this story in which what is happening to translators is tragic but temporary, a painful adjustment period that will eventually stabilise as the technology matures and the market finds a new equilibrium. In this version, AI translation will continue to improve until the quality gap between machine and human output narrows to insignificance, at which point the remaining human translators will occupy a small, highly specialised niche: literary translation, diplomatic interpreting, and other domains where the stakes are too high for automation.
But this narrative assumes that the qualities human translators bring are merely a matter of degree, that machines are doing a slightly worse version of the same thing, and that incremental improvement will close the gap. There is a competing argument, advanced by translators, linguists, and cognitive scientists, that the gap is not quantitative but structural. That what human translators do when they translate with cultural sensitivity and emotional intelligence is not a more refined version of pattern matching. It is a fundamentally different cognitive operation.
A study published in Nature's Humanities and Social Sciences Communications in 2026, examining AI performance in literary autobiography translation, found that while AI models could produce grammatically correct and largely accurate translations, they consistently failed to capture the emotional texture and cultural specificity of the original texts. The researchers concluded that human translators brought interpretive capacities that were not simply absent from AI systems but categorically different in kind. AI models could identify the surface layer of meaning but failed to recognise cultural allusions and deeper emotional context, elements that are essential not just to literature but to any communication that carries weight beyond its literal content.
This distinction matters because it determines whether human translators are a temporary patch or a permanent necessity. If translation is ultimately a pattern-matching problem, then machines will eventually solve it. If it is an interpretive problem, requiring the kind of embodied cultural knowledge that comes from living inside a language and its associated worldview, then machines will not solve it, regardless of how much training data they consume. The patterns they learn are drawn from existing translations, which means they can only reproduce what human translators have already created. They cannot originate the kind of interpretive leap that makes a translation feel alive.
Poetry, with its reliance on rhythm, rhyme, and figurative language, remains a particularly formidable challenge. A machine can translate the denotative content of a poem. It cannot translate its music. It cannot decide, as Emily Wilson did with the Odyssey, that the opening word of an epic should be “Tell me” rather than “Sing to me,” and understand the cascade of interpretive consequences that follows from that single choice.
The structural incapacity argument, however compelling, runs into a problem that is not technological but economic. The market for translation services is not optimised for craft. It is optimised for throughput, cost reduction, and acceptable quality at scale. And by this measure, AI translation is already good enough for the vast majority of commercial applications. The Slator survey found that while 72 per cent of respondents cited accuracy concerns with machine translation and 68 per cent cited quality concerns, adoption continued to accelerate regardless. Trust grew slowly, but adoption grew fast. The concerns are real. They are also, from a procurement perspective, manageable.
This is the uncomfortable truth at the centre of the translation crisis. The question is not whether AI can match human translators in quality. It demonstrably cannot, particularly in contexts requiring cultural nuance, tonal sensitivity, or interpretive judgement. The question is whether the market values those qualities enough to pay for them. And the evidence, from rate compression to headcount reduction to the restructuring of workflows around machine output, suggests that it does not.
The AI-enabled translation services market, valued at 5.18 billion US dollars in 2025 according to Precedence Research, is projected to reach 50.69 billion by 2035, expanding at a compound annual growth rate of 25.62 per cent. These are not numbers that suggest a market hedging its bets. They describe an industry that has made a decisive bet on automation, with human involvement reduced to the minimum necessary to maintain an acceptable error rate. Software platforms already dominate the market, holding nearly 73 per cent of 2025 revenue, and they are growing faster than any other component as enterprises embed AI-driven localisation into core workflows.
The parallel to other creative and knowledge-work professions is instructive. Journalism, graphic design, customer service, and legal research have all experienced similar dynamics: AI systems that produce output of variable but often adequate quality, followed by a restructuring of human roles around review, correction, and oversight rather than creation. In each case, the same rhetorical move occurs. The technology is presented as a tool that augments human capability. In practice, it becomes a ceiling that constrains it. The human is not empowered. The human is made cheaper.
The consequences of this restructuring extend beyond the economic fortunes of individual translators. Languages are not neutral containers for information. They are living systems of meaning, shaped by history, geography, power, and culture. A translator who has spent decades working between English and Arabic, or Mandarin and Portuguese, or Hindi and German, carries within them a form of knowledge that is not reducible to a bilingual dictionary or a statistical model trained on parallel corpora.
The Frey and Llanos-Paredes study at Oxford Martin documented an additional finding that received less attention than the employment data but may be more consequential in the long term. Areas with robust Google Translate usage saw job postings demanding Spanish fluency grow by about 1.4 percentage points less than in other regions, with similar declines of roughly 1.3 and 0.8 percentage points for Chinese and German respectively, and measurable dampening even for Japanese and French. The adoption of machine translation, in other words, is not just replacing translators. It is reducing the perceived value of knowing another language at all.
This is a feedback loop with serious cultural implications. As machine translation becomes more capable and more widely adopted, the incentive to invest in human language skills diminishes. Fewer people pursue translation as a career. Fewer organisations invest in in-house linguistic expertise. The pool of human knowledge about how languages relate to one another, how cultural contexts shape meaning, and how texts function differently across linguistic boundaries gradually shrinks. And the AI systems that replace this knowledge are trained on the output of the very translators they displace, creating a closed loop in which the training data grows stale as the human source of fresh interpretive insight dries up.
Ian Giles, in his capacity as chair of the Translators Association, has raised precisely this concern, questioning whether “the demand for subtlety and craft from enough readers and publishers” will “save highly skilled individuals from becoming mere AI post-editors.” The word “mere” carries the weight of the entire argument. It acknowledges that the role of post-editor exists. It questions whether the role is sufficient to sustain the expertise it depends upon.
The problem is compounded by the pipeline effect. If experienced translators leave the profession and aspiring translators are deterred by collapsing incomes, the next generation of human translators simply will not exist in sufficient numbers. The craft knowledge that takes years to develop, the intuitive feel for how a sentence should land in a target language, the awareness of cultural registers that no textbook teaches, is not the kind of knowledge that can be stored in a database and retrieved on demand. It lives in people. When those people leave, it leaves with them.
Professional translators have long occupied a peculiar position in the knowledge economy. Their work is invisible when done well. A reader who encounters a beautifully translated novel does not think about the translator. A patient who reads a clearly rendered medical document in their own language does not consider the person who bridged the linguistic gap. This invisibility made translators vulnerable long before AI arrived. It meant that their expertise could be devalued without anyone noticing, because the beneficiaries of their work rarely understood what it involved.
What is happening to translators now is therefore not just a story about one profession. It is a preview of what happens when AI is deployed not to eliminate human workers but to restructure their role in ways that extract their expertise while diminishing their authority, autonomy, and compensation. The translator who becomes a post-editor is still needed. But the nature of the need has changed. They are needed not for what they can create but for what they can catch. Not for their vision but for their vigilance.
Georgieva's statistic from Davos, those 150 translators who lost their positions at the IMF, represents one institution's calculation that the cultural and interpretive knowledge those individuals carried was worth less than the cost savings achieved by replacing them with technology. That calculation is now being replicated across every sector that relies on translation, from international law to pharmaceutical regulation to immigration services. In each case, the logic is the same. The machine produces output that is adequate for most purposes. The remaining humans clean up whatever the machine gets wrong. And the expertise that once defined the profession gradually atrophies, because there is no economic incentive to develop it and no structural pathway through which it can be transmitted to the next generation.
The question, then, is not whether AI translation will continue to improve. It will. And it is not whether human translators will survive in some form. They will, at least for a while, as post-editors and quality reviewers and specialists in the narrow domains where machine output remains unreliable. The question is whether a society that systematically devalues the ability to translate with feeling, with cultural awareness, with the full depth of human interpretive intelligence, will eventually discover that it has lost something it cannot rebuild. Not because the technology failed, but because the market decided that what translators knew was not worth preserving.
CNN. “Meet the translation professionals losing their jobs to AI.” CNN Business, 23 January 2026. https://www.cnn.com/2026/01/23/tech/translation-language-jobs-ai-automation-intl
TIME. “The IMF's Kristalina Georgieva on the AI 'Tsunami' Hitting Jobs.” TIME, January 2026. https://time.com/collections/davos-2026/7339218/ai-trade-global-economy-kristalina-georgieva-imf/
Slator. “Five Ways AI Reshaped the Translation Industry in 2025.” Slator, 2025. https://slator.com/five-ways-ai-reshaped-translation-industry-2025/
Slator. “Slator 2025 Language Industry Market Report.” Slator, 2025. https://slator.com/slator-2025-language-industry-market-report/
Society of Authors. “SoA survey reveals a third of translators and quarter of illustrators losing work to AI.” Society of Authors, April 2024. https://europeanwriterscouncil.eu/soa-survey-uk-ai-2024/
Merchant, Brian. “AI Killed My Job: Translators.” Blood in the Machine, 2025. https://www.bloodinthemachine.com/p/ai-killed-my-job-translators
Equal Times. “Artificial intelligence, dehumanisation and precarious work: translators on the frontline of tech-induced job degradation.” Equal Times, 2025. https://www.equaltimes.org/artificial-intelligence?lang=en
Frey, Carl Benedikt and Llanos-Paredes, Pedro. “Lost in Translation: Artificial Intelligence and the Demand for Foreign Language Skills.” Oxford Martin School, March 2025. https://www.oxfordmartin.ox.ac.uk/publications/lost-in-translation-artificial-intelligence-and-the-demand-for-foreign-language-skills
CEPR. “Lost in translation: AI's impact on translators and foreign language skills.” CEPR VoxEU, 2025. https://cepr.org/voxeu/columns/lost-translation-ais-impact-translators-and-foreign-language-skills
Fortune. “Microsoft researchers have revealed the 40 jobs most exposed to AI.” Fortune, July 2025. https://fortune.com/article/what-are-the-jobs-most-exposed-to-ai-microsoft-research/
CNBC. “These 10 jobs are the least AI-safe, according to new Microsoft report.” CNBC, 5 August 2025. https://www.cnbc.com/2025/08/05/these-10-jobs-are-the-least-ai-safe-according-to-new-microsoft-report.html
Precedence Research. “AI Enabled Translation Services Market Size 2025 to 2035.” Precedence Research, 2025. https://www.precedenceresearch.com/ai-enabled-translation-services-market
Lahiri, Jhumpa. Translating Myself and Others. Princeton University Press, 2022. https://press.princeton.edu/books/hardcover/9780691231167/translating-myself-and-others
Princeton University. “Jhumpa Lahiri champions the writerly art of translation.” Princeton University News, 4 September 2020. https://www.princeton.edu/news/2020/09/04/jhumpa-lahiri-champions-writerly-art-translation
Wilson, Emily. Conversations with Tyler, Episode 63. “Emily Wilson on Translations and Language.” https://conversationswithtyler.com/episodes/emily-wilson/
Nature. “Exploring AI's performance in literary autobiography translation: how closely do AI models match human translation.” Humanities and Social Sciences Communications, 2026. https://www.nature.com/articles/s41599-026-06630-4
Washington Post. “AI is taking on live translations. But jobs and meaning are getting lost.” Washington Post, 26 September 2025. https://www.washingtonpost.com/business/2025/09/26/ai-translation-jobs/
The Bookseller. “A third of translators report losing work to generative AI systems, SoA survey reveals.” The Bookseller, 2024. https://www.thebookseller.com/news/a-third-of-translators-report-losing-work-to-generative-ai-systems-soa-survey-reveals
World Economic Forum. “Putting a figure on it: Davos 2026 in numbers.” WEF, January 2026. https://www.weforum.org/stories/2026/01/davos-2026-in-numbers/
GTS Translation. “The State of Machine Translation Post-Editing (MTPE) in 2025: What Translators Think.” GTS Blog, 7 April 2025. https://blog.gts-translation.com/2025/04/07/the-state-of-machine-translation-post-editing-mtpe-in-2025-what-translators-think/

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from 下川友
I believe that all human behaviour arises as a reaction, either to external events or to internal factors such as changes in physical condition. In other words, purely spontaneous action does not exist in humans.
Just as the body automatically mounts a defensive response when a fire or an earthquake strikes, every action is a response to some stimulus. In daily life these responses are simply smaller and harder to see; in essence, each one is a counter to something received.
The room is dirty, so you clean it. You are hungry, so you cook a meal. You feel cold, so you put on more clothes. All of these look active, but they are in fact passive reactions.
Then there is the word "effort". Some argue that effort is not an active behaviour, and that the very capacity for it is itself a talent. I broadly agree, but I would rather say that the number of actions a person takes is the number of times their switch for reacting to events gets flipped.
Someone working hard at a company appears, at first glance, to be making an effort on their own initiative. But if we treat all human behaviour as reactions to what we receive, that effort could equally be a response to, say, anxiety about a life of poverty. In other words, the person is not acting spontaneously but merely reacting to their situation. Does that mean only unhappy people act? No. People move for countless reasons: "I want to feed someone I care about something delicious", or "If the person I love falls ill, I want to pay for their treatment".
In short, people act in proportion to the number of stimuli they receive. And whether a given stimulus captures their interest is what constitutes individuality.
How sensitive you are. Which stimuli you respond to. How many patterns of response you have for them. These are what make people different from one another.
Following this line of thought, moving someone becomes a question of how many stimuli you give them. But since everyone is different, what each person responds to varies and is hard to pin down. That is precisely why the only option is to give varied stimuli, repeatedly.
People, however, learn from experience to avoid anything that contains something painful. As a result, there are many places they can never reach on their own.
This is where other people become necessary. People are waiting for stimuli from others.
But others rarely have a clear reason to give us those stimuli, so it does not happen often.
What, then, can be done? If I give stimuli to others, perhaps the result is that they will come back to me as well.
By that reasoning, if you want stimuli from others, you have no choice but to give first. And that becomes one reason for me to act.
If all human behaviour is a chain of reactions to external factors, then the act of deliberately going out to stimulate others seems to contain a contradiction. Still, if that contradiction ends up being what moves me, then thinking this far was worth it.
Here I will stop for now. Next I want to consider what, exactly, one should give to others.
from
Roscoe's Story
In Summary: * Listening now to the Diamondbacks Sports Network for the Pregame Show ahead of tonight's game between the Arizona Diamondbacks and the Baltimore Orioles. I'll stay with this station for the radio call of the game. When it ends I'll wrap up the night prayers and head to bed.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 233.9 lbs. * bp= 157/93 (61)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 07:00 – 1 banana, coffeecake * 09:25 – snack on cheese * 11:45 – meat loaf, white bread and butter, fresh mango * 16:40 – 1 fresh apple * 17:00 – 1 dish of ice cream
Activities, Chores, etc.: * 05:00 – listen to local news talk radio * 06:30 – bank accounts activity monitored. * 07:00 – read, write, pray, follow news reports from various sources, surf the socials, nap. * 11:45 to 14:15 – watch old game shows and eat lunch at home with Sylvia * 15:30 – listening to The Jack Riccardi Show * 16:30 – listening to sports talk on ESPN 620 AM, Phoenix, AZ
Chess: * 09:40 – moved in all pending CC games
from
Roscoe's Quick Notes

My MLB game of choice tonight will be the Arizona Diamondbacks vs the Baltimore Orioles. With its start time of 5:35 PM CDT, I've got about 3 hours before I'll need to find a radio station to bring me the call of the game. That's enough time for me to squeeze in a post-lunch nap, I think.
And the adventure continues.
from Tuesdays in Autumn
With the benefit of a week off work I visited Cardiff on Thursday. At the Oxfam Books and Music shop on St. Mary Street I bought two classical albums. Spinning them later that day I took a shine to one; a slight dislike to the other. The latter was a 2-LP collection of Paul Hindemith's piano music. Aside from the 2nd piano sonata, a piece I already knew and liked, the other works on it left me cold.
The more successful purchase was the Complete Works For Harpsichord – Vol.2 (Book One, Deuxième Ordre) of François Couperin, played by Kenneth Gilbert. Though lacking the discrimination of a connoisseur, I am nevertheless quite fussy about how my Couperin is served up. Luckily I found Gilbert's renditions to my liking. Certain of the pieces' titles seem, as with many of Couperin's compositions, as if they're referring to individuals: for example 'La Flatteuse' and 'La Voluptueuse'.
I already own an LP of the Cinquième Ordre, another of the suites from Couperin's First Book (1713) of keyboard compositions, performed by Blandine Verlet. Oddly enough it came from a visit to the same Oxfam shop six years ago. I'm not sure whether Kenneth Gilbert persevered in recording all of the 27 Ordres from Couperin's four books – but it appears he did enough of them at least to fill sixteen LPs.
In Monmouth the following day another pair of vinyl purchases, but in quite a different musical vein. With a view to expanding my funk horizons I picked up My Radio Sure Sounds Good to Me by Larry Graham and Graham Central Station and Ultra Wave by Bootsy Collins. Again, there was one I liked rather better than the other. While the Larry Graham record had its moments (especially in the closing number ‘Are You Happy?’) it was Bootsy's album I preferred: it had me smiling throughout. Try 'It's a Musical' by way of an example track.
On the train to Cardiff I finished Jan Neruda's Prague Tales, a set of engaging narratives from mid-19th-Century Bohemia, some of them like freshly-served slices of life, others with a hint of urban legend about them. An informative introduction by Ivan Klíma provided useful context. There's much to savour in these pieces, in which Neruda's amiable tone grates only occasionally – such as in the moments revealing the baked-in antisemitism and sexism of his milieu.
That same evening I got to the end of Olga Ravn's short novel The Wax Child. The setting is early seventeenth-century Denmark, where a noblewoman finds herself accused of witchcraft. The tragic story is related in eerily evocative prose which vividly animates the protagonist and her world. While the flavour of the book is very different to the only other novel of Ravn's I've read (The Employees), one could argue there are nevertheless some intriguing parallels between them.
The cheese of the week has been Hafod, an idiosyncratic Welsh-made organic cheddar. They make both pasteurized and raw milk variants, of which I'm sampling the latter. It has a yielding texture with hints of sharpness and vaguely mineral-like notes over mellow, buttery underpinnings, making for a blend of flavours that lingers for a very agreeable while on the palate.

from Conjure Utopia
Last weekend was Cables of Resistance, a conference I've been organizing together with 20-something other people since last September. The goal was to bring all the Berlin and German movements fighting against Big Tech in the same venue for cross-pollination, strategic coordination, and simply to discover more about each other.

For me, it was a chance to do something again in Berlin, the city where I live, after two years focusing on Tech Workers Coalition Global, which is primarily an online affair. The element of grounding and relationship-building that underpinned the conference was for me a personal and emotional need before a political one.
I was skeptical at first: not being a Leftist, the organizing groups and the target crowd felt and still feel distant in culture, language, and identity. For a long time, I felt like a guest, suppressing this sentiment, as I often do, to pursue the organization of the conference, a necessity I agreed with.

Now that it is over, I want to look back and offer some insights that speak to the historical moment we are going through.
Let's start with some math.
Originally, we were targeting 300 participants. We booked what at the time felt like an oversized venue. We sold out all the tickets in less than a week, basically doing a single post on social media. This was months before the conference.
Wait, that's just not how it works: it was the first event for us, and possibly the first of this kind in decades in Germany. It wasn't targeting the general public, but people who were already politically active. Why was it so easy?

We had to sell more tickets. We sold more tickets. More participants coming required more volunteers, and in the end, more than 200 people took shifts to help us.
Comes the day of the event, and the venue is packed. Bodies are squeezed into every hall. People lining the walls of the seminar rooms. More people show up asking to volunteer to join the event. We struggle to count who's coming in through the door. In the end, probably more than 1000 people joined us across the three days.

How many were left out? Most of my friends couldn't get a ticket, which we stopped selling because, at some point, we were afraid of endangering people. All of this with pretty much no effort to try to sell the tickets. I like to speculate that we could have sold 3000 tickets if we had made different choices.
It may sound self-congratulatory, and it is. As I said, I'm not a Leftist: I like to win, and this result is a win worthy of celebration, even if just instrumental to more impactful wins. But I'm sharing these numbers because they suggest a lot more is moving than we can see. The interest in the event surprised every single person involved, including me. I believed I had a sense of the technopolitical scene: I discovered I don't.
The numbers don't add up: we counted ourselves, and we are many more than we thought. We inherited from the tech industry the sentiment of always living on the bleeding edge, the fetishism for the new. Like Amazon still calling itself a “startup”. The numbers don't match the narrative, hence the narrative has to change. None of this is young and new: the movement is becoming adult.

Let me talk about the Saturday workshop. Since the program felt a bit too academic for my taste, I tried to bring something else to the table. Yeah, we know big tech is bad. Now what? Knowing things doesn't change things. Let's spice things up, I thought.
Some weeks before the conference, by chance, I met Nala at a party after a long time. We danced. We talked about Rodrigo Nunes. We talked about the conference. “What's your strategy to scale up this effort after the conference is over? What's the expected outcome? Where will you funnel the people involved? What do you want to get out of it?”
I didn't know.
As I said before, the conference was for me fulfilling primarily an emotional need rather than a strategic one, and I had grown comfortable with the limited clarity about long-term goals, understandable for what was, in the end, a first event from a heterogeneous group of organizations with very different theories of change, perspectives, and motivations for joining. I was so concerned with short-term execution that I forgot to keep the focus on the next move.
Fuck. I'm getting sloppier.
In the end, I managed to squeeze in a Strategic Mapping workshop of the anti-big-tech organizations in Germany. Nala would facilitate. The slot is not great: 7:30 PM – 9:00 PM, in parallel with the dinner, a couple of other sessions, and a live performance. It's the end of a long day of conferencing, and it's a Saturday evening in Berlin. Only the more motivated will come, but it's ok. “I guess max 20 people will show up, plan for that, Nala.”
Five minutes before the time of the workshop, there are already 30 people in the room. “Simone, close the doors and let me think how to adapt the workshop.” Nala shuts down for a couple of minutes, eyes closed. “I got it”, she says.
People keep coming in. Lesson learned: if you place a sign saying “Full” on the door at an event full of Leftists, it won't achieve any effect. More people join. In the end, there will be around 60 participants in the room. Run around, grab post-its from every room in the venue, run back.
Nala replanned the workshop on the fly and gave me a master class in the ineffable art of “It is what it is.” As a 3° Dan political facilitator, I was impressed by what a 6° Dan could do. I still have a long way to go. The workshop involved different exercises that culminated in the production of a collaborative map documenting all the relevant organizations in Germany fighting against Big Tech.

The most interesting bit is that most people didn't know most of the actors and organizations that other participants were bringing up. Neither did I, despite having done similar mapping exercises before. You can see the results in the photo. Hopefully, soon the exhaustion from the conference will fade, I will regain control of my limbs, and be able to transcribe and systematize the results.
A second important insight, which was the input for the reflection I'm writing, is that when the participants were asked which actors are building the narrative we need, very few, and underwhelming, actors came up. Solarpunk and Lunarpunk were mentioned. Then Cory Doctorow. Big up for Cory, who always promotes Tech Workers Coalition, but I don't think his shoulders are broad enough to carry this burden. Where is the equivalent of Fridays For Future or Extinction Rebellion in the fight for democratic technology? There's nothing like that. Nobody is filling that ecosystemic function.
The dust still has yet to settle after the event. We have to deal with the consequences of German political repression. We haven't had a meeting yet, but we are already thinking about what comes next. It's clear this is not going to end here.
The intensification of the psycho-digital loops makes the whole society more nervous: Cables of Resistance is but an itch that got scratched.

The shakes provoked by the acceleration of Imperial collapse leave bigger and bigger cracks in the concrete, where the tendrils of a new technology probe around, looking for attachment, nourishment, and Sunlight.
We did what we did not because it was easy, but because we thought it was easy. We are going to do it again.

from 下川友
The trains today were again overflowing with new employees, packed with as many people as could fit. For some reason I was walking with a quick-footed groove, and when I went to catch the next train at my transfer, my feet were so fast that the train one ahead of my usual one pulled up right in front of me.
I don't know why I was walking so fast. Maybe it was because, hoping to improve my eyesight, I had been looking at the colours of the houses visible from the train window and putting those colours into words in my head. The roofs, it turned out, were mostly subtle colours that didn't come to mind right away.
I thought I had caught an earlier train, but it was a local, and I realised I wouldn't make it to work on time. I got off at a station I'd never seen and changed to an express, but I still ended up on a later train than usual. The carriage smelled like cotton candy, and it made me feel a little sick.
The train home was crowded too. A foreigner with a strangely solid, imposing physical presence stood next to me, and when the train braked hard I stumbled into his elbow. It hurt enough that I thought I'd hit the handrail, but he didn't even notice the contact and didn't move at all.
Dinner was ginger pork. To eat a proper meal like this on a weekday is something I'm simply grateful for. Genuinely happy.
I take a bath.
Since long ago, images of myself talking to someone have sometimes surfaced in my mind on their own.
The me inside those images used to boast that, back when I loved manga, just reading it made me feel as if I were drawing it myself.
Afterwards, too, I was talking away happily in the image, yet there was only sound, no concrete words.
Not wanting the trousers I'd bought new to wear out, I bought slacks on Mercari. Cheap, well-made brown slacks are easy to find second-hand, but black ones are hard to come by. Especially tight ones.
Tomorrow I'll wear them. This pair, too, has a shape I think I can come to love.
from An Open Letter
I apologize because this is gonna sound so incredibly cringe and I swear it's not in a fucking Redditor way, but I do think I have a fairly high IQ, which just corresponds to pattern matching, and I wonder if that is my issue in a way. I talked with my therapist today about why I felt so horribly bad after spending time with friends, and there are other reasons there, but the biggest thing was just the severity of how bad I felt afterwards, and specifically the fact that I had suicidal ideation. And I believe the reason for those thoughts was that I felt like I was slipping into depression even though I was doing everything I thought I needed to do. As a result, I start to feel this desperate panic, and the way I described it to my therapist was like a hostage taker telling you that they need $100,000. You somehow manage to scrape together enough money to pay the ransom, and when you finally do, the hostage taker refuses to release the hostage. It is the desperation of already being faced with something so incredibly difficult, managing to do it all, and then finding out that it is not enough: you are still at square one, but with fewer resources and less direction. And when the threat is a depressive episode, it is enough for me to start to indulge in thoughts of killing myself. A lot of that is because I remember how incredibly horrifying and hellish a depressive episode is. And when I start to feel those first warning signs, I am like a crab in the pot as it starts to boil. I am desperate to avoid what is almost guaranteed hell. Except that in the past that has been the case, but in the present it's not nearly that bad. It is still horrible and I wish I didn't have to go through it sometimes, but it is nowhere near the kind of episode I am afraid of.
One of the things my brain tries to trick me into forgetting is that doing all of these other things is a big reason why the episodes are not nearly as bad as they used to be. Nowadays more often than not it's just one or two days depressed rather than weeks or even months. I also now have the tools to break myself out of those cycles, and I do have those social networks fostered well enough to help me out. And so I think a lot of the fear and desperation comes from the pattern matching. Using the crab analogy, I start to feel the water heating up and I'm desperate to do anything to avoid the incoming pain of being boiled alive, but in reality the water is just going to get warm to hot for a bit, and then go back down. And even if I logically know that, and have even seen it in the data, depression is a pretty efficient thing in the sense that it also convinces you that this feeling will not go away, that it is going to stay.
Another thing from therapy today was that I should remind myself that, statistically, since I don't think I am such a unique person, there will be other people out there like me, and I will be able to meet a girl who I feel matches me. And additionally I will be able to meet her at a time when things work out and in the place where I am. And for what it's worth, I do see very concrete, tangible genius in myself, especially in the small stuff: being able to recognize certain red flags that I previously would have romanticized, and the fact that I am willing to step away from infatuation to wait a little bit longer for a partner I feel more confident about. I think these are all things that past me has not always exhibited, and I'm very proud of myself for that and want to recognize that progress. I am proud of the person I see myself becoming every day.
from Patrimoine Médard bourgault
A living memory is still here, on the Médard Bourgault estate. Through these recordings, the words of André Médard give unfiltered access to a history that has never been written down in this way.
6 hours of testimony from André Médard Bourgault – 18 audio files sorted, summarised, and time-coded, recorded on the family estate

André Médard is 85 years old. He carries in his memory an intimate and rare knowledge of Médard, of his family, his techniques, his era, and his land. These recordings were captured over several meetings on the family estate.
They constitute a direct sound archive, captured in the very place where this memory was formed.
I am the grandson of Médard Bourgault. I spent part of my youth on this estate, wandering it, observing, and sometimes sleeping there. From my birth until the COVID period, I celebrated the main Christian holidays there, notably Christmas and Easter.
In parallel, I worked on children's animation productions (HBO, Radio-Canada), which allowed me to develop a capacity for structuring stories and showcasing narrative content.
This double proximity, personal and professional, gives this work the dimension of a living exchange, rooted in real experience of the place and in a concrete ability to pass on its memory.
The files are being catalogued. The summaries below give an overview of the subjects covered in each recording. Not all of the audio is yet available for public listening.
These recordings were captured on a Zoom H2 during informal meetings with André Médard Bourgault on the family estate in Saint-Jean-Port-Joli. The conversations were not scripted – André Médard spoke freely, guided by the objects around him, the rooms of the house, the grounds. These are raw captures, with no staging. The files are sorted by place and date of recording. The summaries were made by listening, minute by minute. Approximate dates are flagged – André Médard himself acknowledged that Médard was not always reliable about years.
The following sections are examples drawn from the recordings. They illustrate how the audio can be used to build short stories from specific elements of the Médard Bourgault estate.
The corpus as a whole covers a wide range of subjects: the sculptures present on the estate, the different periods of the lives of Médard and André Médard, life in the village, the trades, and how daily life was lived within a large family. It contains the good as much as the less good – with no staging.
These excerpts show the potential of the audio material to bring forth complete stories from fragments captured on site.
The dirt roads
In 1932, the roads are still dirt. A couple from Rivière-du-Loup makes it to Saint-Jean-Port-Joli and wants to buy a sculpture. It is Médard Bourgault's first sale. He gets 2 piastres for it. Quebec is in the depths of the economic crisis. André Médard remembers what 2 piastres were worth in those days.
The village
Saint-Jean-Port-Joli in the 1930s and 40s – oxen and horses for ploughing, Fortin the blacksmith, the Auberge du Faubourg, the American tourists arriving in summer, Jean-Marie Gauvreau and other important figures of the era. André Médard speaks of it as if it were yesterday.
Before the Quiet Revolution
In the Quebec of before 1960, the clergy had a say in everything – including the length of the loincloth on crucifixes. Médard's sons lived off religious commissions. Médard himself carved nudes on the shore in secret. André Médard recounts this tension – between a father's freedom and his sons' livelihood.
The domestic-science schools
In the 1930s, Médard's daughters attended the domestic-science school. It was an institution – girls learned to keep house, to sew, to cook. André Médard describes how it worked, what his sisters experienced there, and what it says about the Quebec of that era.
The Montcalm
Before he carved, Médard was a sailor. He served on the Montcalm – an icebreaker on the St. Lawrence – and crossed the Atlantic with an English crew. That voyage to Europe, that life on the river, that way of seeing the world – all of it resurfaces in his work. André Médard recounts his father's seafaring years.
The length of the loincloth on the crucifixes
The clergy commissioning religious sculptures from the sons while the father hides his nudes under a sheet. Then the clergy negotiating the length of the loincloth on the crucifixes. And finally Médard no longer hiding – he owns it.
A whole era lives in that tension. The Quebec of before the Quiet Revolution, told through a sheet and a loincloth that was too short.
André Médard carries all this with humour and affection. That is what makes these recordings alive.
The audio bank is larger than the excerpts presented here and makes it possible to structure several complete stories from the same material.
Archiving, structuring, and editing still in progress.
https://archive.org/details/Andre-Medard-Bourgault-Temoignage-27-octobre-2021
Duration: 25 minutes
Sound of the grandfather clock – an authentic sound recording of the clock that André Médard discusses in detail in the 27 October 2021 file.
Ambient sound – André Médard walking on the grounds of the estate. Footsteps.

Duration: ~7 minutes
https://archive.org/details/rencontre2_202603

Recording made outdoors

Recording made in the small shop at the river's edge, on the Domaine Médard Bourgault

A file of about 15 minutes; every symbol present is discussed
Médard humanizing the sacred
Document being updated. Raphaël Maltais Bourgault, 2026
This documentation work is being built in the field, from recordings and archives still in progress.
If you feel it deserves to be continued: https://ko-fi.com/raphaelmaltaisbourgault

Understanding the Domaine Médard Bourgault
These pages offer a way to discover the domaine, its history, and the current issues through archives, analyses, and first-hand testimony.
Archives and memory of the place → Domaine Médard Bourgault: sound archives and testimony from André Médard Bourgault. Recordings made on the domaine, retracing the life, the gestures, and the memory of the place.
Analyses and current situation → Domaine Médard Bourgault: analyses and current issues. Reflections and updates on the matters at stake.
Knowledge and transmission → André Médard Bourgault: complete master class in woodcarving → Médard Bourgault: artistic education, principles, beauty, and transmission. Understanding Médard Bourgault's practice, teaching, and artistic vision.
Story and historical context → Médard Bourgault: a sea narrative inspired by his journal (1913–1918). A narrative based on his writings that sheds light on a little-known period of his life.
A current issue for the domaine → Domaine Médard Bourgault: should the garden become public access to the river? A concrete question about the future and use of the place.
from
SmarterArticles

Nearly seven in ten middle and high school students now say they believe artificial intelligence is eroding their critical thinking skills. They reported this in a December 2025 survey conducted by the RAND Corporation's American Youth Panel. They also reported, in the very same survey, that they are using AI for homework more than ever before, with usage climbing from 48 per cent to 62 per cent in barely seven months. The students, in other words, can see the problem clearly. They simply cannot stop participating in it.
This is an extraordinarily revealing paradox, and it deserves more scrutiny than the predictable hand-wringing it has generated. Because the most uncomfortable question here is not whether ChatGPT is making teenagers worse at thinking. It is whether the education system that ushered AI into classrooms with such breathless enthusiasm ever genuinely valued the kind of independent, rigorous, critical thought it now claims to be losing.
The answer, if you follow the evidence, is not encouraging.
The RAND data is striking in its internal contradictions. Among the 1,214 young people surveyed (aged 12 to 29, all enrolled in school during the 2025-26 academic year), 67 per cent endorsed the statement that “the more students use AI for their schoolwork, the more it will harm their critical thinking skills.” That figure had risen more than ten percentage points in just ten months. The concern was especially pronounced among female students, 75 per cent of whom agreed, compared with 59 per cent of male students.
Yet during the same period, the percentage of middle schoolers using AI for homework leapt from 30 per cent to 46 per cent, and among high schoolers it jumped from 49 per cent to 60 per cent. Most of these students (60 per cent) also expressed concern about using AI for school-related purposes. So they are worried and they are doing it anyway. This is not cognitive dissonance in any simple sense. It is something more structurally interesting: students have correctly diagnosed a systemic problem, but they exist within a system that gives them no rational incentive to behave differently.
Consider the logic from a student's perspective. Assignments are graded. Grades determine university admissions. University admissions determine (or are perceived to determine) life outcomes. If your peers are using AI and getting better grades, opting out is not a principled stand. It is a competitive disadvantage. The students are not confused. They are trapped.
Think of it another way. You are sixteen. You have five GCSEs to revise for, a personal statement to write, and a part-time job. Your classmates are producing polished coursework in half the time it takes you to write a first draft because they are running their ideas through ChatGPT. Your teachers, overwhelmed and under-resourced, cannot reliably tell the difference. The system rewards the output, not the process. In this environment, choosing not to use AI is not intellectual integrity. It is self-sabotage.
Meanwhile, faculty at the university level are sounding alarms with even greater urgency. A national survey conducted by the American Association of Colleges and Universities and Elon University's Imagining the Digital Future Centre in November 2025 found that 95 per cent of the 1,057 faculty respondents feared that generative AI would increase student overreliance on the technology. Ninety per cent said it would diminish students' critical thinking skills. Eighty-three per cent said AI would decrease student attention spans. And 78 per cent said cheating on their campuses had increased since these tools became widely available, with 57 per cent saying it had increased significantly.
The teachers see the same thing the students see. The difference is that teachers are surprised. The students are not.
Here is where the conversation gets genuinely uncomfortable. Long before ChatGPT existed, education reformers, cognitive scientists, and classroom teachers themselves were raising the alarm about a system that was systematically undermining higher-order thinking. The culprit was not artificial intelligence. It was standardised testing.
The No Child Left Behind Act of 2001 (NCLB) represented, in the United States at least, the triumph of measurable outcomes over meaningful learning. Under its regime, schools were judged by their students' performance on standardised assessments. The consequences of poor scores were severe: funding cuts, staff dismissals, school closures. The entirely predictable result was what educators came to call “teaching to the test,” a practice in which classroom instruction was narrowed to the specific content and formats that would appear on state exams.
The effects were devastating and well-documented. Subjects not covered by standardised tests, including art, music, physical education, and social studies, were minimised or eliminated outright. Some principals eliminated recess to devote more time to test preparation. Science was replaced with additional maths drills. Social studies gave way to language arts worksheets. The phrase that captured this era most succinctly was “sit, get, spit, forget,” a cycle in which students received information passively, regurgitated it on an exam, and promptly forgot it, having never engaged with it at any depth.
The situation in the United Kingdom has followed a parallel trajectory. Successive reforms, from the introduction of the National Curriculum in 1988 through the expansion of league tables in the 1990s to the intensification of Ofsted inspections, have created an accountability culture that rewards measurable outcomes above all else. Teachers in England report spending enormous amounts of time on assessment preparation, data tracking, and administrative compliance, time that might otherwise be devoted to the kind of open-ended, inquiry-driven teaching that develops critical thinking. The Department for Education published expanded guidance on AI in education in June 2025, stressing that AI tools should support rather than replace subject knowledge and that students still need a strong foundation in reading, writing, and critical thinking to use these tools effectively. But guidance is one thing; structural reform is quite another.
Paulo Freire, the Brazilian educator and philosopher, would have recognised all of this instantly. In his seminal 1968 work “Pedagogy of the Oppressed,” Freire described what he called the “banking model” of education, in which teachers deposit knowledge into the passive receptacles of students' minds, and students are expected to receive, memorise, and repeat. Freire argued that this approach was fundamentally hostile to critical consciousness; the more students worked at storing deposits, the less they developed the critical thinking that would allow them to intervene in the world as transformers of that world. His alternative, critical pedagogy, was rooted in dialogue, in treating students as co-creators of knowledge rather than empty vessels to be filled.
NCLB was, in Freire's terms, the banking model with federal enforcement mechanisms. The UK's accountability framework achieved much the same outcome through different institutional channels. And while NCLB was eventually replaced by the Every Student Succeeds Act (ESSA) in 2015, which offered states greater flexibility in assessment design, the deeper cultural damage had been done. An entire generation of teachers on both sides of the Atlantic had been trained in a system that rewarded compliance over curiosity, memorisation over analysis, and standardised answers over independent thought.
So when commentators now lament that AI is destroying students' capacity for critical thinking, the honest follow-up question is: which critical thinking? When, precisely, was this golden age of independent thought in schools? Because the evidence suggests it was already in serious trouble long before a single student typed a homework question into ChatGPT.
The cognitive science, meanwhile, tells a more nuanced story than either technophiles or technophobes would prefer. Research published in 2025 by Michael Gerlich of SBS Swiss Business School, in the journal Societies, investigated the relationship between AI tool usage and critical thinking through the lens of cognitive offloading, the well-established phenomenon in which humans delegate cognitive tasks to external resources to reduce mental demand.
Gerlich's study surveyed and interviewed 666 participants across diverse age groups and educational backgrounds, finding a significant negative correlation between frequent AI tool use and critical thinking abilities. The numbers were stark: cognitive offloading was strongly correlated with AI tool usage (r = +0.72) and inversely related to critical thinking (r = -0.75). Younger participants, those aged 17 to 25, showed higher dependence on AI tools and lower critical thinking scores compared to older age groups. However, and this is crucial, advanced educational attainment correlated positively with critical thinking skills, suggesting that education, when it works properly, can mitigate some of the cognitive costs of AI reliance. The implication is clear: the problem is not that education cannot protect against cognitive offloading, but that most education systems are not currently designed to do so.
A separate study from Microsoft Research, presented at CHI 2025 (the Conference on Human Factors in Computing Systems), surveyed 319 knowledge workers about their experiences with generative AI. The findings revealed a telling dynamic: higher confidence in AI was associated with less critical thinking, while higher self-confidence was associated with more critical thinking. The research also identified a fundamental shift in the nature of cognitive work, from information gathering to information verification, from problem-solving to AI response integration, and from doing tasks to supervising them.
This matters enormously for students, who are still in the process of building the very cognitive capacities that adults are now choosing to offload. A knowledge worker who has spent twenty years learning to construct arguments, evaluate evidence, and synthesise information can afford to delegate some of those tasks to AI without losing the underlying skill. A teenager who has never fully developed those skills in the first place is in a fundamentally different position. For them, cognitive offloading is not a convenience. It is a developmental short-circuit.
This is not merely a problem of laziness or moral failure. It is a predictable consequence of how human cognition interacts with powerful tools. We have always offloaded cognitive tasks onto external supports, from written language to calculators to search engines. The question with AI is whether the offloading is so comprehensive, and so seamless, that it crosses the line from scaffolding (which is temporary and empowering) to substitution (which is permanent and diminishing).
The critical distinction, as cognitive scientists have noted, is whether AI operates as a scaffold or a substitute. Scaffolding is characterised by temporariness, adaptability, and the goal of strengthening internal capacities. Substitution simply does the thinking for you. And the educational system, in its rush to adopt AI tools, has devoted remarkably little attention to ensuring the former rather than the latter.
Any honest account of this situation must reckon with the position of teachers themselves, who are caught between contradictory demands with diminishing resources to meet any of them. Nearly half of teachers in the United States and the United Kingdom report chronic burnout. Teacher shortages are endemic. Class sizes in many state schools have grown. Administrative demands consume ever-larger portions of the working week.
Into this environment of exhaustion and scarcity comes AI, marketed to schools and teachers as a solution to the very problems the system has created. District leaders implementing AI tools report that teachers can reclaim an average of 5.9 hours per week by automating lesson planning, grading, and communication tasks. For a profession in crisis, this is not a trivial proposition. If a teacher can use AI to handle routine administrative work and spend more time on meaningful instruction, that sounds like progress.
But the reality is more complicated. Only about one in five teachers work at a school that has an AI policy. Teacher training on the pedagogical use of AI remains inconsistent and often superficial. The gap between the promise of AI as a teaching aid and the lived reality of its implementation is vast. Teachers are being asked to integrate a transformative technology into their practice while simultaneously meeting accountability targets, managing behaviour, differentiating instruction for diverse learners, and coping with the emotional demands of working with young people in an era of escalating mental health challenges.
The result is that AI adoption in schools is happening not through careful pedagogical planning, but through exhaustion. Teachers are adopting AI not because they have been trained to use it well, but because they are too stretched to do without it. And students are adopting AI not because they have been taught to use it critically, but because nobody has given them a compelling reason not to.
The speed at which schools reversed their positions on AI is itself a revealing story. In January 2023, New York City's Department of Education became one of the first major school systems to ban ChatGPT from its networks and devices. The ban was announced with the gravity of a public health measure, citing concerns about academic integrity and the tool's potential to provide students with answers that lacked critical thinking. Fairfax County Public Schools in Virginia and Austin Independent School District in Texas followed suit, citing child safety and academic integrity.
Within four months, New York City reversed its ban. The reversal came after convening tech industry representatives and educators to evaluate the technology's potential benefits. By 2024, more than three-quarters of educators reported that their districts had not banned ChatGPT or similar tools. The ban-first, embrace-later pattern played out across districts nationwide. Seattle Public Schools, which had initially banned ChatGPT and six additional AI writing assistance websites, similarly softened its stance.
This institutional whiplash is instructive. The initial bans suggested that schools understood, at least intuitively, that AI posed a genuine threat to the learning process. The rapid reversals suggested that this understanding was no match for the combined pressures of industry lobbying, parental expectations, competitive anxiety, and the sheer momentum of a technology that students were already using at home.
The AI in education market tells its own story of institutional capture. Valued at approximately 7 billion dollars in 2025, the sector is projected to grow to nearly 137 billion dollars by 2035, expanding at a compound annual growth rate of over 34 per cent. Major technology companies, including Microsoft, Google, Amazon, and Pearson, have invested heavily in educational AI products. In July 2025 alone, Microsoft announced plans to invest over 4 billion dollars in AI education initiatives. These investments are not philanthropic gestures. They are strategic plays for long-term market dominance in an industry that touches every child in the developed world.
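Those growth figures hold up to a quick back-of-the-envelope check. A minimal sketch in Python (the dollar amounts are the ones cited above; everything else is illustrative) recovers the implied compound annual growth rate from the two endpoints:

```python
# Sanity-check the implied compound annual growth rate (CAGR) from
# the market figures cited above: ~$7bn in 2025 to ~$136.79bn in 2035.
start, end, years = 7.0, 136.79, 10

# CAGR formula: (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 34.6%, consistent with "over 34 per cent"
```

Running the numbers this way is a useful habit whenever a market projection pairs a headline multiple with a growth rate: if the two are inconsistent, at least one of them is wrong.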
These are not neutral actors offering disinterested tools. They are companies with revenue models that depend on deep integration into educational infrastructure. When schools adopt their platforms, they are not just choosing a product; they are choosing a pedagogical philosophy, one that often prioritises efficiency, personalisation through algorithmic recommendation, and scalable delivery over the messy, slow, deeply human process of learning to think for oneself.
Not all educational AI is created equal, and the differences matter. Khan Academy's Khanmigo, launched in limited beta in 2023 and reaching approximately 1.5 million users across 130 countries by the end of 2025, represents a philosophically distinct approach to AI in education. Unlike ChatGPT, Khanmigo is designed not to give answers directly. Instead, it employs a Socratic method, offering hints and guiding questions intended to help students find answers themselves.
According to Khan Academy's own data, 68 per cent of students preferred Khanmigo's approach over ChatGPT for homework help, citing reduced anxiety about cheating. There is, students reported, a real psychological difference between “the AI gave me the answer” and “I figured it out with help.” This is a meaningful distinction. The student who works through a problem with Socratic guidance is still engaging in the cognitive labour that builds understanding. The student who pastes an essay prompt into ChatGPT and submits the output is not.
This distinction matters because it reveals that the problem is not AI per se, but how AI is designed and deployed. A tool built to scaffold learning is fundamentally different from a tool optimised to generate complete, polished outputs on demand. Yet in practice, most students are not using carefully designed educational AI. They are using general-purpose large language models, tools built for productivity, not pedagogy. And the education system has done remarkably little to shape how students interact with these tools.
The gap between what is possible and what is actually happening is enormous. Khanmigo demonstrates that AI can be designed to support critical thinking rather than replace it. But Khanmigo also requires institutional investment, teacher training, and a deliberate pedagogical framework, precisely the things that the current system, oriented toward rapid adoption and measurable outcomes, is least equipped to provide.
The temptation to draw neat historical parallels is strong, and partly justified. In 1986, the Christian Science Monitor reported on fierce debates over calculator use in schools, with one Oregon teacher of the year warning that “once you have a crutch, you rely on it more and more.” The National Council of Teachers of Mathematics had urged the integration of calculators at all grade levels, and maths teachers in Washington, D.C. picketed their meetings in protest.
The pro-calculator camp cited studies showing that students with calculators performed at least as well on tests as those without them (except, curiously, in the fourth grade). The anti-calculator camp warned of atrophied mental arithmetic skills and dangerous dependency. Eventually, calculators became ubiquitous, and the debate faded into the background noise of educational history.
The AI parallel writes itself, but it is also misleading in important ways. A calculator is a tool for performing a specific, well-defined operation. It computes. AI, by contrast, is a tool for generating language, analysing arguments, synthesising information, and producing written outputs that closely mimic (and sometimes surpass) the kinds of work that students are assessed on. The calculator could not write your essay. ChatGPT can. The calculator did not threaten the process by which students learned to construct arguments, weigh evidence, or develop original perspectives. AI does. The scope of the offloading is categorically different, and so the historical precedent offers less comfort than its proponents suggest.
The more honest historical parallel might be the introduction of television in the 1950s and 1960s, when educators initially hailed the new medium as a revolutionary learning tool before gradually recognising that passive consumption of information was not the same as active engagement with ideas. The lesson from that era was not that television was inherently bad, but that it was easy to confuse exposure to information with genuine understanding. AI presents the same confusion in a more insidious form: the output looks like understanding. It reads like comprehension. But the student who submits it may not have comprehended anything at all.
The global picture offers both cautionary tales and faint glimmers of hope. The OECD's PISA 2022 assessment, which for the first time evaluated creative thinking skills across 64 countries and economies, revealed enormous international variation in how well education systems prepare students for higher-order cognition. Singapore, South Korea, Canada, Australia, New Zealand, Estonia, and Finland topped the creative thinking rankings, with Singapore's students scoring a mean of 41 points, well above the OECD average of 33. In Singapore, South Korea, and Canada, over 70 per cent of students performed at or above Level 4.
What distinguishes these high-performing systems is not the presence or absence of technology, but the pedagogical philosophy that underpins its use. Finland, consistently celebrated for its educational outcomes, emphasises teacher autonomy, minimal standardised testing, and a holistic approach in which children are encouraged to explore their interests rather than conform to rigid assessment frameworks. Finnish teachers enjoy the freedom to craft lessons tailored to their students' needs, a dynamic that fosters precisely the kind of critical and creative thinking that AI threatens to undermine elsewhere. Crucially, Finland has also launched national AI literacy programmes, including free online coursework, ensuring that citizens understand the technology rather than simply consuming it.
Singapore, meanwhile, has announced a national initiative to build AI literacy among students and teachers, with training to be offered at all levels by 2026. But Singapore's approach is embedded within its broader “Smart Nation” strategy, which explicitly aims to help teachers customise education for individual students rather than replace teacher judgement with algorithmic recommendation. The emphasis is on AI literacy, understanding what these tools are, what they can and cannot do, and how to use them critically, rather than mere AI adoption.
The contrast with the prevailing approach in the United States and United Kingdom is instructive. Where Finland and Singapore have invested in teacher preparation, pedagogical frameworks, and critical AI literacy, many anglophone systems have prioritised speed of adoption, market-driven solutions, and measurable outcomes, precisely the conditions under which AI is most likely to substitute for, rather than scaffold, genuine thinking. The PISA data suggests this is not a coincidence. Systems that invest in the conditions for critical thinking produce students who think critically. Systems that invest in accountability metrics produce students who are good at meeting metrics.
What emerges from all of this is not a simple story about technology corrupting youth. It is a story about institutional incentives, structural pressures, and a decades-long failure to prioritise the very capacities that AI now threatens.
Consider the chain of causation. Standardised testing regimes devalued critical thinking in favour of measurable performance. This created an educational culture oriented toward right answers rather than good questions. Into this culture arrived AI tools optimised to produce right answers at unprecedented speed. Students, trained since primary school to value correct outputs over thoughtful processes, adopted these tools with the perfectly rational logic of the system they inhabit. And institutions, pressed by market forces, parental expectations, and competitive dynamics, facilitated this adoption with minimal safeguards.
The students who told RAND researchers that AI is harming their critical thinking are not confused. They are articulating something that adults in the system have been reluctant to say: that the educational infrastructure was never really set up to produce independent thinkers. It was set up to produce compliant test-takers. AI simply automated the compliance.
This framing shifts the burden of responsibility from individual students (who are often blamed for laziness or moral weakness) to the system that shaped their incentives. A 15-year-old who uses ChatGPT to complete an essay is not failing the education system. The education system is failing that 15-year-old, not because it allowed access to AI, but because it created conditions in which using AI to generate a polished essay and submitting it for a grade is the most rational thing a student can do.
If the diagnosis is systemic, the treatment must be too. Banning AI, as the brief experiment of early 2023 demonstrated, is neither practical nor effective. Students will use these tools regardless of school policies, just as they use mobile phones in classrooms despite decades of prohibition attempts. The question is not whether students will interact with AI, but what kind of interaction the education system enables.
A genuinely transformative response would begin by acknowledging what the PISA data and international comparisons make clear: that systems emphasising teacher autonomy, reduced standardised testing, and inquiry-based learning produce students who are better equipped for creative and critical thought. This is not a new insight. It is a well-established finding that anglophone education systems have spent decades ignoring in favour of accountability frameworks and market-based reforms.
It would continue by investing in the kind of deliberate AI pedagogy that tools like Khanmigo gesture toward, in which AI is designed to support the development of thinking skills rather than bypass them. This requires not just better software, but better teacher training, smaller class sizes, and assessment reforms that reward the process of thinking rather than the product of having thought. It requires, in short, treating teachers as professionals with the autonomy and resources to teach well, rather than as data-entry operatives tasked with hitting numerical targets.
It would also require a fundamental rethinking of what education is for. If the purpose of schooling is to produce graduates who can pass standardised assessments and demonstrate competence on measurable metrics, then AI is not a threat; it is an upgrade. It does what the system was always asking students to do, only faster and more efficiently. If, however, the purpose of education is to cultivate human beings capable of independent judgement, ethical reasoning, creative problem-solving, and the ability to navigate complexity without algorithmic assistance, then the arrival of AI is not the crisis. It is the revelation that the crisis was already here.
The DfE's guidance in the United Kingdom acknowledges as much, at least implicitly. Its insistence that AI must operate under human oversight, that professional judgement and critical thinking remain essential, and that AI is a tool to inform decisions rather than make them, articulates a philosophy that is sound. Whether the institutional structures, the funding, the teacher training, and the assessment frameworks exist to make that philosophy real is an entirely different question.
The most provocative implication of the RAND data is not that AI is making students less capable. It is that the students themselves are more honest about the situation than the institutions that serve them. When 67 per cent of young people say AI is harming their critical thinking, they are not just reporting a technology problem. They are reporting a system problem. They are saying, in effect: we know this is making us worse at thinking, and we know the system gives us no reason to care.
That honesty deserves a response that is equally honest. Not more bans. Not more surveillance software. Not more hand-wringing opinion pieces from adults who themselves rely on AI for their professional work. What the moment demands is a structural reckoning with the values that education systems actually embody, as opposed to the values they claim in their mission statements.
The 95 per cent of faculty who fear student overreliance on AI are right to be concerned. But the overreliance they fear is not a new phenomenon introduced by ChatGPT. It is the logical extension of an educational philosophy that has been cultivating dependency on external authority, whether in the form of textbooks, standardised curricula, or high-stakes assessments, for generations. AI did not break the system. It revealed, with uncomfortable clarity, what the system was always building toward: a model of education in which the appearance of learning matters more than learning itself, and in which the correct output is valued infinitely more than the process of arriving at it.
The students, it turns out, were paying closer attention than anyone gave them credit for. They can see the trap. They can describe it with remarkable precision when asked. They just need the adults in the room to stop pretending it is not there.
RAND Corporation. “More Students Use AI for Homework, and More Believe It Harms Critical Thinking: Selected Findings from the American Youth Panel.” RAND Research Report RRA4742-1, March 2026. https://www.rand.org/pubs/research_reports/RRA4742-1.html
RAND Corporation. “Student Use of AI for Homework Rises as Concerns Grow About Critical Thinking Skills.” RAND Press Release, March 2026. https://www.rand.org/news/press/2026/03/student-use-of-ai-for-homework-rises-as-concerns-grow.html
Watson, C. Edward, and Rainie, Lee. “The AI Challenge: How College Faculty Assess the Present and Future of Higher Education in the Age of AI.” American Association of Colleges and Universities and Elon University, January 2026. https://www.aacu.org/newsroom/national-survey-95-of-college-faculty-fear-student-overreliance-on-ai-and-diminished-critical-thinking-among-learners-who-use-generative-ai-tools
Gerlich, Michael. “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies, 15(1), 6, 2025. https://www.mdpi.com/2075-4698/15/1/6
Lee, et al. “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers.” Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/full/10.1145/3706598.3713778
Freire, Paulo. “Pedagogy of the Oppressed.” 1968; English translation, Continuum Publishing, 1970.
National Education Association. “Standardized Testing is Still Failing Students.” NEA Today. https://www.nea.org/nea-today/all-news-articles/standardized-testing-still-failing-students
CNN. “New York City public schools ban access to AI tool that could help students cheat.” CNN Business, January 2023. https://www.cnn.com/2023/01/05/tech/chatgpt-nyc-school-ban/index.html
NBC News. “New York City public schools remove ChatGPT ban.” NBC News, May 2023. https://www.nbcnews.com/tech/chatgpt-ban-dropped-new-york-city-public-schools-rcna85089
Education Week. “Students Are Worried That AI Will Hurt Their Critical Thinking Skills.” Education Week, March 2026. https://www.edweek.org/technology/students-are-worried-that-ai-will-hurt-their-critical-thinking-skills/2026/03
OECD. “PISA 2022 Results (Volume III): Creative Minds, Creative Schools.” OECD Publishing, June 2024. https://www.oecd.org/en/publications/pisa-2022-results-volume-iii_765ee8c2-en.html
Khan Academy. “Meet Khanmigo: Khan Academy's AI-powered teaching assistant and tutor.” 2025. https://www.khanmigo.ai/
Precedence Research. “AI in Education Market Size to Surge USD 136.79 Bn by 2035.” Precedence Research, 2025. https://www.precedenceresearch.com/ai-in-education-market
Christian Science Monitor. “The great calculator debate: Educators disagree over their place in the classroom.” CSMonitor.com, 9 May 1986. https://www.csmonitor.com/1986/0509/dcalc-f.html
Centre on Reinventing Public Education. “Shockwaves and Innovations: How Nations Worldwide Are Approaching AI in Education.” CRPE, 2025. https://crpe.org/shockwaves-and-innovations-how-nations-worldwide-are-dealing-with-ai-in-education/
Emerald Publishing. “AI policies in school education: a comparative study on China, Singapore, Finland, and the US.” Journal of Science and Technology Policy Management, 2025. https://www.emerald.com/jstpm/article/doi/10.1108/JSTPM-06-2024-0218/1302351/
Brookings Institution. “The Impact of No Child Left Behind on Students, Teachers, and Schools.” Brookings Papers on Economic Activity, 2010. https://www.brookings.edu/wp-content/uploads/2010/09/2010b_bpea_dee.pdf
Education Week. “Does Your District Ban ChatGPT? Here's What Educators Told Us.” Education Week, February 2024. https://www.edweek.org/technology/does-your-district-ban-chatgpt-heres-what-educators-told-us/2024/02
Department for Education. “Generative AI in Education Settings.” UK Government, June 2025. https://thirdspacelearning.com/blog/ai-in-schools/
K-12 Dive. “Lighten teacher workloads and reduce burnout with AI designed for education.” K-12 Dive, 2025. https://www.k12dive.com/spons/lighten-teacher-workloads-and-reduce-burnout-with-ai-designed-for-education/758435/
Education Futures. “How did we get from 'schools kill creativity' to 'AI kills critical thinking in schools?'” Education Futures, 2025. https://educationfutures.com/post/how-did-we-get-from-schools-kill-creativity-to-ai-kills-creativity-in-schools/

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary:
* A strange coincidence: as soon as the wife went to bed for her post-lunch nap, the home Internet went down. I checked with our ISP and they were aware of an Internet outage in our neighborhood and were working to have service restored. Three hours later, at almost the exact moment when the wife woke up, our home connection to the Internet was restored. Huh!
Anyway, she's gone to play Bingo now, and I've found a baseball game to keep me company. The Phillies are leading the Cubs 2 to 0 in the top of the 3rd inning. By the time the game ends I'll have worked through the night prayers and should be ready for bed.
Prayers, etc.:
* I have a daily prayer regimen I try to follow throughout the day, from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are in my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics:
* bw = 232.81 lbs.
* bp = 154/90 (68)

Exercise:
* morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet:
* 06:15 – 1 banana, coffee cake
* 11:00 – 1 peanut butter sandwich, crackers and gravy
* 12:15 – meat loaf and crackers, pineapple cake

Activities, Chores, etc.:
* 05:00 – listen to local news talk radio
* 06:00 – bank accounts activity monitored
* 07:00 – read, write, pray, follow news reports from various sources, surf the socials, nap
* 10:00 – listening to Jack in 60 Minutes
* 10:30 – start my weekly laundry
* 11:00 – listening to The Markley, van Camp and Robbins Show
* 12:15 to 14:15 – watch old game shows and eat lunch at home with Sylvia
* 14:30 – research sudden lack of home Internet
* 15:15 – listening to OTA local radio while folding laundry
* 17:33 – and... the Internet comes back up
* 17:45 – now that I've got access to the Internet again, I've found a baseball game to follow: Chicago Cubs vs Philadelphia Phillies

Chess:
* 17:30 – moved in all pending CC games