It's National Poetry Month! Submit your poetry and we'll publish it here on Read Write.as.
from
Micropoemas
Cross the street with your heart, follow with your head, and guess where the other person is going.
from
Micropoemas
To think like someone searching for a bone, because it may be out there somewhere, and that marrow, that happiness, is waiting.
There is a mysterious word that is pronounced by opening the lips, then sustaining the sound, as if the first vowel hid another, and it ends with the closing of the mouth, so that the last vowel traps the consonant and the silence prolongs its effect. It is made of air and smells of orange blossom.
It is a mantra, a true word of power. Though if you repeat the word needlessly, besides being dazed and out of place, you will surely burn away a beautiful moment, for yourself.
It is the sound that carries on its wings congratulation and the mystery of the blessing of good fortune for an important event, sometimes a wholly marvelous one, like the birth of a child, though there are those who recklessly use it to draw attention to themselves, celebrating trivial matters. And it is that we live in times when some, devoid of judgment, try to degrade the sacred.
They say (those who know about these things) that the sound carries a blessing warning us that even the most extraordinary happiness is air.
from
hex_m_hell
“Wake me up when the guillotines come out.”
You've read this comment whenever #NoKings, or some other big protest, comes up. This, or some variation thereof, is the default response of a specific type of pseudoradical. It represents a deep failure to engage with both history and the realities of power. Literally just go listen to the Revolutions podcast. I don't think it could be any clearer.
But let me make it a bit more clear. I'll start by repeating part of one reply I've given to a similar response, then expand it out a bit.
To get to guillotines, you have to change society. By the time you’ve changed society, you don’t need the guillotines. If you focus on the guillotines instead of building that society, you will end up with a more brutal and repressive system than the one you started in. See the history of the French Revolution and the USSR.
The South of France is literally the place billionaires go to hang out on their yachts, and Russia is literally the most oppressive and exploitative oligarchy in history. Like… That shit didn’t work. Not only did it not work in the long run, but it almost immediately became way worse.
If you build a society that is just and equitable, then the billionaires will starve to death because they can’t exploit anyone. They will starve to death while watching the world for literally everyone else become immeasurably better.
A guillotine is a machine that can only be used against someone who is disarmed, bound, and ultimately helpless. If you have already disarmed someone, they are not a threat. If you can tie up a billionaire, then you already have the structural capability of taking away their power (and have probably already done so). What, then, is the value of the guillotine at this point?
Are you worried they're going to pull themselves back up by their bootstraps? All of these assholes rely on hereditary privilege to build their own privilege. Once you take away their advantage, they are basically helpless. If they, somehow, manage to recover and try to mount an assault on the new order then you kill them, in battle, while they are an active threat.
There seems to be this idea that there is something about billionaires as people that is a threat. That these people are inherently bad and that the problem can be solved simply by killing them. Once they are dead, the narrative seems to imply, everything will be better. The people who take their place will definitely not follow the exact same trajectory, because the problem, it implies, is people, not systems.
This is actually very close to the antisemitic argument of the Nazis. Jews, they argue, are a specific type of people. They are inherently bad. They control all the money. If they are killed, it will make room for “True Germans” to start industry. They will not exploit people because they are naturally better.
Now, Jews don't actually run everything while billionaires kind of do. Jews are just an arbitrary group of people, while the group of “billionaires” actually represents a group with power. But the focus on individuals and their properties vs the properties of systems is consistent, and it is consistent in a way that specifically empowers authoritarianism. An uninformed anti-capitalist critique can quickly mutate into an explicitly antisemitic one, because history and culture wear deep grooves into reality that are easy for systems, without intention or thought, to fall into and follow.
It doesn't take a lot to jump from “billionaires” to “George Soros” directly to a red-brown “socialism of fools.” And then on whose necks do the guillotines fall? Consider the current moment and ask yourself if this feels unlikely.
These same billionaires have spent the last several decades atomizing people and learning to manipulate narratives to redirect violence away from themselves and towards the most marginalized people. If you believe you will outmaneuver them in controlling violence narratives, I have an NFT to sell you.
But let's ignore for a moment the ultimate injustice of killing someone who is not a threat and the risk of redirection.
The machinery of systematic execution is a social machinery that must be built (built at the expense of other machinery, I might add). It is not instant. It isn't the “first strike.” The Terror was about consolidating power, not establishing it.
By “the time the guillotines come out” the revolution is essentially over, and we have lost. When someone says this, they think that they are saying something radical. But they are actually saying, “I don't want to have anything to do with actually making a revolution happen. I just want to sit on my computer and criticize everyone else until they show me that they are done.” It is an assertion of complete disinterest in actually building the society we want to build. It is an assertion that they do not want to do any real work.
I am not a reformist, by any measure. Every time protest comes up, I write a big long post saying, in essence, “go fucking harder.” But “go harder” is not “kill the rich”; it's “organize” and “build a world in which the concept of 'rich' is not imaginable.”
Let me pick back up my original response, edited a bit.
I do want the billionaires to die, but I don't want them to die subdued. I want them to starve to death because they can’t figure out how to force people to feed them anymore. I want them to face reality, face the shattered idea that they deserved their wealth because they were so smart, capable, etc. I want them to see that they were never Atlas, but that we were always their Atlas. I want them to listen to “We Have Fed You All For a Thousand Years” and understand it in a way they could never have understood it before. I don't want them to die without seeing the world dance at their fall. I don't want them to die without fully understanding how much better the world is without their boot on its neck.
I want to shatter their god complex and grind it into dust, and rub it in their eyes every single day. I want something worse than a guillotine, something they actually fear. I want them to know they are unnecessary, that they are the villains, that their power was never earned. I want them to live in a world where they are socially poor, where they have as much social debt as they once had monetary wealth, so they can feel the absolute powerlessness, helplessness, and precarity that I felt growing up destitute in a home broken by their wars and economic policies.
I don't want them to die. I want them to suffer. I want them to suffer our joy. I want them to cry at the beauty of the world that we built, and the recognition that they spent every second of their lives preventing it.
And I want some of them to live, because I also want to hear an apology.
I am not not angry at billionaires.
This is why I will always shut down “guillotine” rhetoric. I dream of something far more cruel planned for them: a better future for us.
And if all you're doing is going out and holding a sign and marching, you are doing infinitely more to bring that world into reality than every single one of these pseudoradicals, with their guillotine dreams, combined.
from An Open Letter
As I sit here crouched in front of my small heater in my bathroom, I remember what it was like growing up. A lot of my memories are of crouching by the heater. Feeling that warmth was nice, like a surrogate embrace. I also really like warm showers for that reason, which is ironic because they’re bad for my skin. But I was thinking today how cruel it is that a shower cannot fully engulf me in that warmth. If I were to do that I would drown, and I think there’s something vaguely poetic about that. But only on a surface level, and I think that trope is so worn out that I feel ashamed even thinking it.
It’s weird but expected: I’m struggling with the excessive socialization right now, I think. I’m kind of tired, and I feel a bit worn out. I also feel like I’ve lost myself in some ways. Like I don’t game as much as I used to, not even close. And I think that’s not exactly a bad thing, but it is strange to see the difference in myself. I’m supposed to practice “Smells Like Teen Spirit” for my band, but all I want to do is play angst. And I don’t wanna practice the drums like I know I should. I just wanna play guitar, because it feels like a proxy for the voice that I’ve never learned how to use. And that’s also ironic because I can’t play the guitar that well, all things considered.
Honestly I just want to indulge in self hate a little bit here. I guess maybe because if I do that then it’s a little bit more understandable why I feel shitty even after I did all the things right. I went out with a friend and I signed up for a new event that I was anxious about, and it didn’t go bad at all. But I’m tired. And I feel like the rejection from just being this social and reaching out in this many different ways is catching up to me.
I put a bubble cigarette in my Amazon cart, because I thought it would be really funny as a bit. But I keep finding myself drawn to just the idea of putting that cigarette between my lips. Not an actual cigarette, but just the idea of it is enough to make me want it.
I wish I was able to go to the gym today, like I had enough time to also be able to do that in addition to the event I went to. I feel like when I’m depressed in this sense, the healthiest form of self harm I can do is go to the gym and just take it out on my body. I really do like that pain. I know that it’s not good for muscular growth or fatigue, but I just really like the feeling of pushing myself until the pain is enough to take the forefront of my mind. And it feels so edgy to say it, but I don’t really know how else to describe it. It’s not a bad sort of pain, but it’s more like a physical ringing that continues to get louder and louder until it drowns everything else out. I just wanna get lost in something. I want that escapism. I want some path, and it’s kind of ironic because all things considered my life is not at all bad right now. I guess this persistent sadness that comes sporadically is what got me to where I am, so I cannot complain too much.
from
Notes I Won’t Reread
So this is what you wanted, Ms. Noura? Right? Yes, I’ll write your name, honey, I won’t hide it anymore, I won’t pretend like I’m talking about someone else while I’ve been talking about you for fucking ever, I’ll write your name, Ms. Noura, again and again. Just the way you shoved it in my heart as if it belonged there, and let me tell you something, it did. until you ripped it off. So I say this again, is this what you wanted? distance? space? peace? all those pretty words people use when they’re done but don’t want to sound like it. Don’t worry, I listened, I just wish I’d learned earlier that listening to you was the fastest way to lose myself. I always listen a little too late, it seems. You kept begging me to let you go, like I was holding a knife to something scared, like I woke up one day and decided to become someone you’d be scared of. I didn’t. You grew that in me. slowly. quietly. perfectly.
funny, right? How you get to walk away clean, and I’m left explaining why I sound like this now? But yeah, sure. I’m the problem, I always am when love stops being convenient for you. I won’t chase you. relax. no calls, no messages, no “accidentally” showing up in your life again. You get to live your quiet little life without me ruining the aesthetic. Your peace, your version of the story where you did everything right, and I’ll stay here with the version where I watched you change and still chose you anyway. Don’t worry, honey, don’t worry, Ms. Noura, I won’t write you ugly. I know how much you care about that. appearances. angles. The way things look instead of what they are. I’ll write you exactly how you are now, distant enough to sleep at night, close enough to never leave my head. And yeah, I hate that. I hate that you’re still here in ways you don’t deserve to be. I hate that you made me into someone who even knows what this kind of hate feels like. I didn’t want it. But you were patient with it. You planted it better than you ever held me.
Don’t worry, though, I won’t keep writing about you. I know how much you’d like that, too.
Just remember this instead: You wanted distance, remember? I gave it to you. completely. Just don’t act surprised when one day we end up in the same place again. nothing dramatic. nothing cinematic. just eye to eye. smile to smile.
And you’ll realize, I was never as temporary as you tried to make me, and I meant every word I never said out loud. And I’ll make sure your corpse is placed next to mine. I’ll make sure you get what you wish, your eyes in one of my organ jars, where I can stare at them. I'll hold your heart with my own hands, showing you it's not the cold thing you said it was.
Congrats on the silence, Noura. You can keep it.
Sincerely, the silence you asked for.
from sugarrush-77
What the Bible calls sin, deeds of darkness, and whatnot, often feels really good in the moment. No matter who in the clergy tries to convince you that following God results in joyful living, and that following God is a surefire way to be happy, no amount of Spirit-induced joy can produce the same dopamine high as snorting crystal meth. Likewise, sin lets you achieve highs of pleasure that an upstanding citizen of God’s kingdom will probably never experience. However, it lasts but for a moment, and leaves you feeling empty and regretful in its wake. See how many people are going to AA meetings to quit using?
Upstanding citizens of God’s kingdom have to exercise utmost focus and willpower to keep their eyes on Christ, and Christ only. You look away for a moment, and you’ll find you have strayed. If suffering comes your way, you must endure it. Maybe sometimes, you’ll be happy. But you’ll find that you can go to sleep easy knowing that you’ve fought the good fight, and that your conscience is clear(er).
So, you can’t have your cake and eat it too. Ya gotta choose. What do you want?
I find that as I get older, I become more aware of death. Death is useful as a sieve for filtering out the things that are important and not important. It reminds me that I should choose to be an upstanding citizen of God’s kingdom, because that’s what really matters.
from
Noisy Deadlines

Babel-17 by Samuel R. Delany, 311p: This is a book ahead of its time, with a strong and intelligent main female character. It's weird and bizarre, with psychedelic vibes. It's full of interesting ideas: neurolinguistic programming, language being used as a weapon, polyamorous ship navigators, discorporate people, pilots good at wrestling. It's queer-norm, and it is cyberpunk before cyberpunk was a thing. It has an anti-war message, illustrated brilliantly in the weapons gallery scene (if you know, you know). Also, that dinner party scene: I've never read something so intense and vivid. This book feels remarkably new and modern even though it was written in 1966. I would have liked more character development and more information about the Invaders. The writing style was not exactly my cup of tea; it had highs and lows for me.
The Brush of Black Wings (Master of Crows #2) by Grace Draven, 135p: I wanted to read this novella because it's the sequel of a book that I love: Master of Crows. But this novella did not measure up to the previous book. The mystery plot was not interesting enough, and it was too easy to predict the ending.
Diplomatic Immunity (Vorkosigan Saga (Publication Order) #13) by Lois McMaster Bujold, 320p: We start with an older Miles and Ekaterin, on their honeymoon, getting pulled into a messy investigation of a man's disappearance in Quaddiespace. Also: scary biological weapons, station lockdowns, and mysteries unravelled. We meet Bel Thorne again, former pilot in the Dendarii Free Mercenaries and our favourite hermaphrodite. I liked the contrast of the different cultures: the quaddies' focus on work and the Barrayarans' focus on honour. The stakes are high, and I wasn't prepared for all the tension! Another book with excellent pacing and a good climax.
The Thursday Murder Club (A Thursday Murder Club Mystery #1) by Richard Osman, 388p: I was very curious to read this book, intrigued by the premise of a group of retirees investigating murders. I don't read a lot of mystery nowadays, and this book didn't work for me. I thought it started well, but it got convoluted in the middle, with so many new characters being introduced to the story. It has a unique sense of humour that worked most of the time. The chapters were super short, and the writing style felt choppy to me. It alternates between different points of view in each chapter, and I found it hard to track which characters were active. I didn't like that the plot was clearly manipulating me and steering my attention away from the real clues. I could see when the author was meandering just to fool me. It was not my type of mystery.
from
SmarterArticles

In late February 2026, Perplexity AI quietly published a blog post with a claim that should have set off alarms in every corporate office from London to Los Angeles. The company's new product, Computer for Enterprise, had been deployed internally as a Slack integration, with every employee in the same channel. After processing more than 16,000 queries in four weeks, the system had, by Perplexity's own estimation, completed the equivalent of 3.25 years of human work and saved the company $1.6 million in labour costs. The benchmarks used to measure this output came from institutions including McKinsey, Harvard, MIT, and Boston Consulting Group.
Let that settle for a moment. Not 3.25 years spread across thousands of workers performing marginal speed improvements. The claim is that a single AI platform, running cloud-based workflows across roughly 20 frontier models, replaced years of the kind of cognitive labour that knowledge workers perform every day: querying databases, compiling reports, synthesising research, drafting analyses. The tasks that fill the calendars of financial analysts, marketing strategists, management consultants, and corporate researchers everywhere.
Perplexity's CEO, Aravind Srinivas, framed the ambition with characteristic directness. “What we are going to try to do is help businesses run as autonomously as possible,” he said. On the question of AI displacing jobs, he offered a response that managed to be both provocative and revealing: “The reality is most people don't enjoy their jobs.” His suggestion was that displacement could free people to pursue entrepreneurship and more fulfilling work. It is, to put it mildly, an incomplete answer to a question affecting hundreds of millions of workers worldwide.
To understand why Perplexity's claims matter, you need to understand what Computer for Enterprise actually does. It is not a chatbot. It is not a search engine with a conversational veneer. It is an orchestration platform that routes tasks across approximately 20 AI models from multiple providers, including Anthropic's Claude Opus 4.6 as its primary reasoning engine, Google's Gemini for deep research, OpenAI's GPT-5.2, and xAI's Grok. Each session runs inside its own isolated Firecracker virtual machine, ensuring data separation between users.
The platform connects natively to the software stack that modern enterprises already run: Snowflake, Salesforce, HubSpot, Slack, Notion, GitHub, Gmail, Outlook, and more than 400 other applications through its connector ecosystem. Administrators can install custom connectors via the Model Context Protocol. The system includes workflow templates for legal contract review, finance audit support, sales call preparation, and customer support ticket triage.
Here is the critical capability: Computer for Enterprise does not merely answer questions. It writes the database queries, executes them, and returns structured results. A financial analyst can ask for revenue broken down by vertical from Snowflake, and the system will compose the SQL, run it against the data warehouse, and present the findings. A sales team can simultaneously pull CRM data and competitive context. The AI handles the translation from natural language intent to technical execution and back again, collapsing what might take a human analyst hours into seconds.
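The "intent to execution and back" loop described above can be sketched in a few lines. To be clear, everything below is hypothetical: the function names, the toy in-memory "warehouse", and the keyword-matching planner are stand-ins for what would really be a frontier-model call against the warehouse schema. Nothing here reflects Perplexity's actual implementation.

```python
# Hypothetical sketch of the natural-language -> SQL -> results flow.
# All names are invented for illustration; the "planner" is a trivial stub
# where a real system would prompt an LLM with the database schema.

def plan_sql(request: str) -> str:
    """Stand-in for the model step that turns an analyst's request into SQL."""
    if "revenue" in request and "vertical" in request:
        return "SELECT vertical, SUM(revenue) FROM sales GROUP BY vertical;"
    raise ValueError("request not understood")

def execute(sql: str, warehouse: dict) -> list:
    """Stand-in for running the query against a warehouse (here: a dict)."""
    totals = {}
    for vertical, revenue in warehouse["sales"]:
        totals[vertical] = totals.get(vertical, 0.0) + revenue
    return sorted(totals.items())

def answer(request: str, warehouse: dict) -> list:
    # The platform's job in one line: translate, execute, return structure.
    return execute(plan_sql(request), warehouse)

warehouse = {"sales": [("SaaS", 120.0), ("Retail", 80.0), ("SaaS", 50.0)]}
print(answer("revenue broken down by vertical", warehouse))
# [('Retail', 80.0), ('SaaS', 170.0)]
```

The point of the sketch is the division of labour: the human supplies intent in natural language, and the system owns both the translation into a formal query and its execution, which is exactly where the claimed time savings come from.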
Srinivas described the underlying philosophy on the social media platform X: “When AIs can orchestrate a file system with CLI tools plus a browser, AI essentially becomes the Computer, running things on the cloud as you sleep.” He drew a distinction between traditional operating systems and what Perplexity is building: “A traditional operating system takes instructions; an AI operating system takes objectives.”
The enterprise offering comes wrapped in the security apparatus that corporate procurement teams demand: SOC 2 Type II compliance, SAML single sign-on, audit logs, sandboxed query execution, and GDPR and HIPAA compliance. Pricing runs at $325 per user per month for the Enterprise Max tier, or $40 per user per month for Enterprise Pro. Perplexity's annualised revenue reached approximately $148 million by mid-2025, with internal projections targeting $656 million by the end of 2026.
The company is candid about limitations. Factual hallucinations occur, particularly on niche topics or very recent events. The system occasionally generates broken URLs. External communications, whether emails or published content, should always be reviewed by a human before distribution. But the trajectory is clear, and the implications are staggering.
The question that Perplexity's announcement forces into the open is not whether AI can perform knowledge work. That debate ended sometime around mid-2024, when large language models began consistently demonstrating competence at research synthesis, data analysis, report writing, and code generation. The question now is what happens to the people who currently do this work for a living.
The numbers are sobering. According to Goldman Sachs research, generative AI could automate tasks equivalent to 300 million full-time jobs worldwide, with 26 per cent of office roles and 20 per cent of customer service positions highly exposed. In the United States alone, Goldman Sachs estimates that AI automation will ultimately displace roughly six to seven per cent of the workforce, equivalent to approximately 11 million workers. The World Economic Forum's Future of Jobs Report 2025, drawing on perspectives from more than 1,000 leading global employers representing over 14 million workers, projects that 92 million roles will be displaced by 2030, though it forecasts 170 million new roles emerging for a net gain of 78 million jobs.
McKinsey's analysis adds another dimension. The consultancy estimated that today's technology could, in theory, automate approximately 57 per cent of current U.S. work hours. That figure does not mean 57 per cent of jobs will vanish. It means that across the entire working population, just over half of the hours worked involve tasks that a sufficiently deployed AI system could handle. McKinsey projects that 30 per cent of U.S. work hours could be automated by 2030, accelerated by generative AI's capabilities.
The disruption is already visible in employment data. There were 77,999 AI-attributed tech job losses in the first six months of 2025 alone. Employment in the computer systems design and related services sector declined five per cent since ChatGPT's release. Entry-level job postings dropped 15 per cent year over year. Employment among software developers aged 22 to 25 fell 20 per cent compared to their late 2022 peak. According to research from the Dallas Federal Reserve, AI is simultaneously aiding existing workers and replacing others, with the wage data suggesting a complex and uneven transformation.
Certain roles face particularly acute risk. Data entry positions carry a 95 per cent automation risk. Customer service representatives face 80 per cent risk, because most inquiries are answerable from a knowledge base. Paralegals face an 80 per cent risk of automation by 2026, and legal researchers face a 65 per cent risk by 2027. An estimated 200,000 jobs are expected to be cut from Wall Street banks over the next three to five years, and as much as 54 per cent of banking jobs have high potential for AI automation. SSRN projections estimate that 7.5 million data entry and administrative jobs could be eliminated by 2027.
Seventy-five per cent of knowledge workers are already using AI tools at work, and nearly half started within the last six months. They report 66 per cent productivity improvements. But the question nobody wants to confront directly is this: if each worker becomes 66 per cent more productive, how many fewer workers does an organisation actually need?
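The back-of-envelope arithmetic behind that question is worth making explicit. Holding output constant, a 66 per cent productivity gain means each remaining worker covers 1.66 workers' worth of output, so an organisation needs only about 60 per cent of its old headcount, roughly 40 per cent fewer workers. This is an illustrative ceiling implied by the figure, not a prediction that firms will cut that much:

```python
# Back-of-envelope: if each worker becomes 66% more productive,
# what fraction of the old headcount holds output constant?
productivity_gain = 0.66
headcount_fraction = 1 / (1 + productivity_gain)  # each worker now does 1.66x
reduction = 1 - headcount_fraction

print(f"headcount needed: {headcount_fraction:.0%}")  # ~60% of the old workforce
print(f"potential cut:    {reduction:.0%}")           # ~40% fewer workers
```

In practice firms can also hold headcount and grow output instead; the arithmetic only shows why the productivity statistic and the layoff statistics in the next section are two faces of the same number.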
The corporate world is not waiting for the research to settle before acting. The global technology sector eliminated nearly 60,000 jobs in less than three months of 2026, according to layoff tracker TrueUp, which recorded 171 separate events affecting 59,121 workers since January. That pace, averaging 704 jobs lost per day, is running ahead of 2025, when 245,953 workers were let go across the full year. If it holds, total cuts could reach 265,000 by December. A Resume.org survey of 1,000 U.S. hiring managers found that 55 per cent expect layoffs at their companies in 2026, and 44 per cent identified AI as a primary driver.
Some of the largest names in technology are leading the charge. Amazon confirmed 16,000 corporate job cuts in 2026 despite reporting record revenue of $716.9 billion the previous year, framing the reductions as a push to flatten management layers. Some of those roles are not being backfilled with humans; they are being backfilled with software. Block, the payments company formerly known as Square, slashed 4,000 roles in early 2026, nearly 40 per cent of its entire workforce. Ingka Group, the largest IKEA retailer, announced 800 office role cuts in March.
Perhaps the most instructive example comes from Klarna, the Swedish fintech company. In 2024, Klarna deployed an AI assistant that handled the equivalent workload of 700 full-time customer service employees. The company's headcount fell from approximately 7,000 in 2022 to roughly 3,000, and CEO Sebastian Siemiatkowski publicly championed the results. But the strategy backfired. Customer complaints increased, satisfaction ratings dropped, and internal reviews revealed that AI systems lacked empathy and could not handle nuanced problem-solving. By early 2025, Siemiatkowski acknowledged that the company had overestimated AI's capabilities, stating bluntly: “We went too far.” Klarna began rehiring human customer service staff, specifically targeting students, rural populations, and dedicated product users.
Klarna's reversal is a cautionary tale that speaks directly to the economist Daron Acemoglu's warnings about “so-so automation”: technology good enough to displace workers but not good enough to deliver large productivity gains. The financial savings looked impressive on a spreadsheet, but the technology degraded the quality of the service it was supposed to improve. The question for every organisation evaluating tools like Perplexity's Computer for Enterprise is whether the same pattern will repeat across other domains: impressive benchmarks followed by the slow realisation that human judgement, context, and empathy were doing more work than anyone appreciated until they were gone.
Every wave of technological disruption produces two competing narratives. The optimists point to history: the Industrial Revolution destroyed agricultural and artisan livelihoods but created factory work. The IT revolution eliminated typing pools and filing clerks but created entire industries around software, networking, and digital services. The pessimists counter that this time is different, that the pace and breadth of AI's capabilities outstrip anything that came before.
History offers both comfort and caution. During the first Industrial Revolution, the Luddites famously destroyed the mechanised looms that threatened their livelihoods in industrial Britain. Their fears were not irrational. While new manufacturing jobs eventually emerged, the transition period was brutal. Research from economic historians shows that average real wages in England stagnated for decades even as productivity rose. Eventually, wage growth caught up to and then surpassed productivity growth, but only after substantial policy reforms including labour protections and education acts.
The Second Industrial Revolution followed a similar pattern. Automation technologies increased the efficiency and scope of mechanised production, requiring fewer operators but more engineers, managers, and other new occupations. As automation created fewer middle-skill jobs than it made obsolete, the result was a hollowing out of the skill distribution in manufacturing, a pattern that persists to this day.
The robotics wave of the 1970s and 1980s displaced approximately 1.2 million manufacturing jobs globally by 1990. In the United States alone, robot-induced automation displaced 300,000 factory workers in the automotive sector. New jobs did eventually appear, but they required different skills, existed in different locations, and often paid different wages.
McKinsey's historical analysis offers a striking statistic: 60 per cent of today's U.S. workforce is employed in occupations that simply did not exist in 1940. That is genuinely encouraging. But it also means that 60 per cent of today's workers are in roles that their grandparents could not have trained for, because the jobs had not yet been invented. The lag between destruction and creation is where the human cost concentrates.
What makes the AI wave qualitatively different from previous automation episodes is its target. Earlier forms of automation primarily replaced physical labour and routine cognitive tasks: drilling, sewing, sorting files, calculating spreadsheets. AI encroaches on non-routine cognitive domains once thought uniquely human, including recognising images, drafting emails, drawing illustrations, synthesising research, and making complex judgements. The Bipartisan Policy Center in Washington notes that AI is different because it can automate many tasks that do not follow an explicit set of rules and are instead learned through experience and intuition.
The pace compounds the challenge. Previous technological transitions unfolded over generations, allowing social institutions to adapt. The shift from agricultural to industrial employment in the United States took roughly a century. The transition from manufacturing to services took several decades. AI capabilities are advancing on a timeline measured in months. Goldman Sachs models show that each one percentage point productivity gain from technology raises unemployment by approximately 0.3 percentage points in the short run, though this effect historically fades within two years.
The distributional question matters enormously. The World Economic Forum's net positive headline of 78 million new jobs conceals what the organisation itself acknowledges is a profound distributional challenge: the jobs being destroyed and the jobs being created are not the same jobs, do not require the same skills, do not pay the same wages, and are not located in the same geographies.
Entry-level and young workers are bearing the brunt. AI can replicate codified knowledge but not tacit knowledge, the experiential understanding that comes from years of practice. This means AI may substitute for entry-level workers while augmenting the efforts of experienced professionals. Fourteen per cent of all workers report having already been displaced by AI, with the rate higher among younger and mid-career workers in technology and creative fields. Unemployment among 20 to 30 year olds in tech-exposed occupations has risen by almost three percentage points since the start of 2025, according to Goldman Sachs data, notably higher than for their same-aged counterparts in other trades.
There is also a significant gender dimension. In the United States, 79 per cent of employed women work in jobs that are at high risk of automation, compared to 58 per cent of men. That translates to 58.87 million women versus 48.62 million men occupying positions highly exposed to AI automation.
White-collar workers in industries such as financial services and media now express higher levels of concern about automation (67 per cent) than their counterparts in blue-collar sectors, including transportation (60 per cent) and retail (59 per cent). The traditional assumption that automation primarily threatens manual and routine work has been comprehensively upended. AI poses a risk of eliminating 10 to 20 per cent of entry-level white-collar jobs within the next one to five years.
The irony is sharp. Knowledge workers spent decades insulating themselves from automation risk by acquiring education, developing analytical skills, and moving into roles that required judgement and communication. Now the very capabilities they cultivated (research synthesis, data analysis, report writing, pattern recognition) are precisely what large language models do best.
Not all economists agree on the magnitude of the disruption. Daron Acemoglu, the Nobel Prize-winning economist and Institute Professor at MIT, offers one of the most rigorously evidence-based counterpoints to the prevailing AI hype. Despite predictions from some quarters that AI will dramatically boost GDP growth, Acemoglu expects it to increase U.S. GDP by just 1.1 to 1.6 per cent over the next decade, with a roughly 0.05 per cent annual gain in productivity. He believes current AI tools are likely to impact only about five per cent of jobs.
Acemoglu's central concern is what he terms “so-so automation,” technologies that replace jobs without meaningfully boosting productivity or human welfare. “When hype takes over, companies start automating everything, including tasks that shouldn't be automated,” he has warned. “You end up with no productivity gains, damaged businesses, and people losing jobs without new opportunities being created.” Think of self-checkout kiosks that are slower and more frustrating than human cashiers, or automated customer service menus that leave callers trapped in loops of increasingly desperate button-pressing.
His prescription is pointed: “We're using it too much for automation and not enough for providing expertise and information to workers.” He draws a crucial distinction between AI that provides new information to a biotechnologist, helping them become more effective, and AI that replaces a customer service worker with an automated system. The former creates value; the latter merely transfers costs from employer to consumer.
Acemoglu acknowledges that AI will transform many occupations but remains sceptical of elimination claims: “I don't expect any occupation that we have today to have been eliminated in five or 10 years' time. We're still going to have journalists, we're still going to have financial analysts, we're still going to have HR employees.” What will change, he argues, is the task composition within those roles, with AI handling data summary, visual matching, and pattern recognition while humans focus on judgement, creativity, and interpersonal skills.
Gartner's projections align with this more measured view, predicting that AI's impact on global jobs will be neutral through 2026, and that by 2028, AI will create more jobs than it destroys. But neutral aggregate impact can still mask severe disruption for specific communities, industries, and demographics.
Organisations are responding with a mixture of enthusiasm and anxiety. According to the World Economic Forum, 41 per cent of employers globally plan to use AI to reduce headcount, while simultaneously 77 per cent aim to upskill their staff for working alongside AI, and 47 per cent plan to move affected employees into different roles internally. About one in six employers expect AI to reduce headcount in 2026 specifically.
The skills gap is already the most significant barrier to business transformation, with nearly 40 per cent of skills required on the job set to change and 63 per cent of employers citing it as their key challenge. The number of workers in occupations where AI fluency is explicitly required has risen from around one million in 2023 to approximately seven million in 2025, according to McKinsey data. Across McKinsey's most recent global survey, 94 per cent of employees and 99 per cent of C-suite executives report personal use of generative AI.
Companies are pursuing several adaptation strategies simultaneously. Some are integrating AI with their proprietary data via retrieval-augmented generation or fine-tuning, creating what Goldman Sachs describes as expert AI systems with advanced capabilities and industry-specific knowledge. Others are restructuring roles around human-AI collaboration, keeping the human in the loop for judgement calls, client relationships, and strategic decisions while delegating research, analysis, and first-draft creation to AI systems. According to a PwC survey of 300 senior executives conducted in May 2025, 88 per cent said their team or business function plans to increase AI-related budgets in the next twelve months due to agentic AI, while 79 per cent reported that AI agents are already being adopted in their companies.
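The retrieval-augmented generation pattern mentioned above can be sketched in a few lines: retrieve relevant passages from a proprietary corpus, then prepend them to the prompt so the model answers from company data rather than from its training set alone. This is a toy sketch, not any vendor's implementation: the keyword-overlap scorer stands in for the embedding search real systems use, and all names and the sample corpus are invented for illustration.

```python
# Minimal RAG sketch: rank passages by word overlap with the query,
# then assemble a grounded prompt. Production systems replace the
# scoring with vector-embedding similarity search.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus passages by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt: context passages first, question last."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Q3 revenue rose 12 percent on enterprise subscriptions.",
    "The cafeteria menu changes every Tuesday.",
    "Enterprise churn fell to 4 percent in Q3.",
]
print(build_prompt("What happened to enterprise revenue in Q3?", corpus))
```

The design point is the one in the paragraph above: the model itself is unchanged; the organisation's advantage comes from controlling which proprietary context is surfaced into the prompt.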
The retraining challenge, however, is formidable. The half-life of professional skills is collapsing faster than any training programme can keep pace with. A displaced worker who enrols in an eighteen-month data analytics programme may find that entry-level positions in that field have already been automated by graduation. Nobel laureate Angus Deaton has noted that economists were naively optimistic about the effectiveness of trade adjustment assistance, including worker retraining programmes, for those hurt by previous economic shifts. The track record of large-scale retraining initiatives is, at best, mixed.
PwC's own research underscores a deeper challenge: technology delivers only about 20 per cent of an initiative's value. The other 80 per cent comes from redesigning work so that AI agents can handle routine tasks and people can focus on what truly drives impact. That redesign requires not just new software licences but fundamental rethinking of roles, workflows, and organisational structures. It is the kind of transformation that most companies talk about but few execute well.
The policy conversation is struggling to keep pace with the technology. In early 2026, U.K. Minister for Investment Lord Jason Stockwood told the Financial Times that the government is weighing the introduction of a universal basic income to support workers in industries where AI threatens displacement. “Undoubtedly we're going to have to think really carefully about how we soft-land those industries that go away,” he said, “so some sort of UBI, some sort of lifelong learning mechanism as well so people can retrain.” He has also floated the idea of technology companies being taxed to fund such payments.
The UBI discussion has shifted from theoretical curiosity to practical policy consideration. Ioana Marinescu, an economist at the University of Pennsylvania, has argued that UBI could be a pragmatic solution to AI-driven job displacement, particularly given the uncertainties around how many people will lose their jobs and for how long. For people without prior employment history, especially younger workers entering the labour market for the first time, unemployment insurance benefits are not guaranteed, making unconditional UBI payments a potentially effective safety net.
The idea has precedent. According to the Stanford Basic Income Lab, 163 programmes piloting basic income, including 41 active programmes, have been run in the United States alone. Ireland's Basic Income for the Arts programme, which began as a three-year pilot, will become permanent in 2026, allowing creative workers to pursue their craft without needing supplementary employment.
Researchers at the London School of Economics argue that UBI's successful implementation depends on sustainable funding mechanisms, investment in education, and attention to social and psychological dimensions, not only economic and labour market outcomes. The question of funding remains contentious. In 2017, Bill Gates proposed taxing robots, suggesting that companies replacing human workers with automation should pay taxes at levels comparable to the people they displace. The concept of an AI automation tax is gaining traction as a revenue source where automation's economic benefits help support those most affected by the transition.
Morgan Stanley noted in a report in early 2026 that AI-related job cuts are hitting Britain the hardest, with eight per cent net job losses over the preceding twelve months. The United States currently has no comprehensive labour transition strategy, no reskilling infrastructure capable of operating at the required speed, and no serious public conversation about income decoupled from employment.
Some analysts advocate for integrated approaches: AI-enabled personalised retraining pathways, job matching to emerging sectors, and combining UBI with reskilling initiatives, education grants, and healthcare services. Policymakers are urged to prioritise pilot programmes that integrate income support with workforce development, leveraging AI itself to optimise distribution and measure impact.
The fundamental tension at the heart of this story has no clean resolution. Perplexity's Computer for Enterprise represents a genuine productivity breakthrough. If knowledge workers can accomplish in seconds what previously took hours, the economic potential is enormous. Organisations that adopt these tools will move faster, spend less on routine analysis, and free their best people to focus on the creative and strategic work that AI still handles poorly.
But the maths of productivity improvement and the maths of employment are not the same calculation. When Srinivas says he wants to help businesses run as autonomously as possible, he is describing a world with fewer employees. When Perplexity's internal study shows 3.25 years of work completed in four weeks, it is demonstrating that the same output can be achieved with a fraction of the human input. When 75 per cent of knowledge workers report using AI and seeing 66 per cent productivity gains, the logical endpoint is that organisations need significantly fewer knowledge workers to produce the same volume of output.
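The arithmetic behind those compression claims is worth making explicit. A quick sketch, assuming a 52-week work-year and output scaling linearly with the reported gains (both simplifying assumptions of mine, not figures from the studies):

```python
# Rough arithmetic behind the compression claims above.
# Assumption: a work-year of 52 weeks.
work_years = 3.25
weeks_elapsed = 4
compression = work_years * 52 / weeks_elapsed
print(f"{compression:.0f}x")   # ~42 weeks of work per calendar week

# If 75% of workers see a 66% productivity gain, average output
# per worker across the whole workforce rises by roughly half:
avg_gain = 0.75 * 0.66
print(f"{avg_gain:.0%}")
```

Even allowing generous error bars on the survey numbers, the direction is unambiguous: the same output requires markedly fewer worker-hours.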
The World Economic Forum projects a net positive outcome globally, with new job categories emerging to replace those that disappear. History suggests this is likely correct over sufficiently long time horizons. But the transition period, the years between when old jobs vanish and new ones coalesce, is where lives are disrupted, careers are derailed, mortgages go unpaid, and communities fracture. Klarna's experience is a reminder that even the companies most aggressively pursuing AI-driven efficiency can discover, too late, that they have optimised away something essential.
Acemoglu urges a more deliberate approach: deploying AI to augment human capabilities rather than simply replacing human workers, celebrating what he calls “the places where AI is better than humans, and the places where humans are better than AI.” Given the mixed evidence on benefits and drawbacks, he and his colleagues argue that it may be best to adopt AI more slowly than market fundamentalists might prefer.
That counsel of patience, however, runs headlong into competitive reality. No company can afford to ignore a technology that promises to compress years of work into weeks, not when their competitors are already adopting it. The individual incentive to automate is overwhelming, even if the collective consequence is displacement on a scale that existing social safety nets were never designed to absorb.
Srinivas outlined an AI evolution on LinkedIn: “2023: Using AI to research. 2024: Super prompting galore. 2025: AI remembers you. 2026: Agents are useful (and not just to vibe coders).” He added that intelligence is no longer the bottleneck; what matters now is knowing which model to call, what context to surface, and when to act versus ask a follow-up question.
For the millions of knowledge workers whose professional identity is built on exactly those skills (research, analysis, synthesis, and communication), the message is unsettling. The tools that made their expertise valuable are now embedded in software that costs $325 per month and never sleeps. The question is not whether the transformation will happen. It is whether societies will manage the transition with anything approaching the speed, scale, and seriousness that the moment demands. Based on every previous technological transition in recorded history, the honest answer is: probably not fast enough.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary: * Listening to the middle innings of an MLB game, and enjoying the evening of a day which has been more relaxed than yesterday. When this game ends I'll wrap up the night prayers, maybe lay out material for Wednesday's planned chores, then head to bed at a reasonably early time.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 227.74 lbs. * bp= 146/84 (65)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 05:50 – 1 McDonald's Big Arch sandwich * 07:10 – crispy oatmeal cookies * 13:00 – beef chop suey, fried rice * 16:05 – garden salad, fried chicken livers and hearts, small chocolate milkshake
Activities, Chores, etc.: * 04:00 – listen to local news talk radio * 05:00 – bank accounts activity monitored * 05:20 – read, write, pray, follow news reports from various sources, surf the socials, nap * 13:00 to 14:00 – watch old game shows and eat lunch at home with Sylvia * 14:15 – Prayerfully read the Daily Mass Proper for Tuesday of Holy Week, March 31, 2026, according to the 1960 Rubrics. * 15:30 – watching Intentional Talk on MLB Network * 17:00 – “It's baseball time in Texas.” – tuned in to 105.3 The Fan, the radio home of the Rangers, for the Pregame Show then the call of the MLB game between the Texas Rangers and the Baltimore Orioles.
Chess: * 15:30 – moved in all pending CC games
from
Space Goblin Diaries
Big milestone this month: the game now has one path where I can play all the way from start to finish without encountering any placeholder text.
A complete playthrough took me about 45 minutes, but that was with me skim-reading the text and not taking any time to think about the decisions, so someone playing it “for real” would probably take an hour or more.
After a long time with my head buried in individual chapters, this is the first time I've been able to step back and see what a playthrough of the game as a whole is going to be like...and I think overall I'm pretty happy with it! The gimmick of the whole thing being narrated by the villain works well, and the overall structure of the game is satisfying. There are lots of things wrong with it, but they're all things I can fix.
In particular I think the game could be a bit longer, but I think the way to fix that is to make the individual chapters longer rather than add more chapters to an individual path. I also want to add more puzzle-type content, so the player has to think harder to come up with the correct solution. (As I mentioned in January's dev diary, my new way of handling failure means I can be less merciful.) But there will be multiple solutions to at least some of the puzzles, as I want to strike a balance between making you work out the correct solution and letting you roleplay your space hero.
My plan now is to write the whole rest of the game to the same first-draft standard as this path. Then, once the structure is in place, I can go through and make the individual chapters actually good.
Can our hero complete a first draft of the entire game, or is his confidence misplaced? Find out in next month's exciting developer diary!
#FoolishEarthCreatures #DevDiary
from drpontus
I was invited to speak at The Global Education Conclave 2026, hosted by CGC University in Mohali, which gathered 120+ delegates from 60+ nations under the theme “EduVerse 2050: Rethinking Global Academia for a New Human Epoch.”

This is a written version of my main talking points, edited after the conference. The text therefore contains both the narrative of my talk and reflections from the actual events and meetings during these intense days in Mohali, India.
These threads weave together a coherent narrative: the future of higher education cannot be outsourced to opaque, profit‑driven, monocultural LLM-based platforms. It must remain a public good, rooted in critical thinking, cultural pluralism, and open scholarship free from commercial gatekeepers.
The conclave was unusual in the best possible way: diplomats alongside scholars with different perspectives on peace-building. It was very interesting to hear voices that outnumbered traditional US and Western European perspectives by a wide margin. That composition mattered. It shaped what got said – and what I learned.
My background is in AI and information technology. I have a Master's in Cognitive Science and a PhD in Computational Linguistics with a focus on interactive AI. I have spent 25 years putting AI technologies into use, both as a practitioner and as a researcher. You might expect me to be an enthusiastic advocate for initiatives like Gemini for Students or ChatGPT Education. I am not, and I want to explain why – carefully, because the argument matters.
My point was not that everything the “AI” umbrella covers is bad. AI as a field is far larger than LLMs and has been developing for at least 70 years with a multitude of approaches.
Instead, I wanted to point out something more uncomfortable: that the products currently being sold to our higher education institutions under the name “AI” are being systematically misdescribed, that the people selling them know this, and that students are ultimately the ones who will pay the price.

The problem begins with the word “intelligence.” When a company calls a product “artificial intelligence”, we fill in the gap with a meaning we already understand. Intelligence: the capacity to reason, to understand, to form genuinely new ideas. That is what the word means to us. It is not what it means in the products currently being labeled AI. This is not a subtle distinction. It is a central misconception – and in the context of institutional adoption, it is closer to actual deception.
Now, LLM systems are technically large statistical models trained on enormous quantities of human-produced text. Text that was written by humans, for humans to read. An LLM learns the probability distributions of word (token) sequences. When given a prompt, it samples from those distributions to produce a plausible-looking continuation. That is the mechanism. Entirely. There is no reasoning. There is no understanding. It is pattern completion at massive scale.
The word “generative” has the same problem. In plain language it sounds like creativity, like something new being made. In the actual mathematical sense, generative only means the model approximates a distribution and samples from it. It cannot reach outside what it has seen. It interpolates and recombines within learned boundaries, and it does that with impressive fluency. But fluency is not understanding. When a model produces a coherent-looking summary of a historical argument, it has not understood the argument. It has produced a statistically plausible reconstruction of what a summary of that kind of argument tends to look like. It cannot tell you what the argument gets wrong. It does not know when it is outside its competence – which is why it fabricates citations and hallucinates facts with complete confidence.
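The mechanism described above can be illustrated with a toy model. This bigram sampler is of course many orders of magnitude simpler than an LLM, but it makes the structural point exactly: “generative” means learning a distribution over token sequences and sampling from it, with no access to meaning or truth.

```python
# Toy illustration of "generative" as sampling from a learned
# distribution. The model can only recombine transitions it has seen;
# it has no notion of meaning, truth, or anything outside its corpus.
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows: dict, start: str, n: int) -> str:
    """Sample a plausible-looking continuation; no understanding involved."""
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

random.seed(0)
model = train_bigrams("the model learns the text the model repeats")
print(generate(model, "the", 5))
```

Every word the toy emits was already in its training text, in an already-seen transition. Scale that up by twelve orders of magnitude and you get fluency, not comprehension.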
The people building these systems know this.
The people selling them to our institutions and universities also know this.
The framing of “AI” as intelligence, as reasoning, as a thinking partner, is a marketing decision. And that marketing decision is now shaping academic policy at institutions that are supposed to be built on precision, source criticism, and rigorous thought.
When the conversation turns to “AI in education,” it is framed as if we were discussing a broad and open category of tools. We are not. In practice, we are talking about a handful of commercial services from OpenAI, Anthropic, Google, and Microsoft. These are not education companies. They are among the largest commercial platform companies in history, headquartered in the United States, operating under US legal frameworks (like the CLOUD Act, for example), with business models built on lock-in, data accumulation, and scale. When a university integrates one of these services into its learning management system, it hands a portion of the university's knowledge infrastructure to a commercial actor whose systems cannot be audited, whose behavior cannot be reliably predicted, and whose terms of service reserve the right to analyze behavioral metadata regardless of what the headline privacy promises say.
There is a structural problem here. These models are optimized for English and an American textual culture. When millions of students at thousands of institutions worldwide are using the same two or three closed models to research, summarize, and draft, the result is a global homogenization of what knowledge looks like – and that homogenization flows outward from a single cultural center. This point landed hard in the conclave’s multicultural context, and rightly so.
The conclave's composition – delegates from across Africa, Asia, the Middle East, and Latin America – foregrounded what is usually politely left aside in Western discussions of EdTech adoption: these tools were not built for most of the world's students, do not reflect most of the world's intellectual traditions, and the people doing the low-wage annotation work that makes them function are typically from the Global South and benefit from them the least.
One of the most incisive points raised in my panel, where I sat next to two esteemed fellow panelists from Ethiopia and Nigeria, was the urgent need for local models, local data infrastructure, and local governance. The reason is simple: contemporary models carry very little meaningful context for the majority of their global users. This is a structural failure.
The researchers and educators who used to determine what counts as rigorous analysis are being gradually displaced by the probability weights of commercial systems optimized for plausibility, owned by companies optimized for growth.
Universities stand for open science, source criticism, and reproducibility. We risk building pedagogy on closed, non-replicable statistical systems that we cannot scrutinize and did not choose on educational grounds. The pressure to adopt these tools combines three forces: fear of being seen as behind, funding tied to adoption, and the absence of organized faculty resistance at the moment decisions were made.
None of those forces is an educational reason. And this is happening at a moment when higher education is already under attack from populist movements that question its value, its legitimacy, and its purpose. The Palestinian ambassador's framing – “education as resistance” – was not just a slogan. In a room representing 60 nations, many of them navigating serious political pressure, it summarizes what is at stake. Surrendering the epistemic foundations of universities to unauditable commercial systems is not a neutral administrative choice. It is a capitulation at exactly the wrong time.
Three positions:
First, demand real technical literacy before adoption. Before your institution deploys any of these tools in a learning context, someone with genuine technical knowledge – not a vendor representative – should be able to answer in plain language: what does this system actually do? What are its known failure modes? What data does it collect, and what do the actual terms of service say? If those questions cannot be answered clearly, adoption should wait.
Second, protect the process. Design assessment for process visibility. Oral examinations. Iterative drafts with documented revision. In-person discussion of written work. Assignments that require engagement with specific sources a model cannot have accessed. These are pro-learning positions, and we know they produce the outcomes education exists to produce.
In the panel I offered: “You do not send a robot to the gym to do the lifting for you. The friction and struggle are the point. An LLM service, used without reflection, is the direct opposite of that. It removes the resistance that builds intellectual capacity – and it makes students and scholars dependent in the process. Reading deeply and discussing even more deeply is what matters. That has not changed.”
Third, say out loud what you actually think. There is enormous pressure in academic institutions to perform enthusiasm for these tools, or at minimum to avoid being publicly critical. Push back on that pressure. When adoption decisions are being made in your departments, show up and say clearly what the evidence says and what your professional judgment is.
The companies selling these products are extremely loud. Educators and guardians of knowledge and critical thinking need to be louder.
We are being pushed toward a version of higher education where knowledge is a product to be delivered, learning is a transaction to be optimized, and the university's role is to credential people who have learned how to prompt proprietary AI services. That is not higher education.
What happens next will not be determined by what OpenAI or Google builds. It will be determined by what you decide to defend — in your classrooms, your departments, your institutions.
Several delegates cited Nelson Mandela’s point that education is the most powerful weapon for changing the world. He was right. But such weapons require the person holding them to have judgment, skill, and the strength built from genuine effort. That strength does not come from outsourcing your thinking to machines. It comes from doing the intellectual work yourself.
The wisdom is already in our culture, in novels among other places. Frank Herbert wrote this in Dune, in 1965(!):
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
The Global Education Conclave 2026 was held at CGC University, Mohali, India. My specific panel addressed the intersection of AI technology, pedagogical integrity, and global educational sovereignty.
Thank you to the wonderful organizers and CGC University Mohali for creating this international platform for conversation.

from
Kroeber
Back to work in six days. I still haven't fully recovered from the muscle tear in my leg or from the shoulder problem, but I'm going back. Beyond the physical effort, it will cost me four hours a day (one hour of sleep and three hours on public transport) on the days I go into the office. Otherwise, though, it will do me good to get out of the house, to have a rhythm and concerns beyond my physical recovery. Even having less time will give me the chance to learn to manage it better. There it is: it's so easy to regurgitate positive talk without being in tune with the words coming out.
from
Kroeber
It's harder to fall asleep if the day was empty. Perhaps, when you feel a day was worthwhile, you're more eager for the next one to come. Or some hope remains that something will save the day: a sentence read, a talk on YouTube, a paragraph written. Insomnia is a symptom, but of what?
from Tuesdays in Autumn
Reading for me tends to be a thing of feasts and famines, done in fits and starts. While much of this month has been a dry spell, I did finish a book on Friday: Debit and Credit, a slim, early-'70s collection of poems by the Sicilian author (and 1959 Nobel laureate) Salvatore Quasimodo, in translations by Jack Bevan.
From 'Only If Love Should Strike You':
...do not forget
to be animal, fit and sinuous,
torrid in violence, wanting everything here
on earth, before the final cry
when the body is cadence of shrivelled memories
and the spirit hastens to the eternal end:
remember that you can be the being of being
only if love should strike you right in the bowels.
It's a very short book, but a nourishing one, and it felt like it did me good.
Lately I've been listening to and enjoying an increasing amount of what might be termed 'jazz for the elderly, by the elderly'. For instance, when they recorded their wonderful album Jasmine, Keith Jarrett's and Charlie Haden's combined age was about 130. Charles Lloyd was an impressive 85 when recording his record The Sky Will Still Be There Tomorrow. And Carla Bley was in her mid-70s when her album Trios was made, with neither of her bandmates a spring chicken either (this one is a new addition to my collection, arriving on Friday). Moderate tempos predominate, with reflective and nostalgic moods the norm. I can certainly see myself getting more of this kind of thing.
Cheese of the week has been Fourme d'Ambert. I'd been recommended it a few months ago but hadn't spotted any until a visit to Madame Fromage in Abergavenny on Saturday. Creaminess and 'earthiness' in a cheese are characteristics I particularly prize, and this one has both in equitably balanced abundance. I suspect my piece may be verging on maximum ripeness. Amid its rich blend of mild flavours I can sometimes discern an intriguing anise-like note.