from Roscoe's Quick Notes

Yankees vs Royals

Flexibility will be important as I choose which of this Saturday's many sporting events to follow. Scattered heavy rain and thunderstorms across Texas and much of the United States may impact not only the scheduled games but also my ability to pull in a strong enough signal to follow them.

That having been said, the MLB game between the New York Yankees and the Kansas City Royals will be the first game on today's agenda. Its scheduled start time of 12:35 PM Central Time will leave plenty of time if I need to make other choices.

And the adventure continues.


from Patrimoine Médard bourgault

Some of Médard Bourgault's works bear titles that, read today, can provoke a reaction. La naissance d'une race ("The Birth of a Race"). L'ébauche d'une race ("The Sketch of a Race"). The word "race" jars. That is a fact, and it would be naïve to ignore it.

But the reaction a word provokes in 2025 says nothing about what that word meant in 1930.

In Médard Bourgault's day, the expression "race canadienne-française" (the French-Canadian race) designated a people, a cultural continuity, a collective identity. It was not racial vocabulary in the contemporary sense of the term; it was the ordinary vocabulary of the French-Canadian nationalism of the period, used by writers, priests, politicians, and newspapers. Changing the title of a work to erase that word is to substitute our sensibility for historical reality. That is not a correction. It is a falsification.

A title is not a label

A title is not a secondary element that can be adjusted without consequence. It is part of the work just as much as its form, its material, its date. It documents the creator's intention, his era, his vocabulary. Art historians, archivists, and museum institutions treat it as primary data, not as promotional copy to be revised as circumstances demand.

To modify a title is not to explain a work. It is to alter its meaning. The work consulted fifty years from now will no longer be quite the one Médard Bourgault created; it will be a version corrected by people who found the original uncomfortable.

What we actually risk

The strongest argument against modifying titles is not moral; it is practical.

If we accept the principle that titles can be changed when they disturb, we no longer have any stable criterion for deciding where to stop. What disturbs today will be modified. What disturbs tomorrow will be modified in its turn. In ten years, in twenty, other sensibilities will prevail, and they will be applied to the same works, with the same logic. The result is not a preserved heritage. It is a heritage in permanent revision, which ends up reflecting not the era in which it was created but the successive preoccupations of the eras that followed.

That is exactly the opposite of what conservation does.

The solution already exists

This is not a choice between preserving and explaining. We can do both. Serious institutions do it constantly: they keep the work in its original state and provide the context needed to understand it. A label, an interpretive note, a visitor's guide: these are the tools that exist precisely for this purpose. They make it possible to engage with complexity without touching the work itself.

To explain a word is to restore its meaning. To replace it is to give up on understanding it, and to ask the public to do the same.

What this says about an institution

An institution that modifies the titles of works to avoid difficult questions is not protecting its public. It is taking away the public's ability to understand. It treats visitors as people incapable of receiving contextualized information, as if the past had to be filtered before being shown to them.

That is a condescending posture. And applied to heritage, it is a posture with irreversible consequences.

Conclusion

The titles of Médard Bourgault's works must be kept in their original form. Not out of indifference to the present, but because that is the only way to transmit faithfully what was created. The role of a place of memory is not to make the past comfortable. It is to make it comprehensible.

Those are two very different things.

Raphaël Maltais Bourgault


from Patrimoine Médard bourgault

Why the question arises today

We live in an era in which the representation of the female body has become political terrain. #MeToo changed the way we look at power relations between men and women. Advertising, cinema, social media: everywhere, people have begun to ask who represents women's bodies, how, and in whose interest. It is a real movement, and it has produced necessary awakenings.

In that context, walking into a place that exhibits sculptures of nude women, created by a man at the beginning of the twentieth century, can provoke a reaction. That is normal. The gaze we bring to a work is never neutral. It arrives loaded with everything we have lived, read, and learned.

But a reaction is not an analysis. And that is where things get important.

Because to look at Médard Bourgault's nudes only through the lens of 2025 is to look at the wrong object. These are not advertisements. They are not images produced to sell something or to gratify a male gaze. They are the work of a self-taught sculptor from Saint-Jean-Port-Joli, who carved wood in a context where showing these sculptures cost him something, socially and religiously. He sometimes hid them. He made them anyway.

An anachronism disguised as progress

To question Médard Bourgault's nudes by contemporary criteria is to judge a work of the past as if it had been created today, and to reproach it for failing to conform. That is an anachronism. It says nothing about the work. It says something about us.

The nude runs through the entire history of Western and non-Western art. It has served to represent beauty, dignity, the fragility of the human body, and its presence in the world. To reduce that tradition to a logic of objectification is to radically impoverish what we are looking at, and to deprive ourselves of the capacity to understand it.

In Médard Bourgault's work, the nude belongs to a formal inquiry: balance, mass, the truth of the body carved in wood. It is not an ideological posture. It is a sculptor's craft.

An art form that does not age

There is also something here that goes beyond Médard Bourgault, and beyond his era.

The nude may be the oldest and most constant form in the history of art. From the prehistoric Venus figures carved 30,000 years ago to Greek sculpture, from Michelangelo to Rodin, from Auguste Renoir to Louise Bourgeois, the nude human body has crossed every century, every culture, and every artistic movement without ever disappearing. Not because artists sought to shock or provoke, but because the body is the most universal human experience there is. Everyone has one. Everyone ages in one, suffers in one, loves in one. To represent it is to speak of something no one can deny.

That is why the nude withstands time in a way few other artistic subjects can claim. Fashions change, ideologies pass, sensibilities shift, and the nude is still there, still relevant, still able to move someone seeing it for the first time. It is not indecency that survived despite censorship. It is an art form that survived precisely because it says something true about what it is to be human.

What is remarkable about Médard Bourgault is that he reaches this same truth without academic training, without having attended the great fine-arts schools, without having seen the masterpieces of the Western tradition up close. A man carving wood in a Quebec village at the beginning of the twentieth century arrives at the same place as the great sculptors of history: the human body as fundamental subject, as a site of beauty and truth. That does not diminish his work. It reveals its reach.

To want these sculptures gone from a place of memory is to cut that place off from the longest and deepest current in the history of art.

A detail we always forget

Médard Bourgault himself had to hide some of his sculptures. The religious and social environment of his era imposed strict limits on what could be shown. These nudes therefore existed in a space of tension, sometimes concealed, rarely displayed openly. That was not provocation. It was a space of freedom, wrested from real constraints.

There is something else to understand. The representation of the human body is not a detail in Médard Bourgault's work; it is its foundation. It was through the body that he sought beauty, balance, and the truth of human form. To remove these sculptures, or to hide them, protects no one. It amputates the work of what constitutes its heart.

And there is an irony here that cannot be ignored. For some people, women in particular, seeing the female body represented with dignity and care, as the subject of serious artistic inquiry rather than an object of consumption, is precisely the opposite of an offense. It is a form of recognition. To erase these works in the name of protecting such viewers is to take something from them without asking their opinion.

Censorship does not protect everyone in the same way. It decides on people's behalf what they are capable of seeing.

Today, in the name of new sensibilities, we are asked to do exactly what the Church did in its day: remove or tone down these works. The mechanism is identical. Yesterday it was religion that imposed the removal. Today it is another form of orthodoxy. In both cases, it is not the work that changes; it is the gaze we seek to impose on it.

Censoring these sculptures today is not progress over what Médard lived through. It is a repetition.

What it really costs

When an institution removes or tones down a work to avoid controversy, it is not protecting its public. It is taking something from it: the possibility of encountering a complex reality and coming away with a finer understanding of the world and of history.

A heritage site is not there to make the past comfortable. It is there to make it comprehensible. Those are two very different missions. The first consists of filtering. The second consists of explaining, contextualizing, and providing the tools to understand what one is looking at, without touching the work itself.

A well-written label does that work. A visitor's guide does it. An interpretive note does it. None of these tools requires modifying or hiding anything.

The rule that applies to all heritage

A heritage made inoffensive is often a heritage emptied of its meaning. Works that still disturb after a century disturb because they touch something real: a tension, a truth, a complexity that has not gone away. That is precisely why they deserve to be transmitted intact.

If we accept the principle that a work can be modified or removed when it causes discomfort, we no longer have any stable criterion. What disturbs today will be censored. What disturbs in twenty years will be censored in its turn. The result is not a protected heritage; it is a heritage in permanent revision, which ends up bearing witness not to the era in which it was created, but to the successive sensibilities of those who managed it afterward.

Conclusion

Médard Bourgault's nudes must be presented in their original form. Not because the public's comfort is unimportant, but because the mission of a place of memory is transmission, not the management of discomfort.

Understanding a work takes effort. The work is not obliged to simplify itself to be accepted. It is the gaze that must adjust to grasp its meaning. And it is precisely the role of a heritage site to make that adjustment possible, with context, interpretation, and rigor.

Raphaël Maltais Bourgault


from An Open Letter

I had a rooftop barbecue and hot tub event with a friend, and L brought her sister, and her sister for some reason is just such a massive dick towards me specifically, it feels like. There was only one other guy there, and he didn't really interact with her, but she was disproportionately rude to me: making comments about how people must not have liked me over something completely unrelated, insulting the random playlist playing on my speaker by calling my music elevator music, being excessively pedantic with rhetorical questions. When I got up out of the water after jumping into the pool, I heard her call me a fat ass, along with several other consistent negs, it felt like. I don't know what this girl's problem is, because her sister is nice, but she is just such a fucking dick, and I'm pretty confident it's not a signal towards me; it's not a reflection on my behavior so much as on her. No one else, not even her sister, joined in with her, and other people kind of defended me at different points. But overall just fucking weird from her.


from POTUSRoaster

Hello again. I hope you had a good Easter or Passover or other religious celebration of your choice.

Since the start of the unprovoked war with Iran, POTUS has told the country that the reason for the conflict was Iran's intention to acquire nuclear weapons, which could not be allowed.

We know that the majority of the nuclear material Iran needs to build bombs is at a place called Pickaxe Mountain. This facility is so deep in the mountains of central Iran that no bunker-buster bomb in the American arsenal is powerful enough to destroy it. In spite of POTUS's claims that Iran's ability to create a bomb was destroyed almost a year ago, nothing is further from the truth. Of course, everyone knows that nothing could be further from POTUS than the truth.

While we know that Iran has the ability to deliver ordnance to its perceived enemies, as evidenced by its continued bombing of its neighbors, it does not need to construct an ICBM to permanently damage our country. Nuclear material spread with a common construction-site explosive could leave huge portions of this country poisoned for hundreds of years and many of us dead.

Sooner or later the people will recognize that this POTUS is a major danger to the country and must be removed. We have a chance to begin that process on the first Tuesday in November by electing a Congress that is not afraid to do the job. Let's work to make that happen.

POTUS Roaster

Thanks for reading these posts that I write for you. To read others in this series, go to write.as/potusroaster/archive. I hope you have a great weekend.


from Sean Barnett

Over the past few months I have been doing some technical reading. Well, actually a lot of technical reading, perhaps compensating for not having focused on multiprocessing and performance for some years. And, guess what? The technical world has changed.

I do hope to curate this list at some stage, but at least I've now captured some of the links so I don't lose track of them.

Multi-Processing (e.g., concurrency, multi-threading, asynchrony)
  • Promises
  • Zap
  • Brad Cypert Blog
  • Programming Languages Memory Model
  • Making Sense of Acquire Release Semantics
  • Miguel Young Blog
  • Algorithms for Modern Hardware
  • Work-Stealing Deque Part 1: The Problem with Locks

Performance (e.g., algorithms, SIMD, branchless coding)
  • Daniel Lemire, Computer Science Professor
  • Ash's Blog
  • Tutorial on SIMD vectorisation to speed up brute force
  • Josh Haberman Blog
  • Latency Numbers Every Programmer Should Know
  • Optimizing UTC -> Unix Time Conversion for Size and Speed

Zig
  • Open My Mind – Karl Seguin Blog

Geospatial
  • Gamdev Maths: distance from point to line
  • Find the Intersection of Two Line Segments in 2D (Easy Method)

Data Engineering
  • Jenna Jordan Blog
  • Data Engineering Blog of Simon Späti
  • Spartan Blog – Jerónimo
  • The Evolution of Database Architecture and the Future of Data Management
  • Stop Paying the Complexity Tax
  • Big Data is Dead


from Sean Barnett

  • time-series geospatial data – each record has a timestamp and (in some cases) a geospatial position (typically a WGS84 latitude and longitude)
  • durable – rather than ephemeral – records
  • idempotent – in the context of at-least-once delivery semantics
  • commutative – the same result regardless of the order in which records are received, including when the output depends on multiple records
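The properties above can be sketched in a small example. This is a minimal illustration, not code from the post: it assumes each record carries a unique `record_id`, and shows how deduplicating on that key gives idempotence under at-least-once delivery, while an order-insensitive aggregate gives commutativity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    record_id: str   # unique key; reprocessing the same record has no effect
    timestamp: float # epoch seconds
    lat: float       # WGS84 latitude
    lon: float       # WGS84 longitude

class Aggregator:
    """Deduplicates by record_id (idempotent under at-least-once delivery)
    and computes an order-independent (commutative) aggregate."""
    def __init__(self):
        self.seen = set()
        self.count = 0
        self.lat_sum = 0.0

    def ingest(self, rec: Record) -> None:
        if rec.record_id in self.seen:  # duplicate delivery: no effect
            return
        self.seen.add(rec.record_id)
        self.count += 1
        self.lat_sum += rec.lat         # a sum is commutative: order-free

    def mean_lat(self):
        return self.lat_sum / self.count if self.count else None
```

Feeding the same records in a different order, with duplicates injected, yields the same aggregate, which is exactly the guarantee the bullet points ask for.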

from Millennial Survival

Sometimes I wonder how organizations can function and survive. If you are hiring for important roles, maybe you should put some thought into coordinating the process effectively when using multiple recruiting agencies.

When multiple agencies contact you about the same role at the same company, but with entirely different messages, it destroys any trust the candidate has in the process. It is even better when one agency tells you how confidential the search is and won't even disclose the name of its client without an NDA, while another happily divulges the client's name without one. The cherry on top is when the hiring organization should absolutely know how to recruit for a role of this caliber without making these basic mistakes.

All of this makes any prospective candidate want to run from the process as fast as possible. After all, if an organization can't coordinate its own hiring process, how much more dysfunctional must it be internally? Seriously, do better. Otherwise, assume the only candidates you will land are those too dense to see past the red flags raised during the hiring process. That isn't a recipe for long-term success.


from Roscoe's Story

In Summary: Listening to the Cubs best the Mets this afternoon was the most ambitious thing I did all day. Several hours of listening to music filled the rest of this Friday. Now it's nearly time to focus on the night prayers and get ready for bed.

Prayers, etc.: I have a daily prayer regimen I try to follow throughout the day, from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics:
  • bw = 234.57 lbs.
  • bp = 151/90 (71)

Exercise: morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet:
  • 05:20 – 1 banana
  • 06:30 – seafood salad, cheese, crackers
  • 11:45 – 1 bacon and egg breakfast taco, 1 bean and cheese breakfast taco
  • 12:00 – homemade meat and vegetable soup

Activities, Chores, etc.:
  • 04:00 – listen to local news talk radio
  • 05:15 – bank accounts activity monitored
  • 05:40 – read, write, pray, follow news reports from various sources, surf the socials, nap
  • 11:00 – listen to the Markley, van Camp and Robbins Show
  • 11:45 – watch old game shows and eat lunch at home with Sylvia
  • 13:20 – follow an MLB game, Mets vs Cubs; Cubs win, 12 to 4
  • 16:30 – listen to relaxing music and nap

Chess:
  • 17:00 – moved in all CC games


from SmarterArticles

The robots were supposed to take our jobs. Instead, they are sorting us into winners and losers while we argue about the wrong question entirely.

For the better part of three years, the dominant anxiety about artificial intelligence in the workplace has been binary: will it replace us, or won't it? Governments have convened panels. Think tanks have published forecasts. CEOs have made pledges about “responsible deployment.” And through all of it, the conversation has orbited a single, dramatic scenario: mass displacement, a wave of redundancies, the hollowing out of the white-collar middle class.

But in March 2026, Anthropic, the San Francisco-based AI company behind the Claude family of large language models, published a piece of labour market research that quietly reframed the entire debate. Their study, “Labor market impacts of AI: A new measure and early evidence,” introduced a novel metric called “observed exposure” and used millions of real Claude interactions mapped against roughly 800 occupations in the O*NET database to measure not what AI could theoretically do to jobs, but what it is actually doing right now. The headline finding was almost anticlimactic: AI is not yet replacing jobs at scale. There has been no systematic rise in unemployment among workers in the most AI-exposed occupations.

The less comfortable finding, buried deeper in the data, was this: AI is already creating a measurable skills divide. Hiring of workers aged 22 to 25 in highly exposed occupations has dropped roughly 14 percent compared to pre-ChatGPT levels. The researchers noted this finding was “just barely statistically significant,” but the directional signal is hard to ignore. The first measurable labour market effect of generative AI is not a pink slip. It is a closed door.

And that might be worse.

The Gap Between Can and Does

Anthropic's study is notable not for what it predicts but for what it measures. Previous attempts to gauge AI's impact on employment, including the widely cited 2023 research by Eloundou and colleagues, relied on theoretical exposure: estimating whether a large language model could, in principle, make a given task at least twice as fast. By that measure, the numbers look staggering. Theoretical AI coverage for Computer and Mathematical occupations sits at 94 percent. For Office and Administrative Support roles, it is 90 percent.

But theoretical capability is not the same as economic reality. Anthropic's observed exposure metric tracks what is actually happening in professional settings by counting which tasks show sufficient work-related usage in Claude traffic, then weighting fully automated implementations at full value and augmentative use (where humans remain in the loop) at half weight. The result is a far more sober picture. In Computer and Mathematical roles, Claude currently covers just 33 percent of tasks. For the most exposed individual occupations, the figures are higher but still well below ceiling: programmers at 74.5 percent, customer service representatives at 70.1 percent, and data entry clerks at 67.1 percent.
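As a rough illustration of the weighting just described, observed exposure for an occupation can be computed as a weighted share of its tasks. The task labels below are hypothetical; the actual study derives them from Claude traffic mapped to O*NET task lists.

```python
def observed_exposure(tasks):
    """Toy version of the 'observed exposure' weighting: tasks with
    sufficient work-related AI usage count at full weight when fully
    automated, half weight when augmentative (human in the loop),
    and zero otherwise."""
    weights = {"automated": 1.0, "augmentative": 0.5, "none": 0.0}
    return sum(weights[t] for t in tasks) / len(tasks)

# Hypothetical occupation: 2 automated, 4 augmented, 4 untouched tasks
tasks = ["automated"] * 2 + ["augmentative"] * 4 + ["none"] * 4
print(observed_exposure(tasks))  # 0.4, i.e. 40% observed exposure
```

The half weight on augmentative use is what keeps the observed figures well below the theoretical ones: an occupation where Claude touches every task but always with a human in the loop would still score only 50 percent.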

At the other end of the spectrum, theoretical AI coverage is lowest in grounds maintenance at just 3.9 percent, followed by transportation at 12.1 percent, agriculture at 15.7 percent, food and serving at 16.9 percent, and construction at 16.9 percent. The divide is not merely between AI-proficient workers and everyone else. It is between entire categories of work that exist in fundamentally different relationships to the technology.

The gap between theoretical and observed exposure is, in a sense, the breathing room the labour market currently enjoys. But it is also a measure of latent disruption. As Anthropic's researchers note, tracking how that gap narrows over time provides a real-time indicator of economic transformation as it unfolds. The question is not whether AI can reshape these occupations. It is how quickly the observed line catches up to the theoretical one.

Anthropic's earlier Economic Index report, published in January 2026, provides additional context. That study, based on a privacy-preserving analysis of two million AI conversations split between consumer and enterprise use, found that in early 2025, 36 percent of occupations used Claude for at least a quarter of their tasks. By the time data was pooled across subsequent reports, that figure had risen to 49 percent. The trajectory is clear. What was niche behaviour a year ago is becoming standard practice for nearly half of all tracked occupations. And for the workers on the wrong side of the emerging divide, the pace of that convergence matters enormously.

Power Users and the Compounding Loop

If Anthropic's research tells us what AI is doing to the labour market in aggregate, a separate body of evidence reveals what it is doing to individual workers. And here the picture is sharper, more unequal, and considerably more troubling.

OpenAI's 2025 State of Enterprise AI report documented a sixfold productivity gap between power users and everyone else. Workers at the 95th percentile of AI adoption send six times as many messages to ChatGPT as the median employee at the same companies. For coding tasks specifically, the heaviest users engage 17 times more frequently than their typical peers. Among data analysts, the most active users employ AI data analysis tools 16 times more often than the median. Over the past year, weekly messages in ChatGPT Enterprise increased roughly eightfold, and the average worker sends 30 percent more messages than they did a year prior. Seventy-five percent of enterprise users report being able to complete entirely new tasks they previously could not perform.

The numbers translate directly into time. Workers who applied AI to seven or more distinct tasks reported saving over 10 hours per week. Those using it for fewer than three tasks reported no time savings at all. This is not a gentle gradient. It is a cliff edge.

What makes this particularly consequential is the compounding nature of the advantage. Workers who experiment broadly with AI discover more uses, which leads to greater productivity gains and better performance reviews, which leads to more interesting assignments and faster advancement, which in turn provides more opportunity and incentive to deepen AI usage further. The Debevoise Data Blog described this dynamic in January 2026 as a self-reinforcing cycle: “AI success leads to more AI success,” with early adopters developing intuitions and workflow habits that simply cannot be shortcut by intensive late-stage training. Organisations that wait until 2027 to address their AI skills gap, the analysis argued, will find themselves competing for a shrinking pool of trainable talent against firms that started building capability in 2024 and 2025. Those firms that are ahead now will find it relatively easy to stay ahead, the analysis continued, especially if they can recruit talent away from firms that have fallen behind.

Gensler's 2026 Global Workplace Survey, which polled 16,459 full-time office workers across 16 countries, adds another dimension. About 30 percent of employees now qualify as AI power users, defined as people who regularly use AI tools in both professional and personal contexts. More than half of these power users are under 40, and nearly a third are managers. These workers score significantly higher on innovation, engagement, and team relationships. They spend less time working alone (37 percent of their week versus 42 percent for late adopters) and more time learning (12 percent versus 8 percent) and socialising (11 percent versus 9 percent). Seventy percent of AI power users say learning is highly critical to their job performance. They are three times more likely to perceive their organisations as among the most innovative in the sample.

This is not the profile of someone coasting on a productivity hack. It is the profile of someone whose entire relationship to work has been restructured around a new set of capabilities, and whose career trajectory is diverging from peers who have not made the same transition.

Who Falls Behind, and Why It Is Not Random

The demographics of AI exposure complicate any simple narrative about technology helping the little guy. Anthropic's research found that workers in the most exposed professions “are more likely to be older, female, more educated, and higher-paid.” This inverts the usual pattern of technological disruption, where low-skilled, low-wage workers bear the heaviest costs. AI's first targets are not factory floors or retail counters. They are the knowledge-work occupations that have historically offered stable, well-compensated careers.

At the same time, the youth hiring slowdown suggests that the entry points to those careers are narrowing. If organisations can get 33 percent of a junior analyst's work done through an AI system, the calculus around hiring a new graduate changes. You do not necessarily fire the senior analyst. You simply do not replace the intern. The result is an invisible contraction: no layoffs, no headlines, just a quiet thinning of opportunity at the bottom of the professional ladder. As Anthropic's researchers cautioned, the young workers who are not hired may be remaining at their existing jobs, taking different jobs, or returning to education. The displacement, if that is even the right word, is diffuse and hard to track through conventional unemployment statistics.

This matters because early career experience has always been the mechanism through which workers build the skills, networks, and institutional knowledge that drive later advancement. A 22-year-old who spends two years doing data cleaning, attending meetings, and learning the rhythms of a professional environment is accumulating human capital that no online course can replicate. If AI shrinks the pool of those formative roles, the long-term consequences extend well beyond the immediate hiring numbers. It creates a generational bottleneck: not a single event, but a gradual narrowing of the pipeline through which junior talent enters and eventually rises within knowledge-work professions.

The World Economic Forum's Future of Jobs Report 2025 projected that 170 million new jobs will be created globally by 2030, while 92 million will be displaced, yielding a net gain of 78 million positions. But the same report warned that 59 percent of the global workforce will need reskilling or upskilling by 2030, and that 120 million workers face medium-term risk of redundancy if training systems fail to keep pace. The skills gap, the report noted, is the single most significant obstacle to business transformation, cited by 63 percent of employers. By 2030, 77 percent of employers plan to prioritise reskilling and upskilling their workforce to enhance collaboration with AI systems. The intent is there. Whether the execution will match the ambition is another question entirely.

The question is whether the workers who need reskilling most are the same ones who are positioned to receive it. The evidence suggests they are not.

The Training Paradox

Corporate AI training is booming. It is also, by most measures, failing.

A February 2026 DataCamp and YouGov survey of 517 business leaders in the United States and United Kingdom found that 82 percent of enterprise leaders say their organisation provides some form of AI training. And yet 59 percent of those same leaders report an AI skills gap within their workforce. Only 35 percent say they have a mature, organisation-wide upskilling programme in place. The access is there. The capability is not.

The problem, according to DataCamp's analysis, is structural. Most corporate AI training still follows a passive, course-based model: video lectures, multiple-choice assessments, completion certificates. Twenty-three percent of leaders surveyed said video-based courses make it difficult for employees to apply skills in the real world. The training exists in a vacuum, disconnected from the actual workflows where AI tools would be used. Workers complete modules and tick boxes, but the gap between knowing what a large language model is and knowing how to restructure your daily work around one remains vast.

This finding aligns with the EY 2025 Work Reimagined Survey, which polled 15,000 employees and 1,500 employers across 29 countries and found that organisations are missing up to 40 percent of potential AI productivity gains due to gaps in talent strategy. Among organisations experiencing AI-driven productivity improvements (96 percent of those investing in AI), only 17 percent reported that those gains led to reduced headcount. Far more were reinvesting in new AI capabilities (42 percent), cybersecurity (41 percent), research and development (39 percent), and employee upskilling (38 percent).

The pattern is revealing. Organisations are spending on AI training. They are not firing people because of AI. But they are also not succeeding at turning their existing workforce into proficient AI users at anything close to the speed required. The result is a two-track system within organisations: a minority of self-motivated power users who are pulling ahead, and a majority who have attended the workshops but have not fundamentally changed how they work.

McKinsey's January 2025 report on “Superagency in the workplace” put this disconnect in stark terms. While 92 percent of companies plan to increase AI investments over the next three years, only 1 percent report that they have reached what McKinsey considers AI maturity. The report also found that employees are three times more likely than leaders expect to be using generative AI for at least 30 percent of their daily work. Nearly half of C-suite leaders believe their companies are moving too slowly on AI development, citing leadership misalignment and lack of talent as the primary obstacles. The gap is not just between workers and AI. It is between what organisations think is happening with AI adoption and what is actually happening on the ground.

DataCamp's research found that organisations with mature, workforce-wide upskilling programmes are nearly twice as likely to report significant positive AI return on investment. The implication is clear: the training itself is not the bottleneck. The quality, structure, and integration of training into daily work is what separates organisations that capture AI value from those that do not. And that distinction maps uncomfortably well onto existing inequalities in corporate resources, management quality, and organisational culture.

The Wage Premium and the Widening Gulf

PwC's 2025 Global AI Jobs Barometer, which analysed close to a billion job advertisements from six continents, quantified the financial dimension of the AI skills divide. Jobs requiring AI skills now command a 56 percent wage premium over comparable roles, more than double the 25 percent premium recorded the previous year. Skills demands in AI-exposed occupations are changing 66 percent faster than in other roles, up from 25 percent the year before. And jobs requiring AI skills are growing 7.5 percent year on year, even as total job postings fell 11.3 percent.

These numbers describe an accelerating divergence. Workers who acquire and maintain AI proficiency are not just keeping pace; they are pulling away from the pack in measurable economic terms. A 56 percent wage premium is not a marginal advantage. It is the kind of differential that, compounded over a career, produces fundamentally different life outcomes: different housing, different schools for children, different retirement trajectories.
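The compounding claim is easy to make concrete. The sketch below, which is illustrative only, applies PwC's 56 percent premium to an assumed baseline salary, raise rate, and career length (none of which come from the report) to show the order of magnitude involved:

```python
# Illustrative only: compound the cumulative earnings gap implied by a
# 56 percent wage premium. The base salary, annual raise, and career
# length are assumptions for this sketch, not figures from the report.
def cumulative_gap(base=60_000, premium=0.56, raise_rate=0.03, years=30):
    gap = 0.0
    for year in range(years):
        salary = base * (1 + raise_rate) ** year  # baseline worker's salary
        gap += salary * premium                   # extra earned by the premium worker
    return gap

# Under these assumptions, the lifetime earnings difference runs
# well into seven figures.
print(round(cumulative_gap()))
```

Even with conservative assumptions, the gap compounds into a sum comparable to a house or a retirement fund, which is the sense in which a 56 percent premium reshapes life outcomes rather than merely pay slips.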

The acceleration is equally significant. When skill demands change 66 percent faster in one set of occupations than in others, the half-life of any given training investment shrinks accordingly. A worker who completes an AI literacy course in 2026 may find its content partially obsolete by 2027. This creates a treadmill effect that disproportionately burdens workers with less time, fewer resources, and less institutional support for continuous learning. It also creates a recruitment spiral. Workers with AI skills command higher salaries, which means they gravitate towards organisations that already have strong AI cultures, which further concentrates capability in firms that are already ahead.

PwC's data also contained a counterintuitive finding: productivity growth has nearly quadrupled in industries most exposed to AI, rising from 7 percent over the 2018 to 2022 period to 27 percent over 2018 to 2024 in sectors like financial services and software publishing. Jobs continue to grow even in the most easily automated roles. AI, in other words, is making people more valuable, not less. But the value accrues unevenly, and the distribution of that value tracks closely with the distribution of AI competence.

The Five-and-a-Half Trillion Dollar Question

IDC, the technology research firm, has put a price tag on the AI skills gap: $5.5 trillion in projected global economic losses by 2026, stemming from delayed products, quality issues, missed revenue, and impaired competitiveness. Over 90 percent of global enterprises, by IDC's estimate, will face critical AI skills shortages. Ninety-four percent of CEOs and CHROs identify AI as their top in-demand skill, yet only 35 percent feel they have adequately prepared their employees. Only a third of employees report receiving any AI training in the past year, even as half of employers report difficulty filling AI-related positions.

The scale of the mismatch is staggering. There are currently 1.6 million open AI positions globally, against approximately 518,000 qualified candidates, a demand-to-supply ratio of roughly three to one. And the positions going unfilled are not niche research roles at frontier labs. They are the applied, mid-level positions where AI tools meet business operations: the prompt engineers, the automation specialists, the analysts who can bridge the gap between a model's capabilities and an organisation's needs.

The barriers to closing this gap are not mysterious. IDC's research identified the key obstacles as lack of talent (46 percent), data privacy concerns (43 percent), poor data quality (40 percent), high implementation costs (40 percent), and unclear return on investment for AI programmes (26 percent). These are not exotic challenges. They are the ordinary frictions of organisational change, amplified by the speed at which AI capabilities are advancing.

IDC projects that AI technologies themselves will eventually shave about a trillion dollars off skill-gap losses by 2027, as AI tools become more intuitive and self-service. But that still leaves trillions in unrealised value, and it assumes a level of organisational readiness that the DataCamp and EY surveys suggest is far from guaranteed.

The irony is hard to miss. The tool that is supposed to democratise knowledge work is, in its current deployment phase, concentrating advantage among those who already have the skills, resources, and institutional support to learn how to use it. AI's promise of universal empowerment remains real. Its present reality is stratification.

Structural Shift or Growing Pains

The critical question embedded in all of this data is whether the AI skills divide is a temporary adjustment, a transitional friction that will smooth out as tools improve and training catches up, or a permanent structural feature of the labour market.

The case for optimism rests on several reasonable premises. AI tools are becoming more user-friendly with each generation. Natural language interfaces have dramatically lowered the barrier to entry compared to previous waves of technology. Companies are investing heavily in training, even if current programmes are imperfect. PwC's data shows that AI is creating jobs and boosting productivity broadly, not just for an elite few. And 85 percent of organisations plan to increase their investment in upskilling employees through the period from 2025 to 2030, according to multiple industry surveys.

But the case for structural concern is stronger, and it rests on the compounding dynamics that multiple independent studies have now documented. The Debevoise analysis identified a self-reinforcing cycle where early AI adopters develop capabilities that accelerate their further adoption, creating a widening gap that late entrants cannot easily close. OpenAI's data shows a sixfold productivity differential that maps onto usage intensity. Anthropic's observed exposure metric reveals that even within occupations theoretically saturated by AI capability, actual adoption is unevenly distributed.

The OECD's 2025 report on bridging the AI skills gap acknowledged that current adult training systems “often favour those already advantaged by higher education, widening opportunity gaps.” The report recommended that governments expand incentives for AI training, improve accessibility and inclusivity, and invest in modular credentials and recognition of prior learning. These are sensible policy proposals. They are also the kind of recommendations that take years to implement and decades to show results.

Meanwhile, the compounding loop runs at the speed of quarterly performance reviews and annual promotion cycles. Every month that a power user pulls further ahead is a month that makes the gap harder to close. Every junior role that goes unfilled because AI handles part of its function is a career pathway that becomes slightly narrower. The structural argument is not that these trends are irreversible. It is that they are self-reinforcing, and that the window for intervention narrows with each passing quarter.

What Organisations Get Wrong

The most common corporate response to the AI skills divide is to treat it as a training problem. It is not. It is a management problem, a culture problem, and, increasingly, a strategic problem.

Training, as the DataCamp survey makes clear, is a necessary but insufficient condition for building AI capability. What separates organisations that successfully embed AI into their workflows from those that do not is not the availability of courses but the integration of AI tools into actual work processes, with management support, performance incentives, and tolerance for experimentation. McKinsey's superagency report found that 48 percent of employees rank training as the most important factor for AI adoption, but training alone, without the organisational scaffolding to support its application, produces graduates who know the theory but cannot implement it.

The EY survey found that 96 percent of organisations investing in AI report some productivity gains. But the distribution of those gains within organisations is wildly uneven, with a handful of power users capturing the majority of value while the broader workforce remains largely unchanged. This suggests that the barrier is not technological but organisational: the tools work, but most organisations have not restructured roles, workflows, and incentives to make broad adoption possible.

Companies that lead in AI adoption, according to OpenAI's enterprise report, enjoy 1.7 times higher revenue growth, 3.6 times greater total shareholder return, and 1.6 times higher EBIT margins compared to laggards. The correlation between AI adoption and financial performance is becoming impossible to ignore. And yet the mechanisms for spreading AI proficiency remain largely ad hoc, dependent on individual initiative rather than systematic organisational design.

This is the paradox at the heart of the AI skills divide. The technology is genuinely democratising in its potential. Anyone with access to a large language model can, in theory, perform analyses, draft documents, and automate workflows that previously required specialist expertise. But “in theory” is doing a lot of heavy lifting. In practice, the workers who extract the most value from AI are those who already possess the skills, confidence, and institutional support to experiment effectively. The tool is egalitarian. The context in which it is deployed is not.

The Policy Vacuum

Government responses to the AI skills divide have been, with some exceptions, sluggish and incremental. The OECD has called for expanded AI training incentives, improved accessibility, and investment in connected learning pathways that allow workers to move more fluidly between vocational and academic routes. The European Parliament has commissioned research on AI's role in reshaping the European workforce. The World Economic Forum continues to publish increasingly urgent reports about the scale of reskilling required.

But the gap between policy aspiration and implementation remains wide. Most OECD countries do not yet have comprehensive AI literacy programmes targeted at working adults. Funding for reskilling tends to flow through existing institutional channels, which, as the OECD itself acknowledges, “often favour those already advantaged by higher education.” The workers most at risk of falling behind are precisely the ones least served by current policy frameworks: those without degrees, without employer-sponsored training, without the time or resources for self-directed learning.

The speed mismatch is perhaps the most critical issue. AI capabilities are advancing on a timeline measured in months. Policy responses operate on a timeline measured in years, sometimes decades. By the time a government commission has completed its review, published its recommendations, secured funding, designed a programme, and enrolled its first cohort of learners, the AI landscape will have shifted beneath their feet. The skills taught in 2026 may be partially obsolete by 2028. The OECD's own recommendation for “modular credentials and recognition of prior learning” implicitly acknowledges this problem: long-form educational programmes are too slow for a technology that rewrites its own capabilities every few months.

This does not mean policy is futile. It means that policy alone cannot solve the problem. Effective responses will require coordination between governments, employers, educational institutions, and the AI companies themselves. They will require a willingness to experiment with new models of training delivery, credentialing, and workforce support. And they will require an honest reckoning with the fact that the AI skills divide is not simply a technical challenge to be solved with better courses. It is a distributional challenge that reflects, and threatens to amplify, existing structures of inequality.

What Comes Next

Anthropic's March 2026 study offered one final, underappreciated insight. The gap between theoretical and observed AI exposure is not closing uniformly across occupations. In some fields, adoption is accelerating rapidly. In others, it has barely begun. The trajectory of that convergence will determine, more than any other single factor, how deeply AI reshapes the labour market over the next five years.

If observed exposure converges slowly, there is time for training systems, policy responses, and organisational practices to adapt. Workers can build skills incrementally. Institutions can adjust. The transition, while painful, remains manageable.

If it converges quickly, as improvements in AI capability, agentic workflows, and enterprise integration suggest it might, the window for orderly adaptation shrinks dramatically. The 14 percent decline in youth hiring that Anthropic documented could become 30 percent, or 50 percent. The sixfold productivity gap between power users and everyone else could widen further. The 56 percent wage premium for AI-skilled workers could calcify into a permanent feature of the labour market, as entrenched and as difficult to reverse as any existing dimension of economic inequality.

The honest answer to whether AI's skills divide is temporary or structural is that it is both, simultaneously, and the balance between those two possibilities depends on choices being made right now, in boardrooms and government offices and training departments around the world. The technology does not predetermine the outcome. But the compounding dynamics are real, the clock is running, and the workers who are falling behind today are accumulating disadvantages that will become progressively harder to reverse.

The robots did not take the jobs. They created a new hierarchy within them. And unless something changes, that hierarchy is hardening fast.

References and Sources

  1. Anthropic, “Labor market impacts of AI: A new measure and early evidence,” Anthropic Research, March 2026. https://www.anthropic.com/research/labor-market-impacts

  2. Anthropic, “Anthropic Economic Index report: Economic primitives,” January 2026. https://www.anthropic.com/research/anthropic-economic-index-january-2026-report

  3. Fortune, “Anthropic just mapped out which jobs AI could potentially replace. A 'Great Recession for white-collar workers' is absolutely possible,” March 6, 2026. https://fortune.com/2026/03/06/ai-job-losses-report-anthropic-research-great-recession-for-white-collar-workers/

  4. Fortune, “Is AI about to take your job? New Anthropic research suggests the answer is more complicated than you think,” March 10, 2026. https://fortune.com/2026/03/10/will-ai-take-your-job-this-chart-in-an-economic-study-by-anthropic-may-give-you-a-hint-but-the-answer-is-complicated/

  5. OpenAI, “The State of Enterprise AI: 2025 Report,” 2025. https://openai.com/index/the-state-of-enterprise-ai-2025-report/

  6. VentureBeat, “OpenAI report reveals a 6x productivity gap between AI power users and everyone else,” 2025. https://venturebeat.com/ai/openai-report-reveals-a-6x-productivity-gap-between-ai-power-users-and

  7. Debevoise Data Blog, “AI Advantages Tend to Compound, Increasing the Risks of Falling Too Far Behind,” January 7, 2026. https://www.debevoisedatablog.com/2026/01/07/ai-advantages-tend-to-compound-increasing-the-risks-of-falling-too-far-behind/

  8. Gensler Research Institute, “Global Workplace Survey 2026,” 2026. https://www.gensler.com/gri/global-workplace-survey-2026

  9. Gensler, “The Human Side of AI: What Power Users Are Telling Us About the Workplace,” 2026. https://www.gensler.com/blog/what-ai-power-users-tell-us-about-the-workplace

  10. DataCamp and YouGov, “Companies Are Investing in AI, But Their Workforces Aren't Ready,” February 2026. https://www.datacamp.com/blog/the-ai-skills-gap-in-2026-why-most-ai-training-isn-t-translating-to-workforce-capability

  11. EY, “AI-driven productivity is fueling reinvestment over workforce reductions,” December 2025. https://www.ey.com/en_us/newsroom/2025/12/ai-driven-productivity-is-fueling-reinvestment-over-workforce-reductions

  12. EY, “EY survey reveals companies are missing out on up to 40% of AI productivity gains due to gaps in talent strategy,” November 2025. https://www.ey.com/en_gl/newsroom/2025/11/ey-survey-reveals-companies-are-missing-out-on-up-to-40-percent-of-ai-productivity-gains-due-to-gaps-in-talent-strategy

  13. PwC, “The Fearless Future: 2025 Global AI Jobs Barometer,” 2025. https://www.pwc.com/gx/en/services/ai/ai-jobs-barometer.html

  14. IDC via CIO Dive, “What's the cost of the IT skills gap? IDC says $5.5 trillion by 2026,” 2025. https://www.ciodive.com/news/tech-talent-skills-gaps-cost-trillions-idc/716523/

  15. World Economic Forum, “Future of Jobs Report 2025,” January 2025. https://www.weforum.org/publications/the-future-of-jobs-report-2025/

  16. OECD, “Bridging the AI skills gap,” 2025. https://www.oecd.org/en/publications/bridging-the-ai-skills-gap_66d0702e-en.html

  17. McKinsey, “Superagency in the workplace: Empowering people to unlock AI's full potential at work,” January 2025. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

  18. HR Dive, “Anthropic: AI's influence over the labor market is only beginning to be felt,” March 2026. https://www.hrdive.com/news/anthropic-ai-influence-over-the-labor-market-jobs/814670/

  19. TechCrunch, “The AI skills gap is here, says AI company, and power users are pulling ahead,” March 25, 2026. https://techcrunch.com/2026/03/25/the-ai-skills-gap-is-here-says-ai-company-and-power-users-are-pulling-ahead/

  20. The Decoder, “Anthropic's new study shows AI is nowhere near its theoretical job disruption potential,” March 2026. https://the-decoder.com/anthropics-new-study-shows-ai-is-nowhere-near-its-theoretical-job-disruption-potential/

  21. Workera, “The $5.5 Trillion Skills Gap: What IDC's New Report Reveals About AI Workforce Readiness,” 2025. https://www.workera.ai/blog/the-5-5-trillion-skills-gap-what-idcs-new-report-reveals-about-ai-workforce-readiness


Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from TechNewsLit Explores

On Tuesday (14 April 2026), Rep. Ro Khanna, a Democratic member of Congress from California, spoke at the National Press Club about his vision for the country and answered questions from Mark Schoeff, NPC president and financial services correspondent for CQ-Roll Call. The event should put to rest any questions about whether Khanna is running for president.

Exclusive photos from Khanna’s event at the National Press Club are available in the TechNewsLit portfolio at the Alamy photo agency.

Khanna became well known for his work on the House Oversight Committee to release the Department of Justice’s files on convicted sex offender Jeffrey Epstein. Despite deadlines written into legislation passed by Congress and signed by the president, DoJ has yet to release all of the files, and recently fired Attorney General Pam Bondi has so far ignored a subpoena to appear before the committee on this topic.

While Khanna made several references to Epstein and the files, he framed many of his arguments on economic inequality in terms of the “Epstein Class” vs. most everyone else. In Khanna’s view, the Epstein Class is made up of super-rich individuals who feel their wealth and power makes them exempt from laws all others must obey. Their disrespect for sex offender laws is just one example.

Khanna’s main pitch was for plans with bold direct actions addressing economic inequality: universal health care, affordable child care, faster conversion to green energy, and more support for college or vocational education. He said reversing the Trump tax cuts and ending the blank check for defense spending would pay for those programs.

Khanna noted that incrementalist or technocratic proposals from Democrats only got Donald Trump elected twice. He also said Sen. Chuck Schumer (D-NY) should step down from his Democratic Party leadership post. Khanna did not say anything about Rep. Hakeem Jeffries (D-NY), the party leader in the House where he serves.

Copyright © Technology News and Literature. All rights reserved.

 
Read more...

Join the writers on Write.as.

Start writing or create a blog