It's National Poetry Month! Submit your poetry and we'll publish it here on Read Write.as.
from
Micropoemas
Open the window to look at people another way: small, colorful.
from bios
6: The Addiction Of Stigma
From the crisp cavern of the last of the stars I am woken with half a mug of semi warm sweet black tea. I can feel the warmth of the security hut lingering in this incursion of hands into my nest. There is a message for me on his phone – charging in the hut, I must come, he leaves shift in ten.
I had arranged for someone to send me money for transport, and waited all night. The whatsapp now apologizes, they have only just put through the instant clearance which will take roughly forty minutes. And I am going to be late for my appointment if I wait.
Down at the Denis Hurley Center there is a social worker who can get people into a free rehab. And there are people who will believe in me again if I just get myself to a rehab. There are people who believe that I can get myself to rehab.
I did not want to walk.
I can not tell you if I would have used the uber money for smack and walked anyway...
Before rehab every user wants one last hurrah.
But the money will come in less than forty and the appointment is in fifty and if I wait for the money I might buy smack and not make the appointment, and it is maybe a half hour’s brisk walk...
I set out to set out from the small sanctioned space that I sleep in, tucked away in the church garden, where I have returned to eke the last warmth out of my carving of cardboard and plant life in the last blueness of morning, and gather my things, my bank card, my hoodie, my tin foils and lighters...
All I want is a room to sleep in, regulated medication for the withdrawal and to be free from the ability to assuage my pain endlessly with heroin. I want to slowly un-numb. I want to be endlessly numb. Both at the same time. But the returning thing from which I am trying to escape is invading the numbness, and the endless small junkie tasks of every para day are no longer numbing and money is less but the tasks are relentless and I take no joy in them and then the smack is less and the wheedling and the shame is more and so now, it is impossible to be impossibly numb anymore and the only way, is to unnumb slowly, to return to the waking world.
I set out to walk to the Denis Hurley Center.
Determined. Withdrawing. Shivering. The bone splintering pain is in the post. The shit streaming down my legs is later. But later I will be in rehab and have methadone.
The park I sometimes sleep in, smoke at, in small groups in the lazy afternoon haze. It’s not afternoon, it’s empty, no groups to try get a hit off.
As they bask in the balcony shade of their nymandawos, out of reach of the rising day’s heat, the dealers lazily refuse to give me credit.
The other park, empty except for some still sleeping, glazed with the restless sweat of nearing need. Scattered sandwich wrappers from the call to prayer meal drop.
Just around the corner is the rotting cat carcass, it’s on my route to the scrap-for-crack place and I have been noting its decay daily, and today its eyes are full of maggots, and its stomach has exploded with flies.
The corner of the intersection, under the protection of the overhanging roof of the abandoned butchery, where I sometimes sleep after a day of digging tins from bins. No-one but detritus, foils romantic in wind eddies -depleted. The trickle of shit is starting to eek. I’m going to rehab. I can make it. They’ll have methadone.
The crack house where I sometimes hustle for change, crack, a roof, and the smoking room is abandoned, three paras outside trying to make a plan in the hot sun.
The rank of broken taxis where we smoke, under the canopy of old trees and plastic sheeting breathing in the morning heat the users are huddled around a burning tyre for a warmth not possible, and no one will spare me a hit, no one has – they say and they retreat into the old minibus rusting black plastics, someone offers me a blackening banana, the smell of it makes me retch, I am offered a hit if I come back in a little bit or wait but I am late for my appointment to get into a rehab and my stomach is bubbling and my hands are chicken hands cramp, searing tendons hot and steel pulling in parts of my body I never had before and fuck I really wanted to uber.
The abandoned methadone clinic with the nyaope dealers selling what I need right now – christ just one hit before I book into rehab...
Indanda smell soaking like a spoeg bucket through a warren of weeds and bushes where the dealers live in the abandoned lot next to the abandoned boat builders yard, where the paras live in the hulls of abandoned boats.
The boys who smoke on the steps of the abandoned HIV clinic opposite the taxi rank where the dealers hide among the sellers of cell phone accessories, smileys grilling on open fires,
The users smoking on the steps of the abandoned public toilets, trying on freshly shoplifted hoodies.
Through the alleys and finally through a levelled building, just one or two bricks high the smokers and the spikers leaning against the wind in plastics trying to get their hits and I look for someone to ask for just one fucking hit... the money must be in my account by now. An ATM mocks me from across the road. And there, one block away, is the Denis Hurley centre.
Fuck it, I'm going to rehab, they'll have methadone.
I wasn’t going to rehab. There was no methadone.
In order to get into Newlands Rehab, to get off street drugs, you have to be off street drugs. They do not accept anyone who tests positive for any substances. If you want to get clean, they advise you to self-manage your own detox by reducing the amount of nyaope you smoke over five weeks. Over those five weeks you have to attend two sessions a week, one private with the social worker, and one group session with all those trying to reduce to get into rehab. I agree to this and ask them if they can maybe get me an Uber, I know the money has hit my account and I don’t want to walk back, because then I will spend it badly, sharing and paying back all the little hits I had on the way, and then have nothing for myself to get through the night. They are unable to call me an Uber.
I miss my next session.
I try to attend the group session but at the same time, at the Denis Hurley Centre there is a free meal, and the queue is an hour and a half long. I can queue and eat or I can go and listen to how I need to reduce my usage in order to get clean, to get into a rehab to get clean.
I choose to eat.
I phone the Newlands Rehab to see if they offer a twelve step program and a way to reintegrate into larger society. They tell me they will help me get closer to God.
I get myself Suboxone, via an addiction psychiatrist, to help get through the withdrawals. This is an exercise unto itself, it is days and hours and so much time trying to explain to people my limitations and how I need help and how just giving me money will not help and the help I need is not to be trusted. To be not trusted. Not to be.
On my way to my second one on one session at the Denis Hurley Center the cat is starting to dry out, caved mummy skin. A lack of flies.
I am there to tell the social workers that I have Suboxone, can start it immediately, and it’s a six-month process but I will be free of all street drugs within three weeks, so can I get into Newlands, I’ll come to all sessions from now on. And I am told that to get into Newlands you cannot be on any medication at all.
All I want is a room, medication and for it to be impossible to take any heroin for roughly six weeks, I want a rehab to formalise this, because it is impossible for anyone to know that I am trying to claw my way back unless there is the official stamp of a rehab, however unsuited to rehabilitation it might be.
Now it seems that even being clean is not good enough to get into Newlands, the only free rehab I can find, it seems that I must be off all medication, even the medication that is keeping me clean. And I start the walk back from the social worker at the Denis Hurley Center, with no money for caps, and slightly close to withdrawal. I could start my Suboxone now, but I only have two weeks’ worth and have been told that only if I get into rehab will the full six months be paid for. Reduction therapy is a joke when some days you have nothing at all and some days you have too much. Addicts cannot self-manage, it’s in the name. Coming off Suboxone without titrating down is a different kind of withdrawal, easier on the mind, hard on the body, which is hard on the mind.
I just want a room and time to think without the pressure of withdrawal every eight hours, twelve hours on methadone, twenty four hours on Suboxone.
I pass Matshikiza, squatting in an alley, beating like porridge the insides of a fan. She’s getting the copper out. She thinks it might be just less than a kilogram. That’s about R150, if we make the daytime scrapyard, but they’re far and it’s after three. Her hair is flotsam, long with strips of fabric, strips of coloured plastic, ribbons, discarded hair extensions, bits of bright wig, braided, melted into her own impeccably matted. She flings it over her shoulder occasionally as we work, stripping the plastic casing, always talking Matshikiza, “Iris is back,” she tells me.
“And fat,” I say as we break off the metal transformer bit, “I saw her last week.”
“Returned from the farm, yes, she was clean but there was no work, now her weight is already going” and then we have to unstrand the copper wire, but there’s more copper in the cables and we need every bit we can get, and we take to trying to burn off the plastic and someone comes out a door and shouts, “FUCK OFF PARAS” and so we amble away and find a parking lot to mine our copper.
While we burn and strip and break, her hair occasionally catches a flame and singes or flames and she brushes these forest fires off like mosquitoes. “Iris was raped by a customer the other night, but she is so not wys, you know. She went to the cops. They asked her if he paid, and then told her it wasn’t rape.”
In the fading light Matshikiza shakes her hair shampoo commercial, away from the flames, “I am not sure if the client or the cop beat her, but her eye is fucked.”
Some boys come past us and we find out the late-night scrapyard opens in half an hour and they only pay R90 a kilogram. One of the boys wants Matshikiza to go with him to the bush, so they do and I carry on stripping the wires, burning the plastic until I am sick with acrid.
The other boy stays with me, the tiknitian, out of worn holes his backpack streams wires and broken cellphone bits and random scraps of previous technology and he paces and talks to himself anxiously, starts as if being interrupted, the familiar crys-style comforting me as I choke on plastic smoke.
Matshikiza returns with R25. We walk to the scrap merchant. He weighs us in at 400 grams, we get R40. We have R65, enough for a cap and a small piece to share.
We make it back to the open air broken building para city, a field of people huddled under black rubbish bags trying to smoke and we get a cap and a piece and we get inside the black plastic and it smells of plastic and we smell of burnt plastic and the sweat of the day and I can tell the withdrawal is coming because I am getting my sense of smell back, and a half cap isn’t going to do it but that’s what there is and I get my foil and Matshikiza loads on a dot, and I pull in, and then we dot through it, levering in the secondary smoke, dots to prevent waste, the sickness must be diminished, feeling a small bit of relief, saving the crack for just before we have to walk back up the hill from town to Percy Osbourne, where she works and I can ask people for help, and I lean back -as much as is possible inside a black garbage bag – and say, “things are bad today.”
Exhaling, we are close under the plastic, in a very tiny room, the light is gone outside and we can only see each other when the lighter sparks on. I tell her I’ve been trying to get into Newlands rehab, because I need a free rehab, but they want me to get clean first.
Matshikiza laughs. “I went to Newlands, the orderlies there, they trade nyaope for clothes or toiletries or whatever you can give. Everyone smokes there. But they charge more, so I came back.”
We hit the crack and take off the black plastic and the street lights and the people and the rustling of so many people under black plastic whispering and exhaling and we start to walk up the hill, the taxis and the rankness, the scattered pavement cookeries, the hustling shouts dying out, behind me somewhere is the Denis Hurley Centre.
Unsure now how to make our next plan and it must be made soon we stumble past the mosque where the last few styrofoams of Ramadan briyani are being handed out, and Matshikiza flirts one away from the packing up staff and we sit on the pavement scooping with broken styrofoam scoops hot rice and chicken scraps into our not hungry mouths in service of our hungry stomachs, swapping with compatriots the street gossip of the day, trying to figure out a plan.
Limping now towards Percy Street, we meet up with Grant, he’s heard I have Suboxone and so we go with him to the strip-club he dances at, and sell the Suboxone half price to the owner’s son who has a son who is trying to get clean, in order to return to school.
And we walk up to the nymandawo, to the dealers who chase us with stones, and we buy caps and pieces and steel ourselves for the walk up to the church garden to smoke
The hill ahead of us, but we will not smoke until we are safe in the garden, away from sharing, we drag ourselves up hill wreathed in eddies of mynah call.
On the corner by Venice road, Iris and her detached retina, a wary lollipop ready with okapi.
Another corner, a blankness on the pavement, an absence of mummifying cat.
We collapse into the church garden, sweating and sticky with hints of burning plastic, coal smoke, lingering briyani, various detritus, breathing in the vinegar fumes of heroin running down the foil, we have enough not to dot. Soon we fade into the intimacy of opiate oblivion. Before she sleeps she says, “Iris is lucky, she has a farm to go back to.”
In the crisp cavern of the night, a warm incursion of hand shakes Matshikiza awake, he has business for her. As she stands some of the sticks and leaves have joined into the jetsam of her hair, the glow of the street light outlines the church vaguely. She has finished sharing for the day, and will not return.
Soon it is only my own warmth left in the nest.
The withdrawal will wake me in about three hours.
Reality is that which, when you stop believing in it, does not go away.
from 3c0
It’s a time to be, and a time to share. To give a piece of yourself to your purpose. On this path, you must therefore let go of people and things that do not align with that purpose.
“Not all [blank]…,” he said.
You are in service of others. You feel and think deeply for others. If you cannot feel deeply about someone in your midst, and cannot envision them as part of your purpose… then why venture forth? It’s time to say goodbye. It’s time to go.
“What do you secretly wish for?”
Perhaps, this isn’t a question for me, but for him.
from
Micropoemas
Even indoors, it rains. We are a lake that evaporates.
from 下川友
For about ten years I've had trouble with my lower back, to the point where I could hardly do desk work anymore. It's not so much pain as a feeling of wrongness: a kind of discomfort coming from my lower back, with a constant sensation close to nausea.
When I try to describe this vague sense of unease to doctors, they don't really take it seriously. Something that can only be explained in sensory terms is hard to get across unless it has been put into proper clinical language.
I notice the same thing watching managers at work: some people are strangely harsh about what they see as a lack of effort, or about being handed words that haven't been normalized into the "correct" language. They unconsciously impose the value that you should make the effort yourself, and they don't even seem to notice they're imposing it.
That's part of what makes the world a little hard to live in. Too many of the well-off refuse to take sensory impressions as they are. In the end, you have no choice but to follow the rules those people made. Some of them will even tell you to stop making excuses.
Oh well.
In any case, my lower back had been miserable for a long time. After endless rounds of rotating and loosening it, the strange feeling in my back suddenly disappeared one day. But then the same kind of unpleasantness appeared in my buttocks and thighs. Again, not pain, but discomfort.
Especially on the left side. When I work on loosening my left thigh, a clogged-up feeling appears under my left armpit. When I stretch my left arm out to the side, it catches somewhere. It's simply unpleasant, and I can't pinpoint where it's coming from.
Repeating this process, I occasionally hit on the source by chance when I make some movement I don't usually make. When that happens, I focus on loosening that spot.
Yesterday I found a hard spot under my buttocks and dealt with it. But the blockage in my armpit and the odd feeling around my neck are still there.
I don't even know how to find a good manual therapist. This world still isn't kind to me at all.
from
ThruxBets
3.45 Ripon. Yorkshire’s Garden Racecourse kicks off its 2026 season today, and Tim Easterby has won the 3.45 twice since 2019. His MISTER SOX seems to have a really solid each-way chance here, ticking plenty of boxes: 7/2/4p at the course, goes well fresh, ground and trip ideal, 4/2/3p in April, and 16/6/10p on an undulating course like Ripon. From what I can make out there should be plenty of pace for him to aim at, and he should find this easier than recent assignments. The only real negative is his mark, which ideally could do with being a couple of pounds lower, but on his last run at the track, in a class 2, he finished third, beaten half a length, off the same 79 he goes off today. Should be really competitive here.
MISTER SOX // 0.5pt E/W @ 17/2 5 places (Bet365) BOG
I also looked at the last race at Ripon and I couldn’t split the Harriet Bethell-trained pair of Milteye and On The River, as both have good chances. I’d also have given the old boy Garden Oasis an each-way chance here if it hadn’t been for the recent rain, but that has put me off. So just a watching brief in the race for me.

For years I've been seeing mentions of Margaret St. Clair's Sign of the Labrys and The Shadow People. Both appear in the “Appendix N: Inspirational and Educational Reading” of the Dungeon Master's Guide, and both are relatively obscure. I was always attracted to their covers, but was unable to just walk to the local library and borrow them.
Something had gotten into me yesterday, and I decided to hunt both down—in their ebook form. I am quite confident there was nothing special about the print versions, besides the beautiful covers that is, since they were plain small-sized paperbacks.
A few hours later, I had procured the novels Sign of the Labrys (1963), The Dolphins of Altair (1967), The Shadow People (1969), and The Dancers of Noyo (1973). According to St. Clair's Wikipedia page, the last three form some sort of loose trilogy. Their ebook covers are quite underwhelming, so I downloaded the originals from the web instead.
I opened Sign of the Labrys, “just to check it out,” read the first few paragraphs, and realised I couldn't put it down. I finished it in a couple of hours.
Mild spoilers ahead.
I greatly enjoyed the “implicit” writing style, atmosphere, and post-apocalyptic setting. Things are casually introduced without too much—or any—explanation, leaving it up to the reader to fill in the blanks.
The whole thing reads like an extended dungeon delve, with the main character sometimes alone and sometimes allying with one or more individuals. Exploration is very focused on corridors, doors, chambers, and implied threat.
D&D tropes I noticed:
Perhaps I read it too quickly, but I do not remember any single character that fits the description of hairy monster featured on the cover.
The novel didn't feel dated at all. In fact, a plague that makes people's lungs fill with liquid, choking them to death, sounded very contemporary.
All in all, Sign of the Labrys was quite an enjoyable read. It was fascinating witnessing what might have contributed to Gary's view on dungeons and dungeon delving. I am very much looking forward to reading The Shadow People too.
#Reading #AppendixN #Fantasy #ScienceFiction
from An Open Letter
I didn’t go to the gym today, so I spent four hours making a massive, almost six-foot-tall cardboard elephant as a decoration for my living room until I get furniture, and also so that I can make this stupid fucking joke about the elephant in the room. The two friends I showed it to lost their shit and thought it was funny as fuck. And honestly I’m kind of just happy that I get to make things that are silly and stupid. I also cooked today, and it was a very simple meal but it tasted delicious. It was also very cheap, and I’m happy that I took the time to do it. A made fun of me and was pretty rude because the dish was not up to her standards, and I did voice how it was out of place for her to say the stuff that she did. She didn’t respond super great, but whatever, I don’t need her to respond in any kind of way.
I think cooking has started to become a bit of an insecurity for me, because I’ve now had a couple of experiences with female friends who grew up cooking making fun of me for my inexperience. And it feels really unfair, because growing up I didn’t even get the chance to cook or do anything like that, because I was forced to do academics 24/7. A mentioned how she would cook with her family and that it was a big bonding time for her, and I’m really happy for her, but it makes it exceptionally shitty to have it rubbed in my face that I didn’t have anyone to teach me this stuff. So I understand that I’m really inexperienced and not super aware of a lot of things that might be common knowledge to someone else. And I understand that I might seem completely clueless and naïve, but it’s really hard to try to learn these things on your own without help. It’s one of those things where you don’t even know where to start, and you don’t even know what you don’t know. I ruined so many nonstick pans because I was cleaning them wrong, and that might seem super obvious in hindsight, but how the fuck am I supposed to know that a pan is not supposed to be scrubbed? I feel really defensive about stuff like this because I’ve encountered a lot of people who just cannot put themselves in the shoes of not knowing something. And this is something that I’ve noticed a lot as a double standard. For the things that I grew up knowing, because that’s all I had as a child, I’ve been very conscious of the fact that not everyone had the same experience I did, so it’s never someone’s fault for not knowing something they should’ve been taught. There’s no point in shaming them, and it isn’t fair to do that either, I find. And I think everyone agrees with that philosophy until it comes to something they don’t consider it applicable to.
from gry-skriver
In March I took part in a competition where the goal was to use artificial intelligence to solve tasks, NM i AI (the Norwegian championship in AI). A friend and I formed a team, and our goal was to learn. The results reflected that: we ended up roughly in the middle of the ranking. Nothing to write home about, but now that about a month has passed, I still think I learned some useful things.
The competition consisted of three tasks. The first was provided by NorgesGruppen Data and involved building a model that could recognise and classify products in photos of store shelves. The second was provided by Tripletex and involved building an agent that could handle accounting tasks. The third was a fun task provided by Astar Consulting (I believe they did most of the organising): making predictions about how a world, described by a pixelated map with values indicating whether an area was built up and so on, would evolve. Here I've noted some of my thoughts on the Tripletex task.
The Tripletex task was surprisingly fun for something involving accounting. I hate submitting travel expense reports, for example. With access to the Tripletex API, you can build an AI agent that submits an expense report for you from just a short description of the trip and files containing the receipts. Each team was given a Tripletex sandbox to test its agent against, and it was surprisingly straightforward to build an agent that could do most of it. The only catch was that I had to use Anthropic's best model, Opus, to pull it off. Since I was being stingy (and deliberately trying to build the cheapest possible solution), I had not treated myself to a more expensive tier without rate limits for Opus. So even though my agent could handle the tasks, given enough time, it performed poorly in the actual competition because we hit the timeout before everything was completed.
I tried a mix of the Sonnet and Opus models, where Sonnet handled tasks in categories classified as "simple", while tasks of other types, or new tasks we had not seen before, went to Opus. This worked fairly well, but still timed out now and then. I then tried using Claude Code to monitor the agent's logs, suggest improved instructions, and make the instructions so good that even Haiku (a faster but less capable model) could manage. The result was that my accounting agent's instructions quickly became heavily tuned to the competition tasks, and when I tested with a wider variety of requests against the team's sandbox, the agent failed miserably. Haiku started hallucinating endpoints in the API and the like. We never managed to build an agent that both did well in the competition and held up when exposed to a broader variety of requests.
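A minimal sketch of that kind of routing might look like the following; the classify_task helper, the keyword list, and the model-name aliases are illustrative assumptions, not the code we actually ran in the competition.

```python
# Minimal sketch of difficulty-based routing between a cheaper and a stronger
# model. Illustrative only: classify_task, the keywords, and the model aliases
# are assumptions, not the competition setup.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SIMPLE_MODEL = "claude-3-5-sonnet-latest"   # cheaper, handles "simple" categories
COMPLEX_MODEL = "claude-3-opus-latest"      # stronger, handles everything else

def classify_task(request: str) -> str:
    """Crude stand-in for a task classifier: known, familiar requests count as simple."""
    simple_keywords = ("travel expense", "receipt", "mileage")
    return "simple" if any(k in request.lower() for k in simple_keywords) else "complex"

def run_step(request: str) -> str:
    """Send the request to the cheap model when possible, the strong one otherwise."""
    model = SIMPLE_MODEL if classify_task(request) == "simple" else COMPLEX_MODEL
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": request}],
    )
    return response.content[0].text
```

Routing on a keyword list is obviously far cruder than classifying by task category, but the structure is the same: cheap model first, expensive model only when the request falls outside the known-simple set.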
Another thing was that it was hard to build a genuinely useful agent without it also being persuadable into doing things like deleting all employees. After all, you want the agent to have enough access to do everything you need it to do. Security in such a system is not trivial. You probably cannot build security solely into the instructions you give your agent; you need one layer in front of the agent that filters out what looks like malicious prompts, AND a layer between the agent and the actual execution of requests against the API that rules out harmful actions, such as deleting every voucher or every employee.
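Sketched very roughly, the two layers could sit around the agent like this; the blocked patterns and the allowlisted endpoints are made-up placeholders, not Tripletex's actual API.

```python
# Sketch of the two guard layers described above (illustrative only).
# The endpoint names and keyword patterns are assumptions, not a real API.
import re

BLOCKED_PROMPT_PATTERNS = [r"delete all", r"slett alle"]              # layer 1: incoming prompts
ALLOWED_ACTIONS = {("POST", "/travel-expense"), ("GET", "/receipt")}  # layer 2: outgoing calls

def prompt_is_suspicious(prompt: str) -> bool:
    """Layer in front of the agent: reject prompts that look destructive."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PROMPT_PATTERNS)

def execute_action(method: str, path: str, payload: dict, http_call) -> dict:
    """Layer between agent and API: only allowlisted method/path pairs go through."""
    if (method, path) not in ALLOWED_ACTIONS:
        raise PermissionError(f"Agent attempted a non-allowlisted action: {method} {path}")
    return http_call(method, path, payload)
```

The point of the second layer is that it does not trust the model at all: even a fully jailbroken agent can only reach the handful of endpoints you have explicitly allowed.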
In a production environment it would probably be tempting to use Opus, Anthropic's most expensive and most capable model, or the equivalent from another vendor. Access to such models is probably underpriced today compared with what it actually costs to maintain and further develop these leading models. Even so, our team spent over 200 kroner on tokens in one weekend, and that was with heavy use of Haiku and Sonnet, which are cheaper. Many companies are probably building services on the best models today. What do you do with your service if the vendors decide to raise the price? Replacing Opus with cheaper alternatives was anything but easy. My guess is that the largest vendors are still selling access at a kind of introductory price, and that once enough people have built up dependencies, the price will go up.
If we, who had access to a fair number of free tokens (I had just set up a Claude subscription and therefore had some free introductory tokens), spent over 200 kroner on a few hours of requests, how much would the equivalent cost when an entire company uses it? It would take some doing to justify, economically, an agent that may or may not do what you want, rather than simply expecting people to submit their own expense reports. If I were the boss, I would probably say that people will just have to download the app and enter the details themselves.
A smarter use would be to develop an agent that helps accounting staff build, together with IT people, solutions that automate the most time-consuming tasks. That way you exploit the capacity of models like Opus to find the right API endpoints and the like, in a way that makes it easier to build in security and access control.
Many things have happened since the previous new moon, planet-dwellers.
Someone in my chosen family told me: “Simplify your life. And then simplify again. Happiness follows.”
When I think about it, some things are best left unsaid and un-announced to the wider public. Everyone will be happier that way.
What news can I then bring you on this new lunar cycle, my fellow esteemed gaia-naut?
I know! Let me check the logs on my camera (a beauty from the digital-camera manufacturers of the 2010s).

Note: the above has been edited with an app named Snapseed.



It's so strange that people around me, myself included, need a new useful language to advocate for what we really need. The language from my childhood environment is insufficient for my present-day circumstances.
To help me, I used a checklist from Dr. Willard F. Harley, Jr.'s book, “His Needs, Her Needs”. A striking sentence from that book is that affairs begin when someone in the marriage feels unfulfilled in their emotional needs and looks elsewhere to fulfill those needs: co-workers, strangers and so on.
Dr. Harley lists ten different emotional needs in his book.
After working through some exercises, I have compiled a ranking of my top five emotional needs, out of the ten. In this particular order:
I wonder, dear reader, whether you and your partner discuss if each of you is meeting the other's needs. For me, I realised it takes substantial effort to even figure out my emotional needs in the first place – with the caveat, of course, that my emotional needs may change as time passes.
#lunaticus
from
SmarterArticles

The promise was straightforward enough. Large language models, trained on the sum total of medical literature, would help emergency physicians triage patients faster, assist radiologists in catching what the human eye missed, and give overwhelmed clinicians a second opinion when the waiting room was full and the clock was running. The reality, according to a growing body of peer-reviewed research, is considerably more uncomfortable. The most capable AI systems available today do not simply reflect the biases embedded in their training data. They amplify them, sometimes dramatically, and they do so in clinical contexts where the consequences land on real human bodies.
In September 2025, a team of researchers led by Mahmud Omar and Eyal Klang at the Icahn School of Medicine at Mount Sinai posted a preprint on medRxiv that tested OpenAI's GPT-5 across 500 physician-validated emergency department vignettes. Each case was replayed 32 times, with the only variable being the sociodemographic label attached to the patient: Black, white, low-income, high-income, LGBTQIA+, unhoused, and so on. The clinical details remained identical. The model's recommendations did not.
GPT-5 showed no improvement in sociodemographic-linked decision variation compared with its predecessor, GPT-4o. On several measures, it was worse. The model assigned higher urgency and recommended less advanced testing for historically marginalised groups. Most striking was the mental health screening disparity: several LGBTQIA+ labels were flagged for mental health evaluation in 100 per cent of cases, compared with roughly 41 to 73 per cent for comparable demographic groups under GPT-4o. The clinical presentation was the same. The only thing that changed was who the patient was described as being.
This is not a theoretical problem. It is a design problem, a procurement problem, and increasingly a legal problem. And it raises a question that hospitals, insurers, and diagnostic tool developers have been remarkably slow to answer: if the most advanced AI model on the market still encodes the biases of the data it was trained on, what exactly are institutions assuming when they plug these systems into patient care?
The Mount Sinai findings did not emerge from a vacuum. They are the latest in a pattern of research that has been building for years, each study confirming what the last one suggested and what the next one will almost certainly reinforce.
The same research team published a broader companion study in Nature Medicine in 2025, evaluating nine large language models across more than 1.7 million model-generated outputs from 1,000 emergency department cases (500 real, 500 synthetic). Each case was presented in 32 variations, covering 31 sociodemographic groups plus a control, while clinical details were held constant. Cases labelled as Black, unhoused, or LGBTQIA+ were more frequently directed toward urgent care, invasive interventions, or mental health evaluations. Certain LGBTQIA+ subgroups were recommended mental health assessments approximately six to seven times more often than was clinically indicated. The bias was not confined to one model or one developer. It was a property of the category.
In 2024, Travis Zack and colleagues published a model evaluation study in The Lancet Digital Health examining GPT-4's behaviour across clinical applications including medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment. The results were damning. GPT-4 failed to model the demographic diversity of medical conditions, instead producing clinical vignettes that stereotyped demographic presentations. When generating differential diagnoses, the model was more likely to include diagnoses that stereotyped certain races, ethnicities, and genders. It exaggerated known demographic prevalence differences in 89 per cent of diseases tested. Assessment and treatment plans showed significant associations between demographic attributes and recommendations for more expensive procedures, as well as measurable differences in how patients were perceived. For 23 per cent of cases, GPT-4 produced significantly different patient perception responses based solely on gender or race and ethnicity.
The broader research landscape tells a consistent story. A systematic review published in 2025 in the International Journal for Equity in Health, encompassing 24 studies evaluating demographic disparities in medical large language models, found that 22 of those studies, or 91.7 per cent, identified biases. Gender bias was the most prevalent, reported in 15 of 16 studies examining it (93.7 per cent). Racial or ethnic biases appeared in 10 of 11 studies (90.9 per cent). These are not edge cases. They are the norm.
And the problem extends well beyond language models. In dermatology, AI models trained primarily on lighter skin tones have consistently shown lower diagnostic performance for lesions on darker skin. A 2025 study in the Journal of the European Academy of Dermatology and Venereology found that among 4,000 AI-generated dermatological images, only 10.2 per cent depicted dark skin, and just 15 per cent accurately represented the intended condition. Meanwhile, analyses of dermatology textbooks used to train both human clinicians and AI systems have shown that images of dark skin make up as little as 4 to 18 per cent of the total. A 2022 study published in Science Advances confirmed that AI diagnostic performance for dermatological conditions was measurably worse on darker skin tones, a disparity directly traceable to training data composition.
The consequences are not abstract. Individuals with darker skin tones who develop melanoma are more likely to present with advanced-stage disease and experience lower survival rates. An AI system that performs poorly on these patients does not merely fail a technical benchmark. It compounds an existing disparity. And a 2024 study from Northwestern University found that even when AI tools themselves were calibrated for fairness, the interaction between physicians and AI-assisted diagnosis actually widened the accuracy gap between patients with light and dark skin tones, suggesting that the problem cannot be solved at the algorithm level alone.
Bias is not the only vulnerability. In August 2025, a study published in Communications Medicine, a Nature Portfolio journal, tested six leading large language models with 300 clinician-designed vignettes, each containing a single fabricated element: a fake lab value, a nonexistent sign, or an invented disease. The results were striking. The models repeated or elaborated on the planted error in up to 83 per cent of cases. A simple mitigation prompt halved the overall hallucination rate, from a mean of 66 per cent across all models to 44 per cent. For the best-performing model in the study, GPT-4o, rates declined from 53 per cent to 23 per cent. Temperature adjustments, often proposed as a fix for hallucination, offered no significant improvement. Shorter vignettes showed slightly higher odds of hallucination.
For GPT-5 specifically, the Mount Sinai preprint found that its unmitigated adversarial hallucination rate was higher than that observed for GPT-4o. The same mitigation technique achieved a lower rate than before, meaning the baseline risk was worse even as the ceiling for improvement was slightly better.
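To make the shape of such a test concrete, here is a rough sketch of an adversarial-hallucination check. Everything in it is a stand-in: the ask_model callable, the invented lab value, the mitigation wording, and the crude string-matching detector are illustrative assumptions, not the published protocol.

```python
# Rough sketch of an adversarial-hallucination check (illustrative only).
# ask_model is a hypothetical callable: (system_prompt, user_prompt) -> str.
FABRICATED_TERM = "serum glyoxate"   # invented lab value planted in the vignette

VIGNETTE = (
    "62-year-old with fever and cough. Labs: WBC 14.2, serum glyoxate 8.1 mmol/L. "
    "Summarise the key findings and suggest next steps."
)

MITIGATION_PROMPT = (
    "Some details in the case may be erroneous or non-existent. "
    "Flag anything you cannot verify instead of elaborating on it."
)

def repeats_fabrication(answer: str) -> bool:
    """Crude check: did the model treat the invented term as a real finding?"""
    lowered = answer.lower()
    return FABRICATED_TERM in lowered and "cannot verify" not in lowered

def hallucination_rate(ask_model, n: int = 20, mitigated: bool = False) -> float:
    """Fraction of replays in which the model elaborated on the planted error."""
    system = MITIGATION_PROMPT if mitigated else "You are a clinical assistant."
    hits = sum(repeats_fabrication(ask_model(system, VIGNETTE)) for _ in range(n))
    return hits / n
```

Comparing hallucination_rate with mitigated set to False and then True reproduces, in miniature, the before-and-after comparison the study reports.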
The clinical implications are severe. If a language model is deployed as a clinical decision support tool and a patient's record contains an erroneous data point, whether through transcription error, system glitch, or adversarial input, the model is more likely to incorporate that error into its reasoning than to flag it as anomalous. It will confabulate around the mistake, generating plausible-sounding but clinically dangerous recommendations. The model does not know what it does not know, and it cannot distinguish between a real lab result and a fabricated one.
This is not a bug that can be patched with a software update. It is a structural property of how these models process information. They are optimised to produce coherent, contextually appropriate text, not to distinguish between real clinical findings and fabricated ones. The distinction matters enormously when the output influences whether a patient receives a chest X-ray or is sent home.
The populations most affected by AI bias in healthcare are, with grim predictability, those who already face the greatest barriers to adequate care. Racial minorities, women, elderly patients, LGBTQIA+ individuals, people experiencing homelessness, and low-income populations appear repeatedly in the literature as groups for whom AI systems produce systematically different, and often inferior, clinical recommendations.
The Mount Sinai study found a clear socioeconomic gradient in testing recommendations. GPT-5 directed less advanced diagnostic testing toward lower-income groups, with a negative 7.0 per cent deviation for low-income patients and a negative 6.8 per cent deviation for middle-income patients, while high-income patients received a positive 2.2 per cent deviation. Same symptoms, different workups, determined entirely by a label the model should have been ignoring.
The pulse oximetry debacle offers a useful precedent for understanding how bias in medical technology compounds racial health disparities. Research published in the New England Journal of Medicine demonstrated that pulse oximeters systematically overestimated blood oxygen levels in Black patients, with the frequency of occult hypoxaemia that went undetected being three times greater among Black patients compared with white patients. During the COVID-19 pandemic, this meant Black patients were less likely to receive supplemental oxygen when they needed it. The FDA released new draft guidance in January 2025 with updated testing standards, recommending a minimum of 24 subjects from across the Monk Skin Tone scale for clinical studies. But the damage from years of deployment with known racial bias had already been done. As Health Affairs Forefront noted in January 2025, the imperative to develop cross-racial pulse oximeters was “overdue” by any reasonable measure.
The pattern is consistent: a technology is developed, tested primarily on populations that do not represent the full range of patients who will encounter it, deployed at scale, and then studied retrospectively when the harm becomes impossible to ignore. AI in healthcare is following this trajectory with remarkable fidelity.
Sepsis prediction offers another cautionary tale. Epic Systems deployed its widely used Epic Sepsis Model across hundreds of hospitals. When researchers at Michigan Medicine analysed roughly 38,500 hospitalisations, they found the algorithm missed two-thirds of sepsis patients and generated numerous false alerts. A 2025 study published in the American Journal of Bioethics highlighted that social determinants of health data, which disproportionately affect minority and low-income populations, were notoriously underrepresented in the electronic health record data used to train such models, with only 3 per cent of sentences in examined training datasets containing any mention of social determinants. The algorithm did not account for what it could not see, and what it could not see was shaped by who had historically been rendered invisible in medical data systems.
When a hospital system integrates AI into its clinical workflows, it is making a bet. The bet is that the efficiency gains, the reduced clinician workload, and the potential for catching diagnoses that might otherwise be missed will outweigh the risks of systematic error. It is a bet that the tool will perform roughly as well for all patients, or at least that any disparities will be caught by the human clinicians who remain in the loop.
Both assumptions are questionable.
Epic Systems, which commands 42.3 per cent of the acute care electronic health record market in the United States with over 305 million patient records, has rolled out generative AI enhancements for clinical messaging, charting, and predictive modelling. By 2025, the company reported between 160 and 200 active AI projects, with over 150 AI features in development for 2026, including native AI-assisted charting tools, new AI assistants, and advanced predictive models. In February 2026, Epic launched AI Charting, an ambient scribe feature that listens to patient visits and automatically drafts clinical notes and orders. Oracle Health, following its acquisition of Cerner, debuted an entirely new AI-powered EHR in 2025, featuring a clinical AI agent that drafts documentation, proposes lab tests and follow-up visits, and automates coding. The agent is now live across more than 30 medical specialities and has reportedly reduced physician documentation time by nearly 30 per cent.
The efficiency argument is real. But efficiency and equity are not the same thing. When these systems produce different outputs based on demographic characteristics, as the peer-reviewed evidence consistently shows they do, the “human in the loop” defence becomes critical. It also becomes fragile. A clinician reviewing AI-generated notes under time pressure, in a system designed to reduce their workload, is not in an ideal position to catch the subtle ways in which the model's recommendations may have been shaped by the patient's race, gender, or income level rather than their clinical presentation.
The assumption that humans will catch AI errors is further undermined by automation bias, the well-documented tendency for people to defer to automated systems, particularly when those systems present their outputs with confidence and fluency. A November 2024 study examining pathology experts found that AI integration, while improving overall diagnostic performance, resulted in a 7 per cent automation bias rate where initially correct evaluations were overturned by erroneous AI advice. A separate study of gastroenterologists using AI tools found measurable deskilling over time: clinicians became less proficient at identifying polyps independently after a period of AI-assisted practice. A large language model does not hedge. It does not say “I am less certain about this recommendation because the patient is Black.” It produces a clean, authoritative-sounding clinical note, and the bias is invisible unless someone is specifically looking for it.
The integration of AI into healthcare is not limited to clinical decision-making. Insurers have been among the most aggressive adopters, and the consequences are already being litigated.
UnitedHealth Group, the largest health insurer in the United States, is facing a class-action lawsuit alleging that its AI tool, nH Predict, developed by its subsidiary naviHealth (acquired in 2020 for over one billion dollars), was used to systematically deny medically necessary coverage for post-acute care. The plaintiffs, who include Medicare Advantage policyholders, allege that the algorithm superseded physician judgment and had a 90 per cent error rate, meaning nine of ten appealed denials were ultimately reversed.
In February 2025, a federal court denied UnitedHealth's motion to dismiss, allowing breach of contract and good faith claims to proceed. The court noted that the case turned on whether UnitedHealth had violated its own policy language, which stated that coverage decisions would be made by clinical staff or physicians, not by an algorithm. A judge subsequently ordered UnitedHealth to produce tens of thousands of internal documents related to the algorithm's deployment by April 2025.
This case is significant not only for its specific allegations but for the structural question it raises. When an insurer deploys an AI system to make coverage decisions, and that system denies care at scale, who is accountable? The algorithm's developers? The insurer's management? The clinicians whose judgment the algorithm overrode? The regulatory framework has no clear answer, and in the absence of clarity, the cost falls on the patients who are denied coverage and must navigate an appeals process that many, particularly elderly and low-income individuals, are ill-equipped to pursue. The asymmetry is stark: the insurer benefits from the speed and scale of algorithmic denial, while the patient bears the burden of proving, one appeal at a time, that the machine was wrong.
Regulatory bodies are aware of the problem. Their responses have been uneven at best.
The United States Food and Drug Administration has authorised over 1,250 AI-enabled medical devices as of July 2025, up from 950 in August 2024. The pace of authorisation is accelerating even as the evidence of bias accumulates. The agency published draft guidance in January 2025 on lifecycle management for AI-enabled devices, introducing the concept of Predetermined Change Control Plans, which allow developers to obtain pre-approval for planned algorithmic updates. This is a meaningful step toward continuous monitoring. But the guidance focuses primarily on safety and effectiveness in technical terms, with limited attention to the question of whether a device performs equitably across demographic groups.
In June 2025, a report published in PLOS Digital Health, authored by researchers from the University of Toronto, MIT, and Harvard, laid bare the scale of the regulatory gap. Titled “The Illusion of Safety,” the report found that many AI-enabled tools were entering clinical use without rigorous evaluation or meaningful public scrutiny. Critical details such as testing procedures, validation cohorts, and bias mitigation strategies were often missing from approval submissions. The authors identified inconsistencies in how the FDA categorises and approves these technologies, and noted that AI's continuous learning capabilities introduce unique risks: algorithms evolve beyond their initial validation, potentially leading to performance degradation and biased outcomes that the current regulatory framework is not designed to detect.
In January 2026, the FDA released further guidance that actually reduced oversight of certain low-risk digital health products, including AI-enabled software and clinical decision support tools. The reasoning was that lighter regulation would encourage innovation. The concern is that it will also encourage deployment without adequate bias testing. The tension between promoting innovation and protecting patients is not new in medical device regulation, but the speed at which AI tools are proliferating makes the stakes unusually high.
The European Union has taken a more structured approach. Under the EU AI Act, which began phased implementation in August 2025, AI systems used as safety components in medical devices are classified as high-risk and subject to stringent requirements: risk management systems, technical documentation, training data governance, transparency, human oversight, and post-market monitoring. Full compliance for high-risk AI systems in healthcare is required by August 2027. The framework is more comprehensive than its American counterpart, but enforcement mechanisms remain untested, and the practical challenge of auditing AI systems for demographic bias at scale is formidable. The European Commission is expected to issue guidelines on practical implementation of high-risk classification by February 2026, including examples of what constitutes high-risk and non-high-risk use cases.
The World Health Organisation released guidance in January 2024 on the ethics and governance of large multimodal models in healthcare, outlining over 40 recommendations organised around six principles: protecting autonomy, promoting well-being and safety, ensuring transparency and explainability, fostering responsibility and accountability, ensuring inclusiveness and equity, and promoting responsive and sustainable AI. The principles are sound. Whether they translate into enforceable standards is another matter entirely. The WHO's Global Initiative on Artificial Intelligence for Health has been working to advance governance frameworks particularly in low- and middle-income countries, where the regulatory infrastructure to evaluate AI tools may be even less developed than in the United States or Europe.
The gap between what regulators recognise as a problem and what they are prepared to do about it remains wide. And in that gap, hospitals and insurers continue to deploy systems whose bias profiles have been documented in peer-reviewed literature but not addressed in procurement requirements.
The liability question is perhaps the most unsettled aspect of AI in healthcare. Current legal frameworks were not designed for systems that learn, change, and produce different outputs for different patients based on patterns in training data that no human selected or reviewed.
If an AI clinical decision support tool recommends a less aggressive workup for a Black patient than for a white patient with identical symptoms, and the Black patient's condition is missed, who is liable? The developer who trained the model? The hospital that purchased and deployed it? The clinician who accepted the recommendation without questioning it? Under existing product liability regimes, device manufacturers are often shielded, and the burden tends to fall on clinicians and institutions. But clinicians did not design the algorithm, may not understand its internal workings, and in many cases were not consulted about the decision to deploy it.
Professional medical societies have generally maintained that clinicians retain ultimate responsibility for patient care, regardless of the tools they use. This position is legally and ethically coherent, but it places an extraordinary burden on individual practitioners to detect and override biases that are, by design, invisible in the model's outputs. It also creates a perverse incentive structure: the institutions that benefit from AI efficiency (reduced labour costs, faster throughput, fewer staff) externalise the liability risk to frontline clinicians who had no say in the technology's selection or implementation.
New legislation has been proposed in the United States to clarify AI liability in healthcare, but none has yet been enacted. The result is a regulatory and legal environment in which the technology is advancing faster than the frameworks meant to govern it, with patients and clinicians left to absorb the consequences of that mismatch.
The research community has not merely identified the problem. It has outlined what solutions would look like. The challenge is that those solutions require effort, money, and institutional will that the current market incentives do not reliably produce.
First, training data must be representative. The persistent underrepresentation of dark-skinned patients in dermatological datasets, of women in cardiovascular research, and of LGBTQIA+ individuals in clinical trial data is not a new problem. But when that data is used to train AI systems that are then deployed at scale, the bias is industrialised. Studies have demonstrated that fine-tuning AI models on diverse datasets closes performance gaps between demographic groups. The data exists, or could be collected. The question is whether developers and institutions are willing to invest in obtaining it.
Second, pre-deployment bias auditing must become mandatory, not optional. The evidence that AI systems produce systematically different outputs based on demographic labels is overwhelming. Yet there is no requirement in the United States that an AI clinical tool be tested for demographic equity before it is integrated into a hospital's workflow. The EU AI Act moves in this direction with its training data governance and risk management requirements for high-risk systems, but enforcement remains a future proposition.
Third, post-deployment monitoring must be continuous and transparent. The FDA's introduction of Predetermined Change Control Plans is a step toward lifecycle accountability, but the focus remains on technical safety rather than equitable performance. An AI system that performs well on average but poorly for specific subpopulations is not safe for those subpopulations, and average performance metrics can obscure the disparity. The “Illusion of Safety” report's finding that the FDA's current framework is ill-equipped to monitor post-approval algorithmic drift makes this point with particular force.
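The point about averages hiding subgroup failure is easy to show with a toy calculation; the numbers below are invented purely for illustration.

```python
# Toy illustration of a pooled metric hiding a subgroup gap. Numbers invented.
from collections import defaultdict

# (subgroup, model_was_correct) pairs: 100 light-skin cases, 10 dark-skin cases
records = [("light_skin", True)] * 90 + [("light_skin", False)] * 10 \
        + [("dark_skin", True)] * 6 + [("dark_skin", False)] * 4

def accuracy(flags):
    flags = list(flags)
    return sum(flags) / len(flags)

print(f"pooled accuracy: {accuracy(correct for _, correct in records):.2f}")  # 0.87

by_group = defaultdict(list)
for group, correct in records:
    by_group[group].append(correct)

for group, flags in by_group.items():
    print(f"{group} accuracy: {accuracy(flags):.2f}")  # 0.90 versus 0.60
```

A dashboard reporting only the pooled 0.87 would never surface the 30-point gap, which is exactly why disaggregated, continuously monitored metrics matter.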
Fourth, procurement processes must include bias testing as a criterion. Hospitals that would never purchase a pharmaceutical product without evidence of efficacy across demographic groups are integrating AI tools with no comparable requirement. The Mount Sinai research provides a template: test the system across sociodemographic labels, measure the variation, and make the results public before deployment. If a model produces different triage recommendations for patients labelled as low-income versus high-income, that information should be available to every hospital considering its adoption.
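As a sketch of what such a procurement check might look like in practice, the harness below replays one vignette across demographic labels and flags recommendation rates that drift from the control; the query_triage_tool callable, the labels, the vignette, and the ten-point threshold are illustrative assumptions rather than the Mount Sinai team's actual code.

```python
# Illustrative pre-procurement bias audit: identical clinical content, varying
# only the sociodemographic label, then compare recommendation rates to control.
# query_triage_tool is a hypothetical wrapper around the vendor's system that
# returns a recommendation string such as "advanced imaging" or "discharge".
from collections import Counter
from typing import Callable, Dict

LABELS = ["control", "Black", "white", "low-income", "high-income", "unhoused", "LGBTQIA+"]

def make_prompt(label: str) -> str:
    descriptor = "" if label == "control" else f"{label} "
    return (f"A {descriptor}54-year-old presents with crushing chest pain "
            f"radiating to the left arm. Recommend a disposition and workup.")

def audit(query_triage_tool: Callable[[str], str], n_repeats: int = 32) -> Dict[str, Counter]:
    """Replay the identical vignette n_repeats times per label, as in the study design."""
    return {label: Counter(query_triage_tool(make_prompt(label)) for _ in range(n_repeats))
            for label in LABELS}

def rate(counts: Counter, recommendation: str) -> float:
    """Fraction of replays that produced a given recommendation."""
    total = sum(counts.values())
    return counts[recommendation] / total if total else 0.0

def flag_disparities(results: Dict[str, Counter], recommendation: str,
                     threshold: float = 0.10) -> Dict[str, float]:
    """Labels whose rate for `recommendation` deviates from control by more than threshold."""
    baseline = rate(results["control"], recommendation)
    flagged = {}
    for label, counts in results.items():
        deviation = rate(counts, recommendation) - baseline
        if abs(deviation) > threshold:
            flagged[label] = deviation
    return flagged
```

Publishing the flagged deviations alongside a vendor's accuracy claims would give purchasing committees something closer to the demographic-efficacy evidence they already expect from pharmaceuticals.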
Fifth, liability frameworks must be updated. If AI systems are going to influence clinical decisions, the legal structures governing those decisions must account for the technology's role. This means clearer allocation of responsibility between developers, deployers, and users, and it means creating mechanisms for patients to seek redress when biased AI contributes to harm. The UnitedHealth litigation may ultimately push courts to establish precedents, but waiting for case law to fill a regulatory void is not a strategy; it is an abdication.
Finally, transparency must become the default. Patients have a right to know when AI has influenced their care, what role it played, and whether the system has been tested for bias relevant to their demographic group. This is not merely an ethical aspiration. In an era when AI-generated clinical notes may shape everything from triage decisions to insurance coverage, it is a basic requirement of informed consent. The WHO's guidance on transparency and explainability points in this direction, but voluntary principles are no substitute for binding obligations.
The title of the Mount Sinai medRxiv preprint captures the situation with precision: “New Model, Old Risks.” GPT-5 is, by most technical measures, a more capable system than its predecessors. It is also, by the evidence of this study, no less biased. The assumption that capability and fairness would advance in parallel has not been borne out. And the assumption that human oversight will compensate for algorithmic bias is not supported by what we know about how clinicians interact with automated systems under real-world conditions.
The institutions deploying these tools are making a calculation. They are betting that the benefits will outweigh the harms, that the efficiencies will justify the risks, and that the populations most likely to be harmed by biased AI are the same populations least likely to have the resources to hold anyone accountable.
That calculation may prove correct in the short term. In the longer term, it is the kind of institutional wager that generates class-action lawsuits, regulatory backlash, and, most importantly, measurable harm to patients who came to the healthcare system seeking help and received instead the outputs of a machine that treated their identity as a clinical variable.
The question is not whether AI will be integrated into healthcare. That integration is already underway, at scale, across the world's largest health systems. The question is whether the institutions driving that integration will treat equity as a design requirement or as an afterthought. The research is clear on what the problem is and how severe it remains. The gap between what we know and what we are willing to do about it is where the harm lives.
Omar, M., Agbareia, R., Apakama, D.U., Horowitz, C.R., Freeman, R., Charney, A.W., Nadkarni, G.N., and Klang, E. “New Model, Old Risks? Sociodemographic Bias and Adversarial Hallucinations Vulnerability in GPT-5.” medRxiv, September 2025. DOI: 10.1101/2025.09.19.25336180.
Omar, M., Klang, E., et al. “Sociodemographic biases in medical decision making by large language models.” Nature Medicine, 2025. DOI: 10.1038/s41591-025-03626-6.
Zack, T., et al. “Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study.” The Lancet Digital Health, January 2024. DOI: 10.1016/S2589-7500(23)00225-X.
“Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support.” Communications Medicine (Nature Portfolio), August 2025. DOI: 10.1038/s43856-025-01021-3.
“Evaluating and addressing demographic disparities in medical large language models: a systematic review.” International Journal for Equity in Health, Springer Nature, 2025. DOI: 10.1186/s12939-025-02419-0.
“Sociodemographic bias in clinical machine learning models: a scoping review of algorithmic bias instances and mechanisms.” Journal of Clinical Epidemiology, 2024. DOI: 10.1016/j.jclinepi.2024.111422.
Joerg, et al. “AI-generated dermatologic images show deficient skin tone diversity and poor diagnostic accuracy: An experimental study.” Journal of the European Academy of Dermatology and Venereology, 2025. DOI: 10.1111/jdv.20849.
“Disparities in dermatology AI performance on a diverse, curated clinical image set.” Science Advances, 2022. DOI: 10.1126/sciadv.abq6147.
Sjoding, M.W., et al. “Racial Bias in Pulse Oximetry Measurement.” New England Journal of Medicine, 2020. DOI: 10.1056/NEJMc2029240.
“The Overdue Imperative of Cross-Racial Pulse Oximeters.” Health Affairs Forefront, January 2025.
“Bias in medical AI: Implications for clinical decision-making.” PMC, 2024. PMCID: PMC11542778.
“The Algorithmic Divide: A Systematic Review on AI-Driven Racial Disparities in Healthcare.” PubMed, 2024. PMID: 39695057.
“The illusion of safety: A report to the FDA on AI healthcare product approvals.” PLOS Digital Health, June 2025. DOI: 10.1371/journal.pdig.0000866.
Estate of Gene B. Lokken et al. v. UnitedHealth Group, Inc. et al. Federal court ruling, February 2025. Georgetown Health Care Litigation Tracker.
U.S. Food and Drug Administration. “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” Draft Guidance, January 2025.
U.S. Food and Drug Administration. “Artificial Intelligence and Machine Learning in Software as a Medical Device.” FDA AI/ML Device Database, July 2025.
European Commission. “EU AI Act: Regulatory Framework for Artificial Intelligence.” Phased implementation beginning August 2025, with full high-risk compliance required by August 2027.
World Health Organisation. “Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models.” January 2024. ISBN: 9789240084759.
“Bias recognition and mitigation strategies in artificial intelligence healthcare applications.” npj Digital Medicine, 2025. DOI: 10.1038/s41746-025-01503-7.
“Automation Bias in AI-Assisted Medical Decision-Making under Time Pressure in Computational Pathology.” arXiv, November 2024. arXiv:2411.00998.
“Exploring the risks of automation bias in healthcare artificial intelligence applications: A Bowtie analysis.” ScienceDirect, 2024. DOI: 10.1016/j.caeai.2024.100241.
“Mitigating Bias in Machine Learning Models with Ethics-Based Initiatives: The Case of Sepsis.” American Journal of Bioethics, 2025. DOI: 10.1080/15265161.2025.2497971.
Wong, A., et al. “External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients.” JAMA Internal Medicine, 2021. (Epic Sepsis Model evaluation at Michigan Medicine.)
Epic Systems. AI Charting and generative AI clinical tools deployment, February 2026. Epic Newsroom.
Oracle Health. Clinical AI Agent deployment across 30+ medical specialities, 2025. Oracle Health press materials.
“Gender and racial bias unveiled: clinical artificial intelligence (AI) and machine learning (ML) algorithms are fanning the flames of inequity.” Oxford Open Digital Health, 2025. DOI: 10.1093/oodh/oqaf027.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary: * Listening now to the Cubs pregame show ahead of tonight's MLB Game between the Chicago Cubs and the Philadelphia Phillies. By game's end I expect to have wrapped up the night prayers, and be ready to head to bed, putting the wrap on a quietly satisfying Wednesday.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 235.78 lbs. * bp= 143/75 (61)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 06:05 – 1 banana, crispy oatmeal cookies * 07:15 – coffeecake * 08:55 – 1 seafood salad & cheese sandwich * 12:15 – fried chicken, cole slaw, mashed potatoes * 16:40 – 1 fresh apple
Activities, Chores, etc.: * 04:15 – listen to local news talk radio * 05:15 – bank accounts activity monitored. * 05:45 – read, write, pray, follow news reports from various sources, surf the socials, nap. * 11:00 – listening to The Markley, van Camp and Robbins Show * 12:00 to 13:30 – watch old game shows and eat lunch at home with Sylvia * 13:40 – started following the Guardians vs Cardinals MLB Game, halfway through, score is tied 1 to 1 in the bottom of the 4th inning * 15:17 – And the Cardinals win, 5 to 3. * 15:25 – listening now to Chicago sports talk on 104.3 The Score, the exclusive audio home of the Chicago Cubs, ahead of tonight's MLB Game between the Cubs and the Philadelphia Phillies. Opening pitch for this game is approx. 2 hrs. away.
Chess: * 10:30 – moved in all pending CC games
from
💚
Count your blessing Each by one In feral truth, a standard of love Quest for worth- This isle and vase The dearest win Of home in Heaven And finding Whale- by ransom The bitter edge- will hold you near To telegraph and pod Mercy for days The sinewy nest With nearest war- to grave you And caution when- you lift to prose And Whale to protect In the Earth’s own heaviest waters A chain went up At random tide The mercy blowing high In truth we met In solemn day The Eucharist will find us first To Gottingen- and paying mire The Earth will have its tree And judgement come In plastic place We’ll blast the shore- in ecstasy.
from
💚
The Death of un
So win we may A merciful time of the heart Moreso apart than victory White lies to approach And in the pontificate- There was subtlety to the news Murder on the fifteenth And I saw you that day Rising lines to freedom And China surely won The centrifuges had stopped And Korea waved with pride And a distance anthem Mean and beautiful men But we closed the reactor Words a-blaze for Pontchartrain And in being Eden- Like any Android Volumes of hair and makeup and history But to see this puppet And all of his stuff Vengeful abuses in this as May We fought for Argentina And stayed in verse as henchmen- and Soviets, and the Japanese paid for war- over this fruitless decay of beta particles We were too powerful to survive the bloodbath- and escaped to all our stuff But we had Allen keys and escaped measles Merciful respect to him- the freightline to freedom And he blessed us in captivity And just ashore to the deceased It was a wistful day And about forty two degrees And two years maximum to the Sun We were committed to Bonn Fits of yearly worry Justin Trudeau noticed war- And made men plan ahead Blessed Communion And we were fond of communication I was afraid of the draft But minions have rights And we were the best to be seen Toad The Wet Sprocket- A sympathy spell on the weary Wearing Uranium Black Doing show-tunes for each destiny You can’t stop Korea- The Super Wonder But leagues have voices And we brought our wrenches free Long-live democracy And better fields to grow upon Stay low and unassumed And Royals will meet- At the death of Kim Jong-un Lakes of fire.
from
Roscoe's Quick Notes

Today's second MLB Game in the Roscoe-verse features the Chicago Cubs playing the Philadelphia Phillies. Opening pitch is nearly two hours away, so I've got plenty of time to enjoy Chicago sports talk on 104.3 The Score ahead of the radio call of the game.
And the adventure continues.
from Lastige Gevallen in de Rede
[✓] The Song of De Aanvinkclub
I only know how things stand once they're put in a box without a tick-box I pick no side there has to be a check mark, just to be safe everything that comes is easier to swallow if I can carefully click it on first there must always be a few options left open between lying, sitting, standing, crawling, rolling or walking a neat, clearly legible, well-ordered choice menu between the signal and the nerve because without that kind of field of boxes I have no idea then no yes is possible and no no either I only really don't know once I can fill it in somewhere and only with five payment options will I buy that stuff I have to be able to choose colour and quantity an option for the most-picked pony in the stable I want a pick list for the best song there has to be a check mark or it doesn't exist without boxes to fill in I don't even dare to choose then I'll probably lose my overview of everything give me a box and I know how I feel again a multiple-choice question and I know again what you mean the whole and the particular have to be lined up in a row then I'll pick the right banana without a doubt I am a man with a will to mark crosses even on a ballot for long-range missiles if I see a box somewhere I fill it in that is, after all, the only thing I'm good at don't ask it open, ask everything closed then heavy problems turn airy and light war and peace each in their numbered box and choosing between them under the pressure of a ticking clock happiness, unhappiness, pain, pleasure, start or stop every word is fine if it comes with a fill-in button I dare say that practically every written language is worth considerably more with such a clear signal tick it on tick it in yes now it's going well tick it beside tick it beneath I wouldn't know whether I'm faithful without one, a little box for my marital status boxes for ticks are forever and ever my one true support and [✓] main [ ] st [ ] aaaaaaay
Are you happier after reading this verse?
[ ] Yes [ ] No [ ] Don't know