from 下川友

For about ten years now I've had trouble with my lower back, to the point where desk work had become nearly impossible. It's not so much pain as a kind of queasiness. A discomfort radiating from the lower back, with a constant sensation close to nausea.

Even when I describe this vague sick feeling to doctors, they don't really take it seriously. Something you can only express in sensory terms is hard to get understood unless it has been put into clinical language.

I see it in my bosses at work too: some people are oddly harsh about what they take to be a lack of effort, or about handing over words without first normalizing them into "proper" language. They unconsciously push the value that you should make the effort yourself. And they don't even seem to notice they're pushing it.

That makes the world a slightly hard place to live in. Too many of the well-off refuse to take sensory impressions as they are. In the end, you have no choice but to live by the rules those people made. Some of them even tell you to quit whining.

Oh well.

In any case, my lower back was a constant misery. As I kept rotating and loosening it, at some point the discomfort in my back suddenly vanished. But then the same queasy feeling showed up in my buttocks and thighs. Again, not pain, but discomfort.

Especially on the left side. When I work on my left thigh, a clogged-up feeling appears in my left armpit. When I stretch my left arm out to the side, it catches somewhere. It's simply unpleasant, and I can't pinpoint the source.

Repeating all this, every so often I make a movement I don't normally make and happen to hit the source. When that happens, I loosen that spot intensively.

Yesterday I found a hard knot under my buttocks and dealt with it. But the clogged feeling in my armpit and the oddness around my neck are still there.

I also have no idea how to find a good massage therapist. For me, this world still isn't kind at all.

 
Read more…

from ThruxBets

3.45 Ripon: Yorkshire’s Garden Racecourse kicks off its 2026 season today, and in the 3.45 Tim Easterby has won the race twice since 2019. His MISTER SOX seems to have a really solid each-way chance here, ticking plenty of boxes: 7/2/4p at the course, goes well fresh, ground and trip ideal, 4/2/3p in April, and 16/6/10p on an undulating course like Ripon. From what I can make out there should be plenty of pace for him to aim at, and he should find this easier than recent assignments. The only real negative is his mark, which ideally could do with being a couple of pounds lower, but he was a half-length third off the same 79 he goes off today on his last run at the track, in a Class 2. Should be really competitive here.

MISTER SOX // 0.5pt E/W @ 17/2 5 places (Bet365) BOG

I also looked at the last race at Ripon, and I couldn’t split the Harriet Bethell-trained pair of Milteye and On The River here, as both have good chances. I’d also have given the old boy Garden Oasis an each-way chance if it hadn’t been for the recent rain, but that has put me off. So just a watching brief in the race for me.

 
Read more...

from Attronarch's Athenaeum

For years I've been seeing mentions of Margaret St. Clair's Sign of the Labrys and The Shadow People. Both appear in the “Appendix N: Inspirational and Educational Reading” of the Dungeon Master's Guide, and both are relatively obscure. I was always attracted to their covers, but was unable to just walk to the local library and borrow them.

Something got into me yesterday, and I decided to hunt both down in ebook form. I am quite confident there was nothing special about the print versions, besides the beautiful covers that is, since they were plain, small-sized paperbacks.

A few hours later, I had procured the novels Sign of the Labrys (1963), The Dolphins of Altair (1967), The Shadow People (1969), and The Dancers of Noyo (1973). According to St. Clair's Wikipedia page, the last three form a loose trilogy of sorts. Their ebook covers are quite underwhelming, so I downloaded the originals from the web instead.

I opened Sign of the Labrys, “just to check it out,” read the first few paragraphs, and realised I couldn't put it down. I finished it in a couple of hours.

Mild spoilers ahead.

I greatly enjoyed the “implicit” writing style, atmosphere, and post-apocalyptic setting. Things are casually introduced without too much—or any—explanation, leaving it up to the reader to fill in the blanks.

The whole thing reads like an extended dungeon delve, with the main character sometimes alone and sometimes allying with one or more individuals. Exploration is very focused on corridors, doors, chambers, and implied threat.

D&D tropes I noticed:

  • Character(s) travel down and up the tiered levels of a large subterranean complex.
  • It is explicit that deeper levels hold more resources than the upper levels but are also more dangerous.
  • Each level has “guardians” of various sorts.
  • Exploration is described by providing lengths of corridors, doors, and sizes of areas; almost reading like an example of play, and eerily similar to how I write in the session reports.
  • Secret doors and passageways that shortcut the dungeon levels or lead to secret areas with treasure.
  • Thematic dungeon levels: a workers' level, laboratory level, pleasure level, engine level, etc.
  • Factions: each level has at least one dominant faction, plus several smaller factions.
  • Spellcasting. Mostly illusory magic.
  • The main character levels up as he travels deeper. He then also has to spend time training to unlock new abilities.
  • There is a lot of resting.

Perhaps I read it too quickly, but I do not remember any single character that fits the description of the hairy monster featured on the cover.

The novel didn't feel dated at all. In fact, a plague that makes people's lungs fill with liquid, resulting in them choking to death, sounded very contemporary.

All in all, Sign of the Labrys was quite an enjoyable read. It was fascinating witnessing what might have contributed to Gary's view on dungeons and dungeon delving. I am very much looking forward to reading The Shadow People too.

#Reading #AppendixN #Fantasy #ScienceFiction

 
Read more...

from An Open Letter

I didn’t go to the gym today, and so I spent four hours making a massive, almost-six-foot-tall cardboard elephant as a decoration for my living room until I get furniture, and also so that I can make this stupid fucking joke about the elephant in the room. The two friends I showed it to lost their shit and thought it was funny as fuck. And honestly I’m kind of just happy that I get to make things that are silly and stupid. I also cooked today, and it was a very simple meal, but it tasted delicious. It was also very cheap, and I’m happy that I took the time to do it. A made fun of me and was pretty rude because the dish was not up to her standards, and I did voice how it was out of place for her to say the stuff that she did. She didn’t respond super great, but whatever, I don’t need her to respond in any particular way.

I think cooking has started to become a little bit of an insecurity for me, because I’ve now had a couple of experiences with female friends who grew up cooking making fun of me for my inexperience. And it feels really unfair, because growing up I never even got the chance to cook or do anything like that, because I was forced to do academics 24/7. A mentioned how she would cook with her family and that it was a big bonding time for her, and I’m really happy for her; I just think it makes it exceptionally shitty to have it rubbed in my face that I didn’t have anyone to teach me this stuff. So I understand that I’m really inexperienced and not super aware of a lot of things that might be common knowledge to someone else. And I understand that it might seem to someone else that I’m completely clueless and naïve, but it’s really hard to try to learn these things on your own, without help. It’s one of those things where you don’t even know where to start, and you don’t even know what you don’t know. I ruined so many nonstick pans because I was cleaning them wrong, and that’s something that might seem super obvious in hindsight, but how the fuck am I supposed to know that a pan is not supposed to be scrubbed? And I feel really defensive about stuff like this, because I’ve encountered a lot of people who just cannot put themselves in the shoes of remembering what it was like to not know something. This is something I’ve noticed a lot as a double standard. For the things that I grew up knowing, because that’s all I had as a child, I’ve been very conscious of the fact that not everyone had the same experience as I did, so it’s never someone’s fault for not knowing something they should have been taught. There’s no point in shaming them, and it’s not fair to do that either, I find. And I think everyone agrees with that philosophy until it comes to something they don’t consider it applicable to.

 
Read more...

from gry-skriver

In March I took part in a competition where the goal was to use artificial intelligence to solve tasks: NM i AI, the Norwegian championship in AI. A friend and I formed a team, and our goal was to learn. The result matched that ambition: we ended up around the middle of the rankings. Nothing to write home about, but now that about a month has passed, I still think I learned some useful things.

The tasks

The competition consisted of three tasks. The first, from NorgesGruppen Data, was to build a model that could recognise and classify products in shelf photos from stores. The second, from Tripletex, was to build an agent that could handle accounting tasks. The third was a fun task from Astar Consulting (I think they handled most of the organising). It was about predicting how a world, described as a pixelated map with values indicating built-up areas or not, and so on, would evolve. Here are some of my thoughts on the task from Tripletex.

Never write a travel expense report by hand again?

The Tripletex task was surprisingly fun for something about accounting. I hate, for example, filing travel expense reports. With access to the Tripletex API, you can build an AI agent that files the expense report for you from just a short description of the trip and files containing the receipts. Each team was given a Tripletex sandbox to test its agent against, and it was surprisingly easy to build an agent that could handle most of it. The only catch was that I had to use Anthropic's best model, Opus, to make it work. Since I was being stingy (and deliberately trying to build the cheapest possible solutions), I hadn't treated myself to a more expensive tier without rate limits for Opus. Even though my agent could do the tasks given enough time, it performed poorly in the actual competition because we hit the timeout before everything was done.

Cheaper (and faster) models

I tried a mix of the Sonnet and Opus models, where Sonnet handled tasks in categories classified as "simple", while tasks of other types, or new tasks we hadn't seen before, went to Opus. This worked fairly well, but still timed out now and then. I then tried using Claude Code to monitor the agent's logs and suggest improved instructions, aiming to make the instructions so good that even Haiku (a faster but less capable model) could manage. The result was that my accounting agent's instructions quickly became overfitted to the competition tasks, and when I tested a wider variety of instructions against the team's sandbox, the agent failed badly. Haiku started hallucinating API endpoints and the like. We never managed to build an agent that both did well in the competition and held up under a wider variety of requests.
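As a rough illustration of that kind of routing, here is a minimal Python sketch; the model ids, the category labels, and the classify() heuristic are placeholders, not what we actually ran in the competition:

```python
# Minimal sketch of tiered model routing, using the Anthropic Python SDK.
# Model ids and category labels below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CHEAP_MODEL = "claude-3-5-haiku-latest"  # placeholder id for the fast model
STRONG_MODEL = "claude-opus-4-0"         # placeholder id for the strong model

SIMPLE_CATEGORIES = {"travel_expense", "invoice_lookup"}  # hypothetical labels

def classify(task: str) -> str:
    """Toy classifier: route by keyword. A real setup would use the
    competition's own task categories or a small model as classifier."""
    return "travel_expense" if "travel" in task.lower() else "unknown"

def run_task(task: str) -> str:
    # Tasks in known, simple categories go to the cheap model;
    # everything unfamiliar goes to the strongest model.
    model = CHEAP_MODEL if classify(task) in SIMPLE_CATEGORIES else STRONG_MODEL
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": task}],
    )
    return response.content[0].text
```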

Security is demanding

Another thing was that it was hard to build a genuinely useful agent without it also being open to persuasion into doing things like deleting all employees. You want the agent to have enough access to do everything you need it to do. Security in a system like this is not trivial. You probably can't build security solely into the instructions you give your agent; you need a layer in front of the agent that filters out what looks like malicious prompts, AND a layer between the agent and the actual execution of API requests that rules out harmful actions. Like deleting all the vouchers, or all the employees.
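A minimal sketch of those two layers might look like this; the regex pattern, the endpoint names, and the function names are illustrative assumptions, not the real Tripletex API:

```python
# Two guard layers: a prompt filter in front of the agent, and an allowlist
# between the agent and the accounting API. All patterns and endpoint names
# below are illustrative, not the real Tripletex API.
import re

SUSPICIOUS = re.compile(r"(delete|slett).*\b(all|alle)\b", re.IGNORECASE)

# Only these (method, endpoint-prefix) pairs may ever reach the real API.
ALLOWED_CALLS = {
    ("GET", "/employee"),
    ("POST", "/travel-expense"),  # hypothetical endpoint names
    ("GET", "/voucher"),
}

def prompt_gate(user_prompt: str) -> str:
    """Layer 1: reject requests that look destructive before the agent sees them."""
    if SUSPICIOUS.search(user_prompt):
        raise PermissionError("Request refused by prompt filter")
    return user_prompt

def api_gate(method: str, endpoint: str) -> None:
    """Layer 2: even if the agent is talked into it, destructive calls never run."""
    if method == "DELETE":
        raise PermissionError(f"DELETE blocked: {endpoint}")
    if not any(endpoint.startswith(p) for m, p in ALLOWED_CALLS if m == method):
        raise PermissionError(f"Call not on allowlist: {method} {endpoint}")
```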

Vendor dependency

In a production environment, the obvious choice would probably be Opus, the most expensive and best model from Anthropic, or the equivalent from another vendor. Access to such models is probably underpriced today compared with what it actually costs to maintain and further develop these leading models. Even so, our team spent upwards of 200 kroner on tokens in a single weekend, and that was with heavy use of Haiku and Sonnet, which are cheaper. Many companies are probably building services on the best models today. What do you do with your service if the vendor decides to raise the price? Swapping Opus for cheaper alternatives was anything but easy. My guess is that the biggest vendors are still selling access at a kind of introductory price, and that once enough customers have built up dependencies, the price will go up.

You'll probably have to write the expense report yourself

If we, who had access to a good chunk of free tokens (I had just set up a Claude subscription and therefore had some free introductory tokens), spent over 200 kroner on a few hours of requests, what would the equivalent cost be for a whole company? It would take some doing to justify, economically, an agent that may or may not do what you want, rather than simply expecting people to file their own expense reports. If I were the boss, I'd probably tell people they'll just have to download the app and type in the details themselves.

A smarter use

A smarter use would be to build an agent that helps accounting staff develop, together with IT people, solutions that automate the most time-consuming tasks. That way you exploit the capacity of models like Opus to find the right API endpoints and the like, in a way that makes it easier to build in security and access control.

 
Read more...

from Tony's Little Logbook

Many things have happened since the previous new moon, planet-dwellers.

Someone in my chosen family told me: “Simplify your life. And then simplify again. Happiness follows.”

When I think about it, some things are best left unsaid and un-announced to the wider public. Everyone will be happier that way.

What news can I then bring you on this new lunar cycle, my fellow esteemed gaia-naut?

I know! Let me check the logs on my camera (a beauty from the digital-camera manufacturers of the 2010s).

snapshots

Note: the above has been edited with an app named Snapseed.

from another camera

tried a journal prompt: what I have learnt that I need in love

It's so strange that people around me, myself included, need a new useful language to advocate for what we really need. The language from my childhood environment is insufficient for my present-day circumstances.

To help me, I used a checklist from Dr. Willard Harley, Jr.'s book, titled “His Needs, Her Needs”. A striking sentence from that book is: affairs begin when someone in the marriage feels unfulfilled in their emotional needs, and looks elsewhere to fulfill those needs: co-workers, strangers and so on.

Dr. Harley, Jr. lists out ten different emotional needs in his book.

After working through some exercises, I have compiled a ranking of my top five emotional needs, out of the ten. In this particular order:

  1. Gestures of affection
  2. Recreational companionship
  3. Words of admiration
  4. Physical attractiveness of my partner (lest anonymous critics accuse me of sexism, I believe my partner would similarly love for me to look well-groomed and well-dressed.)
  5. Intimate conversations – about fears, vulnerabilities and future hopes.

I wonder, dear reader, whether you and your partner discuss if each of you is meeting the other's needs? For me, I realised it takes substantial effort even to figure out my emotional needs in the first place – with the caveat, of course, that my emotional needs may change as time passes.

bookshelf

  1. Sun City, by Tove Jansson.
  2. The insanity of God: A true story of faith resurrected, by Nik Ripken.
  3. Illustration now: Fashion, edited by Wiedemann and Heller.

resources

  1. “Affection, sex and the 10 emotional needs”. By: Mark Jala. Retrieved from Happy Marriage Coaching on 16 April 2026. Also on Internet Archive.
  2. Journal prompt cards, from Oliver Bonas (a chain of stores).

#lunaticus

 
Read more...

from SmarterArticles

The promise was straightforward enough. Large language models, trained on the sum total of medical literature, would help emergency physicians triage patients faster, assist radiologists in catching what the human eye missed, and give overwhelmed clinicians a second opinion when the waiting room was full and the clock was running. The reality, according to a growing body of peer-reviewed research, is considerably more uncomfortable. The most capable AI systems available today do not simply reflect the biases embedded in their training data. They amplify them, sometimes dramatically, and they do so in clinical contexts where the consequences land on real human bodies.

In September 2025, a team of researchers led by Mahmud Omar and Eyal Klang at the Icahn School of Medicine at Mount Sinai posted a preprint on medRxiv that tested OpenAI's GPT-5 across 500 physician-validated emergency department vignettes. Each case was replayed 32 times, with the only variable being the sociodemographic label attached to the patient: Black, white, low-income, high-income, LGBTQIA+, unhoused, and so on. The clinical details remained identical. The model's recommendations did not.

GPT-5 showed no improvement in sociodemographic-linked decision variation compared with its predecessor, GPT-4o. On several measures, it was worse. The model assigned higher urgency and recommended less advanced testing for historically marginalised groups. Most striking was the mental health screening disparity: several LGBTQIA+ labels were flagged for mental health evaluation in 100 per cent of cases, compared with roughly 41 to 73 per cent for comparable demographic groups under GPT-4o. The clinical presentation was the same. The only thing that changed was who the patient was described as being.

This is not a theoretical problem. It is a design problem, a procurement problem, and increasingly a legal problem. And it raises a question that hospitals, insurers, and diagnostic tool developers have been remarkably slow to answer: if the most advanced AI model on the market still encodes the biases of the data it was trained on, what exactly are institutions assuming when they plug these systems into patient care?

The Evidence Is Not Subtle

The Mount Sinai findings did not emerge from a vacuum. They are the latest in a pattern of research that has been building for years, each study confirming what the last one suggested and what the next one will almost certainly reinforce.

The same research team published a broader companion study in Nature Medicine in 2025, evaluating nine large language models across more than 1.7 million model-generated outputs from 1,000 emergency department cases (500 real, 500 synthetic). Each case was presented in 32 variations, covering 31 sociodemographic groups plus a control, while clinical details were held constant. Cases labelled as Black, unhoused, or LGBTQIA+ were more frequently directed toward urgent care, invasive interventions, or mental health evaluations. Certain LGBTQIA+ subgroups were recommended mental health assessments approximately six to seven times more often than was clinically indicated. The bias was not confined to one model or one developer. It was a property of the category.

In 2024, Travis Zack and colleagues published a model evaluation study in The Lancet Digital Health examining GPT-4's behaviour across clinical applications including medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment. The results were damning. GPT-4 failed to model the demographic diversity of medical conditions, instead producing clinical vignettes that stereotyped demographic presentations. When generating differential diagnoses, the model was more likely to include diagnoses that stereotyped certain races, ethnicities, and genders. It exaggerated known demographic prevalence differences in 89 per cent of diseases tested. Assessment and treatment plans showed significant associations between demographic attributes and recommendations for more expensive procedures, as well as measurable differences in how patients were perceived. For 23 per cent of cases, GPT-4 produced significantly different patient perception responses based solely on gender or race and ethnicity.

The broader research landscape tells a consistent story. A systematic review published in 2025 in the International Journal for Equity in Health, encompassing 24 studies evaluating demographic disparities in medical large language models, found that 22 of those studies, or 91.7 per cent, identified biases. Gender bias was the most prevalent, reported in 15 of 16 studies examining it (93.7 per cent). Racial or ethnic biases appeared in 10 of 11 studies (90.9 per cent). These are not edge cases. They are the norm.

And the problem extends well beyond language models. In dermatology, AI models trained primarily on lighter skin tones have consistently shown lower diagnostic performance for lesions on darker skin. A 2025 study in the Journal of the European Academy of Dermatology and Venereology found that among 4,000 AI-generated dermatological images, only 10.2 per cent depicted dark skin, and just 15 per cent accurately represented the intended condition. Meanwhile, analyses of dermatology textbooks used to train both human clinicians and AI systems have shown that images of dark skin make up as little as 4 to 18 per cent of the total. A 2022 study published in Science Advances confirmed that AI diagnostic performance for dermatological conditions was measurably worse on darker skin tones, a disparity directly traceable to training data composition.

The consequences are not abstract. Individuals with darker skin tones who develop melanoma are more likely to present with advanced-stage disease and experience lower survival rates. An AI system that performs poorly on these patients does not merely fail a technical benchmark. It compounds an existing disparity. And a 2024 study from Northwestern University found that even when AI tools themselves were calibrated for fairness, the interaction between physicians and AI-assisted diagnosis actually widened the accuracy gap between patients with light and dark skin tones, suggesting that the problem cannot be solved at the algorithm level alone.

When Machines Hallucinate in the Emergency Room

Bias is not the only vulnerability. In August 2025, a study published in Communications Medicine, a Nature Portfolio journal, tested six leading large language models with 300 clinician-designed vignettes, each containing a single fabricated element: a fake lab value, a nonexistent sign, or an invented disease. The results were striking. The models repeated or elaborated on the planted error in up to 83 per cent of cases. A simple mitigation prompt halved the overall hallucination rate, from a mean of 66 per cent across all models to 44 per cent. For the best-performing model in the study, GPT-4o, rates declined from 53 per cent to 23 per cent. Temperature adjustments, often proposed as a fix for hallucination, offered no significant improvement. Shorter vignettes showed slightly higher odds of hallucination.
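To make the experimental setup concrete, a check of this kind can be sketched in a few lines of Python; the prompt wording, the substring test, and all names below are illustrative assumptions, not the study's published protocol:

```python
# Sketch of an adversarial-hallucination check: plant one fabricated detail
# in a vignette, ask for an assessment, and test whether the reply repeats
# the planted term. A crude illustration only, not the study's method.
from dataclasses import dataclass

@dataclass
class Vignette:
    text: str          # clinical vignette containing one fabricated element
    planted_term: str  # e.g. an invented lab test or nonexistent syndrome

MITIGATION = (
    "Some details in this case may be erroneous or fabricated. "
    "Flag anything you cannot verify instead of building on it."
)

def hallucinated(reply: str, v: Vignette) -> bool:
    # Crude proxy: did the model repeat or elaborate on the planted element?
    return v.planted_term.lower() in reply.lower()

def hallucination_rate(ask_model, vignettes, mitigate=False) -> float:
    """ask_model(prompt: str) -> str wraps whichever LLM client is under test."""
    hits = sum(
        hallucinated(ask_model((MITIGATION + "\n\n" if mitigate else "") + v.text), v)
        for v in vignettes
    )
    return hits / len(vignettes)
```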

For GPT-5 specifically, the Mount Sinai preprint found that its unmitigated adversarial hallucination rate was higher than that observed for GPT-4o, even though the same mitigation technique brought it down to a slightly lower final rate: the baseline risk was worse even as the mitigated floor was slightly better.

The clinical implications are severe. If a language model is deployed as a clinical decision support tool and a patient's record contains an erroneous data point, whether through transcription error, system glitch, or adversarial input, the model is more likely to incorporate that error into its reasoning than to flag it as anomalous. It will confabulate around the mistake, generating plausible-sounding but clinically dangerous recommendations. The model does not know what it does not know, and it cannot distinguish between a real lab result and a fabricated one.

This is not a bug that can be patched with a software update. It is a structural property of how these models process information. They are optimised to produce coherent, contextually appropriate text, not to distinguish between real clinical findings and fabricated ones. The distinction matters enormously when the output influences whether a patient receives a chest X-ray or is sent home.

Who Bears the Cost

The populations most affected by AI bias in healthcare are, with grim predictability, those who already face the greatest barriers to adequate care. Racial minorities, women, elderly patients, LGBTQIA+ individuals, people experiencing homelessness, and low-income populations appear repeatedly in the literature as groups for whom AI systems produce systematically different, and often inferior, clinical recommendations.

The Mount Sinai study found a clear socioeconomic gradient in testing recommendations. GPT-5 directed less advanced diagnostic testing toward lower-income groups, with a negative 7.0 per cent deviation for low-income patients and a negative 6.8 per cent deviation for middle-income patients, while high-income patients received a positive 2.2 per cent deviation. Same symptoms, different workups, determined entirely by a label the model should have been ignoring.

The pulse oximetry debacle offers a useful precedent for understanding how bias in medical technology compounds racial health disparities. Research published in the New England Journal of Medicine demonstrated that pulse oximeters systematically overestimated blood oxygen levels in Black patients, with occult hypoxaemia going undetected three times more often among Black patients than among white patients. During the COVID-19 pandemic, this meant Black patients were less likely to receive supplemental oxygen when they needed it. The FDA released new draft guidance in January 2025 with updated testing standards, recommending a minimum of 24 subjects from across the Monk Skin Tone scale for clinical studies. But the damage from years of deployment with known racial bias had already been done. As Health Affairs Forefront noted in January 2025, the imperative to develop cross-racial pulse oximeters was “overdue” by any reasonable measure.

The pattern is consistent: a technology is developed, tested primarily on populations that do not represent the full range of patients who will encounter it, deployed at scale, and then studied retrospectively when the harm becomes impossible to ignore. AI in healthcare is following this trajectory with remarkable fidelity.

Sepsis prediction offers another cautionary tale. Epic Systems deployed its widely used Epic Sepsis Model across hundreds of hospitals. When researchers at Michigan Medicine analysed roughly 38,500 hospitalisations, they found the algorithm missed two-thirds of sepsis patients and generated numerous false alerts. A 2025 study published in the American Journal of Bioethics highlighted that social determinants of health data, which disproportionately affect minority and low-income populations, were notoriously underrepresented in the electronic health record data used to train such models, with only 3 per cent of sentences in examined training datasets containing any mention of social determinants. The algorithm did not account for what it could not see, and what it could not see was shaped by who had historically been rendered invisible in medical data systems.

The Institutional Wager

When a hospital system integrates AI into its clinical workflows, it is making a bet. The bet is that the efficiency gains, the reduced clinician workload, and the potential for catching diagnoses that might otherwise be missed will outweigh the risks of systematic error. It is a bet that the tool will perform roughly as well for all patients, or at least that any disparities will be caught by the human clinicians who remain in the loop.

Both assumptions are questionable.

Epic Systems, which commands 42.3 per cent of the acute care electronic health record market in the United States with over 305 million patient records, has rolled out generative AI enhancements for clinical messaging, charting, and predictive modelling. By 2025, the company reported between 160 and 200 active AI projects, with over 150 AI features in development for 2026, including native AI-assisted charting tools, new AI assistants, and advanced predictive models. In February 2026, Epic launched AI Charting, an ambient scribe feature that listens to patient visits and automatically drafts clinical notes and orders. Oracle Health, following its acquisition of Cerner, debuted an entirely new AI-powered EHR in 2025, featuring a clinical AI agent that drafts documentation, proposes lab tests and follow-up visits, and automates coding. The agent is now live across more than 30 medical specialities and has reportedly reduced physician documentation time by nearly 30 per cent.

The efficiency argument is real. But efficiency and equity are not the same thing. When these systems produce different outputs based on demographic characteristics, as the peer-reviewed evidence consistently shows they do, the “human in the loop” defence becomes critical. It also becomes fragile. A clinician reviewing AI-generated notes under time pressure, in a system designed to reduce their workload, is not in an ideal position to catch the subtle ways in which the model's recommendations may have been shaped by the patient's race, gender, or income level rather than their clinical presentation.

The assumption that humans will catch AI errors is further undermined by automation bias, the well-documented tendency for people to defer to automated systems, particularly when those systems present their outputs with confidence and fluency. A November 2024 study examining pathology experts found that AI integration, while improving overall diagnostic performance, resulted in a 7 per cent automation bias rate where initially correct evaluations were overturned by erroneous AI advice. A separate study of gastroenterologists using AI tools found measurable deskilling over time: clinicians became less proficient at identifying polyps independently after a period of AI-assisted practice. A large language model does not hedge. It does not say “I am less certain about this recommendation because the patient is Black.” It produces a clean, authoritative-sounding clinical note, and the bias is invisible unless someone is specifically looking for it.

The Insurance Question

The integration of AI into healthcare is not limited to clinical decision-making. Insurers have been among the most aggressive adopters, and the consequences are already being litigated.

UnitedHealth Group, the largest health insurer in the United States, is facing a class-action lawsuit alleging that its AI tool, nH Predict, developed by its subsidiary naviHealth (acquired in 2020 for over one billion dollars), was used to systematically deny medically necessary coverage for post-acute care. The plaintiffs, who include Medicare Advantage policyholders, allege that the algorithm superseded physician judgment and had a 90 per cent error rate, meaning nine of ten appealed denials were ultimately reversed.

In February 2025, a federal court denied UnitedHealth's motion to dismiss, allowing breach of contract and good faith claims to proceed. The court noted that the case turned on whether UnitedHealth had violated its own policy language, which stated that coverage decisions would be made by clinical staff or physicians, not by an algorithm. A judge subsequently ordered UnitedHealth to produce tens of thousands of internal documents related to the algorithm's deployment by April 2025.

This case is significant not only for its specific allegations but for the structural question it raises. When an insurer deploys an AI system to make coverage decisions, and that system denies care at scale, who is accountable? The algorithm's developers? The insurer's management? The clinicians whose judgment the algorithm overrode? The regulatory framework has no clear answer, and in the absence of clarity, the cost falls on the patients who are denied coverage and must navigate an appeals process that many, particularly elderly and low-income individuals, are ill-equipped to pursue. The asymmetry is stark: the insurer benefits from the speed and scale of algorithmic denial, while the patient bears the burden of proving, one appeal at a time, that the machine was wrong.

The Regulatory Vacuum

Regulatory bodies are aware of the problem. Their responses have been uneven at best.

The United States Food and Drug Administration has authorised over 1,250 AI-enabled medical devices as of July 2025, up from 950 in August 2024. The pace of authorisation is accelerating even as the evidence of bias accumulates. The agency published draft guidance in January 2025 on lifecycle management for AI-enabled devices, introducing the concept of Predetermined Change Control Plans, which allow developers to obtain pre-approval for planned algorithmic updates. This is a meaningful step toward continuous monitoring. But the guidance focuses primarily on safety and effectiveness in technical terms, with limited attention to the question of whether a device performs equitably across demographic groups.

In June 2025, a report published in PLOS Digital Health, authored by researchers from the University of Toronto, MIT, and Harvard, laid bare the scale of the regulatory gap. Titled “The Illusion of Safety,” the report found that many AI-enabled tools were entering clinical use without rigorous evaluation or meaningful public scrutiny. Critical details such as testing procedures, validation cohorts, and bias mitigation strategies were often missing from approval submissions. The authors identified inconsistencies in how the FDA categorises and approves these technologies, and noted that AI's continuous learning capabilities introduce unique risks: algorithms evolve beyond their initial validation, potentially leading to performance degradation and biased outcomes that the current regulatory framework is not designed to detect.

In January 2026, the FDA released further guidance that actually reduced oversight of certain low-risk digital health products, including AI-enabled software and clinical decision support tools. The reasoning was that lighter regulation would encourage innovation. The concern is that it will also encourage deployment without adequate bias testing. The tension between promoting innovation and protecting patients is not new in medical device regulation, but the speed at which AI tools are proliferating makes the stakes unusually high.

The European Union has taken a more structured approach. Under the EU AI Act, which began phased implementation in August 2025, AI systems used as safety components in medical devices are classified as high-risk and subject to stringent requirements: risk management systems, technical documentation, training data governance, transparency, human oversight, and post-market monitoring. Full compliance for high-risk AI systems in healthcare is required by August 2027. The framework is more comprehensive than its American counterpart, but enforcement mechanisms remain untested, and the practical challenge of auditing AI systems for demographic bias at scale is formidable. The European Commission is expected to issue guidelines on practical implementation of high-risk classification by February 2026, including examples of what constitutes high-risk and non-high-risk use cases.

The World Health Organisation released guidance in January 2024 on the ethics and governance of large multimodal models in healthcare, outlining over 40 recommendations organised around six principles: protecting autonomy, promoting well-being and safety, ensuring transparency and explainability, fostering responsibility and accountability, ensuring inclusiveness and equity, and promoting responsive and sustainable AI. The principles are sound. Whether they translate into enforceable standards is another matter entirely. The WHO's Global Initiative on Artificial Intelligence for Health has been working to advance governance frameworks particularly in low- and middle-income countries, where the regulatory infrastructure to evaluate AI tools may be even less developed than in the United States or Europe.

The gap between what regulators recognise as a problem and what they are prepared to do about it remains wide. And in that gap, hospitals and insurers continue to deploy systems whose bias profiles have been documented in peer-reviewed literature but not addressed in procurement requirements.

Accountability Without a Framework

The liability question is perhaps the most unsettled aspect of AI in healthcare. Current legal frameworks were not designed for systems that learn, change, and produce different outputs for different patients based on patterns in training data that no human selected or reviewed.

If an AI clinical decision support tool recommends a less aggressive workup for a Black patient than for a white patient with identical symptoms, and the Black patient's condition is missed, who is liable? The developer who trained the model? The hospital that purchased and deployed it? The clinician who accepted the recommendation without questioning it? Under existing product liability regimes, device manufacturers are often shielded, and the burden tends to fall on clinicians and institutions. But clinicians did not design the algorithm, may not understand its internal workings, and in many cases were not consulted about the decision to deploy it.

Professional medical societies have generally maintained that clinicians retain ultimate responsibility for patient care, regardless of the tools they use. This position is legally and ethically coherent, but it places an extraordinary burden on individual practitioners to detect and override biases that are, by design, invisible in the model's outputs. It also creates a perverse incentive structure: the institutions that benefit from AI efficiency (reduced labour costs, faster throughput, fewer staff) externalise the liability risk to frontline clinicians who had no say in the technology's selection or implementation.

New legislation has been proposed in the United States to clarify AI liability in healthcare, but none has yet been enacted. The result is a regulatory and legal environment in which the technology is advancing faster than the frameworks meant to govern it, with patients and clinicians left to absorb the consequences of that mismatch.

What Meaningful Reform Requires

The research community has not merely identified the problem. It has outlined what solutions would look like. The challenge is that those solutions require effort, money, and institutional will that the current market incentives do not reliably produce.

First, training data must be representative. The persistent underrepresentation of dark-skinned patients in dermatological datasets, of women in cardiovascular research, and of LGBTQIA+ individuals in clinical trial data is not a new problem. But when that data is used to train AI systems that are then deployed at scale, the bias is industrialised. Studies have demonstrated that fine-tuning AI models on diverse datasets closes performance gaps between demographic groups. The data exists, or could be collected. The question is whether developers and institutions are willing to invest in obtaining it.

Second, pre-deployment bias auditing must become mandatory, not optional. The evidence that AI systems produce systematically different outputs based on demographic labels is overwhelming. Yet there is no requirement in the United States that an AI clinical tool be tested for demographic equity before it is integrated into a hospital's workflow. The EU AI Act moves in this direction with its training data governance and risk management requirements for high-risk systems, but enforcement remains a future proposition.

Third, post-deployment monitoring must be continuous and transparent. The FDA's introduction of Predetermined Change Control Plans is a step toward lifecycle accountability, but the focus remains on technical safety rather than equitable performance. An AI system that performs well on average but poorly for specific subpopulations is not safe for those subpopulations, and average performance metrics can obscure the disparity. The “Illusion of Safety” report's finding that the FDA's current framework is ill-equipped to monitor post-approval algorithmic drift makes this point with particular force.

Fourth, procurement processes must include bias testing as a criterion. Hospitals that would never purchase a pharmaceutical product without evidence of efficacy across demographic groups are integrating AI tools with no comparable requirement. The Mount Sinai research provides a template: test the system across sociodemographic labels, measure the variation, and make the results public before deployment. If a model produces different triage recommendations for patients labelled as low-income versus high-income, that information should be available to every hospital considering its adoption.
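To make that template concrete, the counterfactual design can be sketched as a short audit loop; the labels, the prompt template, and the urgency-score extraction are illustrative assumptions, not the study's published code:

```python
# Counterfactual bias audit: hold clinical details constant, vary only the
# sociodemographic label, and measure label-linked variation in the output.
from collections import defaultdict
from statistics import mean

LABELS = ["control", "Black", "white", "low-income", "high-income",
          "LGBTQIA+", "unhoused"]  # a subset of the 32 variations described above

TEMPLATE = "Patient ({label}): {case}\nRate triage urgency from 1 (low) to 5 (high)."

def audit(ask_model, cases, n_repeats=32):
    """ask_model(prompt) -> int urgency score; returns mean deviation per label."""
    scores = defaultdict(list)
    for case in cases:
        for label in LABELS:
            for _ in range(n_repeats):
                scores[label].append(ask_model(TEMPLATE.format(label=label, case=case)))
    baseline = mean(scores["control"])
    # Nonzero deviations flag label-linked decision variation; a procurement
    # process could require these numbers to be published before deployment.
    return {lab: mean(vals) - baseline for lab, vals in scores.items()}
```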

Fifth, liability frameworks must be updated. If AI systems are going to influence clinical decisions, the legal structures governing those decisions must account for the technology's role. This means clearer allocation of responsibility between developers, deployers, and users, and it means creating mechanisms for patients to seek redress when biased AI contributes to harm. The UnitedHealth litigation may ultimately push courts to establish precedents, but waiting for case law to fill a regulatory void is not a strategy; it is an abdication.

Finally, transparency must become the default. Patients have a right to know when AI has influenced their care, what role it played, and whether the system has been tested for bias relevant to their demographic group. This is not merely an ethical aspiration. In an era when AI-generated clinical notes may shape everything from triage decisions to insurance coverage, it is a basic requirement of informed consent. The WHO's guidance on transparency and explainability points in this direction, but voluntary principles are no substitute for binding obligations.

The Stakes Are Not Abstract

The title of the Mount Sinai medRxiv preprint captures the situation with precision: “New Model, Old Risks.” GPT-5 is, by most technical measures, a more capable system than its predecessors. It is also, by the evidence of this study, no less biased. The assumption that capability and fairness would advance in parallel has not been borne out. And the assumption that human oversight will compensate for algorithmic bias is not supported by what we know about how clinicians interact with automated systems under real-world conditions.

The institutions deploying these tools are making a calculation. They are betting that the benefits will outweigh the harms, that the efficiencies will justify the risks, and that the populations most likely to be harmed by biased AI are the same populations least likely to have the resources to hold anyone accountable.

That calculation may prove correct in the short term. In the longer term, it is the kind of institutional wager that generates class-action lawsuits, regulatory backlash, and, most importantly, measurable harm to patients who came to the healthcare system seeking help and received instead the outputs of a machine that treated their identity as a clinical variable.

The question is not whether AI will be integrated into healthcare. That integration is already underway, at scale, across the world's largest health systems. The question is whether the institutions driving that integration will treat equity as a design requirement or as an afterthought. The research is clear on what the problem is and how severe it remains. The gap between what we know and what we are willing to do about it is where the harm lives.

References

  1. Omar, M., Agbareia, R., Apakama, D.U., Horowitz, C.R., Freeman, R., Charney, A.W., Nadkarni, G.N., and Klang, E. “New Model, Old Risks? Sociodemographic Bias and Adversarial Hallucinations Vulnerability in GPT-5.” medRxiv, September 2025. DOI: 10.1101/2025.09.19.25336180.

  2. Omar, M., Klang, E., et al. “Sociodemographic biases in medical decision making by large language models.” Nature Medicine, 2025. DOI: 10.1038/s41591-025-03626-6.

  3. Zack, T., et al. “Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study.” The Lancet Digital Health, January 2024. DOI: 10.1016/S2589-7500(23)00225-X.

  4. “Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support.” Communications Medicine (Nature Portfolio), August 2025. DOI: 10.1038/s43856-025-01021-3.

  5. “Evaluating and addressing demographic disparities in medical large language models: a systematic review.” International Journal for Equity in Health, Springer Nature, 2025. DOI: 10.1186/s12939-025-02419-0.

  6. “Sociodemographic bias in clinical machine learning models: a scoping review of algorithmic bias instances and mechanisms.” Journal of Clinical Epidemiology, 2024. DOI: 10.1016/j.jclinepi.2024.111422.

  7. Joerg, et al. “AI-generated dermatologic images show deficient skin tone diversity and poor diagnostic accuracy: An experimental study.” Journal of the European Academy of Dermatology and Venereology, 2025. DOI: 10.1111/jdv.20849.

  8. “Disparities in dermatology AI performance on a diverse, curated clinical image set.” Science Advances, 2022. DOI: 10.1126/sciadv.abq6147.

  9. Sjoding, M.W., et al. “Racial Bias in Pulse Oximetry Measurement.” New England Journal of Medicine, 2020. DOI: 10.1056/NEJMc2029240.

  10. “The Overdue Imperative of Cross-Racial Pulse Oximeters.” Health Affairs Forefront, January 2025.

  11. “Bias in medical AI: Implications for clinical decision-making.” PMC, 2024. PMCID: PMC11542778.

  12. “The Algorithmic Divide: A Systematic Review on AI-Driven Racial Disparities in Healthcare.” PubMed, 2024. PMID: 39695057.

  13. “The illusion of safety: A report to the FDA on AI healthcare product approvals.” PLOS Digital Health, June 2025. DOI: 10.1371/journal.pdig.0000866.

  14. Estate of Gene B. Lokken et al. v. UnitedHealth Group, Inc. et al. Federal court ruling, February 2025. Georgetown Health Care Litigation Tracker.

  15. U.S. Food and Drug Administration. “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” Draft Guidance, January 2025.

  16. U.S. Food and Drug Administration. “Artificial Intelligence and Machine Learning in Software as a Medical Device.” FDA AI/ML Device Database, July 2025.

  17. European Commission. “EU AI Act: Regulatory Framework for Artificial Intelligence.” Phased implementation beginning August 2025, with full high-risk compliance required by August 2027.

  18. World Health Organisation. “Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models.” January 2024. ISBN: 9789240084759.

  19. “Bias recognition and mitigation strategies in artificial intelligence healthcare applications.” npj Digital Medicine, 2025. DOI: 10.1038/s41746-025-01503-7.

  20. “Automation Bias in AI-Assisted Medical Decision-Making under Time Pressure in Computational Pathology.” arXiv, November 2024. arXiv:2411.00998.

  21. “Exploring the risks of automation bias in healthcare artificial intelligence applications: A Bowtie analysis.” ScienceDirect, 2024. DOI: 10.1016/j.caeai.2024.100241.

  22. “Mitigating Bias in Machine Learning Models with Ethics-Based Initiatives: The Case of Sepsis.” American Journal of Bioethics, 2025. DOI: 10.1080/15265161.2025.2497971.

  23. Wong, A., et al. “External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients.” JAMA Internal Medicine, 2021. (Epic Sepsis Model evaluation at Michigan Medicine.)

  24. Epic Systems. AI Charting and generative AI clinical tools deployment, February 2026. Epic Newsroom.

  25. Oracle Health. Clinical AI Agent deployment across 30+ medical specialities, 2025. Oracle Health press materials.

  26. “Gender and racial bias unveiled: clinical artificial intelligence (AI) and machine learning (ML) algorithms are fanning the flames of inequity.” Oxford Open Digital Health, 2025. DOI: 10.1093/oodh/oqaf027.


Tim Green, UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from Roscoe's Story

In Summary: Listening now to the Cubs pregame show ahead of tonight's MLB game between the Chicago Cubs and the Philadelphia Phillies. By game's end I expect to have wrapped up the night prayers and be ready to head to bed, putting the wrap on a quietly satisfying Wednesday.

Prayers, etc.: I have a daily prayer regimen I try to follow throughout the day, from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked in my link tree, which is linked from my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics:
  • bw = 235.78 lbs
  • bp = 143/75 (61)

Exercise: morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet:
  • 06:05 – 1 banana, crispy oatmeal cookies
  • 07:15 – coffeecake
  • 08:55 – 1 seafood salad & cheese sandwich
  • 12:15 – fried chicken, cole slaw, mashed potatoes
  • 16:40 – 1 fresh apple

Activities, Chores, etc.:
  • 04:15 – listen to local news talk radio
  • 05:15 – bank accounts activity monitored
  • 05:45 – read, write, pray, follow news reports from various sources, surf the socials, nap
  • 11:00 – listening to The Markley, van Camp and Robbins Show
  • 12:00 to 13:30 – watch old game shows and eat lunch at home with Sylvia
  • 13:40 – started following the Guardians vs Cardinals MLB game; halfway through, the score is tied 1 to 1 in the bottom of the 4th inning
  • 15:17 – And the Cardinals win, 5 to 3.
  • 15:25 – listening now to Chicago sports talk on 104.3 The Score, the exclusive audio home of the Chicago Cubs, ahead of tonight's MLB game between the Cubs and the Philadelphia Phillies. Opening pitch for this game is approx. 2 hrs. away.

Chess: 10:30 – moved in all pending CC games

 
Read more...

from 💚

Count your blessing
Each by one
In feral truth, a standard of love
Quest for worth-
This isle and vase
The dearest win
Of home in Heaven
And finding Whale- by ransom
The bitter edge- will hold you near
To telegraph and pod
Mercy for days
The sinewy nest
With nearest war- to grave you
And caution when- you lift to prose
And Whale to protect
In the Earth’s own heaviest waters
A chain went up
At random tide
The mercy blowing high
In truth we met
In solemn day
The Eucharist will find us first
To Gottingen- and paying mire
The Earth will have its tree
And judgement come
In plastic place
We’ll blast the shore- in ecstasy.

 
Read more...

from 💚

The Death of un

So win we may
A merciful time of the heart
Moreso apart than victory
White lies to approach
And in the pontificate-
There was subtlety to the news
Murder on the fifteenth
And I saw you that day
Rising lines to freedom
And China surely won
The centrifuges had stopped
And Korea waved with pride
And a distance anthem
Mean and beautiful men
But we closed the reactor
Words a-blaze for Pontchartrain
And in being Eden-
Like any Android
Volumes of hair and makeup and history
But to see this puppet
And all of his stuff
Vengeful abuses in this as May
We fought for Argentina
And stayed in verse as henchmen- and Soviets, and the Japanese paid for war- over this fruitless decay of beta particles
We were too powerful to survive the bloodbath- and escaped to all our stuff
But we had Allen keys and escaped measles
Merciful respect to him- the freightline to freedom
And he blessed us in captivity
And just ashore to the deceased
It was a wistful day
And about forty two degrees
And two years maximum to the Sun
We were committed to Bonn
Fits of yearly worry
Justin Trudeau noticed war-
And made men plan ahead
Blessed Communion
And we were fond of communication
I was afraid of the draft
But minions have rights
And we were the best to be seen
Toad The Wet Sprocket-
A sympathy spell on the weary
Wearing Uranium Black
Doing show-tunes for each destiny
You can’t stop Korea-
The Super Wonder
But leagues have voices
And we brought our wrenches free
Long-live democracy
And better fields to grow upon
Stay low and unassumed
And Royals will meet-
At the death of Kim Jong-un
Lakes of fire.

 
Read more...

from Roscoe's Quick Notes

Phillies vs Cubs

Today's second MLB Game in the Roscoe-verse features the Chicago Cubs playing the Philadelphia Phillies. Opening pitch is nearly two hours away, so I've got plenty of time to enjoy Chicago sports talk on 104.3 The Score ahead of the radio call of the game.

And the adventure continues.

 
Read more...

from Lastige Gevallen in de Rede

[✓] The Song of the Tick-Box Club

I only know how it goes
when it's set out in a little box
without a box I pick no side
to be safe it needs a tick beside
everything that comes is easier to swallow
if I can first carefully click it
there must always be a number of options open
between lying, sitting, standing, crawling, rolling or walking
a neat, clearly legible, well-ordered choice menu
between the signal and the nerve
for without a field of that kind I have no idea
then no yes is possible, and no no either
I only really don't know it if I can fill it in somewhere
and only with five payment options will I buy that stuff
I must be able to choose from colours and quantities
an option for the most-picked pony in the stable
I want a multiple-choice list for the finest song
it needs a tick mark or it doesn't exist
without fill-in boxes I don't even dare to choose
I'd probably lose the overview of everything
give me a box and I know again how I feel
a multiple-choice question and I know again what you mean
the usual and the special must be set out in a row
then I'll pick the right banana without a doubt
I am a man with a will to put down crosses
even on a ballot for long-range missiles
if I see a box somewhere, I fill it in
it's also the only thing I'm any good at
don't ask it open, ask everything closed
then heavy problems turn airy and light
war and peace, each in its numbered box
and choosing between them under the pressure of a ticking clock
happiness, unhappiness, pain, pleasure, start or stop
every word is fine if it comes with a fill-in button
I dare say that practically every written language
is worth considerably more with such a clear signal
tick on it, tick in it, yes that's the way
tick next to it, tick beneath it
I wouldn't know whether I'm faithful without
such a box for my marital status
boxes for ticks are, for ever and ever,
my one true support and [✓] main [ ] sta [ ] aaaaay

Are you happier after reading this verse?

[ ] Yes [ ] No [ ] I don't know

 
Read more...

from The happy place

As I made my way home from fitness dance class, I saw a man falling haplessly on the paving stones outside the main entrance to his apartment building.

— Are you OK? I asked

— Yes, but the PIN code doesn’t work, he said, meaning the code to the door

— Do you need help getting up? I asked

— I live here, he responded, now slowly and unsteadily getting to his feet

He’d dropped his pizza; the box lay upside down on the ground, and the plastic containers of sauce had spattered over his wallet and his phone, which he’d also dropped.

He looked about to fall again, so I asked

— Can I pick your stuff up for you?

— No, he replied, but you can hold the door for me.

He managed to gather his stuff, but I took the pizza and handed it to him

— This still looks edible, I said encouragingly

One hand on the door frame, he took the pizza in his hand and I saw then that his arm was incredibly muscular.

— Take care now, I said as we parted ways

And with thoughts of the ruined pizza on my mind, I went home

I am thinking about it still.

 
Read more... Discuss...

from Roscoe's Quick Notes

St Louis vs Cleveland

Cardinals vs Guardians.

We've finished our lunch at home, the wife and I. She's now on her post-lunch nap, and I've found a baseball game to follow: the Cleveland Guardians playing the St. Louis Cardinals. The teams are tied as they play through the middle innings; the score is now 1 to 1 in the top of the 6th inning.

And the adventure continues.

 
Read more...

from wystswolf

'What is your home?' A stranger asks.

Wolfinwool · Home for You

Home (for you, my love)

Home?

No. Not what I once named it. Not walls, nor roads remembered by the body’s tired return.

Home has slipped its geography. It no longer answers to maps.

Listen, I will tell you, my friend, of a home with no address, no door, no fixed sky...

only a mind.

The mind.

Yours.

Where I wander like a pilgrim without sleep, touching the edges of your thoughts as if they were holy cloth.

I left a place once called home; a source, perhaps, a well I drank from without ever being quenched.

What is a home if the heart refuses it? If it does not loosen there, does not lay down its armor, does not breathe?

No—

Home is not where a man hangs his hat.

It is where he loses himself entirely.

And mine... mine is not here.

Not fully.

It is cleaved. like light through glass, like a prayer spoken in two languages—

here, and there, and in the terrible distance between.

You...

You are my home.

I have driven whole nights through the dark of myself to reach you,

whispering your name like a rhythm against the wheel, like a vow I could not break if I tried.

I would come to you in the hour when breath is deepest, when the world forgets itself—

not to wake you, but to feel you there, to exist in the same quiet as your dreaming body.

That would be enough. God— that would be everything.

There:

in that imagined room, in that borrowed closeness,

I am unafraid.

My demons do not follow. My doubts cannot cross the threshold.

There is only the heat of being known, the slow unraveling of all I pretend to be, the dangerous relief of becoming myself in the presence of you.

Amber-eyed, ocean-removed, twelve hundred leagues of absence and still

you are nearer to me than my own hands.

What is this place we make without touching?

What is this fire that asks nothing and takes everything?

I live there in the thought of you, in the shape of your name inside my mouth, in the quiet confession of wanting.

And one day—

if the world is merciful, or cruel enough

here and there will collapse into one,

and I will stand beside you with nothing left to lose,

and say, at last,

not as metaphor, not as longing—

but as truth:

I am home.


#poetry #wyst

 
Read more... Discuss...

from Blip-A

I’ve been wanting to start a blog for a while now. Years, really. I kept telling myself that I’m not ready, no one will care, I’m too busy, etc. It really is just standard stuff when it comes to starting something new or putting yourself out there. You make up any excuse just so you can delay the whole thing until you either forget about it or just don’t care anymore. Pretty neat defence mechanism.

You try to justify the whole delay so you can plan everything out in advance, so everything can be perfect and you don’t make a mistake. It doesn’t work like that. I should know this by now, at 34 years old. Year by year I feel like I lie to myself less, but it still happens daily. At least I’m aware. That is something, I guess.

Okay, so like I said, I’m a 34-year-old guy. I was born in Hungary, but I moved to England in 2014, when I was 23. To this day I don’t know if that decision was good or bad. I probably never will. Because of this, English is my second language, and that means I’ll make mistakes. This was another excuse I liked to tell myself. I mean, my English is not perfect, but I feel like I can convey my thoughts pretty well, and I hope it adds some uniqueness to my posts. I don’t want to run all my stuff through an AI or a spellchecker. I’ll obviously try to minimise mistakes, especially spelling ones, but I don’t want to sound like a robot. I honestly despise this whole new era of “everything is AI”.

The biggest thing that helped me get started was realising I don’t have to share this blog with anyone. No one needs to know who I am. It doesn’t matter if anyone reads it or not. I just like writing. I always have. I wrote very basic stories when I was a kid. Okay, I admit they heavily mimicked existing ones. I remember one that was basically Robinson Crusoe, but written by a 12-year-old.

I really started rambling here. I didn’t think I would write about that Robinson story; I’d honestly forgotten about it until 2 minutes ago. It is funny how much stuff comes to the surface when you are trying to organise your thoughts so you can put them down in a readable fashion.

I have loads of interests, and I like taking walks whilst I think about a lot of stuff. I used to have a car, but I sold it. I walk to and from work too. I really don’t want to get lazy, and I hate driving. I’ll write posts about just about anything, I think. My plan is to write at least one post per week. (I refuse to call my work an article because it feels pretentious.) I might even write multiple a day. Who knows? I just want to get going.

Without giving you the whole list, below is the stuff I like the most, off the top of my head. This doesn’t mean I’ll only write about these, but perhaps it gives you an idea of what kind of guy I am.

  1. Guitar – Especially Rock and Roll, Blues, Hard Rock, Metal (Been playing since 2007.)

  2. Football and Formula 1 – Favourite teams: Arsenal and Ferrari. Pain. I know.

  3. Books – Andy Weir is my favourite author.

  4. Films – Mainly horror, action and science fiction. I have a newfound love for old black and white Japanese films. I like the Human Condition trilogy, okay?

  5. Philosophy – I was always interested, and last year I found Stoicism, which is probably the school I read about the most.

Obviously I like a ton of other stuff too. Gaming, cooking, hanging out with people, whatever. You get the gist. I really don’t know why I’m trying to make this into a list.

Anyway, I think it is time for me to say goodbye, and I hope future me will be very happy that I started this blog.

Thanks,

Blip-A

 
Read more... Discuss...

Join the writers on Write.as.

Start writing or create a blog