Want to join in? Respond to our weekly writing prompts, open to everyone.
from Lamentations of a Tired Citizen
It took me a while to realise this, but common sense, logic, rationality...? These are not widely accepted in the thought process of a normal human being.
In fact, human beings are rooted in emotion, ego and arrogance. The first, while not a negative aspect of humanity in itself, leads directly to the other two. And that, in turn, brings about the downfall of common sense.
from friendlyrefer
A complete guide to working in Sofia in 2026
Sofia, the capital of Bulgaria, is becoming one of the most attractive destinations in Europe in 2026 for foreigners looking for work. Low living costs, a dynamically growing job market, and a warm climate are drawing more and more people from Poland and other EU countries. In this guide you will find everything you need to know before the move – from salaries and rental costs to formalities and daily life.
***
Sofia has for years attracted foreign companies in IT, outsourcing, customer service, and content moderation. In 2026 the trend continues – the job market is short of qualified workers, and companies are actively recruiting people who speak European languages, including Polish, German, French, and Italian. bloombergtv
Sofia is a city that combines a Balkan atmosphere with a growing corporate infrastructure. You will find class-A office buildings, shopping centres, restaurants serving cuisine from around the world, and an active expat community. mieszkania-bulgaria
***
The greatest demand in Sofia in 2026 is for positions in customer services and technology. These are the most popular job categories for foreigners:
For those who prefer seasonal work or the tourism industry, Sofia and the Bulgarian regions offer positions as tour leaders, office staff, and resort reps. facebook
***
Sofia is one of the cheapest capitals in the European Union. For foreigners earning in euros, or at the higher end of BGN salaries, this means a very comfortable standard of living at relatively low expense.
Someone working in customer service can live comfortably in Sofia on 700–900 euros a month, including rent, food, transport, and entertainment. On net earnings above 1,200 euros, a real savings surplus remains.
***
No. As an EU citizen you have the full right to work and reside in Bulgaria without a visa and without a work permit. A valid ID card or passport is enough. mieszkania-bulgaria
If you plan to stay longer than three months, you should register your residence at the local office of the Migration Directorate (Дирекция „Миграция”), which is a simple formality and usually takes one day.
***
Looking for a job in Sofia from Poland is easier than it seems. Most recruitment takes place online, and many companies conduct interviews remotely via Teams, Zoom, or Google Meet.
***
Many companies in Sofia offer a relocation package for candidates from outside Bulgaria, which may include: pl.jooble
Sofia has an extensive network of metro lines, trams, and buses. A monthly public transport pass costs around 15–20 euros and gives access to the entire network. Uber and Bolt run smoothly and are very cheap compared with Warsaw or Kraków.
***
Sofia is a city that surprises. Many people who came for a year stay for several. It combines low living costs with good-quality infrastructure, close access to the mountains (Vitosha sits literally on the city limits), a warm climate, and a growing international community. mieszkania-bulgaria
***
Yes. The vast majority of positions for foreigners in Sofia require only knowledge of a European language (e.g. Polish, German, French) and basic English. Bulgarian is not required at outsourcing and customer service companies. wczasywbulgarii
On net earnings of 1,200–1,500 euros a month you can live comfortably in Sofia, renting your own flat, eating out regularly, and travelling on weekends. Above 1,800 euros net a month you can comfortably put money aside. thecity.com
Yes, many companies in Sofia – especially in customer service and content moderation – offer a relocation package covering travel costs, temporary accommodation, and administrative support. pl.jooble
With an active search and knowledge of one European language (other than English), the time from application to job offer is usually 1–3 weeks. Many companies run a fully remote recruitment process. facebook
Sofia is generally a safe city. Its crime rates are among the lower ones for European capitals. Foreigners living in Sofia consistently rate it as a city where they feel safe both by day and by night. mieszkania-bulgaria
Bulgaria is in the process of joining the euro area. The planned switch to the euro increases the country's financial stability and its attractiveness to foreign workers and investors. bulgariastreet
***
For a Pole looking for a fresh start abroad, Sofia in 2026 is one of the best choices in Europe. Low living costs, rising salaries, high demand for Polish speakers, and the absence of a language barrier at work make the move less risky than moving to Germany, the Netherlands, or Scandinavia. bloombergtv
If you speak Polish and are looking for stable work in customer service or content moderation in Sofia – check the current openings on FriendlyRefer.com and apply today.
***
Article updated: May 2026. Salary and cost-of-living figures are indicative and may vary depending on employer, district, and individual circumstances.
from Mitchell Report
Why 1980s Meals Were Always Garnished With Parsley – Food Republic
From steak dinners to bowls of soup, 1980s restaurants topped nearly every dish with a sprig of parsley. But why was this garnish so ubiquitous?
— Food Republic (@foodrepublic.bsky.social) on Bluesky
I saw this Bluesky post come across my timeline because I follow Food Republic, and it got my attention. When I was a kid and teenager, I did not like parsley. But now, in my 50s, I actually do not mind it. I started using it after following some recipes from Chef Jean-Pierre, who has a YouTube channel. It really did make my pot roast pop and helped brighten the dish after a long cooking time.
It is strange to think that this may be why parsley was used so often in the 1970s, when I was growing up.
Parsley signaled sophistication. During the decade, French cuisine was particularly in vogue among American cooks, and the herb served as a marker of European plating habits. As a result, a sprig of it functioned as a quick, accessible way to add a dash of color and Old World charm.
I just thought this was interesting, especially since I used to really hate parsley. It made me think about how our tastes can change as we get older, and how something we once disliked can become something we appreciate later in life.
#cooking #food
from Hunter Dansin
“So little do we see before us in the World, and so much reason have we to depend cheerfully upon the great Maker of the World, that he does not leave his Creatures so absolutely destitute, but that in the worst Circumstances they have always something to be thankful for, and sometimes are nearer their Deliverance than they imagine; nay, are even brought to their Deliverance by the Means by which they seem to be brought to their Destruction.”
— From Robinson Crusoe by Daniel Defoe (p259).

When I think about my recent creative output I get the same sick feeling in my stomach I used to get when I showed up to class without doing my homework. The months have gone so quickly, and my emotions have been so up and down, that I haven't been able to maintain any consistent output. My mind wants to turn to the worst habits, and I feel very distracted. We are not going through a crisis or anything like that, but we're just tired. I am ready for the school year to be over. At the very least, I can say that I did some things this past month, and I do consistently* play guitar and read and study my languages. I think discipline consists much more in the little decisions that we must make over and over every single day, than in the resolutions we make a few times a year.
I did a little work on my current novel, but not enough. Throughout my days I hear my characters calling to me, wondering where I am and why I am leaving them where they are. Then when I do sit down I get distracted and/or my toddler comes and starts poking my face or throwing books at me because she wants me to read to her.
I have been playing almost every day, but I haven't really produced anything but podcast episodes. It is just really hard to find the time and energy right now. Sometimes I try to play around the kids, and they enjoy it for a few minutes, but then my toddler twists the tuners on my guitar and I get mad. I do believe that being interruptible is a virtue that Jesus displayed, but more often I feel like Harrison Bergeron's Dad.
I spent a great deal of time with Needtobreathe's new album, The Long Surrender. It was the first time since the HARDLOVE era that I really connected with and decided to buy one of their albums (yes, I still buy physical discs). It had a confessional, honest tone that felt very timely. Favorite tracks are probably Say It Now and Strangeness of It All. It was a great comfort to reconnect with a beloved band, especially in this season of life and this season of the world.
I have just finished Robinson Crusoe and I enjoyed it. According to the Preface, Defoe intended it for “the Improvement and Instruction of Mankind in the Ways of Virtue and Piety, by representing the various Circumstances to which Mankind is exposed; and encouraging such as fall into ordinary or extraordinary Casualties of Life, how to work thro' Difficulties, with unwearied Diligence and Application, and look up to Providence for Success.” It is full of un-hypocritical 'middle-aged moralizing' that the world seems devoid of right now. It definitely has some rough edges, but for a novel written in 1719 I think you might be surprised how pleasant it is to read once you get used to the punctuation and spelling. My copy also has a bunch of appendices that give some context for the novel, which I appreciate.
It has shown me just how uncomfortable I have become with Solitude, and how hypocritical I am when it comes to my engagement with technology. I wish I could say that after reading Robinson Crusoe I have changed my Ways. But the awareness of a Sin does not always Deliver you from It. Sometimes it makes you feel more Wretched. One of the appendices includes a sermon of sorts, about Solitude, in which the author (Richard Baxter) describes how much “VANITY and VEXATION” we could be delivered from by Solitude, if only we could be delivered from ourselves. I think this is why we have engineered the extinction of boredom (besides greed). We use our devices to escape from ourselves, and I am too painfully aware of that in myself right now. Still, it is a starting place, and I am resolved to keep fighting for my Tranquility and Peace and Industry, by the Grace of God, throughout the ordinary and extraordinary “Casualties of Life.”
#update #May #2026
Thank you for reading! I greatly regret that I will most likely never be able to meet you in person and shake your hand, but perhaps we can virtually shake hands via my newsletter, social media, or a cup of coffee sent over the wire. They are poor substitutes, but they can be a real grace in this intractable world.
Send me a kind word or a cup of coffee:
Buy Me a Coffee | Listen to My Music | Listen to My Podcast | Follow Me on Mastodon | Read With Me on Bookwyrm
Defoe, Daniel. Robinson Crusoe. Edited by Evan R. Davis. Broadview Press, Toronto, Ontario, 2010 (1719).
from sugarrush-77
What’s classified as rejection: if they say no under any circumstance. It doesn’t matter if they’re taken, they don’t like your face, etc.
I’m currently at 7. I want to get to at least 100 by end of this year. 33 weeks left in the year, that’s roughly 3 rejections per week.
At this point I have dissociated away all sense of self to the point where rejection does not faze me anymore. Well, maybe a little. But I am deluding myself into levels of confidence reached only in my younger, more sprightly years. And whenever I imagine the women telling their friends about how they were approached by some crazy person, I’m comfortably able to push it away. There’s vulnerability involved in having to approach someone and expose yourself to the chance of rejection. Women typically don’t understand it because they’ve never tried. Their equivalent of asking someone out is smiling across the room and wondering why nothing happened. Generalization? Yes. But also who cares, I’m right.
Another thing that has helped me approach women better is that I’ve stopped giving them as much respect. After careful observation of female family members and my friends’ girlfriends, I’ve realized they pull a lot of selfish and emotional shit where the men just have to take it. The societal justification implicit behind it is that it is all fine because they are women. And so logically I was at a crossroads. Either I give them a lot of respect and have an internal seizure when they pull stupid emotional shit because in my head men and women are subject to the same standards of conduct, or I just give them less respect and live with the bullshit. Crazily enough, the latter mindset will help you to be a better husband or boyfriend because women typically enjoy it when they can just be a child around their partners engaging in “I’m just a girl” behavior. Of course, there are exceptions, but this is probably typical. Am I becoming an incel? LOL
from Faucet Repair
8 May 2026
Image inventory: a vacant front desk in the lobby of an abandoned office building with a black chair manning the desk like a person, sky blue construction dividers funneling people towards a dead end, a full white trash bag, a full black trash bag, a full orange trash bag, a lion in low relief (Marble Arch), a lion in low relief (golden door knocker in Clerkenwell), a cardboard sign with smiling green hills (Horniman Primary), square flowers, Lilo & Dags, a trampled flower on the ground in a tube station with one leaf outstretched, the word “you” rubbed out of almost transparent drips on the window of a tube carriage, a pointed cloud poking out above a cluster of softer clouds, a 90s gas meter, a 90s power meter, a silver dragon with red eyes and a red tongue foregrounded over a distant horizon with small black figures, a black iron boat with a worried looking fish underneath it (Vintry), a tube map almost entirely erased by people who have leaned on it, earrings in a bag that look like fallen crescent moons, a party in a mirror embedded in a thick wall of vines.
from Faucet Repair
6 May 2026
Belief structure: finally a title and a resolution for the small wireframe star sculpture painting I've been working on. Originally I thought it would serve as a study for a larger work, and it still might. But it holds its own now, I think. Jonathan's feedback helped me believe in it (thank you Jonathan if you're reading this). I've been spending a lot of time with Hans Bellmer's drawings and paintings, especially an untitled painting from 1956 that was included in Galerie 1900-2000's 2023 show The Surreal World of Hans Bellmer—a delicate, precise constellation of thin forms, subtly highlighted by small pink accents, spanning a cloudy blue-green space, that brings to mind knuckles or protrusions from a landscape in the vein of the 1920s Paul Klee linework I've mentioned here recently. That must have been a guide for Belief structure, and it seems fruitful to veer further into the space that work lives in as I try to formulate my own way of getting forms to reckon with the illusory space they inhabit, both in the imagination and on the surface.
from Faucet Repair
4 May 2026
Adrian Morris at Sylvia Kouvali: first time seeing his work in person, and first time seeing a show at Sylvia Kouvali. Which I mention because it will likely be my last if they install every painting show like this one. The gallery's space has some natural charm with its patterned wood floor and roughly-textured white walls capped by a ring of pale yellow tiling that kisses the ceiling, but the room was really dark, and the paintings were inexplicably lit by fluorescent white tube lights placed directly underneath them. Not only did this completely change the experience of the color and surface dimensionality of the work, but when you try to get close to a painting, the light nearly blinds you from below. Completely distracting, irresponsible, and unfair to the artist and the work. Not to mention the audience. Curatorial malpractice. It takes a lot for me to complain, but it's warranted here. Especially when presenting work that is all about subtlety of line and texture and space via long-term accumulated surfaces. The work is probably lovely in the right setting, and I'm glad I saw it. One little portion that was chipped away from a pink painting to reveal an entirely cerulean blue layer embedded deep down was worth the visit. I can imagine they were real meditations. I just think Mr. Morris would turn in his grave if he were to see how his life's work is being treated in this show.
from Roscoe's Quick Notes
Or perhaps a recovery weekend? We'll just have to see how this recovery goes. I will keep you posted daily, as best I can.
Short version: at yesterday's eye appointment we learned that the AMD (wet macular degeneration) in both eyes has gotten much worse. After yesterday's injections my vision became extremely blurry and both eyes were very painful, especially when I opened my eyelids to try to see. That pain has now (on Saturday morning as I sit here at my desk) mostly passed, thank God!
Later this month I'll be seeing my primary doc at a regularly scheduled appointment, and my retina doc wants me to talk with him about things that he and I discussed at yesterday's appointment.
Next retina appointment is set for mid-June. Retina Doc will be talking with my insurance provider to learn how much of the cost of a new injection medicine they'll cover.
And the adventure continues.
from An Open Letter
Not really sure what happened, but I got put on the waitlist for the Barcade event tomorrow that I was looking forward to. Oh well, I am kind of grateful that I get to take a little bit of a breather from all of the socialization, and anyway I need to catch up on Attack on Titan in time for the movie. I feel like I'm starting to become more and more extroverted; I'm noticing that I'm less anxious with every new interaction, and I'm also not necessarily drained afterwards. I don't really feel that crash that sometimes comes with social experiences. I think it's actually really nice to have a kind of constant stream of events with people from a source that I do not need to create. I don't need to worry about all the logistics of hosting or setting up an event, because I can just go to one of these. There's a cup-half-full and cup-half-empty moment here. On the full side, I feel very lively and am constantly making everyone at my table laugh. I think this has helped my self-confidence, because I am more and more confident that I am a very interesting person who is charismatic and very good at conversation. I can talk to essentially anyone and have a good conversation, one where people look to join and want to interact more with me in the future. I've also gotten a lot more comfortable with soft social skills like ending conversations, introducing myself to people, or joining and moving around different social groups. I've gotten a lot more comfortable with eating with people, too, which is actually very nice. I used to be very anxious about it, because I wasn't allowed to do it growing up, which always felt unfair.
But I've had a good amount of experiences now, both one-on-one and in group settings, and I've been able to recognize that a lot of the concerns I had, while valid, are really things that only exist when I try to solve a situation or fully understand it before jumping into it. I also want to recognize that it's only taken me a few experiences to feel comfortable with this, and I think that's a testament to my growth and versatility.
There's also the cup-half-empty perspective, though. The people I've met range from people I just don't really mess with or don't enjoy interacting with much, to people who are almost like sidekicks, for lack of a better word. Some friends I've made don't really speak up in conversations or contribute much, but are reliable people to laugh at jokes with, or to talk to at any point. I do value these friends, and I think they serve an important niche in social groups, but I haven't really met people who are good at conversation or funny, like my gold standard of A. I get discouraged when I think about how I would like to find someone who reminds me of me and can make me laugh the same way, because that will always be biased by the fact that I have spent my entire life with myself in a way no one else has. My perception of other people will always differ from my perception of myself. But when I think about A, or A, they can consistently make me laugh without me providing something. I have a lot of friends who can make me laugh in the sense that I can make a joke, or provide something, or build on something they say, but only a few who are just genuinely very creative and funny. I wish I were able to meet more people like that, and it feels rare. That's the pessimistic angle: I have met a dozen or so people in the last week and haven't found anyone who makes me laugh consistently. That isn't to say I haven't found great people and new friends, but there is still something to be desired.
from 下川友
This morning's breakfast was at Mister Donut. When I go to Misdo I usually head to a nearby shop I can reach by train, but today I also had to bring home a 5 kg bag of rice, so I looked for a branch I could drive to.
I have one small complaint about my usual Misdo: it doesn't carry my favorites, the coconut chocolate and the old fashioned cinnamon. I usually settle for a honey churro plus one more chosen on a whim, but the shop I visited today had both. Thinking I'd make this my regular place from now on, I ordered my doughnuts.
What goes with doughnuts is, of course, hot coffee. Once it gets a little hotter, iced coffee may win out, but at this level of warmth it's still hot coffee for me. Better yet, coffee refills here are apparently free.
Afterwards I walked to a nearby supermarket. At Rosen I found a 5 kg bag of rice for under 3,000 yen. Surprised that rice at that price still existed, I bought it on the spot.
Then I hopped to another supermarket, Aoba. It was unusually crowded; it turned out today was their once-a-month sale day. Vegetables, meat, fruit: all cheaper than I expected. Since I was there, I decided to do all my shopping in one go.
I was honestly surprised that sales with discounts this clear-cut exist; it was a bit of an eye-opener.
Strawberries were discounted too, so I bought some. Large and fully ripe, just a little longer and they would have been past their best; they were at their absolute peak. The juiciness flooded my mouth, and life felt a little richer.
from Meditaciones
We come to understand truth through our experiences.
from SmarterArticles

The fax machine in a Florida rheumatologist's office, the least futuristic object in any American clinic, still receives a steady stream of prior authorisation decisions from health insurers. In early April 2026, one of those faxes, addressed to a patient the Palm Beach Post would eventually call only by her first name, Iris, came back in under the time it takes to pour a cup of coffee. The request had been submitted a few minutes earlier. The reply, denying coverage for an injection she had been receiving for years, was generated, signed, and transmitted without any documented human pause in the middle. Iris is 80. Her hands, on the worst mornings, do not open. Her doctor, looking at the timestamp, understood instantly what had happened. The claim had not been reviewed. It had been processed.
The word processed has started to carry a weight it was never designed to hold. In the American health insurance system in 2026, it is the polite term for an event that, in almost any other domain of life, we would call a decision: a binding determination about whether a human being will have access to the medical care their doctor has recommended. Except the entity making the decision is not a person. It is a model. And the model, as anyone who has tried to ask one why it did what it did already knows, does not owe anyone an explanation.
This is the quiet crisis at the centre of the Palm Beach Post investigation published this month, which spent weeks charting how artificial intelligence has begun to deny health insurance claims at a scale and a speed no human reviewer could match. It is also the crisis at the centre of a Stanford study in Health Affairs, which landed in January, warning that the human oversight supposedly wrapped around these systems is too thin, too rushed, and too incentivised by the wrong things to function as a real check. And it is the crisis sitting on top of a three-billion-dollar bet from the largest health insurer in the United States, UnitedHealth Group, that the answer to all of this, after the litigation and the newspaper investigations and a murdered chief executive, is to put more artificial intelligence into the pipeline, not less.
The question the brief for this piece asked is deceptively simple: if the systems making some of the most consequential decisions in people's lives cannot explain their reasoning, and the regulatory framework to challenge them barely exists, what does the right to appeal actually mean in practice? It sounds like a legal question. It turns out to be something stranger. It is a question about whether a civic procedure that assumed a human decision-maker on the other end of the form still works when the other end of the form is a probability distribution.
Start with the basic mechanics, because they have moved faster than the public understanding of them. Cigna's now notorious PxDx system, exposed by ProPublica and The Capitol Forum in March 2023, was an early glimpse of the genre. Internal spreadsheets showed Cigna's medical directors spending an average of 1.2 seconds on each of more than 300,000 claim denials over two months. One doctor, Dr Cheryl Dopke, was reported to have signed off on approximately 60,000 denials in a single month. A former Cigna physician told ProPublica's reporters, Patrick Rucker, Maya Miller, and David Armstrong, that the review process was essentially cosmetic: “We literally click and submit. It takes all of 10 seconds to do 50 at a time.”
The revealing word in that sentence is “literally”. It is the language of someone who has realised that the verb “review”, as it appears in the regulatory paperwork, is doing work it cannot possibly do.
Eight months later, a class action lawsuit against UnitedHealth's nH Predict algorithm, operated through its NaviHealth subsidiary, alleged that Medicare Advantage patients in post-acute care were being cut off from rehabilitation services in bad faith, with employees pressured to keep stays within 1 per cent of the length predicted by the model. When federal administrative law judges eventually heard appeals on these denials, roughly 90 per cent were reversed, according to the complaint. Only a tiny fraction of denied patients ever appeal. In February 2025, the federal court in Minnesota denied UnitedHealth's motion to dismiss the breach of contract and bad-faith claims, allowing the case to proceed.
Then, in late 2024, ProPublica and The Capitol Forum turned to EviCore, the utilisation-management arm of Evernorth owned by Cigna, which sells its services to other insurers. EviCore operates what some internal sources called “the dial”, an algorithm that scores each prior authorisation request with a probability of approval. The company can tune the threshold: if it wants more denials, it can lower the bar at which a request gets referred to human reviewers, who are statistically much more likely to deny than to approve. ProPublica reported that EviCore markets itself to insurers on the basis of a three-to-one return, promising three dollars in saved medical costs for every dollar the insurer pays it. Its denial rate in Arkansas, one of the few states that requires publication of the figure, ran at close to 20 per cent, compared with about 7 per cent for Medicare Advantage nationally.
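The mechanics ProPublica describes can be sketched abstractly. In the sketch below, a model assigns each prior authorisation request an approval probability, and requests scoring below a tunable threshold are routed to human reviewers, who (per the reporting) deny far more often. All names and numbers are hypothetical illustrations, not EviCore's actual implementation:

```python
# Illustrative sketch of a tunable referral threshold ("the dial"),
# as described in ProPublica's reporting. Everything here is
# hypothetical -- this is not EviCore's actual system or data.

from dataclasses import dataclass

@dataclass
class Request:
    id: str
    approval_probability: float  # the model's score for this request

def route_requests(requests, threshold):
    """Auto-approve requests scoring at or above the threshold; refer
    the rest to human reviewers. Raising the threshold refers more
    cases -- and because human reviewers deny far more often than the
    auto-approval path, the threshold indirectly controls the overall
    denial rate without any individual reviewer changing behaviour."""
    auto_approved = [r for r in requests if r.approval_probability >= threshold]
    referred = [r for r in requests if r.approval_probability < threshold]
    return auto_approved, referred

requests = [
    Request("a", 0.95),
    Request("b", 0.70),
    Request("c", 0.40),
]

# With a lenient threshold, only the lowest-scoring request is referred.
approved, referred = route_requests(requests, threshold=0.5)
print(len(approved), len(referred))  # 2 1

# Turning "the dial" up refers more requests to human review.
approved, referred = route_requests(requests, threshold=0.8)
print(len(approved), len(referred))  # 1 2
```

The point of the sketch is that the consequential policy choice, how many people get denied, lives in a single tunable number rather than in any individual clinical judgment.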
The Palm Beach Post's April 2026 investigation, reported by Anne Geggis, extends this lineage into the near-present. The Post documented how AI tools are now embedded deep inside pre-authorisation workflows in Florida, one of 22 states the paper identified as having adopted no specific rules governing how AI can be used to reject a claim. The figure of 22 is the one that ought to give pause. These are not marginal jurisdictions. They include Florida, Georgia, Minnesota, and Oregon. Roughly half the American population lives in a state where an insurer can, in principle, use an algorithm to deny care without a single statute on the books requiring that algorithm to be explainable, auditable, or subject to human sign-off.
In contrast, California's Physicians Make Decisions Act, signed by Governor Gavin Newsom in September 2024 and in force since January 2025, explicitly requires that a denial, delay, or modification based on medical necessity be made by a licensed physician or competent provider. Arizona, Maryland, Nebraska, and Texas have adopted versions of the same principle. The federal Centers for Medicare and Medicaid Services issued guidance in 2024 restricting the use of algorithmic tools as the sole basis for Medicare Advantage denials. None of this changes the underlying asymmetry. State laws end at state lines. The models are national, their deployments enterprise-wide, and the training data pooled from populations that do not consent to being training data in the first place.
Into this landscape, on 6 January 2026, Michelle Mello, Professor of Health Policy and Law at Stanford, and three colleagues (Artem A. Trotsyuk, Abdoul Jalil Djiberou Mahamadou, and Danton Char) published a paper in Health Affairs with the unusually blunt title, “The AI Arms Race in Health Insurance Utilization Review: Promises of Efficiency and Risks of Supercharged Flaws”. The paper is a careful, cold document. It does not call for a ban on AI in insurance. It does something more corrosive. It describes, in sober detail, why the reassurances everyone keeps giving, about human reviewers, about oversight, about governance, do not correspond to anything that is actually happening inside the insurers.
The central finding is that meaningful human oversight of AI-driven prior authorisation is, in Mello's own phrasing, largely a myth. Human reviewers at insurance companies, the paper observes, often lack the time, the relevant clinical expertise, and the incentives to meaningfully interrogate the recommendations produced by a model. The opacity of modern systems compounds this. An adjuster presented with a denial recommendation does not see a chain of reasoning that can be evaluated. They see an output. To push back on the output, they would have to reproduce, from scratch, the analysis that led to it, without access to the training data, the feature weights, or a record of how similar cases were decided in the past. Given production targets, they do not do this. They click.
Mello's paper notes that past flawed coverage decisions become embedded in the training data for the next generation of models, which then reproduce and scale the pattern. The phrase “supercharged flaws” is not rhetorical. It is a description of what happens when a statistical system is trained on a history of denials and then used to generate future denials, with the previous denials as ground truth. Mistakes do not get caught. They get normalised, archived, and re-expressed at volume.
The data on downstream appeals has circulated for a while, but the Stanford paper pulls it into focus. In Medicare Advantage, according to KFF's January 2025 analysis of 2023 figures, insurers made nearly 50 million prior authorisation determinations, denied 3.2 million of them, and saw only 11.7 per cent of those denials appealed. Of those appealed, 81.7 per cent were partially or fully overturned. In an earlier era, overturn rates above 80 per cent on appeal would have prompted a federal reckoning. In the current system, they are published in briefing notes and largely forgotten by the following week.
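The arithmetic implied by those figures is worth making explicit. A back-of-envelope sketch, using only the KFF numbers quoted above (the variable names are mine, and the results are rounded):

```python
# Back-of-envelope arithmetic on KFF's January 2025 analysis of 2023
# Medicare Advantage figures, as cited in the text.
denials = 3_200_000        # prior authorisation denials
appeal_rate = 0.117        # share of denials that were appealed
overturn_rate = 0.817      # share of appeals partially or fully overturned

appeals = denials * appeal_rate
overturned = appeals * overturn_rate
never_reexamined = denials - appeals

print(f"appeals filed:            {appeals:,.0f}")          # about 374,000
print(f"denials overturned:       {overturned:,.0f}")       # about 306,000
print(f"denials never re-examined: {never_reexamined:,.0f}") # about 2.8 million
```

The striking number is the last one: on these figures, roughly 2.8 million denials in a single year were never looked at again by anyone, while the small appealed fraction was reversed four times out of five.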
If the appeal process reverses more than four in five decisions on review, the appeal process is not a safety net bolted onto a functioning decision system. It is the decision system, belatedly engaged, in the small minority of cases where a patient has the time, the literacy, the advocacy, and the stamina to demand it. Everyone else simply absorbs the denial. That is not an operational detail. It is the design.
On 6 April 2026, STAT News reported that UnitedHealth Group, through its Optum Insight division, plans to spend at least three billion dollars over the next few years embedding AI more deeply into its claims processing, care management, fraud detection, and clinical documentation systems. Sandeep Dadlani, chief executive of Optum Insight, told reporters that the company employs 22,000 software engineers globally, that over 80 per cent of them now use AI to write code or build new agents, and that executives expect to generate a billion dollars in savings this year alone by pushing AI further into operations. Dadlani's framing was the one insurers have settled on: AI, he argued, will speed up decision-making and streamline health insurance's notoriously time-consuming bureaucracy.
He is not wrong about the bureaucracy. The American health insurance system wastes staggering amounts of time, labour, and money on a claims process that no participant, patient, provider, or payer, thinks works. The question is what “speed up decision-making” means when the original slowness was partly functional: the friction of human review was, at its best, the thing that caught errors, gave context, and let claimants be heard. If the friction is engineered out, so is the friction of accountability.
And the three-billion-dollar figure needs to be read alongside the context UnitedHealth is operating in. The company's former chief executive, Brian Thompson, was shot dead in Manhattan in December 2024 in an attack whose alleged perpetrator referenced the company's denial practices in his writings. The class action over nH Predict was allowed to proceed the following February. The Palm Beach Post investigation landed this April. There is, if one wants to read it this way, a choice the insurer has made. It could have used the last eighteen months to make its claims-processing systems more transparent, more accountable, more humane. It has instead committed to scaling them up, and measuring its own success in savings generated rather than denials avoided.
This is the logic that animates everything else in the sector. Under the business model that has built the American managed-care industry, every dollar approved in claims is a dollar of medical-loss ratio, and every dollar denied is, within the limits set by the Affordable Care Act's 80 to 85 per cent floor, a dollar of retained earnings. Any technology that lowers the marginal cost of generating a plausible denial, and raises the barrier to generating a successful appeal, is, from the perspective of the quarterly report, working exactly as intended. This is not a conspiracy theory. It is a reading of the incentives stated on the face of the filings.
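The incentive described above can be stated as a toy calculation. The figures below are entirely hypothetical and exist only to show the mechanism: premiums are collected in advance, so once the ACA's medical-loss-ratio floor is satisfied, each additional dollar of denied claims flows directly to margin.

```python
# Hypothetical illustration of the medical-loss-ratio incentive.
# All dollar amounts are invented; only the 85% floor comes from the ACA
# (80% for individual and small-group plans, 85% for large-group plans).
premium_revenue = 100_000_000
mlr_floor = 0.85
required_medical_spend = premium_revenue * mlr_floor  # $85,000,000

claims_submitted = 92_000_000
for denied in (0, 2_000_000, 5_000_000):
    paid = claims_submitted - denied
    margin = premium_revenue - paid
    compliant = paid >= required_medical_spend
    print(f"denied ${denied:>9,}: paid ${paid:,}, "
          f"margin ${margin:,}, MLR floor met: {compliant}")
```

In this sketch every scenario stays above the floor, so each denied dollar raises margin dollar for dollar; the floor constrains the total, not the individual denial.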
Because the regulators have not, in most states, built the infrastructure to track algorithmic denials systematically, that job has fallen to the patients and clinicians themselves, largely on Reddit. Communities such as r/nursing, r/medicine, and the various state-level and condition-specific subreddits have become, almost by accident, one of the most useful public archives of how AI-driven prior authorisation actually functions at the point of care.
The threads follow a recognisable rhythm. A nurse describes submitting a request for a patient whose case is, clinically, straightforward. A denial returns in seconds or, at most, a couple of minutes. The denial letter cites the insurer's internal clinical guidelines, which are not, in most cases, the same as published medical society guidelines. An appeal is mounted. The appeal takes weeks to resolve. In the interim, the patient either forgoes the treatment, pays out of pocket, or lands in a more expensive emergency setting that the insurer will then, often, cover. The commenters in these threads document the pattern because nobody else does. They are, in effect, doing the work that in a different jurisdiction would be done by an independent audit office.
The sub-two-second denial is not a single documented statistic; it is a folk fact, borne out by the Cigna PxDx data, by screenshots circulated in these communities, by the fax-timestamp evidence that rheumatologists and oncologists have been quietly compiling. A system that returns a denial before the clinical reasoning could plausibly have been read is a system that has, as a matter of physics, not been read. The courts, slowly, are beginning to say so. In the Cigna class action in California and the nH Predict case in Minnesota, the factual allegations that reviews were not meaningfully performed have survived motions to dismiss. Discovery is going to be, in a phrase one plaintiff's lawyer used on background, interesting.
The Reddit record is, of course, anecdotal in a formal evidentiary sense. It is also, collectively, thousands of practitioners with professional licences describing a consistent pattern. When the formal data and the informal data align this closely, and both are saying the same thing that independent investigators and academic researchers are saying, the reasonable assumption is not that the nurses are wrong.
If the picture so far suggests that legislators would rush to impose a human signature on AI-generated denials, the story of Florida House Bill 527 is a useful corrective. The bill, introduced by state Representative Hillary Cassel, would have required that every insurance claim denial or reduction be reviewed and signed off by a qualified human professional, with AI output permitted as an input but not as the sole basis for the decision. It was, by the standards of recent American legislative politics, a popular proposal. In early December 2025, a House panel unanimously backed it. It then passed the full Florida House on a 108 to 0 vote, a consensus across parties that is almost unheard of on any contested business-regulation matter.
Cassel was candid about what had moved her. Speaking to reporters, she said: “The genesis of this bill came to me with the murder of the United Healthcare CEO. One of the alleged motives was the denial basis by that company, and there's currently a class action that shows allegedly that 90 per cent of their claims were denied with errors when they utilized AI.” It is an extraordinary quote, because it concedes that the political window for reform opened at the moment a billionaire insurance executive was killed in the street, and that the opening was narrow.
The Senate version, SB 202, sponsored by Senator Don Gaetz, did not survive. Its last action, according to the Florida Senate's public record, was on 13 March 2026, when it died in the Banking and Insurance Committee without a floor vote. Industry representatives from the Florida Insurance Council, the American Property Casualty Insurance Association, and the Personal Insurance Federation of Florida lobbied against it, arguing that mandatory human review would slow the resolution of claims. The Florida Hospital Association and the Florida Medical Association, who represent the entities actually filing claims for patients, lobbied for it. The committee did not bring it up.
Zoom out and the pattern is familiar. A bipartisan legislative majority in a populous, insurance-heavy state backed a minimum procedural protection that almost everyone not in the insurance industry supported. It died in committee, quietly, without a recorded vote. There was no scandal. There was no single villain. There was, instead, the ordinary friction of legislative attention: a bill that had the votes to pass did not have the procedural protection to reach a vote, and a session ended. Multiply this failure across two dozen states and you get, approximately, the current regulatory environment.
Here is the analytic move the whole debate has been circling. The right to appeal, in American administrative and insurance law, is a right that assumes certain things about the original decision. It assumes there was a decision-maker. It assumes the decision-maker had reasons, which can be stated, contested, and either defended or abandoned on review. It assumes the appellant, given adequate time, can understand the basis of the decision well enough to argue against it. It assumes a symmetry of cognition between the original decision-maker and the appellate one.
An algorithmic denial breaks all of these assumptions at once.
It breaks the first because the decision-maker is not an individual but a pipeline. It breaks the second because modern models do not have reasons in any sense a lawyer would recognise; they have weights, activations, and outputs. Even the engineers who built the system cannot generally, for a specific denial, reconstruct why this patient's case tipped into the negative region of the decision surface. They can say what features mattered on average. They cannot say what mattered for Iris.
It breaks the third because the denial letter, drafted as the output of a template populated with a justification selected from a limited menu, tells the appellant something that may not be a true description of the decision. It is a plausible description, designed to be legally defensible and clinically intelligible, but the actual cause, somewhere in the latent space of the model, is not accessible to anyone. To appeal a denial on its stated grounds is to joust with a shadow.
And it breaks the fourth because the appellant is human and the opponent is a statistical system trained on millions of prior cases. The insurer's machinery can generate, cheaply, a thousand variations on why the original denial was sound. The patient has one case, one letter from their doctor, one window of time before the treatment decision becomes moot. The asymmetry is not the small asymmetry of a lay person versus a trained adjuster. It is an asymmetry of cognitive capacity, of parallelism, of cost per round, of a kind the administrative law of the 1970s did not contemplate.
This is why the Stanford group's paper matters more than a straightforward policy critique. Mello and her coauthors are not simply pointing out that AI sometimes gets it wrong. They are pointing out that the institutional scaffolding that was supposed to catch the errors was built for a different kind of decision-maker, and does not scale to the one now making the calls. A patient appealing an algorithmic denial is not, functionally, appealing at all in the sense the word was originally meant. They are triggering a subsequent stage of the same algorithmic process, in which the second layer inherits the priors of the first.
You can see, in the published reform proposals, two broad theories of how to repair this. The first, reflected in California's SB 1120 and the dead Florida HB 527, is to legislate a human signature back into the decision. Require that a named, licensed professional review and sign off on any denial, with documentary evidence that they did so. This is the bluntest and, on current evidence, the only version that insurers can be counted on to resist. It is also the most fragile, because the record of Cigna's medical directors clicking through denials at 1.2 seconds per case shows that “human signature” can be gamed into meaninglessness unless the rules specify what review means in minutes, in content, and in accountability.
The second theory is algorithmic transparency: require insurers to disclose the logic, the training data, the error rates, and the audit trails of the systems they use. This is the preferred framing of academics, regulators, and some of the AI industry itself. Its limits are by now familiar to anyone who has worked on explainable AI. For classical rules-based systems, transparency is straightforward. For modern neural systems, it is a research problem that has not been solved, and may not be solvable in the strong sense. An audit report that says “the model weights were examined” is not a substitute for the ability to say, of a particular denial, why it was made.
Neither theory, on its own, is sufficient. A mandated human signature without transparency produces fake review at industrial scale. Transparency without a mandated human signature produces elegant documentation of decisions that nobody can be held accountable for. The only versions that might actually work combine both: a human who must sign, a record of what they looked at when they signed, and a genuine, externally audited account of what the model contributed and why. Nothing currently in force in the United States, at the federal level, does this.
It is tempting to frame the whole situation as a fight about artificial intelligence, because AI is the novel element. But the deeper fight is about something older: whether a person subject to a consequential institutional decision has the right to a reasoned account of why the decision went the way it did, and a real chance to change it.
American health insurance, for reasons that long predate generative AI, has been steadily undermining that right for decades, through the proliferation of prior authorisation requirements, through narrow networks, through opaque formulary tiers, through appeals processes designed to exhaust rather than enlighten. The arrival of AI has not created the pathology. It has industrialised it. What used to take an adjuster an hour now takes a model a second, and what used to happen to thousands of patients a year now happens to millions. The scale changes the moral physics.
And the scale will grow. UnitedHealth's three-billion-dollar investment will not sit alone. Every other major insurer will match it, because they must, because the efficiency gains compound and the laggards lose. The Palm Beach Post investigation will be joined by others. The Reddit threads will lengthen. The Florida-style bills will pass in a few more states, and die in committee in many more. Somewhere in the middle of this, the language will drift: the word “review” will come to mean something smaller than it used to, the word “decision” something less personal, the word “appeal” something closer to a ritual than a remedy. This is already happening.
What stops the drift, if anything does, is a reassertion of the civic premise the whole insurance system was supposed to honour: that a claim is not a data point but a moment in a person's life, that a denial is not an output but an act, and that the entity issuing that act owes the person on the other end an intelligible reason and a real chance to be wrong about them. None of that is technologically impossible. Some of it is, in fact, quite cheap. What makes it hard is that the incentives, as currently aligned, reward the opposite: the cheapest plausible denial, issued at scale, defended just well enough to exhaust the appellant's capacity to keep fighting.
Iris, in the Palm Beach Post story, eventually got her medicine. Her doctor appealed on her behalf. It took weeks. She is one of the lucky ones, in that she had a doctor with the time and inclination to wage the fight. Most people do not. They have a denial letter, a phone tree, a model on the other end of the form, and a finite number of mornings on which they can open their hands enough to sign the next appeal. What the right to appeal means in practice, at this moment, is that if you are patient, and articulate, and unusually well-represented, you can sometimes persuade the system to notice you. That is not a right. It is a lottery with a ticket price measured in stamina. Whether it can still be repaired into something that deserves its own name is the question the next decade will answer, and the answer will not be written by the models.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from wystswolf

What is done in love is done well.
To draw a woman is to make love to her.
Not with the crude crescendo of sex, but slowly—
through the study of fat and muscle, the way flesh lies over bone.
The stretch of skin. Its surrender. How afternoon light wraps her like a lover’s embrace.
And it cannot be clinical.
Her vulnerability will not allow it.
She disrobes in layers, not only cloth but history—
until she lies as bare as she can bear.
Though the artist wishes to lay open the heart itself, to place upon the dais all the grief, all the love, there is only so much one sitting can hold.
Because this sort of undressing takes years.
And it is done not with fingers, but with trust. With words.
So when he renders the breast, slaving to capture the caress of north light,
it is not merely flesh he paints,
but longing, memory, the armor she built around the fist of muscle beating behind it.
And the eye does not trespass upon her tenderness.
It moves over her like warm water.
And so love is made—
a current passing between the drawer and the drawn,
until they are bound forever in color and light.
#poetry #wyst #art #artist #painting
from Lastige Gevallen in de Rede
Less is more and nothing is the most you can get, and I think I'm on my way to achieving all that.
from TechNewsLit Explores
Photos from Old Glory DC’s first home rugby match of the season are now posted on a gallery at the TechNewsLit portfolio on Smugmug. Old Glory DC, the Washington, D.C. region’s franchise in Major League Rugby, lost the match to California Legion, 36-23.
The team played its initial three matches on the road, first losing to Seattle, then beating New England and Carolina before taking on California Legion, a team from Southern California. It was also Old Glory DC's first match at George Mason University stadium in Fairfax, Virginia, with fans filling most grandstand seats and touch-line (sideline) boxes and spaces. In previous years, the team played its matches in exurban Maryland, farther from D.C.

Both DC and California moved the ball up and down the pitch (field) during the match, but California took advantage of DC turnovers to score early and run up a big lead at halftime. DC scored two tries, like touchdowns in American football, late in the match, but it was not enough to overtake California.
TechNewsLit also covered Old Glory DC’s open practice on 28 March. Photos from both the 26 April match and the open practice carry a Creative Commons – Attribution (CC BY 4.0) license. See usage requirements and specifications at https://creativecommons.org/licenses/by/4.0/ .
Copyright © Technology News and Literature. All rights reserved.