Want to join in? Respond to our weekly writing prompts, open to everyone.
from Faucet Repair
6 May 2026
Belief structure: finally a title and a resolution for the small wireframe star sculpture painting I've been working on. I originally thought it would serve as a study for a larger work, and it still might. But it holds its own now, I think. Jonathan's feedback helped me believe in it (thank you, Jonathan, if you're reading this). I've been spending a lot of time with Hans Bellmer's drawings and paintings, especially an untitled painting from 1956 that was included in Galerie 1900-2000's 2023 show The Surreal World of Hans Bellmer—a delicate, precise constellation of thin forms, subtly highlighted by small pink accents, spanning a cloudy blue-green space, that brings to mind knuckles or protrusions from a landscape in the vein of the 1920s Paul Klee linework I've mentioned here recently. That must have been a guide for Belief structure, and it seems fruitful to veer further into the space that work lives in as I try to formulate my own way of getting forms to reckon with the illusory space they inhabit, both in the imagination and on the surface.
from Faucet Repair
4 May 2026
Adrian Morris at Sylvia Kouvali: first time seeing his work in person, and first time seeing a show at Sylvia Kouvali. Which I mention because it will likely be my last if they install every painting show like this one. The gallery's space has some natural charm with its patterned wood floor and roughly textured white walls capped by a ring of pale yellow tiling that kisses the ceiling, but the room was really dark, and the paintings were inexplicably lit by fluorescent white tube lights placed directly underneath them. Not only did this completely change the experience of the color and surface dimensionality of the work, but when you try to get close to a painting, the light nearly blinds you from below. Completely distracting, irresponsible, and unfair to the artist and the work. Not to mention the audience. Curatorial malpractice. It takes a lot for me to complain, but it's warranted here. Especially when presenting work that is all about subtlety of line and texture and space via long-term accumulated surfaces. The work is probably lovely in the right setting, and I'm glad I saw it. One little portion that was chipped away from a pink painting to reveal an entirely cerulean blue layer embedded deep down was worth the visit. I can imagine they were real meditations. I just think Mr. Morris would turn in his grave if he were to see how his life's work is being treated in this show.
from Roscoe's Quick Notes
Or perhaps a recovery weekend? We'll just have to see how this recovery goes. I will keep you posted daily, as best I can.
Short version: at yesterday's eye appointment we learned that the AMD (wet macular degeneration) in both eyes has gotten much worse. After yesterday's injections my vision became extremely blurry and both eyes were very painful, especially when I opened my eyelids so I could try to see. That pain has now (on Saturday morning as I sit here at my desk) mostly passed, thank God!
Later this month I'll be seeing my primary doc at a regularly scheduled appointment, and my retina doc wants me to talk with him about things that he and I discussed at yesterday's appointment.
Next retina appointment is set for mid-June. Retina Doc will be talking with my insurance provider to learn how much of the cost of a new injection medicine they'll cover.
And the adventure continues.
from An Open Letter
Not really sure what happened, but I got put on the waitlist for the Barcade event tomorrow that I was looking forward to. Oh well, I'm kind of grateful that I get to take a bit of a breather from all of the socialization, and I need to catch up on Attack on Titan in time for the movie anyway. I feel like I'm starting to become more and more extroverted; I'm noticing that I'm less anxious with every new interaction, and I'm not necessarily drained afterwards either. I don't really feel that crash that sometimes comes with social experiences. It's actually really nice to have a constant stream of events with people from a source that I don't need to create. I don't need to worry about all the logistics of hosting or setting up an event, because I can just go to one of these. I feel like there is a cup-half-full and cup-half-empty moment here. On the full side, I feel like I am very lively and constantly making everyone at my table laugh. And I think this has helped my self-confidence, because I am more and more confident that I am a very interesting person who is charismatic and very good at conversation. I can talk to essentially anyone and have a good conversation, one where people look to join and want to interact with me more in the future. I think I've also gotten a lot more comfortable with soft social skills like ending conversations, introducing myself to people, or joining and moving around different social groups. I've gotten a lot more comfortable with eating with people, which is actually very nice. I used to be very anxious around it, because I wasn't allowed to do this growing up, and as a result I felt very anxious because it was very unfair.
But I've had a good number of experiences now, both one-on-one and in group settings, and I've been able to recognize that a lot of the concerns I had, while valid, are really things that only exist when I try to solve a situation or make sure I fully understand it before jumping into it. I also want to recognize that it's only taken me a few experiences to feel comfortable with this, and I think that's a testament to my growth and versatility.
I do think, however, there's also the cup-half-empty perspective. The people I've met have ranged from people I just don't really mess with or don't really enjoy interacting with, to people who are almost like sidekicks, for lack of a better word. It's felt like some of the friends I've made don't really speak up in conversations or contribute too much, but are reliable people to laugh at jokes with, or to talk to at any point. And I do value these friends, and I think they serve an important niche in social groups, but I haven't really felt like I've met people who are good at conversation or funny, like my gold standard of A. I get discouraged when I think about how I would like to find someone who reminds me of me and can make me laugh similarly, because that will always be biased by the fact that I have spent my entire life with myself in a way that no one else can, and my perception of other people will always be different from my perception of myself. But when I think about A, or A, they can consistently make me laugh without me providing something. I have a lot of friends who can make me laugh in the sense that I can make a joke, or provide something, or build on something they say, but only a few friends who are just genuinely very creative and funny. I kind of wish I were able to meet more people like that, and it feels rare. And I think that's the pessimistic angle: I have met a dozen or so people in the last week and haven't really found anyone who makes me laugh consistently. This isn't to say that I haven't found great people and new friends, but there is still something to be desired.
from 下川友
Breakfast this morning was at Mister Donut. When I go to Mister Donut, I usually go to a nearby location I can reach by train, but today I also had to bring home a 5 kg bag of rice, so I looked for a location I could drive to instead.
I have one small complaint about my usual Mister Donut: it doesn't carry my two favorites, the coconut chocolate and the old fashioned cinnamon. I usually settle for a honey churro as runner-up and pick one more on a whim, but the location I visited today had both. Thinking I'd make this place my new regular, I ordered my donuts.
What goes with donuts is, of course, hot coffee. Once it gets a little hotter out, it might have to be iced coffee, but at this level of warmth, hot is still right. Better yet, this location apparently offers free coffee refills.
Afterwards, I walked over to look for a nearby supermarket. At Rosen, I found a 5 kg bag of rice for under 3,000 yen. Surprised that rice at that price still existed, I bought it on the spot.
Then I hopped to another supermarket, Aoba. It was unusually crowded; apparently today was their once-a-month sale day. Vegetables, meat, fruit: everything was cheaper than I expected. Since I was there, I decided to do the bulk of my shopping.
I was genuinely surprised that a sale could cut prices this clearly; it was a bit of an eye-opener.
Strawberries were discounted too, so I bought some. Big and fully ripe: leave them just a little longer and they'd be past their prime, which means right now is when they taste best. Juiciness flooded my mouth, and life felt a little richer.
from Meditaciones
We come to understand truth through our experiences.
from SmarterArticles

The fax machine in a Florida rheumatologist's office, the least futuristic object in any American clinic, still receives a steady stream of prior authorisation decisions from health insurers. In early April 2026, one of those faxes, addressed to a patient the Palm Beach Post would eventually call only by her first name, Iris, came back in under the time it takes to pour a cup of coffee. The request had been submitted a few minutes earlier. The reply, denying coverage for an injection she had been receiving for years, was generated, signed, and transmitted without any documented human pause in the middle. Iris is 80. Her hands, on the worst mornings, do not open. Her doctor, looking at the timestamp, understood instantly what had happened. The claim had not been reviewed. It had been processed.
The word processed has started to carry a weight it was never designed to hold. In the American health insurance system in 2026, it is the polite term for an event that, in almost any other domain of life, we would call a decision: a binding determination about whether a human being will have access to the medical care their doctor has recommended. Except the entity making the decision is not a person. It is a model. And the model, as anyone who has tried to ask one why it did what it did already knows, does not owe anyone an explanation.
This is the quiet crisis at the centre of the Palm Beach Post investigation published this month, which spent weeks charting how artificial intelligence has begun to deny health insurance claims at a scale and a speed no human reviewer could match. It is also the crisis at the centre of a Stanford study in Health Affairs, which landed in January, warning that the human oversight supposedly wrapped around these systems is too thin, too rushed, and too incentivised by the wrong things to function as a real check. And it is the crisis sitting on top of a three-billion-dollar bet from the largest health insurer in the United States, UnitedHealth Group, that the answer to all of this, after the litigation and the newspaper investigations and a murdered chief executive, is to put more artificial intelligence into the pipeline, not less.
The question the brief for this piece asked is deceptively simple: if the systems making some of the most consequential decisions in people's lives cannot explain their reasoning, and the regulatory framework to challenge them barely exists, what does the right to appeal actually mean in practice? It sounds like a legal question. It turns out to be something stranger. It is a question about whether a civic procedure that assumed a human decision-maker on the other end of the form still works when the other end of the form is a probability distribution.
Start with the basic mechanics, because they have moved faster than the public understanding of them. Cigna's now notorious PxDx system, exposed by ProPublica and The Capitol Forum in March 2023, was an early glimpse of the genre. Internal spreadsheets showed Cigna's medical directors spending an average of 1.2 seconds on each of more than 300,000 claim denials over two months. One doctor, Dr Cheryl Dopke, was reported to have signed off on approximately 60,000 denials in a single month. A former Cigna physician told ProPublica's reporters, Patrick Rucker, Maya Miller, and David Armstrong, that the review process was essentially cosmetic: “We literally click and submit. It takes all of 10 seconds to do 50 at a time.”
The revealing word in that sentence is “literally”. It is the language of someone who has realised that the verb “review”, as it appears in the regulatory paperwork, is doing work it cannot possibly do.
Eight months later, a class action lawsuit against UnitedHealth's nH Predict algorithm, operated through its NaviHealth subsidiary, alleged that Medicare Advantage patients in post-acute care were being cut off from rehabilitation services in bad faith, with employees pressured to keep stays within 1 per cent of the length predicted by the model. When federal administrative law judges eventually heard appeals on these denials, roughly 90 per cent were reversed, according to the complaint. Only a tiny fraction of denied patients ever appeal. In February 2025, the federal court in Minnesota denied UnitedHealth's motion to dismiss the breach of contract and bad-faith claims, allowing the case to proceed.
Then, in late 2024, ProPublica and The Capitol Forum turned to EviCore, the utilisation-management arm of Evernorth owned by Cigna, which sells its services to other insurers. EviCore operates what some internal sources called “the dial”, an algorithm that scores each prior authorisation request with a probability of approval. The company can tune the threshold: if it wants more denials, it can lower the bar at which a request gets referred to human reviewers, who are statistically much more likely to deny than to approve. ProPublica reported that EviCore markets itself to insurers on the basis of a three-to-one return, promising three dollars in saved medical costs for every dollar the insurer pays it. Its denial rate in Arkansas, one of the few states that requires publication of the figure, ran at close to 20 per cent, compared with about 7 per cent for Medicare Advantage nationally.
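The mechanics of such a dial are simple enough to sketch. The following is an illustrative toy, not EviCore's actual system: every name in it (`route_requests`, the threshold values, the sample scores) is hypothetical, and the only point it demonstrates is that a single tunable number silently controls how many requests are diverted to reviewers who, statistically, deny far more often than the model alone.

```python
# Illustrative sketch of a tunable referral threshold (hypothetical names;
# not EviCore's actual code). Requests scored below the threshold are routed
# to human reviewers, who are statistically much likelier to deny.
def route_requests(scored_requests, threshold):
    """scored_requests: list of (request_id, approval_probability) pairs."""
    auto_approved, referred = [], []
    for request_id, p_approve in scored_requests:
        if p_approve >= threshold:
            auto_approved.append(request_id)
        else:
            referred.append(request_id)  # sent to human review
    return auto_approved, referred

requests = [("a", 0.95), ("b", 0.80), ("c", 0.55), ("d", 0.30)]

# Turning the dial: a higher threshold diverts more requests to review,
# and therefore, in expectation, produces more denials.
approved_lo, referred_lo = route_requests(requests, threshold=0.5)
approved_hi, referred_hi = route_requests(requests, threshold=0.9)
```

Nothing in the model itself has to change for the denial rate to move; only the threshold does, which is why the setting of that one number is a business decision rather than a clinical one.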
The Palm Beach Post's April 2026 investigation, reported by Anne Geggis, extends this lineage into the near-present. The Post documented how AI tools are now embedded deep inside pre-authorisation workflows in Florida, one of 22 states the paper identified as having adopted no specific rules governing how AI can be used to reject a claim. The figure of 22 is the one that ought to give pause. These are not marginal jurisdictions. They include Florida, Georgia, Minnesota, and Oregon. Roughly half the American population lives in a state where an insurer can, in principle, use an algorithm to deny care without a single statute on the books requiring that algorithm to be explainable, auditable, or subject to human sign-off.
In contrast, California's Physicians Make Decisions Act, signed by Governor Gavin Newsom in September 2024 and in force since January 2025, explicitly requires that a denial, delay, or modification based on medical necessity be made by a licensed physician or competent provider. Arizona, Maryland, Nebraska, and Texas have adopted versions of the same principle. The federal Centers for Medicare & Medicaid Services issued guidance in 2024 restricting the use of algorithmic tools as the sole basis for Medicare Advantage denials. None of this changes the underlying asymmetry. State laws end at state lines. The models are national, their deployments enterprise-wide, and the training data pooled from populations that do not consent to being training data in the first place.
Into this landscape, on 6 January 2026, Michelle Mello, Professor of Health Policy and Law at Stanford, and three colleagues (Artem A. Trotsyuk, Abdoul Jalil Djiberou Mahamadou, and Danton Char) published a paper in Health Affairs with the unusually blunt title, “The AI Arms Race in Health Insurance Utilization Review: Promises of Efficiency and Risks of Supercharged Flaws”. The paper is a careful, cold document. It does not call for a ban on AI in insurance. It does something more corrosive. It describes, in sober detail, why the reassurances everyone keeps giving, about human reviewers, about oversight, about governance, do not correspond to anything that is actually happening inside the insurers.
The central finding is that meaningful human oversight of AI-driven prior authorisation is, in Mello's own phrasing, largely a myth. Human reviewers at insurance companies, the paper observes, often lack the time, the relevant clinical expertise, and the incentives to meaningfully interrogate the recommendations produced by a model. The opacity of modern systems compounds this. An adjuster presented with a denial recommendation does not see a chain of reasoning that can be evaluated. They see an output. To push back on the output, they would have to reproduce, from scratch, the analysis that led to it, without access to the training data, the feature weights, or a record of how similar cases were decided in the past. Given production targets, they do not do this. They click.
Mello's paper notes that past flawed coverage decisions become embedded in the training data for the next generation of models, which then reproduce and scale the pattern. The phrase “supercharged flaws” is not rhetorical. It is a description of what happens when a statistical system is trained on a history of denials and then used to generate future denials, with the previous denials as ground truth. Mistakes do not get caught. They get normalised, archived, and re-expressed at volume.
The data on downstream appeals has circulated for a while, but the Stanford paper pulls it into focus. In Medicare Advantage, according to KFF's January 2025 analysis of 2023 figures, insurers made nearly 50 million prior authorisation determinations, denied 3.2 million of them, and saw only 11.7 per cent of those denials appealed. Of those appealed, 81.7 per cent were partially or fully overturned. In an earlier era, overturn rates above 80 per cent on appeal would have prompted a federal reckoning. In the current system, they are published in briefing notes and largely forgotten by the following week.
If the appeal process reverses more than four in five decisions on review, the appeal process is not a safety net bolted onto a functioning decision system. It is the decision system, belatedly engaged, in the small minority of cases where a patient has the time, the literacy, the advocacy, and the stamina to demand it. Everyone else simply absorbs the denial. That is not an operational detail. It is the design.
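The KFF figures quoted above can be run as back-of-the-envelope arithmetic to show how small the ultimately corrected fraction is. This is a sketch: the percentages are the published ones, and the absolute counts simply follow from them.

```python
# Back-of-the-envelope on KFF's 2023 Medicare Advantage figures quoted above.
denials = 3_200_000        # prior authorisation denials
appeal_rate = 0.117        # 11.7% of denials were appealed
overturn_rate = 0.817      # 81.7% of appeals partially or fully overturned

appealed = denials * appeal_rate        # ~374,000 denials ever appealed
overturned = appealed * overturn_rate   # ~306,000 denials reversed on appeal
absorbed = denials - appealed           # ~2.8 million denials never contested

# Fraction of all denials that are ultimately corrected:
share_corrected = overturned / denials
print(f"{share_corrected:.1%} of denials are ultimately overturned")  # ~9.6%
```

Read that way, the 81.7 per cent overturn rate is less reassuring than it sounds: because so few people appeal, fewer than one in ten denials is ever reversed, and the remaining millions stand unreviewed.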
On 6 April 2026, STAT News reported that UnitedHealth Group, through its Optum Insight division, plans to spend at least three billion dollars over the next few years embedding AI more deeply into its claims processing, care management, fraud detection, and clinical documentation systems. Sandeep Dadlani, chief executive of Optum Insight, told reporters that the company employs 22,000 software engineers globally, that over 80 per cent of them now use AI to write code or build new agents, and that executives expect to generate a billion dollars in savings this year alone by pushing AI further into operations. Dadlani's framing was the one insurers have settled on: AI, he argued, will speed up decision-making and streamline health insurance's notoriously time-consuming bureaucracy.
He is not wrong about the bureaucracy. The American health insurance system wastes staggering amounts of time, labour, and money on a claims process that no participant, patient, provider, or payer, thinks works. The question is what “speed up decision-making” means when the original slowness was partly functional: the friction of human review was, at its best, the thing that caught errors, gave context, and let claimants be heard. If the friction is engineered out, so is the friction of accountability.
And the three-billion-dollar figure needs to be read alongside the context UnitedHealth is operating in. The company's former chief executive, Brian Thompson, was shot dead in Manhattan in December 2024 in an attack whose alleged perpetrator referenced the company's denial practices in his writings. The class action over nH Predict was allowed to proceed the following February. The Palm Beach Post investigation landed this April. There is, if one wants to read it this way, a choice the insurer has made. It could have used the last eighteen months to make its claims-processing systems more transparent, more accountable, more humane. It has instead committed to scaling them up, and measuring its own success in savings generated rather than denials avoided.
This is the logic that animates everything else in the sector. Under the business model that has built the American managed-care industry, every dollar approved in claims is a dollar of medical-loss ratio, and every dollar denied is, within the limits set by the Affordable Care Act's 80 to 85 per cent floor, a dollar of retained earnings. Any technology that lowers the marginal cost of generating a plausible denial, and raises the barrier to generating a successful appeal, is, from the perspective of the quarterly report, working exactly as intended. This is not a conspiracy theory. It is a reading of the incentives stated on the face of the filings.
Because the regulators have not, in most states, built the infrastructure to track algorithmic denials systematically, that job has fallen to the patients and clinicians themselves, largely on Reddit. Communities such as r/nursing, r/medicine, and the various state-level and condition-specific subreddits have become, almost by accident, one of the most useful public archives of how AI-driven prior authorisation actually functions at the point of care.
The threads follow a recognisable rhythm. A nurse describes submitting a request for a patient whose case is, clinically, straightforward. A denial returns in seconds or, at most, a couple of minutes. The denial letter cites the insurer's internal clinical guidelines, which are not, in most cases, the same as published medical society guidelines. An appeal is mounted. The appeal takes weeks to resolve. In the interim, the patient either forgoes the treatment, pays out of pocket, or lands in a more expensive emergency setting that the insurer will then, often, cover. The commenters in these threads document the pattern because nobody else does. They are, in effect, doing the work that in a different jurisdiction would be done by an independent audit office.
The sub-two-second denial is not a single documented statistic; it is a folk fact, borne out by the Cigna PxDx data, by screenshots circulated in these communities, by the fax-timestamp evidence that rheumatologists and oncologists have been quietly compiling. A system that returns a denial before the clinical reasoning could plausibly have been read is a system that has, as a matter of physics, not been read. The courts, slowly, are beginning to say so. In the Cigna class action in California and the nH Predict case in Minnesota, the factual allegations that reviews were not meaningfully performed have survived motions to dismiss. Discovery is going to be, in a phrase one plaintiff's lawyer used on background, interesting.
The Reddit record is, of course, anecdotal in a formal evidentiary sense. It is also, collectively, thousands of practitioners with professional licences describing a consistent pattern. When the formal data and the informal data align this closely, and both are saying the same thing that independent investigators and academic researchers are saying, the reasonable assumption is not that the nurses are wrong.
If the picture so far suggests that legislators would rush to impose a human signature on AI-generated denials, the story of Florida House Bill 527 is a useful corrective. The bill, introduced by state Representative Hillary Cassel, would have required that every insurance claim denial or reduction be reviewed and signed off by a qualified human professional, with AI output permitted as an input but not as the sole basis for the decision. It was, by the standards of recent American legislative politics, a popular proposal. In early December 2025, a House panel unanimously backed it. It then passed the full Florida House on a 108 to 0 vote, a consensus across parties that is almost unheard of on any contested business-regulation matter.
Cassel was candid about what had moved her. Speaking to reporters, she said: “The genesis of this bill came to me with the murder of the United Healthcare CEO. One of the alleged motives was the denial basis by that company, and there's currently a class action that shows allegedly that 90 per cent of their claims were denied with errors when they utilized AI.” It is an extraordinary quote, because it concedes that the political window for reform opened at the moment a billionaire insurance executive was killed in the street, and that the opening was narrow.
The Senate version, SB 202, sponsored by Senator Don Gaetz, did not survive. Its last action, according to the Florida Senate's public record, was on 13 March 2026, when it died in the Banking and Insurance Committee without a floor vote. Industry representatives from the Florida Insurance Council, the American Property Casualty Insurance Association, and the Personal Insurance Federation of Florida lobbied against it, arguing that mandatory human review would slow the resolution of claims. The Florida Hospital Association and the Florida Medical Association, who represent the entities actually filing claims for patients, lobbied for it. The committee did not bring it up.
Zoom out and the pattern is familiar. A bipartisan legislative majority in a populous, insurance-heavy state backed a minimum procedural protection that almost everyone not in the insurance industry supported. It died in committee, quietly, without a recorded vote. There was no scandal. There was no single villain. There was, instead, the ordinary friction of legislative attention: a bill that had the votes to pass did not have the procedural protection to reach a vote, and a session ended. Multiply this failure across two dozen states and you get, approximately, the current regulatory environment.
Here is the analytic move the whole debate has been circling. The right to appeal, in American administrative and insurance law, is a right that assumes certain things about the original decision. It assumes there was a decision-maker. It assumes the decision-maker had reasons, which can be stated, contested, and either defended or abandoned on review. It assumes the appellant, given adequate time, can understand the basis of the decision well enough to argue against it. It assumes a symmetry of cognition between the original decision-maker and the appellate one.
An algorithmic denial breaks all of these assumptions at once.
It breaks the first because the decision-maker is not an individual but a pipeline. It breaks the second because modern models do not have reasons in any sense a lawyer would recognise; they have weights, activations, and outputs. Even the engineers who built the system cannot generally, for a specific denial, reconstruct why this patient's case tipped into the negative region of the decision surface. They can say what features mattered on average. They cannot say what mattered for Iris.
It breaks the third because the denial letter, drafted as the output of a template populated with a justification selected from a limited menu, tells the appellant something that may not be a true description of the decision. It is a plausible description, designed to be legally defensible and clinically intelligible, but the actual cause, somewhere in the latent space of the model, is not accessible to anyone. To appeal a denial on its stated grounds is to joust with a shadow.
And it breaks the fourth because the appellant is human and the opponent is a statistical system trained on millions of prior cases. The insurer's machinery can generate, cheaply, a thousand variations on why the original denial was sound. The patient has one case, one letter from their doctor, one window of time before the treatment decision becomes moot. The asymmetry is not the small asymmetry of a lay person versus a trained adjuster. It is an asymmetry of cognitive capacity, of parallelism, of cost per round, of a kind the administrative law of the 1970s did not contemplate.
This is why the Stanford group's paper matters more than a straightforward policy critique. Mello and her coauthors are not simply pointing out that AI sometimes gets it wrong. They are pointing out that the institutional scaffolding that was supposed to catch the errors was built for a different kind of decision-maker, and does not scale to the one now making the calls. A patient appealing an algorithmic denial is not, functionally, appealing at all in the sense the word was originally meant. They are triggering a subsequent stage of the same algorithmic process, in which the second layer inherits the priors of the first.
You can see, in the published reform proposals, two broad theories of how to repair this. The first, reflected in California's SB 1120 and the dead Florida HB 527, is to legislate a human signature back into the decision. Require that a named, licensed professional review and sign off on any denial, with documentary evidence that they did so. This is the bluntest and, on current evidence, the only version that insurers can be counted on to resist. It is also the most fragile, because the record of Cigna's medical directors clicking through denials at 1.2 seconds per case shows that “human signature” can be gamed into meaninglessness unless the rules specify what review means in minutes, in content, and in accountability.
The second theory is algorithmic transparency: require insurers to disclose the logic, the training data, the error rates, and the audit trails of the systems they use. This is the preferred framing of academics, regulators, and some of the AI industry itself. Its limits are by now familiar to anyone who has worked on explainable AI. For classical rules-based systems, transparency is straightforward. For modern neural systems, it is a research problem that has not been solved, and may not be solvable in the strong sense. An audit report that says “the model weights were examined” is not a substitute for the ability to say, of a particular denial, why it was made.
Neither theory, on its own, is sufficient. A mandated human signature without transparency produces fake review at industrial scale. Transparency without a mandated human signature produces elegant documentation of decisions that nobody can be held accountable for. The only versions that might actually work combine both: a human who must sign, a record of what they looked at when they signed, and a genuine, externally audited account of what the model contributed and why. Nothing currently in force in the United States, at the federal level, does this.
It is tempting to frame the whole situation as a fight about artificial intelligence, because AI is the novel element. But the deeper fight is about something older: whether a person subject to a consequential institutional decision has the right to a reasoned account of why the decision went the way it did, and a real chance to change it.
American health insurance, for reasons that long predate generative AI, has been steadily undermining that right for decades, through the proliferation of prior authorisation requirements, through narrow networks, through opaque formulary tiers, through appeals processes designed to exhaust rather than enlighten. The arrival of AI has not created the pathology. It has industrialised it. What used to take an adjuster an hour now takes a model a second, and what used to happen to thousands of patients a year now happens to millions. The scale changes the moral physics.
And the scale will grow. UnitedHealth's three-billion-dollar investment will not sit alone. Every other major insurer will match it, because they must, because the efficiency gains compound and the laggards lose. The Palm Beach Post investigation will be joined by others. The Reddit threads will lengthen. The Florida-style bills will pass in a few more states, and die in committee in many more. Somewhere in the middle of this, the language will drift: the word “review” will come to mean something smaller than it used to, the word “decision” something less personal, the word “appeal” something closer to a ritual than a remedy. This is already happening.
What stops the drift, if anything does, is a reassertion of the civic premise the whole insurance system was supposed to honour: that a claim is not a data point but a moment in a person's life, that a denial is not an output but an act, and that the entity issuing that act owes the person on the other end an intelligible reason and a real chance to be wrong about them. None of that is technologically impossible. Some of it is, in fact, quite cheap. What makes it hard is that the incentives, as currently aligned, reward the opposite: the cheapest plausible denial, issued at scale, defended just well enough to exhaust the appellant's capacity to keep fighting.
Iris, in the Palm Beach Post story, eventually got her medicine. Her doctor appealed on her behalf. It took weeks. She is one of the lucky ones, in that she had a doctor with the time and inclination to wage the fight. Most people do not. They have a denial letter, a phone tree, a model on the other end of the form, and a finite number of mornings on which they can open their hands enough to sign the next appeal. What the right to appeal means in practice, at this moment, is that if you are patient, and articulate, and unusually well-represented, you can sometimes persuade the system to notice you. That is not a right. It is a lottery with a ticket price measured in stamina. Whether it can still be repaired into something that deserves its own name is the question the next decade will answer, and the answer will not be written by the models.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from wystswolf

What is done in love is done well.
To draw a woman is to make love to her.
Not with the crude crescendo of sex, but slowly—
through the study of fat and muscle, the way flesh lies over bone.
The stretch of skin. Its surrender. How afternoon light wraps her like a lover’s embrace.
And it cannot be clinical.
Her vulnerability will not allow it.
She disrobes in layers, not only cloth but history—
until she lies as bare as she can bear.
Though the artist wishes to lay open the heart itself, to place upon the dais all the grief, all the love, there is only so much one sitting can hold.
Because this sort of undressing takes years.
And it is done not with fingers, but with trust. With words.
So when he renders the breast, slaving to capture the caress of north light,
it is not merely flesh he paints,
but longing, memory, the armor she built around the fist of muscle beating behind it.
And the eye does not trespass upon her tenderness.
It moves over her like warm water.
And so love is made—
a current passing between the drawer and the drawn,
until they are bound forever in color and light.
#poetry #wyst #art #artist #painting
from Lastige Gevallen in de Rede
Less is more and nothing is the most you can get, and I think I'm on my way to achieving all that.
from TechNewsLit Explores
Photos from Old Glory DC’s first home rugby match of the season are now posted on a gallery at the TechNewsLit portfolio on Smugmug. Old Glory DC, the Washington, D.C. region’s franchise in Major League Rugby, lost the match to California Legion, 36-23.
The team played its initial three matches on the road, first losing to Seattle, then beating New England and Carolina before taking on California Legion, a team from Southern California. It was also Old Glory DC’s first match at George Mason University stadium in Fairfax, Virginia, with fans filling most grandstand seats and touch-line (sideline) boxes and spaces. In previous years, the team played its matches in exurban Maryland, a longer distance from D.C.

Both DC and California moved the ball up and down the pitch (field) during the match, but California took advantage of DC turnovers to score early and run up a big lead by halftime. DC scored two tries, like touchdowns in American football, late in the match, but it was not enough to overtake California.
TechNewsLit also covered Old Glory DC’s open practice on 28 March. Photos from both the 26 April match and the open practice carry a Creative Commons – Attribution (CC BY 4.0) license. See usage requirements and specifications at https://creativecommons.org/licenses/by/4.0/ .
Copyright © Technology News and Literature. All rights reserved.
from appbrew
Preparing for OET can feel overwhelming at first, especially when most online resources are either too generic or don’t reflect the actual exam format.
One thing that helped me improve was practicing with realistic healthcare-focused mock tests instead of random English exercises.
I recently started using: https://oet.appbrewprojects.com
Android App: https://play.google.com/store/apps/details?id=com.appbrewprojects.oet
It includes practice for Listening, Reading, Writing, and Speaking, specifically designed for nurses and doctors preparing for OET.
What I found useful:
- Simple interface
- Exam-style practice
- Profession-focused preparation
- Easy access to mock tests
If you're preparing for OET, consistent daily practice with realistic materials can make a huge difference.
Good luck to everyone working toward their healthcare career goals.
from M.A.G. blog, signed by Lydia
Lydia's Weekly Lifestyle blog is for today's African girl, so no subject is taboo. My purpose is to share things that may interest today's African girl.
MET fails gone wild. The Met Gala is an annual fundraising gala held on the first Monday in May to benefit the Metropolitan Museum of Art's Costume Institute in New York City. It is where fashion is supposed to transcend into art… but every year, a few looks accidentally transcend into confusion instead. And 2026? Oh, it gave us drama, ambition, and a handful of “what exactly am I looking at?” moments that we simply cannot ignore. With stars like Beyoncé and Rihanna gracing the 2026 Met Gala red carpet, Beyoncé's return after nearly a decade away became one of the night's biggest highlights, especially at an event where a single ticket reportedly costs up to $100,000.
First up, the “Living Sculpture Gone Rogue” category. You know the look: structured, architectural, and bold—until it starts wearing the celebrity instead of the other way around.
One star arrived looking like a walking installation piece, complete with jutting metallic extensions that made sitting, turning, or even waving nearly impossible. Art? Yes. Practical? Absolutely not. The Met steps turned into an obstacle course, and honestly, the security guards deserved an award too.
Then there was the “Paint Me Like One of Your French Girls… Literally” moment. A celebrity showed up fully airbrushed in what seemed like a tribute to body-as-canvas artistry. In theory, stunning. In reality? Under the flash photography, the paint read less “ethereal masterpiece” and more “accidentally brushed against a wet mural.” The vision was there… the execution just needed a little less humidity and a little more sealing spray.
And then there’s the “Is It Moving or Am I?” category. Kinetic fashion made a bold appearance, with pieces that spun, blinked, inflated, or shifted shape mid-carpet. One dress dramatically expanded like a blooming flower… and then refused to deflate. Iconic entrance? Yes. Smooth exit? Not quite. The after-party logistics must have been a nightmare.
But here’s the thing about Met Gala “fails”—they’re rarely boring. In fact, they’re often the most memorable.
Chanel. This is an interesting brand, owned by neither LVMH nor Kering, who between them own the majority of the big brands. I'll write about the original founder of Chanel in another blog; it's quite an intriguing story, with lessons for today. Chanel is primarily known for perfumes, though today they do fashion and cosmetics/skincare as well. Chanel No. 5 is their top-performing perfume and also the world's top-selling perfume. It was created in 1921 by Ernest Beaux, a French-Russian national who was formerly perfumer to the Russian Tsars (overthrown in 1917, so Beaux was probably looking for a job).
And every 30 ml bottle of Chanel No. 5 perfume (a “small” size bottle) contains about 1000 jasmine flowers, and about 80 other scents. And not just any jasmine, only jasmine from the Grasse area in France. The real connoisseurs claim that every flower partly takes its scent from the soil it is grown on, like wine. So jasmine from Grasse smells different from jasmine grown in Ghana. Jasmine, a tiny flower, opens at night and is harvested as the sun comes up, when the blooms are at their most fragrant.
Each one is picked by hand; they're too delicate for machines. The harvest ends before the midday heat can damage the petals, which are kept covered with a wet cloth so they stay cool. The blooms are then rushed to an on-site factory where the fragrance is extracted using a 150-year-old technique developed in Grasse.
Speed is essential. If the flowers brown, the scent changes and “they smell of bad fruit”. The jasmine is placed into a vat and steeped overnight, like tea, and eventually the concentrated form of jasmine, called absolute, is sent to a factory near Paris, where a few drops go into each bottle of Chanel No. 5.
Today Chanel No. 5 is available in five main concentrations, from the intense original parfum to lighter, modern interpretations: the Parfum (Extrait), Eau de Parfum (EDP), Eau de Toilette (EDT), Eau Première, and L'EAU. These range from rich floral-aldehydic blends to brighter, citrus-forward versions. A 30 ml bottle of the parfum (the concentrated form) sells for about $250-$300; the eau de parfum, roughly a tenth as strong, goes for $100-$150; and the eau de toilette, ten times weaker again, goes for $80-$120.
Let’s hope that climate change does not affect the Grasse jasmine cultivation as well.
Flower fields in Grasse
And careful: Chanel No. 5 perfume stains.
+233 Jazz Club and Grill, Dr. Isert Street, North Ridge, Accra, may be going over the top. They recently extended the seating and parking area and hold more and more entrance-fee events (150 per person in our case). One could say it is currently the place to be. I like their sound system, which is clear and never so loud that it blocks your conversation. But their kitchen is starting to suffer. The jollof beef fish was OK, but their beef kebab was over-marinated and no longer juicy, the piña colada (rum, cream of coconut, and pineapple juice) is not that creamy any more, and the bora bora cocktail (typically passion fruit juice, pineapple juice, lemon juice, and grenadine) tasted more like watermelon, apple, and pineapple, and was watery. And though the menu lists two vodkas at 25 GHC, they don't actually have them; prices start at 35 GHC (which is quite reasonable compared with other places). Their cocktails range from GHS 80-120 for a glass of Mojito, GHS 100-150 for their special cocktails, and GHS 120-180 for their brandy-based Espresso Martini.

from Micro essais
Etymologically, existence is a surging forth. An apparition out of the invisible, into the visible.
Let us dig into the question a little. People sometimes say: “There is no love, there are only proofs of love.”
Let us turn the formula around: what would these “proofs of love” be worth, without love itself?
What would knowledge be, without learning? And more still, without the desire to learn? What would wisdom be, without experience?
What is the work worth, without the act of creating? The poem without its crossings-out? The novel without its torn-up pages?
We sense, through these examples, that the question is not so much whether to believe in something invisible, as whether the visible world we know could exist without it.
It is the invisible that holds up the visible.
Call it desire, passion, or love. Call it bond, liaison, relation. No matter. It is that invisible that holds the world together, makes it possible, and makes it ex(s)ist.
And yet today, out of a cult of output, or of performance, we are being asked to believe that only the result counts.
We are being pushed, in the name of efficiency, to sacrifice on the altar of the result the invisible process that made it possible.
Whereas, in many cases, it is the process itself that constitutes the essential.
That is why efficiency gains are never neutral. That is why no production generated by artificial intelligence can ever claim the status of a work. Because it accelerates, to the point of erasing it almost entirely, the whole process necessary to its production, it misses the essential.
I am not saying that we must renounce AI for good, any more than other means of increasing our power to act. But it is urgent to reflect on its deeper implications. If the point of it is to accelerate ever more, then we will sacrifice the essential: the relation and everything it implies.
Contrary to what the thinkers of the AI-pocalypse claim today, it is not humanity that is threatened by AI in the short term, but rather what makes us human. That includes our weaknesses and our limits, but also the best of what we have: the desire to create, the desire to love, the need to be in relation with one another.

from Micro essais
There are two questions behind this “why?”:
The first is that of usefulness, questionable indeed. What is poetry for?
The second is that of the impulse, which comes from within, answers no external solicitation, and can arise independently of any usefulness, real or perceived.
So, what is poetry for? Showing off? Saving the world, or at least trying to make it a little better than it is? Healing hearts and souls? Putting a little beauty into our daily lives? Escaping? Gaining perspective? Helping us live? Living, quite simply, but truly, that is, not merely surviving?
A little of all of that, no doubt. Each of us can find, among the suggestions above, the one or ones that suit us best, and should of course feel free to add others.
I will return to two of them:
The first is the therapeutic virtue of poetry. Writing poetry, or reading it, does us good. When my father was ill, I regularly sent him poems, and he told me they did him good. When we suffer, poetry, like music, literature, or other forms of artistic expression, soothes us.
But is that really why, one day, we decide to write?
To make the world better, then? What pretension! And yet, two observations: the first is that every poet engenders others. To write is to awaken other vocations. It is to open, for many, a new field of possibilities. It is to reveal to oneself, and to open to others, the possibility of discovering a facet of their personality they had never explored before. There is, then, through poetry, a power of propagation whose reach is no doubt far wider than what is perceptible, a little like a deep current undetectable from the surface.
Which brings me to the second observation: no struggle, no uprising, no mobilisation is possible unless there is, buried somewhere deep within us, a small glimmer telling us that other possibilities are possible. Nothing shapes the real world more profoundly than imaginary worlds. As proof, I offer the despots' obsession with the impoverishment of desires. What Orwell demonstrated so well in “1984” with “Newspeak” was applied practically to the letter by Goebbels: effective propaganda requires impoverishing language, thought, and therefore desires, the better to subjugate populations, with their consent, no less.
However modest poetry may be, at least in appearance, it is a means of struggle. It is a ferment to preserve, an ember to keep alive at all costs, a baton to pass between individuals, peoples, and generations.
But is that really why, one day, we decide to write?
Perhaps. But perhaps not. I can only speak for myself here.
I first wrote essays, then poems. The former follow a “functional” logic: transmitting knowledge and analyses, making proposals, circulating ideas. Yet very early on I felt the need to add a more personal, more sensitive note, a little like breathing spaces.
But as I turned towards more poetic texts, I clearly felt that I was facing a necessity. An impulse, deep and irrepressible, answering something that rose ever stronger within me: anguish, anger, sadness in the face of the systematic destruction of the most beautiful things our world holds. Consternation at the negligence of our leaders, their incompetence or their bad faith, I cannot tell which, and thus their inability to distinguish what is essential, vital, from what are merely means. I was overwhelmed by a deep distress and a feeling of powerlessness in the face of this gradual slide, this “soft crash” of the foundation, if not of a civilisation, then at least of a capacity to live together, to live truly, fully, and fulfilled.
So, what to do?
Go mad. Die of it.
Or flee.
And what if there were another way?
Write. Create. Not let anguish, anger, sadness, consternation, and bitterness win and sweep everything away. Make something of it, even if it is little.
Keep the flame alive, so as to be able, one day, to pass it on.
Live.
“Better to light a candle than to curse the darkness”

from The happy place
Today I saw a little girl carefully balancing through the train car with a small box of strawberry jam clutched to her heart, a frown of deep concentration was on her little face as she passed me by
Walking the same path some time later: a big bald man, a miniature whiskey bottle in his giant fist, clutched also
And I got word of a dead relative through SMS from my mum (who I don’t talk to much no more, we’ve run out of things to say to each other)
And I quit my old job, as the new one is lined up finally
And lastly, I saw a man with a big butt crack walking by, wearing black jeans jacket and black jeans. There was something sad I couldn’t put my finger on, his eyes maybe, about his kind face. (I saw this as I went for a stroll to stretch my weary legs …)
An eventful journey indeed
from Chemin tournant
We hear the horn of a red locomotive slowly dragging sixty freight cars through the multibody of the city. Then the breath of water against the concrete of the general slaughterhouse, where the blood of ninety thousand oxen is poured out every year. The cry of the butchers bursts out at a trembling beast. We hear: Kill it! and the train, its voice, its eyes chasing the ghosts that walk along its railway. In on the south wind comes the stench of offal, and later, from the north, lasting as long in the air as a pastor's sermon, the acridity of refuse still burning, of plastic, of half-dried grasses that asked for nothing.
#Fenêtresurville #Didascalies