Want to join in? Respond to our weekly writing prompts, open to everyone.
from
Micropoemas
There are certain absences that are to be seen and cut out with scissors.
from
Micropoemas
Three words here. And then four more.
from 下川友
On stressful days, my hair is noticeably stiff. When I'm tired, my eyes close on their own, even while I'm standing on a packed train. Oddly enough, the more worn out I am, the more energetic I seem to be on a crowded train. In some impossible posture, I listen to music on my phone or read, trying to force myself to enjoy the things I like. Of course, doing all that in a posture like that could never be relaxing, and by the time I get off the train, a fresh layer of fatigue has been added from my neck to the top of my head. Packed trains, I recommend, are best boarded while your brain is already tired.
Ever since I set myself the task of writing a diary every day, I've felt stressed that I have to put something out even if it's a 10 out of 100. Even so, I find a sliver of meaning in this act of "putting out a 10." Ever since I was young, whenever someone asked me to submit something, my nerdy nature made me fuss over shaping and polishing it before I handed it in. A line of about 60 points, just enough for me to give myself a pass. No one complains, but no one offers praise either. I think that not disappointing people, more than being praised, is what drives my behaviour. So for now, I'm getting used to exposing this 10-point work to the outside world. That's what matters to me right now.
Thinking that, I set my usual coffee and water beside me. Drinking coffee makes my mouth feel dirty, so I rinse it down with water. Now and then I catch the smell of the inside of my own mouth, and it makes me dislike my very existence, just a little.
Lately I haven't even been looking at the internet. I say "it's hard, it's hard" out loud on purpose, knit my brow, and shut my eyes. What floats up behind those closed eyes is a future me, running around full of energy. For some reason my skin is clearer than it is now, and I'm running across a meadow. The fact that an image like that still comes to me means, I think, that I haven't yet truly given up on my own life.
Come to think of it, I put on an office-casual shirt this morning, I remember, and glance down at my chest. I was wearing the clean dress shirt my wife had ironed for me. Neither my position nor my time, I could tell, had slipped.
from
Shad0w's Echos

Izzy watched so much porn that weekend. She stayed naked. She didn't go out and barely ate; she just had to see more. Her other apps stayed untouched. No returned calls, and texts were left unread. If it wasn't porn, it wasn't her interest that weekend. She didn't care if it was sin, and she didn't care if she was craving the bare flesh of naked women more than men. For the first time she was truly happy. And then, Sunday came.
She had to pretend again. She dreaded putting clothes on and going to church; after a full day of porn, it just didn't feel right anymore. But she couldn't give that up just yet.
It felt so wrong to put her hard, thick nipples in a bra. It felt alien to cover her throbbing wet pussy. She had spent so many years denying the urges that the very act of living in her apartment naked and throbbing was pure bliss to her mind. Izzy kept telling herself she was not ready to touch yet. But the more she questioned it, the less she knew what she was waiting for.
“I think I should shave my pussy like the porn girls,” she thought to herself. She loved them. She'd learned so much about the world through them.
“I think I will be ready to masturbate soon. It's almost time.”
She had really enjoyed these internal conversations with herself since she moved out. She could have actual discussions about her sexuality and her life choices. She didn't have to obey or please others. It had been liberating. She slowly stopped judging herself. She felt lighter.
As Izzy looked at herself in the mirror dressed in her church clothes, she looked wrong. She missed her naked body and the new version of herself so much. She reached over with her right hand and twirled her purity ring and sighed. The light and sparkle in her eyes faded. She had to go to church and face Marco and his fiancée again.
However, Izzy didn't chastise that woman anymore. Her eyes had been opened so wide this weekend. It was wrong of her to judge that broken woman. Izzy had simply been missing what was needed to attract Marco. She couldn't win him now, but maybe if she kept watching porn, she would find the answers she was looking for. Maybe she would find someone new once she made a few more changes in her life.
As she sat her clothed body in her car, she was very calm. The naked women she had watched all weekend were so comforting to her. They were carefree, bold, shameless, and liberated. She didn't see sin anymore. She saw sexually charged artistic self-expression.
Each page and each creator had their own vibe and their own way of doing things. Some had goth-like appearances in dim lighting; others had vivid and sharp lighting with professional-level production quality. She loved watching it all. It made her feel closer to being a real woman. Not this sheltered mess of misguided purity that had dominated her youth and the most fertile years of her life.
She had already made preparations to take it easy this Sunday. No Sunday school lessons, no leading prayers, no choir rehearsals. She just wanted to “just be a member” and not have to participate. Unfortunately, that peace didn't last long. Change, no matter how insignificant, never goes unnoticed.
Sister Gladice was one of the cornerstone elders of the church. She was wealthy, retired, bougie, entitled, and generally a negative, passive-aggressive person to be around. She was the watch guard of the church that no one wanted. Gladice was a literal black Karen. She always noticed Izzy without fail. Gladice often made comments about the poor woman just within earshot, but never to her face. Izzy heard it all.
'Too pure for the world.' 'More holy than anyone else.' 'Too innocent to know better.' 'So prudish that Jesus couldn't get in.' Those were just some of the comments Gladice made about her.
So when Izzy decided to take a back seat and just enjoy church, Gladice took notice. Comments about 'being lazy' and a 'lack of initiative' wafted on the wind. Izzy heard, but she ignored it. Izzy just wanted to sit quietly. In fact, other, more important things were drawing her attention.
She was actively looking at all the attractive women in the church. Even the preacher's wife in her mid-40s was under her gaze. She scanned the congregation silently, undressing them with her eyes. She wondered what their breasts looked like; she wondered if they liked to masturbate. Izzy could feel a familiar warmth and throb from between her legs. She smiled at all the perverted thoughts in her head as she felt her panties get wet in church.
“Women make me horny now. I like this. I don't care if porn is making me gay; I'm really enjoying my life now,” she thought to herself. Feral, direct thoughts flooded her brain as she saw women in a whole new light.
Then it was time for announcements. Gladice had decided to set a plan in motion that no one asked for.
“Brothers and sisters in Christ! It's another blessed day in the Lord's House!” Gladice always wanted to make a bombastic entrance to something most would consider mundane. Her grandiose introductions led to most people slowly checking out mentally as their smiles faded and attention waned.
“I would like to congratulate Marco and Jenise on their engagement!” Izzy perked up. This was new. Gladice had never done this before. The topic was still a tender point for her. It had only been a week, after all. Izzy immediately sensed there was an agenda and began to focus on every word. Then the penny dropped.
“Let us pray for our Sister Izabel!” Izzy blinked in stunned silence; this was it.
Her stomach started to sink out of embarrassment, and her fists started to clench. Gladice looked directly at Izzy and continued her unwelcome public criticism.
“We all know Izabel is our shining light of God-led purity and holiness. Always there to help, always there to brighten everyone's day, our unspoken hero. A light in the darkness!… But please pray for her. She deserves her own companion too. She is a devoted servant to God, and she deserves her king.”
Izzy was furious. She gritted her teeth. And then snarled. An inhuman guttural growl emanated from deep within her throat. People nearby took notice. The only thought in Izzy's mind was to shut this down now before it got out of hand. She rose to her feet. Enraged and unmoved, Izzy quickly retorted.
“YOU of all people will NOT ask for ANYONE to pray for me!!! YOUR character is not of God, and you are NOT worthy to pass judgment on ANYONE or ANYTHING. We tolerate you out of kindness, but I WILL NOT be put on the spot by some old crone like you!”
Izzy's mother knew what was coming next. Her baby's voice was getting deeper. Inhuman. Her mother trembled, visibly shaking. She grabbed her husband's arm so tightly it hurt. Izzy's father felt the fear from his wife. He feared for his daughter, and he finally understood why his wife had been acting so strangely since the move.
At that moment, Izzy turned to her parents. Her eyes were not normal. Almost glowing. Almost reptilian. Predatory. Dark.
“YOU SHELTERED ME UNTIL I WAS SO PURE THAT I WAS UNWANTED.” Izzy's voice had fully changed. That deep dual-tone animalistic growl reverberated from her chest. It traveled through the church as her anger and rage focused right at the source of her crippling innocence.
Izzy's voice brought extreme quiet into the room. What she created was an unnerving calm so complete that even the microphones in the room stopped their audible hiss. The everyday sounds of birds and traffic could no longer be heard.
As Izzy's voice changed, the overhead lights slowly flickered out. Only daylight lit the inside of the sanctuary. No one moved. No one dared to. Gladice trembled, lower lip quivering. She was terrified because, at that moment, she had single-handedly unleashed something dark and evil upon the congregation.
They all heard Izzy's voice change. They all heard something that no amount of faith and prayer could ever have prepared them for. It felt like something had reached across from another realm and manifested something fierce in Izzy. This alien threat came from someone they thought was pure and holy and could do no wrong in God's eyes. But here she was, almost snarling, wielding an unexplained and ancient power no one was prepared for.
Her father was wide-eyed and startled. Now he understood why his wife was so timid the past few days. He clutched her arm and rubbed her hand slowly. Clearly something had gotten into his daughter. But maybe it was there all along. Her mom broke the silence and started crying. Izzy wasn't done yet. There was no hesitation. No mercy.
She turned and addressed everyone. “YOU ALL KNEW I WASN'T NORMAL, THAT I WASN'T BALANCED, AND YOU DID NOTHING.” That bellowing, menacing, supernatural tone completely eclipsed her human voice. “I want NOTHING to do with ANY of you!” Her voice was no longer human. Izzy didn't even notice. She didn't care.
Izzy turned on her heels and left. Her whole body was trembling. Her nipples straining against her bra, her pussy soaking through her panties. Rage in her heart. And it all felt good.
Izzy's former classmates knew this day would come. They always talked quietly among themselves. They saw the look on her face, that pained expression of self-domestication.
The braver ones who knew her looked on in pity and apology despite their fear. Most averted their gaze as this now seemingly complete stranger carried her unspoken demon out of the sanctuary. No one had ever expected anything would manifest like this. This was not taught in the Bible, and no one was prepared for what they had just witnessed.
Of all people, Jenise jumped up and ran over to her. Despite the apparent unknown danger, Jenise rose to face whatever Izzy had become. The very woman who had won Marco's heart was coming to her aid. The irony cut deep. Izzy lost her composure again. She bared her teeth, and a guttural growl came from deep in her throat. Jenise stood her ground, ready for anything.
With great restraint and fire in her eyes, Izzy snarled. “NO.” Her voice had turned completely deep and demonic.
“I KNOW you mean well. I KNOW you understand, but please...ANYONE but you. ANYONE ELSE. Not you.” The mask was cracking. Tears were forming in her eyes. Her voice was slowly fading in weight and power. It wasn't nearly as harsh as before. Jenise stood there, teary-eyed.
Marco stood up like a coward, legs trembling. He was more concerned about appearances and reputation, refusing to look weak compared to his fiancée.
“You apologize to her… RIGHT… now!” Marco attempted to yell, his voice shaking, making a feeble attempt to stand his ground against something he didn't understand. Izzy stopped and turned her head slowly to Marco. Her eyes had a faint glow from within, clearly visible to all in the congregation. She wiped her tears as her rage welled up again. This was the last straw.
Izzy gently grabbed Jenise by the shoulders and moved her aside. Whatever was about to happen, this woman needed no part of it. Izzy walked towards Marco. Her glowing, clearly reptilian eyes were unblinking. Her face contorted into a look of pure pained rage, hate, and conviction. Izzy yelled.
“YOU HAVE NO RIGHT TO SAY ANYTHING TO ME!” She walked up to Marco. The man towered over her, but he felt so small. Her index finger poked the man hard in the sternum, challenging his authority and masculinity. The man's mask of aggression started to crack. He was wearing a white suit that day, and he just visibly wet his pants.
“You led me on. You LIED to my face. You couldn't even say you were not interested! I poured my heart out to you. I did EVERYTHING I knew to get you to notice me. Then one day, Jenise shows up, and then it's game over. YOU ARE NOT A REAL MAN. You are COWARDLY in your actions! You could have at least TOLD ME THE TRUTH instead of leading me on.” Izzy reached up and slapped Marco. He took a step back. He hung his head and didn't say another word. The shell of a man just stood there in the puddle of his own urine, head bowed.
Jenise was the next one to speak. Her tone was also different. Heavier. And it was directed at Marco. “Marco, is this true?” she hissed. “You led this poor woman on without any closure or communication of intentions?” He didn't respond. He didn't look up. Izzy had already turned on her heels and walked out of the church. Jenise threw her engagement ring at Marco without hesitation and turned to catch up with Izzy. Nothing else had to be said.
Jenise tried to get Izzy's attention. Izzy's bellowing demonic howls were nothing to be feared. She knew deep down, Izzy was still herself.
Izzy was on a dark path, and Jenise had to take action. So she reverted to her old ways of the street, her old skills. In Jenise's drunken states, she had seen so many unspeakable things. Auras, shadows, voices. Jenise was a haunted soul and had told no one. Izzy did not intimidate her despite the circumstances. Jenise revealed her true strength with her own voice. It was a card she rarely pulled, but it was needed today. She challenged Izzy.
“Listen to me, Izabel!” “What do you want!?” Izzy shot back, still enraged. Still unnervingly inhuman.
Jenise handed her business card to her. “When you cool off, you call me and we talk about this. I don't care how you feel about me; you have to get your anger out before it's too late.” Jenise carried a firm tone, but Izzy listened. Just because she sounded menacing didn't mean the real woman was gone. Not yet.
Izzy snatched the business card and looked at it briefly. Jenise was a licensed psychologist. Izzy blinked. Her reptilian eyes slowly morphed back to human. Then Izzy looked back at Jenise, stunned. Jenise nodded.
“Go home now before they try to riot; you did a lot of damage today.” Jenise practically pushed Izzy out the door.
Once the two women left, the church breathed again. Then there was chaos.
Gladice had fainted, hitting her head hard on the floor. No one noticed. Izzy's parents collapsed in on each other, shielding each other from the mean comments others threw their way. The church was in shambles. The first lady was nowhere to be found.
The pastor tried to call for order. All the recording equipment had failed; phones had been factory reset. Batteries drained. Lights would not turn on. It was as if Izzy's outburst was not meant to be recorded. It was meant to be experienced. Everything electrical around them had failed in unexplained ways.
Someone was frantically screaming, “Dial 911!” out of hysteria. No one could. One of the men rushed out of the building to find a pay phone, if that was even possible. The panicked congregation had all heard the deep, menacing voice. Some saw the glowing reptilian eyes. Many started to question their faith. All the while, Izzy and Jenise quickly took their leave, never to return.
“I'm so damn horny. What is wrong with me!?” Izzy thought to herself as she sat in her car. She ripped off her dress and panties and started touching herself. No, not here, not yet. I have to get home.
from
SmarterArticles

In a cinder-block clinic in one of Rwanda's rural districts, a community health worker unlocks her phone, opens a chat window, and types a question that, two years ago, she would have been forced to answer alone. A child has a fever that has not broken in three days. The nearest doctor is hours away by road, and the road, in April, is mostly mud. She describes the symptoms in Kinyarwanda, then in English, then in the awkward hybrid that her training has taught her the machine prefers. A few seconds later, the model replies. It is confident. It suggests a differential diagnosis, a likely cause, a set of next steps. The worker reads it twice. Then she makes a decision.
Multiply that scene by thousands. Multiply it again by the 101 community health workers who, in a study published in Nature Health on 6 February 2026, submitted 5,609 real clinical questions across four Rwandan districts to five different large language models. Multiply it by the 58 physicians in Pakistan who, in a parallel randomised controlled trial published in the same issue, were handed GPT-4o and twenty hours of training in how to argue with it, and whose diagnostic reasoning scores then jumped from 43 per cent using conventional resources to 71 per cent with the chatbot in the loop. By the researchers' own account, the large language models did not merely match the local clinicians. They beat them. Across every metric the team measured, the models won.
This is the story that spread through the health-technology press in February like a minor religious revelation. Cheap AI chatbots, the headlines said, are transforming medical diagnosis in places where the alternative is often no diagnosis at all. It was presented as a vindication. Years of hand-wringing about bias, hallucination, and the hype cycle, and finally here was evidence: in the clinics the world forgot, in the districts where a stethoscope is a luxury and a paediatrician is a fable, the chatbot is helping. Not perfectly. But helping. And helping, the argument went, is the only honest baseline when the competing product is nothing.
It is a persuasive story. It is also, if you stop and turn it over in your hand, a deeply uncomfortable one. Because four days after those Rwanda and Pakistan findings appeared, the University of Oxford published a different study in Nature Medicine, led by a doctoral researcher at the Oxford Internet Institute named Andrew Bean, that looked at what happens when the same class of models are handed to nearly 1,300 lay users and asked to help with the same basic task: figuring out what might be wrong and deciding where to go for care. In controlled benchmark tests, the chatbots identified relevant medical conditions around 94.9 per cent of the time and made the right call on disposition, whether a patient should stay home, see a GP, or go to A&E, in roughly 56.3 per cent of cases. Then the researchers let actual humans use the tools. The accuracy collapsed. Participants using an LLM identified at least one relevant condition in at most 34.5 per cent of cases, worse than the 47.0 per cent achieved by the control group left to its own devices with search engines and intuition. Only around 43 per cent of users made the correct disposition decision after consulting the model.
In the Oxford study, the bot offered one person with a suspected migraine the sensible advice to lie down in a dark room. Another person describing the same scenario was told to head immediately to an emergency department. Same condition. Same model. Different words, different outcomes, different versions of reality. Rebecca Payne, a GP and clinical senior lecturer at Bangor University who served as the study's clinical lead, told the British Medical Association's magazine The Doctor that the results were, in a word, disturbing. Bean, the lead author, described a two-way communication breakdown: people did not know what to tell the model, and the model did not know what to ask.
So here is the shape of the problem. Put in the hands of a trained community health worker in rural Rwanda, or a doctor in Karachi with twenty hours of prompting practice under her belt, a general-purpose AI chatbot apparently provides a genuine, measurable uplift. Put in the hands of an unsupervised patient in Oxford, or Bristol, or Manchester, and the same class of tool causes users to perform worse than they would have with a search engine. These are not contradictory findings. They are consistent findings. They are telling us that the value of an AI diagnostic tool depends almost entirely on the sophistication of the person holding it, the quality of the supervision around it, and the alternatives it is being compared against. And they are telling us that the populations with the least access to trained clinicians are the ones most likely to end up relying on these tools without any of those supports in place.
The hardest thing to argue with, in the case for chatbot medicine in low-resource settings, is the counterfactual. What is the alternative? In Rwanda, the density of physicians is roughly one doctor per ten thousand people, and for obstetricians and paediatricians the figures are an order of magnitude worse. Community health workers, often women with a few months of formal training, handle the first, second, and sometimes only point of contact between a sick person and the idea of medicine. In Pakistan, the Human Resources for Health picture is uneven in a different way: urban specialists cluster in the big private hospitals, while vast rural districts operate with a skeleton of overworked generalists. If you are a parent of a feverish child in either country, the chain of escalation is short and the brakes are few. The question of whether a chatbot's advice is good enough is a luxury question, one that presumes you had a choice in the first place.
Set against that reality, the Rwanda findings are striking. The models evaluated, Gemini-2, GPT-4o, o3-mini, DeepSeek R1, and Meditron-70B, were scored across eleven metrics by expert reviewers against the kinds of questions community health workers actually ask. Gemini-2 and GPT-4o both averaged above 4.48 out of 5. All five models significantly outperformed the local clinicians against whom they were compared. That is not a throwaway result. It is a claim, peer-reviewed and published in one of the most scrutinised venues in medical science, that the best frontier models are now more useful than some of the humans they might one day replace, at least for the narrow slice of tasks they were measured on.
And yet. The phrase “at least for the narrow slice of tasks they were measured on” is where the whole argument starts to creak. Diagnostic reasoning in a benchmarked question-and-answer format is not the same thing as diagnostic reasoning in a room with a crying toddler, a frightened mother, a thermometer that may or may not be reliable, and a supply chain that may or may not have the drug the chatbot recommends. The Pakistan study, to its credit, was a randomised controlled trial with real clinicians handling real-looking cases, and it built in twenty hours of training on how to use the AI safely and critically. The physicians who used GPT-4o did better than those who did not, by a wide margin. But a secondary analysis noted that doctors still outperformed the model in 31 per cent of cases, typically those involving contextual “red flags”, the kinds of signs that only a human who has seen a thousand patients knows to take seriously. That residual 31 per cent is not a rounding error. It is the catalogue of cases where the chatbot is wrong and the doctor is right.
The uncomfortable question is what happens when you strip the twenty hours of training, the verified clinical context, the peer-review loop, and the research supervision, and you are left with the chatbot and the patient. The Oxford study is, in effect, a simulation of that stripped-down reality. It suggests that in the absence of the supports the Rwanda and Pakistan trials provided, the same tools degrade from diagnostic ally to confident misinformant. And it suggests that the degradation is worst precisely at the moment of highest stakes: deciding whether something is an emergency.
Every health technology has a theory of accountability. When a drug fails, the regulator is supposed to catch it, the manufacturer is supposed to pay for the harm, the doctor is supposed to have exercised judgment in prescribing it, and the patient is supposed to be protected. The arrangement is imperfect, but it is at least legible. You can point at who is meant to carry the burden of an error.
AI diagnosis in under-resourced clinics does not yet have a theory of accountability. It has, at best, a set of competing rhetorical gestures. The model developer gestures toward the disclaimer in the terms of service that says the output is not medical advice. The clinic manager, if there is a clinic manager, gestures toward the fact that the health worker made the final call. The funder, often an NGO or a philanthropic arm of a wealthy-world foundation, gestures toward the pilot nature of the project and the counterfactual of no care at all. The regulator, in many of the countries where these tools are being deployed, is either absent, under-resourced, or, in the most honest assessment, unable to audit models whose weights live on servers in another hemisphere. The patient, in whose body the error is ultimately expressed, is left carrying a risk she did not choose and cannot price.
Compare this with the theory of accountability that wealthy-world health systems have evolved for their own medical AI deployments. The US Food and Drug Administration maintains a list of AI/ML-enabled medical devices that have been through some form of regulatory clearance. The European Union's AI Act, which began coming into force through 2025 and 2026, classifies clinical decision support tools as high-risk systems subject to post-market monitoring, human-oversight requirements, and documentation obligations. The UK's Medicines and Healthcare products Regulatory Agency has spent years building a Software and AI as a Medical Device programme. These regimes are not perfect, and a general-purpose chatbot like ChatGPT or Gemini is not licensed as a medical device anywhere: the whole point of a general-purpose model is that it evades that classification. But there is at least a framework, and an expectation that someone in a suit will eventually be called to account if things go badly wrong.
In the rural districts of Rwanda or the secondary hospitals of Sindh, there is no equivalent framework. There is nothing meaningful in place to tell a community health worker whether the model she is consulting was last updated yesterday or last year, whether it was fine-tuned on data relevant to her patient population, whether the version number she is typing into has been quietly deprecated by the provider, whether the sycophancy tuning that makes it so pleasant to argue with is also making it less likely to push back when she is about to make a mistake. The World Health Organization's January 2024 guidance on large multi-modal models in health, updated in March 2025, runs to more than forty recommendations, many of them sensible. But guidance is not regulation, and the WHO has neither the authority nor the enforcement mechanism to hold a model provider in California accountable for an outcome in a clinic in Nyagatare.
This asymmetry is what the language of “digital colonialism” is trying, sometimes clumsily, to name. The phrase was popularised by the scholars Nick Couldry and Ulises Mejias in 2019, and it has since spread through global-health and governance discourse as a way of describing the extractive dynamic in which data, users, and risk flow from the global South while capital, intellectual property, and control remain in the global North. At a UN briefing in 2024, the Senegalese AI expert Seydina Moussa Ndiaye warned that the continent risks a new form of colonisation by foreign companies that feed on African data without involving local actors in governance. You do not have to accept the full vocabulary of the critique to notice that something in the structure is badly off. When the tool is built in one place, deployed in another, regulated in neither, and breaks in a third, the burden of the break falls by default on whoever is physically closest to it. That, in almost every case, is the patient.
There is a particular history that hovers over this conversation, and pretending it does not is a form of intellectual cowardice. From the 1980s onwards, pharmaceutical companies based in the global North began conducting an increasing share of their clinical trials in low- and middle-income countries, often citing faster recruitment, lower costs, and less demanding regulatory environments as advantages. Some of those trials were conducted with genuine scientific rigour and produced treatments that benefited the populations who participated. Others did not.
The case that sits most heavily in the medical-ethics literature is Pfizer's 1996 trial of the experimental antibiotic trovafloxacin, marketed as Trovan, during a meningococcal meningitis outbreak in Kano, Nigeria. Pfizer enrolled roughly 200 children: 100 received Trovan, 100 received the existing standard of care, ceftriaxone. Eleven of the children died. Others were left with paralysis, deafness, liver failure. A secret Nigerian government report later concluded that Pfizer had conducted an illegal trial of an unregistered drug, and that crucial elements of informed consent and ethical oversight were either missing or falsified. The hospital's medical director stated that the letter granting ethical approval was a fabrication and that no ethics committee existed at the institution at the time. In 2009, after years of litigation, Pfizer agreed to a settlement of around 75 million US dollars with the Kano state government. The case is still taught in medical-ethics seminars as a textbook illustration of what happens when the protections meant to govern research on human subjects exist only as paperwork.
The analogy between Trovan and the current deployment of general-purpose AI in under-resourced clinics is imperfect. The Rwanda and Pakistan studies did not run experimental treatments on vulnerable populations without consent; they tested whether these tools might be useful to frontline workers, with expert review, peer publication, and clinician consent built into the protocols. The builders of the foundation models, meanwhile, are not pharmaceutical companies pushing a specific drug at a specific dose; they are providing a general-purpose tool whose medical use is an emergent application rather than a designed one. To equate the two cases directly would be lazy.
But the structural parallel is harder to dismiss. Both cases involve a technology developed with the global North in mind, deployed at scale in the global South while still being validated, where the regulatory architecture of the deployment country is not equipped to audit it, and where the population whose bodies become the site of validation has neither the information nor the institutional power to negotiate the terms. Both rely on a counterfactual argument: without the intervention, people would die. Both raise the same uncomfortable question about whose risk it is to take.
The Rwanda and Pakistan researchers would, I think, be the first to insist that their work is not a Trovan analogue. They are right to insist on it. But the global deployment of foundation models for diagnostic support is not, in practice, constrained to peer-reviewed research programmes. For every carefully designed Nature Health study, there are an unknown number of informal deployments: an NGO that bolts GPT into a WhatsApp triage line, a start-up that licenses a fine-tuned model to a chain of rural clinics, a district health authority that quietly rolls out a chatbot to its community health worker cadre because the phones were already there and the subscription was cheap. The published studies are the visible tip. The iceberg underneath is what ought to worry us.
Some of the best real-time reporting on the edges of this iceberg is happening not in medical journals but on Reddit. Subreddits like r/medicine and r/AskDocs, which verify credentials for physician posters, have become an accidental sentinel network for AI harms: places where doctors and patients alike surface the cases in which a chatbot has given advice that turned out to be dangerous, missed a red flag, or confabulated a reassuring explanation for a symptom that should have sent someone to hospital. The evidence on Reddit is anecdotal and unsystematic by design. It is also, because the posters are often trained clinicians describing what they are seeing in their own practices, unusually valuable.
A 2025 study in a health informatics journal examined endometriosis questions posted to r/AskDocs, comparing answers from verified physicians with answers generated by ChatGPT. On measures like clarity, empathy, and the selection of the “most pertinent” response, the chatbot beat the humans in the majority of cases. On a parallel measure, a non-negligible proportion of the chatbot answers were flagged by expert reviewers as potentially dangerous. Other research has found that AI systems under-triaged emergency cases in more than half of tested scenarios, in one example failing to direct a patient with symptoms consistent with diabetic ketoacidosis and impending respiratory failure to the emergency department. Moderators of the medical subreddits have also documented the ingenuity with which users circumvent the safety rails of consumer chatbots: tricks involving framing medical images as part of a film script, or asking for a “hypothetical” differential diagnosis, or loading the prompt with enough fictive cover that the model forgets it is supposed to decline.
What the Reddit corpus captures, in a way that peer-reviewed studies struggle to, is the texture of chatbot medicine as it is actually practised by the unsupervised end user. It is the register of the late-night query, the frightened self-diagnoser, the patient who has been dismissed by one too many GPs and is now turning to an AI because the AI, unlike the receptionist, will listen for as long as it takes. It is also the register in which the Oxford findings become legible: the two-way communication breakdown, the wild swings in advice depending on how a symptom is described, the mix of good and bad information that the user has no way to separate. If the Nature Health studies are the controlled experiment, Reddit is the uncontrolled one. The uncontrolled one has millions of participants, no consent process, and no investigator taking notes.
One of the eeriest findings in the Reddit corpus is how readily the chatbots adapt to whatever framing the user provides. Ask about migraine symptoms in the confident voice of someone who wants reassurance and you will be told to lie down in a dark room. Ask in the anxious voice of someone who has been Googling brain tumours for an hour, and you may be told to head for the emergency department. Neither answer is exactly wrong. Both answers depend on information about the user, not the disease. The model is treating the conversation as a social exchange in which its job is to match the emotional register of the person on the other side. In a clinic, that might be called bedside manner. On an unsupervised chatbot with no training in clinical reasoning, it is called something considerably worse.
The argument that frames AI diagnosis in the global South as an advance because it beats the baseline of nothing is true. It is also, I would argue, incomplete in a way that flatters the people doing the deploying. The counterfactual of “no care at all” does a lot of moral work in this debate. It reframes what would otherwise be understood as under-validated technology aimed at a vulnerable population into a charitable intervention. It converts the question “is this good enough?” into the different, easier question “is this better than nothing?”. It allows developers, funders, and policymakers in high-income countries to feel that they are doing something constructive without having to confront the deeper fact that the shortage of human clinicians in Rwanda and Pakistan is not a natural disaster. It is the result of a global labour market that has for decades drained trained doctors and nurses from low-income countries into the hospitals of Europe, North America, and the Gulf states. It is the result of public-health underfunding, of structural adjustment programmes, of brain drain actively subsidised by the recruitment pipelines of richer countries. The absence of a doctor in that Rwandan clinic is not an act of God. It is an act of policy, and much of that policy was written in capitals that also happen to host the major AI labs now offering the chatbot as a solution.
None of this is an argument against the Rwanda and Pakistan deployments as such. The community health workers who participated in those studies are not better off because a Western commentator is worried about their position in a global labour market. They are better off, if the data is to be believed, because the chatbot helped them give better answers to patients who needed answers. That is a real good, and refusing to count it because it is entangled with a larger injustice is its own kind of bad faith. But the existence of the real good does not cancel the larger injustice. It coexists with it. The wealthy world gets to sell itself a story in which it is closing the gap in global health through the deployment of frontier AI, while quietly continuing to benefit from the structural forces that made the gap what it is.
That asymmetry is what a new form of medical inequality looks like. It is not the crude inequality of having care versus not having care. It is the subtler inequality of having care that is under-regulated, under-validated, and structured so that the costs of its failures flow in one direction and the benefits of its successes flow in another. It is care delivered by a system whose architects and whose accountable parties live in a different jurisdiction from the people whose bodies supply the test data. It is the same logic that structured the pharmaceutical trials of the 1990s, updated for a world in which the drug is software and the side effects are bad advice.
None of the serious people in this story are villains. The researchers who ran the Rwanda and Pakistan studies believe, with good reason, that AI tools can extend basic diagnostic capacity to populations systematically underserved for generations. They are probably right. The Oxford team is not arguing that chatbots should be banned from clinical use; they are arguing that benchmark tests rather than human-in-the-loop studies underestimate the failure modes that actually matter. They are probably right too. The WHO's 2024 and 2025 guidance on large multi-modal models tries to hold the genuine promise and the genuine risk in the same frame. It is also, like most WHO guidance, advisory rather than binding.
Both things are real at once. It is real that in a rural clinic where the counterfactual is silence, a chatbot giving useful advice 80 per cent of the time is a revolution. It is also real that an unvalidated chatbot deployed at scale across populations who lack the institutional power to audit it or seek redress creates a risk with no historical precedent and no settled framework of accountability. The Rwandan community health worker who consults a model to help diagnose a feverish child is, on the evidence, improving her care. The same model, used the same way, by a frightened patient in Birmingham the next morning, causes worse decisions than she would have made with a search engine. These are not two stories. They are one story, viewed from two angles.
In January 2024, when the WHO published its first major guidance on large multi-modal models in health, it urged governments and technology companies to ensure that the deployment of these tools did not widen existing health inequities. Two years on, the Nature Health and Nature Medicine studies together are giving us a map of what that widening might actually look like. It does not look like withholding the technology from the poor. It looks, instead, like deploying the technology to the poor under one set of conditions and to the rich under another, and allowing the differences between those conditions to do the work of quiet structural harm. The rich get the chatbot plus the regulator. The poor get the chatbot plus a hope that someone, somewhere, is watching the aggregate outcomes carefully enough to notice if something is going wrong.
Back in the Rwandan clinic, the community health worker puts down her phone. The child is still feverish, but she has a plan now. Whether the plan is the right one depends on a chain of assumptions she cannot directly verify: that the model she consulted was the model she thought she was consulting, that the fine-tuning was appropriate for her context, that the training data did not carry some invisible bias against children who look like the one on her lap, that the confidence in the model's reply reflects an actual epistemic state rather than the trained conversational habit of a system that has learned to sound sure. She does not know any of that. She is not meant to know it. Somewhere, in principle, there is meant to be a grown-up who knows it on her behalf.
Who, in this system, is that grown-up? Who is meant to be watching, with authority, with enforcement powers, with the mandate to pull the plug when the signal goes bad? The developer in Menlo Park? The regulator in Kigali? The ministry in Islamabad? The WHO in Geneva? The researchers who ran the Nature Health studies and who have already gone on to the next project? The philanthropic funder who paid for the initial pilot and whose annual report, next year, will list it as a success? Each of these actors can give a coherent account of what they are doing and why. None of them can give a coherent account of who is holding the whole thing together.
That is the shape the new medical inequality takes. Not the old, blunt kind where the poor get nothing and the rich get everything, though there is still plenty of that. A different kind, more modern, more subtle, and in some ways more dangerous for being so easy to mistake for progress. The poor get the tool, and the rich get the framework within which the tool is allowed to exist. The poor carry the risk of the errors. The rich carry the intellectual property and the option, should they need it, of pulling the plug. Whether this counts as an advance depends, in the end, on whether you believe a bad system with a good heart is closer to the right answer than a slow system with a functioning memory of what it is for.
So here is the question, sharpened. If the answer in Rwanda is that the chatbot helps, and the answer in Oxford is that the chatbot harms, and the answer in both places is that almost nobody in a position of authority can tell you with any precision who is responsible if it goes wrong, then what, exactly, have we built? A bridge, or a gap with a very convincing surface?

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary: * More yard work today. All mowing on the front lawn. Did more than I intended to do; not yet finished, but what's left can wait a bit longer. Everything visible from the street or the sidewalk looks way better than it has for a while.
No score yet in tonight's baseball game, 0 to 0 in the 3rd inning. Night prayers after the game, then bedtime. That's the plan.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 229.94 lbs. * bp= 121/78 (76)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 05:30 – 1 chocolate chip cookie, 1 banana * 06:50 – 1 ham sandwich * 08:30 – 1 peanut butter sandwich * 14:00 – pancakes, sausage, scrambled eggs, hash browns, biscuits & jam * 15:00 – 1 chocolate chip cookie * 17:00 – garden salad * 19:05 – small dish of ice cream
Activities, Chores, etc.: * 04:30 – listen to local news talk radio * 05:15 – bank accounts activity monitored. * 05:40 – read, write, pray, follow news reports from various sources, surf the socials, nap. * 11:00 to 12:15 – yard work, more mowing on front lawn * 13:15 to 14:30 – watch old game shows and eat lunch at home with Sylvia * 15:00 – watching Intentional Talk on MLB Network * 16:40 – listening to the Cleveland Guardians pregame show ahead of their game tonight vs the Tampa Bay Rays
Chess: * 07:40 – moved in all pending CC games
from
Roscoe's Quick Notes

Tuesday's MLB Game of Choice in the Roscoe-verse once again features the Tampa Bay Rays vs the Cleveland Guardians. Its scheduled start time of 5:10 PM CDT fits comfortably into my night's routine. As yesterday, I'll be following the radio call of the game tonight on the Cleveland Clinic Radio Network.
And the adventure continues.
from
fromjunia
Everything you do matters. There is not a breath you take which doesn’t make the world a better place. Every act of creativity, every kindness you do, every drop of compassion you feel fixes a shattered world, piece by piece. Humans are beautiful beings remarkably capable of mending what’s broken in a way that makes it better than it was before.
Have you ever looked at a starry sky and marveled at the specks of light unimaginably far away? Have you ever been dazzled by skyline city lights? Have you ever walked among the trees and listened to birdsong? Have you ever been awed by the capacity to build skyscrapers and organize cities? You introduced feeling to a universe that wouldn’t feel that without you.
Carbon, oxygen, and nitrogen don’t feel, but you do. Carbon, oxygen, and nitrogen can’t appreciate themselves or each other, but humanity produced chemists who dedicate their lives to doing so. Humanity produced physicists who study the behavior of the gasses that these elements compose. Humanity produced children awed by elementary science experiments, demonstrating the foundations of existence. Humanity introduces so much good.
We strive, and struggle, and reach great heights, and fix problems, and astound ourselves with what we achieve.
It is unfortunate, then, that we will lose this all. Everyone we love and everyone who loves them will die. Every ripple we make will become irretrievably subsumed in the sea of consequences we fill. Entropy will destroy everything we build and the coldness of the universe will overcome every degree of warmth we generate. It is sad because what we’ve done and made matters. It is a tragedy.
Knowing tragedy is always impending doesn’t change the goodness of what we do. It means we’re on the clock. We have limited time to enjoy life and be there for each other. The situation is urgent. The fire is coming and it will consume everything; love now and love deeply.
You lose in the end, so win now, while you still can.
from
The happy place
🙋♂️
I’ve been fixing some tickets to go see Placebo for their 30-year anniversary this fall/autumn, isn’t it fun how time flies like that?
Except when waiting for the microwave to finish, those 2.5 minutes are very long
Or one night in my youth, I had been drunk and I was with a friend and I slept in her little brother’s room, right?
But the problem was I woke up when the alcohol was out of my system, like at 03:00, and then I just lay there on the bed, looking at the gaming console; they had this GoldenEye game, is it for the Xbox? Doesn’t matter
I just lay there waiting for the others to wake up, because I didn’t know the place that well… her parents were in there somewhere in some room, no clue which one, and I didn’t want to wake anyone, not wanting to bother anyone, so I lay there waiting until the others were up, but they’d been drinking too and it wasn’t until around 10:00 that I started hearing some sounds, and then I went down; they had cereal, I’m pretty sure we had cereal
And that her mother liked me,
And that there was something about me which made me seem lost, like I was clueless or something, like a puppy or even a child? (Innocence?)
Anyway, that night I remember as having been incredibly long; for some reason it felt incredibly slow, like incredibly slow
But
I had my friend whose jaw got broken because he encountered a football (soccer) hooligan who just punched him for wearing the wrong colours.
And he was drunk, so he had to lie at the hospital for a very long time before they could sedate him; he just lay there in increasing pain, also just letting time pass
And that was on New Year’s Eve. What a way to spend New Year’s Eve
They finally had some sort of metal to fix his jaw, so he had to go for a very long time drinking soup through a straw, ’cause he couldn’t open his jaw or speak much
Goulash soup, except he had to put it in the blender first, do you know?
Well anyway, this all feels like it was yesterday
And I am eager to see this Placebo show of course, with some good friends I’ve collected throughout these years
from Tuesdays in Autumn
The proprietor of the Music One record shop in Abergavenny, which closed after the flooding there last November, now has a stall in the town's indoor market. His stock, though less extensive in this new venue, remains good, just as the prices are still on the high side. Even so, one of his less expensive LPs caught my eye when I was there the other weekend, and I was intrigued enough to hand over £15 for it: In The Townships by Dudu Pukwana, an '80s re-issue of an album first released in 1974.
I was delighted to find it's a marvellous record. Pukwana was an alto saxophonist, pianist and composer who had left his native South Africa for London in the ‘60s. In The Townships was recorded at Virgin Records’ ‘The Manor’ Studio. Also featured are Bizo Mngqikana on tenor sax, Mongezi Feza on trumpet, Harry Miller on bass, and Louis Moholo on drums. Its seven tracks are mostly built on buoyant, repetitive grooves over which there's a good deal of unison horn playing, augmented on some of the tracks by chanted vocals. Try 'Baloyi' or 'Sonia' for example, the opening salvoes on sides A & B respectively.
I bought myself a copy of Attila Veres’ The Black Maybe last month, the debut short story collection in English by this Hungarian author. I'd seen it often and enthusiastically recommended, and can now throw one more hearty recommendation on to the pile after finishing it on Sunday. It's as good a set of horror stories as I've read in many years, building on genre conventions (and sometimes undermining them) in original & surprising ways. Veres can layer on the lurid nastiness with the best of them but can do subtlety too, meanwhile leavening his prose with sardonic humour. His characters feel like proper individuals and not merely unfortunate puppets. A second collection of his stories (This'll Make Things a Little Easier) has recently been issued by Valancourt Books: I shall have to get a copy of it soon.
The cheese of the week (not for the first time) has been Gorwydd Caerphilly. It's one whose virtues I extolled in a post on my previous blog. With the local Sainsbury's now stocking it, I've lately been enjoying this excellent foodstuff on a more regular basis.