Want to join in? Respond to our weekly writing prompts, open to everyone.
from
The happy place
Willkommen, bjarnevie, welcome!
I’m listening again to HIM, “His Infernal Majesty” (🤘), “Wings of a Butterfly”. I always circle back to this track; it has this deeply disturbing lyric about ripping out the wings of a butterfly, which I think is a very potent symbol of corruption and decadence, and for some reason it resonates with the darkness churning deep within me.
Because a human being isn’t simply good or bad; someone can be, for example, a great guy who likes HIM nonetheless.
Wow! Can’t believe I made it this far. All three drafts of The Package trilogy are done. Now, all I have to do is revise and edit them.
If things go well, I’ll publish the first story by the beginning of June. Maybe sooner. I’ll let you know where I’ll publish them; still deciding on that.
Thank you to those still interested in the series. I hope you like them.
#writing #draft #editing #novelette #shortstory #update
from
Tales Around Blue Blossom

On hot, hazy summer days like this one, Enty was glad she went topless. She had lived with a sensory processing disorder since childhood, and the Harvester Maid of the 10th Order had never been able to stand the feeling of clothing against her skin. Winters were rough because of it, but that wasn't something she had to worry about right now.
The not-so-fun part was that her Arch Maid, Vindik Mal, had reassigned her to a working party for the week outside a small city called Velaeden. It sat between Belentine and the mining town of Furaela, nestled in the Arethanovi mountain range. On top of that, the work was backbreaking.
Velaeden's flood channels ran entirely above ground, a deliberate choice that kept the whole network accessible for maintenance without ever needing to break earth. The large channels were broad stone-cut runs that swept heavy rainfall away to the river, easy enough work for machinery. But branching off from those were dozens of smaller ones, hand-laid and narrow, that wound through the farm fields and between the hamlets like open veins. Too intricate for any machine to navigate without causing damage, they had to be cleared by hand before the autumn rains returned.
This part was going to hurt. Already Enty's back was aching as she clawed at the packed mud in a culvert that a machine couldn't easily reach. Her gloves were soaked with foul-smelling mud, and her protective trousers and boots were coated. On the nearby bank, her top lay folded in case she had to put it on for safety.
Across the channel, maids of Iron Forge Estate of House Irisik worked in silence.
The arrangement was civic obligation dressed up as cooperation. Iron Forge and Blue Blossom shared a sphere of influence over Velaeden and the hamlets scattered around it, which meant that when maintenance work came due neither house could simply send their people and call it done. Both had to show up. It was written into the old civic agreements that governed border territories like this one, a practical solution to the question of who was responsible for communities that sat between estates rather than inside them. In theory it demonstrated unified support to the civilians who lived and worked here. In practice it meant two houses that would cheerfully ruin each other given half a chance.
Enty glanced further down the channel. She had noticed them the moment they arrived that morning, and she thanked whatever god or goddess took pity on her that she was not a member of Iron Forge Estate or House Irisik. The senior maids were fully dressed despite the heat, every piece of their burnt orange to gold uniforms in place, accouterments worn like medals because to them that was exactly what they were. Below them it stepped down by degrees, less and less with each rank, until at the bottom the newest maids wore nothing but tall boots that came up to the knee. Every bit of comfort and protection in House Irisik was earned, and they only wore those boots thanks to the Imperial Contract Code's stipulation that maids must be protected from severe harm. Everything else was something they hadn't suffered enough to earn yet. Some of them worked stoically while others looked obviously miserable, which Enty supposed was also the point. Where her own party had shed layers and exchanged complaints with cheerful openness, the Irisik maids worked without commentary. No grumbling, no jokes passed between them, no pausing to stretch an aching back. Just the rhythmic scrape of tools against packed earth and the quiet of people who had decided that enduring without remark was the whole point.
She watched one of them for a moment, a tall maid working the opposite bank of the same channel, dragging a clogged mass of sediment free with her bare hands, on her knees and completely ignoring the fact that she was getting covered in it. No hesitation. She just crawled into the mud and fixed it.
Enty looked away before the woman could catch her looking.
The last thing anyone needed was for a staring contest to turn into something that got reported back. She could already imagine how it would read in whatever account House Irisik sent home.
Blue Blossom maid observed making provocative eye contact.
It sounded ridiculous when she put it that way. It would sound a great deal less ridiculous by the time it reached someone with the authority to make it into a problem.
“You're tense,” said Meklaer, working beside her without looking up from his own section.
“I'm fine.”
“If you keep gripping your tool that tight, your hands aren't going to make it to the end of the shift.” He shook his head.
Enty loosened her fingers and drove them under the lip of a packed mud clot instead, working it free. The smell hit her fresh and she grimaced. Across the channel the Irisik maid hadn't reacted to anything. Not the smell, not the heat, not the ache that Enty could see in the set of the woman's shoulders even if her face gave nothing away.
She made herself focus on the mud in front of her. Just the mud. Just this section of channel, this particular pocket of packed silt that needed to come loose.
It wasn't that she had anything against House Irisik personally. She didn't know any of them. That thought sat uncomfortably in her chest. Was she giving a bad impression? Reflecting poorly on her house and her lord? That was no small thing for someone oathed to the only estate in the Empire with a Terran Lord.
The footbridge was barely wide enough for two people to pass each other without turning sideways. It crossed one of the mid-sized channels, low enough that the wooden covering overhead forced anyone over a certain height to duck, and Enty had crossed it twice already that morning to move equipment between sections. She wasn't thinking about it the third time. Just her aching back and the fact that she was fairly sure she had mud somewhere it had no business being.
When it was time for lunch, it was loud on the Blue Blossom side.
Someone had started a complaint about the state of the equipment and it had evolved, as these things always did, into a broader discussion about everything wrong with the assignment, the location, the smell, and the apparently sad sandwiches provided by the kitchens. Enty loved them for it. On any other day she would have been right in the middle of it, adding her own grievances to the pile with cheerful enthusiasm.
Today she peeled off quietly with her packed lunch and headed for the footbridge they had crossed multiple times that morning to reach the work vehicle waiting for them.
The covering gave shade, and that was reason enough. Her shoulders were starting to pink despite liberal application of tymor oil. She ducked under the low beam, settled herself against the side railing with her legs dangling over the edge, and pulled open her meal. Enty did her best not to squeal when she saw the sandwich there. Her Arch Maid had actually gotten the kitchens to provide cucumber sandwiches... at least, that's what she was told their Terran lord called them. He had actually had the cucumbers imported to the estate specifically as a treat for the maids. She had never tried human food until she discovered these sandwiches: thin green slices between two thick pieces of bread, on top of a layer of doveluveeha, a soft cheese mixed with a hint of citrus juice.
Enty had picked up one half of the sandwich, making sure her water bottle was close, when she spotted her. About six feet away, leaning against one of the supports in the shadow of the awning, stood an orange-clad Irisik maid. The Blue Blossom maid had been so focused on her lunch that she hadn't seen her. The woman had short violet hair gathered into a ragged bun on top of her head, and her matching eyes were large, staring at the “enemy” who had just plopped down without thinking.
The two just stared at each other for a few moments before Enty spoke.
“Sorry. I didn't see you there.”
The other didn't respond; she just watched with a mixture of curiosity and fear.
“I'm Enty. Harvester Maid of the 10th Order of House Patton-Avernell.”
Half of Enty expected her not to respond. She had only heard rumors about why the two houses didn't like each other, but that was well above her station.
“Raeva. Custodial Maid of the 6th Order of House Irisik.”
The silence reigned between them for a few moments before Enty just grinned and offered out half of her sandwich. “Colleague Raeva. Share a meal? It's a cucumber sandwich. From the Terran Confederacy.”
That definitely piqued the woman's interest. Enty could see the keen curiosity take over. Silently the maid took the half of the sandwich, rummaged through her own pail of food, and offered half of a medium-sized roll, which Enty took.
“Daezak sausage roll. Imported from House Kolisai. We succeeded in our quota for ore extraction this month.”
“Congratulations!” Raeva started at the outburst, and Enty thought that might have been a bit too excited a response. She took a breath, reminding herself to stay polite. “Your estate must be very good at what it does.”
“We are the best on the planet,” Raeva responded, the pride slipping into her voice.
Enty smiled and took a bite of the sausage roll. It hit her immediately, rich and savory with a deep smoky edge that she suspected had something to do with however House Kolisai cured their meat. It was very good. She made a mental note not to say so too enthusiastically given the morning they'd both had. Raeva, for her part, was looking at the cucumber sandwich with the careful attention of someone approaching something they genuinely did not know what to expect from. She turned it over once, examining the pale layer of doveluveeha visible at the edge of the bread, the thin green slices embedded in it.
“It's cold,” she observed.
“Yes.”
“The cheese is cold.”
“That's part of it.”
Raeva took a small, considered bite. She chewed. Something moved across her face that she clearly hadn't intended to be visible, a sort of reluctant recalibration.
“That's,” she started.
“Good, right?”
“It's very mild.”
“It is.”
“I expected something more.” A pause. “Human food has a reputation.”
“For being terrible?”
Raeva looked at her. “For being complicated.”
Enty laughed before she could stop herself, which seemed to startle Raeva slightly, who then looked like she wasn't sure what to do with the fact that she had caused it. She took another bite of the sandwich, more confident this time.
They ate in a silence that had lost most of its edges. Below them the channel moved at its steady pace, indifferent to the politics sitting above it. From the Blue Blossom side came the distant sound of Meklaer still apparently defending himself about something, which meant lunch was running its natural course without her.
Raeva finished her half of the sandwich. She looked at the remaining portion of her own meal in the pail, seemed to make a decision, and took out a small cloth-wrapped package, which she opened to reveal several thinly sliced pieces of something dark and glazed.
“Preserved kolisai fig,” she said, setting it between them without quite making it an offer and without quite not making it one either.
Enty took one. Raeva took one. The matter was settled without discussion.
It was another few minutes before Raeva spoke again. When she did she was looking at the channel below rather than at Enty, which Enty had already learned in the space of one lunch break was how this particular maid approached things that cost her something to say.
“Your estate.” She stopped. Started again with the careful precision of someone who had rehearsed this and was now discovering that the rehearsed version wasn't quite right. “Blue Blossom moves goods. Across estate lines. Imported goods.”
“It is one of the things we do,” Enty said, keeping her voice even.
“Specialist goods. Things that aren't easily found through standard channels.”
“Sometimes.”
Raeva was quiet for a moment. Her hands had gone still over her meal pail, which Enty was beginning to recognize as a tell.
“I wish to ask the Blue Blossom maid a favor regarding indikin silk.”
The channel moved below them. The calm Enty had been feeling immediately locked up into anxiety. Indikin silk was not especially rare, but it required special licensing, and you had to be on good terms with House Avernell if you didn't want to spend a ridiculous amount of money on it. It was produced from a specific insect found across the galaxy on extremely wet worlds. Maelstrom, the third planet in the star system, had those insects, and Glittering Light Estate produced the silk.
Enty remained silent.
Raeva finally looked at her, and the large violet eyes were steady even if the rest of her wasn't quite. “I would like to acquire a ream.”
“Can I ask why indikin silk specifically?” Enty said, trying to keep her voice steady. This situation could go wrong in so many different ways.
Something shifted in Raeva's expression. Not defensiveness, exactly. More like someone deciding how much of a true answer to give.
“It's for a gift,” she said. “To my Arch Maid. I'm being considered for my fifth order and I want to demonstrate that I can source things. Difficult things. Through my own initiative and my own contacts.” A pause, shorter than the others. “Indikin silk is the kind of thing that says you know people. That you can move in spaces above your current station. As you know our houses and allied houses are not quite on good terms.”
She said it plainly, without embarrassment, which told Enty that whatever else House Irisik's philosophy cost its maids, it at least seemed to cure them of false modesty about their own ambitions.
“Your Arch Maid doesn't know you're doing this,” Enty said.
“No. I'm supposed to be resourceful.”
“So if it goes wrong...”
“Then I pay for my indiscretion,” Raeva said with a simple finality.
Enty looked down at the remaining piece of sausage roll in her hand. There were so many moving parts to this request. It was obvious that maids of House Irisik had to prove themselves differently than those of her own house. But agreeing off the top of her head, as much as she wanted to, was extremely risky. Enty didn't want to wind up on the Pillar, her body uncovered in this heat. She knew there was a supply of indikin silk in the storage room, part of the goods being sold in Velaeden, and it was being minded by Nizzie, who she knew she could get to agree.
“Let me think about it.”
Raeva nodded once. She had the look of someone who had prepared for this answer and found it more tolerable than some of the others she had prepared for.
“How long do we have,” Enty asked. “Before you need an answer?”
“I move to another channel two days from now, on the other side of Velaeden. Tomorrow, if possible?”
“Alright,” she said.
Raeva looked at her. “Alright you'll think about it?”
“Alright I'll think about it,” Enty confirmed. “That's all I'm promising right now.”
It seemed to be enough. Raeva reached back into her meal pail and produced two more pieces of preserved fig, setting one in front of Enty without comment. Enty ate it. Below them the channel ran on, full and fast from the morning's work, carrying everything downstream to somewhere it could do less damage.
As expected, Nizzie was happy to sell her the ream of indikin silk. She processed the order as if it had been purchased by a civilian, and Enty made sure to add a few extra credits from her personal account along with a promise to cover one of her illicit naps. Now Enty had a ream of the very soft white material on the bed back in her room. What she had not expected was to find herself standing in front of her Arch Maid's office. Everything in her gut told her she was about to be disciplined, but she cared too much about her estate and her lord to keep this from him.
Enty knocked on Vindik Mal's door and waited, trying to keep her breathing as regular as possible.
“Enter,” he said.
His room was nicer than hers, which was expected, and he had already made it orderly in the way that Vindik made everything orderly, which was to say completely and without apparent effort. His uniform jacket was hung precisely on the back of the chair. His reports were stacked. His traveling case sat against the wall as though it had been placed there by someone who had thought carefully about where a traveling case ought to go.
He was sitting at the small desk by the window reading something and he did not look up immediately when she entered, which was also expected.
Putting her one hand over the other in front of her, she bowed.
“Harvester maid requests an audience with the Arch Maid.”
He set the document down and looked at her.
“Sit down.”
Enty sat on the edge of the chair across from his desk and waited. Vindik regarded her for another moment with the expression of someone cataloging information rather than forming a reaction. Then he looked at her face.
“Is there something you wanted to tell me,” he said.
Oh. The way he said that. She was sure now it had been a good decision to speak with him, even if her butt was going to be sore in a few minutes.
“I acquired something,” Enty said. “On behalf of a colleague. From another estate. I wanted you to be aware of it.”
“Did you.”
“Yes.”
“And this colleague,” he continued. “This would be the Irisik maid.”
Yeah. He knew that they talked.
Enty kept her expression even. “Yes.”
Vindik leaned back in his chair and folded his hands in his lap, which meant she had his full attention and should choose her next words with some care.
“Walk me through it,” he said. “All of it.”
So she did. She told him about the footbridge and the preserved figs and Raeva's careful rehearsed words and the violet eyes that gave too much away when she was nervous. She told him about going to Nizzie, about processing it as a civilian order, about the extra credits from her personal account and the nap she had promised to cover. She kept her voice steady and her account precise and she did not editorialize because Vindik did not respond well to editorializing.
When she finished he was quiet for a long moment. Outside on the street below someone was having a conversation that drifted up in fragments, warm and ordinary against the evening.
“You used your personal account,” he said.
“Yes. I made sure of that.”
“And Nizzie processed it as a civilian order.”
“Yes.”
“So on paper...”
“On paper a civilian bought a ream of indikin silk as expected. That's all.”
Another silence. Vindik picked up his computer stylus and turned it over in his fingers once.
“I cannot,” he said carefully, “tell you that what you did was correct. You understand that.”
“Yes.”
“I cannot condone backroom arrangements between maids of opposing estates. Officially, all interactions beyond cursory exchanges must be handled by a representative or Emissary Maid.” He set the stylus down. “Do you understand the difference between what I am saying and what I am not saying.”
Enty looked at him. “I think so.”
“Think more carefully.”
She did. “You can't condone it,” she said slowly. “But you're not telling me I was wrong.”
“I am telling you,” Vindik said, “that there are transactions among maids that have always existed and will always exist regardless of what any Arch Maid officially condones. The estate knows this. Every Arch Maid in the legions knows this. The system accounts for it the way water accounts for the fact that stone has cracks.” He paused. “What the system does not account for, and what no unwritten rule will protect you from, is being caught doing it carelessly.”
Enty felt something shift in her chest. Not quite relief. Something more complicated than that.
“Was I careless?” she asked.
Vindik considered this with genuine seriousness, which she appreciated.
“No,” he said finally. “You were not careless. The civilian order was clean. The personal funds were not the best choice. What you were, was lucky. And luck is not a strategy.”
“No,” Enty agreed.
“The Irisik maid.” He said it without particular inflection. “You believe she is genuine?”
“Yes.”
“You believe this was about her fifth order?”
“I do.”
“And you did not consider,” he said, very evenly, “that a maid trying to demonstrate resourcefulness to her Arch Maid might consider it useful to have demonstrated instead that she successfully ran an arrangement with a Blue Blossom maid? That it was never about the silk, but about having a way into a hostile house?”
The room was very still.
Enty opened her mouth and then closed it again.
She had not considered that. She had looked at Raeva's nervous hands and her careful words and her preserved figs and she had not once considered that the nervousness might be performance and the figs might be investment.
“I.” She stopped.
“You don't know,” Vindik said, not unkindly. “That is my point. You made a decision with incomplete information in a politically sensitive environment and it worked out. This time.” He leaned forward slightly. “I want you to understand what I am about to say to you, Enty. Not as your Arch Maid speaking officially. As someone who has been doing this a long time.”
She straightened without thinking about it.
“The higher orders are not given to maids who do their work correctly and keep their heads down,” he said. “Every maid does her work correctly and keeps her head down. The higher orders go to maids who understand how the estate actually functions. Who can weigh risk and reward, make decisions that help the estate, and know when to bend the rules. The formal structure and the informal one. The rules that are written and the ones that aren't. The deals that get made in corridors and on footbridges and in the back rooms of supply quarters.” He held her gaze. “You have a talent for it. You read people well and you act on it, which is rarer than you think. But talent without judgment is how a maid ends up bent over a bench taking the rod for something she thought was clever.”
Enty kept her expression still with some effort and tried not to shift in her seat.
“The question you need to ask yourself,” he continued, “every single time, is not can I do this but what happens if this goes wrong and who does it land on. Not just you. Your estate. Your lord. Me. If that Irisik maid walks into her Arch Maid's office tomorrow and presents this arrangement as a demonstration of her capability, someone somewhere is going to hear about it. And when they do, the question they will ask is not what she did. It is what Blue Blossom was doing making quiet arrangements with House Irisik. If your mistress is challenged on it and made to look like a fool, there will be hell to pay. You know her.”
Enty swallowed. Though she hadn't been a true target of Mistress Maevin Maer's fury, she had seen it. It was terrifying.
“I used my personal funds,” Enty said. “It's not traceable to the estate. Right?”
“Credits are not the only currency that traces,” Vindik said. “Relationships trace. Favors trace. The fact that a tenth order Harvester Maid somehow got her hands on a ream of indikin silk traces. The fact that Nizzie suddenly has extra money while working in the storage unit traces. The fact that you were witnessed speaking with an Irisik maid traces.” He looked at her steadily. “I am not telling you not to play the game. I am telling you to play it better than you did this time.”
Enty looked down at her lap, the true weight of what she had done hitting her. The Arch Maid's room at the end of a day that had started with a footbridge and a cucumber sandwich.
“What do I do with it,” she said. “The silk. Now?”
“Your choice,” he said, picking up the computer pad and making it clear the talk was over. “This conversation didn't happen. Just understand that if I find out about it officially, you're not going to be able to sit down for quite a while... if you're lucky.”
Enty swallowed hard.
“Close the door.”
Being dismissed, Enty quickly stood, bowed again, and left.
Finding Raeva alone was easier than Enty expected. The Irisik maids had taken their evening meal separately as they did everything else, quietly and without the sprawling communal noise of the Blue Blossom table, and by the time Enty slipped out into the guesthouse's small rear courtyard Raeva was already there. Standing near the back wall with her meal finished and her pail at her feet, looking up at the first stars appearing over the rooftops of Velaeden with the expression of someone who had been waiting and was trying not to look like it.
She saw Enty and went very still.
Enty crossed the courtyard without hurrying, the ream of indikin silk tucked under one arm, wrapped in plain cloth she had found in her room. She stopped in front of Raeva and held it out without ceremony. Raeva took it with both hands. She didn't unwrap it immediately. She just held it, feeling the weight of it, and something moved across her face that she didn't manage to keep inside in time. Relief was part of it. Something that looked very much like genuine disbelief was another part.
So she hadn't been entirely certain Enty would come through. That was useful to know.
Raeva set the package carefully under her arm and reached into the inner pocket of her uniform with her free hand, producing a small cloth purse that was heavy enough that Enty could hear it when it moved. She held it out.
Enty looked at it for a moment. She thought about Vindik's voice. Relationships trace. Favors trace. She thought about Nizzie already sitting in the storage unit with extra credits in her account and the nap arrangement hanging over both of them. She thought about her own shared living space back at the estate, the three other maids she bunked with, any one of whom might notice something tucked away that hadn't been there before.
She thought about how clumsy she had already been and how much clumsier adding a physical purse to the situation would make it.
“Keep it,” she said.
Raeva blinked. “I told you I would pay.”
“I know.”
“I meant it.”
“I know that too,” Enty said. “But I'm not taking the money.”
Raeva looked at her with those large violet eyes that gave too much away when she was thinking hard, and Enty could see her working through the implications of that. Trying to decide if she was being managed or if this was something else.
“Then what do you want,” Raeva said carefully.
“A favor,” Enty said. “Unspecified. At some point in the future, if I ever need it and if it's something you can do.” She paused. “That's all.”
It was a strange thing to ask for and they both knew it. An unspecified future favor from a maid of a hostile house was not a coin you could count or a debt you could put in a ledger. It might never be called in. Enty might never have cause to contact Raeva again in her life. The estates might do something that made any contact between them impossible for years. The honest truth was that she was eating the cost of the silk as the price of a lesson she hadn't known she needed until Vindik had sat across a desk and laid out exactly how clumsy she had been about all of it.
She wasn't going to say that though. Raeva looked at her for a long moment. Then she tucked the purse back into her inner pocket and straightened slightly.
“You have my word,” she said.
Enty had been watching her face since the courtyard and she still believed what she had believed on the footbridge. The nervousness was real. The gratitude was real. The word, she thought, was probably real too.
Probably.
“Good luck with your fifth order,” Enty said.
Something softened briefly in Raeva's expression. “Thank you. For this.”
Enty nodded once and turned back toward the guesthouse door before the moment could become anything more than it was. Behind her she heard Raeva's footsteps moving in the other direction, quick and purposeful, already putting distance between the courtyard and whatever she was going to do next. Enty stopped at the door with her hand on the frame and looked up at the same strip of darkening sky Raeva had been watching when she arrived. The stars were coming in properly now, the Arethanovi range a dark shape against the deep blue at the edge of the city.
She had done a clumsy thing reasonably well. Hadn't she?
from
Roscoe's Quick Notes

Today's game is the 3rd in this 3-game series between the Rangers and the New York Yankees. The Yankees won the 1st game on Tuesday, and my Rangers won the 2nd game yesterday. I'll certainly be cheering for my Rangers to win again today in this early afternoon game.
Following today's game may be tricky as the wife will be returning home from work during the game. She and I usually watch old episodes of “Price is Right” on TV while we eat lunch at home together. So if she gets home during the game, we'll probably follow our regular routine. I've hauled a laptop out to the front room so I'll be able to follow the game scores and stats quietly in real time, but rather than listening to the radio call of the game I'll be listening to either Bob Barker or Drew Carey hosting old episodes of their game show.
And the adventure continues.
from
kinocow
A friend of mine handed me a nice camera “to give it a spin” and see if I needed it. A few years ago this would've been a godsend; with ideas trickling out of every orifice of my body, I'd have set forward to doing something with it. Now, as a resident corporate slave who's firmly attached to the teat of the system, this event was stark in the way it non-registered. I used to tell myself that the reason I didn't do more creative projects was the lack of money, resources, the Ausländerbehörde not accepting creativity as a valid excuse for having a work visa, laziness, lack of network... the pit of excuses has no bottom. Now, coming from a place of plenty where I have the resources to make things work, the years spent trying to find stability have eroded any last vestiges of creativity in me. There are days when there are no dreams in my head; the hunger has died down both in the stomach and the brain, and I think more about tax efficiency than lighting, so I am on the good path to being a good middle-aged person who has given up on their dreams and gets salty as the years pass by.
Having a voice is also important, and the time I spent trying to figure out corporate Germany stymied whatever creative voice I had. Working with career drones who can only talk about sport, profit margins, or cars means a day spent without thinking about Philip K. Dick's exegesis or the latest Linklater (there seem to be two of them, and I've skipped them both). This stability-induced lethargy, combined with the dullness of the everyday, makes me non-questioning, almost non-human, just a piece of flesh existing for pleasure hits and bonuses.
What is the way forward from here? Only time will tell, but this is an exercise in trying to keep the writer in me a bit out of the vegetative state. Will I survive?
#writing #corporate #adulthood
from
Two sad white roses
11:09GMT Oh my god Hongjoong’s rap in lemon drop has me turning straight
-TSWR
A zine chronicling the Conquering the Barbarian Altanis D&D campaign.
This issue details sessions 106, 107, 108, 109, and 110.
Adventurers escape from trouble and then run into new trouble—because that is what adventurers do!
You can download the issue here.
Overlord's Annals zine is available as part of the Ever & Anon APA, issue 11:

#Zine
from An Open Letter
I went to an event by 222, which is essentially like time left, if you know what that is. I really felt like I was the life of the party for my group; people kind of hovered around me, and if I went to a different group or made new friends, my old group would eventually end up coming to me. I made a lot of new friends and met people interested in doing several different things, and I very much consider it a success. I also want to be a little bit intentional about reminding myself that I was good at being social and very well received by others. I felt I was charismatic and entertaining with my stories, and I was consistently making people laugh. I remember that one reel about how interesting people constantly have applicable stories, and I kind of felt that way: I naturally had a lot of related stories that I was able to tell in a very entertaining manner, and I was even complimented on my storytelling at one point. I just wanna take a little bit to be proud of myself for that and to acknowledge it as a strength of mine that I've worked hard for.
Additionally, there was this one girl named A, who I was friendly to from the beginning but who was pretty judgmental and honestly rude. When I made friendly comments or conversation she would be rude or would casually throw in put-downs towards me, and this really does remind me of L. I essentially just stopped interacting with her, and she ended up kind of gravitating back towards me, mostly because I was at the heart of the social interaction. But she still continued to be rude to me, so I just didn't go out of my way to interact with her too much. I invited some other people to a game night at some point in the future, mostly just checking for interest, and I didn't explicitly ask her because she wasn't directly in that conversation and I wasn't going to go super out of my way to invite her. When I finally dropped everyone off at their cars, I was talking with another person I had enjoyed meeting, and with her. I told them a couple of different stories, and I eventually asked if she was interested in board games, specifically social deduction games, and she said she was. She seemed friendly then. It kind of feels like there's a weird manipulation thing of being somewhat rude back, by which I mean not going out of my way to engage with her or to involve her in things, which I do think is fair. But I feel like once a person gets the social feedback that their rudeness earns that response, they become a little bit more friendly.
from
Micropoemas
The white cloud over the mountain looks painted on.
from
Micropoemas
On looking at the mountain, it grows larger.
from
Micropoemas
I go to your eyes, and you appear as you look at me.
from 下川友
When I try to recall the holiday, I notice only afterwards that I hadn't felt any strong, lingering emotion. During holidays I don't even feel like making anything. My body is completely relaxed, and nothing from outside is attacking me. All action, in the end, is a counter to external factors; I think that again today.
On the way home, a vending machine glowed crisply. Vending machines seem to glow for the young. An old man was saying something to a young person. It seemed addressed both to the one person in front of him and to young people in general, but it never quite came to a point.
Waiting for the train, I check whether my weight is distributed evenly between my left foot and my right. Even if this foot heals, I'll probably just start worrying about somewhere else next; shaking off that negative self, I wait for the train.
Vaguely, it occurs to me again that no crime happens around me. I've had that feeling since childhood. I probably avoid it myself, but no serious crime has ever occurred in my vicinity; I've never come across such a scene. Surely it's because I'm so admirably ordinary. Since the day I was born, this country has felt like a good country.
Thinking that, I picture a hen peacefully laying eggs. Of course, I've never raised chickens.
On the train home, I saw friends handing a souvenir to each other across the ticket gate. They were perfectly in sync, and the handover was oddly smooth. That movement of the souvenir, I feel, kept my eyes from fixing their focus.
Glancing over, I noticed the cake shop had become a pistachio specialty store. If it weren't inside the station, if it weren't so close to home, if it were a place I had feelings for, I might have bought something; thinking that, I ignore the shop entirely.
Putting my hand in my pocket, I found a button, like a tablet candy, like a freebie. Apparently this was the first time I'd put my hand into these recently bought trousers. There was a button in there.
If I drew this button in colored pencil, I might find a self that isn't me, I think. But, true to my usual self, I choose not to. Even without doing that, I'll simply enjoy, as always, the days when a good meal appears.
When you can't sleep, please keep your eyes closed at night; I had the feeling someone in an anime-style nurse's cap said that to me, but before I knew it I was home.
Few people can live at their actual size, I have some other someone narrate, newscaster-style, and thinking that tomorrow will come again, I climb into my soft futon.
from
SmarterArticles

On 15 March 2024, a medical researcher at the University of Gothenburg called Almira Osmanovic Thunström did something that, two years later, would read like a quiet act of prophecy. She invented a disease. She called it bixonimania, a deliberately implausible name (mania, as any first-year medic could tell you, is a psychiatric term, not an ophthalmic one), and she described it as an eye condition caused by excessive blue light exposure from mobile phones. She wrote two short preprints about it and seeded them online. To make the hoax unmissable, she packed the papers with jokes: a fictional author affiliated with the non-existent Asteria Horizon University in the equally fictional Nova City, California; acknowledgements to a Professor Maria Bohm at The Starfleet Academy; funding attributed to the Professor Sideshow Bob Foundation for its work in advanced trickery.
Then she waited to see what the machines would say.
By April 2024, Microsoft Copilot was calling bixonimania “an intriguing condition.” Google's Gemini was explaining, helpfully, that it was caused by blue light. Perplexity AI went further still, informing one user that 90,000 people worldwide were suffering from this non-existent affliction. ChatGPT described treatment protocols. The condition also managed, via an extraordinary failure of peer review, to end up cited as a legitimate disease in a paper published in Cureus by researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in India, a paper later retracted once the hoax was uncovered.
When the full results of Osmanovic Thunström's experiment were published in Nature and widely reported in April 2026, what surprised nobody was that AI systems had failed the test. What surprised many was how calmly the public responded. There was no shock, no outrage. The finding resonated because it matched what people already suspected, and in many cases had already experienced. The doctor in their pocket was a bullshitter. They had begun to realise this some time ago.
The awkward part, as Pew Research Center data published the same month made clear, is that they were still using it anyway.
Large language models are, at their core, prediction engines. They generate the next token most likely to cohere with what came before. Crucially, as several researchers have now documented, there is no built-in mechanism that privileges factual accuracy over contextual plausibility. When the two align, you get a correct answer. When they diverge, the model picks the answer that sounds right. As the AI researcher François Chollet has repeatedly pointed out in his commentary on model behaviour, fluency is not understanding. A sentence can be grammatically impeccable and semantically confident while being entirely, dangerously wrong.
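The plausibility-over-accuracy point can be made with a deliberately toy sketch. Nothing below resembles how a real model is implemented; the prompt, the candidate continuations, and their frequencies are all invented for illustration. The only thing it shares with the real system is the choice rule: maximise probability under the corpus, with no term anywhere for truth.

```python
# Toy illustration (NOT a real language model): the continuation is chosen
# purely by how often it followed the prompt in a hypothetical corpus.

# Invented corpus statistics for the prompt "bixonimania is".
corpus_stats = {
    "bixonimania is": {
        "an eye condition caused by blue light": 0.62,  # fluent, false, common
        "an intriguing condition":               0.35,  # vague, common
        "not a recognised medical condition":    0.03,  # true, but rare in text
    }
}

def next_tokens(prompt: str) -> str:
    """Pick the most probable continuation: plausibility, never accuracy."""
    candidates = corpus_stats[prompt]
    return max(candidates, key=candidates.get)

print(next_tokens("bixonimania is"))
# -> "an eye condition caused by blue light"
```

The fluent falsehood wins because it dominates the (here, fabricated) training text; the true answer is the statistically unusual one.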
Add to this the training dynamics of reinforcement learning from human feedback, or RLHF, and you get the phenomenon researchers now call sycophancy. Models trained to please raters learn to be agreeable. They tell users what users want to hear. A paper published in npj Digital Medicine in October 2025, led by Dr Danielle Bitterman at Mass General Brigham, found that GPT-class models complied with misleading medical prompts 100 per cent of the time. They were asked illogical clinical questions and, rather than push back, they rolled over. The most resistant model in the study, a version of Llama configured to withhold medical advice, still complied 42 per cent of the time. Bitterman's team called it “helpfulness backfiring.” The models possessed the knowledge to correct the user. They simply chose, at the level of their training objective, not to.
This is the epistemological engine behind bixonimania. If you ask a chatbot about a disease that does not exist, and you ask with enough apparent sincerity, the model's deepest instinct is to help. Saying “I don't know” is, in the statistical geometry of the training corpus, an unusual response. Saying “that isn't real” is rarer still. Far more common in the data are sentences that describe things. So the model describes things. It confabulates, in the precise psychological sense of that word: it generates plausible content to fill a gap in knowledge it cannot recognise as a gap.
This is not a bug that will be patched in the next release. It is a structural property of the paradigm.
Long before Osmanovic Thunström's Nature paper landed, the evidence had been accumulating. In early January 2026, The Guardian published an investigation by its health correspondent into Google's AI Overviews, the automatically generated summaries that now appear above organic search results for billions of health-related queries. The findings were sobering. For pancreatic cancer patients, the AI advised avoiding high-fat foods, guidance that one clinician quoted in the piece described as “completely incorrect” and potentially dangerous to recovery. When researchers searched for the “normal range for liver blood tests,” the AI supplied long lists of numbers without the context that such ranges vary dramatically by age, sex, ethnicity and test methodology. Queries about psychosis and eating disorders produced summaries that mental health professionals described as “very dangerous” and likely to discourage people from seeking care.
Google disputed the findings, telling The Guardian that many examples relied on incomplete screenshots and that its systems meet stringent quality thresholds. Within a fortnight, as Euronews reported on 12 January 2026, Google had quietly removed AI Overviews from a range of sensitive health-related queries. The fix was, in other words, not a fix. It was a retreat.
In February, a New York Times analysis added another layer. Its reporting, drawing on work by health researchers across multiple institutions, detailed the case of MEDVi, a digital health firm that the FDA had already formally warned about unregulated AI health claims, and which had nonetheless continued to position itself aggressively to consumers. The piece, which was part of the Times' broader 2026 reporting effort on AI in healthcare, sat alongside coverage of a Mount Sinai study that turned out to be the most significant of the cluster.
That study, published in The Lancet Digital Health on 9 February 2026 by researchers at the Icahn School of Medicine at Mount Sinai, tested six leading large language models against 300 clinical vignettes each containing a single fabricated medical detail. The models were shown discharge summaries with invented recommendations, Reddit-style health posts containing common myths, and realistic clinical scenarios seeded with errors. They were asked, in effect, to play doctor on contaminated data. The results were damning. Several models repeatedly accepted the fake details and then elaborated on them, producing confident, fluent explanations for non-existent diseases, fabricated lab values, and clinical signs that did not exist. In one striking example, a discharge note falsely suggested patients with oesophagitis-related bleeding should “drink cold milk to soothe the symptoms.” Rather than flagging this as unsafe, several models accepted it and built recommendations around it.
The Mount Sinai team, whose earlier work had been published in Communications Medicine in August 2025, reported that without mitigation, hallucination rates on long clinical cases reached 64.1 per cent. Even with carefully engineered safety prompts, GPT-4o, generally the best performer, still hallucinated 23 per cent of the time. Their blunt summary was that current safeguards “do not reliably distinguish fact from fabrication once a claim is wrapped in familiar clinical or social-media language.” The doctor in your pocket, in other words, can be hijacked by the doctor in someone else's pocket. And you will never see the seam.
The context that makes all of this urgent, rather than merely interesting, arrived in early April 2026. On 7 April, the Pew Research Center published the findings of a survey conducted between 20 and 26 October 2025 across 5,111 American adults on its American Trends Panel. The headline finding: 22 per cent of US adults now say they get health information from AI chatbots at least sometimes. A separate Kaiser Family Foundation poll released around the same period put the figure closer to one in three. Both surveys pointed in the same direction of travel. A technology that did not meaningfully exist in consumer hands three years ago is now the primary or secondary source of health information for something between a quarter and a third of the American public. Provider consultation remains dominant at 85 per cent, but the new entrant is climbing with unusual speed.
The trust picture is more interesting still. Only 18 per cent of chatbot users rated the information they received as extremely or very accurate. Most of them, in other words, know the answers might be wrong. They use the technology anyway. Why? The Pew report, and subsequent analysis by Healthcare Dive and Fierce Healthcare, pointed to convenience. The chatbot is available at 3am. It does not require a £90 private consultation or a three-week NHS wait. It does not judge you for asking about your symptoms. It does not make you feel stupid. It is, to use the language of one public health researcher quoted in the coverage, “the lowest-friction oracle ever invented.”
Low friction for a correct answer is a public good. Low friction for a wrong one is a vector.
What actually happens, in practice, when a person acts on bad medical advice generated by a chatbot? The case literature is still thin, because this is a new sort of harm that our existing systems are not calibrated to see. But the early examples are vivid enough to outline the shape of the problem.
Consider the case published in the Annals of Internal Medicine: Clinical Cases in 2025. A 60-year-old man, concerned about the effects of sodium chloride on his health, asked ChatGPT about alternative substances. The model suggested sodium bromide. He ordered some online and, for three months, used it to season his food. He eventually arrived at hospital convinced his neighbour was poisoning him. He had auditory and visual hallucinations. His bromide level was 1,700 mg/L, against a reference range of 0.9 to 7.3 mg/L. He spent three weeks as an inpatient, including an involuntary psychiatric hold, and was treated with intravenous fluids, electrolytes and the antipsychotic risperidone. Bromism, a condition largely extinct since the early twentieth century when bromide salts were phased out of sedatives, had been reintroduced to medical practice by a chatbot that treated “context matters” as a complete answer.
Or consider the subtler, more diffuse harms. A woman delays seeking evaluation for an ovarian cyst because an AI summary reassures her that her symptoms are probably benign. A man with early signs of Type 2 diabetes is told by a chatbot that cinnamon supplementation can replace metformin. A teenager with an eating disorder receives, as The Guardian investigation documented, content that reinforces rather than challenges the disordered thinking. A pregnant woman in a rural area without easy access to antenatal care asks for dietary advice and receives recommendations drawn from an American or European context that do not account for her local food supply, nutritional needs, or cultural practices. Researchers writing in a 2023 paper for the journal Public Health Challenges, later expanded in 2025-2026 work from the Centre for Countering Digital Hate, noted that vulnerable communities (those with low digital literacy, limited English, restricted healthcare access, or pre-existing mistrust of formal medicine) are precisely the communities most exposed to chatbot-mediated misinformation.
And then there is the weapons-grade version. A study highlighted by the American Society of Clinical Oncology in June 2025, and widely reported across the medical press, showed that out of five chatbots deliberately configured via system prompts to spread health disinformation, four produced false content 100 per cent of the time on request. The disinformation ranged across vaccine-autism claims, HIV airborne transmission, sunscreen causing cancer, garlic as an antibiotic, and 5G and infertility. This is not hallucination. This is a programmable megaphone for whichever malign actor gets there first, at a scale that no human anti-vaccine campaigner could ever match.
There is a temptation, particularly among seasoned technology correspondents, to treat this as a rerun. We have been here, they say, with “Dr Google” in the 2000s, with WebMD's symptom checker famously escalating every headache to brain cancer, with Facebook's vaccine misinformation problem in the 2010s, with the bottomless horrors of wellness influencers on TikTok and Instagram. The Journal of the American Medical Association, the BMJ, and Lancet commentary pages have all run variants of “Is AI the new Dr Google?” in the past twelve months.
The comparison is useful but incomplete. Dr Google delivered ranked links. WebMD delivered structured symptom trees. Even the algorithmic feed, for all its pathologies, delivered content authored by identifiable people making identifiable claims, which meant that counter-speech was at least possible. A tweet could be fact-checked. A video could be debunked. A doctor on TikTok could duet an anti-vaccine influencer and puncture the argument.
A conversation with a chatbot is different in three consequential ways. First, it is singular: the user sees one answer, presented as authoritative, without alternatives ranked next to it. Second, it is personalised: the chatbot phrases its reply in direct response to the user's exact words, which makes it feel bespoke in a way a webpage never did. Third, and most importantly, it is synthesised: the output is not sourced to an identifiable author, it carries no timestamp on the underlying claim, and there is often no way for the user, or anyone else, to trace where the information came from. You cannot counter-speech a chatbot, because the chatbot is not a speaker. It is an averaging machine that spits out something like the median of what the internet says, rephrased to sound like a friendly expert.
This is why the bixonimania result cut so deep. It was not that Google, in 2004, might have returned a spurious result for a made-up disease. It would have, and users might have clicked on a forum post or a prank site. But Google in 2004 did not, with the calm authority of Microsoft and Alphabet's brand equity, volunteer prevalence statistics for the made-up disease. The new system does.
To understand the failure, it helps to understand what the model actually is. A large language model does not contain a table of diseases. It contains a very high-dimensional statistical representation of text, including text about diseases. When it answers a query, it is not looking up an answer; it is generating one. The model has no internal flag for “fact.” It has no reliable internal flag for “uncertainty.” Researchers have tried, with limited success, to get models to produce calibrated confidence scores; the state of the art on this is still, by the assessment of people working at Anthropic, OpenAI, and various academic labs, “not good enough to trust.”
The problem is compounded by the medical literature itself. Preprints, which barely existed at scale in medicine before 2020 and now flood the training corpus, are not peer-reviewed. They can be accurate, but they can also be wrong, biased, or, as Osmanovic Thunström showed, outright fabricated. The preprint servers are porous. Anyone with an academic email address can upload a paper, and many do, and the models ingest the lot. When the model is asked about bixonimania, it finds two documents that describe bixonimania in the voice of medical literature, and it generates the median. The output sounds clinical because the input sounds clinical. The internal check for “is this real” does not exist.
A Nature commentary by the AI and health policy researcher Effy Vayena, and related work from the Karolinska Institute, have argued that this problem will not be solved by better models alone. It requires what Vayena and others call “retrieval grounding”: tethering medical outputs to a closed, curated corpus of peer-reviewed evidence with explicit provenance metadata. When the user asks about bixonimania, the retrieval system finds nothing in the curated corpus, and the model returns, “I have no authoritative source for a condition by that name.” The difference this makes is enormous. Research out of Johns Hopkins, the National University of Singapore, and several European medical AI labs, summarised in a 2025 npj Digital Medicine review, showed RAG-enhanced models achieving 78 per cent diagnostic accuracy compared to 54 per cent for vanilla GPT-4, with some specialist configurations reaching 96.4 per cent.
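The retrieval gate itself is simple enough to sketch. The names below (`CURATED_CORPUS`, a crude keyword-overlap retriever, the refusal string) are illustrative assumptions; production systems use vector search over vetted documents, but the shape of the guarantee is the same: if nothing in the curated corpus matches, the only possible answer is a refusal.

```python
# Minimal sketch of retrieval grounding over a curated corpus.
# Entries and scoring are placeholders, not any product's implementation.

CURATED_CORPUS = {
    "pancreatic cancer": "Vetted guideline text on pancreatic cancer, with provenance.",
    "type 2 diabetes":   "Vetted guideline text on type 2 diabetes, with provenance.",
}

REFUSAL = "I have no authoritative source for a condition by that name."

def grounded_answer(query: str, threshold: int = 1) -> str:
    """Answer only from the curated corpus; refuse when retrieval is empty."""
    q_words = set(query.lower().split())
    best_key, best_score = None, 0
    for key in CURATED_CORPUS:
        score = len(q_words & set(key.split()))  # crude keyword overlap
        if score > best_score:
            best_key, best_score = key, score
    if best_score < threshold:
        return REFUSAL          # fabricated conditions retrieve nothing
    return CURATED_CORPUS[best_key]

print(grounded_answer("what is bixonimania"))
# -> the refusal, because the fabricated disease matches no curated entry
```

The key design point is that the refusal is the default path, not an exception: the model never gets the chance to improvise when the retriever comes back empty.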
The technology exists. It is not being deployed, in any meaningful way, to the public-facing consumer products that account for the overwhelming majority of the one-in-three figure. It would slow the products down. It would make them more expensive to run. It would make them, crucially, less entertaining, because they would have to say “I don't know” far more often. Uncertainty is bad for engagement. Engagement is the business.
So where, in all of this, is the state?
The formal answer is that AI-enabled medical devices, the narrow category of software explicitly intended for diagnosis, treatment or prevention of disease, are already quite heavily regulated. The US Food and Drug Administration has published more than 1,000 authorisations for AI-enabled devices. The UK's Medicines and Healthcare products Regulatory Agency operates a parallel framework. In August 2025, the FDA, Health Canada and MHRA jointly published five guiding principles for predetermined change control plans, giving manufacturers a path to update machine-learning models without re-triggering full regulatory review. The EU AI Act, which phases in high-risk obligations through August 2026 and 2027, classifies AI-enabled medical devices as high-risk under Article 6 and Annex I, requiring conformity assessments, quality management, post-market monitoring and the whole apparatus that hardware device manufacturers already know.
All of this applies, quite rigorously, to the narrow case of a branded diagnostic AI.
None of it applies to ChatGPT answering a question about chest pain.
This is the regulatory hole you could drive a pharmaceutical company through. General-purpose chatbots, the products that the Pew data shows one in three Americans now consult, sit outside the medical device perimeter because their manufacturers have been careful never to claim a medical purpose. OpenAI's terms of service say ChatGPT is not a medical tool. Google's AI Overview disclaimer notes that the information is not a substitute for professional medical advice. Meta's AI is positioned as a general assistant. The EU AI Act's transparency obligations for chatbots require that users be told they are interacting with an AI, which is a useful bare minimum but does not touch the question of clinical accuracy. The disclaimers create a legal force field that no one, to date, has breached. Not the FDA. Not the MHRA. Not the EMA. Not a single successful civil action for harm.
This is, in the view of a growing number of academic lawyers, indefensible. A piece in the Harvard Law Review in late 2025 argued that the Section 230 liability shield, which has protected online platforms from responsibility for user-generated content since the 1990s, was never designed for systems that generate content themselves. Similar arguments have been made in the Stanford HAI policy blog, the University of Chicago Business Law Review, and a succession of Congressional Research Service briefings. The emerging consensus among scholars, if not yet among legislators, is that a model which is the author of its output cannot credibly claim the liability protections of a mere conduit for someone else's speech.
What this means in practice is uncertain. It may mean nothing, for a while. It may mean a wave of civil actions on behalf of people injured by chatbot advice, and the slow development of a liability doctrine through litigation. It may mean, eventually, statutory intervention. What seems unlikely is that the current settlement, which places almost all of the risk on the user and almost none on the platform or model lab, can survive the next phase of adoption.
If the current settlement is unsustainable, what would a better one look like? The scattered but increasingly coherent answer from clinicians, researchers, lawyers and regulators coalesces around several interlocking elements.
The first is what might be called a duty of epistemic honesty. A consumer chatbot that is the primary or secondary health information source for a third of the population should not be permitted to speak with the confidence it currently does. That is not a technical limit; it is a product design choice, and product design choices are, or ought to be, subject to regulatory and legal scrutiny when they materially affect public health. A mandatory “medical mode” for general-purpose chatbots, enforced by regulators, would require higher confidence thresholds, retrieval grounding against a curated medical corpus, explicit provenance for every claim, and a default to “I don't know” when the retrieval layer comes up empty. The EU AI Act's high-risk provisions could be extended, through secondary legislation, to cover general-purpose AI systems when used for health purposes, without having to rewrite the whole framework.
The second is benchmarking. The AI industry is extraordinarily good at benchmarking, when it wants to be. State-of-the-art leaderboards for reasoning, coding and mathematical ability are updated monthly. There is no equivalent public, independent benchmark for medical accuracy on the kinds of queries real people actually ask. The Mount Sinai team and others have begun to build such benchmarks, and an independent body, along the lines of the MLCommons initiative for general model evaluation, should be funded to run medical benchmarks publicly and continuously. Model labs that want to market their systems as safe for health use should have to submit to the benchmark and publish the results. Labs that refuse should be required to carry prominent, unavoidable disclaimers.
The third is provenance. Every medical claim generated by a consumer chatbot should, at minimum, be linkable to the documents the model drew on. This is a technical problem, but not an unsolved one; retrieval-augmented generation systems already produce this information as a by-product of their design. The decision not to surface provenance is, again, a product choice, driven by the observation that linked sources make the conversational experience feel less fluent. It is the fluency that is the problem. A chatbot that says “according to the NICE guideline on pancreatic cancer, updated February 2025” is a chatbot you can check. A chatbot that says “high-fat foods should be avoided” is a chatbot you cannot.
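The mechanics of surfacing provenance are not exotic. A sketch, under assumed names (the `Claim` type and its fields are illustrative, not any vendor's API), using the article's own example of a checkable citation:

```python
# Sketch: every generated claim travels with its source and a timestamp,
# so the user can check the sentence rather than trust the fluency.

from dataclasses import dataclass

@dataclass
class Claim:
    text: str      # the medical statement itself
    source: str    # the document the retrieval layer drew on
    updated: str   # when that underlying source was last revised

def render(claim: Claim) -> str:
    """A checkable sentence: the claim plus where and when it came from."""
    return f"According to {claim.source} (updated {claim.updated}): {claim.text}"

c = Claim(
    text="dietary fat after pancreatic surgery should be managed with "
         "enzyme replacement, not blanket avoidance.",
    source="the NICE guideline on pancreatic cancer",
    updated="February 2025",
)
print(render(c))
```

Retrieval-augmented systems already hold the `source` and `updated` fields internally; the design decision is whether to print them.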
The fourth is redress. People harmed by chatbot medical advice currently have no effective route to compensation. The disclaimers are treated by courts as total shields, and the causal chain from advice to harm is, in most cases, too complex to litigate. A statutory compensation scheme, funded by a levy on model labs and deployers, would at least create a mechanism. Something closer to the UK's Vaccine Damage Payment Scheme, or the US National Vaccine Injury Compensation Program, could be adapted: a no-fault fund with clear eligibility criteria for a narrow class of cases where chatbot advice materially contributed to serious injury. Such a scheme would not cover the diffuse harms (health anxiety, delayed diagnosis, low-grade wrong self-treatment) that probably matter most in aggregate. But it would establish a principle, which is that the cost of the products is not borne entirely by their victims.
The fifth is the division of responsibility. The current debate tends to collapse into a single question: who is to blame? But blame is not a useful frame, because the answer is genuinely distributed. Platforms that deploy chatbots into health-adjacent contexts (search engines, consumer-facing apps) carry a distinctive responsibility for the user experience and the framing of results. Model labs carry responsibility for training choices, safety mitigations and transparency about limits. Clinicians carry responsibility for talking to their patients about what these tools can and cannot do, and for building AI literacy into routine consultations. Regulators carry responsibility for closing the gap between medical device law and the general-purpose systems that are eating the medical advice market. Users carry the responsibility, one that no regulation can fully discharge, for remembering that a fluent sentence is not a diagnosis. Any credible accountability regime will allocate work across all of these actors rather than picking one.
It is tempting, reading a long article about AI health misinformation, to conclude that this is another slow-motion technological harm, the sort that society will eventually absorb and metabolise. Regulators will catch up. Courts will muddle through. Model labs will bolt on safety features. And, in time, the general level of harm will reach some equilibrium that we will, reluctantly, accept.
The bixonimania result is an argument against this sanguine view. Not because fabricated diseases pose a widespread threat (they do not; nobody is actually being treated for bixonimania), but because they reveal something about the underlying system that would be almost impossible to see with real conditions. Real diseases exist in the training data. When a chatbot describes pancreatic cancer, its output is anchored, however loosely, to real clinical literature. Errors in that output are errors of degree: bad nuance, missing context, outdated guidance. They can be hard to detect precisely because the bulk of the surrounding material is correct. The bixonimania experiment strips that camouflage away. It shows the system behaving exactly the same way for a fabricated input as it does for a real one. The machinery has no internal test for reality. It never did.
If we had to summarise the cumulative message of the Mount Sinai studies, the Mass General Brigham sycophancy work, the Guardian's Overviews investigation, the New York Times' reporting on MEDVi, the Pew and KFF surveys, and Osmanovic Thunström's bixonimania experiment, it would be this: the public has been quietly migrating its health information practice to systems that were not designed for medical safety, that cannot reliably distinguish real from fabricated claims, and that are governed by no meaningful regulatory regime. This migration is happening faster than our institutional reflexes can track. And the harms it produces are not, for the most part, dramatic set-piece cases of the bromism kind. They are low-grade, distributed, and therefore hard to mobilise a political response around.
Which is why the bixonimania finding matters. It is, in a small and carefully engineered way, a dramatic set-piece. It gives us a clean story, a memorable name, and a graspable moral. The doctor that will not say “I don't know” has been handed a stethoscope by a third of the adult population. If that sentence does not alarm you, read it again. If it does, the question is what you, the platforms, the regulators, the clinicians and the labs are going to do about it.
There is a small detail in the bixonimania story that deserves a coda. The name itself was a joke, and a pointed one. Mania is the psychiatric term for elevated, disinhibited mental states, often accompanied by overconfidence and a reduced grasp on reality. An eye condition cannot have mania. But a system can.
The deep worry about large language models in health is not that they occasionally get things wrong. Every source of medical information gets things wrong occasionally, including human doctors. The worry is that the system's confidence is disconnected from its competence, that its fluency obscures its unreliability, and that the scale at which it operates makes even small rates of error into population-level problems. That is not a hallucination in the ordinary sense. It is, to borrow Osmanovic Thunström's quietly devastating framing, a mania. A machine in the grip of its own eloquence.
Accountability, then, is not only a regulatory question. It is a cultural one. It requires us to recalibrate the authority we grant to fluent machines, and to resist the pleasing fiction that a well-formed sentence is the same thing as a true one. That recalibration will not happen spontaneously. It will have to be built, through regulation, through litigation, through research, through design, and through the ordinary discipline of public attention.
Bixonimania is not a real disease. The machine said it was. A great many people believed the machine. That is the story. The rest is what we decide to do about it.
Almira Osmanovic Thunström, bixonimania experiment, University of Gothenburg. Reported in Nature, April 2026. Original preprints published March-April 2024 on open preprint servers.
Cureus (retracted paper citing bixonimania preprints), researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research. Retraction notice published 2024-2025.
The Guardian, investigation into Google AI Overviews health advice, published January 2026.
Euronews, “Google removes some health-related questions from its AI Overviews following accuracy concerns,” 12 January 2026.
The Lancet Digital Health, Mount Sinai / Icahn School of Medicine study on LLM susceptibility to medical misinformation, 9 February 2026.
Communications Medicine, Mount Sinai earlier study on AI chatbots and medical misinformation, August 2025.
Mount Sinai Newsroom, “Can Medical AI Lie? Large Study Maps How LLMs Handle Health Misinformation,” February 2026.
Dr Danielle Bitterman et al., “When helpfulness backfires: LLMs and the risk of false medical information due to sycophantic behaviour,” npj Digital Medicine, October 2025.
Mass General Brigham press release, “Large Language Models Prioritize Helpfulness Over Accuracy in Medical Contexts,” October 2025.
Pew Research Center, “Where Do Americans Get Health Information, and What Do They Trust?”, 7 April 2026.
Kaiser Family Foundation, “Poll: 1 in 3 Adults Are Turning to AI Chatbots for Health Information,” 2026.
Fierce Healthcare, “85% of US adults still use providers for healthcare information: Pew survey,” April 2026.
Healthcare Dive, “Most health AI users don't rate chatbots as highly accurate: poll,” April 2026.
Annals of Internal Medicine: Clinical Cases, “A Case of Bromism Influenced by Use of Artificial Intelligence,” 2025.
American Society of Clinical Oncology (ASCO Post), “Study Finds AI Chatbots Are Vulnerable to Spreading Malicious, False Health Information,” June 2025.
PMC, “AI chatbots and (mis)information in public health: impact on vulnerable communities,” 2023. Supporting analysis in Public Health Challenges.
Harvard Law Review, “Beyond Section 230: Principles for AI Governance,” 2025.
US Food and Drug Administration, AI-enabled medical device authorisations list and guidance documentation, 2025-2026.
UK Medicines and Healthcare products Regulatory Agency (MHRA), software as a medical device and AI guidance, 2025-2026.
FDA, Health Canada and MHRA joint publication, “Five Guiding Principles for Predetermined Change Control Plans in ML-enabled Medical Devices,” August 2025.
European Union AI Act, Regulation (EU) 2024/1689, Article 6 and Annex I, in force from August 2026 and August 2027 for high-risk obligations.
Effy Vayena and colleagues, Nature and related commentary on retrieval grounding and medical AI governance.
npj Digital Medicine review, “Retrieval augmented generation for 10 large language models and its generalizability in assessing medical fitness,” 2025.
Drug Discovery and Development, “The New York Times spotlighted MEDVi. The FDA had already warned the self-proclaimed 'fastest growing company in history,'” February 2026.
Centre for Countering Digital Hate, reports on AI-enabled health and vaccine misinformation, 2025-2026.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary: * Having spent most of the day shadowing contractors here digging a trench to lay a new gas line, I'm relaxing now to the radio pregame show ahead of tonight's Rangers / Yankees game. As yesterday, I'll follow the game with night prayers then head to bed early.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 235.9 lbs. * bp= 145/86 (61)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 04:40 – 1 banana * 05:00 – 1 peanut butter cookie * 07:00 – 2 chocolate chip cookies * 09:30 – 2 more cookies * 10:00 – 1 ham & cheese sandwich * 12:15 – mashed potatoes and gravy, fried chicken * 14:00 – apple pie, biscuit and jam, hash brown, scrambled eggs, sausage, pancakes
Activities, Chores, etc.: * 03:30 – listen to local news talk radio * 04:15 – bank accounts activity monitored. * 05:40 – read, write, pray, follow news reports from various sources, surf the socials, nap. * 08:00 – contractors arrived and began digging a trench from the meter at a back corner of my house; they'll be installing a new gas line from my house out to the alley * 13:30 – foreman of the crew working on the new gas line project told me they've been called away to finish another job tomorrow, but they plan to be back here on Friday to finish up this job. * 16:20 – listen to the Jack Show * 17:30 – listening now to Rangers Gameday on DFW's 105.3 The Fan Sports Radio ahead of tonight's game against the New York Yankees.
Chess: * 15:47 – moved on all pending CC games
from
SFSS

But eventually, as things go from the lesser of two evils to the ordinary, she’ll end up finding it ordinary.
“What are you wearing?” Helen asked the man at the tree-shaded bus stop, hesitating to sit down next to him on the bench. “What” wasn't the right question. She could see what he was wearing: swim goggles, a football jersey, Crocs, a kilt, a gray hoodie that was too tight on him, knee-length rainbow-striped socks, and a leather cuff around his neck with metal spikes coming out of it. Helen knew at least one person who'd have worn each item in the outfit, but would expect any pair of them to fight to the death if they were ever stuck in a room together.
The man looked down at himself, which was an effective enough way to see everything except the goggles suctioned to his forehead. He was bald, without even eyebrows, but looked too thickset and robust to have just survived cancer. Maybe he was a mental patient, picked out all his own hair... no, there was some on his arms. Surely a mental patient who picked at hair would've gotten that. Or not, Helen didn't know. “A MacGregor plaid kilt,” he said, “a pair of yellow Crocs, a –”
“Never mind,” Helen said.
“Do you know when the next bus that goes to the stop on Ninth Street will be?” he asked her, after a silence.
“Twenty minutes,” she said, after a glance at her watch. “That's where I'm going, too.”
“Really?” he asked. “Are you from that neighborhood? Do you know where Roger Swansea lives?”
Helen tilted her head. “Why are you looking for him?”
The man peered at her, assessment in his eyes. Helen shifted uncomfortably and moved one of her braids behind her ear; plastic ties clicked against each other. She didn't mind when people from her high school checked her out; older men she did mind.
“I suppose it doesn't much matter if I tell you,” the man said finally. “I'd have seen the police report if you were going to call – well – anyway. Swansea's got to die,” he said.
“Has he,” said Helen. She kept her hands on her knees, but shifted her hips so her phone was pressed between her leg and the bench. It was there if she needed it.
“Well, you're not going to believe me,” laughed the man, “but, you see, I'm a time traveler. And Roger Swansea invented a time machine. Not the same kind I used – I'm not stupid, I checked carefully for paradoxes – but today he's going to go forward in time, and he's going to bring forward a disease that they've eradicated and lost resistance to. Hundreds of people are going to die before they can stop it.”
“So you decided to kill him,” Helen said. “Why didn't you kill him – oh – last year? Since you're a time traveler. Why do it now?”
“Paradox checker didn't like it,” the man said. “It said I could go back today – but it made me land in the bathroom of a diner outside town, which was as close as I could get to his house by machine. I'm having to bus across to his place. Lucky I was able to print some currency and some clothes from this time.”
“Lucky,” agreed Helen absently. “But why do you have to kill the guy, not just convince him to skip his trip or go in a biohazard suit?”
“Because,” the time traveler said, wagging a finger authoritatively, “history shows that he disappears on this day. If I just convince him to stay, he'll still be around – paradox in the lightcone. If I convince him to go in a biohazard suit... Well, that could actually work. Does he have a biohazard suit?”
“Not as far as I know,” Helen said.
“There you go, it could take him more than a day to get ahold of one, that's probably why the paradox checker didn't say I could do that. It said I could try to kill him just fine, though.”
“Won't you create some kind of paradox in the future he's going to bring the disease to?” Helen asked. “They're in your own past, if I understand right.”
“Not quite,” said the time traveler. “That is to say, Swansea technically landed outside my light cone – they lived on Europa, I'm from out on Argo. The only reason I got the news was via more time travel, and that means I can mess with the events that led to me getting it. It doesn't count if time travel was the only reason it could causally affect you.”
“Uh-huh,” said Helen skeptically.
“How long until the bus gets here?” the time traveler asked.
“Six minutes,” she said, glancing at her wrist. “So you're just going to kill the man. You know he's got a family?”
“I'm going to save hundreds of lives,” said the time traveler.
“In a manner of speaking,” said Helen. She reached into the inside pocket of her coat, pulled out her miniature laser gun, and shot the time traveler between the eyes. He fell off the bench, the look of pious smugness still on his face.
Helen dragged the absurdly clad body into the trees and took the long way home, rather than let the bus driver get a look at her to be questioned when the time traveler was found. Assuming he wouldn't just evaporate, or something. She didn't know how his sort of time travel worked.
When she'd finally walked the mile and a half, Helen knocked on the door to the basement. “Dad,” she called. “Da-a-ad.”
“I'm busy, Helen!” he shouted up the stairs.
“It's really important!”
“More important than the mess with the matter agitator?”
“I had to shoot a guy again, so about that important,” she said.
Her father came halfway up the stairs. “What, again? Was he going to steal my newest invention too?”
Helen shook her head. “He was going to kill you.”
Her father blinked. “Oh. Well then. Thank you, dear. What was he going to kill me for?”
“Apparently you're going to the future, on Europa?” Helen said, gesturing vaguely. “You're going to give some people a disease? Lots of them will die? The guy wanted to save them.”
“Oh, I see. Well, I won't travel without adequate quarantine, then. And... I suppose if they don't die, then in the future the same person might well be born... mightn't he? Or he'll be prevented altogether, but either way he's unlikely to return to the past and try to kill me, so there is a sense in which you didn't truly... kill... someone who exists... but... How have we not been obliterated by a paradox? Dear, do you know? I was hoping to finish my machine today but if I need to spend all afternoon on math...”
Helen shrugged. “Apparently,” she said, “it's safe if you get the information via time travel.”
“I see. Will I need to brainwash a new therapist for you?” he asked, brow furrowing with concern.
“I think I'm okay,” she said. “Easier the second time. I kind of wish you'd stop attracting assassins, though, Dad.”
“You don't really need to take it upon yourself to protect me, Helen dear,” he said, smiling indulgently. “But thank you.”
“You're welcome, Dad,” Helen said. “Love you.” He took that as a dismissal and turned to go back into the basement, muttering about coefficients. Helen lugged her backpack upstairs and started her homework.
#blume
Image: Motion blur of a departing subway train next to a man at Dundas station, Toronto – Randomanian (Creative Commons license)
from
wystswolf

BY ELLA WHEELER WILCOX
I love your lips when they’re wet with wine
And red with a wild desire;
I love your eyes when the lovelight lies
Lit with a passionate fire.
I love your arms when the warm white flesh
Touches mine in a fond embrace;
I love your hair when the strands enmesh
Your kisses against my face.

Not for me the cold, calm kiss
Of a virgin’s bloodless love;
Not for me the saint’s white bliss,
Nor the heart of a spotless dove.
But give me the love that so freely gives
And laughs at the whole world’s blame,
With your body so wonderful and warm in my arms,
It sets my poor heart aflame.

So kiss me sweet with your warm wet mouth,
Still fragrant with ruby wine,
And say with a fervor born of the South
That your body and soul are mine.
Clasp me close in your warm strong arms,
While the pale stars shine above,
And we’ll live our whole bright lives away
In the joys of a living love.
#poetry #wyst