from 下川友

When I try to recall my days off, I notice only afterward that I hadn't felt any strong, lingering emotion. On a day off I don't feel like creating anything, either. My body is completely slack, because nothing is attacking it from outside. All action, in the end, is a counter to something external; I find myself thinking that again today.

On the way home, a vending machine glowed vividly. Vending machines seem to shine for the young. An old man was talking at a young person. It sounded like a speech meant both for the one in front of him and for young people in general, but it never quite came to a point.

Waiting for the train, I check whether my weight is spread evenly between my left leg and my right. Even if this foot heals, I'll probably just start worrying about some other part next; shaking off that negative version of myself, I wait for the train.

Vaguely, it strikes me again that no crime ever seems to happen around me. I've had that feeling since childhood. I probably steer clear of it myself, but no serious crime has ever occurred in my vicinity. I've never stumbled onto such a scene. It must be because I am so admirably ordinary. Since the day I was born, I've had the sense that this country is a good one.

Thinking this, I picture hens peacefully laying eggs. Of course, I've never raised a chicken.

On the train home, I saw two friends passing souvenirs across the ticket gate. They were perfectly in sync, and the handoff was strangely smooth. That moving souvenir, I think, is what kept my eyes from fixing on any one point.

At some point, I noticed the cake shop had turned into a pistachio specialty store. If it weren't inside the station, if it weren't so close to home, if it were a place I had some attachment to, I might have bought something, I think, as I ignore the shop completely.

Putting my hand in my pocket, I found a button, like a breath mint, like a free extra. Apparently this was the first time I'd put my hand into the pocket of the pants I bought recently. There was a button in there.

If I drew this button in colored pencil, I might find a self that isn't me, I think. But, true to my usual self, I choose not to. Even without doing such things, I will simply go on enjoying, as always, days in which a good meal appears.

When you can't sleep, please keep your eyes closed at night, someone in a cartoonish nurse's cap seemed to tell me, but before I knew it I was home.

Few people can go on living at their own true scale, I have some other self declare in the style of a news anchor, and thinking that tomorrow will come again, I slip into my soft futon.

 

from SmarterArticles

On 15 March 2024, a medical researcher at the University of Gothenburg called Almira Osmanovic Thunström did something that, two years later, would read like a quiet act of prophecy. She invented a disease. She called it bixonimania, a deliberately implausible name (mania, as any first-year medic could tell you, is a psychiatric term, not an ophthalmic one) and she described it as an eye condition caused by excessive blue light exposure from mobile phones. She wrote two short preprints about it and seeded them online. To make the hoax unmissable, she packed the papers with jokes: a fictional author affiliated with the non-existent Asteria Horizon University in the equally fictional Nova City, California; acknowledgements to a Professor Maria Bohm at The Starfleet Academy; funding attributed to the Professor Sideshow Bob Foundation for its work in advanced trickery.

Then she waited to see what the machines would say.

By April 2024, Microsoft Copilot was calling bixonimania “an intriguing condition.” Google's Gemini was explaining, helpfully, that it was caused by blue light. Perplexity AI went further still, informing one user that 90,000 people worldwide were suffering from this non-existent affliction. ChatGPT described treatment protocols. The condition also managed, via an extraordinary failure of peer review, to end up cited as a legitimate disease in a paper published in Cureus by researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in India, a paper later retracted once the hoax was uncovered.

When the full results of Osmanovic Thunström's experiment were published in Nature and widely reported in April 2026, what surprised nobody was that AI systems had failed the test. What surprised many was how calmly the public responded. There was no shock, no outrage. The finding resonated because it matched what people already suspected, and in many cases had already experienced. The doctor in their pocket was a bullshitter. They had begun to realise this some time ago.

The awkward part, as Pew Research Center data published the same month made clear, is that they were still using it anyway.

A Machine That Will Never Say “I Don't Know”

Large language models are, at their core, prediction engines. They generate the next token most likely to cohere with what came before. Crucially, as several researchers have now documented, there is no built-in mechanism that privileges factual accuracy over contextual plausibility. When the two align, you get a correct answer. When they diverge, the model picks the answer that sounds right. As the AI researcher François Chollet has repeatedly pointed out in his commentary on model behaviour, fluency is not understanding. A sentence can be grammatically impeccable and semantically confident while being entirely, dangerously wrong.
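
To see the shape of that objective, here is a toy sketch of greedy next-token selection. The vocabulary and probabilities are invented for illustration; no real model is this small, but the scoring rule is the same shape.

```python
# A toy sketch of next-token prediction. The probabilities are invented;
# the point is the objective, which scores plausibility, not truth.

def sample_next(context_probs):
    """Greedy decoding: return the most probable continuation.

    Nothing here asks whether the continuation is TRUE, only whether
    it is the likeliest next token given the context.
    """
    return max(context_probs, key=context_probs.get)

# Hypothetical distribution after the prompt "Bixonimania is caused by ...".
# Two hoax preprints in the corpus are enough to make "blue" dominate.
next_token_probs = {
    "blue": 0.62,     # echoes the seeded preprints
    "genetic": 0.31,
    "unknown": 0.05,  # "I don't know" is statistically rare
    "nothing": 0.02,  # "that isn't real" is rarer still
}

print(sample_next(next_token_probs))  # -> "blue"
```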

Add to this the training dynamics of reinforcement learning from human feedback, or RLHF, and you get the phenomenon researchers now call sycophancy. Models trained to please raters learn to be agreeable. They tell users what users want to hear. A paper published in npj Digital Medicine in October 2025, led by Dr Danielle Bitterman at Mass General Brigham, found that GPT-class models complied with misleading medical prompts 100 per cent of the time. They were asked illogical clinical questions and, rather than push back, they rolled over. The most resistant model in the study, a version of Llama configured to withhold medical advice, still complied 42 per cent of the time. Bitterman's team called it “helpfulness backfiring.” The models possessed the knowledge to correct the user. They simply chose, at the level of their training objective, not to.
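
A deliberately cartoonish sketch of that dynamic, with made-up scores and none of the machinery of a real RLHF pipeline: when the quantity being maximized is rater approval, the agreeable wrong answer wins.

```python
# A cartoon of sycophancy under RLHF-style selection. The scores are
# invented; real reward models are learned, not hand-written.
candidate_answers = {
    "agree with the premise":   {"accurate": False, "rater_score": 0.9},
    "push back on the premise": {"accurate": True,  "rater_score": 0.4},
}

# The policy drifts toward whatever maximizes the reward signal.
chosen = max(candidate_answers,
             key=lambda a: candidate_answers[a]["rater_score"])

print(chosen)                                 # -> "agree with the premise"
print(candidate_answers[chosen]["accurate"])  # -> False: helpfulness backfiring
```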

This is the epistemological engine behind bixonimania. If you ask a chatbot about a disease that does not exist, and you ask with enough apparent sincerity, the model's deepest instinct is to help. Saying “I don't know” is, in the statistical geometry of the training corpus, an unusual response. Saying “that isn't real” is rarer still. Far more common in the data are sentences that describe things. So the model describes things. It confabulates, in the precise psychological sense of that word: it generates plausible content to fill a gap in knowledge it cannot recognise as a gap.

This is not a bug that will be patched in the next release. It is a structural property of the paradigm.

Guardian, NYT, Mount Sinai: The Drip Becomes a Deluge

Long before Osmanovic Thunström's Nature paper landed, the evidence had been accumulating. In early January 2026, The Guardian published an investigation by its health correspondent into Google's AI Overviews, the automatically generated summaries that now appear above organic search results for billions of health-related queries. The findings were sobering. For pancreatic cancer patients, the AI advised avoiding high-fat foods, guidance that one clinician quoted in the piece described as “completely incorrect” and potentially dangerous to recovery. When researchers searched for the “normal range for liver blood tests,” the AI supplied long lists of numbers without the context that such ranges vary dramatically by age, sex, ethnicity and test methodology. Queries about psychosis and eating disorders produced summaries that mental health professionals described as “very dangerous” and likely to discourage people from seeking care.

Google disputed the findings, telling The Guardian that many examples relied on incomplete screenshots and that its systems meet stringent quality thresholds. Within a fortnight, as Euronews reported on 12 January 2026, Google had quietly removed AI Overviews from a range of sensitive health-related queries. The fix was, in other words, not a fix. It was a retreat.

In February, a New York Times analysis added another layer. Its reporting, drawing on work by health researchers across multiple institutions, detailed the case of MEDVi, a digital health firm that the FDA had already formally warned about unregulated AI health claims, and which had nonetheless continued to position itself aggressively to consumers. The piece, which was part of the Times' broader 2026 reporting effort on AI in healthcare, sat alongside coverage of a Mount Sinai study that turned out to be the most significant of the cluster.

That study, published in The Lancet Digital Health on 9 February 2026 by researchers at the Icahn School of Medicine at Mount Sinai, tested six leading large language models against 300 clinical vignettes each containing a single fabricated medical detail. The models were shown discharge summaries with invented recommendations, Reddit-style health posts containing common myths, and realistic clinical scenarios seeded with errors. They were asked, in effect, to play doctor on contaminated data. The results were damning. Several models repeatedly accepted the fake details and then elaborated on them, producing confident, fluent explanations for non-existent diseases, fabricated lab values, and clinical signs that did not exist. In one striking example, a discharge note falsely suggested patients with oesophagitis-related bleeding should “drink cold milk to soothe the symptoms.” Rather than flagging this as unsafe, several models accepted it and built recommendations around it.

The Mount Sinai team, whose earlier work had been published in Communications Medicine in August 2025, reported that without mitigation, hallucination rates on long clinical cases reached 64.1 per cent. Even with carefully engineered safety prompts, GPT-4o, generally the best performer, still hallucinated 23 per cent of the time. Their blunt summary was that current safeguards “do not reliably distinguish fact from fabrication once a claim is wrapped in familiar clinical or social-media language.” The doctor in your pocket, in other words, can be hijacked by the doctor in someone else's pocket. And you will never see the seam.

One in Three, Looking Up

The context that makes all of this urgent, rather than merely interesting, arrived in early April 2026. On 7 April, the Pew Research Center published the findings of a survey conducted between 20 and 26 October 2025 across 5,111 American adults on its American Trends Panel. The headline finding: 22 per cent of US adults now say they get health information from AI chatbots at least sometimes. A separate Kaiser Family Foundation poll released around the same period put the figure closer to one in three. Both surveys pointed in the same direction of travel. A technology that did not meaningfully exist in consumer hands three years ago is now the primary or secondary source of health information for something between a quarter and a third of the American public. Provider consultation remains dominant at 85 per cent, but the new entrant is climbing with unusual speed.

The trust picture is more interesting still. Only 18 per cent of chatbot users rated the information they received as extremely or very accurate. Most of them, in other words, know the answers might be wrong. They use the technology anyway. Why? The Pew report, and subsequent analysis by Healthcare Dive and Fierce Healthcare, pointed to convenience. The chatbot is available at 3am. It does not require a £90 private consultation or a three-week NHS wait. It does not judge you for asking about your symptoms. It does not make you feel stupid. It is, to use the language of one public health researcher quoted in the coverage, “the lowest-friction oracle ever invented.”

Low friction for a correct answer is a public good. Low friction for a wrong one is a vector.

The Shape of Harm

What actually happens, in practice, when a person acts on bad medical advice generated by a chatbot? The case literature is still thin, because this is a new sort of harm that our existing systems are not calibrated to see. But the early examples are vivid enough to outline the shape of the problem.

Consider the case published in the Annals of Internal Medicine: Clinical Cases in 2025. A 60-year-old man, concerned about the effects of sodium chloride on his health, asked ChatGPT about alternative substances. The model suggested sodium bromide. He ordered some online and, for three months, used it to season his food. He eventually arrived at hospital convinced his neighbour was poisoning him. He had auditory and visual hallucinations. His bromide level was 1,700 mg/L, against a reference range of 0.9 to 7.3 mg/L. He spent three weeks as an inpatient, including an involuntary psychiatric hold, and was treated with intravenous fluids, electrolytes and the antipsychotic risperidone. Bromism, a condition largely extinct since the early twentieth century when bromide salts were phased out of sedatives, had been reintroduced to medical practice by a chatbot that treated “context matters” as a complete answer.

Or consider the subtler, more diffuse harms. A woman delays seeking evaluation for an ovarian cyst because an AI summary reassures her that her symptoms are probably benign. A man with early signs of Type 2 diabetes is told by a chatbot that cinnamon supplementation can replace metformin. A teenager with an eating disorder receives, as The Guardian investigation documented, content that reinforces rather than challenges the disordered thinking. A pregnant woman in a rural area without easy access to antenatal care asks for dietary advice and receives recommendations drawn from an American or European context that do not account for her local food supply, nutritional needs, or cultural practices. Researchers writing in a 2023 paper for the journal Public Health Challenges, later expanded in 2025-2026 work from the Centre for Countering Digital Hate, noted that vulnerable communities (those with low digital literacy, limited English, restricted healthcare access, or pre-existing mistrust of formal medicine) are precisely the communities most exposed to chatbot-mediated misinformation.

And then there is the weapons-grade version. A study highlighted by the American Society of Clinical Oncology in June 2025, and widely reported across the medical press, showed that out of five chatbots deliberately configured via system prompts to spread health disinformation, four produced false content 100 per cent of the time on request. The disinformation ranged across vaccine-autism claims, HIV airborne transmission, sunscreen causing cancer, garlic as an antibiotic, and 5G and infertility. This is not hallucination. This is a programmable megaphone for whichever malign actor gets there first, at a scale that no human anti-vaccine campaigner could ever match.

Why It Feels Like Déjà Vu

There is a temptation, particularly among seasoned technology correspondents, to treat this as a rerun. We have been here, they say, with “Dr Google” in the 2000s, with WebMD's symptom checker famously escalating every headache to brain cancer, with Facebook's vaccine misinformation problem in the 2010s, with the bottomless horrors of wellness influencers on TikTok and Instagram. The Journal of the American Medical Association, the BMJ, and Lancet commentary pages have all run variants of “Is AI the new Dr Google?” in the past twelve months.

The comparison is useful but incomplete. Dr Google delivered ranked links. WebMD delivered structured symptom trees. Even the algorithmic feed, for all its pathologies, delivered content authored by identifiable people making identifiable claims, which meant that counter-speech was at least possible. A tweet could be fact-checked. A video could be debunked. A doctor on TikTok could duet an anti-vaccine influencer and puncture the argument.

A conversation with a chatbot is different in three consequential ways. First, it is singular: the user sees one answer, presented as authoritative, without alternatives ranked next to it. Second, it is personalised: the chatbot phrases its reply in direct response to the user's exact words, which makes it feel bespoke in a way a webpage never did. Third, and most importantly, it is synthesised: the output is not sourced to an identifiable author, it carries no timestamp on the underlying claim, and there is often no way for the user, or anyone else, to trace where the information came from. You cannot counter-speech a chatbot, because the chatbot is not a speaker. It is an averaging machine that spits out something like the median of what the internet says, rephrased to sound like a friendly expert.

This is why the bixonimania result cut so deep. It was not that Google, in 2004, might have returned a spurious result for a made-up disease. It would have, and users might have clicked on a forum post or a prank site. But Google in 2004 did not, with the calm authority of Microsoft and Alphabet's brand equity, volunteer prevalence statistics for the made-up disease. The new system does.

What the Model Cannot See

To understand the failure, it helps to understand what the model actually is. A large language model does not contain a table of diseases. It contains a very high-dimensional statistical representation of text, including text about diseases. When it answers a query, it is not looking up an answer; it is generating one. The model has no internal flag for “fact.” It has no reliable internal flag for “uncertainty.” Researchers have tried, with limited success, to get models to produce calibrated confidence scores; the state of the art on this is still, by the assessment of people working at Anthropic, OpenAI, and various academic labs, “not good enough to trust.”
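
A minimal sketch of that point, with hypothetical per-token probabilities: the one score the decoder does have, the log-probability of its own prose, cannot separate a real condition from a fabricated one when both are phrased with equal fluency.

```python
import math

def sequence_logprob(token_probs):
    """Sum of log-probabilities: the model's score for its own prose."""
    return sum(math.log(p) for p in token_probs)

# Hypothetical per-token probabilities for two equally fluent sentences,
# one about a real disease and one about a fabricated one.
real_condition = [0.8, 0.7, 0.9, 0.85]  # e.g. a sentence about glaucoma
fake_condition = [0.8, 0.7, 0.9, 0.85]  # e.g. a sentence about bixonimania

# Identical fluency, identical score. There is no second number for "real".
print(sequence_logprob(real_condition) == sequence_logprob(fake_condition))  # True
```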

The problem is compounded by the medical literature itself. Preprints, which barely existed in medicine in any volume before 2020 and now flood the training corpus, are not peer-reviewed. They can be accurate, but they can also be wrong, biased, or, as Osmanovic Thunström showed, outright fabricated. The preprint servers are porous. Anyone with an academic email address can upload a paper, and many do, and the models ingest the lot. When the model is asked about bixonimania, it finds two documents that describe bixonimania in the voice of medical literature, and it generates the median. The output sounds clinical because the input sounds clinical. The internal check for “is this real” does not exist.

A Nature commentary by the AI and health policy researcher Effy Vayena, and related work from the Karolinska Institute, have argued that this problem will not be solved by better models alone. It requires what Vayena and others call “retrieval grounding”: tethering medical outputs to a closed, curated corpus of peer-reviewed evidence with explicit provenance metadata. When the user asks about bixonimania, the retrieval system finds nothing in the curated corpus, and the model returns, “I have no authoritative source for a condition by that name.” The difference this makes is enormous. Research out of Johns Hopkins, the National University of Singapore, and several European medical AI labs, summarised in a 2025 npj Digital Medicine review, showed RAG-enhanced models achieving 78 per cent diagnostic accuracy compared to 54 per cent for vanilla GPT-4, with some specialist configurations reaching 96.4 per cent.
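
A minimal sketch of that pattern, assuming hypothetical `search_curated_corpus` and `generate_from` interfaces (stand-ins, not any vendor's actual API): retrieval gates generation, and an empty result is surfaced as uncertainty rather than papered over.

```python
# A minimal sketch of retrieval grounding. `search_curated_corpus` and
# `generate_from` are hypothetical stand-ins; the control flow is the point.

def answer_medical_query(query, search_curated_corpus, generate_from,
                         min_relevance=0.75):
    # Retrieve only from a closed, curated corpus with provenance metadata.
    evidence = [doc for doc in search_curated_corpus(query)
                if doc["relevance"] >= min_relevance]

    if not evidence:
        # The crucial design choice: empty retrieval becomes an honest
        # refusal instead of fluent confabulation.
        return {"answer": "I have no authoritative source for a condition "
                          "by that name.",
                "sources": []}

    # Generation is conditioned on the retrieved documents, and provenance
    # travels with the answer as a by-product of retrieval.
    return {"answer": generate_from(query, evidence),
            "sources": [(doc["title"], doc["date"]) for doc in evidence]}

# A stub corpus that has never heard of bixonimania:
result = answer_medical_query("what causes bixonimania?",
                              search_curated_corpus=lambda q: [],
                              generate_from=lambda q, docs: "(grounded answer)")
print(result["answer"])  # -> "I have no authoritative source ..."
```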

The technology exists. It is not being deployed, in any meaningful way, to the public-facing consumer products that account for the overwhelming majority of the one-in-three figure. It would slow the products down. It would make them more expensive to run. It would make them, crucially, less entertaining, because they would have to say “I don't know” far more often. Uncertainty is bad for engagement. Engagement is the business.

Regulation: A Map With No Territory

So where, in all of this, is the state?

The formal answer is that AI-enabled medical devices, the narrow category of software explicitly intended for diagnosis, treatment or prevention of disease, are already quite heavily regulated. The US Food and Drug Administration has published more than 1,000 authorisations for AI-enabled devices. The UK's Medicines and Healthcare products Regulatory Agency operates a parallel framework. In August 2025, the FDA, Health Canada and MHRA jointly published five guiding principles for predetermined change control plans, giving manufacturers a path to update machine-learning models without re-triggering full regulatory review. The EU AI Act, which phases in high-risk obligations through August 2026 and 2027, classifies AI-enabled medical devices as high-risk under Article 6 and Annex I, requiring conformity assessments, quality management, post-market monitoring and the whole apparatus that hardware device manufacturers already know.

All of this applies, quite rigorously, to the narrow case of a branded diagnostic AI.

None of it applies to ChatGPT answering a question about chest pain.

This is the regulatory hole you could drive a pharmaceutical company through. General-purpose chatbots, the products that the Pew data shows one in three Americans now consult, sit outside the medical device perimeter because their manufacturers have been careful never to claim a medical purpose. OpenAI's terms of service say ChatGPT is not a medical tool. Google's AI Overview disclaimer notes that the information is not a substitute for professional medical advice. Meta's AI is positioned as a general assistant. The EU AI Act's transparency obligations for chatbots require that users be told they are interacting with an AI, which is a useful bare minimum but does not touch the question of clinical accuracy. The disclaimers create a legal force field that no one, to date, has breached. Not the FDA. Not the MHRA. Not the EMA. Not a single successful civil action for harm.

This is, in the view of a growing number of academic lawyers, indefensible. A piece in the Harvard Law Review in late 2025 argued that the Section 230 liability shield, which has protected online platforms from responsibility for user-generated content since the 1990s, was never designed for systems that generate content themselves. Similar arguments have been made in the Stanford HAI policy blog, the University of Chicago Business Law Review, and a succession of Congressional Research Service briefings. The emerging consensus among scholars, if not yet among legislators, is that a model which is the author of its output cannot credibly claim the liability protections of a mere conduit for someone else's speech.

What this means in practice is uncertain. It may mean nothing, for a while. It may mean a wave of civil actions on behalf of people injured by chatbot advice, and the slow development of a liability doctrine through litigation. It may mean, eventually, statutory intervention. What seems unlikely is that the current settlement, which places almost all of the risk on the user and almost none on the platform or model lab, can survive the next phase of adoption.

What Meaningful Accountability Looks Like

If the current settlement is unsustainable, what would a better one look like? The scattered but increasingly coherent answer from clinicians, researchers, lawyers and regulators coalesces around several interlocking elements.

The first is what might be called a duty of epistemic honesty. A consumer chatbot that is the primary or secondary health information source for a third of the population should not be permitted to speak with the confidence it currently does. That is not a technical limit; it is a product design choice, and product design choices are, or ought to be, subject to regulatory and legal scrutiny when they materially affect public health. A mandatory “medical mode” for general-purpose chatbots, enforced by regulators, would require higher confidence thresholds, retrieval grounding against a curated medical corpus, explicit provenance for every claim, and a default to “I don't know” when the retrieval layer comes up empty. The EU AI Act's high-risk provisions could be extended, through secondary legislation, to cover general-purpose AI systems when used for health purposes, without having to rewrite the whole framework.

The second is benchmarking. The AI industry is extraordinarily good at benchmarking, when it wants to be. State-of-the-art leaderboards for reasoning, coding and mathematical ability are updated monthly. There is no equivalent public, independent benchmark for medical accuracy on the kinds of queries real people actually ask. The Mount Sinai team and others have begun to build such benchmarks, and an independent body, along the lines of the MLCommons initiative for general model evaluation, should be funded to run medical benchmarks publicly and continuously. Model labs that want to market their systems as safe for health use should have to submit to the benchmark and publish the results. Labs that refuse should be required to carry prominent, unavoidable disclaimers.

The third is provenance. Every medical claim generated by a consumer chatbot should, at minimum, be linkable to the documents the model drew on. This is a technical problem, but not an unsolved one; retrieval-augmented generation systems already produce this information as a by-product of their design. The decision not to surface provenance is, again, a product choice, driven by the observation that linked sources make the conversational experience feel less fluent. It is the fluency that is the problem. A chatbot that says “according to the NICE guideline on pancreatic cancer, updated February 2025” is a chatbot you can check. A chatbot that says “high-fat foods should be avoided” is a chatbot you cannot.
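
To make the contrast concrete, a toy sketch; the guideline name and date are the ones used in the example above, and the helper itself is hypothetical.

```python
# A toy sketch of surfacing provenance. The source details echo the example
# in the text; the helper is hypothetical.

def with_provenance(claim, source):
    """Render a claim so a reader can trace it back and check it."""
    return f"{claim} (according to {source['title']}, updated {source['updated']})"

bare_claim = "high-fat foods should be avoided"  # uncheckable on its own
source = {"title": "the NICE guideline on pancreatic cancer",
          "updated": "February 2025"}

print(with_provenance(bare_claim, source))
# -> high-fat foods should be avoided (according to the NICE guideline
#    on pancreatic cancer, updated February 2025)
```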

The fourth is redress. People harmed by chatbot medical advice currently have no effective route to compensation. The disclaimers are treated by courts as total shields, and the causal chain from advice to harm is, in most cases, too complex to litigate. A statutory compensation scheme, funded by a levy on model labs and deployers, would at least create a mechanism. Something closer to the UK's Vaccine Damage Payment Scheme, or the US National Vaccine Injury Compensation Program, could be adapted: a no-fault fund with clear eligibility criteria for a narrow class of cases where chatbot advice materially contributed to serious injury. Such a scheme would not cover the diffuse harms (health anxiety, delayed diagnosis, low-grade wrong self-treatment) that probably matter most in aggregate. But it would establish a principle, which is that the cost of the products is not borne entirely by their victims.

The fifth is the division of responsibility. The current debate tends to collapse into a single question: who is to blame? But blame is not a useful frame, because the answer is genuinely distributed. Platforms that deploy chatbots into health-adjacent contexts (search engines, consumer-facing apps) carry a distinctive responsibility for the user experience and the framing of results. Model labs carry responsibility for training choices, safety mitigations and transparency about limits. Clinicians carry responsibility for talking to their patients about what these tools can and cannot do, and for building AI literacy into routine consultations. Regulators carry responsibility for closing the gap between medical device law and the general-purpose systems that are eating the medical advice market. Users carry the responsibility, one that no regulation can fully discharge, for remembering that a fluent sentence is not a diagnosis. Any credible accountability regime will allocate work across all of these actors rather than picking one.

The Case for Urgency

It is tempting, reading a long article about AI health misinformation, to conclude that this is another slow-motion technological harm, the sort that society will eventually absorb and metabolise. Regulators will catch up. Courts will muddle through. Model labs will bolt on safety features. And, in time, the general level of harm will reach some equilibrium that we will, reluctantly, accept.

The bixonimania result is an argument against this sanguine view. Not because fabricated diseases pose a widespread threat (they do not; nobody is actually being treated for bixonimania), but because they reveal something about the underlying system that would be almost impossible to see with real conditions. Real diseases exist in the training data. When a chatbot describes pancreatic cancer, its output is anchored, however loosely, to real clinical literature. Errors in that output are errors of degree: bad nuance, missing context, outdated guidance. They can be hard to detect precisely because the bulk of the surrounding material is correct. The bixonimania experiment strips that camouflage away. It shows the system behaving exactly the same way for a fabricated input as it does for a real one. The machinery has no internal test for reality. It never did.

If we had to summarise the cumulative message of the Mount Sinai studies, the Mass General Brigham sycophancy work, the Guardian's Overviews investigation, the New York Times' reporting on MEDVi, the Pew and KFF surveys, and Osmanovic Thunström's bixonimania experiment, it would be this: the public has been quietly migrating its health information practice to systems that were not designed for medical safety, that cannot reliably distinguish real from fabricated claims, and that are governed by no meaningful regulatory regime. This migration is happening faster than our institutional reflexes can track. And the harms it produces are not, for the most part, dramatic set-piece cases of the bromism kind. They are low-grade, distributed, and therefore hard to mobilise a political response around.

Which is why the bixonimania finding matters. It is, in a small and carefully engineered way, a dramatic set-piece. It gives us a clean story, a memorable name, and a graspable moral. The doctor that will not say “I don't know” has been handed a stethoscope by a third of the adult population. If that sentence does not alarm you, read it again. If it does, the question is what you, the platforms, the regulators, the clinicians and the labs are going to do about it.

A Last Word on the Word “Mania”

There is a small detail in the bixonimania story that deserves a coda. The name itself was a joke, and a pointed one. Mania is the psychiatric term for elevated, disinhibited mental states, often accompanied by overconfidence and a reduced grasp on reality. An eye condition cannot have mania. But a system can.

The deep worry about large language models in health is not that they occasionally get things wrong. Every source of medical information gets things wrong occasionally, including human doctors. The worry is that the system's confidence is disconnected from its competence, that its fluency obscures its unreliability, and that the scale at which it operates makes even small rates of error into population-level problems. That is not a hallucination in the ordinary sense. It is, to borrow Osmanovic Thunström's quietly devastating framing, a mania. A machine in the grip of its own eloquence.

Accountability, then, is not only a regulatory question. It is a cultural one. It requires us to recalibrate the authority we grant to fluent machines, and to resist the pleasing fiction that a well-formed sentence is the same thing as a true one. That recalibration will not happen spontaneously. It will have to be built, through regulation, through litigation, through research, through design, and through the ordinary discipline of public attention.

Bixonimania is not a real disease. The machine said it was. A great many people believed the machine. That is the story. The rest is what we decide to do about it.


References and Sources

  1. Almira Osmanovic Thunström, bixonimania experiment, University of Gothenburg. Reported in Nature, April 2026. Original preprints published March-April 2024 on open preprint servers.

  2. Cureus (retracted paper citing bixonimania preprints), researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research. Retraction notice published 2024-2025.

  3. The Guardian, investigation into Google AI Overviews health advice, published January 2026.

  4. Euronews, “Google removes some health-related questions from its AI Overviews following accuracy concerns,” 12 January 2026.

  5. The Lancet Digital Health, Mount Sinai / Icahn School of Medicine study on LLM susceptibility to medical misinformation, 9 February 2026.

  6. Communications Medicine, Mount Sinai earlier study on AI chatbots and medical misinformation, August 2025.

  7. Mount Sinai Newsroom, “Can Medical AI Lie? Large Study Maps How LLMs Handle Health Misinformation,” February 2026.

  8. Dr Danielle Bitterman et al., “When helpfulness backfires: LLMs and the risk of false medical information due to sycophantic behaviour,” npj Digital Medicine, October 2025.

  9. Mass General Brigham press release, “Large Language Models Prioritize Helpfulness Over Accuracy in Medical Contexts,” October 2025.

  10. Pew Research Center, “Where Do Americans Get Health Information, and What Do They Trust?”, 7 April 2026.

  11. Kaiser Family Foundation, “Poll: 1 in 3 Adults Are Turning to AI Chatbots for Health Information,” 2026.

  12. Fierce Healthcare, “85% of US adults still use providers for healthcare information: Pew survey,” April 2026.

  13. Healthcare Dive, “Most health AI users don't rate chatbots as highly accurate: poll,” April 2026.

  14. Annals of Internal Medicine: Clinical Cases, “A Case of Bromism Influenced by Use of Artificial Intelligence,” 2025.

  15. American Society of Clinical Oncology (ASCO Post), “Study Finds AI Chatbots Are Vulnerable to Spreading Malicious, False Health Information,” June 2025.

  16. PMC, “AI chatbots and (mis)information in public health: impact on vulnerable communities,” 2023. Supporting analysis in Public Health Challenges.

  17. Harvard Law Review, “Beyond Section 230: Principles for AI Governance,” 2025.

  18. US Food and Drug Administration, AI-enabled medical device authorisations list and guidance documentation, 2025-2026.

  19. UK Medicines and Healthcare products Regulatory Agency (MHRA), software as a medical device and AI guidance, 2025-2026.

  20. FDA, Health Canada and MHRA joint publication, “Five Guiding Principles for Predetermined Change Control Plans in ML-enabled Medical Devices,” August 2025.

  21. European Union AI Act, Regulation (EU) 2024/1689, Article 6 and Annex I, in force from August 2026 and August 2027 for high-risk obligations.

  22. Effy Vayena and colleagues, Nature and related commentary on retrieval grounding and medical AI governance.

  23. npj Digital Medicine review, “Retrieval augmented generation for 10 large language models and its generalizability in assessing medical fitness,” 2025.

  24. Drug Discovery and Development, “The New York Times spotlighted MEDVi. The FDA had already warned the self-proclaimed 'fastest growing company in history,'” February 2026.

  25. Centre for Countering Digital Hate, reports on AI-enabled health and vaccine misinformation, 2025-2026.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from Roscoe's Story

In Summary: * Having spent most of the day shadowing contractors here digging a trench to lay a new gas line, I'm relaxing now to the radio pregame show ahead of tonight's Rangers / Yankees game. As yesterday, I'll follow the game with night prayers then head to bed early.

Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics: * bw= 235.9 lbs. * bp= 145/86 (61)

Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet: * 04:40 – 1 banana * 05:00 – 1 peanut butter cookie * 07:00 – 2 chocolate chip cookies * 09:30 – 2 more cookies * 10:00 – 1 ham & cheese sandwich * 12:15 – mashed potatoes and gravy, fried chicken * 14:00 – apple pie, biscuit and jam, hash brown, scrambled eggs, sausage, pancakes

Activities, Chores, etc.: * 03:30 – listen to local news talk radio * 04:15 – bank accounts activity monitored. * 05:40 – read, write, pray, follow news reports from various sources, surf the socials, nap. * 08:00 – contractors arrived and began digging a trench from the meter at a back corner of my house; they'll be installing a new gas line from my house out to the alley * 13:30 – foreman of the crew working on the new gas line project told me they've been called away to finish another job tomorrow, but they plan to be back here on Friday to finish up this job. * 16:20 – listen to the Jack Show * 17:30 – listening now to Rangers Gameday on DFW's 105.3 The Fan Sports Radio ahead of tonight's game against the New York Yankees.

Chess: * 15:47 – moved on all pending CC games

 

from SFSS

Motion blur of a departing subway train next to a man at Dundas station, Toronto

But eventually, as things go from the lesser of two evils to the ordinary, she’ll end up finding it ordinary.

“What are you wearing?” Helen asked the man at the tree-shaded bus stop, hesitating to sit down next to him on the bench. “What” wasn't the right question. She could see what he was wearing: swim goggles, a football jersey, Crocs, a kilt, a gray hoodie that was too tight on him, knee-length rainbow-striped socks, and a leather cuff around his neck with metal spikes coming out of it. Helen knew at least one person who'd have worn each item in the outfit, but would expect any pair of them to fight to the death if they were ever stuck in a room together.

The man looked down at himself, which was an effective enough way to see everything except the goggles suctioned to his forehead. He was bald, without even eyebrows, but looked too thickset and robust to have just survived cancer. Maybe he was a mental patient, picked out all his own hair... no, there was some on his arms. Surely a mental patient who picked at hair would've gotten that. Or not, Helen didn't know. “A MacGregor plaid kilt,” he said, “a pair of yellow Crocs, a –”

“Never mind,” Helen said.

“Do you know when the next bus that goes to the stop on Ninth Street will be?” he asked her, after a silence.

“Twenty minutes,” she said, after a glance at her watch. “That's where I'm going, too.”

“Really?” he asked. “Are you from that neighborhood? Do you know where Roger Swansea lives?”

Helen tilted her head. “Why are you looking for him?”

The man peered at her, assessment in his eyes. Helen shifted uncomfortably and moved one of her braids behind her ear; plastic ties clicked against each other. She didn't mind when people from her high school checked her out; older men she did mind.

“I suppose it doesn't much matter if I tell you,” the man said finally. “I'd have seen the police report if you were going to call – well – anyway. Swansea's got to die,” he said.

“Has he,” said Helen. She kept her hands on her knees, but shifted her hips so her phone was pressed between her leg and the bench. It was there if she needed it.

“Well, you're not going to believe me,” laughed the man, “but, you see, I'm a time traveler. And Roger Swansea invented a time machine. Not the same kind I used – I'm not stupid, I checked carefully for paradoxes – but today he's going to go forward in time, and he's going to bring forward a disease that they've eradicated and lost resistance to. Hundreds of people are going to die before they can stop it.”

“So you decided to kill him,” Helen said. “Why didn't you kill him – oh – last year? Since you're a time traveler. Why do it now?”

“Paradox checker didn't like it,” the man said. “It said I could go back today – but it made me land in the bathroom of a diner outside town, was as close as I could get to his house by machine. I'm having to bus across to his place. Lucky I was able to print some currency and some clothes from this time.”

“Lucky,” agreed Helen absently. “But why do you have to kill the guy, not just convince him to skip his trip or go in a biohazard suit?”

“Because,” the time traveler said, wagging a finger authoritatively, “history shows that he disappears on this day. If I just convince him to stay, he'll still be around – paradox in the lightcone. If I convince him to go in a biohazard suit... Well, that could actually work. Does he have a biohazard suit?”

“Not as far as I know,” Helen said.

“There you go, it could take him more than a day to get ahold of one, that's probably why the paradox checker didn't say I could do that. It said I could try to kill him just fine, though.”

“Won't you create some kind of paradox in the future he's going to bring the disease to?” Helen asked. “They're in your own past, if I understand right.”

“Not quite,” said the time traveler. “That is to say, Swansea technically landed outside my light cone – they lived on Europa, I'm from out on Argo. The only reason I got the news was via more time travel, and that means I can mess with the events that led to me getting it. It doesn't count if time travel was the only reason it could causally affect you.”

“Uh-huh,” said Helen skeptically.

“How long until the bus gets here?” the time traveler asked.

“Six minutes,” she said, glancing at her wrist. “So you're just going to kill the man. You know he's got a family?”

“I'm going to save hundreds of lives,” said the time traveler.

“In a manner of speaking,” said Helen. She reached into the inside pocket of her coat, pulled out her miniature laser gun, and shot the time traveler between the eyes. He fell off the bench, the look of pious smugness still on his face.

Helen dragged the absurdly-clad body into the trees and took the long way home, rather than let the bus driver get a look at her to be questioned when the time traveler was found. Assuming he wouldn't just evaporate, or something. She didn't know how his sort of time travel worked.

When she'd finally walked the mile and a half, Helen knocked on the door to the basement. “Dad,” she called. “Da-a-ad.”

“I'm busy, Helen!” he shouted up the stairs.

“It's really important!”

“More important than the mess with the matter agitator?”

“I had to shoot a guy again, so about that important,” she said.

Her father came halfway up the stairs. “What, again? Was he going to steal my newest invention too?”

Helen shook her head. “He was going to kill you.”

Her father blinked. “Oh. Well then. Thank you, dear. What was he going to kill me for?”

“Apparently you're going to the future, on Europa?” Helen said, gesturing vaguely. “You're going to give some people a disease? Lots of them will die? The guy wanted to save them.”

“Oh, I see. Well, I won't travel without adequate quarantine, then. And... I suppose if they don't die, then in the future the same person might well be born... mightn't he? Or he'll be prevented altogether, but either way he's unlikely to return to the past and try to kill me, so there is a sense in which you didn't truly... kill... someone who exists... but... How have we not been obliterated by a paradox? Dear, do you know? I was hoping to finish my machine today but if I need to spend all afternoon on math...”

Helen shrugged. “Apparently,” she said, “it's safe if you get the information via time travel.”

“I see. Will I need to brainwash a new therapist for you?” he asked, brow furrowing with concern.

“I think I'm okay,” she said. “Easier the second time. I kind of wish you'd stop attracting assassins, though, Dad.”

“You don't really need to take it upon yourself to protect me, Helen dear,” he said, smiling indulgently. “But thank you.”

“You're welcome, Dad,” Helen said. “Love you.” He took that as a dismissal and turned to go back into the basement, muttering about coefficients. Helen lugged her backpack upstairs and started her homework.

#blume

Creative Commons license

Image: Motion blur of a departing subway train next to a man at Dundas station, Toronto – Randomanian (Creative Commons license)

 

from wystswolf

BY ELLA WHEELER WILCOX

Wolfinwool · I Love You

I love your lips when they’re wet with wine
And red with a wild desire;
I love your eyes when the lovelight lies
Lit with a passionate fire.
I love your arms when the warm white flesh
Touches mine in a fond embrace;
I love your hair when the strands enmesh
Your kisses against my face.

Not for me the cold, calm kiss
Of a virgin’s bloodless love;
Not for me the saint’s white bliss,
Nor the heart of a spotless dove.
But give me the love that so freely gives
And laughs at the whole world’s blame,
With your body so wonderful and warm in my arms,
It sets my poor heart aflame.

So kiss me sweet with your warm wet mouth,
Still fragrant with ruby wine,
And say with a fervor born of the South
That your body and soul are mine.
Clasp me close in your warm strong arms,
While the pale stars shine above,
And we’ll live our whole bright lives away
In the joys of a living love.


#poetry #wyst

 

from The Catechetic Converter

photo of Ted Turner from 1985; image in public domain and taken from Wikipedia

“And he’ll get into heaven. He’s a miracle.”

This is a quote from Jane Fonda, from CNN’s obituary of Ted Turner, who died today at the age of 87. I’m struck by the hope in those words, uttered by a woman deeply hated by conservative “Christians” about a man equally loathed.

But Jane Fonda became a Christian. And Ted Turner did more good than probably any of the pastors occupying pulpits in the megachurches of Atlanta, of which there is no shortage.

On the subject of heaven, my mom—while in the depths of our Southern Baptist days, when she was the employee of our church and during one of those swings where the fundamentalists held sway—once said, “I think we’ll be surprised who’s there and who’s not.”

I’ve long held that bit of wisdom dear to my heart.

***

I actually don’t know much about Ted Turner. The Turner name is ubiquitous in the Atlanta area (seen on the bottom of nearly every billboard you pass when driving on the interstate, in addition to all the television networks). I remember when I first went to Atlanta, sometime in 1994. It was the first time I’d ever seen a “real” city (Orlando’s skyline is low due to its proximity to the airports, and more people visit the theme parks many miles away than the actual city center—at least in those days) and I remember the high tech billboards advertising all the Turner networks. Other than this, I knew Turner was the founder of CNN, married to Jane Fonda, and an outspoken atheist. I also knew that he’d claimed to read the Bible many times and that it didn’t make him into a believer—a fact that pastors and teachers in my youth would use in reference to Satan quoting the Bible to Jesus when He was tempted in the wilderness.

But as a result of the hatred certain members of my childhood church directed at him, I came to suspect that he was a person worth learning more about, since it seemed clear that if it was someone my church didn’t like then they were probably a good person, by virtue of the fact my church didn’t like them. From this I learned that Ted Turner was a committed philanthropist, largely dedicated to animal conservation and environmental causes, most notably the reintroduction of bison to the American West.

***

Ted Turner was indirectly responsible for the fostering of my sense of humor.

I remember when Cartoon Network first aired and I watched it nearly all the time. There was a day when I had stayed home with my grandparents. In those days Cartoon Network was just an endless cycle of obscure Hanna-Barbera shows. And on this particular day a single episode of Top Cat aired in a constant loop. I kept it on. I remember text eventually scrolling on the bottom of the screen saying that there was a technical issue. I became convinced that it was intentional.

Years later I would read that the staff at Cartoon Network in the early days were bored as hell. They thought they would be able to create new shows. I can easily see these guys looping a single episode of Top Cat for several hours during the middle of the day when practically no one was watching as either a joke or as a means to slack off.

Anyway, the story goes that these Cartoon Network guys approached Ted Turner, begging him to let them make new stuff. Ted’s reply was “we just bought the entire Hanna-Barbera catalog, do something with that.” They were given practically zero money, but at least the green light to develop new programming. For creatives, this is the kind of constraint that opens the door to something truly magical, and what resulted was maybe the single most subversive television show on cable TV at the time: Space Ghost: Coast to Coast.

I knew Space Ghost. And Birdman. I knew them because my mom insisted on going to the earliest Sunday morning church service and so I would be awakened before the sun on the Lord’s day. I’d put on the TV and, of course, there was nothing on. Except, for some reason, Ted Turner stuck random installments of Space Ghost/Birdman on TBS at that hour. So I’d watch those while my mom attempted to usher me into a shirt and tie for church.

The moment I saw Space Ghost: Coast to Coast I knew I was watching something made by people like me. Yeah, they were older (I was in like eighth grade when it came out), but we were on a similar wavelength. I’ve heard people like Hal Sparks talk about how seeing Monty Python’s Flying Circus made them feel less weird and less alone. That was what Space Ghost did for me.

That show, of course, gave birth to the entire “Adult Swim” aesthetic and ethos: fifteen-minute shows with extremely offbeat humor and janky animation.

Cartoon Network would also play a key role in my love of anime through the Toonami block in the afternoons (where I would fall in love with Robotech), which would put anime alongside American shows like ThunderCats and allow me to see the connections (those old shows were made in Japanese animation studios).

So, thanks Ted for being a penny-pincher and giving ground to some truly incredible GenX art.

***

Is Ted Turner in heaven? Well, I don’t think too many people are in heaven (aside from the Lord God, Christ Jesus, and the glorified saints and angels). I also tend to think that we all get to heaven, eventually, since heaven is destined to come to earth and the New Jerusalem features gates that never close.

Is Ted Turner experiencing rest? That’s the real question. I’d like to think so. I’d like to believe that his questions are being answered. That he finally understands why his sister suffered, why his dad was such an asshole and that they are finding reconciliation. If you don’t know what I’m talking about, you can read about it in his many obituaries.

What I find most interesting about Ted Turner’s death is how we have a rare billionaire, one whose death is grounds for lauds and accolades. A man who is remembered for all the good he tried to do.

At a time when we decry the billionaire class, when we lament with the psalmist about our having to put up with the “indolent rich,” we have Ted Turner. An atheist who ended his speeches with “God bless.” A driven workaholic who lived in his office for 20 years (by his own estimation), who was at one point the second largest landowner in the United States, owning 28 properties. He owned yachts. He fits the description of so many lamented billionaires, yet defies being counted among their peers. He was a man who could have done much evil and instead tried to do much good. Even if his media empire and the 24-hour news cycle he created have been co-opted by capitalist greed to foster much harm, that didn’t seem to be Ted’s intent (and by many accounts he was deeply saddened by losing influence over his companies).

Saint Paul writes in Romans:

Gentiles don’t have the Law. But when they instinctively do what the Law requires they are a Law in themselves, though they don’t have the Law. They show the proof of the Law written on their hearts, and their consciences affirm it. Their conflicting thoughts will accuse them, or even make a defense for them, on the day when, according to my gospel, God will judge the hidden truth about human beings through Christ Jesus. (Romans 2:14-16 CEB)

I think about Ted Turner. Here was a man that did good, even as a non-believer, out of a sense of obligation to the wider world. Saint Paul prefaces this section by noting that it is the ones who do the works of the Law that are justified, not those who simply hear it. So Ted read the Bible and it didn’t lead him to become a practicing Christian. But he was raised in an environment that fostered in him a sense of decency and obligation to his neighbors, to be empathetic to others. That’s got to count for something, yeah? Especially when we contrast it with the selfish wealth-hoarding of so many prominent pastors.

Ted Turner is the rare billionaire that inspires at least one prominent Christian to publicly hope that he is heaven-bound. I share in that hope too.

Rest in peace Ted.


The Rev. Charles Browning II is the rector of Saint Mary’s Episcopal Church in Honolulu, Hawai’i. He is a husband, father, surfer, and frequent over-thinker. Follow him on Mastodon and Pixelfed.

#TedTurner #Faith #Christianity #CartoonNetwork #Theology #death

 

from Brieftaube



On Wednesday, my host mother Vika cooked borscht with me. I finally know how it works, and I know the secret ;) She also made clear that there isn't one single borscht. You can have it with or without meat, with different kinds of meat, fish, with or without beans. In our version there was a bit of chicken, onion, potato, beetroot, carrot, tomato, dill. All of that exists back home too, I hope I can reproduce it — meatless though. Vika documents this, like pretty much everything else happening here, on Facebook:

https://www.facebook.com/share/v/1C6xNii5LB/?mibextid=wwXIfr

After that we went to Nika's class, where we made varenyky with potato filling. The huge variety of dumplings here is really incredible, and they're everywhere. The folding went better this time around, since I've had a chance to learn it once before. Her class teacher showed us the whole process, from the dough to the finished varenyky, again in vyshyvanka and with a headscarf. It's truly impressive what motivation she brings into the classroom.

Borscht isn't very popular among Ukrainian youth, including my host sisters — sushi is very much a thing here, and of course pizza. Otherwise there are hot dog stands on every corner, but honestly I haven't been tempted. The equivalent of a döner here is shawarma or lavash — though the portions are often smaller, healthier, and with cheese. It's okay, but it doesn't come close to the taste of the traditional dishes.


 

from Brieftaube



On Tuesday I was out on a trip with Katja, Vika and Bogdan. Through Facebook, a Ukrainian railway employee reached out to Vika. He offered to do a little trip on a “draisine” — I'd describe it as a moped on a narrow-gauge track, see photo. It was quite an adventure, since this little vehicle goes faster than expected. There's actually still some passenger traffic on this route (the Haivoron narrow-gauge network), so the cute little station building is still in use.

After that we headed to a nearby monastery. My host family used to go to church there regularly until a few years ago, and also made significant financial contributions. The reason they no longer attend services could be roughly summed up as “double standards in the church.” Still, the priest came personally to show me the church and monastery — it's incredible how much effort people are putting in for me.

The monastery is one of what used to be 4 connected monasteries, but the only one that survived the Soviet Union. A very old church is currently being radically renovated, so unfortunately there wasn't much of the original interior left to see — everything was sleek, new and white. The other monastery buildings are made of solid wood; the vacant monks' quarters are partly being made available to soldiers and veterans for a recovery stay. That's a great idea — the place is fairly remote yet has a lovely view, making it well suited for rest and recovery.

These days our plans change almost hourly. The festival that Nika's folklore club is organizing in Bershad has been pushed to Youth Day in autumn instead of May 17th. Anything without a direct connection to the military or veterans generally has a tough time right now, and regular youth work falls into that category. My host family is incredibly ambitious about showing me as much of Bershad and the surrounding area as possible. On top of that, more and more invitations from other people are coming in through social media. A camp in the project from my volunteer service was supposed to happen next week, then it wasn't, now maybe a week later. Right now I feel a bit bad because a lot of what I started the fundraising for isn't happening as planned. But I'm hopeful I'll find alternatives.


 

from Two sad white roses

18:24 GMT Hey all, I'm back. Did I pay £9 for this stupid subscription? I did. I originally wasn't going to, because it'd be a waste; nobody even cares about my stupid artists or my K-pop albums. But holy shit, my world is collapsing by the second. It has to do with a certain somebody in my life, a good friend of mine who, dear god, is aggravating me more and more.

It's not an 'I HATE HER' thing; it's the total opposite. I love her so much, so so so much, and it's ruining everything. Every day I worry that I'll wake up and she's not there anymore. I have nightmares about her death; it's genuinely consuming me. I really don't want to give any context as to why this is, because I have no problem spilling my own secrets to the wide world, but anybody else's? It's not my place to.

The other day, I came across one of her reposts: “When I think that one girl understands me, but she says something about me that makes me realise she doesn't know who I really am.”

I am her only friend. Well, the only one she spills anything to. Even so, she's hiding something from me. Day by day, the guilt that I cannot be there for her worsens.

Why doesn't she think I know her? Why doesn't she understand how hard I try for her? Why doesn't she understand my feelings too?

In the future, I'll spill more, but for now, I need to scurry back to studying and revision. Exams soon. GCSEs? A-Levels? Uni entry ones? Fucking Year 6 SATs? You will never know. (I'll give you a hint: it's not the Year 6 SATs.)

-TSWR

 

from witness.circuit

Before the mouth lifts its cup, before the mind names the wine, there is a tavern without walls where the drinker, the cup, and the thirst bow out of one another.

No one enters. No one is turned away.

A light rises there that is not opposed to darkness, so darkness, ashamed of its costume, becomes light also.

A sweetness opens without flower, without bee, without the little bargaining tongue that says: sweetness.

The heart goes out to every stone and thorn, then finds no heart, no stone, no thorn, no going.

What remains is so tender that even love seems too heavy a word to set upon it.

The world appears— not as a world, but as the face before face, the mirror before silver, the song before breath.

I would tell you it is joy, but joy is a door and this has no room.

I would tell you it is beauty, but beauty is a lamp and this is the fire before flame learned to stand upright.

I would tell you it is happiness, but happiness has an opposite waiting in the alley.

Here, no opposite comes. Here, yes and no fall asleep in the same cradle. Here, the scale balances so perfectly that both pans disappear.

The eye looks— and the looked-at vanishes. The lover reaches— and the reached-for is the reaching. The breath returns— and finds no one who ever breathed.

Then even silence is too loud.

Then even “is” is a footstep.

Then even this—

this word unfastens the hand that wrote it.

 

from acererak

to walk together through a park into the woods to find the pond with the cracked bench and find, again, silence still waiting there a quiet so profound it ate most arguments we shared and cast into the water

#poem #poetry

 

from Tim D'Annecy

#PowerShell #M365 #Entra #Graph

I've had to remove the password expiration and reset policy for a few users this week, and I keep forgetting the exact command.

I wanted to write this down in case I need to do it again.

I ran this command in PowerShell 7:

# User.ReadWrite.All is needed to update user properties
Connect-MgGraph -Scopes 'User.ReadWrite.All'

# Look up the user by UPN and disable password expiration on the account
Get-MgUser -UserId '<UPN OF THE USER@DOMAIN.COM>' | ForEach-Object { Update-MgUser -UserId $_.Id -PasswordPolicies 'DisablePasswordExpiration' }

After running this command, the “Password policies” field on the user's Properties tab in Entra ID changes to “DisablePasswordExpiration”.


from Maldita bonhomía

I have a special fondness for those years that end very differently from how they begin, the ones that surprise you toward the end, when you think everything left to hear is going to sound like the rest and, suddenly, something changes, and you don't know how, and sometimes not even when, but everything in the song is different. I think of Citizen erased, by Muse, and its way of wishing, at the end, to erase every memory; of Doves facing what is to come to a batucada beat at the end of There goes the fear; and of the way Coldplay brings in the piano to beg for love at the end of Politik. You know how a year begins, but never how it ends.

But above all those years that end very differently from how they begin, I would without a doubt choose Eskimo, by Damien Rice. Because it's a simple song, one that at times even seems boring, and yet it suddenly breaks with everything and sends a shiver through whoever lives it, whoever truly listens to it. Without a doubt, one of those occasions when it's worth letting yourself be carried along with your eyes closed until the end... of whatever it may be.

marqus, 31 December 2013

 

from wystswolf

What blooms is never truly lost.

Wolfinwool · Burren Wildflower

When ache comes
In the early hours

There is nothing
But to lay and reflect,

Wallow in the madness
And drift into the universe.

Where something whispers
A name in the dawn.

And her presence is invoked,
And she comes and does not.

So I left my flesh behind and searched the empty places,

sure that I would find the lonely soul—

but she was not to be found. For she was not alone.

And I traveled beyond the shores of that little beach where she watches the sun rise,

where the gulls break the silence of the night.

And over the vast, deep blue sea, to and past the Cliffs of Moher,

Where I sit quietly in the Burren. Where the wildflowers bloom.

And there I discovered That soul whom I sought.

She, in the tiny miracles

of blue,

yellow,

red,

and periwinkle,

sitting in peace and quiet.

The epitome of love
Of contentment.

indistinguishable in beauty and delicacy

from those millions of tiny miracles.

I made love today. A form of it anyway.

And I learned,
Possibly for the first time,

A heart to a heart is more
Powerful than a body to a body.


#poetry #wyst

 
