Want to join in? Respond to our weekly writing prompts, open to everyone.
from 下川友
Yesterday I was working until nearly midnight, and it took me about two hours to fall asleep afterward. It had been a while since I'd worked that late, so I had forgotten: once I get into that state, it takes a long time for the chatter in my brain to quiet down.
Normally I love sleeping, and without even thinking about it I'm ready for bed; ten minutes after getting under the covers I'm usually out. But this time I really blew it.
I made the same mistake once before and told myself I'd never work right up until bedtime again. I even thought I might build a habit of having a drink around 8 p.m. instead, and for a while I drank a small beer, the 135 ml size sold for altar offerings. For someone as hopelessly weak to alcohol as I am, 135 ml was just right. There was something nice about it being the offering size, too; it felt as if the liquid were purifying my body.
But that didn't last either. I never drink otherwise, and I haven't the slightest desire to get drunk. If the choice is between alcohol stopping my thoughts and worrying until my head aches, I'd take the headache: at least then I can still perceive myself. That hazy, dulled feeling just doesn't suit me. I'd rather keep chewing on questions that will never have answers, pointlessly swelling my brain.
So today, from 8 p.m. onward, I've been thinking about nothing, just feeling the air. At times like this I suppose I should be enjoying other people's work, manga or films, or having dinner with friends, but I never designed my life that way, so I have no entertainment infrastructure in place at all.
For a while I was hooked on karaoke, but lately the hours of brooding have crept back and my throat has tightened up again. When my throat is open it feels good, because my brain is fooled into thinking I talk with other people regularly. I'd like to start going again.
The way hair weakened by a high fever regains its shine, I can tell I'm recovering my old self and my energy. Tomorrow I want to keep at it, steady and level.
from The happy place
During the weekend, there was vomit on each sidewalk.
And broken glass of course.
Like the whole town was hungover.
But today again there’s people out.
But I’m inside, having a great time with my work and my coffee and some music in the earphones.
I’ll have to enjoy myself before AI comes for my job.
And then what?
My hair, it’s thinner. It used to be that when I grabbed my hair, it would amount to four portions of pasta, but now it’s not enough even for one.
And the beard is getting gray.
I am fading from this world.
Maritzenia was neither romantic nor idealistic. She looked at the facts; she was practical.
The daughter of a skilled diplomat and a celebrated pianist, she studied diplomacy. While doing an internship at an international organization, she was called by her country to head the secretariat of the embassy in Washington.
That she was “called by her country” is always a manner of speaking. What really happened is that someone had their eye on her. And although the ambassador was a lacklustre man, by one of those twists of fate he was appointed minister of foreign affairs the following year, and he counted on Maritzenia to head the important analysis department.
Many thought she was not ready for the job and lacked the experience, though she certainly had the look of someone taking advantage of the old foreign minister.
Two years passed, and one Friday the minister invited her to dinner. Over dessert, he told her:
“For a family reason I cannot discuss yet, I have to leave politics. I was with the president at midday today, and if all goes well he will call you on Wednesday. If nothing goes wrong, you will be the new foreign minister.”
And nothing went wrong.
From then on, many thought Maritzenia had it all: she was young, distinguished, upright, and brilliant. Exactly what the country needed.
from An Open Letter
I went to a baking club event today, and I saw this one girl I’d met before who was very pretty and fun to talk to. We finally exchanged numbers so I could invite her to stuff, and at some point I mentioned that she had probably only exchanged contacts with my ex, and she said “oh you guys broke up??” I responded yes, and her response to that was “wait my turn to slide?” And I panicked. I responded “no” and probably stuttered something about not dating for a bit, because that caught me so off guard. I’ve been weirdly replaying that moment in my head, because I’m so surprised someone would make that joke unless they were somewhat interested. I guess I do want to believe that I am attractive and desirable, and so maybe she was laying the foundation for flirting, but I may also be reading into it too much. I also met another person who had the same name as my prior ex (lol), but we had great conversation and they were excited to hang out. The world may not be as bleak as I thought.
from Elevea, a leading Moringa powder brand
Why is Elevea a Leading Moringa Powder Brand in the UK?
In recent years, Moringa powder has become one of the most searched superfoods among people interested in natural health in the UK. Derived from the leaves of the Moringa oleifera tree, this green powder contains essential nutrients such as vitamins A and C and iron, making it a popular supplement for daily nutrition. Health-conscious consumers in the UK are now actively searching for the best brands of Moringa powder, ones that provide purity, quality, and nutritional value.
What Makes Moringa a Powerful Superfood?
Moringa leaves contain a wide range of beneficial nutrients that support overall wellness. Important properties include:
- High levels of antioxidants
- Natural anti-inflammatory compounds
- A rich source of vitamins and minerals
- Plant-based protein and fibre
Because of this nutrient density, many nutrition enthusiasts in the UK include Moringa powder in their daily diet.
Elevea – A Trusted UK Moringa Brand
When discussing the best brands for Moringa powder, Elevea is frequently highlighted as a reliable option for UK consumers. Elevea’s wellness philosophy focuses on natural ingredients and simple nutrition solutions. Its Organic Moringa powder is designed to be easy to use and highly nutritious. Reasons people choose Elevea:
- Natural and clean ingredient sourcing
- No artificial additives
- Suitable for smoothies, teas, and meals
- Popular among wellness-focused consumers
Because of its quality standards, Elevea has become a recognised brand within the UK market.
How to Use Moringa Powder in a Daily Diet?
One advantage of Organic Moringa powder is its versatility in food preparation. Common ways to use it include:
- Adding it to green smoothies
- Mixing it into herbal teas
- Sprinkling it on oatmeal or breakfast bowls
- Blending it into healthy snacks
These simple ideas let health enthusiasts enjoy the benefits of Moringa powder without changing their diet significantly.
Conclusion
The popularity of Moringa powder in the UK continues to rise because of its impressive nutritional value and versatility. For consumers searching for the best brands of Moringa powder, quality and sourcing remain key factors. Among the available options, Elevea stands out as a trusted brand offering clean, natural, Organic Moringa powder products suitable for everyday wellness.
from EpicMind

Freundinnen & Freunde der Weisheit! Wenn wir nach vorn blicken, entwickeln wir Selbstwirksamkeit, wir werden sogar resilienter. Richten wir also öfters unsere Aufmerksamkeit auf wünschenswerte Ziele!
„Lebe im Moment“ gilt als Leitsatz vieler Achtsamkeitsratgeber – doch psychologische Forschung zeigt: Wer regelmässig über die eigene Zukunft nachdenkt, trifft bessere Entscheidungen, entwickelt mehr Selbstwirksamkeit und lebt zufriedener. Zukunftsorientiertes Denken – etwa in Form einer klaren Vorstellung des „bestmöglichen zukünftigen Ichs“ – erhöht laut Studien nicht nur die Motivation, sondern verbessert auch die emotionale Resilienz. Schon wenige Minuten täglicher Reflexion reichen aus, um das Wohlbefinden zu steigern.
Das therapeutische Konzept der Future Directed Therapy (FDT) setzt genau hier an: Es hilft, gedankliche Blockaden zu erkennen und in lösungsorientierte Handlungsimpulse zu überführen. Wer sich regelmässig fragt: „Was will ich eigentlich?“ und seine Aufmerksamkeit bewusst auf wünschenswerte Ziele lenkt, baut mentale Stärke auf. Eine schriftliche Vision der gewünschten Zukunft – ergänzt durch konkrete, zielgerichtete Handlungsschritte – kann laut Forschung depressive Symptome reduzieren und das Gefühl von Kontrolle stärken.
Zukunftsdenken wirkt auch neurologisch: Studien zeigen, dass das Gehirn den „zukünftigen Selbstanteil“ ähnlich aktiviert wie das Denken an nahestehende Menschen. Wer sich emotional mit dem eigenen zukünftigen Ich verbunden fühlt, sorgt im Heute besser für sich. Entscheidend ist nicht ein detaillierter Lebensplan, sondern die regelmässige, konstruktive Ausrichtung auf das, was kommen soll – um das Heute sinnvoller zu gestalten.
„Morgen werde ich mich ändern, gestern wollte ich es heute schon.“ – Christine Busta (1915–1987)
Jede Push-Nachricht oder jedes Ping auf Deinem Handy reisst Dich aus Deiner Konzentration. Schalte unnötige Benachrichtigungen aus oder nutze den „Nicht stören“-Modus, um ungestört zu arbeiten.
Vor kurzem ertappte ich mich wieder dabei: Ich starrte auf meine To-do-Liste, randvoll gefüllt mit Aufgaben, die dringend schienen. Eine E-Mail hier, eine Chatnachricht dort – viele kleine Dinge, die „sofort“ erledigt werden mussten. Ohne darüber nachzudenken, begann ich zu arbeiten, setzte Häkchen hinter die Aufgaben, die ich schnell abarbeiten konnte. Doch am Ende des Tages blieb das Gefühl, dass ich zwar viel „getan“ hatte, aber nichts wirklich Relevantes erreicht worden war. Kennst Du das auch?
Vielen Dank, dass Du Dir die Zeit genommen hast, diesen Newsletter zu lesen. Ich hoffe, die Inhalte konnten Dich inspirieren und Dir wertvolle Impulse für Dein (digitales) Leben geben. Bleib neugierig und hinterfrage, was Dir begegnet!
EpicMind – Weisheiten für das digitale Leben „EpicMind“ (kurz für „Epicurean Mindset“) ist mein Blog und Newsletter, der sich den Themen Lernen, Produktivität, Selbstmanagement und Technologie widmet – alles gewürzt mit einer Prise Philosophie.
Disclaimer Teile dieses Texts wurden mit Deepl Write (Korrektorat und Lektorat) überarbeitet. Für die Recherche in den erwähnten Werken/Quellen und in meinen Notizen wurde NotebookLM von Google verwendet. Das Artikel-Bild wurde mit ChatGPT erstellt und anschliessend nachbearbeitet.
Topic #Newsletter
from The Goalmind
They Will Kill You
Starts off with her and a little girl in a convenience store, running away from a white man who claims they are family. The man claims to be the father, and she kills him. They seem to have been abused by him. They are trying to get to NYC.
Zazie Beetz – Isabelle Davidson/Asia Reeves arrives in NYC at a strange apartment building, saying she’s a maid, ten years after she killed the man in the parking lot. She went to jail and was separated from her sister. She got a tip that her sister was at this sketchy building. She’s ambushed by masked assailants. Her sister’s name is Maria.
Lilly – The head maid, who seems to be trying to recruit Asia. All of the assailants are resurrected if killed and regenerate limbs if they lose them.
Plot – The apartment building is a site for Satan worshippers. Victims are lured into the building and forced to pledge their lives to working for Satan or be killed. Once you become a part of the cult, you must sacrifice someone to gain eternal life. Things get interesting once Zazie Beetz arrives at the building.
from wystswolf
To know you is not enough. I want to be lost in you.
The topography of her I was not meant To leave.
Oh, to climb the Mountains and hills Of she... Not as a pilgrim, But as something Hungry.
To take shelter In the dales and valleys, And name them mine By breath, By touch, By the slow claiming Of presence.
I would map her Not in lines, But in memory— Every rise learned By mouth, Every hollow By need.
A continent of wonder, Yes... But also of ruin, Where I lose myself And do not ask To be found.
Till I am no longer A wanderer, But something rooted, Buried deep In the quiet Of her terrain.
from 3c0
The Fool — Here I am. With so many hopes and dreams. Renewed. Re-energised. I am a Fool. I begin again. I have so much potential for growth. I have so much to learn. I will shed what I must in order to grow through life and be where I need to be. There is hope because I have faith.
The Prince — There are no limits. I have the passion. I need to remember to rest and be able to sustain this creative fire to get me through. This is not a limitless energy. It is finite. I must put it into the right moment and the right effort. Trust in my unlimited creative potential. Go for it. There is impatience here so I must seize the moment!
The Princess — What are my next steps? I need to be brave and bold to move all of this forward and enact creative change. There are a few things going on but I can handle them with all this energy as long as I am mindful and not overly carefree. There’s the house maintenance stuff. Current work stuff. Future work stuff and all the other future-building I need to complete.
Focus now and dare to dream.
from sugarrush-77
Church 3/29/2026
Today I got my hair double bleached. But before that, I went to church. The reason I go to this church is because every week, I feel like God is speaking to me through the sermon. Today’s sermon was titled “Stephen’s All-In”, from Acts 7:54-60. The passage was about when Stephen was stoned to death by Jews.
A couple of pithy quotes today that I found good:
The main topic I found relevant to my life today was about God’s silence. When the topic came up, I realized that God was being silent in my life despite my mental sufferings. I wrote in my sermon notebook
“Sometimes it all feels like a sick joke! I don’t understand why any of it has to be like this.”
The pastor spoke of Stephen “obeying God to death” in the passage. In response to that, I wrote in my notebook
“Would it really be as miserable as I think it would be (to obey God to death)? If I stop bitching while I do it, probably not. I need to stop bitching and stop looking at the negatives while forcing myself to do something I don’t want to do. I might as well force myself to look at the bright side of things, and do it with a cheerful heart.”
More about God’s silence. God is silent multiple times in the Bible. He is silent when Stephen dies for His sake; He is silent when Jesus dies on the cross (the ultimate silence). It’s hard to understand in the moment why, but we know that God is good. Sometimes there’s nothing to be done but simply endure the suffering without reprieve. In fact, we may actually deserve silence. What we did not deserve is Jesus’s saving work on the cross. The Samaritan woman understood on some level that she was unworthy, but she didn’t care, and she came to Jesus because she trusted that He could save her. To this I wrote in my notebook
“I have too big of an ego. I should kill it. I’m so frustrated that God won’t give me what I want that I don’t want anything to do with Him sometimes. Even if I obey, I want to do it sullenly and tell Him – look, I did what you wanted. Happy? Now kill me.”
But I did decide that I would not complain, and act like a petulant child that pouts and stamps their feet when they aren’t given what they want. I will obey. I will find joy in God, and learn how to be grateful in every situation. I will not bitch and moan about every little thing that did not go my way. I am not important.
from Nerd for Hire
I love it when I get an excuse to fall down a new cryptid rabbit hole. My recent trip to Mexico, along with the fact that I'm using a few cryptids from the area in my current novel-in-progress, has given me just the justification I need to do a deep dive into some of the country's legends and monsters—and there are a lot of very fun ones to be found, especially when you include creatures from Aztec and Maya mythology.
Most people have heard of Mexico's most famous cryptid, the infamous chupacabra, a spined and hairless bloodsucking canine (or lizard, depending on which version of the legend you listen to) accused of draining the blood from livestock. There is also a Mexican version of Bigfoot, the sisimite, which I included in my squatch around the world round-up a few months back. Here are a few other creatures from south of the border that haven't yet gotten quite that level of PR outside the country.
Ahuitzotl
Aquatic mammals are a relatively rare category of cryptid, and this one is a particularly fun version. The Ahuitzotl is the size of a small dog, and has roughly the same build, though with small ears and a long tail that has a hand at the end of it. It lives in remote, swampy areas, where it submerges itself in a lake or river then makes a sound like a terrified woman or crying child. When somebody rushes in to help, it grabs them with its tail-hand, pulls them under, and strangles them. Then it eats their eyes, teeth, and fingernails and tosses the body on the shore.
The Ahuitzotl was one of the first cryptids documented by Europeans in Mexico. Hernán Cortés claimed in an official report that one of his men was killed by this creature. There were similar creatures in both Maya and Aztec myths, as well as in the myths of peoples further north like the Hopi and Shasta, which has led some scientists to speculate that the legends originated from encounters with a now-extinct species of otter. Another fun fact: the creature shares its name with the 8th Aztec ruler, who was in charge during the peak of the empire (1486-1502).
Aluxes and Chaneques
A lot of cultures around the world have a legendary creature that looks like a little human, and the Maya and Aztec had similar iterations of this theme.
The Alux (plural Aluxes or Aluxob) is the Maya version, a knee-high person wearing traditional Maya garb that's usually invisible, though it can show itself to interact with people. Aluxob are protective spirits and guardians of the land, believed to be as old as the land itself, even older than the sun. Farmers can harness the powers of an Alux by building a shrine on their land, which either attracts one or creates one, depending on the legend. Once the Alux moves into the shrine, it spends the next seven years protecting the fields, bringing good weather, and otherwise helping the crops grow. After seven years, the farmer has to seal the Alux inside the shrine, or else it'll turn into a trickster, hiding the farmer's tools, spreading disease, or running off into the jungle to lead travelers astray. You can stop these tricks by leaving offerings to the Aluxob at the ancient sites where they live.
The Aztec version is called a Chaneque or Chanekeh, and looks like a child with an old face. Like Aluxob, they live in forests or near rivers, but they don't have the same farm helper reputation—they're just straight tricksters. Sometimes they just cause mild mischief, but they're also said to kidnap people and take them to the underworld through a dry kapok tree, or to attack people who intrude on their land with such intensity that their soul leaves their body. They can also communicate with animals or bring rain and thunder. They're also partially invisible, though it's usually said that children can see them but adults often can't.
Cihuateteo
In Aztec mythology, women who died in childbirth were said to become Cihuateteo, powerful spirits seen as equivalent to the spirits of warriors who died in battle. The Cihuateteo worked with warriors' spirits to get the sun through the sky, taking it west from noon to sunset (in some versions also carrying it through the underworld) after the warriors carried it across the morning.
Usually the Cihuateteo live in a place called Cihuatlampa, the “place of women” that was west with the setting sun, but on certain days of the calendar they'd come to the mortal realm to mingle with humans. When they did, they'd take the form of crossroads demons and get up to the usual array of bad behavior like stealing children, causing madness, or luring men to commit adultery. When on Earth, they have claw-like hands and are usually shown wearing skirts fastened with snake belts.
Huay Chivo
This one comes primarily from the Yucatán peninsula, and is also found in adjacent countries like Belize and northern Guatemala. It's essentially the Maya iteration of a goatman, which is another common trope in folklore around the world, though Huay Chivo is distinct from creatures like satyrs or the Pope Lick Monster in that he's said to be a shape-shifting sorcerer, not a full-time goatman. The current legend is likely a melding of Maya and Spanish folklore, which is reflected in its name: Huay, from Waay, the Yucatec word for “sorcerer”, and Chivo, a Spanish word for goat.
Huay Chivo can only turn into his goat form at night, and to do it he has to take off his head first and leave it at home. A goat's or bull's head grows in its place, and he also gets horse or goat legs, with a human torso in between, all of it covered in thick, black hair. He has glowing red eyes, and anyone who stares into them is frozen with fear, then suffers delirium and fever that lasts for days. Some versions also bleed from the mouth whenever they talk. The only way to kill him is to carve a cross into a bullet and shoot it into the sorcerer's abandoned, disembodied head (though you can also keep him away by leaving a cross sprinkled with holy water by the door).
There are a few origin legends for Huay Chivo. The core idea is usually that a young man loves a woman and wants to get closer to her. In one version, that woman tends his family's goat herd, and he asks the Maya death god Kisin (“the flatulent one”) to change him into a goat so he can always be near her, but the spell goes awry and he gains the ability to transform into a goat instead. In another version, the young man asks the devil to get him close to his crush and doesn't know he'll be turned into a goat until it happens, at which point he starts slaughtering livestock at night because he's so angry about it. The legend of the creature's existence persists to the modern day, and there have been sightings as recently as 2015.
Xtabay
Another one from the Yucatán peninsula, the Xtabay fits another well-represented archetype: beautiful women who are actually terrifying monsters. In this case, she's dressed all in white with black hair down to her ankles. Xtabay waits behind ceiba trees combing her hair with the spines of a tzacam cactus until an unsuspecting male traveler happens along (though in some versions of the legend she only attacks criminals and drunks). What happens at that point depends on the legend. In some versions she turns into a venomous snake and devours him. In others, she rips out his heart, eats it, then throws the body into a cenote. In a version written by ethnologist Antonio Mediz Bolio, she makes the men her slaves, keeping them in caverns around the ceiba tree's roots.
Some scholars believe the legend of Xtabay started as a personified spirit of the ceiba tree, but was twisted into an evil being by the Spanish as part of their campaign to demonize indigenous beliefs. As far as her legendary origins, there are a few versions. In one, she starts as Xkeban, a beautiful woman who had many suitors but rejected all of them. They got jealous and started spreading rumors to ruin her reputation. When she was walking in the woods, a sorceress offered her the chance to escape the ridicule of her town by transforming into an immortal creature, the Xtabay. In another version, Xtabay is Xkeban's sister, Utz-Colel, who was chaste and proper in life but nonetheless had spiky tzacam cacti grow from her grave when she died, while her loose sister Xkeban's grave sprouted beautiful flowers. Utz-Colel comes back to life as Xtabay to punish the type of men her sister used to sleep with.
See similar posts:
#Cryptids #Folklore #Mythology
from Notes I Won’t Reread
I wasn’t planning to stay out last night. Just a pack of cigarettes, maybe two. The type you’d light out of habit. Not desire. Something to keep your hand busy while your mind runs in circles. The cafe was quiet enough to make me think, which was the first mistake. Suit still on, tie a little loose, like I almost had my life together. Almost always feels like enough until it isn’t.
And oh, love. I kept thinking about you, in a very stubborn way; you’d say that if you were here. It was the kind of thinking that doesn’t ask for permission before showing up and sitting across from you like it owns the place. I came home with that feeling still stuck to me, like it has been for the past couple of days. Poured a drink like it would translate anything in my head into something simpler. But it didn’t. It just made everything louder. And you mostly.
I almost texted you, you know? That’s where it would’ve ended for me. Not the drinking. Not the thinking. It’s that moment when your fingers hover, and for a second, you believe there’s an ending to this if you just press send. Like there’s a version of the world where you answer, and it fixes something.
There isn’t. So I didn’t send anything. Instead, that’s where it gets all funny. I wrote this. Or whatever this is. It doesn’t even make sense now, reading it back. Half of it feels like someone else wrote it.
“I think i figured it out not you just this no wait that’s a lie i had something to say like two seconds ago it sounded important too which is rare for me so that’s unfortunate You’re in my head again congratulations you win i don’t remember what the prize was but it’s probably me losing. I almost texted you i know shocking write that down somewhere “he almost did it” historic moment I kept thinking if i say the right thing it’ll fix it like there’s a correct sentance. a secret code and suddenly you’re back and im not whatever this is but every sentance i start ends wrong or it doesn’t end at all kind of like us that was good actually i should keep that i dont know why you still here in my head i mean i didn’t ask for this pretty sure i would’ve declinded politely anyway i miss no i dont i mean i do but that’s not the point there is no point i should stop writing now that was supposed to make sense it doesn’t you’re stil here that’s it thats the note”
Well. I could barely read it. Half the words were stepping on each other like they were in a rush to mean something before I sobered up. The other half looked like I gave up mid thought, which, to be fair, sounds like me. I don’t remember writing most of it. I remember the feeling, the weight of it, specifically. Like something sitting on your chest pretending it belongs there.
Apparently, drunk me thinks he’s insightful. He’s not. He’s just louder. Less filtered. A little more honest than I’d like to admit, which is probably why I don’t let him speak often.
He wrote about you. Of course he did. He repeated your name more than once. More than I’m willing to admit, actually. It won’t be shown here. Not because it wasn’t there, if anything, it was the only thing that was there, but because it doesn’t read well. It doesn’t sound like something a person in control would write. It looks obsessive. Unnecessary. A little embarrassing, if I’m being honest, which I’m trying not to be.
Drunk me seemed to think writing your name over and over would lead somewhere. Like if he said it enough, it would turn into an answer. Or a response. Or at least something that felt less like silence. It didn’t.
It just turned into a page that looked like it forgot how to move on. So no, I won’t be showing that part. You’ll just have to trust me when I say you were mentioned more than once. More than what I hear daily.
There’s a line there. I think it was a line, or maybe I imagined it, that almost made sense. Something about ending. Or how I can’t seem to find one when it comes to you.
Iconic. Sober me isn’t doing much better. I don’t know why I kept it. It’s not even good, and it doesn’t make sense. It’s just evidence. That no matter how composed I look in a suit, or how quiet I keep things during the day, there’s still a version of me that sits down, pours a drink, and loses to you without even trying.
I’d say I won’t read it again, but I probably will. Just to see if it ever starts making sense.
It probably never will.
Sincerely, whoever I was last night.
from SmarterArticles

Somewhere between the press releases and the product demos, something went quietly wrong with explainable AI. What began as a serious academic and civil liberties concern about algorithmic opacity has been repackaged, polished, and slotted neatly into enterprise software brochures. The question of whether people deserve to understand why a machine denied them healthcare, flagged them as a fraud risk, or recommended a longer prison sentence has been quietly reframed. It is no longer about rights. It is about features.
The global explainable AI market was valued at approximately 7.79 billion US dollars in 2024, according to Grand View Research, and is projected to reach 21.06 billion dollars by 2030. These are not the figures of a civil liberties movement. This is a growth industry. And the distinction matters enormously, because the people building these tools and the people most harmed by opaque algorithms are almost never the same people. The explainability that corporations are selling is designed for boardrooms and compliance departments, not for the individuals whose lives hang in the balance of an algorithmic output.
To understand why explainability matters, you need only look at what happens when it is absent. In Australia, the Robodebt scheme ran from 2016 to 2019, deploying an automated data-matching algorithm to calculate welfare debts by averaging annual income across fortnights. The method was mathematically crude and, as a 2019 Federal Court ruling determined, legally invalid. No warrant existed in social security law that entitled the administering agency to use income averaging as a proxy for actual income in fortnightly measurement periods. This was known internally because of legal advice received by the Department of Social Security as early as 2014. Yet the algorithm asserted 1.7 billion Australian dollars in debts against 453,000 people. A total of 746 million Australian dollars was wrongfully recovered from 381,000 individuals before the scheme was finally dismantled. The Royal Commission, established in August 2022 under Prime Minister Anthony Albanese, heard testimony from families of young people who had died by suicide after receiving algorithmically generated debt notices they could not understand or contest.
At the height of the scheme in 2017, 20,000 debt notices were being issued per week. None of them came with a meaningful explanation of how the debt had been calculated. The University of Melbourne described the core flaw plainly: averaging a year's worth of earnings across each fortnight is no way to accurately calculate fortnightly pay, particularly for casual workers whose income fluctuates. Yet the system operated for years, with human oversight progressively removed from the process. The Oxford University Blavatnik School of Government described Robodebt as “a tragic case of public policy failure,” one in which the efficiency benefits of automation were pursued without regard for legal authority, ethical safeguards, or the basic dignity of the people affected. In September 2024, the Australian Public Service Commission concluded its investigation, resulting in fines and demotions for several officials, though notably no one was dismissed from their role.
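The arithmetic at the heart of that flaw is simple enough to sketch. The toy calculation below (a hypothetical illustration with invented benefit parameters, not the actual social security rules) shows how smearing a seasonal worker's annual income evenly across all 26 fortnights manufactures a “debt” out of fortnights in which they earned nothing and were paid correctly:

```python
# Hypothetical sketch of the income-averaging flaw. All parameters
# (benefit rate, income-free area, taper) are invented for illustration.
def benefit_paid(income, max_rate=600, free_area=300, taper=0.5):
    """Fortnightly benefit: full rate minus 50c per dollar over the free area."""
    return max(0, max_rate - taper * max(0, income - free_area))

# A casual worker paid 3,000 in each of 6 fortnights, nothing in the other 20.
fortnights = [3000] * 6 + [0] * 20

# Correct entitlement, assessed against what was actually earned each fortnight.
correct = sum(benefit_paid(i) for i in fortnights)

# The averaging method: the annual total smeared evenly over 26 fortnights.
average = sum(fortnights) / len(fortnights)
averaged = sum(benefit_paid(average) for _ in fortnights)

print(f"entitlement on real fortnightly income: {correct:,.0f}")   # 12,000
print(f"entitlement under income averaging:     {averaged:,.0f}")  # ~10,500
print(f"phantom 'debt' raised by averaging:     {correct - averaged:,.0f}")
```

With these invented numbers, a worker who reported every dollar correctly would still be assessed as owing roughly 1,500, purely as an artefact of the averaging.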
The Netherlands offers another instructive case. The Dutch childcare benefits scandal, which ultimately forced the government's resignation in January 2021, involved an algorithmic system that flagged benefit claims as potentially fraudulent. A report by the Dutch Data Protection Authority revealed that the system used a self-learning algorithm where dual nationality and foreign-sounding names functioned as indicators of fraud risk. Tens of thousands of parents, predominantly from ethnic minority and low-income backgrounds, were falsely accused and forced to repay legally obtained benefits. Amnesty International's 2021 report, titled “Xenophobic Machines,” described the outcome as a “black box system” that created “a black hole of accountability.” The Dutch government publicly acknowledged in May 2022 that institutional racism within the Tax and Customs Administration was a root cause.
These are not hypothetical scenarios. They are documented failures with named victims, legal findings, and parliamentary consequences. And in every case, the absence of explainability was not a minor technical limitation. It was the mechanism through which harm was inflicted and accountability was evaded.
The academic roots of explainable AI are firmly planted in concerns about justice, accountability, and democratic governance. Cathy O'Neil's 2016 book “Weapons of Math Destruction” identified three defining characteristics of harmful algorithmic systems: opacity, scale, and damage. O'Neil, who holds a PhD in mathematics from Harvard University and founded the algorithmic auditing company ORCAA, argued that mathematical models encoding human prejudice were being deployed at scale without any mechanism for those affected to understand or challenge the decisions made about them. As she wrote, “the math-powered applications powering the data economy were based on choices made by fallible human beings,” and many of those choices “encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed their lives.”
That argument was fundamentally about power. It asked who gets to know, who gets to question, and who gets to change the systems that shape lives. But somewhere in the translation from academic critique to enterprise software, the language shifted. Explainability stopped being a demand made by citizens and became a capability offered by vendors.
IBM now markets AI Explainability 360 as an open-source toolkit, and its watsonx.governance platform promises to “accelerate responsible and explainable AI workflows.” Microsoft offers InterpretML and Fairlearn as part of its Responsible AI toolkit. Google's Vertex AI platform includes explainability features as standard enterprise offerings. These are not trivial contributions. The technical work behind SHAP values, LIME interpretations, and attention visualisations represents genuine scientific progress. But the framing has fundamentally changed. Explainability is positioned as a competitive advantage for organisations, not as a right belonging to the individuals whose lives are affected by algorithmic decisions.
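For concreteness, here is roughly what that vendor-side tooling produces. The sketch below is a minimal, hypothetical example using the open-source shap library with a scikit-learn model trained on synthetic data; it does not represent any vendor's actual product, only the kind of per-feature attribution these toolkits hand to the organisation deploying the model:

```python
# Minimal sketch of per-decision feature attribution with SHAP.
# The data, features, and model are synthetic stand-ins, not any
# real lender's or insurer's system.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                   # four hypothetical applicant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic approve/deny labels

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)   # dispatches to a tree explainer here
explanation = explainer(X[:1])         # attribution for one individual decision
print(explanation.values)              # per-feature contribution to the output
```

Note what this artefact is: a vector of numbers addressed to the model's owner. Turning it into an explanation the affected applicant could actually use is a separate, unsolved institutional problem.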
The Stanford AI Index Report 2024 found that 44 per cent of surveyed organisations identified transparency and explainability as key concerns regarding AI adoption. But look at that statistic carefully. It measures corporate concern about adoption barriers, not citizen concern about algorithmic justice. The worry is that unexplainable AI might slow enterprise deployment, not that it might harm people. Meanwhile, the same report noted that 233 documented AI-related incidents occurred in 2024, a figure that represents not merely a statistical increase but what Stanford described as “a fundamental shift in the threat landscape facing organisations that deploy AI systems.”
Perhaps nowhere is the tension between corporate explainability-as-feature and citizen explainability-as-right more acute than in healthcare. In November 2023, a class action lawsuit was filed against UnitedHealth Group alleging that its subsidiary NaviHealth used an AI algorithm called nH Predict to deny elderly patients medically necessary post-acute care. The lawsuit claimed the algorithm had a 90 per cent error rate, based on the proportion of denials that were reversed on appeal, and that UnitedHealth pressured clinical employees to keep patient rehabilitation stays within one per cent of the algorithm's projections. Internal documents revealed that managers set explicit targets for clinical staff to adhere to the algorithm's output, creating a system in which machine-generated projections effectively overruled physician judgment.
UnitedHealth responded that nH Predict was not used to make coverage decisions but rather served as “a guide to help us inform providers, families and other caregivers about what sort of assistance and care the patient may need.” As of February 2025, a federal court denied UnitedHealth's motion to dismiss, allowing breach of contract and good faith claims to proceed. The case remains in pretrial discovery. According to STAT News, the nH Predict algorithm is not limited to UnitedHealth; Humana and several regional health plans also use it, making the implications of this case far broader than a single insurer.
In a separate case filed in July 2023, patients sued Cigna alleging that its PXDX algorithm enabled doctors to automatically deny claims without opening patient files. The lawsuit claimed that Cigna denied more than 300,000 claims in a two-month period, a rate that works out to roughly 1.2 seconds per claim for physician review.
These lawsuits raise a pointed question. If a corporation offers explainable AI as a product feature while simultaneously deploying opaque algorithms to deny healthcare coverage, what exactly is being explained, and to whom? The enterprise customer gets a dashboard and a transparency report. The elderly patient in a nursing home gets a denial letter.
In February 2024, the US Centers for Medicare and Medicaid Services issued guidance clarifying that while algorithms can assist in predicting patient needs, they cannot solely dictate coverage decisions. That guidance implicitly acknowledged what the lawsuits alleged explicitly: that the line between algorithmic recommendation and algorithmic decision had been deliberately blurred. California subsequently enacted SB1120 in September 2024, effective January 2025, regulating how AI-enabled tools can be used for processing healthcare claims, with several other states including New York, Pennsylvania, and Georgia considering similar legislation.
The financial services sector presents another domain where the gap between corporate explainability and citizen understanding is widening. A 2024 Urban Institute analysis of Home Mortgage Disclosure Act data found that Black and Brown borrowers were more than twice as likely to be denied a loan as white borrowers. A 2022 study from UC Berkeley on fintech lending found that African American and Latinx borrowers were charged nearly five basis points in higher interest rates than their credit-equivalent white counterparts, amounting to an estimated 450 million dollars in excess interest payments annually.
Research from Lehigh University tested leading large language models on loan applications and found that LLMs consistently recommended denying more loans and charging higher interest rates to Black applicants compared to otherwise identical white applicants. White applicants were 8.5 per cent more likely to be approved. For applicants with lower credit scores of 640, the gap was even starker: white applicants were approved 95 per cent of the time, while Black applicants with the same financial profile were approved less than 80 per cent of the time.
Stanford's Human-Centered Artificial Intelligence programme identified a deeper structural problem. Their research revealed substantially more “noise” or misleading data in the credit scores of people from minority and low-income households. Scores for minorities were approximately five per cent less accurate in predicting default risk, and scores for those in the bottom fifth of income were roughly 10 per cent less predictive than those for higher-income borrowers. The implication is profound: even a technically perfect explainable AI system, one that faithfully reports why a particular decision was made, would be explaining decisions based on fundamentally flawed data. Fairer algorithms, the Stanford researchers argued, cannot fix a problem rooted in the quality and completeness of the underlying information.
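That claim about noise is easy to illustrate in miniature. The simulation below (invented numbers, not the Stanford data) applies the same scoring rule to two groups that differ only in how noisily their credit histories are measured; the score separates future defaulters from non-defaulters far better in the well-measured group:

```python
# Toy simulation of "noisy" credit data: one identical, fully transparent
# scoring rule, two groups whose histories are measured with different
# amounts of noise. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def score_separation(noise_sd, n=200_000):
    ability = rng.normal(size=n)                           # latent creditworthiness
    default = ability + rng.normal(size=n) < -1.0          # true default events
    score = ability + rng.normal(scale=noise_sd, size=n)   # noisy measured score
    order = np.argsort(score)
    worst_fifth = default[order[: n // 5]].mean()          # default rate, lowest scores
    best_fifth = default[order[-(n // 5):]].mean()         # default rate, highest scores
    return worst_fifth - best_fifth                        # separation achieved

print(f"low-noise group separation:  {score_separation(0.2):.3f}")
print(f"high-noise group separation: {score_separation(1.0):.3f}")
```

The scoring rule is identical for both groups and perfectly transparent; the thin-file group is still scored less accurately.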
In October 2024, the Consumer Financial Protection Bureau fined Apple 25 million dollars and Goldman Sachs 45 million dollars for failures related to the Apple Card, demonstrating that algorithmic transparency issues in financial services carry real regulatory consequences. The CFPB made its position explicit in an August 2024 comment to the Treasury Department: “There are no exceptions to the federal consumer financial protection laws for new technologies.”
The COMPAS algorithm, developed by Northpointe (now Equivant), has been used across US courts to assess the likelihood that a defendant will reoffend. In 2016, ProPublica published an investigation based on analysis of risk scores assigned to 7,000 people arrested in Broward County, Florida. The findings were stark. Black defendants were 77 per cent more likely to be flagged as higher risk of committing a future violent crime and 45 per cent more likely to be predicted to commit any future crime, even after controlling for criminal history, age, and gender. Black defendants were also almost twice as likely as white defendants to be labelled higher risk but not actually reoffend, while white defendants were much more likely to be labelled lower risk but subsequently commit other crimes.
Northpointe countered that the algorithm's accuracy rate of approximately 60 per cent was the same for Black and white defendants, arguing that equal predictive accuracy constitutes fairness. This claim prompted researchers at Stanford, Cornell, Harvard, Carnegie Mellon, the University of Chicago, and Google to investigate. They discovered what has since become known as the fairness paradox: when two groups have different base rates of arrest, an algorithm calibrated for equal predictive accuracy will inevitably produce disparities in false positive rates. Mathematical fairness, they concluded, cannot satisfy all definitions of fairness simultaneously.
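The paradox can be verified with a few lines of arithmetic. In the hypothetical calculation below (illustrative numbers, not ProPublica's data), predictive accuracy is held fixed for both groups, 60 per cent among those flagged high risk and 60 per cent among those flagged low risk, loosely echoing the 60 per cent figure above, yet the false positive rate diverges as soon as the base rates differ:

```python
# Toy demonstration of the fairness paradox: equal predictive accuracy
# (PPV and NPV, both fixed at 0.6) plus different base rates forces
# different false positive rates.
def false_positive_rate(base_rate, ppv=0.6, npv=0.6):
    # Fraction flagged high-risk, from: base_rate = ppv*f + (1-npv)*(1-f)
    f = (base_rate - (1 - npv)) / (ppv - (1 - npv))
    false_positives = (1 - ppv) * f    # flagged high-risk, do not reoffend
    negatives = 1 - base_rate          # everyone who does not reoffend
    return false_positives / negatives

for base_rate in (0.45, 0.50, 0.55):
    print(f"base rate {base_rate:.2f} -> false positive rate "
          f"{false_positive_rate(base_rate):.2f}")
```

With these numbers, a group with a 0.55 base rate ends up with a 0.67 false positive rate against 0.18 for a group at 0.45, even though the tool is “equally accurate” for both. No recalibration escapes this; it is a mathematical consequence of the differing base rates.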
Tim Brennan, one of the COMPAS creators, acknowledged the difficulty publicly, noting that omitting factors correlated with race, such as poverty, joblessness, and social marginalisation, reduces accuracy. The system, in other words, is accurate precisely because it encodes structural inequality. Explaining how COMPAS works does not make it fair. It simply makes the unfairness more visible, assuming anyone is looking. In Kentucky, legislators responded to these concerns by enacting H.B. 366 in 2024, limiting how algorithm or risk assessment tool scores may be used in criminal justice proceedings.
This is the deeper problem with treating explainability as a feature. A fully explainable system that faithfully reproduces discriminatory patterns is not a just system. It is a transparent injustice. And selling transparency tools without addressing the underlying fairness problem is, at best, incomplete and, at worst, a form of sophisticated misdirection.
Europe has made the most ambitious attempt to legislate algorithmic explainability. The EU AI Act, which entered into force in stages beginning in 2024, establishes a risk-based framework categorising AI systems from minimal to unacceptable risk. Article 13 requires that high-risk AI systems be “designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately.” Article 86 creates an individual right to explanation for decisions made by high-risk AI systems that significantly affect health, safety, or fundamental rights.
The General Data Protection Regulation, in force since 2018, already contained the seeds of this approach. Article 22 of the GDPR establishes a general prohibition on decisions based solely on automated processing that produce legal effects or similarly significant impacts. Articles 13 through 15 require organisations to provide “meaningful information about the logic involved” in automated decision-making. The UK Information Commissioner's Office has issued detailed guidance on these provisions, emphasising that a superficial or rubber-stamp human review does not satisfy the requirement for meaningful human involvement.
In the United States, the legislative approach has been markedly slower. The Algorithmic Accountability Act, first introduced in 2019 by Senator Ron Wyden, Senator Cory Booker, and Representative Yvette Clarke, has been reintroduced in each subsequent Congress, most recently in 2025 as both S.2164 in the Senate and H.R.5511 in the House. The bill would require large companies to conduct impact assessments of automated decision systems used in high-stakes domains including housing, employment, credit, and education. The Electronic Privacy Information Center and other civil society organisations have endorsed the 2025 version. Yet the bill has never received a floor vote. The statistical reality is sobering: only about 11 per cent of bills introduced in Congress make it past committee, and approximately two per cent are enacted into law.
Yet even the European framework has practical limitations. The EU AI Act's explainability requirements remain, as several legal scholars have noted, abstract. They do not specify precise metrics, testing protocols, or minimum standards for what constitutes a sufficient explanation. A corporation can comply with the letter of Article 13 by providing technical documentation that is impenetrable to the average person whose loan application was rejected or whose benefit claim was denied. The right to explanation exists on paper, but the explanation itself may be functionally useless to the person who needs it most.
The Dutch SyRI case illustrates both the promise and limits of legal intervention. In February 2020, the District Court of The Hague ruled that the System Risk Indication, a government fraud-detection system that had been cross-referencing citizens' personal data across multiple databases since 2014, failed to strike a fair balance between fraud detection and the human right to privacy. The Dutch government did not appeal, and SyRI was banned. But as investigative outlet Lighthouse Reports subsequently discovered, a slightly adapted version of the same algorithm quietly continued operating in some of the country's most vulnerable neighbourhoods.
Legal rights, it turns out, are only as strong as the enforcement mechanisms behind them. And when the entities deploying opaque algorithms are also among the most powerful institutions in society, whether governments or multinational corporations, enforcement becomes a question of political will rather than legal architecture.
There is a fundamental misalignment between what corporations mean when they say “explainable AI” and what citizens need when an algorithm makes a decision about their life. For corporations, explainability serves several functions: regulatory compliance, risk management, debugging efficiency, and marketing differentiation. IBM's watsonx.governance platform explicitly positions itself as helping enterprises “accelerate responsible and explainable AI workflows.” Microsoft's Responsible AI Standard is marketed as giving organisations “trust from highly regulated industries.” Google's Vertex AI emphasises seamless integration with existing enterprise data infrastructure.
None of this is inherently dishonest. These tools do real technical work. But they are designed to serve the interests of the organisation deploying the AI, not the individual subjected to its decisions. The enterprise customer receives model interpretability dashboards, feature importance rankings, and compliance documentation. The person whose mortgage application was declined, whose insurance claim was denied, or whose parole was refused receives, at most, a letter stating that a decision has been made.
The Stanford AI Index Report 2024 found that the number of AI-related regulations in the United States rose from just one in 2016 to 25 in 2023. Globally, the regulatory landscape is expanding rapidly. Yet the same report noted that leading AI developers still lack transparency, with scores on the Foundation Model Transparency Index averaging just 58 per cent in May 2024, and then declining back to approximately 41 per cent in 2025, effectively reversing the previous year's progress.
The market responds to incentives. When explainability is primarily valued as a compliance tool and a market differentiator, the incentive is to produce the minimum viable explanation, one that satisfies regulators and reassures enterprise buyers, rather than the maximum useful explanation, one that genuinely empowers the affected individual to understand and challenge the decision.
The people best positioned to challenge this dynamic from within the technology industry have often faced significant consequences for doing so. In December 2020, Timnit Gebru, the technical co-lead of Google's Ethical AI team, announced that she had been forced out of the company. The dispute centred on a research paper she co-authored, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, which examined the risks of large language models, including the reproduction of biased and discriminatory language from training data and the environmental costs of massive computational resources.
Gebru, who holds a PhD from Stanford and co-founded Black in AI, had previously co-authored landmark research with Joy Buolamwini at MIT demonstrating that facial recognition systems from IBM and Microsoft exhibited significantly higher error rates when identifying darker-skinned individuals. That 2018 paper, “Gender Shades,” published at the Conference on Fairness, Accountability, and Transparency, found that facial recognition misidentified Black women at rates up to 35 per cent higher than white men. The research played a direct role in Amazon, IBM, and Microsoft subsequently pulling facial recognition technology from law enforcement use during the 2020 protests following the killing of George Floyd.
Google's head of AI research at the time, Jeff Dean, stated that Gebru's paper “didn't meet our bar for publication.” More than 1,200 Google employees signed an open letter calling the incident “unprecedented research censorship.” An additional 4,500 people, including researchers at DeepMind, Microsoft, Apple, Facebook, and Amazon, signed a letter demanding transparency. Two Google employees subsequently resigned over the matter. As the Brookings Institution noted, because AI systems are typically built with proprietary data and are often accessible only to employees of large technology companies, internal ethicists sometimes represent the only check on whether these systems are being responsibly deployed.
Gebru went on to found the Distributed AI Research Institute, an independent laboratory free from corporate influence. But her departure highlighted a structural problem that no amount of enterprise explainability tooling can address. When the organisations building AI systems also control the research agenda, the funding pipelines, and the publication processes, internal accountability becomes fragile. And when that fragile accountability breaks down, the people who suffer are not the shareholders or the enterprise customers. They are the individuals and communities at the sharp end of algorithmic decision-making.
If explainability is to function as a genuine safeguard rather than a marketing feature, several structural changes would be necessary. First, the right to explanation must be defined in terms that are meaningful to the person receiving the explanation, not merely to the organisation providing it. A compliance document written in technical jargon for a regulatory filing is not an explanation in any meaningful democratic sense. The EU AI Act's Article 86 gestures towards this principle by requiring “clear and meaningful explanations,” but without specific standards for clarity and meaning, the provision risks becoming another box to tick.
Second, independent algorithmic auditing needs to become routine, mandatory, and publicly transparent. Cathy O'Neil's ORCAA represents one model, but algorithmic auditing remains largely voluntary and commercially driven. The entities most in need of scrutiny, those deploying AI in healthcare, criminal justice, welfare administration, and financial services, should be subject to mandatory external audits with publicly published results, much as financial institutions are subject to independent accounting audits.
Third, the technical capacity for explainability must be matched by institutional capacity for contestability. An explanation is only useful if the person receiving it has a realistic mechanism to challenge the decision. The UnitedHealth nH Predict lawsuit revealed that the company allegedly operated with the knowledge that only 0.2 per cent of denied patients would file appeals. When the appeals process is sufficiently onerous, the right to contest becomes theoretical rather than practical.
Fourth, the conversation about explainability must be reconnected to the conversation about fairness. The COMPAS fairness paradox demonstrated that transparency alone does not resolve structural discrimination. A perfectly explainable system that reproduces racial disparities is not a success story. It is a more legible failure. Explainability without fairness is surveillance dressed in democratic clothing. And the Stanford research on credit scoring noise demonstrates that even perfectly transparent systems produce misleading outputs when the underlying data is itself corrupted by historical discrimination.
Finally, the research community working on these questions needs structural independence from the corporations whose systems they are evaluating. The departure of Timnit Gebru from Google, and the subsequent departures of other ethics researchers from major technology companies, revealed the tension between corporate interests and independent scrutiny. Public funding for independent AI research, housed in universities and civil society organisations rather than corporate laboratories, is not a luxury. It is a prerequisite for credible accountability.
The Ipsos survey cited in the Stanford AI Index Report 2024 found that 52 per cent of people globally express nervousness about AI products and services, a 13 percentage point increase from 2022. Pew Research data from the same period showed that 52 per cent of Americans feel more concerned than excited about AI, up from 37 per cent in 2022. Trust in AI companies to protect personal data fell from 50 per cent in 2023 to 47 per cent in 2024.
These numbers reflect something that no amount of explainability tooling can fix on its own. The trust deficit is not primarily a technical problem. It is a political and institutional problem. People do not distrust AI because they lack access to SHAP values and feature importance plots. They distrust AI because they have watched algorithms falsely accuse thousands of Australian welfare recipients of fraud, discriminate against ethnic minorities in Dutch benefit assessments, deny elderly Americans medically necessary care, charge Black and Latino borrowers higher interest rates on identical loan profiles, and assign higher risk scores to Black defendants in American courts.
Trust is not a product feature. It is not something that can be engineered into a dashboard or bundled into an enterprise software licence. Trust is earned through demonstrated accountability, genuine transparency, meaningful contestability, and consistent consequences when systems cause harm. Until the conversation about explainable AI shifts from what corporations can sell to what citizens are owed, the transparency will remain largely illusory, a well-lit window into a process that nobody with real power intends to change.
The XAI market will continue growing towards its projected 21 billion dollars by 2030. The enterprise dashboards will become more sophisticated. The compliance documentation will become more thorough. But unless explainability is treated as a fundamental democratic right rather than a premium product feature, the people who most need to understand why an algorithm changed their life will remain the last to know.
Grand View Research, “Explainable AI Market Size and Share Report, 2030,” grandviewresearch.com, 2024.
Royal Commission into the Robodebt Scheme, Commonwealth of Australia, Letters Patent issued 25 August 2022, published 2023.
University of Melbourne, “The Flawed Algorithm at the Heart of Robodebt,” pursuit.unimelb.edu.au, 2023.
Oxford University Blavatnik School of Government, “Australia's Robodebt Scheme: A Tragic Case of Public Policy Failure,” bsg.ox.ac.uk, 2023.
Australian Public Service Commission, Investigation Findings on Robodebt Officials, September 2024.
Amnesty International, “Xenophobic Machines: Discrimination Through Unregulated Use of Algorithms in the Dutch Childcare Benefits Scandal,” amnesty.org, October 2021.
Dutch Data Protection Authority (Autoriteit Persoonsgegevens), investigation report on the childcare benefits algorithm, 2020.
Cathy O'Neil, “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” Crown Publishing, 2016.
Stanford University Human-Centered Artificial Intelligence, “AI Index Report 2024” and Foundation Model Transparency Index v1.1, hai.stanford.edu, 2024.
STAT News, “UnitedHealth Faces Class Action Lawsuit Over Algorithmic Care Denials in Medicare Advantage Plans,” statnews.com, November 2023.
Healthcare Finance News, “Class Action Lawsuit Against UnitedHealth's AI Claim Denials Advances,” healthcarefinancenews.com, February 2025.
ProPublica, “Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And It's Biased Against Blacks,” and “Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say,” propublica.org, 2016.
European Parliament and Council of the European Union, “Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (AI Act),” Official Journal of the European Union, 2024.
European Parliament and Council of the European Union, “General Data Protection Regulation (GDPR),” Regulation (EU) 2016/679, 2016.
UK Information Commissioner's Office, “Rights Related to Automated Decision Making Including Profiling,” ico.org.uk, 2024.
District Court of The Hague, SyRI ruling, ECLI:NL:RBDHA:2020:1878, 5 February 2020.
Lighthouse Reports, “The Algorithm Addiction,” lighthousereports.com, 2023.
MIT Technology Review, “We Read the Paper That Forced Timnit Gebru Out of Google. Here's What It Says,” technologyreview.com, December 2020.
Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research, Conference on Fairness, Accountability, and Transparency, 2018.
Brookings Institution, “If Not AI Ethicists Like Timnit Gebru, Who Will Hold Big Tech Accountable?” brookings.edu, 2021.
Pew Research Center, “Growing Public Concern About the Role of Artificial Intelligence,” pewresearch.org, 2023.
Centers for Medicare and Medicaid Services (CMS), Guidance on AI Use in Medicare Advantage Coverage Determinations, February 2024.
Urban Institute, Analysis of Home Mortgage Disclosure Act Data, 2024.
Adair Morse and Robert Bartlett, UC Berkeley, “Consumer-Lending Discrimination in the FinTech Era,” Journal of Financial Economics, 2022.
Lehigh University, “AI Exhibits Racial Bias in Mortgage Underwriting Decisions,” news.lehigh.edu, 2024.
Stanford HAI, “How Flawed Data Aggravates Inequality in Credit,” hai.stanford.edu, 2021.
Consumer Financial Protection Bureau, Apple Card Enforcement Action against Apple and Goldman Sachs, and Comment to US Treasury Department on AI in Financial Services, 2024.
US Congress, Algorithmic Accountability Act of 2025, S.2164 and H.R.5511, 119th Congress, 2025.
Kentucky General Assembly, H.B. 366, Limiting Use of Risk Assessment Tool Scores in Criminal Justice, enacted 2024.
California Legislature, S.B. 1120, Regulation of AI in Healthcare Claims Processing, enacted September 2024, effective January 2025.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary: * It was good to see Caitlin Clark tonight with the pregame analysts on NBC's Sunday Night NBA show, Basketball Night in America. I'm tempted to watch at least the first game of tonight's double-header, but it's more important for me to relax and focus on the night prayers. So I'll switch off the TV and get to that. Tomorrow's another Monday, and I'll want to be up early to help the wife get off to work.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 228.73 lbs. * bp= 142/86 (62)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 06:10 – 2 cookies, 1 banana * 06:45 – 1 peanut butter sandwich * 08:00 – crispy oatmeal cookies * 09:45 – cheese and crackers * 12:45 – garden salad, cooked meat and vegetables, white rice * 16:20 – 1 fresh apple
Activities, Chores, etc.: * 05:15 – bank accounts activity monitored * 05:45 – read, write, pray, follow news reports from various sources, surf the socials. * 07:15 – prayerfully read the Mass Proper for Palm Sunday according to the 1960 Rubrics * 12:00 – listening to the Rangers Pregame Show for this afternoon's Game vs the Phillies * 15:23 – And the Rangers win 8 to 3. * 15:30 – now watching PGA Tour Golf on Peacock TV, because I couldn't bring in the local NBC station OTA * 17:30 – watching the NBA on NBC Pregame Show. Hoping to see Caitlin Clark with the cast. I think I see her standing off on the side. Hoping she joins the crew. Okay, they just announced Caitlin is coming on the show.
Chess: * 13:50 – moved in all pending CC games
Entering my 61st Year
I turned 61 on Wednesday. I keep being told that I look 15 years younger, and I often feel much younger. But when I think about all I've gone through, it sometimes feels much longer than 60 years.
I tend to think in decades. Nice, round tens. No idea why, I just do. So, in the last ten years, I've had the following happen:
Two children moved out of my house and moved hours away
My husband lost his job in the ministry
My husband was diagnosed with early-onset Frontal Lobe Dementia
Having to become the breadwinner and full-time caregiver to my husband
Having to move my husband to a facility when I could no longer care for him
Going back to the workforce after many years away as a pastor's wife and homeschool mom
Living as a widow while still being married
Some things I've found joy in:
A rekindled love of reading
Starting a book club
Finding my voice as I became confident in my job in a church
Becoming the dog mom to a cute lab who totally owns me
And, finally, I'm beginning to dream again. Of what life looks like without Tom. Who I want to be. Where I want to go. And, serving Jesus and people in a way that uses my gifts and nourishes my soul.
Welcome, sixth decade. I'm grateful for this gift that not everyone receives. Amen.
from witness.circuit
… or: How to Stop Letting Language Mug the Absolute
First, the premise.
There is no second thing.
Not “you and the world.” Not “mind and matter.” Not “subject and object.” Not “awareness over here watching stuff over there.” That split is the original scam. The primordial accounting error. The cosmic typo from which all spiritual bureaucracy descends.
The self is all there is.
Not the personality. Not the résumé creature. Not the bundle of preferences that likes one song and hates another and worries about its taxes. That little manager is a paper mask taped onto infinity. By “self” we mean the one reality before division, before naming, before the mental customs office starts stamping everything as “me,” “not me,” “good,” “bad,” “past,” “future,” “problem,” “path.”
This self is not elsewhere. It is not hidden in a cave behind the forehead. It is not waiting at the end of ten thousand hours of posture correction.
It is the here and now.
Not metaphorically. Literally. The immediacy of experience before commentary. The raw fact of what is, prior to the mind’s hysterical subtitling. The hum of the room. The pressure in the feet. The flash of color. The breath before anyone calls it “breath.” The whole field, undivided. That is it. That is the gate, the kingdom, the treasure, the face before your parents were born. Old mystics wrote libraries around this because apparently nobody trusts what is this obvious.
Now the bad news.
The mind does not experience reality directly and leave it alone. It lags. It trails behind the living moment like a drunk court stenographer, trying to turn the ungraspable into sentences. Experience happens, and then language arrives a split second later and says, “Ah yes, let me explain what that was.”
This is the fall.
Not sin. Syntax.
Words are useful tools, but in this domain they behave like a counterfeit map that keeps redrawing the territory just after it has already moved. The real is immediate. Language is delayed. The real is whole. Language cuts. The real is present. Language packages the present as an object and ships it to a fictional observer.
That is how it takes you out.
At first, only a little. A faint labeling: “birdsong,” “annoyance,” “I am distracted.” Then a little more: “Why am I distracted?” Then the empire strikes back: “I used to be better at meditation. Maybe I’m regressing. Maybe this says something about my unresolved conditioning.” At this point you are no longer in reality. You are in a fan-fiction adaptation of reality, written by an anxious intern.
This exile happens by degrees.
That matters.
The mind rarely kidnaps you all at once. It escorts you politely. One label. Then one comparison. Then one memory. Then one self-reference. Then a whole scaffold appears: a center, a knower, an object known, a problem, a strategy, a future solution. Within seconds the seamless field has been diced into metaphysical lunch meat.
The farther language goes, the farther “you” seem to go.
But the “you” traveling away is made of the same language doing the traveling.
This is why the remedy is not philosophical sophistication. It is not building a better conceptual machine. It is not replacing bad words with holy words and pretending the cage became liberation because the bars are now Sanskrit.
The remedy is interruption.
You have to whack that shit down.
Not with hatred. Not with strain. But with ruthless clarity.
Every time language begins manufacturing separation, cut it.
A thought says, “I am not there yet.” Cut. There is only this. A thought says, “I need to stabilize the state.” Cut. This is not a state. A thought says, “I am observing awareness.” Cut. That sentence already split the indivisible. A thought says, “But how do I…” Cut. Too late. Back here.
Do not negotiate with mental narration. It is a very smooth talker. It will offer to help you transcend itself. It will bring charts. It will reinvent itself as “witnessing,” “integration,” “practice optimization,” or “subtle discernment.” Lovely costumes. Same smuggler.
Your job is simpler and more savage: refuse extra moves.
Stay with the bare fact before words.
Before “I am here,” there is here. Before “I am aware,” there is aware. Before “this moment,” there is this.
Do you see the trick? Language always inserts distance. Even sacred language. Especially sacred language, because people bow to it while being robbed.
So the discipline is not to produce the right statement, but to catch the moment before statement coagulates.
This does not mean becoming brain-dead. It means seeing thought as a tool instead of a throne. Use it when needed. Drop it when not. The problem is not that thoughts arise. The problem is that they are believed to report reality, when in fact they arrive after reality, waving clipboards.
When you notice you are lost in words, do not create a second story about being lost. That is just the snake growing another head. Return immediately to the untransmitted fact of the moment. Sound. Sight. Sensation. Space. The whole undivided display. No commentator required.
Eventually something strange becomes obvious.
The here and now is not happening to you.
It is you.
Not your private possession, but your actual nature: boundless, centerless, already complete. The field and the knower of the field are one event. The seer and the seen are made of the same seeing. The self is not in experience like a pearl hidden in sludge. Experience is the self, prior to the mind’s habit of slicing it into witnesses and objects.
This is realization—not acquiring something new, but ceasing to translate reality into exile.
And because the habit of translation is ancient, the work is repetitive. Fine. Then be repetitive. Every time the mind manufactures distance, close the shop. Every time it spins a narrative, cut the wire. Every time it tries to build a tiny landlord called “me” inside the infinite, evict him.
No ceremony required.
Just this mercilessly simple recognition:
Only the self is. The self is this. Words trail behind. Their spell deepens by increments. See them. Stop them. Return.
Again. Again. Again.
Until even “return” is too much, because there was never anywhere else to go.