Want to join in? Respond to our weekly writing prompts, open to everyone.
from
The Home Altar

The places we live have plenty of locations for special purposes: rooms to prepare meals and store food, rooms for breaking bread, rooms for cleaning and caring for our bodies, rooms for rest, rooms for work, rooms for play, and rooms for putting things away until they’re needed. When an activity has a space, whether that’s a room, a corner of a room, a closet, or just one little spot, it encourages and facilitates that activity as a part of our daily living. The same can be true of our prayer and spiritual practice: making physical space for it can help us to make temporal and emotional space for it in our days.
While it is true that many practices can happen in virtually any space, dedicating space is a powerful reminder that this is an essential activity of daily life. While Jesus recommends entering a prayer closet and praying in secret (Matthew 6:6), I believe this guidance is more about an interior location within ourselves than about a space in our domicile. Furthermore, as a queer person of faith, I really dislike the metaphor of meeting God only in the closet.

At the same time, I have found it to be a powerful reminder to have a space that invites me back into a deeper relationship with God in the place where I live. The meditation cushion and singing bowl invite me to the power of silence and now. The icons and art give me a place to fix my gaze and to be beheld in love. The vows of my community remind me of my promises. A begging bowl invites me into practices of shameless begging for the sake of my neighbors and world. My father’s pondering chair invites me to gentle contemplation. My beloved sibling’s crocheted prayer shawl wraps me in comfort and holy love. Various rosaries invite me into conversation with God(ess), the Holy Mother, and with Christ. This space invites me over and over and over into a posture and attitude of reverence, prayer, and love.
This doesn’t mean that this is the only place I pray or practice my spirituality. Indeed there is ample prayer that echoes forth in walks around the neighborhood, at the sink washing dishes, in the gardens as I tend the earth (more on this in part 2), beside the dog as she presses into me, and at my bedside as I begin and end each day. What makes it powerful is that this space is one that I enter daily, and I am invited once more to be mindful of God and neighbor in a gentle and compassionate way.
It’s one small corner of my living space that is loaded with meaning, and the chance for additional meaning-making abounds. For this, I am deeply grateful.
from
Littoral
“Society takes no responsibility for Black people’s poverty and their social exclusion and isolation, even though the history of our continuing mistreatment and subjection at the hands of that very same society is well-known; rather, our poverty and exclusion are offered as evidence of our inherent inferiority.”
— Rinaldo Walcott, On Property, p. 40
from
hustin.art
The inclement atmosphere exacerbated the protagonist's valetudinarian disposition, a stark contrast to the pulchritudinous neon flickering above. He masticated a toothpick, his countenance an inscrutable palimpsest of historical transgressions. “Your lugubrious machinations possess no efficacy here,” he intoned, his voice a gravelly baritone of cynicism. The evanescent vapor from his cigarette ascended, intertwining with the miasma of the subterranean alcove while he contemplated the inexorable approach of fate.
#Scratch

Miniguns or missile launchers? Choose wisely.
#humor #cat #guns #missiles
from
EpicMind
When I listen to people today – on the train, at the office, over dinner, or simply online – I hear the same undertones astonishingly often: exhaustion, irritability, the pressure of comparison, a diffuse restlessness. Many live in material prosperity and at the same time seem inwardly depleted. We optimise sleep, diet, productivity and leisure, and yet the feeling often remains that something is not right. And still: never before have we had so many ways to make our lives comfortable, and at the same time such trouble finding peace.
In moments like these, it is sometimes worth looking far back, because some problems have remained astonishingly constant. Nearly two thousand years ago, the Roman philosopher Seneca wrote to his brother Gallio about anger, ambition, fear, wealth, public opinion and the difficulty of leading a good life (De vita beata, properly Ad Gallionem de vita beata, "To Gallio on the Happy Life"). His world was more brutal than ours, politically even less stable, and shaped by existential risks. Even so, many of his thoughts feel almost unsettlingly current today. Perhaps precisely because they aim not at comfort but at inner stability.
Seneca does not demand emotionlessness. He does not ask that we become cold or untouched. What concerns him is what the Stoics call "apatheia" – not indifference, but freedom from one's own emotional swings. Whoever constantly oscillates between euphoria and despair becomes a plaything of circumstance.
He puts it surprisingly plainly: "… since everything is banished that either excites or frightens us." III (4.)
And elsewhere he writes of a "… secure tranquillity and loftiness of the soul …" V (1.)
I find it remarkable how modern that sounds. Our present practically runs on emotional overdrive. Outrage generates reach, fear binds attention, and digital platforms reward strong reactions. Whoever is permanently online often lives in an artificially heightened state of arousal, reacting to every message, every crisis, every provocation. Calm seems almost suspicious.
Seneca would presumably see in that not freedom but dependence. It is not the world that then governs our life, but our reactions to it. That is precisely why his demand for inner equilibrium strikes me today less as ancient wisdom and more as a form of mental self-defence.
Hardly anything contradicts the present as much as Seneca's relationship to wealth. He does not condemn possessions outright; he was himself wealthy and politically influential. That is exactly what makes his position interesting. For him the problem is not possession, but the soul's attachment to it.
He writes: "I will despise riches, whether present or absent from me, in equal measure …" XX (2.)
And further: "… he declares that one must despise those things, not so that one does not possess them, but so that one does not possess them with anxiety …" XXI (3.)
That touches a sensitive spot in modern societies. Today consumption is often understood not merely as luxury but as an expression of one's own identity. Apartments, clothing, travel and gadgets frequently serve to present the self. Who am I? Ever more often the answer is: look at what I own.
The problem begins where possession becomes psychologically necessary. Then prosperity produces not calm but fear of loss. Suddenly you no longer have things; the things have you. Seneca would presumably say: whoever makes his inner worth dependent on externals lives permanently on unsteady ground.
Interestingly, none of this sounds ascetic. It is rather a plea for inner independence. Wealth may be pleasant. It may make life easier. But it must not decide whether a person respects himself.
One of Seneca's most beautiful thoughts is perhaps also one of his most uncomfortable. A human being, he writes, does not live for himself alone. Meaning arises only in relation to others.
"I will live as though I knew that I was born for others …" XX (2.)
And further: "Me, the individual, she has given to all; to me, the individual, she has given all." XX (3.)
That runs against a culture that often thinks of self-realisation almost exclusively in individual terms. Of course personal freedom matters. But many people eventually discover that pure self-optimisation can become strangely empty. Career, status and the chase for experiences are no substitute for connection.
I sometimes have the impression that our society has forgotten how to be communal. There is much talk of self-protection, #Selbstmanagement and self-marketing – and astonishingly little about whom one actually serves. That is exactly where Seneca begins. A meaningful life arises not from personal enjoyment alone, but from relationship, responsibility and generosity.
At first that sounds moralistic. In fact it is also psychologically plausible. People need the feeling of being part of something larger than their own biography.
Antiquity knew no social media, no streaming platforms, no constant digital distraction. Yet Seneca already understood something fundamental about human beings: boundlessness rarely makes us happy.
"All that I possess I will neither guard meanly nor squander extravagantly …" XX (3.)
And on pleasure he writes tersely: "… moderation in it delights." X (3.)
Almost banal. Of course one should keep measure. Yet that is exactly what modern societies seem to find ever harder. Our world is built for maximisation: more performance, more visibility, more consumption, more entertainment, more efficiency. Even rest gets optimised.
The result is often paradoxical. People have endless options and, precisely because of that, lose their inner calm. Seneca's idea of moderation is therefore not petit-bourgeois modesty, but a form of deliberate self-limitation. Not everything that is possible must be exhausted. Perhaps therein lies an underrated form of freedom.
Hardly any passage feels more current than Seneca's warning about the power of public opinion.
"Nothing will I do on account of opinion, everything on account of conviction …" XX (3.)
And right at the beginning of the work he writes: "… that we live not by reasoned grounds, but by examples …" I (3.)
One might think this sentence had been written for the age of social media. Never has it been so easy to compare oneself with others at every moment. Approval is made visible, opinions are publicly rated, and social recognition can be measured in numbers.
That changes people. Many eventually begin, unconsciously, to act not from conviction but for resonance. What plays well? What gets liked? What wins approval? Seneca sees in this a danger to inner freedom. Whoever constantly orients himself by the judgement of the crowd eventually loses access to his own judgement.
What is remarkable here is that Seneca himself was no unworldly hermit. He moved at the centre of power of the Roman Empire, was rich, politically influential and at the same time constantly under threat. That is exactly why his thoughts carry credibility. He wrote about the temptations of fame and power not from a safe distance, but from inside them.
Perhaps that is why Stoic thinkers are enjoying a renaissance. Not because people have suddenly started reading ancient philosophy in earnest again, but because modern societies permanently generate needs while offering hardly any orientation. The Stoa, however, is often misunderstood today. Many treat it as just another self-optimisation tool: work more efficiently, get tougher, function more productively, appear emotionally unassailable. On social media the Stoic often looks like an ascetic high performer with a morning routine and perfect self-control.
That, however, misses Seneca rather thoroughly. Stoicism is not a technique for boosting performance, nor an emotionless business philosophy for people with a calendar app and a caffeine problem. Seneca is not interested in how you achieve more, but in how you become inwardly freer. He promises no perfect life, no permanent contentment, and certainly no wellness philosophy. His texts are about how to keep one's bearing despite uncertainty, loss, pressure and human weakness.
His thoughts are, in fact, uncomfortable. They demand discipline, self-observation and the willingness not to be driven entirely by consumption, public opinion or emotion. Precisely therein lies their relevance.
Interestingly, Seneca himself was no flawless figure and by no means always lived up to his own ideals. Yet perhaps that is exactly what makes his texts human. He wrote not as an infallible sage, but as someone who knew the same tensions we do: ambition and doubt, comfort and conscience, power and inner unrest.
The older I get, the less simple promises of happiness convince me. Many modern self-help books promise optimisation, efficiency or mental control. Seneca is interested in something else: character. For him a good life arises not from maximal pleasure, but from inner bearing.
At first that seems stern. At the same time there is something consoling in it, for external circumstances can be controlled only to a limited degree, while one's own bearing, at least in part, can. Perhaps that is exactly why a Roman philosopher from the first century suddenly seems relevant again. Not because he offers easy solutions, but because he reminds us that a calm mind is probably worth more than a perfectly curated life.
Image source: studio/workshop of Gerrit van Honthorst (1592–1656), The Death of Seneca, Centraal Museum, Utrecht, public domain.
Disclaimer: Parts of this text were revised with DeepL Write (proofreading and copy-editing). Google's NotebookLM was used for research in the works/sources mentioned and in my notes.
Topic #Philosophie | #ProductivityPorn
from 下川友
At night, I was rereading old chat logs. A short message remained, saying only, "Let's start an exchange diary." I can no longer remember who sent it. But I do remember the photo attached right after it: a child who had crawled under a piano, staring at the glow of a smartphone. The shadow then looked like a fox, and only the darkness was strangely quiet.
Gazing at that photo, I remembered the size of my own desk. That expanse, I now think, was never meant for putting anything on; from the start it was a margin for confirming that nothing existed there.
I played the 8mm film stacked beside the desk. It was footage of the university festival from my student days. The me on the screen is searching for something. A paper cup. Not for what it contained; the mere fact of having filmed a paper cup on 8mm film felt, back then, like having obtained something. Watching myself crouch down and stretch for no reason, I can see that I was elated by the very situation of being able to view my own actions from the outside.
When I glanced up, the screen playing the film had drooped. The arm's joint was loose, and left alone the monitor slowly turns to face the floor. I seem to recall the manual saying that tightening some screw would fix it. But I cannot understand which force travels where. A mechanism recedes the moment you try to know it.
During lunch break, I switched on the old CRT television I had set on my desk at the office. The desk, handed over with a "use it however you like," was too big for me. I had placed the clumsy round-cornered TV on it only to fill the empty surface. I used to think that at times like this one should watch the TV stars of the past. But I grew tired of that a while ago, and now I was simply looking straight at only the beautiful parts of deep-sea fish.
Summer approaches, and before it comes the rainy season. The moment I begin to vaguely sense that rainy season, my senses are pulled back to the previous year's winter, and winter begins again inside me. In winter, for just that moment, the world is radically simplified. Excess information peels away, and my field of vision becomes like transparent graph paper. I can quietly see who does not intersect with whom, and where no lines are drawn.
Between her and me, nothing was connected. She appears suddenly, demonstrates only the absence of a relationship, and leaves.
There lay a dullness that was typical of me.
from An Open Letter
I feel pretty conflicted right now because I’ve been somewhat talking to this girl K for a little bit now, and I am very confident that she is interested in at least a date, if not more. I feel like there are logistical reasons why I can say this is maybe not the relationship I’m looking for: she works opposite hours to mine, so I would only be able to spend time with her on the weekend. She also lives pretty far away from me. And additionally there are a couple of things that aren’t necessarily red flags but maybe more yellow for me: she isn’t in therapy, and has said a couple of things that kind of feel like they aren’t indicators of emotional maturity, but I also could be wrong. She also isn’t really chalant or expressive the same way that I am, and that’s not necessarily a dealbreaker, but I do really like it when someone can match my energy. Otherwise I feel like I’m kind of constantly fighting this pull to match their energy. She also isn’t really convinced on whether she wants to have kids or not, and she really wants to travel around a lot, meaning she isn’t necessarily ready to set down roots.
But at the same time she is fun to talk to, and she’s also pretty well established, with a job and her own friend group. We do have some similar interests, like certain kinds of games, and the gym. She also gives off a lot of that tomboyish energy that I like, where she is competitive about stupid things.
I keep thinking about this one reel I saw. A woman is at a restaurant where the menu only has fries. She orders the fries, then notices another patron with a really nice plate of pasta. When she asks the patron how she got that, the answer is that it’s as simple as just asking for it. When she mentions that pasta is not on the menu, the other patron says that there simply is no menu, and what you ask for you will get; it’s just a question of knowing what you can expect. And additionally, when she orders the pasta, she gets fries. She has to wait and send the fries back. Then she gets her pasta, but right before she starts to eat it, it turns into fries. She has to send it back again until finally she gets the pasta that she wanted.
I feel like this is almost a test from the universe, seeing if I am willing to say no and wait a little bit more for someone that I truly fall in love with. The universe has been kind to me by making it explicitly hard logistically, but also by illustrating that there aren’t going to be super clear answers, like someone who says outright that they refuse to have kids, or who lives hundreds of miles away. Often these things happen in this gray in-between. And I guess in journaling here, I feel like the answer is clear to me. I guess now the question is how to make sure I’m not leading someone on, even though neither of us has explicitly shown interest.
I guess when I think about it a little bit more, if I visualize the person I want to spend my life with, it’s someone who looks at me and smiles in a slightly mischievous but very grateful way. I think I could really value someone who can help me stand up when I’m at my lowest. I don’t want my partner to be my therapist or anything like that, but I absolutely want them to be someone I feel safe going to. And I know that I grew up only eating fries, but maybe I would like to hold out until I can find someone who would notice the little things that come from knowing me long enough to tell that I’m struggling, and maybe give me a little pack of candies when I get home, and a hug. And the thought of that makes me want to break down crying. I want to be careful of saying that I’m not asking for too much, because honestly, to me, that’s the world. I think that being loved can look like a dollar-store pack of Sour Patch Kids, a quiet reminder that you have a place in my mind. And even if that place gets dirty and neglected because you’re struggling, it’s a place worth cleaning and tidying up for you; instead of just shutting the door, letting a bit of sunlight in and letting me know that no matter what, I am loved.
I’ve gone pretty far from the original point, but I guess another kind of litmus test for me is the fact that I’ve kind of spent my entire life learning that depression was something to be hidden. And this was also partly my fault, at least in the sense that I was doing something that wasn’t good for other people: before I was properly treated, I was essentially making this massive concern someone else’s problem when they would give me some space. I would like to acknowledge that that is no longer the case, but it still is something I’ve had to unlearn and relearn: taking up space and asking for help from friends and family. I think that is something that’s really hard for me but incredibly important, and when I choose a future partner I want someone who recognizes that it is both a weakness and something of great importance to me. And I think it would be so incredibly sweet and loving if a partner who finds out about my struggles doesn’t shy away from them, but rather goes inside with curiosity and compassion, the same way I would hope I do.
If I’m being completely honest, I hope that I find this person soon, because I really want to meet them and I would love to be able to start spending time with them. And I know that an important thing is controlling my hunger for it, because that is what blinds me into taking fries instead of pasta. But I think it would be an incredibly beautiful dish of pasta, and I would be lying if I said I didn’t look forward to it.
from POTUSRoaster
Hello again. I hope your Monday went well.
While you were working to earn enough money so you and your family can live a better life, POTUS said, “I don’t think about Americans’ financial situation.” He really doesn't care if life is hard for you. He has ensured that his own family, himself included, has garnered millions by ignoring the restrictions in our Constitution.
He sells bibles and other merch that enrich him and his family while he acts as POTUS. There is no stopping his greed, and he will use any power he has to enrich himself and his family while you are barely covering the needs of you and yours. He doesn't care.
POTUS Roaster
Thank you for reading this and my other posts. If you enjoy them, please tell your friends and family. To read the other posts written just for you, please go to write.as/potusroaster/archive.
from
SmarterArticles

The first thing the sensor sees is the ceiling. It is an unremarkable ceiling, white acoustic tile, fluorescent strip, a slight nicotine tinge from a generation of residents who were once allowed to smoke indoors. The sensor is not a camera in the conventional sense. It does not record video; the procurement document made a point of that. It is a low-resolution thermal array, mounted in a discreet white housing about the size of a smoke alarm, and it watches the room beneath it as a heat map. When a heat-map blob detaches from the bed and crosses the floor, it logs movement. When the blob lies horizontal in a place a human body should not be horizontal, it pings a tablet at the nurses' station. The vendor calls this fall detection. The procurement notice called it dignity-preserving monitoring. The night shift on a typical residential aged care floor in Australia or England in early 2026, which is often one registered nurse and two personal care workers covering upwards of forty residents, calls it the thing that goes off.
What the thing goes off about, on the kinds of nights the Australian Royal Commission into Aged Care Quality and Safety documented across ninety-nine sitting days of evidence and that the Care Quality Commission in England continues to describe in its state-of-care reports, is the sort of incident that happens when an older resident with dementia transfers from bed, returns toward it, and falls. The sensor logs the transfer; it logs the horizontal heat signature on the floor; it pings the tablet. The personal care worker on duty may be two corridors away changing another resident. By the time anybody arrives, the resident has been on the carpet long enough for a hip to break. The sensor has done exactly what the brochure said it would do. Nobody has been close enough for the information to matter. That pattern, not any one incident, is what the evidence that regulators have taken in sworn testimony describes.
It is the gap between those two facts, the thing that the technology measured and the thing that the system did with the measurement, that a paper published in The Conversation on 24 February 2026 by Barbara Barbosa Neves of the University of Sydney, Alexandra Sanders, and Geoffrey Mead set out to dramatise. Their argument, distilled from an analysis of the marketing materials of thirty-three companies selling AI tools into aged care across Australia, East Asia, Europe and North America, is that the industry has succeeded in convincing governments and investors that algorithmic monitoring, automated care planning and companion robotics are the answer to a workforce crisis when they are, in fact, a way of avoiding the question. The crisis is structural. The tools, however clever, cannot be structural answers. “If we let AI companies define what is broken,” the authors write, “we also let them define what repair looks like. That may leave our systems more profitable, but far less caring and humane.”
The numbers behind the pitch are now large enough that the rest of the policy debate arranges itself around them. Fortune Business Insights estimated the global elderly care market at 53.29 billion US dollars in 2025 and projected it to reach 57.78 billion in 2026, on its way to roughly 114 billion by 2034. The agetech subsegment, the layer of digital and AI products sold into that market, is projected by industry analysts cited in the Neves paper to reach A$170 billion by 2030. By any reading, the next decade of aged care will be one of the most heavily capitalised periods in the sector's history, and a substantial fraction of that capital is going into systems that are designed to do things humans currently do.
The question this article is concerned with is not whether the technology works. Some of it does, in narrow ways, under controlled conditions. The question is what accountability structures would have to exist before deploying it at scale, into a population that cannot easily refuse it and cannot reliably tell anyone when it has failed, could be considered ethical. The honest answer, in April 2026, is that very few of those structures exist anywhere, and most of what passes for them is designed to manage the reputational risk of providers and vendors rather than the safety of residents.
The Royal Commission into Aged Care Quality and Safety, which delivered its 2,500-page final report, Care, Dignity and Respect, on 1 March 2021, did not lack for diagnoses. Across twenty-three public hearings, ninety-nine sitting days, 641 witnesses and more than ten thousand public submissions, commissioners Lynelle Briggs and Tony Pagone arrived at 148 recommendations. The findings were as plain as they were grim. Commissioner Briggs put the proportion of residents who had experienced physical or sexual assault at between thirteen and eighteen per cent. The report described two decades of underfunding amounting to approximately 9.8 billion Australian dollars cut from the sector's annual budget. It documented residents left in soiled continence aids, malnourished, restrained chemically and physically, and dying in conditions the Commission did not euphemise.
What the Commission did not say, in any of those pages, is that the answer to those failings lay in machine learning. The recommendations focused on staff ratios, on the qualifications and pay of personal care workers, on a new statutory framework for the rights of older people, on enforceable care standards, and on an independent regulator with real teeth. The Aged Care Act 2024, which came into effect on 1 November 2025 after a delay from its originally legislated 1 July date, codified some of that framework. From October 2024, providers had been required to deliver a national average of 215 minutes of personal and nursing care per resident per day, of which 44 minutes was to come from a registered nurse. From 1 October 2025, the Star Ratings used to grade residential providers were re-engineered to require those minutes for a three-star or better staffing rating. None of those reforms involved an algorithm.
The same pattern recurs in every comparable jurisdiction. The Care Quality Commission in England, which by the summer of 2024 was being publicly described by the Secretary of State for Health and Social Care, Wes Streeting, as a failing organisation, commissioned the Dash Review of its operational effectiveness; the full report, published in October 2024, found that the time taken by the regulator to re-inspect a service rated “requires improvement” had risen from 142 days in 2015 to 360 days in 2024. The CQC's chief executive, Ian Trenholm, resigned that July. Skills for Care reported that as of March 2025 there were 111,000 vacant posts in adult social care in England, a vacancy rate of 6.4 per cent against a labour-market average of 2.2, with care worker vacancies running at 8.3 per cent and homecare vacancies above 10 per cent. Annual turnover sat at thirty per cent. In May 2025 the UK government closed the international recruitment route for new care workers, cutting off a pipeline that had been delivering an average of twelve thousand recruits a quarter into the independent sector. None of those problems have algorithmic solutions.
In the United States, the federal minimum staffing standard for long-term care facilities published by the Centers for Medicare and Medicaid Services in May 2024, requiring 3.48 hours of nursing care per resident per day and twenty-four-hour onsite registered nurse coverage, was repealed in December 2025. Section 71111 of Public Law 119-21 then prohibited CMS from implementing or enforcing the rule until at least 30 September 2034. Public Citizen and the Center for Medicare Advocacy estimated that the original rule, had it survived, would have prevented approximately thirteen thousand deaths a year. In Canada, the May 2020 Canadian Armed Forces report on five Ontario long-term care homes, which described cockroaches, rotting food, ulcerated bed-bound residents and staff cycling between units in contaminated personal protective equipment, prompted no national workforce reform of any depth; provincial inquiries in Ontario and Quebec produced more recommendations than implementations. The same picture, with local variations, holds in the Nordic countries, in France and in much of east Asia.
What the inquiries documented, in other words, was not a sector that had failed to adopt the latest technology. It was a sector that had failed to be funded, staffed, regulated and respected. The premise of the agetech pitch, that AI can plug the gap, is in this light a category error. There is no reasonable reading of Care, Dignity and Respect in which the missing ingredient is more sensors.
Walk the floor of any of the recent agetech expos, the SilverEco Forum in Cannes, the Aged Care 2026 conference in Melbourne, the Health 2.0 trade fair in Tokyo, and the categories repeat. There are passive monitoring systems, of which the thermal sensor in the opening scene is one example. There are wearable fall detectors that combine accelerometers and machine-learned gait classifiers, sold by firms like Vayyar, Kepler Vision and a long tail of European start-ups. There are continuous bed and chair sensors, marketed under names like SafelyYou and Tellus You Care. There are automated care-planning platforms that ingest electronic health records and generate suggested daily routines, hydration prompts and bowel charts. There are medication management dispensers. There are predictive analytics layers that promise to flag clinical deterioration days before it shows up in vital signs. There are companion robots: PARO, the harp-seal-shaped therapeutic robot developed by Takanori Shibata at Japan's National Institute of Advanced Industrial Science and Technology, in clinical use since the mid-2000s; ElliQ, the desktop social companion built by Intuition Robotics in Israel; SoftBank's humanoid Pepper, repurposed from retail receptionist into care-floor entertainer; and the various lower-cost robotic-cat and robotic-dog products that proliferate at the budget end.
The evidence base for these products is uneven and almost always thinner than the marketing implies. A 2021 systematic review and meta-analysis published in Innovation in Aging by Hung and colleagues, “Effectiveness of Companion Robot Care for Dementia”, found that PARO produced statistically meaningful but small improvements in agitation, depression and medication use across pooled trials, with the authors noting that most studies were small, short and unblinded. A 2023 systematic review in the International Journal of Nursing Studies reached a similar conclusion: a possible benefit, evidence quality low to moderate, no demonstrated long-term effect. A pilot randomised controlled trial of a different companion robot for community-dwelling people with dementia, published in 2017 by Moyle and colleagues, recorded engagement with the device but did not show robust effects on the primary outcomes.
ElliQ has produced more uplifting headline numbers, largely from one programme. The New York State Office for the Aging began deploying ElliQ in 2022; by May 2025 the agency reported 834 active clients, with 94 per cent saying they felt less lonely, 97 per cent feeling better overall, average usage of forty-one interactions per day, and a customer satisfaction score of 4.6 out of 5. Those are the figures Intuition Robotics quotes in its marketing decks. The peer-reviewed literature is more cautious. A 2024 review by Broxvall and colleagues, “ElliQ, an AI-Driven Social Robot to Alleviate Loneliness: Progress and Lessons Learned”, described the deployment as “promising” and explicitly called for randomised controlled trials before efficacy claims could be considered established. The NYSOFA outcomes, reassuring as they are, were collected from a self-selected user base that consented, that engaged voluntarily, and that retained the cognitive bandwidth to fill in a satisfaction survey. They tell us very little about what would happen if the same device were deployed by default to a less able population.
Fall detection is the category in which the gap between vendor claim and operational reality is widest. A 2025 scoping review in Applied Sciences, “AI-Driven Inpatient Fall Prevention Using Continuous Monitoring”, examined the evidence on continuous monitoring systems in hospital and long-term care settings and reached a conclusion that vendors do not put on their websites: while sensitivity for detecting falls can exceed ninety per cent, false-positive rates of thirty to forty per cent are common, and across the evidence base detection systems “did not consistently reduce fall incidence or the occurrence of injurious falls”. The same paper, like a closely related 2025 review in Medicina, found that reporting of “implementation-critical metrics” such as alert burden, response times and downstream actions was patchy. Studies of clinical alarm fatigue across acute care have repeatedly found that as many as eighty to ninety per cent of audible alarms in monitored wards are non-actionable. There is no plausible mechanism by which adding more alarms to an understaffed care floor improves outcomes, and reasonable mechanisms by which it makes them worse.
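The arithmetic behind alert fatigue is easy to sketch. The calculation below is illustrative only: the floor size, fall rate, event rate and the reading of the published false-positive figure as a per-event probability are all assumptions chosen for the example, not numbers drawn from the reviews cited above.

```python
def daily_alert_burden(residents, falls_per_resident_per_day,
                       sensitivity, false_positive_rate,
                       triggers_per_resident_per_day):
    """Estimate true alarms, false alarms and alarm precision per day.

    `triggers_per_resident_per_day` is the assumed number of ambiguous
    sensor events (transfers, repositioning) that could each fire a
    false alarm. Note that published "false-positive rates" are
    sometimes defined per event and sometimes as the share of all
    alarms that are false; this sketch uses the per-event reading.
    """
    true_falls = residents * falls_per_resident_per_day
    true_alarms = true_falls * sensitivity
    false_alarms = (residents * triggers_per_resident_per_day
                    * false_positive_rate)
    total = true_alarms + false_alarms
    precision = true_alarms / total if total else 0.0
    return true_alarms, false_alarms, precision

# A hypothetical 60-resident floor, roughly one fall per resident per
# 200 days, a sensor with 90% sensitivity and a 35% per-event
# false-positive rate across ~3 ambiguous movement events per
# resident per day:
t, f, p = daily_alert_burden(60, 1 / 200, 0.90, 0.35, 3)
print(f"true alarms/day: {t:.2f}, false alarms/day: {f:.0f}, "
      f"precision: {p:.1%}")
```

Under these assumed inputs the floor generates dozens of false alarms for every fraction of a true one, which is the mechanism by which staff learn, rationally, to stop treating the alarm as information.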
Predictive analytics for clinical deterioration carry a related set of problems. Algorithms trained on the electronic health records of one population have been shown repeatedly, including in a much-cited 2023 JAMA Internal Medicine analysis of the Epic sepsis prediction model, to perform worse than advertised when deployed in different populations. Aged care residents are an unusually heterogeneous group, often with multimorbidity, polypharmacy and cognitive complications that distort the signals the model was trained to detect. The risk is not that the model produces nothing useful; it is that it produces enough useful output to displace clinical judgement while the genuinely unusual cases, the ones a human carer would recognise on sight, slip past unflagged.
Across all of these tools, the same population variable does most of the moral work. The people on whom the sensors and dispensers and screens are aimed are, by definition of the sector they are in, frail. A substantial proportion are cognitively impaired; the Australian Institute of Health and Welfare estimated in its 2024 dementia report that more than half of permanent residents in Australian aged care had a diagnosis of dementia. Many are socially isolated; the loneliness data that companion robots cite as a justification is real. Many have limited or no digital fluency; older Australians in residential care are dramatically under-represented in surveys of internet use, smartphone ownership and the everyday literacy that allows a person to interrogate, refuse or modify a digital tool. And almost all of them sit in a profound power asymmetry with the people on whom they depend for daily care.
The implications for consent are not theoretical. The standard model of informed consent in healthcare assumes a person capable of understanding the nature of the proposed intervention, weighing it against alternatives, and communicating a decision. A 2025 review in Frontiers in Digital Health, “Designing for Dignity: Ethics of AI Surveillance in Older Adult Care”, catalogued how badly that model breaks down in practice when the intervention is a continuous, ambient monitoring system and the person being monitored has fluctuating capacity. Many older adults in care settings, the authors noted, have “no knowledge about what data is being harvested” and lack the cognitive or technical capability to adjust settings. Consent is typically obtained at admission, signed by a family member acting as substitute decision-maker, and never revisited. The system that the resident did not knowingly agree to becomes the system they live inside.
The asymmetry is sharper still where AI is making, or shaping, allocative decisions. Australia's new Support at Home programme, introduced in November 2025 to replace earlier home-care packages, uses a rules-based algorithm called the Integrated Assessment Tool to convert assessor responses into funding entitlements. As reported by The Conversation in March 2026 in a follow-up piece by Sebastian Cordoba and colleagues titled “First Robodebt, now NDIS and aged care: how computers still decide who gets care”, neither assessors nor participants can clearly see how the algorithm converts answers into funding levels. Departmental officials told a Senate inquiry that there is “no discretionary element” in the process; an override function present during testing was removed before the system went live. Evidence presented to the inquiry suggested the tool was systematically underestimating need, with reports of older Australians, including those with serious or degenerative conditions, having their support reduced. The Robodebt scandal, in which an automated debt-recovery system run by Services Australia issued more than 470,000 unlawful debt notices between 2016 and 2019 and was the subject of a 2023 Royal Commission, is the cautionary tale every Australian policy commentator now invokes. The aged care sector's algorithmic infrastructure is being built by a state apparatus that demonstrably has not learned its lesson.
The classic argument for surveillance and substitution technologies in care is that the people receiving them benefit, and that any inconvenience to autonomy is outweighed by safety. The problem with this argument is that it cannot be tested by the people on whom it is being made. A resident with moderate dementia cannot reliably explain to an inspector why the sensor in the corner of her room makes her feel watched, or whether she would prefer a human attendant to a tablet that pings someone who arrives nine minutes later. A non-verbal resident with advanced cognitive impairment cannot tell a researcher whether the companion robot is comforting her or merely keeping her quiet. The marketing literature sometimes claims that residents prefer the robots; the more careful research, including work by Neves and her collaborators in the Journal of Applied Gerontology in 2023, “Artificial Intelligence in Long-Term Care: Technological Promise, Aging Anxieties, and Sociotechnical Ageism”, finds that older adults' attitudes towards AI in their own care are considerably more ambivalent than the agetech sector implies, that they are acutely aware of being positioned as objects of management rather than subjects of care, and that they often experience monitoring as a loss of dignity rather than a gain in safety.
The business case for AI in aged care, in board meetings rather than press releases, is largely about labour. A monitoring system that allows a single night-shift carer to cover sixty residents instead of forty is, on paper, a workforce multiplier. A medication dispenser that prompts a resident through a regimen reduces the registered nursing time required for medication rounds. An automated care plan reduces the documentation burden on personal care workers. A companion robot, if it can hold attention, reduces the demand on staff for the relational work that has historically been the floor of dignified care. Each of these is a legitimate engineering goal in a sector where workforce shortage is real, severe and not going away. None of them is the same thing as improved outcomes for residents.
The distinction matters, because the risk of miscalibration falls asymmetrically. If a fall sensor's false-positive rate produces alarm fatigue and a real fall is missed, the cost is borne by the resident on the floor, not the procurement team that signed the contract. If a predictive deterioration model misses an unusual sepsis presentation in a resident with atypical baseline observations, the resident dies. If an automated care plan recommends a hydration schedule calibrated to a baseline weight two years out of date, the resident whose actual weight has dropped sharply goes thirsty. If a companion robot becomes the dominant social contact for a resident whose family visits have tapered, the human relationships that aged-care research consistently identifies as protective against decline are the ones that quietly disappear.
This asymmetry is what makes the cost-reduction framing dangerous. In a properly functioning market, the people who bear the risk of a product underperforming push back. In aged care, the people who bear the risk are very often unable to. The carers who notice that the system is not working, who see a resident on the floor long after the sensor said so, are positioned several layers below the procurement decisions that put the system there. They have, as the Neves paper notes, taken on additional cognitive labour interpreting the data the system generates, but they have lost discretion over whether the system should be used at all. The families who pay the bills are typically not on the floor when the system fails. The regulators who would, in theory, audit whether the technology was performing as advertised lack the technical capability and, increasingly, the inspection cadence.
A 2024 paper in Humanities and Social Sciences Communications titled “Paternalistic AI: the case of aged care” framed the underlying ethical structure crisply. AI systems in care, the authors argued, function as a particularly powerful form of soft paternalism. They purport to act in the interests of the person being cared for, but they remove from that person the practical opportunity to refuse, modify or contest the intervention. In the context of cognitive impairment, where soft paternalism shades into hard paternalism almost imperceptibly, the absence of accountability structures around the technology means that the ethical work that would normally be done by consent simply does not happen.
If AI is going to be deployed at scale in aged care, the question is what would have to be in place before such deployment could be considered ethical. The honest answer is a layered structure, none of whose layers currently exist in anything like a complete form in any major jurisdiction.
The first layer is consent that is genuine, ongoing and revocable. Admission paperwork signed by a substitute decision-maker is not consent to a continuous monitoring regime. A robust framework would require that residents, where they have any capacity, are walked through what each technology in their environment does, in plain language, with the right to refuse specific elements without losing access to other care. Where capacity is absent, substitute decision-makers should be required to revisit consent on a defined cadence, and to weigh the technology's use against alternatives that include increased staffing. The recommendation, drawn from the 2025 Frontiers in Digital Health paper, of “easy-to-visualize dashboards and plain-language explanations” should be a procurement requirement, not a research aspiration.
The second layer is independent auditing, with statutory backing, of the actual performance of deployed systems in their actual settings. Vendor-supplied performance figures are, as the scoping reviews on fall detection make clear, systematically optimistic. An accountability regime worth the name would require providers to log false positives, false negatives, response times and downstream actions in a standardised format, and would require regulators, not vendors, to publish the resulting performance data. Australia's Aged Care Quality and Safety Commission, the CQC in England, CMS in the United States and their equivalents would need substantial additional resourcing and technical capability to conduct such audits credibly. None has it now.
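What a standardised log entry might look like can be sketched in a few lines. To be clear, no such schema currently exists and every field name below is a hypothetical illustration of the kind of record the scoping reviews found missing: outcome classification, response time and downstream action, in a machine-readable form a regulator could aggregate.

```python
# Hypothetical sketch of a standardised alert-performance record.
# All field names and values are invented for illustration.
from dataclasses import dataclass, asdict
from enum import Enum
from typing import Optional
import json

class Outcome(str, Enum):
    TRUE_POSITIVE = "true_positive"    # alarm fired, fall confirmed
    FALSE_POSITIVE = "false_positive"  # alarm fired, no fall on review
    FALSE_NEGATIVE = "false_negative"  # fall discovered with no alarm

@dataclass
class AlertRecord:
    device_id: str
    event_time_utc: str               # ISO 8601 timestamp of the event
    outcome: Outcome
    response_seconds: Optional[int]   # alarm to staff at bedside; None if no response
    downstream_action: str            # free text, e.g. "GP called"

record = AlertRecord(
    device_id="ward3-sensor-12",
    event_time_utc="2026-04-02T03:14:00Z",
    outcome=Outcome.FALSE_POSITIVE,
    response_seconds=540,
    downstream_action="no injury; resident asleep",
)
print(json.dumps(asdict(record)))
```

The point of the sketch is not the particular fields but the reporting direction: records like this would flow from provider to regulator on a statutory footing, and the aggregate false-positive, false-negative and response-time figures would be published by the regulator, not the vendor.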
The third layer is algorithmic transparency. Where an AI tool affects the allocation of care, including hours of staffing, level of monitoring, eligibility for funding or assignment to a particular care pathway, residents and their advocates should have a legal right to an explanation of how the system reached its conclusion, expressed in terms an ordinary person can understand. Article 22 of the General Data Protection Regulation in the European Union already prohibits decisions based solely on automated processing that produce significant legal or comparable effects. That principle needs to be operationalised specifically for aged care, with explicit recognition that algorithmic recommendations that substantially shape human decisions count, and that the convenient fiction of “human in the loop” cannot be used to launder automation.
The fourth layer is incident reporting. When an AI tool contributes to harm, whether by missing a fall, misallocating medication, displacing human contact or generating an unsafe care recommendation, the incident should be reportable, on the same statutory footing as a medication error, to the relevant regulator, with public aggregate reporting. The current regime, in which AI-related incidents are typically classified as either workflow events or clinical errors and never as software failures, makes systemic learning impossible.
The fifth layer is a hard ban on substitution where it matters most. The question of whether a companion robot should ever be the primary social contact for a person with dementia is not a question for procurement officers. The position taken by Sherry Turkle of MIT in her 2011 book Alone Together, and elaborated in subsequent work, is that the deployment of robots as substitutes for, rather than supplements to, human relational care is an abdication. That position should be encoded in regulation. Companion robots may have a role; they may not have a role that displaces the requirement for staffed human contact. Procurement should require evidence that a tool augments rather than replaces the relational work, and operational data should be auditable to confirm that what was contracted as augmentation has not, over time, drifted into substitution.
The sixth layer is procurement conditionality, and it is the lever that actually moves the others. Public funders of aged care, which in most jurisdictions means the state, have far more bargaining power than they currently use. Every procurement contract for an AI system in publicly funded aged care should carry conditions on consent processes, audit access, transparency, incident reporting, anti-substitution and a ceiling on the proportion of care time that may be displaced by the system. Vendors that decline to meet those conditions should not be funded. The market will adjust quickly when it has to.
The seventh layer is the one that the agetech sector finds least convenient to discuss. None of the above is a substitute for adequate staffing. Every accountability regime for AI in aged care has to be built on top of, not in place of, the staffing standards, pay levels and workforce protections that the Royal Commission, the Dash Review, the CMS rule and the Canadian Armed Forces report were calling for. AI deployed into an under-staffed environment cannot be made ethical by audits alone. The ethical baseline is a staffed floor.
It is tempting, when writing about technology and vulnerability, to land on a hopeful note. The honest reading of the evidence in April 2026 does not really support one. The Aged Care Act 2024 in Australia is in early implementation; the staffing minutes are being met on national average but missed in many individual facilities. The CQC in England is mid-restructure following the Dash operational review. The federal staffing rule in the United States has been repealed and is statutorily prohibited from re-implementation until at least 2034. The Canadian provinces have made limited structural progress since 2020. The agetech market continues to grow. The companies whose pitches Neves, Sanders and Mead analysed are not slowing down their fundraising rounds because the academic literature is cautious about their effect sizes.
What the Conversation article points at, and what the evidence on every category of agetech tool quietly confirms, is that the question of whether AI in aged care is ethical cannot be answered at the level of the individual product. PARO has uses. ElliQ helps some lonely people in Buffalo and Albany. A well-calibrated fall sensor, in a building with enough carers to respond inside three minutes, may well be a net good. None of those local truths bears on the systemic question, which is whether the deployment, in aggregate, is being driven by considerations that the people on whom it is deployed would endorse if they could.
The resident whose hip breaks while the sensor pings an empty corridor does not appear in any vendor case study. Her mobility does not fully return; ninety-year-old mobility rarely does. The room in which she fell still has a sensor on the ceiling, and the sensor still pings when it sees a heat-map blob in the wrong place. The night shift on her floor is still, in April 2026, one registered nurse and two personal care workers covering upwards of forty residents, the kind of configuration that the inspectorate reports from three continents have documented as standard. The vendor's quarterly filings continue to note strong growth in the Asia-Pacific market and new partnerships with major residential care operators. None of those facts, on their own, is scandalous. Together they describe the architecture of a sector that has decided, without ever quite deciding, that the cheaper option is also the wiser one.
The accountability structures that would make AI in aged care ethical are not technically difficult. They are politically expensive. They require regulators to be staffed and funded to a level that no government has yet been willing to fund them to. They require public procurement to drive standards in a market where vendors have grown accustomed to selling unvalidated tools into desperate buyers. They require a public conversation about the proper role of human contact in care that the sector and the technology industry have, between them, been content to defer.
Until those structures exist, the most defensible position is the one Neves and her colleagues argue for: that AI in aged care, deployed primarily to manage the consequences of under-investment in human care, is not a solution to the crisis the Royal Commission documented. It is a way of making the crisis less visible. The sensor sees the ceiling. The ceiling is white. The blob on the floor is logged at a particular minute, and again two minutes later. Somewhere down the corridor, somebody is doing the work that the technology was sold as a substitute for, and somebody else is doing without the work because there was nobody to do it for them. The accounting we owe the residents is the one we have, so far, declined to do.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
Listen to the free weekly SmarterArticles Podcast
from
Roscoe's Story
In Summary: Today has been a low-energy day in the Roscoe-verse, thanks to late-season allergies. Listening now to WFAN's pregame show wind down ahead of tonight's game: New York Yankees vs Baltimore Orioles. I'll stay with WFAN for the radio call of the game. After the game I'll wrap up the night prayers and head to bed early.
Prayers, etc.: I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics:
* bw = 239.86 lbs
* bp = 146/86 (66)
Exercise: morning stretches, balance exercises, Kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet:
* 05:30 – 1 banana, 1 small cookie
* 06:45 – sweet rice
* 10:00 – chicken casserole
* 11:00 – 2 small cookies
* 15:00 – garden salad
* 16:45 – 1 fresh apple
Activities, Chores, etc.:
* 04:30 – listening to local news talk radio
* 05:10 – bank accounts activity monitored
* 05:50 – read, write, pray, follow news reports from various sources, surf the socials, nap
* 15:00 – listening to the Jack Show
* 17:00 – listening to WFAN New York Sports Radio broadcasting the pregame show ahead of tonight's Yankees vs Orioles MLB game. I'll stay with WFAN for the call of the game.
Chess: 08:55 – moved in all pending CC games