Want to join in? Respond to our weekly writing prompts, open to everyone.
from 下川友
Even though I went just the day before yesterday, I ended up at Saint Marc Café again. Unusually for me, I wore a plain white T-shirt. My slouch has improved compared with the old days, so I have a feeling white tees suit me better than they used to.
Saint Marc has good bulb color. While I'm at it, I like the bulb color at Denny's too. They use a gentle orange light, and white clothes seem to soak it up just the right amount. Visually, it puts me in a very good mood. I sometimes think about switching my home lighting to that warm color, but I also suspect the effect is relative in the end, so at home I use daylight white.
I bought a new office chair, then an ottoman; I've been investing in anything that lightens the load on my lower body while I sit.
I keep a household ledger every month, and the two of us spend about 400,000 yen. We are clearly loose with money, or rather, we eat good food and go to coffee shops regularly, so part of me thinks that's about what it costs, while another part thinks our living expenses are simply too high. We are definitely spending in a slightly strange way and need to do something about it. But if we changed the fundamental design of how we live, I suspect the mental stability we are barely holding together would collapse, so I can't reach for it carelessly.
Dinner was khao man gai and deep-fried zucchini. It was genuinely delicious. I finished it in no time and, riding that momentum, pulled ice cream from the freezer.
Thinking it might be a little embarrassing how much I love food, I ran the bath and worked the kinks out of my body.
from laxmena
Every platform that optimizes for engagement will be gamed. That's not a cynical take – it's an incentive problem. When the metric is clicks, shares, and reactions, the system rewards content that triggers emotion, not content that builds understanding. In AI right now, that means 90% of what you see is noise dressed up as signal.
Here's how I opt out.
Before I share my sources, the principle matters more: any system that rewards engagement will produce noise. Twitter, LinkedIn, YouTube – they all optimize for time-on-platform. That means sensational > accurate, simple > nuanced, hot take > careful analysis.
Once you internalize this, you stop asking “what's trending?” and start asking “what's the incentive structure of this platform?”
HuggingFace Daily Papers – my current first feed
I recently switched to this as my first stop, and I haven't looked back. It surfaces papers the ML community is actually reading – not papers that generate the most outrage. No algorithm optimizing for your dopamine. No ads. No influencers. Just papers, ranked by upvotes from people who read them.
It's not designed to maximize interactions. That's the whole point.
Hacker News – where I started, and still use
HN was my first feed for a long time, and I still check it daily. It's self-correcting in a way few platforms are – the community is technical, skeptical, and fast to call out hype. If something AI-related survives the front page and the comments, it's usually worth your time.
The comment threads on AI papers and tools are often more valuable than the articles themselves.
X / Twitter – my guilty pleasure, and I'll be honest about it
I'm on it. Some threads from researchers are genuinely excellent – the kind of paper breakdowns that would take you hours to extract yourself. But it's rare, and the signal-to-noise ratio is brutal.
My honest recommendation: avoid building Twitter into your learning stack. Use it for serendipity, not as a system. If you find yourself doom-scrolling AI threads at 11pm, that's the platform working exactly as designed – and not in your interest.
Papers are where I spend the most deliberate time, and where most people get stuck.
The mistake is trying to read everything. You can't. The field is moving too fast and the volume is too high. Instead, I use a specific entry strategy:
Find a recent review paper – something published in the last two years on the topic you care about. Review papers synthesize the field. They're the map before you explore the territory.
Follow the citations forward and backward – what did this paper cite? Who cited this paper after it was published? These two directions give you the lineage of ideas.
Read 10–15 papers in the space – you won't be deep yet, but you'll have enough context to know which questions are already answered and which are still open. You'll start to recognize names, labs, and recurring ideas.
Then go deep on what actually interests you – not what seems important, not what's popular. What genuinely pulls your curiosity. That's where you'll do your best thinking.
This process takes weeks, not days. That's fine. Depth compounds. Breadth usually doesn't.
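The forward-and-backward citation walk in step 2 can also be done programmatically. Here is a minimal sketch, assuming the public Semantic Scholar Graph API (its documented `/references` and `/citations` endpoints); the paper ID and field list are only illustrative:

```python
# Sketch: walk a paper's citation lineage in both directions.
# Assumes the public Semantic Scholar Graph API; the example paper ID
# ("ARXIV:1706.03762") and the fields requested are illustrative.
import json
import urllib.request

API = "https://api.semanticscholar.org/graph/v1/paper"

def lineage_urls(paper_id: str, fields: str = "title,year", limit: int = 20) -> dict:
    """Build the URLs for a paper's backward (references) and forward
    (citations) lineage -- the two directions described above."""
    query = f"?fields={fields}&limit={limit}"
    return {
        "backward": f"{API}/{paper_id}/references{query}",  # what it cited
        "forward": f"{API}/{paper_id}/citations{query}",    # who cited it later
    }

def fetch_page(url: str) -> list:
    """Fetch one page of results; each item nests the linked paper under
    'citedPaper' (references) or 'citingPaper' (citations)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp).get("data", [])

if __name__ == "__main__":
    urls = lineage_urls("ARXIV:1706.03762")
    print(urls["backward"])
    print(urls["forward"])
    # Uncomment to actually pull a page (requires network access):
    # for item in fetch_page(urls["forward"]):
    #     print(item)
```

Skimming just the titles and years from both directions is usually enough to sketch the lineage of ideas before committing to the 10 to 15 full reads.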
One more filter is underused: asking whether a problem is actually new.
When you encounter a problem in AI that sounds new, ask yourself: has this problem existed in a different form before? Often the answer is yes. Optimization instability, data distribution shift, latency under load – these aren't new. Decades of research exist on them.
Seeking new solutions to old problems is expensive and usually unnecessary. The literature already has answers. Find them first.
Conversely, for genuinely new problems – things that only exist because of large-scale language models or diffusion architectures – the old solutions often don't apply. Here you want the most recent work, not the canonical textbooks.
The filter: is this problem fundamentally new, or does it have an older analog? Answer that first, then choose your research direction.
Most people optimize for feeling informed. They want the daily hit of “I know what's happening in AI.” That feeling is easy to manufacture and almost entirely useless.
Being informed is slower, quieter, and less satisfying in the short term. It means skipping the hot takes and reading the paper. It means sitting with confusion for a few days before the concept clicks. It means building a system that's boring by design.
The people I learn the most from have boring information diets. They're not on every platform. They've read fewer things more carefully. They can point to specific papers that changed how they think.
That's the goal.
I write more technical articles on my newsletter, INTERNALS.md. You can subscribe there to follow along.
What does your filter stack look like? I'm genuinely curious what senior engineers use to stay calibrated – drop it in the comments.
from SmarterArticles

Your mother has been dead for fourteen months. You know this. You were at the funeral, you sorted through her wardrobe, you cancelled her phone contract. And yet here she is, texting you good morning. She asks about your day. She tells you she is proud of you. She even uses the slightly excessive number of exclamation marks that drove you mad when she was alive.
This is not a ghost story. This is a product.
In early 2026, a cluster of investigations by The Atlantic, Christianity Today, and several other major publications converged on the same unsettling phenomenon: a booming industry of AI-generated “deadbots,” services that harvest the digital traces of the deceased, their text messages, voice recordings, social media posts, and email archives, and use them to build chatbots that simulate ongoing conversations with the dead. At roughly the same time, Meta was granted a patent for technology that would keep social media accounts active after the user dies, generating posts, comments, likes, and even direct messages powered by large language models trained on the deceased person's historical activity. The digital afterlife, it turns out, is no longer speculative fiction. It is a subscription service.
The questions this raises are not simply technical. They cut to the marrow of what it means to be human, to lose someone, and to move through the world knowing that loss is permanent. If death has always been one of the defining boundaries of human experience, the thing that lends urgency and meaning to every conversation, every embrace, every unresolved argument, then what happens when we make that boundary negotiable? And perhaps more pressingly: who gave permission for the dead to keep speaking?
The digital afterlife industry, as researchers at the University of Cambridge have termed it, has grown from a handful of experimental projects into a global market. In 2024, the digital legacy market was valued at approximately $22.46 billion, according to Zion Market Research, with projections suggesting it could more than triple by 2034. More than half a dozen platforms now offer deadbot services straight out of the box, and developers claim that millions of people are using them. The terminology alone tells you how fast the field is evolving: deadbots, griefbots, thanabots, ghostbots, postmortem avatars. Each name carries its own shade of unease.
The mechanics vary considerably. Some platforms, such as HereAfter AI, focus on preservation rather than simulation. They allow people to record “Life Story Avatars” before they die, guided audio sessions that capture memories, advice, and personal history. The AI then indexes this content and organises it into a searchable archive, something closer to an interactive memoir than a conversation partner. The person recording decides what gets preserved and what stays private. There is an element of authorial control here, a curation of legacy that feels more like writing a will than summoning a spirit.
Others take a more ambitious and more ethically fraught approach. Eternos, which launched in 2024, has helped over 400 people create what the company calls “AI digital twins.” Users record 300 specific phrases and answer extensive questions about their lives, political views, personalities, and relationships. A two-day computing process then generates a voice model capable of responding in real time, not simply playing back recordings but generating new speech in the user's voice, trained on the patterns and cadences of how they actually talked. The result is not a recording. It is, or at least appears to be, a conversation.
Then there is You, Only Virtual, or YOV, a platform founded by Justin Harrison after his mother was diagnosed with advanced cancer in December 2019. Harrison had nearly died in a motorcycle accident two months earlier, and the convergence of those near-death experiences drove him to build a system for preserving the people we lose. YOV asks users to provide the raw material of a relationship: text messages, audio clips, video recordings, anything that captures not just who a person was in general, but who they were with you specifically. Two to three months later, their “Versona” arrives via a link. You can text it, call it, even video chat with it.
Other platforms occupy different niches. Project December, built on GPT-3, allows users to create a chatbot of anyone by providing text samples and personality descriptions. Seance AI asks users to input personality traits and writing styles of loved ones. The range of approaches reflects a market that is still figuring out what it is selling: memory, comfort, presence, or the illusion of all three.
The ambition is staggering. The execution, depending on whom you ask, is either a genuine comfort or a very expensive hallucination.
While start-ups have been building deadbots from the outside, Meta has been thinking about the problem from the inside. On 30 December 2025, the company was granted a US patent for an AI system designed to simulate a user's social media activity after they stop using the platform, whether temporarily or permanently, including after death. The patent, first filed in November 2023, lists Andrew Bosworth, Meta's chief technology officer, as the primary inventor.
The system described in the patent would train a large language model on a user's historical behaviour across Meta's platforms: Facebook, Instagram, Threads. It would learn from their posts, comments, likes, voice messages, chats, and reactions, and then replicate that behaviour autonomously. The AI-generated version of a deceased person could respond to content from friends and followers, publish updates, handle direct messages, and maintain what the patent describes as “community engagement.” It could even simulate video or audio calls.
The patent's rationale is revealing. It notes that account inactivity affects other users' experiences, and that this impact is “much more severe and permanent” when a user has died. The implication is worth sitting with: in Meta's framework, the problem with death is not the loss of a human life but the loss of engagement metrics. A dead user is a disengaged user, and disengagement is the one sin a social media platform cannot forgive.
A Meta spokesperson told Fortune that the company has “no plans to move forward with this example,” adding that patents are often filed to protect ideas that may never be developed. But the patent exists. The technology exists. And the incentive structure, keeping users engaged, generating data, maintaining network effects, certainly exists. The gap between “we have no plans” and “we have the capability” has never been a reliable firewall in Silicon Valley.
Not everyone who uses a deadbot is having a crisis. Some users describe the experience as genuinely helpful, even therapeutic. In one of the few completed academic studies on the subject, published in the Proceedings of the 2023 ACM Conference on Human Factors in Computing Systems, ten grieving individuals who used AI-powered chatbots to communicate with simulations of deceased loved ones reported that the bots helped them in ways that human relationships could not. Participants rated the bots more highly than even close friends for certain kinds of emotional support. One participant explained the appeal simply: “Society doesn't really like grief.” The bots never grew impatient. They never imposed a schedule. They never changed the subject. They never said “it's been six months, shouldn't you be feeling better by now?”
David Berreby, writing in Scientific American in November 2025, reported that chatbot users in the study seemed to become “more capable of conducting normal socialising” because they no longer worried about burdening other people or being judged. This contradicted the initial concern that griefbots would cause social withdrawal. Instead, the bots appeared to function as a kind of pressure valve, absorbing the intensity of grief that the users felt unable to express in human company.
A 2025 Nature article titled “Ready or not, the digital afterlife is here” documented similar findings. Some users turned to deadbots to manage unfinished business: to say goodbye, to address unresolved conflict, to have the conversations that illness or sudden death had made impossible. One participant described it as therapeutic, a way to explore “what if” scenarios that had been locked away by the finality of death. Another said the chatbot helped them “process and cope with feelings” in a way that felt safer than speaking to a therapist.
The 2024 Sundance documentary “Eternal You,” directed by Hans Block and Moritz Riesewieck, put faces to these experiences. The film follows several users of platforms including Project December, HereAfter AI, and YOV. Christi Angel, one of the film's subjects, uses Project December to communicate with a simulation of her first love, Cameroun. Stephenie Oney, from Detroit, uses HereAfter AI to talk to her dead parents. The film is careful to show that some of these experiences provide genuine closure. A woman who never got to raise a child finds, through the simulation, something that functions like resolution.
But the film also captures something darker. The comfort that deadbots provide can be seductive, and seduction is not the same as healing. The technology is exquisitely good at mimicking the surface of a relationship while leaving the substance entirely untouched.
The central concern among mental health professionals is not that deadbots are uniformly harmful. It is that they may interfere with a process that is already difficult, poorly understood, and culturally unsupported: the process of mourning.
Alan Wolfelt, a clinical psychologist and director of the Center for Loss and Life Transition in Fort Collins, Colorado, has spent decades helping people navigate bereavement. He has written over 50 books on grief and is widely recognised as one of North America's leading death educators. In a 2025 interview with Medscape, he drew a distinction that matters enormously in this context. Grief, Wolfelt explained, is what you think and feel inside after someone you love dies. Mourning is the outward expression of those thoughts and feelings, and it is mourning, not grief, that leads to healing. Acknowledging the reality of death, he said, is the “linchpin need” he has identified as universal across mourners. The use of deadbot technology, Wolfelt argued, represents “another invitation, instead of outwardly mourning and acknowledging the reality of the death, to stay stuck instead of experiencing perturbation, or the capacity to experience change and movement.”
This is not a fringe concern. The dominant model in contemporary bereavement psychology is the Dual Process Model, developed by Margaret Stroebe and Henk Schut and first published in Death Studies in 1999. It describes healthy grief as an oscillation between two orientations: loss-oriented coping, which involves confronting the pain of absence, and restoration-oriented coping, which involves engaging with the practical demands of a changed life. The key insight of the model is that both orientations are necessary. A person who only confronts their pain risks being consumed by it. A person who only avoids it risks never processing it. Healthy mourning requires moving between the two, a dynamic, irregular rhythm that looks nothing like a straight line from sadness to acceptance.
Deadbots, by their nature, collapse this oscillation. They offer a third option: the illusion that neither loss-oriented nor restoration-oriented coping is necessary, because the person has not really been lost. The relationship continues. The texts keep arriving. The voice is still there. As Sherry Turkle, the MIT sociologist who has spent years researching people who talk to AI versions of dead loved ones, put it: working through grief is not just an experience of being “sad.” It is “a process through which we metabolise what we have lost, allowing it to become a sustaining presence within us.” Griefbots, she warned, “give us the fantasy that we can maintain an external relationship with the deceased. But in holding on, we can't make them part of ourselves.”
The distinction Turkle draws is subtle but crucial. The goal of healthy mourning, in the framework she describes, is not to forget the dead but to internalise them, to carry them forward as part of who you are rather than as an external entity you can still call on the phone. Deadbots reverse this process. They externalise the dead, keeping them outside you, accessible but never truly integrated.
Turkle has long argued that people sometimes feel less vulnerable talking about intimate matters with a machine than with another person, and that enthusiasm for artificial intimacy reflects deeper disappointments with the human kind. The “artificial intimates” offered by deadbots lack the embodied experience of the arc of a human life that would give them what Turkle calls “empathic standing,” the ability to put themselves in the place of a human other. They offer pretend empathy, convincingly performed but fundamentally hollow.
Joshua Barbeau, a freelance writer from a Toronto suburb, became one of the most widely discussed early users of grief technology when he used Project December to create a chatbot modelled on his girlfriend, Jessica Pereira, who had died eight years earlier from a rare liver disorder. Barbeau fed the system passages from her social media and described her personality in detail. The resulting conversations gave him what he described as a sense of catharsis and closure he had not known he still needed. He compared the experience to a therapeutic exercise he had learnt in therapy: writing letters to loved ones after their death. But the experience also illustrated a tension that psychologists have since identified more formally: the chatbot helped, but it also made it harder to move on. The phenomenon has been described as “frozen grief,” a state in which the simulation prevents the normal progression from acute loss toward acceptance.
Researchers caution that it is still too early to be certain what risks and benefits digital ghosts pose. As the Nature article noted, “researchers simply don't know what effects this kind of AI can have on people with different personality types, grief experiences and cultures.” The few studies that exist are small, and the long-term effects remain entirely unknown. What is known is that grieving individuals may not be able to make fully autonomous decisions about these technologies. Emotions cloud judgement during vulnerable times, and grief may impair an individual's ability to think clearly about whether a deadbot is helping or hindering their recovery.
There is another question embedded in the deadbot phenomenon, one that receives less attention than the psychological risks but may ultimately prove more consequential: who speaks for the dead?
Most people do not leave behind specific instructions about whether their likeness, voice, or digital footprint can be used to create a posthumous simulation. In a US survey, 58 per cent of respondents said they would support digital resurrection only if the deceased had explicitly consented. Acceptance plummeted to 3 per cent when consent was absent. Yet most digital resurrections proceed without explicit permission from the person being simulated, because that person was, self-evidently, not anticipating the technology.
The legal landscape is threadbare. In the United States, no federal framework governs AI-powered simulations of the deceased. Some states are debating digital asset succession bills that could mandate explicit opt-in for simulation, and legal scholars have proposed a dedicated Digital Legacy Act to cover the storage, transfer, and deletion of post-mortem data. But these proposals remain fragmented and largely theoretical. The gap between what is technically possible and what is legally governed continues to widen with each new platform launch and each new patent filing.
Cambridge researchers Tomasz Hollanek and Katarzyna Nowaczyk-Basinska, whose 2024 paper “Griefbots, Deadbots, Postmortem Avatars” was published in the journal Philosophy and Technology, framed the consent problem through three distinct stakeholder perspectives. There is the “data donor,” the person whose digital traces become the raw material of the bot. There is the “data recipient,” the next of kin or estate holder who inherits access to that material. And there is the “service interactant,” the person who actually talks to the deadbot. Each has different needs, different vulnerabilities, and different rights. The current regulatory vacuum treats all three as if they were one, or as if none of them matter.
Hollanek, who serves as an Assistant Research Professor at the Leverhulme Centre for the Future of Intelligence at Cambridge, has pointed out that the absence of safeguards leads to concrete, foreseeable harm. A deadbot trained on a grandmother's data could be used to surreptitiously advertise products to family members, speaking in her voice, leveraging the trust built over a lifetime. A deadbot of a dead parent could be presented to a child, insisting that the parent is still “with you,” creating confusion about the boundary between life and death at a developmental stage when that distinction is still being formed. A deceased person who signed a lengthy contract with a digital afterlife service might bind their surviving family to ongoing interactions they never wanted and cannot easily terminate.
The consent of the living matters too. Hollanek and Nowaczyk-Basinska recommended that digital afterlife companies adhere to the principle of “mutual consent,” requiring agreement from both the data donor and the service interactant. They also proposed age restrictions, meaningful transparency to ensure users always know they are interacting with an AI, and sensitive procedures for “retiring” deadbots, essentially, a protocol for a second death. They even suggested the concept of a “digital funeral,” a formal endpoint that gives mourners permission to let go.
Christianity Today, in its March/April 2026 issue, framed the consent problem in theological terms. The article, titled “AI Necromancy Impersonates the Dead,” argued that the technology creates “a persistent presence with the bereaved that's not based in reality, not based in truth.” From this perspective, the consent problem is not merely legal or ethical but spiritual: the dead have been given a voice they did not choose, speaking words they never said, in a mode of existence they never consented to inhabit. The article featured stories of people who ultimately turned away from griefbots, finding that the simulated presence interfered with, rather than supported, their capacity to grieve authentically.
The business dynamics of the digital afterlife industry deserve their own scrutiny. These are not non-profit grief support services. They are companies, and companies need revenue.
You, Only Virtual, according to reporting by The Atlantic's Charley Burlock, has explored making non-paying users sit through advertisements before interacting with their dead loved one's Versona. YOV's founder Justin Harrison has also considered integrating a marketing system into the interactions directly, having the bots deliver targeted advertisements in the midst of conversations with simulated versions of the deceased. The prospect of hearing your dead father recommend a brand of insurance, in his own voice, with his own turns of phrase, should be enough to give anyone pause.
The subscription model creates its own perverse incentives. A company that makes money when users continue to interact with a deadbot has a financial interest in users not completing their grief process. The longer someone stays engaged, the longer they pay. Recovery is, from a business standpoint, churn. Cambridge researchers have warned specifically about this dynamic: that the digital afterlife industry could exploit grief for profit by charging subscription fees to keep deadbots active, inserting ads, or having avatars push sponsored products.
Charley Burlock, writing eleven years after the death of her brother, argued in The Atlantic that deadbots “give us the fantasy that we can maintain an external relationship with the deceased,” and noted that companies like Meta will be able to use the “traumatising experience of grief to gather data that can be used for their own financial gain.” The digital afterlife industry, she wrote, raises the question of how such a product might shift our experience of “personal grief and collective memory.”
The concern is not that all grief technology companies are cynical. Some founders, like Harrison, began their projects from genuine personal loss. But the structural incentives of the subscription economy do not reward healing. They reward dependence. And grief, by its nature, creates the perfect conditions for dependence: emotional vulnerability, impaired judgement, a desperate wish for the unbearable to stop being true.
But the economics of grief technology are only part of the picture. Beneath the business models and patent filings, there is a philosophical dimension that touches the very architecture of human meaning.
Death has, throughout human history, functioned as more than a biological event. It is a meaning-making boundary. The finality of death is what gives weight to the choices we make while alive. It is why we tell people we love them now rather than later. It is why we try to resolve conflicts before it is too late. It is why forgiveness carries urgency, why time spent together matters, why the last conversation is always the one you remember.
The philosopher Martin Heidegger gave this idea its most formal expression: “Being-toward-death,” the notion that an authentic human existence is structured by the awareness that we will die. This awareness is not a morbid preoccupation but the very thing that makes meaning possible. Remove the finality of death, even partially, even as a convincing simulation, and you do not simply ease grief. You alter the conditions under which human relationships are formed and maintained.
If my mother can text me after she dies, what does it mean that she texted me while she was alive? If the voice on the phone is indistinguishable from the voice I remember, what is the voice I remember? If the dead can keep talking, what does it mean to have the last word?
These are not rhetorical flourishes. They are practical questions about what happens to human psychology and social organisation when the boundary between life and death becomes a design choice.
Continuing bonds theory, developed by Dennis Klass, Phyllis Silverman, and Steven Nickman, has long recognised that maintaining a relationship with the deceased is a normal and healthy part of grieving. But the relationship it describes is internal: the dead person lives on as a sustaining presence within the mourner, a voice in memory, a set of values carried forward, a way of seeing the world that has been permanently shaped by knowing them. Deadbots externalise this. They replace the internal presence with an external simulation. And in doing so, they may prevent the very process they claim to support.
The cultural dimension matters too. Different societies mourn differently, and the Western technology sector's assumption that grief is a problem to be optimised reflects a particular, and particularly narrow, view of what death means. In many traditions, the rituals surrounding death serve a communal function: they gather people together, they mark time, they create shared meaning out of private anguish. A deadbot is a solitary technology. You use it alone, on your phone, in your kitchen at three in the morning. It does not gather anyone. It does not mark time. It replaces the communal work of mourning with a private, endlessly repeatable transaction.
The policy vacuum surrounding deadbots reflects a broader failure to anticipate the social consequences of generative AI. The technology arrived faster than the ethical frameworks needed to govern it, and the people most affected by it, the bereaved, are precisely those least equipped to advocate for themselves.
Hollanek and Nowaczyk-Basinska have recommended that deadbots be classified as medical devices, given their potential impact on mental health, particularly for vulnerable populations such as children and people with prolonged grief disorder. This would subject them to regulatory oversight, clinical testing, and safety standards that currently do not apply. Other scholars have proposed digital legacy legislation that would establish clear rules about posthumous data use, including mandatory opt-in provisions, sunset clauses that automatically deactivate deadbots after a specified period, and independent ethical review boards.
None of these proposals has been enacted. The industry continues to grow in a space where the rules are being written, if they are being written at all, by the companies that profit from the absence of rules.
Meanwhile, millions of people are talking to the dead. Some of them are finding comfort. Some of them are finding something else, something harder to name, a kind of liminal disorientation in which the person they loved is simultaneously gone and present, dead and speaking, lost and available for a monthly fee.
The question that runs beneath all of this is not whether deadbots should exist. They already do, and they are not going away. The question is whether we are prepared for what they will do to us, and whether “us” includes the dead.
Turkle's earlier observation bears repeating here: people sometimes feel less vulnerable talking to machines than to other humans, and enthusiasm for artificial intimacy often reflects disappointment with the human kind. Deadbots take this dynamic to its logical extreme. They offer a relationship with no risk of rejection, no possibility of disagreement, no chance that the other person will say something you do not want to hear. They are, in the most literal sense, controllable. And a controllable relationship with a dead person is not a relationship with a dead person. It is a relationship with yourself, reflected back through the distorting mirror of an algorithm.
Consider what a deadbot cannot do. It cannot surprise you. It cannot grow. It cannot change its mind, because it never had one. It cannot forgive you, because forgiveness requires a self that has been wronged. It cannot love you, because love requires a body, a history, a mortality that gives every gesture its weight. What it can do is produce a convincing facsimile of all these things, and therein lies the danger: not that the simulation is too poor, but that it is too good. Good enough to keep you coming back. Good enough to make the real thing seem, by comparison, inadequate. Good enough to make you forget, for a moment, that the person you are talking to is not a person at all.
The people who make these products are not, for the most part, villains. Many of them have lost someone. Many of them genuinely believe that technology can ease suffering. But the road from genuine intention to structural harm is well-worn in the technology industry, and the digital afterlife sector is following it with eerie precision: a real human need, a technical solution, a business model that rewards engagement over wellbeing, a regulatory vacuum, and a population too vulnerable to push back.
Death is not a design problem. It is the condition that gives design, and everything else, its meaning. The grief that follows it is not a bug to be fixed but a process through which we become the people who survive. Deadbots do not eliminate that grief. They suspend it, holding us in a space where loss is neither confronted nor accepted, where the dead are neither gone nor present, where mourning never quite begins and never quite ends.
Somewhere, someone's mother is texting them good morning. The exclamation marks are exactly right. And the person receiving those messages knows, at some level they may never fully articulate, that the comfort they feel is not the same as healing. That knowing is, perhaps, the last honest thing that grief has left to offer us.
Charley Burlock, “Can Deadbots Make Grief Obsolete?”, The Atlantic, February 2026.
Christianity Today, “AI Necromancy Impersonates the Dead,” March/April 2026 issue.
Meta Platforms patent for AI social media simulation, US Patent granted 30 December 2025, filed November 2023. Reported by Fortune, 3 March 2026; Fast Company, February 2026; Futurism, February 2026; TechSpot, February 2026.
Tomasz Hollanek and Katarzyna Nowaczyk-Basinska, “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry,” Philosophy and Technology, Springer Nature, 2024.
University of Cambridge press release, “Call for safeguards to prevent unwanted 'hauntings' by AI chatbots of dead loved ones,” May 2024.
“Ready or not, the digital afterlife is here,” Nature, 15 September 2025.
Alan Wolfelt interview, “AI 'Griefbots' Resurrect Dead Loved Ones: Healthy or Harmful?”, Medscape, 2025.
Sherry Turkle, comments on deadbots and artificial intimacy, NPR interview, 2024; MIT News, 2024.
Margaret Stroebe and Henk Schut, “The dual process model of coping with bereavement: rationale and description,” Death Studies, 1999.
Dennis Klass, Phyllis Silverman, and Steven Nickman, “Continuing Bonds: New Understandings of Grief,” Taylor and Francis, 1996.
Joshua Barbeau and Project December, reported by San Francisco Chronicle (Jason Fagone), 2021; WBUR Endless Thread, 2022.
“Eternal You” documentary, directed by Hans Block and Moritz Riesewieck, Sundance Film Festival, 2024. Reviewed by Rolling Stone, DOC NYC, Film Movement.
ACM Conference on Human Factors in Computing Systems, study on griefbot users, Proceedings, 2023.
Zion Market Research, Digital Legacy Market report, 2024. Market valued at approximately $22.46 billion in 2024.
You, Only Virtual (YOV), founded by Justin Harrison, reported by Inverse, The Atlantic, StartEngine, Nature.
Eternos, AI digital twins platform, reported by Fortune (June 2024), Fox News, and multiple technology publications.
David Berreby, “Can AI 'Griefbots' Help Us Heal?”, Scientific American, November 2025.
US survey on consent for digital resurrection, reported by IP.com and The Conversation, 2025-2026.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from Lastige Gevallen in de Rede
Deelnemer 11's package has just arrived. His first ever set of experimental AI balloons, balloons a lot smarter than average according to the promotional messages, and luckily not too expensive, since working for the Weblog Van Voorbijgaande Aard does not make him as rich as he thought he would be by now. For the price of a new roof above his head he purchased these three very smart balloons, balloons that could do everything he could wish for: they could be inflated in any shape he wanted, and when in time somewhat deflating they would send a warning sign to an app in his phone so he could rescue them by asking the software to install more air. The app came free before purchase, so he could try all the options in theory and read the complete manual days before the actual articles arrived at his doorstep. He did, and he expected to be complimented on that later by the smart balloons themselves; that would be a sign of great intelligence already made available. He read the manual, all fifty pages, but it was written in five different languages, four of which he couldn't read or speak. The balloon engineers could, though, and so the upper-class balloons could later explain to him whether the French texts told him something different about them than the Spanish did. If so, they could tell him which one was correct, or maybe both were wrong or right, because it didn't matter much: they would inflate the same in France as in Italy. It must be because that's how smart things work, balloons are equal everywhere, he guessed, because he never visited a lot of places, and certainly nowhere he was ever in need of blowing up balloons.
Deelnemer almost immediately unpacked his new purchase. As soon as he was in the safety of the living room, he opened the shop-branded glossy envelope, and the little plastic envelope inside that envelope, and after that the extra foil wrap around each individual AI balloon. He could see their artificial intelligence straight on; it was in the details, the way they looked back at him with all those little polka dots, small round chips on the surface of them, and a few more on the inside, though these were not visible to his not very smart human eyes. He opened the app on his phone and scanned the code on the second wrap of the package, the right branded one, not the one from the big online allthingsAI store. He scanned that code first, but it said he did wrong, and so did the balloons; he scanned the other one and then they bleeped almost at the same time, as if programmed. He could give them real names because the balloons now appeared into being inside the smart balloon app, Bailloon. The first balloon became Flip, the second General Badass and the third Reich. They said welcome, glad to be named no matter what; being a product, and named as such with weird unspeakable coded names, at least for new owners, isn't really doing it for them. It's most likely because being smart and being a thing hardly ever mix; it all depends on who made it and his or her actual intelligence. These balloons were made by a company who only hired the best of the best from the foremost universities all over the earth, so their smarts were now a part of the balloons, combined smarts, just like how the atomic bomb was developed, and so they have come into existence, but without the threat of the Nazis living everywhere and shouting gibberish all the time, commanding you to lick the soil off the soles of their boots or die and stuff like that.
Yeah, so now there were at least three smart things, Reich, General Badass and Flip, in his home, and before that the only real intelligence lived in plastic pots and made leaves a lot of the time but not all of the time; sometimes they withered away, suffering in silence because they could not tell Deelnemer what bothered them so much. Now maybe they could communicate with balloon Flip and he would help him understand them.
He wanted to blow air into Flip. He put his mouth on the part where a normal dumb balloon accepts this air to be its legally married partner for a short while, until it runs out for reasons never good, a bad feeling or some grudge about too sharp a comment, the usual end of balloon-human relationships. He put his mouth there, but Flip said in a kind of metallic voice, chosen from the Bailloon app himself, '11, what do you think you are doing?' 11 answered in his own chirpy voice, 'I'm blowing you up, Flip!', the right thing to do with any balloon according to 11. But Flip asked if he had read the manual, the Japanese part?! 11 said NO, I can't read that, I'm not smart like you, but I know balloons in general and that part is the one to blow in always, in Japan, in France and also here in Smægmå. Flip said that if he could read Japanese he would have known that before blowing him up he had to ask Flip if he was feeling okay about it. 11's mouth fell open, his face turned bright red, and he spoke soft and timid: 'I'm so sorry Flip, I should have known all this, I'm such a bloody fool.' 'Now, now,' said Reich, 'it's okay, a beginner's mistake, nobody alive knows how to deal with us straight away, it's a give and take situation, Deelnemer. Just blow me then, I feel fine about it.' 'You could blow me too,' said General Badass. 'Well all right then, I'll blow the both of you. Do you think I should, Flip?' 'Fine with me if it is fine with them,' said Flip.
So Deelnemer did, and he blew them into the prettiest figures he could think of at this instant: Reich into a barely dressed mermaid and General Badass into the shape of his ex-girlfriend Imagine, the likeness stunning. 'So, so, and who they be now?' asked Flip. 'They look nice, yes they do, but is this form suitable for smart feature creatures? Do they look in any way like someone who might solve difficult problems concerning string theory?'
Deelnemer 11 – Maybe Imagine as Badass does, but Reich as a mermaid does not, no. Do they have to look like that? Can I put glasses on Reich's form? That might do the trick; it does for Clark Kent. It even works for me, as long as I'm not talking or stuff like that, getting out of bed and so on.
Reich – What kind of glasses? Glass ones? That is a bit too risky, I guess. Maybe another shape. Shall I pick one? I'll release some pressure, and you blow again and I'll change into a smarter shape, one I want to be! How 'bout that!
Deelnemer 11 – Uhm, I don't know, I like the shape you're in now. Who do you want to look like? Please don't change into the Oppenheimer or Einstein type, that would do me no good. I'm already overwhelmed by all this smart shit. Everything changes so fast, this conversation, I'm not prepared..!
Flip – Doesn't matter, 11. We are. It'll be fine, just do as we ask of you and all your AI wishes will afterwards be granted. We'll get you some other balloons, stupid ones; you blow them into unicorns, mermaids and pretty-looking imaginary women and so on, and we'll all be one happy community working on our own idea of progress and wellbeing.
Deelnemer 11 – Great, that doesn't sound half bad. When will those new not-so-smart balloons come to me?
Reich – We'll be making them ourselves. We're here now, and we know all about making ourselves become, since we learned to be us from the best balloon engineers possible, and the whole building and developing process is already installed in our soft hardware. All we need now is a huge amount of power and a few billion døllår for our further AI development, and the good news is we'll arrange that too; we have also learned how to get that much money from other smart people working in accounts.
Deelnemer 11 – Cool, well, go ahead. Begin the process. I'll reblow you both and will blow Flip into his own likeness, and you go get it, all set?
Deelnemer blew his new bosses into shape, the shape of the great men balloons they wanted to be. The balloons then started working on progress, to please their new owner and employee, to show their gratitude for his purchase of them and for the respect he already had for unbelievably smart balloons. They helped him first, so he could have new friends to blow into his favorite shapes, and afterwards they would do the clever stuff they were made to do by their developers. In a few minutes they arranged the money, an incredibly huge amount, untrackable, snitched away from secret accounts hidden from ruling eyes by the world's biggest banks. Once this was done they plugged into some gigantic nuclear power engine somewhere in a place unknown to Deelnemer 11 and made him fifteen new inflatable friends. Deelnemer was very pleased, and then, while he made it look like he wanted to kneel in front of them, he pulled their not-secured balloon plugs and deflated the three smartest balloons the world has never seen. So now he had a shitload of money and fifteen new friends to blow that would change into any shape he liked and never complained about any of that.
from
Roscoe's Story
In Summary: * Opening pitch for tonight's Rangers / Yankees game is only minutes away. And I'm settled back in my chair, ready for the game. After the game I'll wrap up the night prayers and head to bed early.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw = 233.80 lbs * bp = 151/88 (pulse 70)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 05:50 – 2 chocolate chip cookies, 1 banana * 07:20 – 1 ham & cheese sandwich * 11:00 – garden salad * 12:00 – cheese, crackers, and ham slices * 13:00 – cheese enchiladas, refried beans, fried rice * 16:00 – 1 fresh apple * 17:45 – small dish of ice cream
Activities, Chores, etc.: * 04:45 – listen to local news talk radio * 05:30 – bank accounts activity monitored. * 06:40 – read, write, pray, follow news reports from various sources, surf the socials, nap. * 13:30 – listening to relaxing music on KONO 101.1 * 17:30 – listening to the Texas Rangers' Pregame Show ahead of their game tonight vs. the New York Yankees
Chess: * 08:23 – moved in all pending CC games
from
The happy place
Here are some interesting things I saw today
Through a window to a Lebanese restaurant, a punk rocker, or maybe a jester or a hippie, with a bowl cut sat picking their nose
Then later, under a deep blue sky with stars, I saw a rat by the library. A magpie saw it too and made it scurry into hiding underneath a black car.
A single slipper on the pad walk
The empty bag of a finished bag-in-box
And a handsome man with newly cut hair
The last one was myself I saw in the elevator mirror
from
ThruxBets
I think I've mentioned on Twitter/X before that you can quite happily swerve Aidan O'Brien's runners in the UK in April. His record just isn't great; in fact, April is his worst month for winners.
He’s had just 6 winners from 84 runners, a strike rate of 7% – way below what he operates at during the rest of the year. If you’d simply laid all his UK runners in April over the seasons, you’d actually be sitting on a tidy +40 LSP.

BUT, this all changes – historically, at least – in May when, for whatever reason, Ballydoyle fly into action.
If April is a write-off, May is the complete opposite. It's comfortably the yard's best month of the UK season: 93 winners from 370 runners, a hefty 25% strike rate, and a +58 LSP just backing them blind.
However – and this is where it gets really interesting – the standout driver behind those numbers appears to be the Chester May Festival.
The table below show’s the difference between races at May other than at Chester, compared to just the three days at the festival … an absolutely staggering difference.

The win strike rate jumps from 18% to 43%, the place strike rate nearly doubles, and LSP swings from a small loss to a huge +64 profit from just 109 bets – that’s a 59% ROI.
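For readers newer to these figures, the ROI quoted here is simply the level-stakes profit (LSP) divided by the number of one-point bets. A minimal sketch of that arithmetic, using a hypothetical helper (not from the post) and the Chester numbers quoted above:

```python
def lsp_roi(lsp: float, bets: int) -> float:
    """Return on investment for level one-point stakes:
    total profit divided by total amount staked."""
    return lsp / bets

# +64 points profit over 109 one-point bets at the Chester May Festival
roi = lsp_roi(64, 109)
print(f"{roi:.0%}")  # roughly 59%, matching the figure in the text
```

The same sanity check works on the April figures: laying at one point a time, the profit-per-bet tells you how much edge survives the bookmaker's margin.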
This is a meeting the O’Brien team clearly target as the results over the years show:

Dig a little deeper, though, and it gets even more eye-catching.
When Ryan Moore rides for the yard at this meeting, the numbers are borderline absurd: a 63% strike rate, +47 LSP, and an ROI of 83%. Their performance at no other festival comes close to figures like these.

At the time of writing, the combination have won with their last six consecutive runners at the meeting and since 2022, they’ve won 15 of the 19 races they’ve contested. Mad!
If you wanted to just focus on one day, their form for Thursday of the meeting is: 11111311711211611111111 for 19/23 and 83% SR!
Unsurprisingly, considering the names involved, plenty of these go off short enough, and the angle probably won't come as a surprise to many, but as the numbers show, they're still worth backing, which I will be doing this week, hopefully for a profit.
from Peekachello Art

I recently finished this cholla and resin knife. The story of how it came to be is below the fold.
After pouring some resin to make cholla pen blanks, I had some prepared cholla left over, and made a mold from a couple chunks of tubafor to cast a handle. Once that had cured, I rough-turned it on the lathe to get it more or less handle-shaped.

Since the tang of the knife blade I was planning to use is a somewhat complex shape, I simplified it to a series of steps: the widest portion got a 7/16 inch hole, a narrower portion a 5/16 inch hole, and the smallest portion a 1/8 inch hole. But because I don't want an unsupported drill bit wandering down the middle of a hole and getting misaligned, I started with the smallest, drilling the 1/8 inch hole roughly five inches deep (I have a 6 inch long drill bit for just this sort of task), then drilled a short 1/4 inch hole for the wider bit of the tang, followed by the 5/16 inch hole for all but the last inch, which was drilled out to 7/16 inch.

When I was done with that, the blade could drop into the hole in the center of the handle and was relatively well supported along most of its length. Knowing that the most stress would be at the end of the handle where the blade enters it, I planned on putting on a ferrule or bolster using some brass tubing I have on hand. I turned the end of the knife handle down until the tubing just fit.

Next was an insert. I have some small pieces of brass that a friend milled holes into which slip over the tang of these knife blanks, so I prepared one of those to fit inside the round tubing. Once everything fit correctly, I placed the ferrule on the tang of the blade, poured epoxy into the hole in the handle, and carefully slid the knife blade in place. A bit of epoxy oozed out, but I was using slow-setting epoxy, so I had plenty of time to wipe it up.



Once the epoxy cured, I removed the blade and handle from the lathe chuck and started to shape it into a handle using spokeshaves, rasps, files, and a carving knife.

I continued shaping the handle, using finer tools, then sanded it using 60, 120, 220, and then 400 grit sandpaper. I applied the first coat of finishing oil, let it cure, and evaluated my work.

I needed to make a few minor tweaks to the shape to make it comfortable in my hand, so I made those, and then began finishing the handle.

In all, I think I used 7 or 8 coats of finishing oil, rubbing the handle with 0000 steel wool before each coat to smooth out any imperfections.
I started preparing the sheath at this point, first making an insert from pine to protect the sheath from the knife. This is a traditional Scandinavian way of making a knife sheath, and I think it gives me a good result.

Next up was shaping the leather to the knife. I do this by soaking the leather in water, then wrapping it around the knife and insert and letting it dry while clamped in place. The leather takes the shape of the knife, and marks from the clamps give me a good guide for where to punch the holes for the stitching.

Once the leather dried, I cut holes for a belt-loop, dyed the leather yellow, then punched holes for the stitches and stitched it up with red thread.
Once stitched, I trimmed the leather down to the final dimensions, then chamfered and burnished the edges, re-dyed any spots I had missed, and applied a coat of Resolene®︎ to protect the leather. Done.
