Want to join in? Respond to our weekly writing prompts, open to everyone.
from Lastige Gevallen in de Rede
We burst straight in through a door that a moment ago was still shut tight, and find (Voorheen) busy pitching his Holle Bolle project, commissioned by, and thus also delivered to, a select company of top amusement managers at the head office of Elfteling Pretpakket NV.
Hello, how nice that I get to present my achievement here before the true bosses of the fairy-tale world of Elfteling LOL BV. You are in need of some much-needed realistic little interventions in the fairy-tale world you fabricated yourselves, and therefore you need me: I, (Voorheen), ex-thinker, writer, creative entrepreneur with letters at VVA, but by now too real for all that, must immediately receive more recognition, clearly visible on my bank account, no longer existing only in the mind, and certainly not in someone else's! That is why I am glad you have asked me for this big, time-, resource- and energy-devouring project. I did wonder why you didn't start innovating with this Holle Bolle concept sooner. The "Papier Hier" is a bit meagre in today's waste realities, but fine: you can now make full use of the law of the hampered head start, and reap the advantage of having been very backward for a very long time.
Okay, I'll pull it, the project, the solution to all your waste problems, right out from under the wraps. Behold: the Holle Bolle Clan. Gijs, the clan elder, still wants paper but is, like the whole clan for that matter, equipped with recognition software and a very fine camera, so that the criminal simpleton who throws the wrong paper or even plastic into Gijs, or the wrong stuff into another member of the HB Clan, can and will be arrested at the exit, and will then either sit out a prison sentence of at least five months in the Elfteling jail or pay 6000 Smægmåånse Døllår on the spot and drive home contrite, with dad and mum weeping and shaking their heads in the front seats. This here is Holle Bolle Geesje; she calls constantly for empty batteries. Gijs's brother Holle Bolle Benny calls for most kinds of plastic; second cousin Holle Bolle Herman B calls for used syringes and unnecessary pills; Holle Bolle Marco B for used condoms and panty liners; Holle Bolle Maggy, Gijs's niece who has fallen on hard times, nags for deposit bottles but also has three holes for brown, white and green deposit-free glassware; Holle Bolle Phillip asks (for further research) for broken light bulbs; Holle Bolle Aard is there for organic waste; the Holle Bolle fan club wants nothing but selfies; Holle Bolle Bolle claims the residual waste; Holle Bolle Hosselaar whines for small electrical waste, including defective hardware with software, laptops, mobile phones and the like; Holle Bolle Miep, distant family of HB Gijs, needs fresh gossip; Holle Bolle Eppie Epsson demands ink cartridges; robot Holle Bolle James asks for tips, because he can; Mega Holle Bolle wants only bulky waste, and there you can also get rid of your discarded spinning, whirling, grinding and rocking equipment for physical entertainment; and here, finally, is Magere Hein, who arranges the funerals of deceased visitors and of staff members killed in the line of duty, so to speak.
You place them all around Gijs, as can be seen on the scale model, and in doing so you meet, in a droll but fitting way, the current waste-processing requirements for amusement-park BVs and NVs. Well?! When do I start?
from An Open Letter
I said the title kind of in reference to literally everything in life, and maybe you can make an argument that this is overthinking. But take, for example, the whole fear about not getting married soon enough: I believe I saw somewhere that the average age is 30, and if I want to date someone for four years, that leaves two years to get into that relationship. Of course, if I wanted to really force it and hit this deadline, I could absolutely do that. But this whole arbitrary 30-year mark isn't for healthy relationships, or for the really amazing, magical ones, the kind you can get if you wait and do the work. And the nice thing is I've done a lot of the work, so the part that remains is to wait and be patient. So I guess I don't have too much to worry about in that sense. I can take my time if I want, and my life is in a good spot, so I'm in no rush. But even more generally, I realized that I was hungry and also hadn't slept well the last few nights, and both of those things definitely drag down my mood. So I decided not to give too much weight to any negative feelings today, and I just chilled and took it a little easy. And that's all I really need to do.
from Notes I Won’t Reread
I do not sleep. Not in the way people describe it. It’s more like I visit these places. Dreams. I lie down at eleven, sometimes twelve, and I slip into something that pretends to be rest. An hour passes. Maybe less. I wake up, not startled, not even confused, just returned. As if someone pressed pause and then play again on a YouTube video. But it’s not the same scene.
Then I go back. Another dream, another place that feels structured enough to question itself. Some of them are absurd: rooms that stretch too far, voices that do not belong to faces. Others are, well, convincing. Disturbingly so. They carry weight, logic, and consequences. It makes me hesitate, even after waking up.
And I wake again. One. Three. Two. Time loses order. It becomes fragments instead of a line. Sleep turns into a series of short stories, each one unfinished, each one remembered, and I remember them too well. I could tell you every single detail and still forget what I ate yesterday. But I remember them, which is strange.
But not all of them, of course. I’m not blessed, I never was. Just enough to be inconvenient. Enough to notice patterns, which is always a mistake. Enough to feel like I’ve lived longer than I should have, without any of the benefits. Just extra hours no one asked for. Enough to occasionally wonder which version of “awake” I’m currently pretending to be.
There are nights where this cycle stretches. Four, five hours of entering and exiting worlds that refuse to end properly. Like badly written stories that keep insisting on a sequel. And I, apparently, am their only loyal reader. Lucky me.
And then there were days (used to be days) where I would sleep for twelve or more. As if the body, in a rare moment of ambition, decided to overcorrect everything at once. Make up for all the fragments. Spoiler: it didn’t work.
It never does. Now it’s mostly this: interruption. Repetition. Awareness. Three things that sound almost productive when you list them like that. They’re not.
I am not sure which is worse: to sleep too deeply, or to spend every night rehearsing it and never quite getting it right.
Sincerely, A mind that won’t stay quiet.
from Micropoemas
They are tame, because they have walked those sidewalks since forever. And they will come back, like the pigeons.
from 下川友
Even though I went the day before yesterday, I ended up going to Saint Marc again. Unusually for me, I wore a plain white T-shirt. My slouch has improved compared to the old days, so I have the feeling a white T-shirt suits me better than it used to.
Saint Marc has good light-bulb colour. While I’m at it, I like the colour of the bulbs at Denny’s too. They use a gentle orange light, and white clothes seem to soak it up just the right amount. Visually it feels quite good. I sometimes think about switching my home to this bulb colour, but in the end I think it’s all relative, so at home I use daylight-white bulbs.
I bought a new office chair, then an ottoman: I’m investing there to reduce, even a little, the load on my lower body while I sit.
I keep a household budget every month, and between the two of us we spend about 400,000 yen. Our spending is clearly loose; then again, we eat good food and go to cafés regularly, so part of me thinks that’s about what it costs, while another part thinks our living expenses are simply too high. We are definitely spending money in a slightly odd way and need to do something about it. But if we changed the fundamental design of how we live, I suspect the mental stability we are just barely holding together would collapse, so I can’t touch it carelessly either.
Dinner was khao man gai and deep-fried zucchini. It was truly delicious. I finished it in no time and, riding that momentum, took ice cream out of the freezer.
Thinking it might be a little embarrassing how much I love eating, I ran the bath and worked the stiffness out of my body.
from laxmena
Every platform that optimizes for engagement will be gamed. That's not a cynical take – it's an incentive problem. When the metric is clicks, shares, and reactions, the system rewards content that triggers emotion, not content that builds understanding. In AI right now, that means 90% of what you see is noise dressed up as signal.
Here's how I opt out.
Before I share my sources, the principle matters more: any system that rewards engagement will produce noise. Twitter, LinkedIn, YouTube – they all optimize for time-on-platform. That means sensational > accurate, simple > nuanced, hot take > careful analysis.
Once you internalize this, you stop asking “what's trending?” and start asking “what's the incentive structure of this platform?”
HuggingFace Daily Papers – my current first feed
I recently switched to this as my first stop, and I haven't looked back. It surfaces papers the ML community is actually reading – not papers that generate the most outrage. No algorithm optimizing for your dopamine. No ads. No influencers. Just papers, ranked by upvotes from people who read them.
It's not designed to maximize interactions. That's the whole point.
Hacker News – where I started, and still use
HN was my first feed for a long time, and I still check it daily. It's self-correcting in a way few platforms are – the community is technical, skeptical, and fast to call out hype. If something AI-related survives the front page and the comments, it's usually worth your time.
The comment threads on AI papers and tools are often more valuable than the articles themselves.
X / Twitter – my guilty pleasure, and I'll be honest about it
I'm on it. Some threads from researchers are genuinely excellent – the kind of paper breakdowns that would take you hours to extract yourself. But it's rare, and the signal-to-noise ratio is brutal.
My honest recommendation: avoid building Twitter into your learning stack. Use it for serendipity, not as a system. If you find yourself doom-scrolling AI threads at 11pm, that's the platform working exactly as designed – and not in your interest.
This is where I spend the most deliberate time, and where most people get stuck.
The mistake is trying to read everything. You can't. The field is moving too fast and the volume is too high. Instead, I use a specific entry strategy:
Find a recent review paper – something published in the last two years on the topic you care about. Review papers synthesize the field. They're the map before you explore the territory.
Follow the citations forward and backward – what did this paper cite? Who cited this paper after it was published? These two directions give you the lineage of ideas.
Read 10–15 papers in the space – you won't be deep yet, but you'll have enough context to know which questions are already answered and which are still open. You'll start to recognize names, labs, and recurring ideas.
Then go deep on what actually interests you – not what seems important, not what's popular. What genuinely pulls your curiosity. That's where you'll do your best thinking.
This process takes weeks, not days. That's fine. Depth compounds. Breadth usually doesn't.
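The forward/backward citation walk above is, at bottom, a plain graph traversal. Here is a minimal sketch over a toy, in-memory citation graph; the paper IDs and the graph itself are invented for illustration, and in practice you would pull references and citations from a scholarly index rather than hand-code them:

```python
from collections import deque

# refs[p] = papers that p cites (the backward direction).
# All IDs below are made up for the sketch.
refs = {
    "review-2023": ["method-2020", "survey-2019"],
    "method-2020": ["classic-2015"],
    "survey-2019": ["classic-2015"],
    "followup-2024": ["review-2023"],
    "classic-2015": [],
}

# Invert the graph to get cited_by (the forward direction).
cited_by = {p: [] for p in refs}
for paper, cited in refs.items():
    for c in cited:
        cited_by[c].append(paper)

def lineage(start, edges, max_hops=2):
    """Breadth-first walk up to max_hops along one direction of the graph."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        paper, hops = queue.popleft()
        if hops == max_hops:
            continue
        for nxt in edges.get(paper, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    seen.discard(start)
    return sorted(seen)

backward = lineage("review-2023", refs)       # what the review built on
forward = lineage("review-2023", cited_by)    # who later built on the review
```

Walking `refs` backward from a review paper recovers the work it synthesized; walking the inverted graph forward surfaces what came after. Together those two passes are the "lineage of ideas" described above.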
This one is underused.
When you encounter a problem in AI that sounds new, ask yourself: has this problem existed in a different form before? Often the answer is yes. Optimization instability, data distribution shift, latency under load – these aren't new. Decades of research exist on them.
Seeking new solutions to old problems is expensive and usually unnecessary. The literature already has answers. Find them first.
Conversely, for genuinely new problems – things that only exist because of large-scale language models or diffusion architectures – the old solutions often don't apply. Here you want the most recent work, not the canonical textbooks.
The filter: is this problem fundamentally new, or does it have an older analog? Answer that first, then choose your research direction.
Most people optimize for feeling informed. They want the daily hit of “I know what's happening in AI.” That feeling is easy to manufacture and almost entirely useless.
Being informed is slower, quieter, and less satisfying in the short term. It means skipping the hot takes and reading the paper. It means sitting with confusion for a few days before the concept clicks. It means building a system that's boring by design.
The people I learn the most from have boring information diets. They're not on every platform. They've read fewer things more carefully. They can point to specific papers that changed how they think.
That's the goal.
I write more technical articles on my newsletter, INTERNALS.md. You can subscribe there to follow along.
What does your filter stack look like? I'm genuinely curious what senior engineers use to stay calibrated – drop it in the comments.
from SmarterArticles

Your mother has been dead for fourteen months. You know this. You were at the funeral, you sorted through her wardrobe, you cancelled her phone contract. And yet here she is, texting you good morning. She asks about your day. She tells you she is proud of you. She even uses the slightly excessive number of exclamation marks that drove you mad when she was alive.
This is not a ghost story. This is a product.
In early 2026, a cluster of investigations by The Atlantic, Christianity Today, and several other major publications converged on the same unsettling phenomenon: a booming industry of AI-generated “deadbots,” services that harvest the digital traces of the deceased, their text messages, voice recordings, social media posts, and email archives, and use them to build chatbots that simulate ongoing conversations with the dead. At roughly the same time, Meta was granted a patent for technology that would keep social media accounts active after the user dies, generating posts, comments, likes, and even direct messages powered by large language models trained on the deceased person's historical activity. The digital afterlife, it turns out, is no longer speculative fiction. It is a subscription service.
The questions this raises are not simply technical. They cut to the marrow of what it means to be human, to lose someone, and to move through the world knowing that loss is permanent. If death has always been one of the defining boundaries of human experience, the thing that lends urgency and meaning to every conversation, every embrace, every unresolved argument, then what happens when we make that boundary negotiable? And perhaps more pressingly: who gave permission for the dead to keep speaking?
The digital afterlife industry, as researchers at the University of Cambridge have termed it, has grown from a handful of experimental projects into a global market. In 2024, the digital legacy market was valued at approximately $22.46 billion, according to Zion Market Research, with projections suggesting it could more than triple by 2034. More than half a dozen platforms now offer deadbot services straight out of the box, and developers claim that millions of people are using them. The terminology alone tells you how fast the field is evolving: deadbots, griefbots, thanabots, ghostbots, postmortem avatars. Each name carries its own shade of unease.
The mechanics vary considerably. Some platforms, such as HereAfter AI, focus on preservation rather than simulation. They allow people to record “Life Story Avatars” before they die, guided audio sessions that capture memories, advice, and personal history. The AI then indexes this content and organises it into a searchable archive, something closer to an interactive memoir than a conversation partner. The person recording decides what gets preserved and what stays private. There is an element of authorial control here, a curation of legacy that feels more like writing a will than summoning a spirit.
Others take a more ambitious and more ethically fraught approach. Eternos, which launched in 2024, has helped over 400 people create what the company calls “AI digital twins.” Users record 300 specific phrases and answer extensive questions about their lives, political views, personalities, and relationships. A two-day computing process then generates a voice model capable of responding in real time, not simply playing back recordings but generating new speech in the user's voice, trained on the patterns and cadences of how they actually talked. The result is not a recording. It is, or at least appears to be, a conversation.
Then there is You, Only Virtual, or YOV, a platform founded by Justin Harrison after his mother was diagnosed with advanced cancer in December 2019. Harrison had nearly died in a motorcycle accident two months earlier, and the convergence of those near-death experiences drove him to build a system for preserving the people we lose. YOV asks users to provide the raw material of a relationship: text messages, audio clips, video recordings, anything that captures not just who a person was in general, but who they were with you specifically. Two to three months later, their “Versona” arrives via a link. You can text it, call it, even video chat with it.
Other platforms occupy different niches. Project December, built on GPT-3, allows users to create a chatbot of anyone by providing text samples and personality descriptions. Seance AI asks users to input personality traits and writing styles of loved ones. The range of approaches reflects a market that is still figuring out what it is selling: memory, comfort, presence, or the illusion of all three.
The ambition is staggering. The execution, depending on whom you ask, is either a genuine comfort or a very expensive hallucination.
While start-ups have been building deadbots from the outside, Meta has been thinking about the problem from the inside. On 30 December 2025, the company was granted a US patent for an AI system designed to simulate a user's social media activity after they stop using the platform, whether temporarily or permanently, including after death. The patent, first filed in November 2023, lists Andrew Bosworth, Meta's chief technology officer, as the primary inventor.
The system described in the patent would train a large language model on a user's historical behaviour across Meta's platforms: Facebook, Instagram, Threads. It would learn from their posts, comments, likes, voice messages, chats, and reactions, and then replicate that behaviour autonomously. The AI-generated version of a deceased person could respond to content from friends and followers, publish updates, handle direct messages, and maintain what the patent describes as “community engagement.” It could even simulate video or audio calls.
The patent's rationale is revealing. It notes that account inactivity affects other users' experiences, and that this impact is “much more severe and permanent” when a user has died. The implication is worth sitting with: in Meta's framework, the problem with death is not the loss of a human life but the loss of engagement metrics. A dead user is a disengaged user, and disengagement is the one sin a social media platform cannot forgive.
A Meta spokesperson told Fortune that the company has “no plans to move forward with this example,” adding that patents are often filed to protect ideas that may never be developed. But the patent exists. The technology exists. And the incentive structure, keeping users engaged, generating data, maintaining network effects, certainly exists. The gap between “we have no plans” and “we have the capability” has never been a reliable firewall in Silicon Valley.
Not everyone who uses a deadbot is having a crisis. Some users describe the experience as genuinely helpful, even therapeutic. In one of the few completed academic studies on the subject, published in the Proceedings of the 2023 ACM Conference on Human Factors in Computing Systems, ten grieving individuals who used AI-powered chatbots to communicate with simulations of deceased loved ones reported that the bots helped them in ways that human relationships could not. Participants rated the bots more highly than even close friends for certain kinds of emotional support. One participant explained the appeal simply: “Society doesn't really like grief.” The bots never grew impatient. They never imposed a schedule. They never changed the subject. They never said “it's been six months, shouldn't you be feeling better by now?”
David Berreby, writing in Scientific American in November 2025, reported that chatbot users in the study seemed to become “more capable of conducting normal socialising” because they no longer worried about burdening other people or being judged. This contradicted the initial concern that griefbots would cause social withdrawal. Instead, the bots appeared to function as a kind of pressure valve, absorbing the intensity of grief that the users felt unable to express in human company.
A 2025 Nature article titled “Ready or not, the digital afterlife is here” documented similar findings. Some users turned to deadbots to manage unfinished business: to say goodbye, to address unresolved conflict, to have the conversations that illness or sudden death had made impossible. One participant described it as therapeutic, a way to explore “what if” scenarios that had been locked away by the finality of death. Another said the chatbot helped them “process and cope with feelings” in a way that felt safer than speaking to a therapist.
The 2024 Sundance documentary “Eternal You,” directed by Hans Block and Moritz Riesewieck, put faces to these experiences. The film follows several users of platforms including Project December, HereAfter AI, and YOV. Christi Angel, one of the film's subjects, uses Project December to communicate with a simulation of her first love, Cameroun. Stephenie Oney, from Detroit, uses HereAfter AI to talk to her dead parents. The film is careful to show that some of these experiences provide genuine closure. A woman who never got to raise a child finds, through the simulation, something that functions like resolution.
But the film also captures something darker. The comfort that deadbots provide can be seductive, and seduction is not the same as healing. The technology is exquisitely good at mimicking the surface of a relationship while leaving the substance entirely untouched.
The central concern among mental health professionals is not that deadbots are uniformly harmful. It is that they may interfere with a process that is already difficult, poorly understood, and culturally unsupported: the process of mourning.
Alan Wolfelt, a clinical psychologist and director of the Center for Loss and Life Transition in Fort Collins, Colorado, has spent decades helping people navigate bereavement. He has written over 50 books on grief and is widely recognised as one of North America's leading death educators. In a 2025 interview with Medscape, he drew a distinction that matters enormously in this context. Grief, Wolfelt explained, is what you think and feel inside after someone you love dies. Mourning is the outward expression of those thoughts and feelings, and it is mourning, not grief, that leads to healing. Acknowledging the reality of death, he said, is the “linchpin need” he has identified as universal across mourners. The use of deadbot technology, Wolfelt argued, represents “another invitation, instead of outwardly mourning and acknowledging the reality of the death, to stay stuck instead of experiencing perturbation, or the capacity to experience change and movement.”
This is not a fringe concern. The dominant model in contemporary bereavement psychology is the Dual Process Model, developed by Margaret Stroebe and Henk Schut and first published in Death Studies in 1999. It describes healthy grief as an oscillation between two orientations: loss-oriented coping, which involves confronting the pain of absence, and restoration-oriented coping, which involves engaging with the practical demands of a changed life. The key insight of the model is that both orientations are necessary. A person who only confronts their pain risks being consumed by it. A person who only avoids it risks never processing it. Healthy mourning requires moving between the two, a dynamic, irregular rhythm that looks nothing like a straight line from sadness to acceptance.
Deadbots, by their nature, collapse this oscillation. They offer a third option: the illusion that neither loss-oriented nor restoration-oriented coping is necessary, because the person has not really been lost. The relationship continues. The texts keep arriving. The voice is still there. As Sherry Turkle, the MIT sociologist who has spent years researching people who talk to AI versions of dead loved ones, put it: working through grief is not just an experience of being “sad.” It is “a process through which we metabolise what we have lost, allowing it to become a sustaining presence within us.” Griefbots, she warned, “give us the fantasy that we can maintain an external relationship with the deceased. But in holding on, we can't make them part of ourselves.”
The distinction Turkle draws is subtle but crucial. The goal of healthy mourning, in the framework she describes, is not to forget the dead but to internalise them, to carry them forward as part of who you are rather than as an external entity you can still call on the phone. Deadbots reverse this process. They externalise the dead, keeping them outside you, accessible but never truly integrated.
Turkle has long argued that people sometimes feel less vulnerable talking about intimate matters with a machine than with another person, and that enthusiasm for artificial intimacy reflects deeper disappointments with the human kind. The “artificial intimates” offered by deadbots lack the embodied experience of the arc of a human life that would give them what Turkle calls “empathic standing,” the ability to put themselves in the place of a human other. They offer pretend empathy, convincingly performed but fundamentally hollow.
Joshua Barbeau, a freelance writer from a Toronto suburb, became one of the most widely discussed early users of grief technology when he used Project December to create a chatbot modelled on his girlfriend, Jessica Pereira, who had died eight years earlier from a rare liver disorder. Barbeau fed the system passages from her social media and described her personality in detail. The resulting conversations gave him what he described as a sense of catharsis and closure he had not known he still needed. He compared the experience to a therapeutic exercise he had learnt in therapy: writing letters to loved ones after their death. But the experience also illustrated a tension that psychologists have since identified more formally: the chatbot helped, but it also made it harder to move on. The phenomenon has been described as “frozen grief,” a state in which the simulation prevents the normal progression from acute loss toward acceptance.
Researchers caution that it is still too early to be certain what risks and benefits digital ghosts pose. As the Nature article noted, “researchers simply don't know what effects this kind of AI can have on people with different personality types, grief experiences and cultures.” The few studies that exist are small, and the long-term effects remain entirely unknown. What is known is that grieving individuals may not be able to make fully autonomous decisions about these technologies. Emotions cloud judgement during vulnerable times, and grief may impair an individual's ability to think clearly about whether a deadbot is helping or hindering their recovery.
There is another question embedded in the deadbot phenomenon, one that receives less attention than the psychological risks but may ultimately prove more consequential: who speaks for the dead?
Most people do not leave behind specific instructions about whether their likeness, voice, or digital footprint can be used to create a posthumous simulation. In a US survey, 58 per cent of respondents said they would support digital resurrection only if the deceased had explicitly consented. Acceptance plummeted to 3 per cent when consent was absent. Yet most digital resurrections proceed without explicit permission from the person being simulated, because that person was, self-evidently, not anticipating the technology.
The legal landscape is threadbare. In the United States, no federal framework governs AI-powered simulations of the deceased. Some states are debating digital asset succession bills that could mandate explicit opt-in for simulation, and legal scholars have proposed a dedicated Digital Legacy Act to cover the storage, transfer, and deletion of post-mortem data. But these proposals remain fragmented and largely theoretical. The gap between what is technically possible and what is legally governed continues to widen with each new platform launch and each new patent filing.
Cambridge researchers Tomasz Hollanek and Katarzyna Nowaczyk-Basinska, whose 2024 paper “Griefbots, Deadbots, Postmortem Avatars” was published in the journal Philosophy and Technology, framed the consent problem through three distinct stakeholder perspectives. There is the “data donor,” the person whose digital traces become the raw material of the bot. There is the “data recipient,” the next of kin or estate holder who inherits access to that material. And there is the “service interactant,” the person who actually talks to the deadbot. Each has different needs, different vulnerabilities, and different rights. The current regulatory vacuum treats all three as if they were one, or as if none of them matter.
Hollanek, who serves as an Assistant Research Professor at the Leverhulme Centre for the Future of Intelligence at Cambridge, has pointed out that the absence of safeguards leads to concrete, foreseeable harm. A deadbot trained on a grandmother's data could be used to surreptitiously advertise products to family members, speaking in her voice, leveraging the trust built over a lifetime. A deadbot of a dead parent could be presented to a child, insisting that the parent is still “with you,” creating confusion about the boundary between life and death at a developmental stage when that distinction is still being formed. A deceased person who signed a lengthy contract with a digital afterlife service might bind their surviving family to ongoing interactions they never wanted and cannot easily terminate.
The consent of the living matters too. Hollanek and Nowaczyk-Basinska recommended that digital afterlife companies adhere to the principle of “mutual consent,” requiring agreement from both the data donor and the service interactant. They also proposed age restrictions, meaningful transparency to ensure users always know they are interacting with an AI, and sensitive procedures for “retiring” deadbots, essentially, a protocol for a second death. They even suggested the concept of a “digital funeral,” a formal endpoint that gives mourners permission to let go.
Christianity Today, in its March/April 2026 issue, framed the consent problem in theological terms. The article, titled “AI Necromancy Impersonates the Dead,” argued that the technology creates “a persistent presence with the bereaved that's not based in reality, not based in truth.” From this perspective, the consent problem is not merely legal or ethical but spiritual: the dead have been given a voice they did not choose, speaking words they never said, in a mode of existence they never consented to inhabit. The article featured stories of people who ultimately turned away from griefbots, finding that the simulated presence interfered with, rather than supported, their capacity to grieve authentically.
The business dynamics of the digital afterlife industry deserve their own scrutiny. These are not non-profit grief support services. They are companies, and companies need revenue.
You, Only Virtual, according to reporting by The Atlantic's Charley Burlock, has explored making non-paying users sit through advertisements before interacting with their dead loved one's Versona. YOV's founder Justin Harrison has also considered integrating a marketing system into the interactions directly, having the bots deliver targeted advertisements in the midst of conversations with simulated versions of the deceased. The prospect of hearing your dead father recommend a brand of insurance, in his own voice, with his own turns of phrase, should be enough to give anyone pause.
The subscription model creates its own perverse incentives. A company that makes money when users continue to interact with a deadbot has a financial interest in users not completing their grief process. The longer someone stays engaged, the longer they pay. Recovery is, from a business standpoint, churn. Cambridge researchers have warned specifically about this dynamic: that the digital afterlife industry could exploit grief for profit by charging subscription fees to keep deadbots active, inserting ads, or having avatars push sponsored products.
Charley Burlock, writing eleven years after the death of her brother, argued in The Atlantic that deadbots “give us the fantasy that we can maintain an external relationship with the deceased,” and noted that companies like Meta will be able to use the “traumatising experience of grief to gather data that can be used for their own financial gain.” The digital afterlife industry, she wrote, raises the question of how such a product might shift our experience of “personal grief and collective memory.”
The concern is not that all grief technology companies are cynical. Some founders, like Harrison, began their projects from genuine personal loss. But the structural incentives of the subscription economy do not reward healing. They reward dependence. And grief, by its nature, creates the perfect conditions for dependence: emotional vulnerability, impaired judgement, a desperate wish for the unbearable to stop being true.
But the economics of grief technology are only part of the picture. Beneath the business models and patent filings, there is a philosophical dimension that touches the very architecture of human meaning.
Death has, throughout human history, functioned as more than a biological event. It is a meaning-making boundary. The finality of death is what gives weight to the choices we make while alive. It is why we tell people we love them now rather than later. It is why we try to resolve conflicts before it is too late. It is why forgiveness carries urgency, why time spent together matters, why the last conversation is always the one you remember.
The philosopher Martin Heidegger gave this idea its most formal expression: “Being-toward-death,” the notion that an authentic human existence is structured by the awareness that we will die. This awareness is not a morbid preoccupation but the very thing that makes meaning possible. Remove the finality of death, even partially, even as a convincing simulation, and you do not simply ease grief. You alter the conditions under which human relationships are formed and maintained.
If my mother can text me after she dies, what does it mean that she texted me while she was alive? If the voice on the phone is indistinguishable from the voice I remember, what is the voice I remember? If the dead can keep talking, what does it mean to have the last word?
These are not rhetorical flourishes. They are practical questions about what happens to human psychology and social organisation when the boundary between life and death becomes a design choice.
Continuing bonds theory, developed by Dennis Klass, Phyllis Silverman, and Steven Nickman, has long recognised that maintaining a relationship with the deceased is a normal and healthy part of grieving. But the relationship it describes is internal: the dead person lives on as a sustaining presence within the mourner, a voice in memory, a set of values carried forward, a way of seeing the world that has been permanently shaped by knowing them. Deadbots externalise this. They replace the internal presence with an external simulation. And in doing so, they may prevent the very process they claim to support.
The cultural dimension matters too. Different societies mourn differently, and the Western technology sector's assumption that grief is a problem to be optimised reflects a particular, and particularly narrow, view of what death means. In many traditions, the rituals surrounding death serve a communal function: they gather people together, they mark time, they create shared meaning out of private anguish. A deadbot is a solitary technology. You use it alone, on your phone, in your kitchen at three in the morning. It does not gather anyone. It does not mark time. It replaces the communal work of mourning with a private, endlessly repeatable transaction.
The policy vacuum surrounding deadbots reflects a broader failure to anticipate the social consequences of generative AI. The technology arrived faster than the ethical frameworks needed to govern it, and the people most affected by it, the bereaved, are precisely those least equipped to advocate for themselves.
Hollanek and Nowaczyk-Basinska have recommended that deadbots be classified as medical devices, given their potential impact on mental health, particularly for vulnerable populations such as children and people with prolonged grief disorder. This would subject them to regulatory oversight, clinical testing, and safety standards that currently do not apply. Other scholars have proposed digital legacy legislation that would establish clear rules about posthumous data use, including mandatory opt-in provisions, sunset clauses that automatically deactivate deadbots after a specified period, and independent ethical review boards.
None of these proposals has been enacted. The industry continues to grow in a space where the rules are being written, if they are being written at all, by the companies that profit from the absence of rules.
Meanwhile, millions of people are talking to the dead. Some of them are finding comfort. Some of them are finding something else, something harder to name, a kind of liminal disorientation in which the person they loved is simultaneously gone and present, dead and speaking, lost and available for a monthly fee.
The question that runs beneath all of this is not whether deadbots should exist. They already do, and they are not going away. The question is whether we are prepared for what they will do to us, and whether “us” includes the dead.
Sherry Turkle has observed that people sometimes feel less vulnerable talking to machines than to other humans, and that enthusiasm for artificial intimacy often reflects disappointment with the human kind. Deadbots take this dynamic to its logical extreme. They offer a relationship with no risk of rejection, no possibility of disagreement, no chance that the other person will say something you do not want to hear. They are, in the most literal sense, controllable. And a controllable relationship with a dead person is not a relationship with a dead person. It is a relationship with yourself, reflected back through the distorting mirror of an algorithm.
Consider what a deadbot cannot do. It cannot surprise you. It cannot grow. It cannot change its mind, because it never had one. It cannot forgive you, because forgiveness requires a self that has been wronged. It cannot love you, because love requires a body, a history, a mortality that gives every gesture its weight. What it can do is produce a convincing facsimile of all these things, and therein lies the danger: not that the simulation is too poor, but that it is too good. Good enough to keep you coming back. Good enough to make the real thing seem, by comparison, inadequate. Good enough to make you forget, for a moment, that the person you are talking to is not a person at all.
The people who make these products are not, for the most part, villains. Many of them have lost someone. Many of them genuinely believe that technology can ease suffering. But the road from genuine intention to structural harm is well-worn in the technology industry, and the digital afterlife sector is following it with eerie precision: a real human need, a technical solution, a business model that rewards engagement over wellbeing, a regulatory vacuum, and a population too vulnerable to push back.
Death is not a design problem. It is the condition that gives design, and everything else, its meaning. The grief that follows it is not a bug to be fixed but a process through which we become the people who survive. Deadbots do not eliminate that grief. They suspend it, holding us in a space where loss is neither confronted nor accepted, where the dead are neither gone nor present, where mourning never quite begins and never quite ends.
Somewhere, someone's mother is texting them good morning. The exclamation marks are exactly right. And the person receiving those messages knows, at some level they may never fully articulate, that the comfort they feel is not the same as healing. That knowing is, perhaps, the last honest thing that grief has left to offer us.
Charley Burlock, “Can Deadbots Make Grief Obsolete?”, The Atlantic, February 2026.
Christianity Today, “AI Necromancy Impersonates the Dead,” March/April 2026 issue.
Meta Platforms patent for AI social media simulation, US Patent granted 30 December 2025, filed November 2023. Reported by Fortune, 3 March 2026; Fast Company, February 2026; Futurism, February 2026; TechSpot, February 2026.
Tomasz Hollanek and Katarzyna Nowaczyk-Basinska, “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry,” Philosophy and Technology, Springer Nature, 2024.
University of Cambridge press release, “Call for safeguards to prevent unwanted 'hauntings' by AI chatbots of dead loved ones,” May 2024.
“Ready or not, the digital afterlife is here,” Nature, 15 September 2025.
Alan Wolfelt interview, “AI 'Griefbots' Resurrect Dead Loved Ones: Healthy or Harmful?”, Medscape, 2025.
Sherry Turkle, comments on deadbots and artificial intimacy, NPR interview, 2024; MIT News, 2024.
Margaret Stroebe and Henk Schut, “The dual process model of coping with bereavement: rationale and description,” Death Studies, 1999.
Dennis Klass, Phyllis Silverman, and Steven Nickman, “Continuing Bonds: New Understandings of Grief,” Taylor and Francis, 1996.
Joshua Barbeau and Project December, reported by San Francisco Chronicle (Jason Fagone), 2021; WBUR Endless Thread, 2022.
“Eternal You” documentary, directed by Hans Block and Moritz Riesewieck, Sundance Film Festival, 2024. Reviewed by Rolling Stone, DOC NYC, Film Movement.
ACM Conference on Human Factors in Computing Systems, study on griefbot users, Proceedings, 2023.
Zion Market Research, Digital Legacy Market report, 2024. Market valued at approximately $22.46 billion in 2024.
You, Only Virtual (YOV), founded by Justin Harrison, reported by Inverse, The Atlantic, StartEngine, Nature.
Eternos, AI digital twins platform, reported by Fortune (June 2024), Fox News, and multiple technology publications.
David Berreby, “Can AI 'Griefbots' Help Us Heal?”, Scientific American, November 2025.
US survey on consent for digital resurrection, reported by IP.com and The Conversation, 2025-2026.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk