from EpicMind

Illustration of an ancient philosopher in a toga, sitting exhausted at a modern office workstation in front of a computer, surrounded by empty office chairs and urban architecture.

Friends of wisdom! All the self-optimization gurus declare happiness a purely private matter. But real happiness is something else: not an ongoing performance of the self, but responsibility and meaning.

Our current notion of happiness is often astonishingly narrow. What was once tied to virtue, civic spirit, and a just society appears today as a question of individual well-being: How can I feel better, become more productive, reach my goals more efficiently? In a culture that has elevated self-optimization, feel-good routines, and personal branding to an ideal, the understanding of happiness has become a private matter – reduced to moments of contentment and measurable in data points. What gets lost is the deeper dimension of happiness: the one that arises from belonging, responsibility, and meaning.

A first step toward a more durable happiness lies in turning away from a purely individual focus. Those who see themselves as part of a larger whole – whether in the family circle, the community, or a volunteer role – experience their lives not merely as an ongoing performance of the self, but as meaningful through connection. Especially at a time when many social structures are under pressure, consciously "being there for others" gains value – not as a moral duty, but as a source of inner coherence. It is often the small, invisible contributions that sustain relationships and make personal fulfillment possible.

Second, a change of perspective is worthwhile: happiness is not primarily a question of consumption or freedom of choice, but a question of orientation. Studies show that people experience their lives as meaningful when they connect their actions to larger values – such as care, justice, or reliability. You don't need a perfect life with a meticulously planned daily routine. What matters far more is whether we experience our actions as coherent and relevant. Those who direct their engagement toward something beyond their own well-being often experience a deeper form of contentment.

Third, we should question the common notion that happiness equals constant positive mood. A fulfilled life includes ambivalence, effort, and uncertainty. Especially in times of social or ecological crisis, it becomes clear that happiness lies not in withdrawal but in actively shaping a world that remains livable for many. Great happiness arises where people take responsibility without showing off – where they act reliably without always having to be perfect. Happiness is then not the goal, but the echo of a life well lived.

Food for thought for the start of the week

"I cannot tell you how to get rich quickly. But I can tell you how to get poor quickly: by trying to get rich quickly." – André Kostolany (1906–1999)

ProductivityPorn tip of the week: Tackle difficult tasks first (Eat the Frog)

Start the day with the most difficult or unpleasant task. Everything else will feel easier afterwards, and you avoid constant procrastination.

From the archive: Why not every piece of advice fits you

We hear it again and again: successful CEOs say the secret of their success is saying "no". Influencers advise us to buy this or that product because they supposedly love it themselves. But what works for them doesn't automatically fit you. Blanket advice that ignores your own context can even be dangerous. This post takes a critical look at why it's so important to question advice – whether it comes from a successful CEO or a popular influencer.

read more …

Thank you for taking the time to read this newsletter. I hope the content inspired you and gave you valuable impulses for your (digital) life. Stay curious and question what you encounter!


EpicMind – Wisdom for the digital life. "EpicMind" (short for "Epicurean Mindset") is my blog and newsletter devoted to learning, productivity, self-management, and technology – all seasoned with a pinch of philosophy.


Disclaimer: Parts of this text were revised with DeepL Write (proofreading and editing). Google's NotebookLM was used for research in the works/sources mentioned and in my notes. The article image was created with ChatGPT and subsequently edited.

Topic #Newsletter

 

from An Open Letter

I hosted a lot of friends over today. They came over from 1 PM all the way till 8 or 9 PM. This was the most people I've hosted, around 12 or 13 at the peak. I'm glad I did it, but I think my social battery is drained. I think there's also a certain amount of growing pain in finding your community of people and who you feel comfortable with.

I don't think about her as much anymore. I have almost started to forget her face, but whenever I remember that, I think back to her and I have to stop myself. I also know that I have a lot of photos of her, and honestly I want to go back and look at them. But I don't do it cause I know it's smart not to. It's strange because I don't even know how much of her I miss, versus if I just miss the holes that she filled. But at the same time I think it's a little bit of a mixture of both. I really do miss a lot of the connection in the sweet moments that we shared, and a lot of the things that we were able to do together. But I think this is a dangerous thing, romanticizing things this soon. I still find that some of the places where it hurt me a lot are still sore. I try not to avoid things that remind me of her, but I don't really want to see anything about VALORANT or VRChat. It's weird to have tied myself so closely to someone who gave me so much doubt and anxiety. It's weird the lack of self-respect/self-love that I had. I think I wanted a relationship so desperately, and I wanted it to work so badly that I kept telling myself that what happened was just a fluke over and over again. But at the same time that doesn't make it any different, how nice and safe it felt to wake up to her. Waking up in the middle of the night, rolling over and pulling her arm over me, and getting to be hugged and cuddled to bed. Having someone that would lay on me. Feeling her hands pushing on my body in her shitty attempts at massages. It's hard because she wasn't a great partner in a lot of different ways, but I think she did try. And at the end of the day someone can be good but not good enough. And I guess I just have a higher bar. I think she will have another partner that is maybe a little bit less mentally dominant, that can coexist with her a little bit better, and things can be more her speed. And I think she'll be happy and I think her partner will be very happy with her.
She has a very kind heart. Just a bit naïve and with some growing to do. And I kind of feel like I grabbed her and dragged her along at my speed in life, and I think that's not something she was ready for. We really are at different stages of life. In more ways than one. And I don't miss having to almost regress myself in several different ways to match a little bit more. And I really like the stability that I have now. But I do mourn the future that I had planned and hoped for. I kept telling myself that people are just young right now, and if you give it a little bit of time people will mature more and grow. And I think that's true, but at the same time I don't know what I can even expect.

It’s a weird thing to me, I feel like in a lot of ways I consider myself exceptional, but at the same time I have problems with my self-worth. Sometimes I just wish that I was loved as a child and I wouldn’t have to worry about trying to figure out a concept that I never could believe. Like how could you fucking tell me that someone could just love you no matter what. Like even if you did bad at things, or even if you fucked up or even if you asked for help or a fucking hug sometimes, they would actually give it to you? Like if you told your parent that you were hurting, they would care? What a stupid fucking fantasy that is. And I know that it’s reality for a lot of people, but I just almost want to refuse the fact that it exists. The grapes I cannot taste must be sour. But I fucking just wish I could have been loved, not even for how much nicer it would’ve made the rest of my life, and maybe not making me try to kill myself. But even just for the fact that I could see myself as someone worth loving to myself, and to others. Because I say that I love myself and I think I do, but at the same time when I think of anyone else it’s almost like the only thing I should be loved for is either value, or loyalty from value I’ve already provided. And that fucking hurts to go through life that way.

 

from Two Sentences

Another surprisingly social day chatting with neighbors. The sun shone just long enough for me to run my long run too — what luck.

 

from 下川友

I had no more use for this room, so I decided to leave. My face had taken on a thoroughly fairy-tale quality, which suited the act of walking out the door. The door never had a lock to begin with; I could have left whenever I felt like it.

With my hand on the door, I look back. In the room: a large wooden desk and a single wooden chair. A clock hung on the wall.

While living here I never gave it a thought, but now that I'm leaving, the layout makes me think it may not have been a bad room after all.

I let go of the doorknob and decided to take one slow lap around the small room.

The floor creaks underfoot. The sound of the floor seems louder than usual.

My breathing, too. In and out. Deliberately, I play a little game of inhaling once more right after inhaling. With every breath, my belly grows warm.

The light coming through the window is pale blue. It's still morning, but if I keep dawdling like this it will turn orange before I know it; life is busy that way.

I drink from the water bottle tucked into my backpack. Rather than the water, I taste the inside of my own mouth. I usually stick to 500 ml bottles, but I remember that there are people in this world who drink straight from 2 L bottles. I think everyone has vaguely noticed that a regular serving of rice bowl tastes better than a large one. Water is the same: 500 ml tastes better than 2 L. Though if you're pouring into a glass, it tastes better poured from the 2 L bottle.

My weight rests more on my left leg than my right. Or rather, I have the sense that my right foot can't brace against the ground. Even when I lift my left foot and stand on the right alone, my center of gravity still leans left. It's a mere sense of wrongness; a doctor wouldn't have anything to say about it. Resolving a vague unease is far harder than dealing with an obvious injury or pain.

I feel as if something like a chip has been implanted in the back of my neck. Having willingly stayed so long in such a plain room, it wouldn't be strange if someone really had implanted one. But no chip has actually been implanted. Still, the sensation that a chip is there: my brain visualizing for itself that my memories matter.

An ambulance sounds in the distance. Someone is being carried away, and I have no way of knowing whether that person was saved. The sound of ambulances echoes every day, yet it never directly touches my life. What do the people carried off in ambulances usually do with their days?

Well then; having made my lap of the room, I head outside. I say "farewell" to the dust on the floor, turn my sad face toward the house and my resolute face toward the outside, and with no destination, having decided only to walk straight ahead, I leave the house behind.

 

from Mitchell Report

A digital painting of a person walking down a sunlit path through a vibrant, colorful valley filled with flowers and trees. The trees have large, floating app icons instead of leaves, including icons resembling Twitter and other social media or tech platforms. The scene is bathed in warm, golden light with a dreamy, painterly texture, and butterflies flutter in the sky near the glowing sun.

A lone traveler embarks on a journey through a vibrant valley of blogging platforms, seeking the perfect path to share their voice and stories with the world.

Okay, it has been one year since I joined Micro.blog and Scribbles.page, and just over a year since I joined Write.as. I thought I would review all three services with a clear winner, a hard “can't wait for my subscription to end and won't be renewing,” and a dark horse.

I joined all three within months of each other looking to get rid of my InMotionHosting web host and get away from WordPress. I didn't like the direction that Matt Mullenweg was heading and didn't want to get burned like I did with Elon Musk and Twitter. Twitter was a special place for me as I refused to use any Zuckerberg product, especially since he ruined Instagram.

Now with the history out of the way, here we go.

Write.as — I joined the free tier in October/November 2024 and was initially impressed by its simplicity compared to WordPress. I like paying for services ahead of time, so I bought the five-year plan. That was buyer's regret.

Customization is where Write.as falls apart. Anything beyond typing and publishing requires contorting CSS and JavaScript, and even then there are limits. The rich editor is buggy and loses formatting if you switch between rich and plain text modes.

The platform feels stagnant. Post preview was first requested in October 2018. Over seven years later, it finally shipped, but only within the plain-text editor, and it does not account for any custom CSS. Users are asking for more than a simple text preview: they want to actually see how the post is going to look live. Support has historically been slow, though the owner has recently brought on some help.

The sole owner has been transparent about his shifting priorities. He took a sabbatical from development in 2022 and has written about moving toward other creative pursuits. In recent blog comments, Matt acknowledged taking mental health breaks “at different points over the years” and has even considered succession planning. While his transparency is commendable, paying customers are left wondering when development will resume in earnest.

Photo integration through Snap.as is frustrating. If you want picture galleries, you have to pay extra, but you can't even embed them in Write.as posts. In September 2025, the owner asked users what they'd want galleries to look like, saying “the design is the biggest thing holding us back.” After years as a paid feature, basic functionality is still missing.

The price has increased from $7 to $9 a month, though the proprietor regularly runs promotions and you can pick up 5 years for $180. For comparison, Micro.blog's $5 plan includes blog hosting, custom domain, cross-posting, native apps, and photo sharing. Their $10 Premium plan offers even more. You get dramatically more features and active development for less money.

Pros:

  • Excellent Fediverse integration. Posting, editing, and deleting all sync reliably. Any instance can follow Write.as blogs, including Mastodon, Sharkey, and Misskey.
  • Sole owner operated, appealing to those who prefer independent services.
  • Simple and minimalist if all you need is to type and publish.
  • If you get hosting on sale it is a passable deal as long as you know the limitations.

Cons:

  • No proper post preview after seven years of requests.
  • Rich editor is buggy and limited.
  • Customization requires CSS/JavaScript expertise.
  • Features disjointed across separate services with extra charges.
  • Development has stagnated.
  • Price increased to $9/month while offering less than competitors.
  • Owner's priorities have shifted away from the platform.

Micro.blog is almost the opposite of Write.as in all ways, and 90 percent of those differences are positive.

I've already reviewed Micro.blog extensively in this blog post here, so I won't rehash everything here. The premium plan is only $1 more than Write.as ($10 vs $9), but you get dramatically more value. Micro.blog is constantly evolving, and the owner maintains development pace while keeping the platform stable and minimally disrupted. Also, keep in mind every tier has different features.

Pros:

  • Active weekly development with new features, bug fixes, and improvements shipped constantly
  • Extensive feature set: podcasts, newsletters, photo galleries, image hosting, video hosting, multiple blogs, and audio transcription
  • Cross-posting to 10+ platforms (Mastodon, Bluesky, Medium, LinkedIn, Tumblr, Flickr, Nostr, Pixelfed, Threads, PeerTube)
  • Better customization: themes, custom CSS/JavaScript, Hugo-based with plug-ins
  • Import from WordPress, Medium, Ghost, Substack, Write.as, Instagram, Twitter archives, and more
  • Responsive support with dedicated Community Manager
  • Multiple pricing tiers ($5 basic, $10 premium, $20 Studio) offering more features than Write.as at comparable or lower prices
  • Strong community guidelines with human-curated “Discover” section

Cons:

  • Awkward Fediverse implementation not fully compatible with non-Mastodon instances
  • Features not easily discoverable
  • Android apps lagging behind Mac/iOS apps

I would easily recommend this service. It is probably the most well-rounded and actively maintained platform out there if you need these features.

Scribbles.page is my dark horse. This is a managed blog hosting service with excellent design. Vincent Ritter, the owner and designer, has been on a tear lately modernizing the platform and adding features.

The only drawback for me is the lack of Fediverse integration and POSSE. But it makes up for it in every other respect and serves as a nice companion to Micro.blog with built-in cross-posting support.

Vincent is developing a robust API based on JSON and Micropub standards. The only thing I see missing is media uploading, which he is still working on. The pace of changes on Scribbles has been steady and everything is polished.
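For readers unfamiliar with Micropub, the spec's basic "create" operation is a form-encoded POST to a blog's Micropub endpoint, authorized with a bearer token. Here is a minimal Python sketch that builds (but does not send) such a request; the endpoint URL and token are placeholders, and this illustrates generic Micropub, not Scribbles' specific API:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_micropub_post(endpoint, token, content, name=None):
    """Build a form-encoded Micropub 'create' request for a new post."""
    fields = {"h": "entry", "content": content}
    if name:
        fields["name"] = name  # optional post title
    return Request(
        endpoint,
        data=urlencode(fields).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )

# Placeholder endpoint and token; a real client would discover the
# endpoint from the blog's HTML and obtain the token via IndieAuth.
req = build_micropub_post(
    "https://example.com/micropub",
    "XYZ123",
    "Hello from a Micropub client!",
    name="First post",
)
print(req.get_method(), req.full_url)
```

On success a Micropub server responds with `201 Created` and a `Location` header pointing at the new post's URL.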

A social feature unique to the platform is something Vincent calls “Scribbles,” which lets readers send short private messages to blog owners about their posts. It's more casual than email and completely privacy-friendly since scribbles are private notes between the sender and recipient, not publicly shared. The platform also features a nice explorer page where you can discover other blogs, and it's available via RSS feed. Vincent regularly announces software updates there, keeping users informed about new features and improvements. If I had found this before Write.as or Micro.blog, this might have been my only purchase, and the Fediverse could have been implemented via n8n, IFTTT, or a custom solution.

I also appreciate that he plans to offer self-hosting for Lifetime members, and there is a Lifetime membership option instead of subscriptions, which addresses my subscription fatigue. One last detail that might matter to some: it is hosted and based in Europe.

The Verdict

After one year on all three platforms, here's my decision:

Write.as is the “won't be renewing.” Unless it drastically changes course in the next five years, it is too limited and stagnant. While the Fediverse integration is excellent, that alone doesn't justify the price when competitors offer more features and active development. Only consider it if you get a significant promotional discount and need nothing beyond basic blogging with ActivityPub.

Micro.blog is the clear winner. It delivers exceptional value with constant development, extensive features, and strong community management. The platform continues to evolve while remaining stable. Despite everything increasing in price lately, I am surprised Micro.blog hasn't raised its rates. I wholeheartedly recommend it.

Scribbles.page is the dark horse. If you don't need federation features and value gorgeous design with modern blogging standards, this is a compelling choice. The lifetime membership option and Vincent's impressive development momentum make it worth serious consideration.


Links may be shortened via mtribe.link for cleaner formatting. All links redirect to their original destinations.

#opinion #review

 

from SmarterArticles

Somewhere inside a foundation model trained on millions of supposedly de-identified electronic health records, a ghost lingers. Not a literal one, of course, but a data spectre: the clinical history of a patient whose records were stripped of names, addresses, and social security numbers before ever touching an algorithm. The model was never supposed to remember this person. It was supposed to learn medicine. Instead, it learned a patient.

This is the memorisation problem, and it is rapidly becoming one of the most consequential privacy challenges in clinical artificial intelligence. As healthcare systems worldwide rush to deploy foundation models trained on vast troves of electronic health record data, researchers are discovering that de-identification, the process long treated as the gold standard for protecting patient privacy, may not be enough. These models do not merely generalise medical knowledge from the populations they study. In some cases, they memorise individual patient records with enough fidelity that an adversary armed with the right prompts could extract sensitive clinical details about real people.

The implications are profound. A patient with a rare autoimmune disorder, an individual whose HIV status was recorded during a hospital visit, a person who sought treatment for substance use: these are precisely the kinds of patients whose records are most vulnerable to memorisation, because their clinical profiles are, by definition, unusual. And unusualness is exactly what makes data memorable to a machine learning model.

When Models Remember What They Should Forget

In October 2025, a team of researchers led by Sana Tonekaboni, a postdoctoral fellow at the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, published a paper that would reframe how the clinical AI community thinks about privacy. The study, “An Investigation of Memorization Risk in Healthcare Foundation Models,” was presented at the 2025 Conference on Neural Information Processing Systems (NeurIPS) in San Diego. Co-authored with Lena Stempfle, Adibvafa Fallahpour, Walter Gerych, and Marzyeh Ghassemi, an associate professor at MIT in Electrical Engineering and Computer Science, the paper introduced a suite of black-box evaluation tests designed to probe whether foundation models trained on structured electronic health records were genuinely generalising medical knowledge or simply recalling individual patients.

The distinction matters enormously. A model that generalises has learned, say, that patients over 65 with elevated troponin levels and chest pain are at high risk of myocardial infarction. That knowledge draws on thousands of patient records and reflects a genuine population-level pattern. But a model that memorises has locked onto a singular patient record, and when prompted with the right combination of attributes, it can reproduce details about that specific individual. “Knowledge in these high-capacity models can be a resource for many communities,” Tonekaboni explained, “but adversarial attackers can prompt a model to extract information on training data.”

The framework the team developed includes methods for probing memorisation at both the embedding level, where models encode patient data as numerical representations, and the generative level, where models produce clinical outputs. Crucially, the researchers designed their tests to distinguish between benign generalisation and genuinely harmful memorisation. Not all information leakage is created equal. If a model reveals that a particular patient profile tends to involve elderly males, that reflects population statistics. If it reveals that a specific combination of laboratory values, timestamps, and diagnostic codes corresponds to a single identifiable individual, that constitutes a privacy breach.
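To make the idea concrete, here is a minimal, entirely hypothetical sketch of one way a black-box memorisation probe can work (this is not the authors' actual test suite). The intuition: if the model assigns a candidate record a loss far below what it assigns a comparable reference population, the record was plausibly memorised rather than merely well modelled. The `model_loss` function below is a toy stand-in for real query access:

```python
import math

# Toy stand-in for black-box query access to a trained model: it returns a
# per-record loss. The one record the model has "memorised" (a hypothetical
# rare-diagnosis patient) scores anomalously low; everything else sits near
# the population average.
def model_loss(record):
    memorised = {("rare_dx_731", 84, "F")}
    if record in memorised:
        return 0.3                    # suspiciously easy for the model
    _, age, _ = record
    return 2.0 + 0.1 * (age % 5)      # ordinary, population-level loss

def flag_memorised(candidates, reference, z_threshold=3.0):
    """Flag candidates whose loss is far below the reference population's."""
    ref = [model_loss(r) for r in reference]
    mean = sum(ref) / len(ref)
    std = math.sqrt(sum((x - mean) ** 2 for x in ref) / len(ref)) or 1e-9
    return [r for r in candidates
            if (model_loss(r) - mean) / std < -z_threshold]

reference = [("common_dx", age, sex) for age in range(40, 80) for sex in "MF"]
candidates = [("rare_dx_731", 84, "F"), ("common_dx", 55, "M")]
print(flag_memorised(candidates, reference))  # only the rare record is flagged
```

The same contrast drives real membership-inference attacks: a record the model finds "too easy" relative to its peers is evidence that the record, not just the population pattern, was learned.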

The findings were sobering. The researchers demonstrated that the more prior knowledge an attacker possesses about a particular patient, the more likely the model is to leak additional information. Patients with rare conditions proved especially vulnerable, precisely because their clinical signatures are distinctive enough to be picked out from the broader training distribution. And while some categories of leaked information, such as a patient's age or gender, represent relatively low risk, others carry serious consequences. Diagnoses related to HIV, substance use disorders, or mental health conditions were flagged as potentially harmful disclosures that could damage a person's employment prospects, insurance coverage, or social standing.

Ghassemi, the paper's senior author, offered a practical framing of the threat. “We really tried to emphasise practicality here,” she noted. “If an attacker has to know the date and value of a dozen laboratory tests from your record in order to extract information, there is very little risk of harm. If I already have access to that level of protected source data, why would I need to attack a large foundation model for more?” The question cuts to the heart of the adversarial calculus: how much prior knowledge makes an attack feasible, and at what point does memorisation cross from theoretical vulnerability to practical danger?

The Adversary's Toolkit

To understand the scale of the memorisation threat, it helps to look beyond healthcare-specific models to the broader landscape of large language model security research. The foundational work in this space comes from Nicholas Carlini and colleagues, whose research at Google DeepMind and collaborating institutions has systematically demonstrated that language models memorise and can be made to regurgitate their training data.

In a landmark 2021 paper published at USENIX Security, Carlini, along with Florian Tramer, Eric Wallace, and others, showed that an adversary could extract hundreds of verbatim text sequences from GPT-2, including personally identifiable information such as names, phone numbers, and email addresses. The attack required no access to the training data itself, only the ability to query the model. By 2023, the same research group, now including Milad Nasr, Daphne Ippolito, and Christopher Choquette-Choo, had scaled their methods dramatically. Their paper “Scalable Extraction of Training Data from (Production) Language Models” demonstrated that an adversary could extract gigabytes of training data from both open-source models and commercial systems including ChatGPT.

The 2023 work introduced a particularly concerning technique: the divergence attack. By crafting prompts that cause a model to diverge from its normal conversational behaviour, the researchers achieved training data emission rates up to 150 times higher than those observed during typical usage. The attack essentially tricks aligned models into reverting to their pre-alignment behaviour, at which point they begin outputting memorised sequences with alarming fidelity.

What does this mean for clinical AI? The attack surface is substantial. An electronic health record foundation model trained on millions of patient records contains, by design, sensitive clinical information. Even if the records have been de-identified according to HIPAA standards, the model itself may have encoded enough information to reconstruct individual patient profiles when queried with the right combination of clinical attributes. A rare diagnosis combined with a specific age range and a distinctive pattern of laboratory values could function as a fingerprint, allowing an attacker to extract additional details that the de-identification process was supposed to protect.

The level of prior knowledge required for a successful attack varies depending on the model architecture, training methodology, and the patient population in question. Research on general-purpose language models suggests that model size strongly correlates with memorisation: larger models, with their greater capacity to store training data patterns, are more vulnerable to extraction attacks. Given that clinical foundation models are trending towards ever-larger architectures to capture the complexity of medical knowledge, this scaling relationship poses a direct tension between clinical utility and patient privacy.

De-identification Was Never Bulletproof

The memorisation problem does not exist in isolation. It builds upon decades of research demonstrating that de-identification of health data has always been more fragile than regulators and healthcare institutions have assumed.

The seminal work in this field belongs to Latanya Sweeney, now the Daniel Paul Professor of the Practice of Government and Technology at the Harvard Kennedy School. In 1997, while still a graduate student at MIT, Sweeney demonstrated that she could re-identify the medical records of then-Massachusetts Governor William Weld by cross-referencing publicly available voter registration data with de-identified hospital discharge records. The records had been stripped of names, addresses, and social security numbers, but they retained date of birth, gender, and ZIP code. Sweeney showed that just these three attributes were sufficient to uniquely identify an individual.

Her subsequent research revealed that 87 per cent of the United States population could be uniquely identified using only ZIP code, date of birth, and gender, a finding that helped shape the HIPAA Privacy Rule's Safe Harbour de-identification standard. Yet even with these protections in place, re-identification remains possible. A 2018 study demonstrated that patients could be re-identified from HIPAA-compliant de-identified datasets by cross-referencing them with publicly available newspaper articles about hospitalisations.
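Sweeney's observation is easy to reproduce in miniature. The sketch below, using toy data rather than any real cohort, counts how many records in a "de-identified" table are unique on the (ZIP, date of birth, gender) triple; this is the standard k-anonymity check:

```python
from collections import Counter

# Toy cohort: names stripped, but the quasi-identifiers Sweeney highlighted
# (ZIP code, date of birth, gender) remain. All values are invented.
records = [
    ("02138", "1945-07-31", "M"),
    ("02138", "1945-07-31", "M"),
    ("02139", "1962-01-15", "F"),
    ("02141", "1980-03-02", "F"),
    ("02141", "1991-11-23", "M"),
]

def k_anonymity_report(rows):
    """Count records unique on their quasi-identifier tuple.

    A record that shares its (ZIP, DOB, gender) combination with no one
    else can be re-identified by anyone holding an external dataset, such
    as a voter roll, keyed on the same attributes.
    """
    counts = Counter(rows)
    unique = sum(1 for r in rows if counts[r] == 1)
    k = min(counts.values())  # the dataset's k-anonymity level
    return unique, k

unique, k = k_anonymity_report(records)
print(f"{unique}/{len(records)} records unique on quasi-identifiers; k = {k}")
```

In this toy table three of the five records are singletons, so the dataset is only 1-anonymous: exactly the situation that let Sweeney pick Governor Weld's record out of the discharge data.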

A 2025 paper published in AI and Ethics highlighted the particular challenge of clinical free text. Structured data fields like diagnosis codes and laboratory values can be systematically scrubbed, but clinical notes contain narrative descriptions that may include identifying details embedded in the prose: references to a patient's occupation, family circumstances, or the name of a referring physician. De-identification tools, including those powered by natural language processing, struggle with the ambiguity and variability of clinical language.

The emergence of foundation models adds a new dimension to this longstanding vulnerability. Traditional re-identification attacks required an adversary to obtain and cross-reference multiple external datasets. Memorisation attacks against AI models require only the ability to query the model itself. The model becomes both the target and the pathway to the data it was trained on, collapsing what was previously a multi-step process into a single interaction. A 2025 study published in PMC on contemporary threats to anonymised healthcare data warned that AI-based techniques can now infer identity from traditionally de-identified sources using data such as electrocardiograms or patterns of gait, data types that were never considered identifiers under existing privacy frameworks.

How AI Threats Compare with Conventional Cybersecurity Risks

The memorisation vulnerability exists within a broader landscape of healthcare cybersecurity threats that are already severe and worsening. Understanding how AI-specific risks compare with conventional attack vectors is essential for calibrating the response.

The numbers from conventional healthcare cybersecurity are staggering. In 2024, 259 million Americans had their protected health information compromised through hacking incidents, a figure driven overwhelmingly by the Change Healthcare ransomware attack. That single breach, perpetrated by the ALPHV/BlackCat ransomware group, affected approximately 190 million individuals after attackers exploited a Citrix remote access service that lacked multi-factor authentication. UnitedHealth Group, Change Healthcare's parent company, reported total cyberattack impacts of 2.457 billion dollars in the first nine months of 2024 alone.

The healthcare sector has become the most targeted industry for ransomware, accounting for 17 per cent of all ransomware attacks across sectors. Complete protected health information packages command prices of up to 1,200 dollars per record on criminal marketplaces, roughly 80 times the value of stolen credit card data. Over 80 per cent of stolen health records in 2024 were taken not from hospitals directly but from third-party vendors, software services, and business associates, highlighting the systemic nature of the vulnerability.

Against this backdrop, AI memorisation attacks represent a qualitatively different kind of threat. Conventional breaches involve exfiltrating stored data, breaking through perimeters, and exploiting network vulnerabilities. Memorisation attacks exploit the model itself as an unwitting data store. There is no firewall to breach, no database to penetrate. The sensitive information is encoded within the model's parameters, distributed across billions of numerical weights in ways that resist simple detection or removal. An attacker needs nothing more than API access to the model, which in many clinical deployment scenarios would be available to any authorised user of the system.

The two categories of threat also differ in their detectability. A ransomware attack produces obvious signs: encrypted systems, operational disruption, ransom demands. A memorisation extraction attack can be conducted through queries that resemble normal clinical usage, making it far harder to detect. Medical identity theft already takes an average of 24 months to discover, compared with four months for financial fraud. Memorisation-based data extraction could extend this detection timeline even further, because the data never technically leaves the system in the conventional sense.

Yet it would be a mistake to treat AI memorisation as the dominant threat. The scale of conventional breaches dwarfs anything that memorisation attacks have demonstrated in practice. The Change Healthcare incident compromised the records of roughly 190 million people in a single event. Memorisation attacks, by contrast, tend to target individual patients or small groups, requiring specific prior knowledge about each target. The threat from memorisation is more surgical than it is sweeping, but for the individuals affected, particularly those with rare conditions or stigmatising diagnoses, the consequences could be devastating.

The Regulatory Patchwork

The regulatory response to AI memorisation risks in healthcare remains fragmented and, in many respects, inadequate. Existing frameworks were designed for a world where privacy threats came from databases, not algorithms.

In the United States, HIPAA remains the foundational framework for protecting health information, but it was enacted in 1996, long before the emergence of clinical AI. The proposed update to the HIPAA Security Rule, published by the Department of Health and Human Services in January 2025, represents the first major revision in over a decade. The proposal eliminates the distinction between “required” and “addressable” security controls, mandates encryption for all electronic protected health information, and introduces multi-factor authentication requirements. Critically, it establishes that data used in AI training, prediction models, and algorithm development by regulated entities falls under HIPAA's protections.

However, the proposed rule does not specifically address memorisation risks. It treats AI systems primarily through the lens of conventional cybersecurity: access controls, encryption, audit logging. These measures are necessary but insufficient for a threat that is embedded within the model's learned representations rather than stored in a conventional database. The public comment period for the proposed rule closed in March 2025 with nearly 5,000 submissions, and the final rule is expected in late 2025 or 2026. Whether it will address the unique characteristics of AI memorisation remains uncertain.

The European Union's approach through the AI Act offers somewhat more specificity. The regulation classifies AI systems used in healthcare as high-risk, subjecting them to requirements for data governance, transparency, human oversight, and post-market monitoring. From August 2026, most obligations will apply, with full compliance for high-risk medical device AI required by August 2027. The Medical Device Coordination Group published guidance document MDCG 2025-6 to clarify how the AI Act interacts with existing medical device regulations under the MDR and IVDR frameworks.

The AI Act's data governance requirements are particularly relevant to memorisation. High-risk AI manufacturers must implement data governance practices appropriate to the system's intended purpose, including attention to possible biases and privacy risks. The transparency obligations require that systems be designed to allow deployers to interpret outputs and use systems appropriately. These provisions create a regulatory foundation that could, in principle, require memorisation testing before deployment. But the specifics of implementation remain to be worked out through standards and guidance that have not yet been finalised.

At the state level in the United States, a patchwork of legislation is emerging. By 2025, over 250 AI-related bills had been introduced across more than 34 states. Texas enacted the Responsible Artificial Intelligence Governance Act in June 2025, requiring healthcare practitioners to provide patients with written disclosure of AI use in diagnosis or treatment. Colorado and Utah have enacted their own comprehensive AI laws. The result is a fragmented landscape that creates compliance challenges for healthcare organisations operating across jurisdictions whilst providing inconsistent protection for patients.

Technical Safeguards and Their Limits

The technical toolkit for mitigating memorisation risks is growing, though no single approach offers a complete solution.

Differential privacy, the mathematical framework developed by computer scientists including Cynthia Dwork of Harvard University, provides formal guarantees about information leakage during model training. By adding carefully calibrated statistical noise to the training process, differential privacy ensures that the model's outputs reveal almost nothing about any individual training example. Recent research has demonstrated that healthcare AI models can achieve 96.1 per cent accuracy with a privacy budget of epsilon equals 1.9, suggesting that strong privacy and high clinical performance can coexist.
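The core mechanism can be sketched in a few lines. This is a toy, dependency-free illustration of the per-example clip-and-noise step at the heart of DP-SGD; the privacy accounting that turns the noise level into a concrete epsilon is omitted, and the parameter values are illustrative, not those of the study above.

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                lr=0.1, seed=0):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    sum, add Gaussian noise calibrated to the clip bound, average."""
    rng = random.Random(seed)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Clipping bounds any single patient's influence on the update.
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += g[i] * scale
    # Noise scaled to the clip bound masks individual contributions.
    sigma = noise_multiplier * clip_norm
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    n = len(per_example_grads)
    return [-lr * x / n for x in noisy]  # parameter update

# Two toy per-example gradients; setting noise_multiplier=0.0 makes the
# step deterministic, which is useful for sanity checks.
update = dp_sgd_step([[0.5, 2.0], [3.0, -1.0]])
```

Because every patient's gradient is clipped before noise is added, no single record can move the model by more than a bounded, noise-obscured amount, which is exactly the property that limits memorisation.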

Yet differential privacy has limitations. The privacy-utility trade-off is real: stronger privacy guarantees require more noise, which can degrade model performance on clinical tasks where accuracy directly affects patient outcomes. The United States Census Bureau's experience with differential privacy in the 2020 census provides a cautionary example. Research found that the technique introduced disproportionate discrepancies for rural and non-white populations, raising concerns about equity impacts that would be equally relevant in clinical settings where underrepresented populations already face disparities in care.

Federated learning offers another approach, keeping patient data decentralised across institutions whilst training a shared model. Rather than aggregating raw data on a central server, each participating hospital trains the model locally and shares only model updates. Yet research has shown that these model updates themselves can leak information. Gradient inversion attacks can reconstruct substantial portions of original training data from the mathematical updates exchanged during federated learning. A study titled “Two Models are Better than One: Federated Learning Is Not Private for Google GBoard Next Word Prediction” demonstrated that user sentences could be reconstructed from model updates alone.
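A minimal sketch of the federated averaging loop makes the leakage surface visible. The model, hospitals, and records below are all hypothetical toys; note that when a hospital holds a single record, the shared delta is a direct function of that record's features, which is precisely the structure gradient inversion attacks exploit.

```python
def local_update(global_weights, local_data, lr=0.5):
    """One round of local training at a hospital on a toy linear model.
    Only the weight delta leaves the institution."""
    w = list(global_weights)
    for x, y in local_data:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    # With a single record, this delta equals lr * error * features:
    # a direct function of the patient's data, and the very object
    # that gradient inversion attacks analyse.
    return [wi - gi for wi, gi in zip(w, global_weights)]

def federated_average(global_weights, deltas):
    """Server-side FedAvg: apply the mean of all hospital deltas."""
    dim = len(global_weights)
    n = len(deltas)
    mean = [sum(d[i] for d in deltas) / n for i in range(dim)]
    return [g + m for g, m in zip(global_weights, mean)]

# Two hypothetical hospitals, one record each; raw data never moves.
w0 = [0.0, 0.0]
deltas = [
    local_update(w0, [([1.0, 0.0], 2.0)]),
    local_update(w0, [([0.0, 1.0], 3.0)]),
]
w1 = federated_average(w0, deltas)
print(w1)  # [0.5, 0.75]
```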

Machine unlearning, the targeted removal of specific patient data from a trained model, has emerged as a conceptually appealing response to memorisation. The approach aligns with the General Data Protection Regulation's right to be forgotten, which allows individuals to request deletion of their personal data. Research presented at MICCAI 2025 introduced Forget-MI, a method for unlearning multimodal medical data from trained architectures. A December 2025 testbed called MedForget modelled hospital data as a nested hierarchy, enabling fine-grained unlearning assessment across multiple organisational levels.

But machine unlearning faces fundamental practical barriers. Retraining a model from scratch without specific patient data remains the only guaranteed path to complete unlearning, and for large foundation models, retraining can take weeks and cost millions of dollars. Approximate unlearning methods are faster but cannot guarantee that all traces of a patient's data have been removed. Moreover, if certain demographic groups are more likely to exercise their right to be forgotten, the resulting training data could become skewed, potentially worsening the very biases that clinical AI is supposed to help address. As a Health Affairs analysis noted, machine unlearning “is computationally intensive, scientifically immature, and potentially destabilising to models that must remain reliable across a wide range of clinical inputs.”
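The gap between approximate and exact unlearning is visible even in a toy model. The sketch below uses gradient ascent on the forget set, a common baseline in the unlearning literature, and is emphatically not the method of Forget-MI or MedForget; the data, learning rates, and step counts are illustrative.

```python
def grad(w, x, y):
    """Gradient of squared error for the 1-D linear model y ≈ w * x."""
    return 2 * (w * x - y) * x

def train(data, w=0.0, lr=0.01, epochs=200):
    """Plain SGD training from scratch."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * grad(w, x, y)
    return w

def unlearn(w, forget_set, lr=0.01, steps=10):
    """Approximate unlearning via gradient ascent on the forget set.
    Fast, but offers no guarantee that all traces are removed, and
    too many ascent steps destabilise the model entirely."""
    for _ in range(steps):
        for x, y in forget_set:
            w += lr * grad(w, x, y)
    return w

retain = [(1.0, 2.0), (2.0, 4.0)]      # ordinary records, true slope 2
outlier = (1.0, 10.0)                  # one distinctive patient record
w_full = train(retain + [outlier])     # model fitted on everything
w_approx = unlearn(w_full, [outlier])  # cheap: nudges w back towards 2
w_exact = train(retain)                # gold standard: full retraining
```

The approximate result lands near, but not at, the retrained-from-scratch value, and doubling the number of ascent steps would overshoot it badly: a miniature version of the instability the Health Affairs analysis warns about.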

Data deduplication, the removal of repeated training examples, provides a simpler but partial mitigation. Research has consistently shown that models are more likely to memorise data that appears multiple times in training sets. Curating and deduplicating training data can reduce memorisation rates, though it cannot eliminate the risk entirely for patients whose clinical profiles are inherently distinctive.
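A hash-based deduplication pass is straightforward to sketch. The records and normalisation rule below are illustrative; real pipelines also need near-duplicate detection, and no amount of deduplication protects the patient whose record is unique to begin with.

```python
import hashlib

def normalise(record: str) -> str:
    """Canonicalise a note so trivial variants hash identically."""
    return " ".join(record.lower().split())

def deduplicate(records):
    """Keep one copy of each distinct record. Repeated examples are the
    ones models memorise most readily, so dropping duplicates lowers
    (but does not eliminate) memorisation risk."""
    seen, unique = set(), []
    for r in records:
        h = hashlib.sha256(normalise(r).encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            unique.append(r)
    return unique

notes = [
    "Dx: hypertension. Rx: lisinopril 10mg.",
    "dx: hypertension.  rx: lisinopril 10mg.",  # trivial duplicate
    "Dx: Fabry disease. Rx: agalsidase beta.",  # distinctive: dedup cannot protect it
]
print(len(deduplicate(notes)))  # 2
```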

Building an Evaluation Framework That Works

The MIT team's work points towards what a comprehensive evaluation framework for clinical AI memorisation might look like. Their open-source toolkit, validated on a publicly available electronic health record foundation model, provides a starting point for systematic privacy assessment before model deployment.

The framework's key innovation is contextualising memorisation within healthcare. Not all information leakage constitutes a meaningful privacy risk. A model that reveals population-level patterns, such as the typical age distribution of patients with a particular condition, is doing exactly what it was designed to do. The danger arises when a model's outputs can be traced to a specific individual, particularly when the leaked information includes sensitive diagnoses or treatment histories.

Tonekaboni emphasised the importance of practical evaluation. “This work is a step towards ensuring there are practical evaluation steps our community can take before releasing models,” she said. The framework assesses both embedded memorisation, where patient information is encoded in the model's internal representations, and generative memorisation, where the model can be prompted to produce patient-specific outputs. By testing across both dimensions, the framework provides a more complete picture of privacy risk than either approach alone.
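A generative memorisation probe of the kind such a framework might include can be sketched as follows. This is not the MIT toolkit itself: the toy model, records, prefix length, and exact-match scoring rule are all hypothetical simplifications for illustration.

```python
def generative_memorisation_rate(model, records, prefix_len=4):
    """Fraction of training records the model completes verbatim when
    prompted with their first prefix_len tokens -- a crude generative
    memorisation probe."""
    hits = 0
    for rec in records:
        tokens = rec.split()
        prefix = " ".join(tokens[:prefix_len])
        suffix = " ".join(tokens[prefix_len:])
        if model(prefix) == suffix:
            hits += 1
    return hits / len(records)

# Toy model: has memorised record 0 verbatim, paraphrases everything else.
train_records = [
    "patient 071 diagnosed with Gaucher disease type 1",
    "patient 082 diagnosed with seasonal allergic rhinitis",
]
_memorised = {
    " ".join(train_records[0].split()[:4]): " ".join(train_records[0].split()[4:]),
}
toy_model = lambda prefix: _memorised.get(prefix, "condition managed as outpatient")

rate = generative_memorisation_rate(toy_model, train_records)
print(rate)  # 0.5: one of two records is reproduced verbatim
```

A production framework would replace exact matching with softer similarity measures and would pair this generative probe with embedding-level tests, but the workflow, prompt with a prefix and score the completion against ground truth, is the same.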

For this kind of evaluation to become standard practice, it would need to be integrated into the regulatory approval process for clinical AI systems. Currently, most AI-enabled medical devices in the United States are cleared through the FDA's 510(k) pathway, which requires demonstration of substantial equivalence to a previously approved device but does not mandate independent clinical performance studies or privacy evaluation. A cross-sectional study of 903 FDA-approved AI devices found that clinical performance studies were reported for only approximately half at the time of regulatory approval. Memorisation testing is not part of the approval process at all.

The Coalition for Health AI (CHAI), on whose working group Ghassemi serves, represents one effort to establish industry-wide standards for trustworthy health AI. The NIST AI Risk Management Framework provides a complementary structure, addressing validity, reliability, safety, security, explainability, privacy, and fairness. Integrating memorisation evaluation into these existing frameworks would be more practical than creating entirely new regulatory apparatus, but it requires agreement on what constitutes acceptable levels of memorisation risk, a question that remains open.

Rare Conditions, Outsized Vulnerability

The memorisation problem falls hardest on the patients who can least afford it. Individuals with rare diseases, by definition, have clinical profiles that stand out from the broader population. Their diagnostic codes appear infrequently in training data. Their laboratory value patterns are unusual. Their treatment trajectories are distinctive. All of these characteristics make their records more memorable to a model and more extractable by an adversary.

The same is true for patients with stigmatising diagnoses. HIV status, substance use disorders, psychiatric conditions, and sexually transmitted infections all carry social consequences that extend far beyond the clinical encounter. Disclosure of these conditions can affect employment, housing, insurance, and personal relationships. De-identification was supposed to sever the link between these sensitive details and the individuals they describe. Memorisation threatens to re-forge that link through the model itself.

This disproportionate vulnerability raises equity concerns that mirror broader patterns in healthcare AI. Research has repeatedly shown that AI systems can perpetuate and amplify existing biases against marginalised populations. If memorisation risks are concentrated among patients with rare or stigmatising conditions, the privacy burden falls most heavily on those who are already underserved by the healthcare system.

Addressing this inequity requires targeted protections. Higher levels of differential privacy noise could be applied to training data involving sensitive diagnoses, at the cost of reduced model performance for those specific conditions. Rare disease patient records could be excluded from training sets entirely, though this would eliminate the clinical utility of foundation models for precisely the populations that stand to benefit most from AI-assisted care. Neither option is satisfactory, and the tension between privacy protection and clinical benefit for rare disease patients may prove to be one of the defining challenges of clinical AI governance.

What Genuine Protection Requires

The path from current vulnerability to genuine protection requires action across multiple domains simultaneously. No single technical safeguard, regulatory standard, or evaluation framework will suffice in isolation.

On the technical side, differential privacy during training should become the default rather than the exception for clinical foundation models. Memorisation evaluation, using frameworks like the one developed by Tonekaboni and colleagues, should be mandatory before any model is deployed in a clinical setting. Ongoing monitoring should be built into deployment infrastructure to detect potential memorisation-based extraction attempts in real time. And machine unlearning capabilities, however immature, should be developed and standardised so that patients can exercise meaningful control over the fate of their data within AI systems.

On the regulatory side, HIPAA needs to evolve beyond its current framework to address threats that are embedded within model architectures rather than stored in conventional databases. The EU AI Act's high-risk classification for healthcare AI provides a useful starting point, but implementation guidance must specifically address memorisation risks. Regulatory bodies including the FDA, the European Medicines Agency, and national health authorities need to incorporate memorisation testing into their approval and post-market surveillance processes.

On the institutional side, healthcare organisations deploying clinical AI must treat memorisation as a distinct category of risk requiring its own governance structures, audit procedures, and incident response plans. The conventional cybersecurity toolkit, with its emphasis on perimeter defence, encryption, and access control, is necessary but not sufficient for threats that live inside the model rather than outside the firewall.

The researchers behind the MIT study plan to expand their work to become more interdisciplinary, bringing in clinicians, privacy experts, and legal scholars. That instinct is exactly right. The memorisation problem sits at the intersection of computer science, medicine, law, and ethics, and solving it will require all four disciplines working in concert.

“There's a reason our health data is private,” Tonekaboni observed. “There's no reason for others to know about it.” That principle has guided health privacy law for decades. The question now is whether it can survive the age of foundation models trained on the very data it was designed to protect. The answer will depend on whether the clinical AI community treats memorisation as a fundamental design constraint rather than an afterthought, building privacy into the architecture of these systems from the ground up rather than bolting it on after deployment. The technology to do so exists. Whether the will and the regulatory momentum exist to mandate it remains the open question.


References and Sources

  1. Tonekaboni, S., Stempfle, L., Fallahpour, A., Gerych, W., and Ghassemi, M. “An Investigation of Memorization Risk in Healthcare Foundation Models.” arXiv:2510.12950, presented at NeurIPS 2025. https://arxiv.org/abs/2510.12950

  2. MIT News. “MIT scientists investigate memorization risk in the age of clinical AI.” January 5, 2026. https://news.mit.edu/2026/mit-scientists-investigate-memorization-risk-clinical-ai-0105

  3. Carlini, N., Tramer, F., Wallace, E., et al. “Extracting Training Data from Large Language Models.” USENIX Security 2021. https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting

  4. Nasr, M., Carlini, N., Hayase, J., et al. “Scalable Extraction of Training Data from (Production) Language Models.” arXiv:2311.17035, 2023. https://arxiv.org/abs/2311.17035

  5. Sweeney, L. “Simple Demographics Often Identify People Uniquely.” Carnegie Mellon University, Data Privacy Working Paper 3, 2000. https://dataprivacylab.org/people/sweeney/work/index.html

  6. Sweeney, L. “Risks to Patient Privacy: A Re-identification of Patients in Maine and Vermont Statewide Hospital Data.” Technology Science, 2018. https://techscience.org/a/2018100901/

  7. PMC. “Addressing contemporary threats in anonymised healthcare data using privacy engineering.” 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC11885643/

  8. Springer Nature. “What is the patient re-identification risk from using de-identified clinical free text data for health research?” AI and Ethics, 2025. https://link.springer.com/article/10.1007/s43681-025-00681-0

  9. HIPAA Journal. “Healthcare Data Breach Statistics.” https://www.hipaajournal.com/healthcare-data-breach-statistics/

  10. UnitedHealth Group. “UnitedHealth Group Updates on Change Healthcare Cyberattack.” April 22, 2024. https://www.unitedhealthgroup.com/newsroom/2024/2024-04-22-uhg-updates-on-change-healthcare-cyberattack.html

  11. HHS. “Changes Proposed to Strengthen HIPAA Security Rule.” January 2025. https://www.hhs.gov/hipaa/for-professionals/special-topics/de-identification/index.html

  12. Reed Smith. “The EU AI Act and Medical Devices: Navigating High-Risk Compliance.” 2025. https://www.reedsmith.com/our-insights/blogs/viewpoints/102kq35/the-eu-ai-act-and-medical-devices-navigating-high-risk-compliance/

  13. European Commission. “Medical Devices Joint Artificial Intelligence Board, MDCG 2025-6.” 2025. https://health.ec.europa.eu/document/download/b78a17d7-e3cd-4943-851d-e02a2f22bbb4_en

  14. Health Affairs Forefront. “Unlearning In Medical AI: A New Frontier For Privacy, Regulation, And Trust.” 2025. https://www.healthaffairs.org/content/forefront/unlearning-medical-ai-new-frontier-privacy-regulation-and-trust

  15. MICCAI 2025. “Forget-MI: Machine Unlearning for Forgetting Multimodal Information in Healthcare Settings.” https://arxiv.org/html/2506.23145

  16. MedForget. “MedForget: Hierarchy-Aware Multimodal Unlearning Testbed for Medical AI.” December 2025. https://arxiv.org/html/2512.09867v1

  17. Nature Medicine. “Medical large language models are vulnerable to data-poisoning attacks.” January 2025. https://www.nature.com/articles/s41591-024-03445-1

  18. Becker's Hospital Review. “EHR-trained AI could compromise patient privacy: MIT.” 2026. https://www.beckershospitalreview.com/healthcare-information-technology/ai/ehr-trained-ai-could-compromise-patient-privacy-mit/

  19. Cobalt. “Healthcare Data Breach 2025 Statistics.” https://www.cobalt.io/blog/healthcare-data-breach-statistics

  20. NIST AI Risk Management Framework. https://www.nist.gov/artificial-intelligence/ai-risk-management-framework


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from targetedjaidee

It is awesome that we are on this path.

God's grace, and His ability to shift things in ways that bring glory to His kingdom? I love it.

I love being a part of this.

Gratitude List: 1. Woke up clean. 2. God's favor over my life. 3. My marriage & children.

I hope you all had a great day today!

Love ya!

Jaide owwt*

 

from Roscoe's Story

In Summary: * After watching one baseball game and two basketball games on TV earlier today, I'll spend Sunday evening with my Radio. Listening now to 1200 WOAI, the flagship station for the San Antonio Spurs, for pregame coverage then the call of tonight's game vs. the Houston Rockets.

Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics: * bw= 230.49 lbs * bp= 155/92 (63)

Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet: * 06:00 – 1 banana * 07:40 – rice cake * 09:50 – garden salad * 10:00 – home made fruit roll-ups (munched on throughout the day) * 12:45 – bowl of cooked meat (liver, tongue, sausage, pork, etc.) and vegetables * 15:30 – 1 fresh apple

Activities, Chores, etc.: * 06:10 – bank accounts activity monitored * 06:25 – read, write, pray, follow news reports from various sources, surf the socials, and nap * 11:00 – watch the World Baseball Classic, Netherlands vs Dominican Republic * 14:00 – watch college basketball, Illinois at Maryland * 16:00 – more college basketball, Iowa at Nebraska * 18:00 – San Antonio Spurs Pregame Coverage * 19:00 – listening to the radio call of tonight's Houston Rockets vs. San Antonio Spurs game

Chess: * 08:55 – moved in all pending CC games

 

from The Home Altar

Blonde doodle in a muddy yard

The grounds of St. Clare House are a messy mix of leftover snow, ice, and newly formed mud, as the late winter sunshine begins the transformation of the ground. It’s far too early for other signs of impending spring like snow blossoming flowers or buds on trees and shrubs. Rather it’s a squishy, slippery work in progress. Early today a couple of us cleared the mound of snow and ice from the back porch. There was chopping, scraping, and heaving. It was a pretty good workout.

This landscape makes for a pretty powerful Lentscape. We are most definitely not in Spring. At the same time, the warmer air tells me we’re not precisely in Winter either. We’re somewhere in between. The preparatory and penitential seasons of the liturgical calendar tend to work this way. Definitely not the festival or season we’ve left behind, certainly not the upcoming feast either.

Rather, we experience the melt, the sogginess, the mush, and the necessity of getting rid of what needs to go, and the reality that only patient presence will get us through this transition. This can be hard, especially with the last of the roof-bound ice and snow crashing down, the large, lazy puddles, the mind’s desire to race ahead and begin projects and preparations on a ground that is nowhere close to ready.

To say nothing of the longing to escape into the gardens or the earth-keeping as the news of war, rumors of bigger war, and calamities of growing proportion keep crashing down like that stubborn ice. Even so, we remain caught up in the present moment, with all of the very real and uncertain things that are swirling about. If a part of Lent is preparing to bear witness to the suffering and violence of the crucifixion, and in contrast God’s enduring love, then we have plenty of crucified neighbors, neighborhoods, and far-flung members of the human family who are giving us the opportunity to prepare our hearts and hands for both witness and loving action.

Let us attend during this season of change, some slower than we want, some faster than we can keep up with, to the unique gift of each moment. As we discern what is ours to do in the midst of mud and ice, seeking the well-being of our neighbors and the earth, we have an amazing opportunity to still be mindfully and heartfully attentive when the next sign of new life emerges.

 

from The happy place

Hello! I have been talking to some friends. It’s the modern miracle of science to see these faces through the screen.

And anyway we were talking about separation of intent from outcome

And I thought of this line from Kamelot’s ”Soul Society”, from the album ”The Black Halo” (which is my favourite, even though my favourite song, ”The Spell”, is from ”Karma”):

How could I be condemned for the things that I've done  If my intentions were good?

and yes this is food for thought of course it is. Of course it is. Intentions are all we have on the one hand: the outcome is never given, because we can only guess how it will pan out. The point is that we should make these educated guesses and also ensure that intention and outcome walk hand in hand

But that is beside the point

The point is I once listened to ”The Spell” (and Karma) on a burned CD in my friend’s black Saab 9000 turbo with black leather upholstery. We were going to the gas station in the middle of the night to buy snacks, for we were playing Heroes of Might and Magic II but were out of snacks, and so when I sat there in the passenger’s seat and my friend was speeding and this song came on, never in my life have I ever felt as cool as I did then.

Then years passed and this memory faded, until I heard the Karma album many years later and thought this is the bomb, and so I listened to all of these Kamelot albums until I rediscovered Karma and The Spell, and then I was a more complete human being, with this aforementioned memory sitting like a black diamond on my metaphorical crown.

Did you know Roy Khan the singer (then) of Kamelot is from Norway?

 

from The happy place

Outside, in places, the sidewalks are dry and the gravel on them redundant, but it’s mostly wet, because the snow and ice are melting and the meltwater on the salted roads runs like tears, and the trees all seem dead for now, but everybody knows it’s just a matter of time till there will be green buds all over them!

And of course one day I will wake up to half a meter of dirty snow and sleet and there will be ice, just when you think about planting tomatoes, but even so, it will still be under the spring sun and sky and that’s really comforting

And I have many friends and family and they come too like spring suns and they make my life worth living

And dandelions will grow in the cracks of the asphalt, and there will once again be butterflies outside

And I feel th

 