from Jujupiter

This is the last of the #JujuAwards2025! And of course I save the best for last with #BookOfTheYear.

I read a fair bit this year, despite being busy exploring Australia, studying, hitting the gym again, etc. (Yeah, turns out it wasn't a sustainable lifestyle!) I decided to read more essays and also more short stories. If you like sci-fi, short stories are a very good choice, as they pack plenty of ideas into a small number of pages.

Here are the nominees.

Free by Lea Ypi

In this memoir, the author revisits her childhood in Albania, from dictatorship to democracy, to civil war. There is a lot of humour, notably when the author, who was exposed to propaganda until her teens, discovers her parents used to be bourgeois.

The Man Who Ended History: A Documentary by Ken Liu

In this novella, a scientist invents a machine that allows people to see the past. He gives access to descendants of victims of the atrocities committed by Unit 731 during the Second World War; they witness the horror, which raises important questions.

Clarkesworld Magazine 223, edited by Neil Clarke

My first issue of Clarkesworld, a magazine of short speculative fiction, and I wasn't disappointed. I especially liked Thomas Ha's story, In My Country, which the magazine has shortlisted for best story of the year.

The Persuaders by Anand Giridharadas

An essay from the US, written before the 2024 election, about how the progressive side can win over the other side. Some interesting insights, notably the method of deep canvassing.

Cyberpunk by Asma Mhalla

A book written by a French essayist after the 2024 US election about the technological dystopia that we might already be living in. Of course it's highly topical but the writing is also witty.

There Is No Antimemetics Division by qntm

A sci-fi horror novel from Britain in which agents fight an impossible battle against an enemy that cannot, or rather must not, be remembered.

And the winner is… Clarkesworld Magazine 223! A reminder to support short speculative fiction. And now back to reviews :)

#JujuAwards #BestOf2025

 
Read more...

from An Open Letter

There are so many different things I need to do for this house, and I'm honestly so overwhelmed by it all. I haven't been keeping up with my habits either, like reading, which I want to fix.

 
Read more...

from folgepaula

Hamnet Review

We seem to be living through an unexpected renaissance of literary classics on screen: Frankenstein, Wuthering Heights, Dracula, and now Hamnet. Watching this film reminded me why I fell in love with cinema and theatre in the first place, long before I shifted my studies toward communications. It's worth noting that Hamnet is directed by Chloé Zhao, one of only three women ever to win the Oscar for Best Director, and adapted from Maggie O'Farrell's novel, itself a reimagining of the few biographical traces left of Shakespeare at the time he wrote what may be his greatest tragedy, Hamlet. Yet the film wisely avoids centering Shakespeare; instead, it turns its gaze toward his wife, Agnes/Anne Hathaway.

But what struck me most goes beyond literary adaptation. Sometimes it truly feels to me as though we're witnessing a slow disintegration of the world around us. Modern forms of fascism thrive on indifference: political, social, emotional. They depend on people who no longer allow themselves to be affected, who shrink their circuits of feeling until nothing truly touches them. In that sense, Hamnet hit me hard. I cried more than once, especially during the final ten minutes, and the heavy silence in the theater, mixed with sobs, told me I wasn't the only one. Those closing moments are, for me, the film's true peak, not because they offer catharsis, but because they recontextualize everything that came before. They open a space to consider the many ways we move through grief, love, and the lingering echoes of emotion, here refracted through the loss of a family member, and how art in any form can truly heal us as individuals, but perhaps more importantly in the broader sense, as a collective.

This is certainly not an awards-bait kind of movie; it's far too sincere for that. Thoughtfully directed and quietly devastating, Hamnet draws its power from restraint, mythic resonance, and a final act that stays with you long after the credits fade. It reminded me of the simple, human act of telling and listening to stories, and of how narrative is one of the ways we reorganize ourselves, rebuild connection, and resist fragmentation. In moments when the world feels like it's coming apart, investing in a good story might be one of the few things that still holds us together.

/feb26

 
Read more...

from EpicMind

Illustration of an ancient philosopher in a toga sitting exhausted at a modern office workstation in front of a computer, surrounded by empty office chairs and urban architecture.

Friends of wisdom! Who doesn't know the feeling: endless hours of sitting at a desk. We all know it isn't healthy. But what can we do about it? In the sixth issue of this newsletter, I look at exactly that question.

Prolonged sitting is now considered an independent risk factor for serious health problems, even for people who exercise daily. Anyone who sits for more than eight hours a day harms their heart, circulation, and metabolism in the long term. The good news: even short, regular movement breaks can counteract these negative effects. Studies show that so-called "active breaks" or "exercise snacks" are a simple and effective strategy for keeping the body active even during long stretches of sitting.

So what exactly works best? Researchers compared different forms of movement and found that taking a three-minute walk or doing ten squats every 45 minutes improves blood sugar levels significantly, and more effectively than a single half-hour walking break per day. What matters, then, is not the duration but the regularity of the interruptions. Movement in small doses, but at high frequency, has a surprisingly large effect.

For everyday life, this means: if you work in an office or sit a lot at home, you should make a point of moving briefly every 45 to 60 minutes. Options include squats, climbing stairs, brisk walking on the spot, lunges, or a quick walk down the hallway. These mini-workouts take only one to three minutes, can be done almost anywhere, and require no equipment. If you consistently schedule such breaks, you improve not only your physical condition but also your concentration and well-being, with minimal effort and maximum benefit.

Food for thought for the start of the week

"The greatest pleasure in life is doing the things other people think you cannot do." – Marcel Aymé (1902–1967)

ProductivityPorn tip of the week: check email deliberately

Every time you read a new email, you get a small dopamine hit, and you get distracted. Set fixed times for checking your email and work with focus during the rest of the day.

From the archive: Learning through dialogue, exchanging perspectives instead of feedback frustration

Just as in the working world, feedback plays a central role in school and at university. Yet the way feedback is delivered is often just as much in need of improvement. When pupils or students perceive feedback as pure criticism, they too can slip into a kind of "quiet quitting," in which they are physically present but have long since withdrawn emotionally. The goal, however, should be for feedback not to demotivate but to encourage development, both in working life and in education.

read more …

Thank you for taking the time to read this newsletter. I hope its contents have inspired you and given you valuable impulses for your (digital) life. Stay curious, and question what you encounter!


EpicMind – Wisdom for digital life. "EpicMind" (short for "Epicurean Mindset") is my blog and newsletter devoted to learning, productivity, self-management, and technology, all seasoned with a pinch of philosophy.


Disclaimer: Parts of this text were revised with DeepL Write (proofreading and editing). Google's NotebookLM was used for research in the works/sources mentioned and in my notes. The article image was created with ChatGPT and subsequently edited.

Topic #Newsletter

 
Read more... Discuss...

from 下川友

I woke up and looked at the clock; it was still only 4 a.m. Sensing that the axis of my sleeping posture was off, I shifted my back slightly, and a dull pain shot through my left arm. Ah, I must have slept on it wrong, I thought, but my body's core was still half asleep, and my interest in the pain was faint.

In two more hours I would have to start getting ready for work. The moment I thought that, I felt every muscle and pore in my body contract at once. Naturally, my hair got caught in the collateral damage, which is more than I can bear. And this right after the hairdresser complimented me last time, saying my hair was getting healthier.

My thoughts were still half awake, and with stress barging in on top of that, I could tell my body was beginning a full-blown rejection response.

It feels as if my cells are each scrambling around on their own, searching for somewhere safe. But since they all move however they please, as if demanding "where exactly is a safe place?", it even feels as though my body is deforming. If I took my temperature right now, I might have a fever.

This happens a few times a year. At first I thought it was simply painful, but lately I have come to feel it might be an evolution toward my next body, a way of surviving the present.

Until very recently I had been working from home for about two months, and an elegant knit sweater suited me well. Yet the moment I was told, "Come back to the head office for a while starting tomorrow," the desire to stay home, to keep things as they were, surged over me all at once. I truly did not want to go to the office, and my mind was under considerable strain.

Waking up in the morning, opening the curtains, brewing coffee, doing the laundry, stretching, adjusting the room temperature, cooking for myself, sitting on my favorite sofa. This is the space where I want to work.

But my company does nothing but contract work, so when the boss says "come in," I have no choice but to go. In my current state of mind and body, the burden of going to the office is too great, so my cells must be setting off chemical changes, rebuilding me into an office-ready body to make things even a little easier.

Since childhood I have had a vague dream of wanting to shoot electricity from my hands.

If humans can evolve to suit their circumstances, then if I created a situation in which I had no choice but to emit electricity, wouldn't my body change accordingly? With that thought, I went on to a biophysics program in graduate school. But ever since I attempted research once in graduate school, I have felt that science is "something for conveying things to other people," and my interest in understanding things scientifically faded. It is a place, at once real and abstract, that I alone want to reach, so there is no need to force it into academic form; and yet some kind of approach is necessary, and realizing this really is a task for my whole life, I gave my right brain a jolt of the kind I don't usually give it.

While I was thinking about all this, my cells sent the signal "the remodeling is done," so I got out of bed and prepared to go to work. Until now I had naturally put on a suit, but today I naturally chose casual clothes. Thinking it was nice that the remodeling was visually obvious, I set off for the office as a detour entirely unrelated to the place I will someday arrive at.

 
Read more…

from Nerd for Hire

I try to leave the country once a year or so, and I'm currently in the process of getting in my trip for 2026. I started this post on the plane, continued it on the Tren Maya, and finished it in warm and beautiful Valladolid, Mexico, far from the snow-covered and frozen city of Pittsburgh. As excited as I am to warm up a bit, that's not the primary reason I travel. A lot of my best story ideas have come from visiting new places and experiencing life in a different part of the world. It's also a solid way to break up my routine, which I know I could technically do in Pittsburgh, too, but I'm very bad at veering away from my usual habits when I'm going about my day-to-day life. Jumping on a plane and heading to Mexico or Guatemala—or even just riding the train to Buffalo or Baltimore—is a foolproof way to ease myself out of any ruts I've gotten into. 

I will say, the way I approach these trips isn't quite the way I think most people would go into a vacation. For one thing, I usually don't take off work completely. I tend to drop down to more of a part-time workload, but continuing to work lets me take the kind of longer trips that allow me to really sink into a new place. Especially when there's train travel involved, like on this trip, I view those periods as little mini writing retreats, because that environment is one where I find it very easy to get ideas flowing: you're stuck in one place without much to distract you, but there's also something new to see out the window whenever you need inspiration.

But I also think I approach travel with a different mindset than the average non-writer would, and that's something you can do regardless of your employment situation or how long you're traveling. When you look at things the right way, you can find stories and inspiration that you can use to fuel your creativity for long after you're back on familiar turf. This post is both advice for other writers and a reminder to myself as I get ready to see some new places on how to make the most of my time while I wander. 

So how do you travel like a writer? For me, it comes down to 5 main things.

Get off the well-traveled trails

This doesn't mean you have to skip the big attractions. In each town I’m visiting, I'll probably take the expected wander into the central plaza and stare at the Spanish cathedral, like one does in an old Latin American city. But while I’m on that walk, I'm also going to have my eyes open for strange things that snag them, side-streets that look intriguing, or off-the-wall museums that most folks might walk by.

There are two types of places I always seek out when I'm in a new city: cemeteries and public parks. They're not the kinds of attractions that most would add to their itinerary, but they reveal so much about the place you're visiting and are usually among my favorite finds. I also get around on foot when I can, giving me a chance to take in the streets that would blur by from a vehicle. This often leads to discovering street art, statues, and other neat finds that I never would have known were there otherwise. I've even stumbled across festivals or hidden museums a time or two when exploring whatever city I'm in. 

The value of these unexpected finds is two-fold. One, that sense of discovery makes it stick in your brain more because it has emotion attached to it, and for me at least that is exactly the germ I need to turn it into inspiration. It also means you have things to write about that not everyone who visits that city will have seen. That's especially good if you want to write nonfiction like travel writing, but also gives you unique details you can use in fiction and poetry that will make it sound uniquely you, not like something regurgitated off of every travel blog. 

Learn the history

I'm what I'll call a half-spontaneous traveler. I'll plan out the big strokes of the trip like transportation and where I'm staying well in advance. When it comes to the things I see, though, I'll usually have a list of places I want to visit, but I don't go so far as to write out an itinerary. I like to leave myself room for discovery and not go into a place with too many preconceived ideas about it. 

Which might all sound like the opposite of what I'm suggesting in the heading here, but what's glorious about traveling in the 21st century is that the vast majority of us constantly carry a device that can access a significant percentage of accumulated human knowledge. You don't necessarily need to research in advance to get insight into the history of a place, and for me looking it up on the spot, while I'm at the site, often helps me connect that history to what I'm seeing and makes it stick more firmly in my memory. 

The advice I gave above also applies here. Informational signs and plaques are like catnip for me. Any time I see one I’m drawn to it, even when I'm walking around Pittsburgh. You can learn a lot from historical markers, or the signs and explanations posted in museums. If you see a random plaque on the side of an unassuming building, don’t breeze by it—give it a read to see if the building you were about to ignore actually has a fascinating history that will end up being the germ of your next story.

There are a lot of reasons I give this recommendation. For one, it puts what you're seeing in context so you can more fully understand it, and at the same time gain a deeper understanding of the place that you're visiting, which will help you to make it come across as real to the reader. You might also find some stories hiding in that history, or learn about famous figures, historical events, or other things that you want to look into further and use for something you're writing. When you're conducting research—whether before, during, or after your trip—let yourself fall down rabbit holes and go off on tangents, following your interests wherever they lead you. The whole point of traveling is to discover new things, and those new things don't only have to be the ones you physically encounter. 

Carry a notebook

I usually don't journal. I've done it at points in the past, but I tend to drop the habit pretty quickly. That said, whenever I'm traveling, I always make a point of journaling every day. I even type up these journals into a digital file after I get back so that there's no risk of them being lost if the physical copy gets misplaced. I usually do this within a month of getting home, and even with that little time in between the trip and when I'm going back over them, I still inevitably come across a few things I'd already forgotten about. 

That's the main reason I always keep a journal when I'm traveling. There's such an influx of new knowledge and sensory input that my brain just doesn't have space to process all of it before I feed more in, and inevitably some of that info is going to get lost. Keeping a notebook handy and consistently recording what I see, cool things I learn, and other details about the experience I might forget helps me to capture and freeze the ideas so that they're still there when I'm ready to make use of them. 

I suppose the physical notebook isn't a requirement anymore—it's the way I prefer to do things, but you could also use an app on your phone or something similar. The key is to use something that you can have with you while you're actively out exploring, not just after you get back to your hotel room. That lets you capture things as you see and feel them, which is the best way to retain those tangible, distinctive details that will really make the moment come to life if you decide to make use of it in a creative work.

Engage your full senses and attention

When I was in college I had the privilege of studying in Florence for six weeks, and I was able to take an in-depth art history class during the session. One point the instructor would always make as she led us around the various museums and landmarks was not to focus only at eye level. She was always directing our eyes up at a frescoed ceiling or down at an interesting paving stone, and her lesson stuck with me well after that trip. Staying too focused on one view of things can mean you miss details you would have found even cooler.

This same advice goes in a broader sense, too. I try to remind myself when I'm traveling to just stop every once in a while and take everything in—to listen to the sounds, smell the air, feel the energy of the place I'm inhabiting. It's easy when you're traveling to feel like you need to constantly be moving to see everything you want to see, but you don't want to fall into the trap of trying to see so much that you don't end up really seeing any of it. 

Make smart use of your camera

This is another place where my habits are, I think, a bit different from the typical vacationer's. I often don't take many pictures of the big landmarks. If I want to remember what El Castillo in Chichen Itza looks like, I can find plenty of pictures of it taken by much better photographers than I am. But the internet won't help me remember the funny graffiti somebody scrawled on one of the signs, or that one other tourist wearing the crazy hat: those are the kinds of details I'm most likely to get out the camera for. Again, this isn't to say you can't also take Instagram-worthy pictures of the big sites, and that's another great thing about the modern era of the phone camera. It's not like you have a limited number of shots. You can do both.

I'll add that you don't have to use your camera only to capture sights. If you come across a particularly interesting sign and want to make sure you remember all the details, snap a photo of it. These days, lots of museums and historical sites put QR codes on their signs that lead to places where you can learn more, which can be a shortcut to following the "learn the history" tip I gave above. You can also use your phone's other recording features to get snippets of music or street noise, or shoot video to capture the energy of a crowd or the stillness of an empty park.

Of course, you don't want to go overboard. Even the best phone has a finite amount of storage and you don't want to be so intent on documenting everything that you forget to actually experience it. But when you come across one of those things that you want to make sure you remember, take full advantage of the tools at your disposal to give yourself memory triggers you can activate once you get home. 

See similar posts:

#WritingAdvice #DigitalNomads #Worldbuilding

 
Read more...

from SmarterArticles

Every time you type a message on your smartphone, your keyboard learns a little more about you. It notices your favourite words, your common misspellings, the names of people you text most often. For years, this intimate knowledge was hoovered up and shipped to distant servers, where tech giants analysed your linguistic fingerprints alongside billions of others. Then, around 2017, something changed. Google began training its Gboard keyboard using a technique called federated learning, promising that your typing data would never leave your device. The raw text of your most private messages, they assured users, would stay exactly where it belonged: on your phone.

It sounds like a privacy advocate's dream. But beneath this reassuring narrative lies a more complicated reality, one where mathematical guarantees collide with practical vulnerabilities, where corporate interests shape the definition of “privacy,” and where the gap between what users understand and what actually happens grows wider by the day. As AI systems increasingly rely on techniques like federated learning and differential privacy to protect sensitive information, a fundamental question emerges: are these technical solutions genuine shields against surveillance, or are they elaborate mechanisms that create new attack surfaces whilst giving companies plausible deniability?

The Machinery of Privacy Preservation

To understand whether federated learning and differential privacy actually work, you first need to understand what they are and how they operate. These are not simple concepts, and that complexity itself becomes part of the problem.

Federated learning, first formally introduced by Google researchers in 2016, fundamentally reimagines how machine learning models are trained. In the traditional approach, organisations collect vast quantities of data from users, centralise it on their servers, and train AI models on this aggregated dataset. Federated learning inverts this process. Instead of bringing data to the model, it brings the model to the data.

The process works through a carefully orchestrated dance between a central server and millions of edge devices, typically smartphones. The server distributes an initial model to participating devices. Each device trains that model using only its local data, perhaps the messages you have typed, the photos you have taken, or the websites you have visited. Crucially, the raw data never leaves your device. Instead, each device sends back only the model updates, the mathematical adjustments to weights and parameters that represent what the model learned from your data. The central server aggregates these updates from thousands or millions of devices, incorporates them into a new global model, and distributes this improved version back to the devices. The cycle repeats until the model converges.
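The loop just described can be sketched in a few lines. This is only a toy simulation, assuming a linear model trained with plain gradient descent on each simulated client; real deployments such as Gboard add client sampling, secure aggregation, and differential privacy on top:

```python
import numpy as np

def client_update(global_w, X, y, lr=0.1, epochs=5):
    """Train locally on the client's private data; only the updated
    weights leave the device, never the raw (X, y)."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def server_round(global_w, clients):
    """One federated round: average client weights, weighted by the
    size of each client's local dataset."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([client_update(global_w, X, y) for X, y in clients])
    return (sizes[:, None] * updates).sum(axis=0) / sizes.sum()

# Simulate 10 devices whose local data share one underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(10):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):  # 20 federated rounds
    w = server_round(w, clients)
```

After a few rounds the global weights converge toward the model underlying all the clients' data, even though the server never saw any of that data directly.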

The technical details matter here. Google's implementation in Gboard uses the FederatedAveraging algorithm, with between 100 and 500 client updates required to close each round of training. On average, each client processes approximately 400 example sentences during a single training epoch. The federated system converges after about 3000 training rounds, during which 600 million sentences are processed by 1.5 million client devices.

Differential privacy adds another layer of protection. Developed by computer scientists including Cynthia Dwork of Harvard University, who received the National Medal of Science in January 2025 for her pioneering contributions to the field, differential privacy provides a mathematically rigorous guarantee about information leakage. The core idea is deceptively simple: if you add carefully calibrated noise to data or computations, you can ensure that the output reveals almost nothing about any individual in the dataset.

The formal guarantee states that an algorithm is differentially private if its output looks nearly identical whether or not any single individual's data is included in the computation. This is measured by a parameter called epsilon, which quantifies the privacy loss. A smaller epsilon means stronger privacy but typically comes at the cost of utility, since more noise obscures more signal.
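Written out, the guarantee says that a randomized mechanism $M$ is $\varepsilon$-differentially private if, for every pair of datasets $D$ and $D'$ differing in a single individual's record, and every set of outputs $S$:

```latex
\Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S]
```

With $\varepsilon$ near zero the two output distributions are nearly indistinguishable, so the mechanism reveals almost nothing about any one person; as $\varepsilon$ grows, more can leak.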

The noise injection typically follows one of several mechanisms. The Laplace mechanism adds noise calibrated to the sensitivity of the computation. The Gaussian mechanism uses a different probability distribution, factoring in both sensitivity and privacy parameters. Each approach has trade-offs in terms of accuracy, privacy strength, and computational efficiency.
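As a concrete sketch of the Laplace mechanism, consider a counting query, whose L1 sensitivity is 1 because adding or removing one person changes the count by at most 1:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return a differentially private estimate of true_value by adding
    Laplace noise with scale = sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Privately release a count of 1000 with epsilon = 0.5.
rng = np.random.default_rng(42)
noisy = laplace_mechanism(1000, sensitivity=1.0, epsilon=0.5, rng=rng)
```

A smaller epsilon forces a larger noise scale, which is exactly the privacy-utility trade-off described above.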

When combined, federated learning and differential privacy create what appears to be a formidable privacy fortress. Your data stays on your device. The model updates sent to the server are aggregated with millions of others. Additional noise is injected to obscure individual contributions. In theory, even if someone intercepted everything being transmitted, they would learn nothing meaningful about you.

In practice, however, the picture is considerably more complicated.

When Privacy Promises Meet Attack Vectors

The security research community has spent years probing federated learning systems for weaknesses, and they have found plenty. One of the most troubling discoveries involves gradient inversion attacks, which demonstrate that model updates themselves can leak significant information about the underlying training data.

A gradient, in machine learning terms, is the mathematical direction and magnitude by which model parameters should be adjusted based on training data. Researchers have shown that by analysing these gradients, attackers can reconstruct substantial portions of the original training data. A 2025 systematic review published in Frontiers in Computer Science documented how gradient-guided diffusion models can now achieve “visually perfect recovery of images up to 512x512 pixels” from gradient information alone.
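A stripped-down example shows why gradients leak in the first place. Assume the simplest possible case: one linear neuron with a bias term, squared-error loss, and a single training example. The gradient with respect to the weights is just the input scaled by the gradient with respect to the bias, so an observer who sees only the gradients recovers the input exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)   # the "private" training example
y = 1.0                  # its label
w = rng.normal(size=5)   # current model weights
b = 0.0

# Gradients of L = (w.x + b - y)^2
err = w @ x + b - y
grad_w = 2 * err * x     # dL/dw = (dL/db) * x
grad_b = 2 * err         # dL/db

# An eavesdropper who sees only (grad_w, grad_b) reconstructs x:
x_reconstructed = grad_w / grad_b
```

Real gradient inversion attacks against deep networks are far more involved, but the intuition is the same: gradients are functions of the training data, and sometimes invertible ones.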

The evolution of these attacks has been rapid. Early gradient inversion techniques required significant computational resources and produced only approximate reconstructions. Modern approaches using fine-tuned generative models reduce mean squared error by an order of magnitude compared to classical methods, whilst simultaneously achieving inference speeds a million times faster and demonstrating robustness to gradient noise.

The implications are stark. Even though federated learning never transmits raw data, the gradients it does transmit can serve as a detailed map back to that data. A team of researchers demonstrated this vulnerability specifically in the context of Google's Gboard, publishing their findings in a paper pointedly titled “Two Models are Better than One: Federated Learning is Not Private for Google GBoard Next Word Prediction.” Their work showed that the word order and actual sentences typed by users could be reconstructed with high fidelity from the model updates alone.

Beyond gradient leakage, federated learning systems face threats from malicious participants. In Byzantine attacks, compromised devices send deliberately corrupted model updates designed to poison the global model. Research published by Fang et al. at NDSS in 2025 demonstrated that optimised model poisoning attacks can cause “1.5x to 60x higher reductions in the accuracy of FL models compared to previously discovered poisoning attacks.” This suggests that existing defences against malicious participants are far weaker than previously assumed.

Model inversion attacks present another concern. These techniques attempt to reverse-engineer sensitive information about training data by querying a trained model. A February 2025 paper on arXiv introduced “federated unlearning inversion attacks,” which exploit the model differences before and after data deletion to expose features and labels of supposedly forgotten data. As regulations like the GDPR establish a “right to be forgotten,” the very mechanisms designed to delete user data may create new vulnerabilities.

Differential privacy, for its part, is not immune to attack either. Research has shown that DP-SGD, the standard technique for adding differential privacy to deep learning, cannot prevent certain classes of model inversion attacks. A study by Zhang et al. demonstrated that their generative model inversion attack in face recognition settings could succeed even when the target model was trained with differential privacy guarantees.

The Census Bureau's Cautionary Tale

Perhaps the most instructive real-world example of differential privacy's limitations comes from the US Census Bureau's adoption of the technique for the 2020 census. This was differential privacy's biggest test, applied to data that would determine congressional representation and the allocation of hundreds of billions of dollars in federal funds.

The results were controversial. Research published in PMC in 2024 found that “the total population counts are generally preserved by the differential privacy algorithm. However, when we turn to population subgroups, this accuracy depreciates considerably.” The same study documented that the technique “introduces disproportionate discrepancies for rural and non-white populations,” with “significant changes in estimated mortality rates” occurring for less populous areas.

For demographers and social scientists, the trade-offs proved troubling. A Gates Open Research study quantified the impact: when run on historical census data with a privacy budget of 1.0, the differential privacy system produced errors “similar to that of a simple random sample of 50% of the US population.” In other words, protecting privacy came at the cost of effectively throwing away half the data. With a privacy budget of 4.0, the error rate decreased to approximate that of a 90 percent sample, but privacy guarantees correspondingly weakened.

The Census Bureau faced criticism from data users who argued that local governments could no longer distinguish between actual errors in their data and noise introduced by the privacy algorithm. The structural inaccuracy preserved state-level totals whilst “intentionally distorting characteristic data at each sub-level.”

This case illuminates a fundamental tension in differential privacy: the privacy-utility trade-off is not merely technical but political. Decisions about how much accuracy to sacrifice for privacy, and whose data bears the greatest distortion, are ultimately value judgements that mathematics alone cannot resolve.

Corporate Privacy, Corporate Interests

When technology companies tout their use of federated learning and differential privacy, it is worth asking what problems these techniques actually solve, and for whom.

Google's deployment of federated learning in Gboard offers a revealing case study. The company has trained and deployed more than twenty language models for Gboard using differential privacy, achieving what they describe as “meaningfully formal DP guarantees” with privacy parameters (rho-zCDP) ranging from 0.2 to 2. This sounds impressive, but the privacy parameters alone do not tell the full story.

Google applies the DP-Follow-the-Regularized-Leader algorithm specifically because it achieves formal differential privacy guarantees without requiring uniform sampling of client devices, a practical constraint in mobile deployments. The company reports that keyboard prediction accuracy improved by 24 percent through federated learning, demonstrating tangible benefits from the approach.

Yet Google still learns aggregate patterns from billions of users. The company still improves its products using that collective intelligence. Federated learning changes the mechanism of data collection but not necessarily the fundamental relationship between users and platforms. As one Google research publication frankly acknowledged, “improvements to this technology will benefit all users, although users are only willing to contribute if their privacy is ensured.”

The tension becomes even starker when examining Meta, whose platforms represent some of the largest potential deployments of privacy-preserving techniques. A 2025 analysis in Springer Nature noted that “approximately 98% of Meta's revenue derives from targeted advertising, a model that depends heavily on the collection and analysis of personal data.” This business model “creates a strong incentive to push users to sacrifice privacy, raising ethical concerns.”

Privacy-preserving techniques can serve corporate interests in ways that do not necessarily align with user protection. They enable companies to continue extracting value from user data whilst reducing legal and reputational risks. They provide technical compliance with regulations like the GDPR without fundamentally changing surveillance-based business models.

Apple presents an interesting contrast. The company has integrated differential privacy across its ecosystem since iOS 10 in 2016, using it for features ranging from identifying popular emojis to detecting domains that cause high memory usage in Safari. In iOS 17, Apple applied differential privacy to learn about popular photo locations without identifying individual users. With iOS 18.5, the company extended these techniques to train certain Apple Intelligence features, starting with Genmoji.

Apple's implementation deploys local differential privacy, meaning data is randomised before leaving the device, so Apple's servers never receive raw user information. Users can opt out entirely through Settings, and privacy reports are visible in device settings, providing a degree of transparency unusual in the industry.
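The textbook primitive behind local differential privacy is randomised response, in which the device itself flips the answer with a known probability before anything is sent. Apple's production mechanisms, such as its count-mean sketch, are considerably more elaborate, but this hedged toy sketch conveys the principle; the 30 percent rate and sample size are invented.

```python
import math
import random

def randomized_response(true_value: bool, epsilon: float) -> bool:
    """Report a boolean under epsilon-local differential privacy.

    The device tells the truth with probability e^eps / (e^eps + 1) and
    lies otherwise, so the server never receives any raw value.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_value if random.random() < p_truth else not true_value

def estimate_rate(reports, epsilon: float) -> float:
    """Debias the observed frequency to recover the population rate."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(0)
truth = [random.random() < 0.3 for _ in range(50_000)]  # true rate near 30%
reports = [randomized_response(t, epsilon=1.0) for t in truth]
estimate = estimate_rate(reports, epsilon=1.0)
```

The point worth noticing is that the server can recover the aggregate rate to within a fraction of a percentage point while each individual report is deniable: any single answer had a substantial chance of being a lie.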

Apple's approach differs from Google's in that the company does not derive the majority of its revenue from advertising. Yet even here, questions arise about transparency and user understanding. The technical documentation is dense, the privacy parameters are not prominently disclosed, and the average user has no practical way to verify the claimed protections.

The Understanding Gap

The gap between technical privacy guarantees and user comprehension represents perhaps the most significant challenge facing these technologies. Differential privacy's mathematical rigour means nothing if users cannot meaningfully consent to, or even understand, what they are agreeing to.

Research on the so-called “privacy paradox” consistently finds a disconnect between stated privacy concerns and actual behaviour. A study analysing Alipay users found “no relationship between respondents' self-stated privacy concerns and their number of data-sharing authorizations.” Rather than indicating irrational behaviour, the researchers argued this reflects the complexity of privacy decisions in context.

A 2024 Deloitte survey found that less than half of consumers, 47 percent, trust online services to protect their data. Yet a separate survey by HERE Technologies found that more than two-thirds of consumers expressed willingness to share location data, with 79 percent reporting they would allow navigation services to access their data. A study of more than 10,000 respondents across 10 countries found 53 percent expressing concern about digital data sharing, even as 70 percent indicated growing willingness to share location data when benefits were clear.

This is not necessarily a paradox so much as an acknowledgment that privacy decisions involve trade-offs that differ by context, by benefit received, and by trust in the collecting entity. But federated learning and differential privacy make these trade-offs harder to evaluate, not easier. When a system claims to be “differentially private with epsilon equals 4,” what does that actually mean for the user? When federated learning promises that “your data never leaves your device,” does that account for the information that gradients can leak?
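The gradient-leakage question is not hypothetical. Even without sophisticated inversion attacks, a toy example shows the problem: for squared-error loss on a single training example, the weight gradient of a linear model is a scaled copy of the input, so a server that receives an unaggregated update can read the data point back off directly. The model and numbers below are invented for illustration.

```python
def linear_gradients(w, b, x, y):
    """Gradients of squared error (w.x + b - y)^2 for a single example."""
    pred = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = pred - y
    grad_w = [2.0 * err * xi for xi in x]  # a scaled copy of the input x
    grad_b = 2.0 * err
    return grad_w, grad_b

# A server that sees this one-example update can recover the input exactly:
w, b = [0.5, -0.2, 0.1], 0.0
x, y = [3.0, 1.0, 4.0], 2.0
grad_w, grad_b = linear_gradients(w, b, x, y)
recovered = [gw / grad_b for gw in grad_w]  # equals x whenever grad_b != 0
```

Real deployments aggregate over many examples and many clients, which blunts this trivial attack; gradient inversion research shows, however, that blunting is not the same as preventing.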

The French data protection authority CNIL has recommended federated learning as a “data protection measure from the outset,” but also acknowledged the need for “explainability and traceability measures regarding the outputs of the system.” The challenge is that these systems are inherently difficult to explain. Their privacy guarantees are statistical, not absolute. They protect populations, not necessarily individuals. They reduce risk without eliminating it.

Healthcare: High Stakes, Conflicting Pressures

Nowhere are the tensions surrounding privacy-preserving AI more acute than in healthcare, where the potential benefits are enormous and the sensitivity of data is extreme.

NVIDIA's Clara federated learning platform exemplifies both the promise and the complexity. Clara enables hospitals to collaboratively train AI models without sharing patient data. Healthcare institutions including the American College of Radiology, Massachusetts General Hospital and Brigham and Women's Hospital's Center for Clinical Data Science, and UCLA Health have partnered with NVIDIA on federated learning initiatives.

In the United Kingdom, NVIDIA partnered with King's College London and the AI company Owkin to create a federated learning platform for the National Health Service, initially connecting four of London's premier teaching hospitals. The Owkin Connect platform uses blockchain technology to capture and trace all data used for model training, providing an audit trail that traditional centralised approaches cannot match.

During the COVID-19 pandemic, NVIDIA coordinated a federated learning study involving twenty hospitals globally to train models predicting clinical outcomes in symptomatic patients. The study demonstrated that federated models could outperform models trained on any single institution's data alone, suggesting that the technique enables collaboration that would otherwise be impossible due to privacy constraints.

In the pharmaceutical industry, the MELLODDY project brought together ten pharmaceutical companies in Europe to apply federated learning to drug discovery. The consortium pools the largest existing chemical compound library, more than ten million molecules and one billion assays, whilst ensuring that highly valuable proprietary data never leaves each company's control. The project runs on the open-source Substra framework and employs distributed ledger technology for full traceability.

These initiatives demonstrate genuine value. Healthcare AI trained on diverse populations across multiple institutions is likely to generalise better than AI trained on data from a single hospital serving a particular demographic. Federated learning makes such collaboration possible in contexts where data sharing would be legally prohibited or practically impossible.

But the same vulnerabilities that plague federated learning elsewhere apply here too, perhaps with higher stakes. Gradient inversion attacks could potentially reconstruct medical images. Model poisoning by a malicious hospital could corrupt a shared diagnostic tool. The privacy-utility trade-off means that stronger privacy guarantees may come at the cost of clinical accuracy.

Regulation Catches Up, Slowly

The regulatory landscape is evolving to address these concerns, though the pace of change struggles to keep up with technological development.

In the European Union, the AI Act took full effect on 2 August 2025, establishing transparency obligations for general-purpose AI systems. In November 2025, the European Commission published the Digital Omnibus proposal, which seeks to streamline the relationship between the Data Act, GDPR, and AI Act. The proposal includes clarification that organisations “may rely on legitimate interests to process personal data for AI-related purposes, provided they fully comply with all existing GDPR safeguards.”

In the United States, NIST finalised guidelines for evaluating differential privacy guarantees in March 2025, fulfilling an assignment from President Biden's Executive Order on Safe, Secure, and Trustworthy AI from October 2023. The guidelines provide a framework for assessing privacy claims but acknowledge the complexity of translating mathematical parameters into practical privacy assurances.

The market is responding to these regulatory pressures. The global privacy-enhancing technologies market reached 3.12 billion US dollars in 2024, projected to grow to 12.09 billion dollars by 2030. The federated learning platforms market, valued at 150 million dollars in 2023, is forecast to reach 2.3 billion dollars by 2032, reflecting a compound annual growth rate of 35.4 percent. The average cost of a data breach reached 4.88 million dollars in 2024, and industry analysts estimate that 75 percent of the world's population now lives under modern privacy regulations.

This growth suggests that corporations see privacy-preserving techniques as essential infrastructure for the AI age, driven as much by regulatory compliance and reputational concerns as by genuine commitment to user protection.

The Security Arms Race

The relationship between privacy-preserving techniques and the attacks against them resembles an arms race, with each advance prompting countermeasures that prompt new attacks in turn.

Defensive techniques have evolved significantly. Secure aggregation protocols encrypt model updates so that the central server only learns the aggregate, not individual contributions. Homomorphic encryption allows computation on encrypted data, theoretically enabling model training without ever decrypting sensitive information. Byzantine-robust aggregation algorithms attempt to detect and exclude malicious model updates.
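The idea behind secure aggregation can be illustrated with pairwise masks that cancel when updates are summed. The sketch below is a simplification under stated assumptions: no client dropouts, and a seeded RNG standing in for a properly negotiated pairwise key; real protocols handle both.

```python
import random

def masked_updates(updates):
    """Mask each client's update with pairwise values that cancel in the sum.

    For every pair (i, j) with i < j, both clients derive the same mask
    from a shared seed; client i adds it and client j subtracts it. The
    server sees only masked vectors, yet their sum equals the true sum
    because each mask enters once with each sign.
    """
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            # Stands in for a pairwise key agreed via key exchange.
            shared = random.Random(f"pair-{i}-{j}")
            mask = [shared.uniform(-100.0, 100.0) for _ in range(dim)]
            for k in range(dim):
                masked[i][k] += mask[k]
                masked[j][k] -= mask[k]
    return masked

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
masked = masked_updates(updates)
# Individual masked vectors are unreadable, but the aggregate is exact
# (up to floating-point error).
aggregate = [sum(m[k] for m in masked) for k in range(2)]
```

This is also why attacks like Scale-MIA matter: they target what the aggregate itself still reveals, which no amount of masking of individual contributions can hide.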

Each defence has limitations. Secure aggregation protects against honest-but-curious servers but does not prevent sophisticated attacks like Scale-MIA, which researchers demonstrated can reconstruct training data even from securely aggregated updates. Homomorphic encryption imposes significant computational overhead and is not yet practical for large-scale deployments. Byzantine-robust algorithms, as the research by Fang et al. demonstrated, are more vulnerable to optimised attacks than previously believed.

The research community continues to develop new defences. A 2025 study proposed “shadow defense against gradient inversion attack,” using decoy gradients to obscure genuine updates. LSTM-based approaches attempt to detect malicious updates by analysing patterns across communication rounds. The FedMP algorithm combines multiple defensive techniques into a “multi-pronged defence” against Byzantine attacks.

But attackers are also advancing. Gradient-guided diffusion models achieve reconstruction quality that would have seemed impossible a few years ago. Adaptive attack strategies that vary the number of malicious clients per round prove more effective and harder to detect. The boundary between secure and insecure keeps shifting.

This dynamic suggests that privacy-preserving AI should not be understood as a solved problem but as an ongoing negotiation between attackers and defenders, with no permanent resolution in sight.

What Users Actually Want

Amid all the technical complexity, it is worth returning to the fundamental question: what do users actually want from privacy protection, and can federated learning and differential privacy deliver it?

Research suggests that user expectations are contextual and nuanced. People are more willing to share data with well-known, trusted entities than with unknown ones. They want personalised services but also want protection from misuse. They care more about some types of data than others, and their concerns vary by situation.

Privacy-preserving techniques address some of these concerns better than others. They reduce the risk of data breaches by not centralising sensitive information. They provide mathematical frameworks for limiting what can be inferred about individuals. They enable beneficial applications, such as medical AI or improved keyboard prediction, that might otherwise be impossible due to privacy constraints.

But they do not address the fundamental power imbalance between individuals and the organisations that deploy these systems. They do not give users meaningful control over how models trained on their data are used. They do not make privacy trade-offs transparent or negotiable. They replace visible data collection with invisible model training, which may reduce certain risks whilst obscuring others.

The privacy paradox literature suggests that many users make rational calculations based on perceived benefits and risks. But federated learning and differential privacy make those calculations harder, not easier. The average user cannot evaluate whether epsilon equals 2 provides adequate protection for their threat model. They cannot assess whether gradient inversion attacks pose a realistic risk in their context. They must simply trust that the deploying organisation has made these decisions competently and in good faith.

The Question That Matters

Will you feel safe sharing personal data as AI systems adopt federated learning and differential privacy? The honest answer is: it depends on what you mean by “safe.”

These techniques genuinely reduce certain privacy risks. They make centralised data breaches less catastrophic by keeping data distributed. They provide formal guarantees that limit what can be inferred about individuals, at least in theory. They enable beneficial applications that would otherwise founder on privacy concerns.

But they also create new vulnerabilities that researchers are only beginning to understand. Gradient inversion attacks can reconstruct sensitive data from model updates. Malicious participants can poison shared models. The privacy-utility trade-off means that stronger guarantees come at the cost of usefulness, a cost that often falls disproportionately on already marginalised populations.

Corporate incentives shape how these technologies are deployed. Companies that profit from data collection have reasons to adopt privacy-preserving techniques that maintain their business models whilst satisfying regulators and reassuring users. This is not necessarily malicious, but it is also not the same as prioritising user privacy above all else.

The gap between technical guarantees and user understanding remains vast. Few users can meaningfully evaluate privacy claims couched in mathematical parameters and threat models. The complexity of these systems may actually reduce accountability by making it harder to identify when privacy has been violated.

Perhaps most importantly, these techniques do not fundamentally change the relationship between individuals and the organisations that train AI on their data. They are tools that can be used for better or worse, depending on who deploys them and why. They are not a solution to the privacy problem so much as a new set of trade-offs to navigate.

The question is not whether federated learning and differential privacy make you safer, because the answer is nuanced and contextual. The question is whether you trust the organisations deploying these techniques to make appropriate decisions on your behalf, whether you believe the oversight mechanisms are adequate, and whether you accept the trade-offs inherent in the technology.

For some users, in some contexts, the answer will be yes. The ability to contribute to medical AI research without sharing raw health records, or to improve keyboard prediction without uploading every message, represents genuine progress. For others, the answer will remain no, because no amount of mathematical sophistication can substitute for genuine control over one's own data.

Privacy-preserving AI is neither panacea nor theatre. It is a set of tools with real benefits and real limitations, deployed by organisations with mixed motivations, in a regulatory environment that is still evolving. The honest assessment is that these techniques make some attacks harder and enable some attacks we have not yet fully understood. They reduce some risks whilst obscuring others. They represent progress, but not a destination.

As these technologies continue to develop, the most important thing users can do is maintain healthy scepticism, demand transparency about the specific techniques and parameters being used, and recognise that privacy in the age of AI requires ongoing vigilance rather than passive trust in technical solutions. The machines may be learning to protect your privacy, but whether they succeed depends on far more than the mathematics.


References and Sources

  1. Google Research. “Federated Learning for Mobile Keyboard Prediction.” (2019). https://research.google/pubs/federated-learning-for-mobile-keyboard-prediction-2/

  2. Google Research. “Federated Learning of Gboard Language Models with Differential Privacy.” arXiv:2305.18465 (2023). https://arxiv.org/abs/2305.18465

  3. Dwork, Cynthia. “Differential Privacy.” Springer Nature, 2006. https://link.springer.com/chapter/10.1007/11787006_1

  4. Harvard Gazette. “Pioneer of modern data privacy Cynthia Dwork wins National Medal of Science.” January 2025. https://news.harvard.edu/gazette/story/newsplus/pioneer-of-modern-data-privacy-cynthia-dwork-wins-national-medal-of-science/

  5. NIST. “Guidelines for Evaluating Differential Privacy Guarantees.” NIST Special Publication 800-226, March 2025. https://www.nist.gov/publications/guidelines-evaluating-differential-privacy-guarantees

  6. Frontiers in Computer Science. “Deep federated learning: a systematic review of methods, applications, and challenges.” 2025. https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2025.1617597/full

  7. arXiv. “Two Models are Better than One: Federated Learning Is Not Private For Google GBoard Next Word Prediction.” arXiv:2210.16947 (2022). https://arxiv.org/abs/2210.16947

  8. NDSS Symposium. “Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning.” 2025. https://www.ndss-symposium.org/ndss-paper/manipulating-the-byzantine-optimizing-model-poisoning-attacks-and-defenses-for-federated-learning/

  9. arXiv. “Model Inversion Attack against Federated Unlearning.” arXiv:2502.14558 (2025). https://arxiv.org/abs/2502.14558

  10. NDSS Symposium. “Scale-MIA: A Scalable Model Inversion Attack against Secure Federated Learning.” 2025. https://www.ndss-symposium.org/wp-content/uploads/2025-644-paper.pdf

  11. PMC. “The 2020 US Census Differential Privacy Method Introduces Disproportionate Discrepancies for Rural and Non-White Populations.” 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11105149/

  12. Gates Open Research. “Differential privacy in the 2020 US census: what will it do? Quantifying the accuracy/privacy tradeoff.” https://gatesopenresearch.org/articles/3-1722

  13. Springer Nature. “Meta's privacy practices on Facebook: compliance, integrity, and a framework for excellence.” Discover Artificial Intelligence, 2025. https://link.springer.com/article/10.1007/s44163-025-00388-5

  14. Apple Machine Learning Research. “Learning with Privacy at Scale.” https://machinelearning.apple.com/research/learning-with-privacy-at-scale

  15. Apple Machine Learning Research. “Learning Iconic Scenes with Differential Privacy.” https://machinelearning.apple.com/research/scenes-differential-privacy

  16. Apple Machine Learning Research. “Understanding Aggregate Trends for Apple Intelligence Using Differential Privacy.” https://machinelearning.apple.com/research/differential-privacy-aggregate-trends

  17. Deloitte Insights. “Consumer data privacy paradox.” https://www2.deloitte.com/us/en/insights/industry/technology/consumer-data-privacy-paradox.html

  18. NVIDIA Blog. “NVIDIA Clara Federated Learning to Deliver AI to Hospitals While Protecting Patient Data.” https://blogs.nvidia.com/blog/clara-federated-learning/

  19. Owkin. “Federated learning in healthcare: the future of collaborative clinical and biomedical research.” https://www.owkin.com/blogs-case-studies/federated-learning-in-healthcare-the-future-of-collaborative-clinical-and-biomedical-research

  20. EUR-Lex. “European Commission Digital Omnibus Proposal.” COM(2025) 835 final, November 2025. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52025DC0835

  21. CNIL. “AI system development: CNIL's recommendations to comply with the GDPR.” https://www.cnil.fr/en/ai-system-development-cnils-recommendations-to-comply-gdpr

  22. 360iResearch. “Privacy-Preserving Machine Learning Market Size 2025-2030.” https://www.360iresearch.com/library/intelligence/privacy-preserving-machine-learning


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from Matt

Lately, I've been thinking a lot about how to unite two parts of my life that are seemingly disparate, opposite, irreconcilable, etc.

One is the fact that I've been working on a single endeavor, Write.as, for 11 years now — longer than I've worked on any single thing in my life. The other fact I'm trying to square is that these days, I spend a lot of time on things that have absolutely nothing to do with this, whether art, writing, photography, organizing meetups, or just grabbing a long lunch or taking the day (or three) off.

In a conversation with a friend last weekend, while I was in Brussels for FOSDEM, it finally clicked for me.

He put it plainly: some projects you do for a long time — for him, ten years was about right — and others are simply for pleasure; a long night of great conversation, a good meal with friends, etc. The former makes up the purpose in life that keeps you grounded, and the latter is life for its own sake — and nothing more.

Maybe it sounds simple, but somehow I’d never heard it put this way before. At least in the US, the options in the tech industry seem to be: either take a steady paycheck, or launch a startup, raise some money, and grind until you hopefully make that sweet payday for you and your investors.

All are valid, of course. But there are fewer words spilled about what I’m doing with Write.as: building a small software company that sustains itself for several decades. Yet, “Where is the hustle? Why aren’t you working 12 hours a day, 6 days a week?” I hear the grindmaxxers and hustlebros cry from across the tubes of the internet.

I’ve never worried about these hardcore 10X-er brogrammers before, but I have started to feel odd in this industry, building something whose only goal is to… just last a long time. And obviously keep the people that use it happy.

The thing about doing one thing for a long time…

…is that you have to do one thing for a long time. And if you’re the type of person who does that in the first place, you probably have more interests in life. And, well, you might get tired of doing that one original thing for so long. You feel yourself change; life changes from under you, and days pass by, expecting you to adapt right along with them.

It’s also easy to say “I’m going to do this forever” on day one — it’s all so new and exciting! there’s so much to do! It’s less exciting to say this after 11 years, after all the battles fought, won, and lost (even if it was never really that bad).

Enough time, and other things start to look appealing, like taking a long walk instead of responding to emails, or taking up a bartending job because you don’t have to worry about AI bots in real life (well— debatable), or starting a new meetup for writers (like I did last month), or making zines and writing poetry and taking pictures of street trash and so on.

So this is the general limbo I’ve found myself in regarding Write.as, really for the last three or four years (at least). Besides the business, over the years there was plenty that took a toll on me — relationships that came and went, the pandemic, the death of my brother and my dog. And there was all the good, too: moving to New York turned out to be all I wanted and far more; I’ve been able to travel and speak about the things I make all over the world; I’ve found myself in communities of creative people and builders and connectors all around me, all around the world. Some days I’ve wallowed in an unmotivated limbo, tired of running this marathon, and on others I’m proud of what I’ve built, and I know how lucky I am to have made it this far.

Sometimes it takes a good conversation over a pint in Brussels to remember that.

All of this to say…

By now, I know the perils of writing about how enthused I am about some new perspective I have on life and Write.as, both so intertwined, before having anything to show for it. So this time I’m just getting to work, and if you’re writing here, you’ll simply notice the progress. Keep up with the big updates on our big Blog (@blog@write.as), smaller updates on our Changelog (@updates@write.as), and everywhere else you can find us on the social web:

Lastly, in case you missed it, we’re celebrating 11 years on the web with a sale on our 5-year Pro plan through February 16th, to help support our small bootstrapped business for the next five years and beyond.

Thanks to all who make this space such a great corner of the web — and all who bear with me through my own internal ups and downs :)


Thoughts? Drop me a note @matt@writing.exchange, or on Remark.as.

 

from Shad0w's Echos

CeCe makes Love

#nsfw #CeCe

That Thanksgiving night in our dorm room felt like the culmination of everything we'd built—years of friendship twisted into something deeper, more electric, amid the quiet hum of the city outside. The remnants of our makeshift meal lay scattered on the floor, forgotten as CeCe's confession hung in the air, her nervous tremors vibrating through our naked embrace. I pulled back slightly, cupping her face in my hands, my thumbs brushing away the stray tears that had gathered in her eyes. She was so beautiful like this, her caramel skin flushed, those full and captivating breasts rising and falling with her shaky breaths, her thick curves a testament to the fearless woman she'd become. But beneath it all, I saw her vulnerabilities—the way her mind fixated on routines, on the sensory overload that porn provided as her anchor in a world that often felt too chaotic.

“CeCe,” I whispered, my voice thick with emotion, “I've wanted this for so long. But we'll go slow, okay? Whatever feels right for you.” She nodded, her gaze intense, almost laser-focused in that way she had when something captivated her completely, like solving a complex equation or diving into one of her endless porn binges. I knew her brain worked differently—craving patterns, repetition, the reliable rush of stimulation that helped her navigate the unpredictability of emotions and touch. As someone who processed the world more straightforwardly, I admired it, even if it sometimes left me chasing to keep up. Gently, I leaned in, our lips meeting in a tentative kiss that quickly deepened, her mouth soft and eager against mine, tasting of cranberry and the salt of her earlier nerves.

We shifted on the bed, our bodies aligning naturally, skin to skin. I trailed my fingers down her arms, teasing the sensitive undersides, then along her sides, feeling the subtle shiver that ran through her—a response to the newness, the intimacy beyond her solo rituals. “Remember that first time I showed you porn?” I murmured against her neck, nipping lightly at the spot where her pulse raced. “I thought I was just helping you loosen up. But it changed everything—for both of us.” She moaned softly, her hands exploring my back, her touch methodical, almost exploratory, as if mapping every inch. “It did,” she agreed, her voice breathy. “It's my everything now. The way those Black women own their bodies in those videos... it's what I need to feel alive.” I confessed then, my lips brushing her ear, “I'm addicted too, CeCe. Hunting for those videos for you over the summer? It pulled me in. I can't stop thinking about it—about you. I love porn too.”

The admission hung between us, binding us closer. But CeCe's eyes flicked to the door, then the window—both already cracked open as per her ritual, the cool night air whispering in with distant city sounds. “Leave them open,” she said, her voice laced with that familiar thrill. “I need to feel... seen. The risk, the exposure—it's part of me.” My heart raced at the idea—the door ajar, anyone could wander by in the empty hall; the window framing us for anyone glancing up from the street below. It heightened everything, a rush of adrenaline that made my core ache. I'd never been with a woman before, but with her, it felt instinctive, right.

I really thought I was straight until I met CeCe. But right now, I don't have labels for any of this. I just have an undeniable emotional bond with someone stronger and more capable than they know. We kissed again, deeper, our tongues dancing as I guided her back against the pillows, my hands caressing her breasts, thumbs circling her hardening nipples until she arched into me.

CeCe's curiosity shone through as she pushed me gently onto my back, her eyes wide with fascination. “I want to try... tasting you,” she said, almost analytically, like testing a hypothesis. She lowered herself between my legs, her breath warm against my thighs, and I gasped as her tongue flicked out tentatively, then more confidently, lapping at my folds with focused precision. She explored me methodically—long, slow licks alternating with gentle sucks on my clit—her obsession with repetition turning it into a rhythmic bliss that had me writhing. “God, CeCe, that feels incredible,” I moaned, threading my fingers through her hair, the open door and window amplifying every sound, every sensation, as if the world might hear us. The risk made it hotter, my body thrumming with the danger of exposure.

I returned the favor, eager despite my inexperience, kissing down her body—her neck, her breasts, sucking each nipple until she whimpered—then lower, to the heat between her thick thighs. Her pussy was already slick from her chronic habits, glistening in the low light, and I savored her musky taste as I licked her slowly, circling her clit with my tongue while my fingers teased her entrance. CeCe bucked against my mouth, her moans echoing softly, one hand reaching for her phone to queue up a video—Black women entangled in passionate, exposed encounters, their bodies moving in ways that mirrored her deepest fixations.

We watched porn together for what felt like hours, edging each other mercilessly: my fingers plunging into her wetness, stroking her inner walls while she rubbed my clit in steady circles, building us both to the brink without tipping over. The porn played on mute at first, then low volume, its visuals fueling her, but I whispered consents and check-ins—”Is this okay? Tell me if it's too much”—honoring her need for control amid the sensory storm. It was healing, in a way—reclaiming the addiction that had isolated her, turning it into something shared, intimate. We have masturbated together a few times. Sure. But it was about the screen and our pleasure. Never us touching like this. This was special.

Finally, as the tension coiled unbearably, CeCe handed me the dildo from her drawer—a smooth, curved silicone toy, realistic in its girth. “Please, Tasha... I'm ready,” she said, her voice trembling with a mix of fear and desire. I coated it with lube, positioning myself between her legs, the open window letting in a breeze that pebbled our skin. Slowly, sensually, I pressed the tip against her virgin entrance, watching her face for any sign of discomfort. “Close your eyes and relax. Breathe with me,” I murmured, easing it in inch by inch, her tight walls yielding like warm velvet around the intrusion, a soft gasp escaping her lips as I filled her gently, rhythmically. It was exquisite—the way she stretched, her pussy clenching around the toy as I thrust shallowly at first, then deeper, my free hand rubbing her clit in tandem. She rocked against me, her hands gripping the sheets, the risk of our exposed position sending shivers through us both.

We built to a crescendo, the porn fading into the background as our connection took over. CeCe's first partnered orgasm crashed over her not from the screen, but from my touch—the dildo buried deep, my mouth on her breast, sucking hard as she cried out, her body convulsing in waves of pleasure that left her trembling and spent. I followed soon after, her fingers still working me expertly, the release washing away years of unspoken longing.

In the afterglow, we lay tangled on the bed, the door and window still open, a soft breeze cooling our sweat-slicked skin. I held her close, reflecting on the beauty of it all—how her obsessions, her differences, had led us here, healing wounds neither of us had fully acknowledged. CeCe nestled peacefully against my breasts, tearfully happy, her sobs turning to contented sighs as she traced lazy patterns on my skin.

“I love you, Tasha,” she confessed softly, her voice thick with emotion. “You have been my rock when no one else was there for me. My classmates think I'm weird, my mom thinks I'm a failure, I'm a socially awkward mess and you stand by me. I don't know how it'll work long-term, or if we should make it 'official'... but I wanted you to know. I'll always be there for you, even if I have to wear clothes.”

 
Read more... Discuss...

from Roscoe's Story

In Summary: * This quiet Sunday is winding down. It finds me listening to relaxing music, ready to load my nighttime meds onto a little plate, and happy with my decision to avoid tonight's Super Bowl, its annoying halftime show, and the alternative patriotic halftime show. The time I would have wasted on those things will be much better spent focusing on my night prayers.

Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day, from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are in my link tree, which is linked from my profile page here.

Health Metrics: * bw= 229.06 lbs. * bp= 157/92 (65)

Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet: * 07:00 – 1 six inch submarine sandwich, 2 cookies * 11:55 – pork chops, noodles, baked beans, whole kernel corn * 15:50 – liver and onions * 17:25 – 1 fresh apple

Activities, Chores, etc.: * 06:30 – bank accounts activity monitored * 06:50 – read, pray, follow news reports from various sources, surf the socials * 10:00 – listen to the Pre-1955 Mass Propers for Sexagesima Sunday, Feb. 08, 2026 * 10:40 – listen to KAHL Radio * 12:30 – listening to The Home for IU Women's Basketball for the pregame show ahead of the call of this afternoon's game between the Hoosiers and Purdue's Boilermakers. * 14:55 – And the IU Women Win, final score 74 to 59. * 15:00 – now watching PGA Tour Golf, final-round play from the WM Phoenix Open. * 17:20 – listen to KAHL Radio

Chess: * 16:25 – have moved in all pending CC games

 

from Mitchell Report

⚠️ SPOILER WARNING: MAJOR SPOILERS

Alt text: Promotional poster for the TV series "Fallout" featuring three characters and a dog walking towards the viewer on a dusty road with a large, weathered "Welcome to New Vegas" sign in the background under a dusky sky. The word "Fallout" is prominently displayed at the top in bold yellow letters with a lightning bolt through the "o".

In the desolate wasteland of New Vegas, three survivors and their loyal dog embark on a perilous journey through a post-apocalyptic world where every step could be their last.

My Rating: ⭐⭐⭐⭐ (4/5 stars)

Episodes: 8 | Aired: 12-16-2025

This season answered a lot of questions and introduced a few new ones, and I actually liked it better than the first season. The flashbacks were handled much better this time, and it's thoroughly engaging and enjoyable entertainment. My one negative is that this is not the first TV show or movie to graft futuristic technology onto the post-World War 2 50s and 60s. I'm always astonished at the elaborate technology they build while simpler items get missed, and I keep asking why the technology they've built doesn't work here or there. Oh, and I didn't miss the few references to modern-day issues and Trump. I find it amazing how the politics of today is meshing into TV shows. Some do it well (like here) and some don't (like the recent Superman of 2025). But on the real-life side, there's the old axiom: history seems to always repeat itself. Looking forward to Season 3.

TMDb
This product uses the TMDb API but is not endorsed or certified by TMDb.

#review #tv #streaming

 

from folgepaula

as sweet as possible
as spontaneous as possible
as sincere as possible
as serene as possible
as strong as possible
as symbolic as possible
as soothing as possible
as soulful as possible

/feb26

 

from Sinnorientierung

A Message of Hope

Each of you is unique, unrepeatable, irreplaceable, incomparable, separate, and distinct. You have been given a body and a psyche which are sometimes similar in character type and/or traits to others, but beyond that you are a spirit person with a limited degree of freedom and a capacity to respond to life and its demands. There never was, there never is, there never will be an absolute twin, a clone, one who can replace you. You are one of a kind, and life is calling, inviting, and challenging you to become the authentic you by transcending yourself and at the same time forgetting yourself.

If you simply search for pleasure or power, you will experience something missing. You will at some moment feel empty, a void, a vacuum. You will wonder, “What's it all about?”

When the need for meaning finally occurs to you, you will begin to search for meaning every day.

...

McKilopp, T. (1993). A Message of Hope. The International Forum of Logotherapy, p. 4

#LogoTherapy #FranklViktor #McKillopp #hope #UniquePerson #meaning

 

from Reflections

This fairly recent obsession with metrics in the workplace is driving companies insane.

A while back, I watched a video about all the ways hotels are trying to save money by, among other things, eliminating storage space, making the bathroom less private, removing desks, and pressuring guests to work at the bar, where they can spend more money. (By the way, that bartender? They're also the receptionist.) These changes are, of course, driven by metrics like “GSS” and “ITR,” whatever the f@*k those are.

Is there a kernel of truth to all of this? Sure. Aloft Hotels are cozy, and they seem to follow this playbook. I didn't mind staying in one when I was stuck in San Francisco for one night more than ten years ago. Would I want to stay in one of their rooms during a business trip or anything else lasting more than a couple of days? Hell no. I'd like a desk and somewhere to put clothes. (I know, I'm so needy. I travel with clothes.)

Metrics are fine, sometimes, when their use is limited and their shortcomings are genuinely appreciated. Taking them too seriously and letting them make the decisions, however, is a recipe for disaster. Hard questions demand more thoughtfulness than that. “GSS” and “ITR” are meaningful until they aren't, and nobody is going to find solace in those abbreviations when generations of potential customers steer clear of your business because they actually want something good.

Sadly, I don't think most businesses think that far ahead.

Show me the metric which proves that your business isn't incurring massive risk by ignoring common sense. Until then, I don't care about “the numbers.”

#Life #SoftwareDevelopment #Tech

 

from Healthier

Our Mothers — This Documentary is a Very Keen Look at Our Dear Mothers

Lydia Joly, middle, on her parents’ farm circa 1967 — son, Loran, left; sister, right; my great-grandmother, back row. When great-grandmother was not visiting, I would sometimes sleep in the bed she had slept in when at the farm… “The apple doesn’t fall far from the tree?”

A documentary on the “astounding impact” of a mother on others was created by Michael DuBois a few years ago…

“Becoming Home – full film”:

https://youtu.be/NtPbAuFMI0c?si=bcCTE2fZH3PVN7vy

The documentary “Becoming Home” touched my heart a few years ago. Made by filmmaker Michael DuBois, it chronicles the “first year after the death of his mother. He set out to discover why she had the astounding impact on others that she did…”

Michael lived on Cape Cod when he created this documentary…

“Becoming Home” is his finished story. It is the story of his mother, and her grace through life. It is the story of his childhood. And it is the story of learning to move forward after those losses, without moving away from them. Directed by Michael F. DuBois. Produced by Bert Mayer and Larissa Farrell. Director of Photography: Mark Kammel. Original Music by Derek Hamilton. Featuring Music by Sky Flying By and Pete Miller.

Denzel Washington has this to say, about mothers, also…

“The Power of a Mother’s Love | Denzel Washington’s Inspiring Speech on Gratitude and Respect”

My mother, Lydia Joly, age 87, a war refugee from Piaski, Poland, with time in a relocation camp in northern Germany after World War II — arrived at Ellis Island in 1950 — image by son Loran

Christmas card 2024 with Lydia’s self-made Gingerbread house

Lydia — my mother — was born in Lubelskie County, Poland.

We see her village, Piaski, here, with beautiful music…

“Piaski Lubelskie”:

https://youtu.be/XF04EznukOY?si=E2qJLDS5jNsJxzaI

No wonder she loves gardening and flowers…

Lydia, gardening, 2025, age 87

 