Want to join in? Respond to our weekly writing prompts, open to everyone.
from
Kremkaus Blog
Yesterday, in a digital politics workshop hosted by SEND e.V. on political work in rural areas, I shared a thought that has been with me for a while and that I would like to sort out a little here. In my day-to-day work in villages and small towns, I regularly meet people who describe themselves as conservative, and who at the same time work on projects that urban discourse would probably classify as progressive or even transformative. These people build networks, found coworking spaces, launch MehrWertOrte, or commit themselves to what Ray Oldenburg described as "third places": spaces beyond home and workplace where encounter, exchange, and democratic negotiation take place.
And yet I keep hearing the sentence: "I'm rather conservative." That puzzled me for a long time, partly because I don't see myself as conservative and we nevertheless work together well, constructively, and with shared goals. What's more, empirical findings, for instance by the sociologist Ansgar Hudde, show that the voting behaviour of many of these people, with regional exceptions such as Bavaria or the Eifel, is not clearly conservative at all. Self-description and political practice do not necessarily coincide.
Perhaps the irritation lies in my understanding of the term. In urban-shaped discourse, "conservative" often appears as the opposite of "progressive": preservation instead of change, tradition instead of innovation. In rural areas, however, this semantics seems to shift. There, as I perceive it, conservative often means taking responsibility for the community, looking after things, and not lightly giving up structures that work.
When an entrepreneur transforms a vacant four-sided farmstead into a coworking and meeting place, this rarely happens out of a disruptive sense of mission. It is about revitalising the town centre, about shorter distances, about prospects for young people, about social infrastructure. That is not a break with what exists but its further development. The urban coworking space was never truly disruptive either (offices still exist), yet in the urban context coworking was happily staged as a symbol of a "new world of work". There, "new" is normatively charged. In the countryside, "new" is above all functional.
In large cities, coworking was long part of a creative-milieu logic; in small towns and villages it is often an infrastructural answer to structural challenges: vacancies are increasing, business models are changing, traditional meeting places are disappearing, young people commute or move away. Under these conditions, a coworking space becomes an instrument of public service provision in the broader sense.
Something similar applies to MehrWertOrte: multifunctional spaces that bring together work, learning, local supply, culture, and volunteering. They stabilise local networks and increase resilience. Many initiators speak of home, cohesion, responsibility, and intergenerational fairness, terms with rather conservative political connotations. And yet these projects are inconceivable without openness. Different professions, ways of life, and political attitudes share infrastructure and get talking. Perhaps that is why, from my urban-socialised perspective, I read this practice as progressive.
The longer I think about it, the more I doubt the sharpness of the dichotomy "conservative versus progressive" in the rural context. Many of the people I work with do not want to overthrow anything. They want to preserve what matters to them: quality of life, community, economic viability. It is precisely from this motive that they develop new models of work, cooperation, and encounter. That seems less like a contradiction than like another form of progress: not a radical break with what exists, but a careful transformation from within the community.
Anyone who hastily sorts rural areas into political pigeonholes overlooks this dynamic. Places are emerging there that strengthen democratic culture, open up economic prospects, and enable social diversity, often initiated by people who see themselves as conservative. For me, that is by now less irritating than encouraging. It shows that openness is not necessarily tied to a particular political label, and that coworking spaces, MehrWertOrte, and other third places can build bridges between milieus, generations, and worldviews. Perhaps that is precisely their real added value.
from
Atmósferas
Just as it sounds, without altering it. In its strike, in the waves it makes, just as it is, simple perfection.
Letting the mind fall, rest. Simply what happens.
Another drop, and another. The mountain, the house, the origin: the light that kindles form and your thoughts, drops as well.
In its strike, another drop.
Without altering anything, rest.

I started The Catechetic Converter a year ago. And I feel an obligation to write something in honor of that milestone, celebrating the fact that I’ve had my own website for a full year.
The post that has the most views is my first one, which is about Linux. And Linux and this site have an intertwined relationship in that my switch to Linux helped inspire me to get away from “Big Tech” in other ways. In fact, it was (I think) a post on Westenberg that inspired me to start the blog. The idea of having my own little corner of the web, not mediated by some corporation—and not built to make money—was appealing. Just a place to stick my random thoughts and ideas, putting them “out there” in the ether to see where they land, what they inspire… I loved that.
And Linux and this post are kind of intertwined because I have spent the past week staying up far too late most nights (in violation of one of my Lenten disciplines to go to bed by 10:30) trying to get a custom firmware to run on an MP3 player.
See, in continuance of the spirit of moving away from Big Tech, I have started disentangling myself from having everything on a phone. I asked for a Sony CyberShot F707 digital camera for Christmas (a model I had when it was brand new and stupidly donated to Goodwill or whatever several years back). And then I bought an Innioasis Y1, which hearkens back to iPods of yore (with a click-wheel and everything). This is a device that folks like to tinker with as well and I learned that people had managed to get a bespoke firmware known as RockBox—initially developed for old iPods—to run on the device, improving its functionality in numerous ways. So I planned to do this.
After what seemed like months, the device finally arrived. The standard, out-of-the-box firmware was fine, if a little rough. But the filing system for finding my music was wanting and so I decided to give RockBox a try because it offers more refinements in this area (plus a TON of fun custom themes for the device). Doing this requires downloading and running a program known as the Innioasis Updater, which was developed primarily for Windows and Mac but also includes a Linux version that is overtly said to be “unofficial” with warnings that I would be “on my own” with this. I got the sense that this would be a challenge.
I’ll spare you too many details, but I had to download another tool called MTKClient, which is written in Python, and had to run a ton of terminal commands to get it running. It didn’t help that the installation guides were written using LLMs and I needed to switch back and forth between two of them to get all the necessary steps right (the “official” one on GitHub failed to note the need to change directories in a couple of key places). I wound up needing certain drivers and having to write custom scripts. At one point I managed to accidentally remove all of my terminal commands thanks to forgetting to add the word “eval” to a directory. Then I also managed to lock my machine (in this case a 2011-era Mac running Linux Mint) into constantly trying to download Android Platform Tools from a broken mirror of a repository, which taught me a whole range of new commands to fully purge a faulty download. After successfully installing both programs, I found that the updater would not properly read my device.
I attempted to install the program using my wife’s Windows 11 laptop and was reminded why I’ve spent over twenty years hating Microsoft.
But GitHub forums came to the rescue (where I also learned that the updater was vibe-coded using LLMs, which probably explains a lot) and I got RockBox to run on the device. It now has a theme that looks like it belongs on the first Macintosh (because even though we might have broken up, I still carry parts of her that are now parts of me). I’m listening to Maggie Rogers on the device as I write this, right after I celebrated this accomplishment with the Geto Boys’ “Damn It Feels Good To Be A Gangsta.”
***
This morning at my parish’s Bible Study, one of my parishioners noted that I tend to have a lot of information and ephemera in my head about any number of things related to Christianity. “I tend to think of things more simply,” he said.
I told him that I also value simplicity, but I come to that simplicity through learning and accumulating knowledge about what I’m doing and believing. For me it’s like a bell curve by way of zen. What I mean is that the zen monk may take a guy out of the mud, put him on the stool for years and teach him koans and sutras, get him to the verge of enlightenment only to then throw him back in the mud because the guy needs to learn that enlightenment can be found in the mud. In other words, I like taking things apart just to get back to where I started because I now understand that start so much better.
Tinkering and futzing with my computer speaks to this because it helps me to consider the complexity behind simple things. Like right now I’m putting letters together into words on a document. But there is an astounding number of calculations taking place to make this happen. The words you read on your screen are the result of carefully managed electrical currents running on circuit boards and through cables connected to liquid crystal fields that display what you’re seeing. And there is also an incomprehensible number of electrical charges going on among the synapses in your brain, not only causing you to see these words, but causing you to interpret them as things that make you feel things and think other things.
In the Bible Study we looked at Jesus’ discussion with Nicodemus in the third chapter of John’s gospel. In that passage Jesus says that everyone born of the Spirit is like the wind (in the Greek language of John’s gospel there is a triple meaning of Spirit/wind/breath that Jesus is playing with here). The wind connects. The wind moves. This speaks to the complex connections between things, connections made by God. Connections where God can be found. And like the wind, once God shows up you know it.
***
I’m not really sure where I’m going with this. I guess I’m just on the other side of bell-curve, back where I started. Putting out thoughts and words into the ether to see where they land. To see what they inspire. Which is another wind-related word, by the way. I’m writing this on a miraculous piece of technology and you’re reading it on one equally so. In between us is a dense web of complexity and connection—including both electricity and wind. We can take a look at that complexity, investigate it, see how it runs. But in the end we come back to where we began:
A writer and a reader, brought together by some wind. A wind holy and mysterious.
***
The Rev. Charles Browning II is the rector of Saint Mary’s Episcopal Church in Honolulu, Hawai’i. He is a husband, father, surfer, and frequent over-thinker. Follow him on Mastodon and Pixelfed.
#Linux #Christianity #Thought #Random #Jesus #Technology
Young people today can't even jump over a giraffe. At your age, I would plant a long pole I had made from oak wood, and I could clear three giraffes in ten seconds. My friend Edgard managed to jump five giraffes, timed. You can't even hop over a toad, and if one passes near you, you get scared.
I'm telling you this because you've caught a sorrow and you can't get out of it. Stuck there like a screw. Son, the boys of my day used to catch ten sorrows a week, and we would leap over them one by one until we left them behind. I once cleared six sorrows in a single day. Your uncle Matracas jumped eight. And you see just one and you get stuck.
At your age, the boys of the village would lift our spirits by throwing stones at the people going down the rapids of the Zot river in their canoes. I once took off an ear of the great-uncle of a friend we called el Cornetas. Very hard to hit them, because in the village they knew every trick.
-Dad, remember that you were born over there, and those were other times. Here in Los Angeles life is different: sorrows are savored and, if possible, we make a living from them.
from
Iain Harper's Blog
In March 2023, GPT-4 could identify prime numbers with 97.6% accuracy. By June, that figure had cratered to 2.4%. Not a rounding error, not a minor regression, but a 95-point collapse on the same task with the same prompts. If a bridge lost 95% of its load-bearing capacity in three months, someone would go to prison. In AI, the vendor posts a changelog and moves on.
This pattern has repeated with depressing regularity across every frontier provider. Models ship to applause and enterprise contracts get signed on the strength of benchmark screenshots, and then something changes. The model you evaluated is no longer the model answering your customers, and nobody tells you until your production workflow starts producing garbage.
Researchers at Stanford and UC Berkeley tracked this drift formally, comparing GPT-3.5 and GPT-4 snapshots from March and June 2023 across seven tasks. The results were bad enough to make the researchers themselves flinch. GPT-4’s ability to generate directly executable code dropped from 52% to 10%. Its willingness to follow chain-of-thought prompting, one of the most widely used techniques for improving accuracy, degraded without explanation. GPT-3.5 actually improved on some tasks where GPT-4 got worse, which implies that updates to one model’s behaviour were creating unintended regressions in another.
“The magnitude of the changes in the LLMs’ responses surprised us,” James Zou, a Stanford professor and co-author, told The Register. The team’s conclusion was blunt. The behaviour of the “same” LLM service can shift substantially in weeks, and nobody outside the provider knows when or why.
This wasn’t a one-off result that got debated and forgotten. The OpenAI developer forums have become a rolling graveyard of complaints. In September 2025, users running GPT-4.1 reported severe intelligence degradation within 30 days of launch, with complex tool calls and multi-step instructions suddenly failing. Similar threads appeared for GPT-4 Turbo in May 2025. The pattern never varies, and by now it has become depressingly predictable. Works brilliantly at launch, degrades silently, users scramble to figure out what broke.
There are at least four mechanisms that can degrade a deployed model, and most frontier providers are using all of them simultaneously.
Quantisation is the most technically straightforward of the four, and the easiest to understand. A model trained in 16-bit or 32-bit floating-point precision gets compressed to 8-bit or 4-bit integers for serving. The arithmetic is straightforward enough, since a model stored in FP16 needs roughly two bytes per parameter, so a 70-billion-parameter model demands about 140GB of VRAM just for weights. Quantise to 4-bit and you cut that to around 35GB, enough to run on hardware that costs a fraction as much.
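The memory arithmetic above is easy to check. A minimal sketch (weights only; real deployments also need memory for activations and the KV cache):

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Memory for model weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# A 70-billion-parameter model, as in the text:
fp16_gb = weight_memory_gb(70e9, 16)  # 140.0 GB
int4_gb = weight_memory_gb(70e9, 4)   # 35.0 GB
print(f"FP16: {fp16_gb:.0f} GB, 4-bit: {int4_gb:.0f} GB")
```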
The trade-off is supposed to be minimal, and Red Hat’s analysis of over 500,000 evaluations found that 8-bit and 4-bit quantised models showed “very competitive accuracy recovery” on most benchmarks, especially for larger models. But that phrase “most benchmarks” is doing heavy lifting. Quantisation works by rounding, and rounding destroys outlier values. The weights that fire rarely but matter enormously for edge-case reasoning are exactly the weights that get flattened first. For standard tasks you barely notice the difference, but for the specific hard problems your production system was built to handle, the gap can be catastrophic. One developer reported that dynamic quantisation of a 3B-parameter model dropped accuracy from 65.6% to 32.3%, a halving that no benchmark average would predict.
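A toy illustration of why benchmark averages hide this, assuming one common scheme (symmetric integer quantisation with percentile clipping; production quantisers are far more sophisticated). The bulk of the weights round-trips almost perfectly, while the one rare, large weight is flattened:

```python
import numpy as np

def quantize_roundtrip(w: np.ndarray, bits: int, clip_pct: float = 99.0) -> np.ndarray:
    """Clip extremes at a percentile (to keep resolution for the bulk of the
    weights), scale to signed integers, round, then dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.percentile(np.abs(w), clip_pct) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

rng = np.random.default_rng(42)
# 999 ordinary small weights plus one rare but important outlier
w = np.concatenate([rng.normal(0.0, 0.02, 999), [3.0]])
w4 = quantize_roundtrip(w, bits=4)

bulk_err = np.abs(w4[:-1] - w[:-1]).max()  # small: the bulk survives
outlier_err = abs(w4[-1] - w[-1])          # large: the outlier was clipped away
```

On an average over all 1,000 weights the error looks negligible; on the one weight that mattered for an edge case, it is catastrophic.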
Mixture-of-experts routing is the more interesting culprit, and the one providers talk about least. DeepSeek’s V3, for example, has 671 billion total parameters but only activates about 37 billion per token. The economics are irresistible because you get the capacity of a massive model with the inference cost of a much smaller one. But the router decides which experts handle which queries, and routing decisions are probabilistic. A query that activated your model’s strongest expert subnetwork at launch might get routed differently after an update to the routing logic, or after the provider adjusts load balancing to handle peak traffic. The user sees the same model name in the API response. The actual computation behind it may have changed entirely.
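A toy sketch of the mechanism (nothing here reflects DeepSeek's actual router; the dimensions, random expert matrices, and top-1 routing are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # toy expert weights
router = rng.normal(size=(d, n_experts))                       # routing logits

def forward(x: np.ndarray, router: np.ndarray):
    """Send the token to its highest-scoring expert (top-1 for simplicity)."""
    chosen = int(np.argmax(x @ router))
    return chosen, x @ experts[chosen]

x = rng.normal(size=d)          # the same query, before and after an update
expert_a, y_a = forward(x, router)

# A silent tweak to the routing logic: same model name in the API response...
expert_b, y_b = forward(x, router + rng.normal(scale=0.5, size=router.shape))
# ...but possibly a different expert, and therefore a different computation.
```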
Distillation and model substitution is the elephant in the room that everyone suspects but nobody can prove definitively. Rumours have circulated since mid-2023 that OpenAI routes some queries to smaller, cheaper models behind the same API endpoint. The Gleech.org 2025 AI retrospective put it plainly: “True frontier capabilities are likely obscured by systematic cost-cutting (distillation for serving to consumers, quantisation, low reasoning-token modes, routing to cheap models).” GPT-4.5 was retired after just three months, presumably because the inference costs were unsustainable, even though it still ranked in the top five on LMArena for hallucination reduction nine months later. The model that performed best got killed because it was too expensive to run.
Safety tuning and RLHF adjustments create the subtlest form of drift. When OpenAI tightens content filters or adjusts the model’s tendency to refuse certain queries, those changes ripple through the entire behaviour space. The Stanford study found that GPT-4 became less willing to explain why it refused sensitive questions, switching from detailed explanations to terse “Sorry, I can’t answer that” responses. The model may have become safer by one measure, but it simultaneously became less transparent and less useful for legitimate applications that happened to brush against the updated boundaries.
Running frontier models is staggeringly expensive, and every provider is under pressure to reduce cost-per-token. The maths, as one industry analysis noted, resembles building more fuel-efficient engines and then using the efficiency gains to build monster trucks. Token prices have dropped by a factor of 1,000 in three years, but reasoning models now generate thousands of internal tokens before producing a single visible output, and 99% of demand shifts to the newest model the moment it ships.
Providers respond by doing what any business would do. They optimise for throughput and margin, quantising the weights and routing easy queries to cheaper subnetworks while distilling the flagship into something that passes the benchmarks but costs a tenth as much to serve. The individual techniques are all defensible, but stacked together and applied silently, they create a system where the model’s advertised performance diverges from its delivered performance over time.
DeepSeek made this trade-off explicit and turned it into a business strategy. Its V3 model serves inference at roughly 90% below comparable OpenAI and Anthropic rates, and the MoE architecture that enables this pricing is openly documented. Whatever you think of the approach, at least the engineering trade-offs are visible. The problem is worse when providers make the same trade-offs quietly, behind an API that returns the same model identifier regardless of what actually computed the response.
The practical upshot is unpleasant but straightforward. If your application depends on consistent model behaviour, you are building on sand that shifts without warning. The Stanford researchers recommended continuous monitoring, and they were right, but monitoring alone doesn’t solve the problem, because it tells you something broke without stopping it from breaking.
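In practice, continuous monitoring can be as simple as replaying a frozen canary set on a schedule and alerting on a drop. A minimal sketch (`call_model`, the cases, and the threshold are all placeholders for whatever client and evaluation data you actually use):

```python
from typing import Callable

def run_canary(call_model: Callable[[str], str],
               cases: list[tuple[str, str]],
               threshold: float = 0.9) -> tuple[float, bool]:
    """Replay a frozen (prompt, expected) set and flag drift when accuracy
    falls below the level measured at evaluation time."""
    correct = sum(1 for prompt, expected in cases if call_model(prompt) == expected)
    accuracy = correct / len(cases)
    return accuracy, accuracy >= threshold

# Usage with a stub standing in for a real API client:
cases = [("2+2=", "4"), ("Capital of France?", "Paris")]
stub = lambda p: {"2+2=": "4", "Capital of France?": "Paris"}[p]
acc, healthy = run_canary(stub, cases)  # acc == 1.0, healthy is True
```

Run it daily against the production endpoint and you at least learn the day the model changed, rather than the day your customers did.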
Pinning to a specific model snapshot helps, where providers offer it, but even snapshots get deprecated. OpenAI maintains them for a few months and then requires developers to migrate. The careful evaluation you ran against the March snapshot becomes irrelevant when you’re forced onto the June version and nobody can tell you exactly what changed.
The deeper issue is one of trust and transparency. When a model provider updates a live model, they are unilaterally changing the behaviour of every application built on top of it. That is not a software update but an undocumented API change, the kind that would trigger outrage in any other engineering discipline. Imagine if AWS silently swapped your database engine for a cheaper one that was “approximately equivalent” on standard benchmarks, and you can begin to see how the AI industry has somehow normalised something that would be career-ending negligence anywhere else.
The model you benchmarked, the one that earned the contract, that impressed the board, that your engineers spent weeks building prompts and evaluation harnesses around, is a snapshot of a moving target. Quantisation shaves off the edges while routing sends your queries to whichever expert subnetwork happens to be cheapest that millisecond, and safety updates redraw the boundaries of what the model will and won’t do. None of it shows up in the model name string your application receives in the API response.
Somewhere in a data centre, the accountants and the alignment researchers are both pulling the same model in different directions, one toward cheaper inference and the other toward tighter guardrails, and the engineers who built their products on last month’s version are left checking the forums to figure out why everything stopped working on a Tuesday.
from Unvarnished diary of a lill Japanese mouse
JOURNAL February 27 Straw mat, sabre, bokken: a kenjutsu lesson
So I explained to these already advanced students why in kenjutsu we don't cut straw mats. I teach kenjutsu, the whole set of combat techniques of the traditional Japanese warriors. I do mean combat: that means you are supposed to have an opponent, not a motionless mat. To cut a mat, you use the blade from the middle of its length, and for that you have to get close to the mat. Right there, an opponent isn't going to let you approach like that without doing anything. But let's say he does. You draw, you take the sabre in both hands, and most of the time you strike hard, so with the legs... you risk jamming the blade. You're dead. If you do cut, you finish with the blade pointing down, leaning slightly forward; to get back into guard you have to step back two paces... I won't draw you a picture: at every phase of the action you put yourself in serious danger in front of an opponent. In kenjutsu you keep a distance of one arm's length plus one blade's length. In a single movement you draw, you cut with the point, two or three centimetres of blade at most, and you return to guard immediately without having to step back. If you have judged the distance well (that is easily acquired), you have cut the carotid. That is enough; if need be, you can land a second blow. No need for exceptional strength, it is enough to have the right movement. That is the difference, and that is why we don't amuse ourselves cutting mats. I gave a demonstration, on a mat as it happens. The cut ran the whole diameter, about 3 cm deep, roughly half a second at a glance. It is because I am trained in kenjutsu that I morally beat my brother from the very first exchange of our duel. We had bokken, of course, but I used mine like a sabre: in the first exchange, with the point of my bokken, I touched the tendons of his hand. With a sabre the fight would have been over. He wanted to break my bones; the whole difference is there.
Obviously it is clearer when I can show it, but I think that even here it is understandable. My students understood.
from An Open Letter
I learned that I'm supposed to move on. And I learned a couple different ways of doing that; one of the ways was to act like they had died and grieve that. But I guess it's pretty hard because I know she's not dead. I've been watching a good amount of videos on a different set of topics: things from her perspective, things from my perspective, things about relationships in the future, stuff like that. I also learned that I over-intellectualized the break up, and because of that I was able to shield myself from grief. And so I'm back to letting myself feel that grief.
I was watching a video on YouTube of an old lady reading something called Let Them. Just let them. And I think it's a pretty simple concept that would benefit me a lot to really internalize. I spent a lot of time in the relationship and I put up with a lot of stuff because I really wanted things to work, for the sake of things working. I think I also took on the role of a caretaker a lot, and I tried to fix her. I think I took a lot of responsibility and I told myself that I had a lot of agency over the things that she was kind of deficient in, and instead of trying to move on, I instead almost made her a pet project in a way, trying to make her want to change and become the person that I hoped I could spend a future with. But you cannot make a horse drink water. All you can do is lead it to it. I think there's a certain kind of grief in accepting the fact that I cannot save her, and additionally accepting the fact that if she chooses to get better, maybe I can take some amount of credit for being part of that catalyst, but at the same time it's not because of me.
It was a really weird thing, when I was using ChatGPT as a sounding board, one of the questions it asked me was how I would feel if in two years she got better emotionally. Like she had healed and fixed a lot of the issues that caused our relationship to fail. And weirdly I didn't feel good about it, and I don't really know why if I'm being honest. I think the low hanging fruit I would guess is that I would be upset that I didn't get to experience that version of her. And I guess another part of me would feel like I tried really hard to give her that grace and give her the tools to fix her issues, but she just didn't. And then in this hypothetical, she then did. It's a weird thing because I've said a lot that I want the best for her and I want her life to go well, but when I think about her in a relationship with another person, especially if it's recent, it hurts. I think that's also just a very natural thing of course, but it still hurts. I think I want to know that I was special to her, and I guess part of me is still hanging onto those words that she would always tell me, of how I was the one and how she doesn't know how she could live without me. But I guess that's the caretaker. I think I also thought a lot about the fact that part of the reason why I felt like she loved me a lot was because I took care of her in those ways. I would give her a lot of grace, I would tolerate a lot of things, I would regulate her emotions, I would try to fix her where I could. And I remember that in the most recent fuck up, I had a really weird thought that I didn't like. My brain told me at some point that she was going to make up for this so much, and I would essentially receive so much love and affection from that. Almost like my needs, and even just my wants, would be met, because she almost owed me in a way. And that thought disgusted me immediately, even in the moment. That's not at all what love should be like, and I really don't think that's how I viewed things either.
I really hope not.
I started to cry when I was driving home from hanging out with a friend, and I kind of triggered it on my own. I put my hand off to the side like I would when we were driving, and I would hold her thighs. And then I would put my hand where I would've held her hand on the center console. I told myself that I lost my passenger princess. And that was enough to make me start crying again. I went to the food place that we went to together a lot when I first moved here. I thought about how we parked in almost the same spot and she would get into my car and we would eat our food while watching a YouTube video together. But it's almost like I've run out of tears. I heaved and I cried, but not many tears came out.
I cried a lot yesterday when I put some of the last keepsakes into the trash bag that I have now stored in the shed. I also put in lemon, which was the stuffed animal that she bought to cuddle at my place. And it felt kind of fucked because lemon kind of became my stuffed animal in a way. I've never really been super attached to a stuffed animal, but part of me feels really guilty for keeping it in that shed, especially when I've cuddled it so many nights. Maybe the kindest thing that I could do is rehome lemon. Part of me of course wants to keep lemon, and it's weird because it doesn't even feel like it's super strongly tied to her; lemon is its own thing in my mind. But then I also think about all of the photos and all of the times that she cuddled lemon. All the times that I would see her sleeping so peacefully cuddling it. You know, it would make me sad sometimes, because she would cuddle lemon and Hash would be trying to snuggle up with her on the side. And I would always wonder how she could choose a stuffed animal over Hash. Sometimes she did cuddle Hash though, and it made me really happy to see that. And just like that, I've started to cry again. I feel a lot of obligation towards my dog, Hash. And I really want to make sure that any partner I have in the future is loved by him, and absolutely loves him back. Honestly, a part of me feels like another reason why I'm glad I made this decision was because when she was breaking up with me and going through my house with her roommates, she said "bye-bye, Hash" in a very light tone, like she had emotionally distanced herself from him. And I just don't understand how you could love a dog and dismiss them so easily. Her roommates were even making jokes about stealing him, and about how their cat would beat him in a fight. I don't care how much she was mad at me, or how much she needed the relationship to end. She should never have even considered any kind of malice towards him. That's my baby.
And at the end of the day, I really do mean that way more than whenever I would say that towards her. Hash should never have been disregarded or caught up in the conflict we had. And I think that's enough of a reason for me to understand that maybe she wasn't the one for me. And it hurts a lot because I think Hash is like me in a lot of ways. I went through my camera roll and I found photos of her with Hash sleeping under her lap while she was doing homework. And there's so many photos of them together. And Hash really loved her so much, just like I did. But I think sometimes there were just periods where, for whatever reason, she wouldn't be well, and she would hurt us. I think at the end of the day, I really do deserve someone who wouldn't do that to me. I deserve to be loved, in a gentle way. In a way where I don't need to take care of someone else and keep them emotionally grounded. In a way that I don't need to convince them or explain to them how they've hurt me, but rather it comes from a place of compassion and curiosity. And I'm not saying that E didn't love me at all. But I don't think she loved me in the healthiest of ways, and I also agree that I don't think I loved her in those same ways. I also don't know if I believe that love conquers all. Because I think if she truly loved me, she would've helped herself. She would've done the work to overcome the problems that she had; at the same time, maybe those problems were just more than her love could handle. And so she did love me, but I think she was struggling a lot. And maybe the kindest thing she could've done in that situation was to let me go. And the problem is I didn't want to let go. And so I think the even kinder thing she did for me was to take it so far that I would have no choice. And maybe that was what I needed. I really hope that in my future relationships I'm able to set boundaries and I'm able to take time and make sure that the person I choose to be with next is someone who is good for me.
from EpicMind
Why do some people rise in organisations while others don't, even though they are at least as competent professionally? This question comes up regularly: in my teaching, in conversations with managers, in discussions about career paths. Many implicitly assume that quality prevails in the long run. Whoever delivers the better analyses, thinks more astutely, works more carefully will sooner or later also lead. It is not that simple.
A study from MIT, recently reported on, offers an instructive finding here. Across several investigations, people who completed a structured debate training were more likely to move into leadership roles later on. The decisive mechanism was not expertise but an increase in so-called assertiveness: the ability to communicate clearly, directly, and steadfastly. Assertiveness does not mean aggressiveness. It is not about talking others down or acting dominant. What is meant is the ability to present one's own position intelligibly, to take in objections, and still not cave.
The study thus makes visible something many know from practice: #leadership emerges in social interactions. Not through perfect concept papers, but in meetings, negotiations, conflict situations. Those who remain visible in such moments are more likely to be perceived as capable of leading. That does not mean such a person is automatically the better leader. But they are more likely to be chosen.
Organisations have to decide whom to entrust with responsibility. These decisions are not based solely on objective performance data. They rest on perception: Who comes across as composed? Who stays calm under pressure? Who can defend a position even against headwinds?
The MIT results suggest that precisely these factors play a systematic role. Debate training does not primarily change thinking; it changes how one appears in social space. And that appearance influences chances of advancement. The lesson: it is not enough to have good ideas. You must also be able to hold your ground with them in dialogue.
Here a point comes into play that many find irritating: when I prepare aspiring managers for the oral communication examination that is part of the SVF certificate, I am regularly asked what this format is for. The exam consists of a short preparation phase followed by a 15-minute dialogue with two experts who deliberately take the opposing position. So no presentation and no memorisation, but a conversation against headwind.
At first glance this looks like a rhetorical duel. On closer inspection, however, it models a typical leadership situation: you have to develop, structure, and defend a position, and at the same time listen, react, stay calm. Exactly the abilities, in other words, that according to the MIT study are linked to leadership emergence. The exam does not measure knowledge but the ability to remain visible and argumentatively capable of acting under social pressure. That is no accident. Leadership does not take place in monologue.
At this point a nuanced qualification matters to me. The study shows that assertive communication increases chances of advancement. It says nothing about whether these people are the most effective leaders in the long run. Here lies a tension. Organisations could run the risk of favouring those who come across especially clearly, while reflective, quiet, or strongly cooperative personalities receive less attention. Visibility is not the same as quality.
Nor does the oral exam measure "good leadership" in its full breadth. It measures one precondition for being noticed in leadership situations at all. Listening, empathy, strategic thinking, and integrative ability are not comprehensively tested there. But: anyone unable to state a position clearly will find it hard to bring those other qualities to bear. Visibility is no substitute for leadership – it is an admission ticket.
Against this background I consider the format well chosen. It forces candidates into a realistic interaction situation. It tests steadfastness without disrespect. It demands structure under time pressure. It requires presence. And it confronts them with a fact that holds in everyday working life anyway: leadership means showing your stance in contentious conversations. Those who do not train this ability will hardly be able to summon it spontaneously in a work context.
It is not always the best ideas that rise. Often it is those who can visibly defend their ideas under contradiction. The MIT study provides an empirical basis for this. Leadership emerges in conversation – not in thought alone.
The oral communication examination in the SVF certificate reflects exactly this reality. It tests not simply knowledge but social effectiveness. And it reminds us that professional competence without communicative steadfastness rarely suffices in organisations.
If you are preparing for such an exam, do not see it as a rhetorical trial of strength. See it as a training ground for visibility. Develop clarity in your argumentation, remain respectful in disagreement, and hold your position when headwinds come. Leadership does not begin with power. It begins with not falling silent at the decisive moment.
Image source: Anton Hickel (1745–1798): The House of Commons, National Portrait Gallery, London, public domain.
Disclaimer: Parts of this text were revised with DeepL Write (proofreading and copy-editing). Google's NotebookLM was used for research in the works/sources mentioned and in my notes.
Topic #Erwachsenenbildung | #Coaching
from Lastige Gevallen in de Rede
Rituals around sketched lines
The bell declares the state of emergency waking in good time in the bedstead rising earlier than a bird doing penance a clearly outlined job announcing what you might care the deeds cling to orders clearing by right all that comes down judging all that can for dangers threatening with loose atoms halving what they must covet of what is offered every restriction is tightly guarded the trading house sanctified in between the sacrifices bestowed upon the hole a day is over and that is that
from Microsoft Dynamics 365 Human Resources
Microsoft Dynamics 365 Human Resources
Managing employee data, payroll processes, performance tracking, and compliance can be complex without the right system. With Microsoft Dynamics 365 Human Resources, businesses can streamline HR operations, improve workforce management, and enhance employee experiences. It provides a centralised platform to manage the entire employee lifecycle efficiently. Dynamics 365 Human Resources helps organisations build a more productive, engaged, and well-managed workforce.
Benefits of Microsoft Dynamics 365 Human Resources:
• Centralised employee data management
• Automated HR workflows
• Leave and attendance management
• Performance tracking and goal setting
• Payroll and benefits administration support
• Compliance and policy management
• Real-time workforce insights and reporting
With better visibility and automation, HR teams can focus more on strategy and employee development instead of manual processes.
Why Choose Nimus Technologies?
At Nimus Technologies, we implement and customise Dynamics 365 HR solutions tailored to your organisational structure and policies. Our focus is on improving efficiency, compliance, and employee engagement.
We provide:
• End-to-end HR system implementation
• Workflow configuration and customisation
• Integration with finance and payroll systems
• Data migration and setup
• User training and onboarding
• Ongoing support and optimisation
We help businesses modernise their HR operations with scalable and secure solutions.
Contact Us:
Ready to simplify and strengthen your HR management?
Nimus Technologies
📧 Email: contact@nimustech.com 🌐 Website: https://nimustech.com/ 📞 Contact: +91 7696006333
Let’s build a smarter human resources system for your organisation. 🚀
from 下川友
In the morning, my throat felt off.
Meaning to buy some medicine, I headed out for the nearby pharmacy.
I was standing on the platform waiting for a train. Wait – did getting to the nearby pharmacy ever involve a train?
As I was thinking that, an announcement came over the speakers.
"Rhinoceroses, please do not collide with the train I am driving."
So there are rhinos around here. I had seen on the internet that accidents happen where animals collide with trains.
Swaying along in the train, I suddenly remembered the old days. Back then I wanted to be an actor, never went to a single audition, and believed that if I just kept doing reckless things, offers would come to me on their own.
There was a period when I believed the acting work wouldn't come unless my clothes got stolen at the public bath. An actor with striking eyebrows had said something like that in an interview or somewhere. When you're young, a single sample somehow carries so much weight.
Outside the window the mountains went on and on. "Air is descending from the mountains," came the driver's announcement. What a robot-like driver, I thought, and for some reason took a liking to him.
I got off at a station where I could buy medicine and walked through the shopping street. A nail stuck out from the wall of an old building. Somehow it seemed to be sending a glance my way. I greeted the protruding nail with my eyes alone. Having no friends, I had at some point started communicating with inanimate objects – something that, of course, only I know.
Glancing down an alley, I saw someone stopped partway up a staircase. A hand on the railing, only the lower half of the body visible. The upper half could not be seen. I had no idea what they were doing, but I hesitated to call out.
As I was about to pass by, a foreigner came out of the neighbouring building and began shifting the tatami mats on the floor. I hadn't even noticed that tatami had been laid outside in the first place. Even when something this conspicuously strange happens, I sometimes just ignore it. At first the displacement was slight, but it grew larger and larger. Not being able to ask "What are you doing?" at moments like this is the reason my life never becomes fun.
In front of the greengrocer's, the shop girl was sharpening a pencil. Watching the gesture out of the corner of my eye, I remembered being told abruptly, at a shop with a similar atmosphere long ago, "You've come to pull up turnips." At the time I didn't understand, but simply accepted it and helped with the turnip-pulling; now there is a self inside me who straightens my back and says, "Have some agency, will you." Maybe I've grown a little since those days.
... Why did I get off in this town again? Was it to see the mama of the snack bar who once looked after me? No, that's not it. I was going to buy medicine for my throat.
Long ago, I was a regular at a certain snack bar. One day, when I went to chat with the mama as usual, a man was standing in front of the shop and said out of nowhere, "The mama has already boarded a ship." I never saw her again. More than ten years have passed since then.
Meanwhile, I noticed I had started breathing consciously. When I concentrate on my breathing, I can do nothing else. Even walking becomes unsteady. So I stood there vacantly for a while.
Ah, a sore throat is only a small part of what ails me. More fundamentally, I suffer from a deeper illness.
I had come looking for throat medicine, but I gave up on accomplishing that. Whatever, I thought, and decided simply to walk slowly through the town.
from Dallineation
Yesterday, for Lent Day 8, I posted about an important letter. I didn't title it as the Day 8 entry in the series because I wanted it to be more of a standalone post. But today I'm continuing with Day 9, sharing some thoughts on the Holy Spirit.
I've been reading from a journal I kept while serving as a full-time missionary for the Church of Jesus Christ of Latter-day Saints. I served in the Brazil Santa Maria Mission from December 2000 to December 2002. I went straight to Brazil and spent two months in the São Paulo Missionary Training Center learning the basics of the Portuguese language and learning how to do the work of proselyting and teaching. After two months, I traveled to my assigned mission area in the southernmost state of Brazil.
So far I have read what I wrote about my experiences in the MTC and in my first several months of actual missionary service. It has been fun to revisit those times, but my 19-year-old self was pretty naïve and a bit cringe at times. But I was committed and trying hard to be a good missionary.
One thing I've noticed is that I repeated phrases like “I felt the Spirit so strong” or “the Spirit was so strong” very often. It's very common to hear such expressions in LDS church meetings and classes. We believe the Holy Spirit testifies of truth, but we also tend to associate its presence with positive feelings like happiness, hope, joy, peace, and similar. Likewise, we tend to associate negative feelings like sadness, despair, agitation, and confusion with a lack of the presence of the Spirit. So when we say we are “feeling the Spirit” – or at least when I wrote about it as a missionary, and in my life since then – it's almost always in the context of those positive feelings.
I am still trying to learn about the Catholic perspective on the Holy Spirit and its role in our lives and in the Church, but it is quite different from the LDS perspective. I think Catholics tend to be more skeptical of feelings and emotions, as it is sometimes difficult to discern their origin. They can be misleading. This is not to say that God cannot send positive feelings and emotions to us through the Holy Spirit, but that those feelings don't necessarily always come from God. And we can be easily manipulated through our feelings.
So I'm trying to reflect on specific experiences I've had in the past where I believed I “felt the Spirit so strong” and think about the context and circumstances surrounding them.
I do believe I have felt the undeniable influence of the Holy Spirit at times throughout my life. The most powerful times have almost always been times when I have focused my thoughts and attention on any aspect of Jesus Christ, such as his birth, his ministry and teachings, his sufferings in the Garden of Gethsemane and on the cross, his resurrection.
Other times, when I think I have “felt the Spirit so strongly”, I think I have been caught up in feelings of unity, fellowship, belonging, love, etc. associated with church meetings.
But I would say that, for me, the majority of the time the Holy Spirit works on me almost indirectly. Quietly “nudging” me. A thought crosses my mind that I should text someone to say hello. Or I feel a brief feeling of reassurance as I am wrestling with my doubts and questions about my faith. I can easily dismiss or ignore those nudges, and I have for long stretches. But the nudges are always there. Always trying to gently turn my head to look at Jesus Christ. Because wherever we are looking is where we will go. And the Holy Spirit wants us to follow Christ.
I want to follow Christ, too. I'm just really stubborn and foolish. And easily distracted. So I really need the Holy Spirit. I'm just trying to understand more about how the Holy Spirit works and better recognize and discern his influence in my life.
Something has been drawing me to seriously investigate Catholicism and I can't explain it. And it's not stopping. My church leaders would certainly tell me that Catholicism is false and that it's not the Holy Spirit that's been nudging me to look into it. But I don't know.
#100DaysToOffload (No. 139) #faith #Lent #Christianity
from Dieselgoth
“Adjustable Clutch Pedal Stop” is surely a favorite listing on a satirical online aftermarket automotive parts store, somewhere... somewhen. At this stage in my life, I have accepted that my utter inability to understand marketing has always been nothing but my own failure, so I'll leave it up to you to decide whether or not ECS Tuning's c u s t o m little peg originated from a genuine automotive need.
Here's most of the product information, prettified:
If you've ever driven a modern-day manual transmission VW, you'll quickly notice that there's an abundance of unnecessary leg movement needed to disengage the clutch. This excessive leg motion creates several problems, such as an uncomfortable, non-driver-focused seating position and difficulty in consistently finding the clutch engagement point, and it leaves room for improvement on faster gear changes.
With the ECS Adjustable Clutch Pedal Stop in place, driving dynamics are dramatically improved! Our unique height adjustable thread-in design allows you to fine tune clutch pedal feel to your preference, improving the connection between your foot and the transmission.
As you push down on the clutch pedal, the clutch disc becomes disengaged from the flywheel, allowing the transmission to become disconnected from the engine. However, there is a point within the clutch pedal travel where the clutch disc becomes disengaged but the pedal keeps going past the point of disengagement. This is called “dead travel” and it leaves the clutch engagement point feeling more like a floating target.
By reducing the amount of travel needed to disengage the clutch, you gain consistency in take-offs and launches by always stopping the clutch pedal at the proper point, just before clutch engagement.
This unnecessary pedal travel is removed and taken up by the height of the clutch pedal stop, helping to lock into place the clutch disengagement point higher up off the floor for more consistent take-offs, faster gear changes and sportier pedal feel.
Part Design
Our in-house Engineering Team carefully spec'd out high quality parts to give you a robust, adjustable pedal stop that can take the stress and abuse of sporty driving.
Our design includes a polyurethane bumper to absorb shocks while driving aggressively and offers a unique, solid “thud” at the end of pedal stroke. The poly bumper won't compress or feel “sticky” after pedal strokes like other brands will.
Not satisfied with a stack of washers, we set out to design a fully adjustable pedal stop that allows you to adjust your height with threads, rather than rubber washers that compress, or steel washers that can rattle.
A zinc-alloy nutsert threads into the floor, in place of the OEM pedal stop, and acts as the anchor for the adjustable pedal stop to thread into. This unique feature of our design is a much more rigid stop that is going to stand up to repeated pedal mashing. This gives you a more confident, OE-like feel.
All other hardware is zinc-coated for protection from the environment for long-lasting great looks.
Performance Features
With our Adjustable Clutch Pedal Stop installed, you can dial in the feel of your clutch engagement point higher off the floor. This gives you shorter shift times, more consistent launches and easier driving dynamics.
You can creep and take off from a light or a hill with greater ease with our Adjustable Pedal Stop properly setting the pedal height just below the clutch engagement point.
With less leg movement required to disengage the clutch, you can re-adjust your seating position further back for a more comfortable and confident driver seating position. Many people are forced to sit too close to the steering wheel to disengage the clutch, which can lead to your arms being bent improperly, not allowing you to take proper control of the steering wheel.
Product Development
Our ECS Adjustable Clutch Pedal Stop was designed, engineered and tested by our Research and Development team in our Wadsworth, Ohio facility. We ensured the highest level of precision and quality is delivered throughout rigorous long term product testing and leading edge product development methods. Each unit is etched, assembled and packaged in-house for the highest level of quality assurance.
We tested several prototypes on many vehicles with OE and aftermarket clutches to ensure proper fitment and operation.
We specced out the best selection of parts to fulfill our mission of giving you the absolute BEST Pedal Stop on the market! With premium materials and our unique adjustable design, this part will completely transform your driving experience with improved dynamics!
It honestly improves the driving experience in my opinion.
#hardware
from SmarterArticles

Hiromu Yakura noticed something strange about his own voice. A postdoctoral researcher at the Max Planck Institute for Human Development in Berlin, Yakura studies the intersection of artificial intelligence and human behaviour. But the shift he detected was not in his data; it was in his speech. “I realised I was using 'delve' more,” he told reporters, describing the unsettling moment he caught himself unconsciously parroting the verbal tics of a large language model. Yakura was not alone. His subsequent research, analysing over 360,000 YouTube videos and 771,000 podcast episodes, revealed that academic YouTubers had begun using words favoured by AI chatbots up to 51 per cent more frequently after ChatGPT's November 2022 launch. Words like “delve,” “realm,” “underscore,” and “meticulous” were migrating from machine-generated text into the mouths of actual humans. A cultural feedback loop had been set in motion, and hardly anyone had noticed.
This quiet linguistic contamination is just one symptom of a much broader transformation. Across industries, conversational AI has become the front line of customer interaction. Chatbots handle banking queries, voice assistants schedule medical appointments, and algorithmic agents negotiate insurance claims. The global AI customer service market, valued at $12.06 billion in 2024, is projected to reach $47.82 billion by 2030, according to industry analysts. Gartner has predicted that conversational AI deployments within contact centres will reduce agent labour costs by $80 billion in 2026, with approximately 17 million contact centre agents worldwide facing a fundamental reshaping of their roles. Bank of America's virtual assistant Erica has surpassed 3 billion client interactions since its 2018 launch, serving nearly 50 million users with an average response time of 44 seconds. The two million daily consumer interactions with Erica alone save the bank the equivalent of 11,000 employees' daily work. The efficiency gains are staggering, the convenience undeniable.
But as these systems grow more sophisticated, more emotionally responsive, and more deeply woven into the fabric of daily communication, a disquieting question presents itself. What happens to us, the humans on the other end of the line? If we spend our days talking to machines that never lose their patience, never misunderstand our tone, and never push back with the messy friction of genuine feeling, do we slowly lose the capacity to navigate the unpredictable terrain of real human conversation? The evidence is beginning to suggest that we might.
The appeal of conversational AI is rooted in something profoundly human: a desire to be understood quickly and without complication. When you call your bank and a voice assistant resolves your problem in under a minute, there is an undeniable satisfaction in the transaction. No hold music, no awkward small talk, no navigating the emotional state of a tired customer service representative at the end of a long shift. The interaction is clean, efficient, and entirely on your terms.
This is by design. The conversational AI industry has been engineered to minimise friction. McKinsey reports that 78 per cent of companies have now integrated conversational AI into at least one key operational area. A 2025 Nextiva analysis found that 57 per cent of businesses are either using self-service chatbots or plan to do so imminently. By 2027, Gartner projects, 25 per cent of organisations will use chatbots as their primary customer service channel. The technology is no longer experimental; it is infrastructural. And the economic incentives are overwhelming: companies report average returns of $3.50 for every dollar invested in AI customer service, with leading organisations achieving returns as high as eight times their investment.
Yet friction, as any psychologist will tell you, is precisely what builds social muscle. The small moments of discomfort in human interaction, the pauses, the misunderstandings, the need to read another person's expression and adjust your approach, these are the crucibles in which empathy is forged. Sherry Turkle, the Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology at MIT, has spent decades studying how technology shapes human relationships. Her warning is direct: “What do we forget when we talk to machines? We forget what is special about being human.”
Turkle's concern is not that AI is inherently destructive, but that its seductive convenience trains us to avoid the very interactions that make us more fully human. In her research, she describes social media as a “gateway drug” to conversations with machines, arguing that the emotional scaffolding we once built through difficult, imperfect human dialogue is now being outsourced to algorithms that mirror our sentiments without ever genuinely understanding them. “AI offers the illusion of intimacy without the demands,” she has written. She challenges us to consider whether machines truly grasp empathy, or whether we are merely being “remembered” without being genuinely “heard.” The result is a kind of emotional atrophy; we become fluent in transactional exchange but increasingly clumsy at the real thing. The pushback and resistance of genuine human relationships, Turkle argues, are not obstacles to connection. They are the mechanism through which understanding and growth are forged.
The neurological implications of this shift are only beginning to come into focus. In a landmark 2025 paper published in the journal Neuron, Professor Benjamin Becker of the University of Hong Kong's Department of Psychology laid out a framework for understanding how interactions with AI might physically alter the social circuitry of the human brain. Becker's analysis, drawing on a meta-analysis of 1,302 functional MRI studies encompassing 47,083 activations, identified the “social brain” networks that enable rapid understanding and affiliation in interpersonal interactions. These are evolutionarily shaped circuits, refined over millennia of face-to-face human contact. They allow us to read facial expressions, interpret vocal tone, predict others' intentions, and calibrate our own behaviour in real time.
The problem, Becker argues, is that humans are hardwired to anthropomorphise. We instinctively attribute personality, feelings, and intentions to AI agents, a tendency psychologists call the “ELIZA effect,” named after a rudimentary 1960s chatbot that users nonetheless treated as a genuine therapist. The classic Heider and Simmel experiment demonstrated this tendency decades ago: humans intuitively interpret behaviour and motives even in simple moving geometric shapes. With AI agents that can modulate their voice, recall personal details, and respond with apparent emotional sensitivity, the anthropomorphic pull becomes far more powerful. As conversational AI becomes more advanced and personalised, Becker warns, these interactions will “increasingly engage neural mechanisms more deeply and may even change how brains function in social contexts.”
“Understanding how our social brain shapes interactions with AI and how AI interactions shape our social brains will be key to making sure these technologies support us, not harm us,” Becker stated. The implications are especially significant for young people, whose neural pathways for social cognition are still developing. If children and adolescents are forming their primary conversational habits with AI rather than with peers, parents, and teachers, the social brain may develop along fundamentally different lines than those of previous generations.
This is not merely theoretical. Research from Harvard's Graduate School of Education, led by Dr. Ying Xu, has examined how children interact differently with AI compared to humans. The findings are nuanced but concerning. While children can learn effectively from AI designed with pedagogical principles (improving vocabulary and comprehension through interactive dialogue), they consistently engage less deeply with AI than with human conversational partners. When speaking with a person, children are more likely to steer the conversation, ask follow-up questions, and share their own thoughts. With AI, they tend to become passive recipients, answering questions with less effort, particularly in complex exchanges that require genuine back-and-forth discussion.
The implication is clear: AI may teach children facts, but it struggles to teach them how to be present in a conversation. And that presence, that willingness to lean into the discomfort of not knowing what someone else will say next, is the foundation of social competence.
Perhaps the most counterintuitive finding in recent AI research is this: the more people talk to chatbots, the lonelier they tend to feel. In early 2025, OpenAI and the MIT Media Lab published the results of a landmark study, a four-week randomised controlled experiment involving 981 participants who exchanged over 300,000 messages with ChatGPT. The researchers tested three interaction modes (text, neutral voice, and engaging voice) across three conversation types (open-ended, non-personal, and personal).
The headline finding was stark. “Overall, higher daily usage, across all modalities and conversation types, correlated with higher loneliness, dependence, and problematic use, and lower socialisation,” the researchers reported. Voice-based chatbots initially appeared to mitigate loneliness compared to text-based interactions, but these advantages disappeared at high usage levels, especially with a neutral-voice chatbot. Participants who trusted and “bonded” with ChatGPT more were likelier than others to be lonely and to rely on the chatbot further, creating a self-reinforcing cycle of dependency.
The study also revealed gender-specific effects. After four weeks of chatbot use, female participants were slightly less likely to socialise with other people than their male counterparts. Participants who interacted with ChatGPT's voice mode using a gender different from their own reported significantly higher levels of loneliness and greater emotional dependency on the chatbot. The researchers noted that people with a stronger tendency for attachment in relationships and those who viewed the AI as a friend were more likely to experience negative effects. Personal conversations, which included more emotional expression from both user and model, were associated with higher levels of loneliness but, intriguingly, lower emotional dependence at moderate usage levels.
Parallel to the controlled study, OpenAI and MIT analysed real-world data from close to 40 million ChatGPT interactions and surveyed 4,076 of those users. They found that emotional engagement with ChatGPT remains relatively rare in overall usage, but that the subset of users who do form emotional connections tend to be the platform's heaviest users, and the loneliest.
The Brookings Institution, in a July 2025 analysis by Rebecca Winthrop and Isabelle Hau, framed this as a defining paradox of our era: “We are living through a paradox: humans are wired to connect, yet we've never been more isolated. At the same time, AI is growing more responsive, conversational, and emotionally attuned, and we are increasingly turning to machines for what we're not getting from each other: companionship.” They noted that AI companions like Replika.ai, Character.ai, and China's Xiaoice now count hundreds of millions of emotionally invested users, with some estimates suggesting the total may already exceed one billion.
The scale of emotional investment in AI companions has become impossible to ignore. Replika, one of the most prominent AI companion platforms, claims approximately 25 million users, with over 85 per cent reporting that they have developed emotional connections with their digital companion. The average user exchanges roughly 70 messages per day with their Replika. Character.AI users average 93 minutes per day on the platform, 18 minutes longer than the average TikTok session, while heavy Replika users report engagement of 2.7 hours daily, with extreme cases exceeding 12 hours.
A nationally representative survey of 1,060 teenagers conducted in spring 2025 found that 72 per cent of those aged 13 to 17 are already using AI companions, with roughly half using them at least a few times per month. About a third of teens reported using the technology for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice. Perhaps most tellingly, around a third of teenagers using AI companions said they find conversations with these systems as satisfying, or more satisfying, than conversations with real-life friends.
The data on well-being is less comforting. Among 387 research participants in one study, “the more a participant felt socially supported by AI, the lower their feeling of support was from close friends and family.” Ninety per cent of the 1,006 American students using Replika who were surveyed for a separate study reported experiencing loneliness, significantly higher than the comparable national average of 53 per cent. Common Sense Media has recommended that no one under 18 should use AI companions like Character.AI or Replika until more safeguards are in place to “eliminate relational manipulation and emotional dependency risks.”
The regulatory landscape is beginning to respond. In September 2025, the California legislature passed a bill requiring AI platforms to clearly notify users under 18 when they are interacting with a bot. That same week, the Federal Trade Commission opened a broad inquiry into seven major firms, including OpenAI, Meta, Snap, Google, and Character Technologies, examining the potential for emotional manipulation and dependency. These are early steps, but they signal a growing recognition that the companion economy is not merely a consumer trend; it is a public health concern.
The social consequences of AI-mediated communication extend beyond individual loneliness into the texture of everyday human interaction. At Cornell University, research scientist Jess Hohenstein led a series of experiments investigating what happens when people suspect their conversational partner is using AI assistance. The results, published in Scientific Reports under the title “Artificial Intelligence in Communication Impacts Language and Social Relationships,” revealed a troubling dynamic.
When participants believed their partner was using AI-generated smart replies, they rated that partner as less cooperative, less affiliative, and more dominant, regardless of whether the partner was actually using AI. The mere suspicion of algorithmic assistance was enough to erode trust and social warmth. “I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you're using AI to help you compose text, regardless of whether you actually are,” Hohenstein noted.
The study also found that actual use of smart replies increased communication efficiency and positive emotional language. But this improvement came at a cost: “While AI might be able to help you write, it's altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you're sacrificing some of your own personal voice,” Hohenstein observed.
Malte Jung, associate professor of information science at Cornell and a co-author on the study, drew a broader conclusion: “What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people's interactions, language and perceptions of each other.”
This finding raises uncomfortable questions about authenticity in an age of AI-assisted communication. If AI makes our messages more efficient and more positive but less recognisably our own, are we gaining convenience at the expense of genuine connection? And if the mere suspicion of AI involvement poisons the well of trust, what happens as AI becomes ubiquitous in workplace communication, dating apps, and even family group chats?
The Max Planck Institute research that caught Hiromu Yakura by surprise points to an even more fundamental concern: AI is not just changing how we communicate with machines; it is changing how we communicate with each other. The study identified twenty-one words that serve as clear markers of AI's linguistic influence. Terms favoured by large language models, “delve,” “realm,” “underscore,” “meticulous,” and others, were appearing with dramatically increased frequency in human speech, not just in written text but in spontaneous spoken communication. In the 58 per cent of analysed videos that showed no signs of scripted speech, these linguistic patterns still appeared, suggesting that their adoption extended beyond prepared remarks into genuinely extemporaneous conversation.
Levin Brinkmann, a co-author of the study at the Max Planck Institute, described the mechanism at work: “The patterns that are stored in AI technology seem to be transmitting back to the human mind.” The researchers characterised this as a “cultural feedback loop.” Humans train AI on their language; AI processes and statistically remixes that language; humans then unconsciously adopt the AI's patterns. The loop narrows with each iteration, potentially reducing linguistic diversity on a global scale. If AI systems trained primarily on English-language content begin to influence communication patterns worldwide, we might see a homogenisation of human expression that transcends national and cultural boundaries.
The concern extends beyond vocabulary. An analysis published by IE Insights in April 2025 argued that AI-driven platforms are “subtly teaching people to speak and think like machines, efficient, clear, emotionally detached.” The article warned that interactions are “increasingly optimised for clarity and brevity, but stripped of emotional depth, cultural nuance, and spontaneity that define authentic human connection.” It described a world in which “we are training machines to sound more human while simultaneously training ourselves to sound more like machines.” The impact, the analysis argued, is particularly dangerous in high-stakes environments where human nuance and emotional intelligence matter most: diplomacy, crisis negotiation, healthcare, and community care.
Emily Bender, a prominent linguist at the University of Washington, has observed that even people who do not personally use AI chatbots are not immune to this influence. The sheer volume of synthetic text now circulating online, in articles, emails, social media posts, and automated responses, makes it nearly impossible to avoid absorbing AI-inflected language patterns. The homogenisation is insidious precisely because it is invisible.
The American public appears to intuit, even if it cannot fully articulate, the social risks posed by AI. A Pew Research Center survey of 5,023 U.S. adults conducted in June 2025 found that 50 per cent of Americans say they are more concerned than excited about the increased use of AI in daily life, up from 37 per cent in 2021. Only 10 per cent reported being more excited than concerned, while 38 per cent felt equally excited and concerned. More than half (57 per cent) rated the societal risks of AI as high, compared with just 25 per cent who said the benefits are high.
The data on social relationships is particularly striking. Half of respondents (50 per cent) said they believe AI will make people's ability to form meaningful relationships worse. The public fears the loss of human connection more than AI experts do: 57 per cent of U.S. adults expressed extreme or high concern about AI leading to less connection between people, versus only 37 per cent of surveyed experts. This 20-point gap between public anxiety and expert reassurance is itself revealing. It suggests either that everyday citizens are perceiving something that specialists are overlooking, or that proximity to AI development generates a form of optimism bias.
The generational divide is especially revealing. Among adults under 30, the cohort most likely to use AI regularly, 58 per cent believe AI will worsen people's ability to form meaningful relationships, and 61 per cent believe it will make people worse at thinking creatively. This is markedly higher than the roughly 40 per cent of those aged 65 and older who share those views. The generation most fluent in AI is also the generation most anxious about what it might cost them.
Two-thirds of respondents (66 per cent) said AI should not judge whether two people could fall in love, and 73 per cent said AI should play no role in advising people about their faith. These are not merely policy preferences; they are boundary markers, lines drawn around the domains of human experience that people consider too sacred, too intimate, or too complex for algorithmic mediation.
The workplace effects of conversational AI adoption are already visible in the customer service industry itself. As chatbots handle an ever-larger share of routine interactions, the calls that do reach human agents are increasingly complex, emotionally charged, and difficult to resolve. This creates a cascading paradox: the agents who remain employed need greater social skills than ever, even as the broader population is getting less practice at the kind of difficult conversations these agents must navigate daily.
Recent industry data illustrates the toll. According to one analysis, 87 per cent of contact centre agents report high stress levels, and over 50 per cent face daily burnout, sleep issues, and emotional exhaustion. The automation of simple queries means agents now spend a disproportionate share of their working hours handling angry customers, technical problems that defy standard solutions, and emotionally charged conversations demanding empathy and judgement. More than 68 per cent of agents receive calls at least weekly that their training did not prepare them to handle.
A 2025 CX-focused study found that 79 per cent of Americans strongly prefer interacting with a human over an AI agent, and a Twilio report from the same year revealed that 78 per cent of consumers consider it important to be able to switch from an AI agent to a human one. Meanwhile, a Kinsta report found that 50 per cent of consumers would cancel a service if it were solely AI-driven. The message from customers is clear: they want efficiency, but not at the price of human presence.
The tension between economic incentive and human need creates a troubling dynamic. The global chatbot market, valued at roughly $15.6 billion in 2024, is expected to nearly triple to $46.6 billion by 2029. Every interaction that moves from human to machine represents a small reduction in the total volume of genuine interpersonal exchange in society. Multiply this across billions of interactions per year, and the cumulative effect on collective social skills becomes a legitimate concern.
The stakes are highest for the youngest members of society. UNICEF's December 2025 guidance on AI and children, now in its third edition, acknowledged that large language models are becoming “deeply embedded in daily life as conversational agents, evolving into companions for emotional support and social interaction.” The guidance flagged this trend as “particularly pronounced among children and adolescents, a demographic prone to forming parasocial relationships with AI chatbots.” It warned that youth are “uniquely vulnerable to manipulation due to neurodevelopmental changes.”
Research on joint media engagement, studying what happens when parents are present during children's AI interactions, offers a partial counterweight. When caregivers scaffold AI interactions, helping children process what they are hearing, encouraging them to question and respond actively, the developmental risks appear to diminish. But this requires time, attention, and digital literacy that not all families possess in equal measure.
The Harvard research from Dr. Ying Xu highlights a critical distinction: children who engage in interactive dialogue with AI can comprehend stories better and learn more vocabulary compared to passive listeners, and in some cases, learning gains from AI were even comparable to those from human interactions. But learning facts and developing social-emotional intelligence are fundamentally different processes. AI can drill vocabulary; it cannot model the subtle art of reading a room, sensing another person's discomfort, or knowing when to stay silent. The risk is not that children will stop learning. The risk is that they will learn everything except how to be with other people.
The picture that emerges from the research is neither straightforwardly dystopian nor naively optimistic. It is, instead, deeply complicated. Conversational AI offers genuine benefits: accessibility for people with disabilities, support for those experiencing isolation, efficiency in service delivery, and learning tools that can supplement (though not replace) human instruction. Stanford researchers found that while young adults using the AI chatbot Replika reported high levels of loneliness, many also felt emotionally supported by it, with 3 per cent crediting the chatbot for temporarily halting suicidal thoughts. The question is not whether to use these technologies, but how to use them without surrendering the skills that make us most distinctively human.
A 2025 study published in the Journal of Systems Science and Systems Engineering offers an instructive finding. Across two scenario studies and one laboratory experiment, researchers found that consumers exhibited higher prosocial intentions after interacting with socially oriented AI chatbots (those designed to build rapport and engage emotionally) compared to task-oriented ones (those focused purely on efficiency). The study revealed that social presence and empathy mediated this effect, suggesting that the design of AI systems meaningfully shapes their social consequences. This is not a trivial insight. It means that the choices made by engineers, product managers, and policymakers about how AI communicates will have ripple effects across the social fabric.
Professor Becker's neuroscience framework points in the same direction. The social brain is not fixed; it is plastic, shaped by the interactions it encounters. If those interactions are predominantly with machines that reward brevity and compliance, the brain will adapt accordingly. But if AI systems are designed to encourage, rather than replace, genuine human engagement, the technology could serve as a bridge rather than a barrier.
The Brookings Institution's Rebecca Winthrop and Isabelle Hau offered perhaps the most pointed formulation: the age of AI must not become “the age of emotional outsourcing.” The restoration of real human connection requires not a rejection of technology, but a deliberate, society-wide commitment to preserving the spaces, skills, and habits that sustain authentic relationships.
Sherry Turkle has described her decades of research as “not anti-technology, but pro-conversation.” That framing captures what is most urgently needed now. The rapid adoption of conversational AI in customer service, healthcare, education, and personal companionship is not inherently destructive. But it is proceeding at a pace that far outstrips our collective understanding of its social consequences.
The evidence assembled here, from neuroscience laboratories in Hong Kong to linguistics studies in Berlin, from controlled experiments at MIT to population surveys by Pew Research, converges on a single uncomfortable truth: the more seamlessly machines learn to talk like us, the greater the risk that we forget how to talk to each other. Not efficiently, not optimally, not in the polished cadence of a well-trained language model, but in the halting, imperfect, gloriously messy way that humans have always communicated. With pauses. With misunderstandings. With the kind of friction that, it turns out, is not a bug in the system of human connection. It is the entire point.
The voice recognition systems now achieving 95 per cent accuracy under ideal conditions and processing billions of interactions daily are marvels of engineering. The global voice and speech recognition market, valued at $14.8 billion in 2024, is projected to reach $61.27 billion by 2033. But accuracy in speech recognition is not the same as accuracy in human understanding. As we optimise our AI systems to hear every word, we might ask whether we are simultaneously losing our capacity to listen, truly listen, to one another.
The conversation about conversational AI has barely begun. It needs to move beyond the boardroom metrics of cost savings and efficiency gains, beyond the engineering challenges of word error rates and natural language processing, and into the deeper territory of what kind of society we are building when the first voice many of us hear each morning, and the last one we hear at night, belongs not to another human being but to a machine that has learned, with remarkable precision, to sound like one.
Yakura, H. and Brinkmann, L. et al. “Empirical evidence of Large Language Model's influence on human spoken communication.” Max Planck Institute for Human Development. arXiv:2409.01754. 2024. https://arxiv.org/html/2409.01754v1
Gartner, Inc. “Gartner Predicts Conversational AI Will Reduce Contact Center Agent Labor Costs by $80 Billion in 2026.” Press release, 31 August 2022. https://www.gartner.com/en/newsroom/press-releases/2022-08-31-gartner-predicts-conversational-ai-will-reduce-contac
Bank of America. “A Decade of AI Innovation: BofA's Virtual Assistant Erica Surpasses 3 Billion Client Interactions.” Press release, August 2025. https://newsroom.bankofamerica.com/content/newsroom/press-releases/2025/08/a-decade-of-ai-innovation--bofa-s-virtual-assistant-erica-surpas.html
Turkle, Sherry. “Reclaiming Conversation in the Age of AI.” After Babel. 2024. https://www.afterbabel.com/p/reclaiming-conversation-age-of-ai
Turkle, Sherry. NPR interview on the psychological impacts of bot relationships. 2 August 2024. https://www.npr.org/2024/08/02/g-s1-14793/mit-sociologist-sherry-turkle-on-the-psychological-impacts-of-bot-relationships
Becker, Benjamin. “Will our social brain inherently shape, and be shaped by, interactions with AI?” Neuron 113: 2037-2041. 2025. DOI: 10.1016/j.neuron.2025.04.034. https://www.cell.com/neuron/abstract/S0896-6273(25)00346-0
Xu, Ying. “AI's Impact on Children's Social and Cognitive Development.” Harvard Graduate School of Education and Children and Screens. 2024. https://www.gse.harvard.edu/ideas/edcast/24/10/impact-ai-childrens-development
OpenAI and MIT Media Lab. “How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Randomized Controlled Study.” March 2025. https://arxiv.org/html/2503.17473v2
OpenAI. “Early methods for studying affective use and emotional well-being on ChatGPT.” March 2025. https://openai.com/index/affective-use-study/
Hohenstein, Jess; Jung, Malte; and Kizilcec, Rene. “Artificial Intelligence in Communication Impacts Language and Social Relationships.” Scientific Reports. April 2023. https://news.cornell.edu/stories/2023/04/study-uncovers-social-cost-using-ai-conversations
Pew Research Center. “How Americans View AI and Its Impact on Human Abilities, Society.” Survey of 5,023 U.S. adults, June 2025. Published 17 September 2025. https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/
Winthrop, Rebecca and Hau, Isabelle. “What happens when AI chatbots replace real human connection.” Brookings Institution. July 2025. https://www.brookings.edu/articles/what-happens-when-ai-chatbots-replace-real-human-connection/
IE Insights. “The Social Price of AI Communication.” IE University. April 2025. https://www.ie.edu/insights/articles/the-social-price-of-ai-communication/
Nextiva. “50+ Conversational AI Statistics for 2026.” 2026. https://www.nextiva.com/blog/conversational-ai-statistics.html
UNICEF. “Guidance on AI and Children 3.0.” December 2025. https://www.unicef.org/innocenti/media/11991/file/UNICEF-Innocenti-Guidance-on-AI-and-Children-3-2025.pdf
Twilio. “Customer Engagement Report.” 2025. Referenced in SurveyMonkey, “Customer Service Statistics 2026.” https://www.surveymonkey.com/curiosity/customer-service-statistics/
Fortune. “Linguists say ChatGPT is now influencing how humans write and speak.” 30 June 2025. https://fortune.com/2025/06/30/linguists-chatgpt-influencing-how-humans-write-speak/
Journal of Systems Science and Systems Engineering. “Beyond Consumption-Relevant Outcomes: The Role of AI Customer Service Chatbots' Communication Styles in Promoting Societal Welfare.” 2025. https://journal.hep.com.cn/jossase/EN/10.1007/s11518-025-5674-8
Straits Research. “Voice and Speech Recognition Market Size, Share and Forecast to 2033.” 2024. https://straitsresearch.com/report/voice-and-speech-recognition-market
CX Today. “The Algorithm Never Blinks: Why Contact Center AI is Creating a New Kind of Agent Burnout.” 2025. https://www.cxtoday.com/contact-center/the-algorithm-never-blinks-why-contact-center-ai-is-creating-a-new-kind-of-agent-burnout/
Common Sense Media. Referenced in Christian Post, “Advocate warns against teen use of AI companions as study shows heavy use by demographic.” 2025. https://www.christianpost.com/news/72-percent-of-teens-are-using-ai-companions-as-advocates-raise-concern.html
Nikola Roza. “Replika AI: Statistics, Facts and Trends Guide for 2025.” https://nikolaroza.com/replika-ai-statistics-facts-trends/
Ada Lovelace Institute. “Friends for sale: the rise and risks of AI companions.” 2025. https://www.adalovelaceinstitute.org/blog/ai-companions/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Hunter Dansin
Generation after generation,
Vice and virtue breed with one another,
Until hate is easy, and love is maudlin.
And hearts, like flies over muck, do hover.
O that one could sever this sullied past
From we whose hearts are stained and sunk by it.
That which we are told to put first, comes last,
In the order of crude survivalists.
Love is preached and praised, but rarely practiced.
Art is punished unless profitable.
More valued are the words, about them, lisped.
So we cannot bear to leave the bubble.
In your own reflection find your own way
To marry past and present with today.
#poetry #sonnet
Thank you for reading! Sonnets are my way of coping with stress, I guess. Gives me something to think about while my daughter is playing with puzzles at the library, and keeps me from scrolling on my phone. I hope you like it. If I get more I think I will post them here sooner rather than later. What else is a blog for?
Send me a kind word or a cup of coffee:
Buy Me a Coffee | Listen to My Music | Listen to My Podcast | Follow Me on Mastodon | Read With Me on Bookwyrm
from
Roscoe's Story
In Summary: * Time-management is an extremely important skill to employ when setting schedules, goals, etc. We must be careful not to commit to more chores or projects than we can realistically or comfortably handle. With this thought in mind I've declined an invitation to enter a monthly tournament run by one of my correspondence chess clubs. Lord knows I've still got plenty of other games in progress at that club and others. And now that I've begun following this season's MLB games, it's necessary that I cut back on other activities that claim my time and mental focus.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 227.63 lbs. * bp= 140/83 (70)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 07:10 – 1 peanut butter sandwich * 09:15 – mashed potatoes, cole slaw * 10:40 – fried chicken * 12:30 – beef chop suey, fried rice * 14:00 – 1 fresh apple * 16:30 – 1 bean & cheese breakfast taco
Activities, Chores, etc.: * 05:00 – listen to local news talk radio * 06:00 – bank accounts activity monitored * 06:20 – read, pray, follow news reports from various sources, surf the socials, and nap * 12:30 to 13:30 – watch old game shows and eat lunch at home with Sylvia * 14:00 – follow an MLB Spring Training game, Brewers vs. Rangers * 16:50 – tuned into 1200 WOAI, the flagship station for the San Antonio Spurs, well ahead of pregame coverage and then the call of tonight's game vs. the Brooklyn Nets. Go Spurs Go!
Chess: * 18:40 – moved in all pending CC games