Want to join in? Respond to our weekly writing prompts, open to everyone.
from EpicMind
Why do some people rise in organisations while others do not, even though they are at least as competent professionally? I encounter this question regularly: in class, in conversations with leaders, in discussions about career paths. Many implicitly assume that quality prevails in the long run. Whoever delivers the better analyses, thinks more sharply, works more carefully will sooner or later also lead. It is not that simple.
A recently reported MIT study offers an instructive finding here. Across several investigations, one pattern emerged: people who completed structured debate training were more likely to move into leadership roles later on. The decisive mechanism was not subject knowledge but an increase in assertiveness, the ability to communicate clearly, directly and steadfastly. Assertiveness does not mean aggressiveness. It is not about talking others down or acting dominant. It means being able to present one's own position intelligibly, to take in objections and still not back down.
The study thereby makes visible something many know from practice: #leadership emerges in social interactions. Not through perfect concept papers, but in meetings, negotiations, conflict situations. Whoever remains visible in such moments is more likely to be perceived as capable of leading. That does not mean this person is automatically the better leader. But they are more likely to be chosen.
Organisations have to decide whom to entrust with responsibility. These decisions are not based on objective performance data alone. They rest on perception: Who comes across as composed? Who stays calm under pressure? Who can defend a position even against headwind?
The MIT results suggest that precisely these factors play a systematic role. Debate training does not primarily change how people think but how they present themselves in social space. And that presence influences their chances of advancement. The point is clear: it is not enough to have good ideas. You must also be able to hold your ground with them in dialogue.
This is where a point comes in that many find irritating. When I prepare aspiring leaders for the oral communication exam that is part of the SVF certificate, I am regularly asked what this format is for in the first place. The exam consists of a short preparation phase followed by a 15-minute dialogue with two experts who deliberately take the opposing position. So no lecture and no memorisation, but a conversation against headwind.
At first glance this looks like a rhetorical duel. On closer inspection, however, it models a typical leadership situation: you have to develop a position, structure it, defend it, and at the same time listen, respond, stay calm. Exactly the skills, in other words, that the MIT study links to leadership emergence. The exam does not measure knowledge but the ability to remain visible and argumentatively capable under social pressure. That is no coincidence. Leadership does not happen in monologue.
At this point a differentiated reading matters to me. The study shows that assertive communication increases the chances of advancement. It says nothing about whether these people are the most effective leaders in the long run. Here lies a tension. Organisations risk favouring those who present themselves especially forcefully, while reflective, quiet or strongly cooperative personalities receive less attention. Visibility is not the same as quality.
Nor does the oral exam measure "good leadership" in its full breadth. It measures one precondition for being noticed in leadership situations at all. Listening, empathy, strategic thinking and the ability to integrate are not comprehensively tested there. But: whoever cannot clearly defend a position will struggle to bring those other qualities to bear. Visibility is no substitute for leadership; it is an entry ticket.
Against this background I consider the format well chosen. It forces candidates into a realistic interactive situation. It tests steadfastness without disrespect. It demands structure under time pressure. It requires presence. And it confronts them with a fact that holds in working life anyway: leadership means showing where you stand in contentious conversations. Whoever does not train this ability will hardly be able to summon it spontaneously at work.
It is not always the best ideas that rise. Often it is those who can visibly defend their ideas under contradiction. The MIT study provides an empirical basis for this. Leadership emerges in conversation, not in thought alone.
The oral communication exam in the SVF certificate reflects exactly this reality. It tests not simply knowledge but social effectiveness. And it reminds us that in organisations, professional competence without communicative steadfastness is rarely enough.
If you are preparing for such an exam, do not see it as a rhetorical trial of strength. See it as a training ground for visibility. Develop clarity in your argumentation, stay respectful in disagreement and hold your position when headwind comes. Leadership does not begin with power. It begins with not falling silent at the decisive moment.
Image source: Anton Hickel (1745–1798): The House of Commons, National Portrait Gallery, London, public domain.
Disclaimer: Parts of this text were revised with DeepL Write (proofreading and copy-editing). NotebookLM by Google was used for research in the works/sources mentioned and in my notes.
Topic #Erwachsenenbildung | #Coaching
from Lastige Gevallen in de Rede
Rituals around sketched lines
The bell declares the state of emergency
waking in the bed in good time
rising earlier than a bird
doing penance a firmly outlined course
announcing what it matters to you
the deeds cling to commands
all that arrives entitled to settle
all that can judge of dangers
threatening with loose atoms
halving what they must desire to offer
every restriction is tightly guarded
the trading house between them hallowed
the offerings granted to the gap
a day is over and that is that
from Microsoft Dynamics 365 Human Resources
Microsoft Dynamics 365 Human Resources
Managing employee data, payroll processes, performance tracking, and compliance can be complex without the right system. With Microsoft Dynamics 365 Human Resources, businesses can streamline HR operations, improve workforce management, and enhance employee experiences. It provides a centralised platform to manage the entire employee lifecycle efficiently. Dynamics 365 Human Resources helps organisations build a more productive, engaged, and well-managed workforce.
Benefits of Microsoft Dynamics 365 Human Resources:
• Centralised employee data management
• Automated HR workflows
• Leave and attendance management
• Performance tracking and goal setting
• Payroll and benefits administration support
• Compliance and policy management
• Real-time workforce insights and reporting
With better visibility and automation, HR teams can focus more on strategy and employee development instead of manual processes.
Why Choose Nimus Technologies?
At Nimus Technologies, we implement and customise Dynamics 365 HR solutions tailored to your organisational structure and policies. Our focus is on improving efficiency, compliance, and employee engagement.
We provide:
• End-to-end HR system implementation
• Workflow configuration and customisation
• Integration with finance and payroll systems
• Data migration and setup
• User training and onboarding
• Ongoing support and optimisation
We help businesses modernise their HR operations with scalable and secure solutions.
Contact Us:
Ready to simplify and strengthen your HR management?
Nimus Technologies
📧 Email: contact@nimustech.com 🌐 Website: https://nimustech.com/ 📞 Contact: +91 7696006333
Let’s build a smarter human resources system for your organisation. 🚀
from 下川友
In the morning, something felt off about my throat.
Thinking I'd buy some medicine, I head out for the nearby pharmacy.
I was standing on the platform, waiting for a train. Hang on. Was the nearby pharmacy ever somewhere you went by train?
Just as I was thinking that, an announcement came on.
"Rhinos, please do not collide with the train I am driving."
So there are rhinos around here. I had seen on the internet that accidents happen in which animals collide with trains.
Rocked by the train, I suddenly remembered the old days. Back then I wanted to be an actor, never went to a single audition, and believed that if I just kept doing reckless things, the offers would come on their own.
There was a time I believed the acting work wouldn't come until my clothes were stolen at a public bathhouse. An actor with gallant eyebrows had said something like that in an interview or somewhere. When you're young, a single sample somehow seems to carry real force.
Outside the window, the mountains went on and on. "Air is descending from the mountains," the driver announced. What a robot-like driver, I thought, and for some reason warmed to him.
I got off at a station where medicine could be bought and walked through the shopping street. A nail jutted out from the wall of an old building. For some reason, it seemed to be sending a glance my way. I greeted the protruding nail with my eyes alone. Having no friends, I had at some point started communicating with inanimate objects, and that, of course, is something only I know.
When I glanced down an alley, someone had stopped partway up a staircase. A hand on the railing, only the lower half of the body visible. The upper half could not be seen. I had no idea what they were doing, but I hesitated to call out.
Just as I was about to pass by, a foreigner came out of the next building and began shifting the tatami mats on the ground. I hadn't even noticed that tatami had been laid outside in the first place. Even when something this flagrantly strange happens, I sometimes just ignore it. At first the gap is small, but it keeps widening. Not being able to ask "What are you doing?" right then is the reason my life never gets fun.
In front of the greengrocer, the shop girl was sharpening a pencil. Watching the gesture out of the corner of my eye, I remembered being told out of the blue, in a shop with a similar air long ago, "You have come to pull up turnips." At the time I didn't understand, but I simply accepted it and helped with the turnip pulling. Now there is a self inside me that straightens my back and says, "Show a little more initiative, will you." Maybe I have grown a little since then.
... Why did I get off in this town again? Was it to see the mama of a snack bar who once looked after me? No, that's not it. I was going to buy medicine to fix my throat.
Long ago, I was a regular at a certain snack bar. One day, when I went to talk with the mama as usual, a man standing in front of the shop told me out of nowhere, "The mama has already boarded a ship." I never saw her again. More than ten years have passed since then.
Meanwhile, I noticed I was breathing deliberately. When I concentrate on my breathing, I can do nothing else. Even walking becomes unsteady. So I stayed there a while, standing still in a daze.
Ah, a sore throat is only the smallest part of what ails me. More fundamentally, I suffer from a deep illness.
I had come looking for throat medicine, but I gave up on achieving that goal. Anything will do, I thought, and decided simply to walk slowly through the town.
from Dallineation
Yesterday, for Lent Day 8, I posted about an important letter. I didn't title it as the Day 8 entry in the series because I wanted it to be more of a standalone post. But today I'm continuing with Day 9, sharing some thoughts on the Holy Spirit.
I've been reading from a journal I kept while serving as a full-time missionary for the Church of Jesus Christ of Latter-day Saints. I served in the Brazil Santa Maria Mission from December 2000 to December 2002. I went straight to Brazil and spent two months in the São Paulo Missionary Training Center learning the basics of the Portuguese language and learning how to do the work of proselyting and teaching. After two months, I traveled to my assigned mission area in the southernmost state of Brazil.
So far I have read what I wrote about my experiences in the MTC and in my first several months of actual missionary service. It has been fun to revisit those times, but my 19-year-old self was pretty naïve and a bit cringe at times. But I was committed and trying hard to be a good missionary.
One thing I've noticed is that I repeated phrases like “I felt the Spirit so strong” or “the Spirit was so strong” very often. It's very common to hear such expressions in LDS church meetings and classes. We believe the Holy Spirit testifies of truth, but we also tend to associate its presence with positive feelings like happiness, hope, joy, and peace. Likewise, we tend to associate negative feelings like sadness, despair, agitation, and confusion with a lack of the presence of the Spirit. So when we say we are “feeling the Spirit” – or at least when I have written about it, as a missionary and in my life since – it's almost always in the context of those positive feelings.
I am still trying to learn about the Catholic perspective on the Holy Spirit and its role in our lives and in the Church, but it is quite different from the LDS perspective. I think Catholics tend to be more skeptical of feelings and emotions, as it is sometimes difficult to discern their origin. They can be misleading. This is not to say that God cannot send positive feelings and emotions to us through the Holy Spirit, but that those feelings don't necessarily always come from God. And we can be easily manipulated through our feelings.
So I'm trying to reflect on specific experiences I've had in the past where I believed I “felt the Spirit so strong” and think about the context and circumstances surrounding them.
I do believe I have felt the undeniable influence of the Holy Spirit at times throughout my life. The most powerful times have almost always been times when I have focused my thoughts and attention on any aspect of Jesus Christ, such as his birth, his ministry and teachings, his sufferings in the Garden of Gethsemane and on the cross, his resurrection.
Other times, when I think I have “felt the Spirit so strongly”, I think I have been caught up in feelings of unity, fellowship, belonging, love, etc. associated with church meetings.
But I would say that, for me, the majority of the time the Holy Spirit works on me almost indirectly. Quietly “nudging” me. A thought crosses my mind that I should text someone to say hello. Or I feel a brief feeling of reassurance as I am wrestling with my doubts and questions about my faith. I can easily dismiss or ignore those nudges, and I have for long stretches. But the nudges are always there. Always trying to gently turn my head to look at Jesus Christ. Because wherever we are looking is where we will go. And the Holy Spirit wants us to follow Christ.
I want to follow Christ, too. I'm just really stubborn and foolish. And easily distracted. So I really need the Holy Spirit. I'm just trying to understand more about how the Holy Spirit works and better recognize and discern his influence in my life.
Something has been drawing me to seriously investigate Catholicism and I can't explain it. And it's not stopping. My church leaders would certainly tell me that Catholicism is false and that it's not the Holy Spirit that's been nudging me to look into it. But I don't know.
#100DaysToOffload (No. 139) #faith #Lent #Christianity
from Dieselgoth
“Adjustable Clutch Pedal Stop” is surely a favorite listing on a satirical online aftermarket automotive parts store, somewhere... somewhen. At this stage in my life, I have accepted that my utter inability to understand marketing has always been nothing but my own failure, so I'll leave it up to you to decide whether or not ECS Tuning's c u s t o m little peg originated from a genuine automotive need.
Here's most of the product information, prettified:
If you've ever driven a modern day manual transmission VW, you'll quickly notice that there's an abundance of unnecessary leg movement to disengage the clutch. This excessive leg motion creates several problems such as an uncomfortable, non-driver's focused seating position, difficulty in consistently finding the clutch engagement point and leaves room for improvement on faster gear changes.
With the ECS Adjustable Clutch Pedal Stop in place, driving dynamics are dramatically improved! Our unique height adjustable thread-in design allows you to fine tune clutch pedal feel to your preference, improving the connection between your foot and the transmission.
As you push down on the clutch pedal, the clutch disc becomes disengaged from the flywheel, allowing the transmission to become disconnected from the engine. However, there is a point within the clutch pedal travel where the clutch disc becomes disengaged but the pedal keeps going past the point of disengagement. This is called “dead travel” and it leaves the clutch engagement point feeling more like a floating target.
By reducing the amount of travel needed to disengage the clutch, you gain consistency in take-offs and launches by always stopping the clutch pedal at the proper point, just before clutch engagement.
This unnecessary pedal travel is removed and taken up by the height of the clutch pedal stop, helping to lock into place the clutch disengagement point higher up off the floor for more consistent take-offs, faster gear changes and sportier pedal feel.
Part Design
Our in-house Engineering Team carefully spec'd out high-quality parts to give you a robust, adjustable pedal stop that can take the stress and abuse of sporty driving.
Our design includes a polyurethane bumper to absorb shocks while driving aggressively and offers a unique, solid 'thud' at the end of the pedal stroke. The poly bumper won't compress or feel 'sticky' after pedal strokes like other brands will.
Not satisfied with a stack of washers, we set out to design a fully adjustable pedal stop that lets you adjust the height with threads, rather than rubber washers that compress or steel washers that can rattle.
A zinc-alloy nutsert threads into the floor in place of the OEM pedal stop and acts as the anchor for the adjustable pedal stop to thread into. This makes our design a much more rigid stop that will stand up to repeated pedal mashing, giving you a more confident, OE-like feel.
All other hardware is zinc-coated for protection from the environment and long-lasting good looks.
Performance Features
With our Adjustable Clutch Pedal Stop installed, you can dial in the feel of your clutch engagement point higher off the floor. This gives you shorter shift times, more consistent launches and easier driving dynamics.
You can creep and take off from a light or a hill with greater ease with our Adjustable Pedal Stop properly setting the pedal height just below the clutch engagement point.
With less leg movement required to disengage the clutch, you can re-adjust your seating position further back for a more comfortable and confident driver seating position. Many people are forced to sit too close to the steering wheel to disengage the clutch, which can lead to your arms being bent improperly, not allowing you to take proper control of the steering wheel.
Product Development
Our ECS Adjustable Clutch Pedal Stop was designed, engineered and tested by our Research and Development team in our Wadsworth, Ohio facility. We ensured the highest level of precision and quality is delivered throughout rigorous long term product testing and leading edge product development methods. Each unit is etched, assembled and packaged in-house for the highest level of quality assurance.
We tested several prototypes on many vehicles with OE and aftermarket clutches to ensure proper fitment and operation.
We specced out the best selection of parts to fulfill our mission of giving you the absolute BEST Pedal Stop on the market! With premium materials and our unique adjustable design, this part will completely transform your driving experience with improved dynamics!
It honestly improves the driving experience in my opinion.
#hardware
from SmarterArticles
Hiromu Yakura noticed something strange about his own voice. A postdoctoral researcher at the Max Planck Institute for Human Development in Berlin, Yakura studies the intersection of artificial intelligence and human behaviour. But the shift he detected was not in his data; it was in his speech. “I realised I was using 'delve' more,” he told reporters, describing the unsettling moment he caught himself unconsciously parroting the verbal tics of a large language model. Yakura was not alone. His subsequent research, analysing over 360,000 YouTube videos and 771,000 podcast episodes, revealed that academic YouTubers had begun using words favoured by AI chatbots up to 51 per cent more frequently after ChatGPT's November 2022 launch. Words like “delve,” “realm,” “underscore,” and “meticulous” were migrating from machine-generated text into the mouths of actual humans. A cultural feedback loop had been set in motion, and hardly anyone had noticed.
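The measurement behind a finding like Yakura's can be sketched simply: count how often a fixed set of AI-favoured words appears per thousand tokens in transcripts from before and after a cutoff date, then compare the rates. A minimal illustration of that idea (the word list comes from the article above; the sample "transcripts" here are toy strings, not the study's actual data):

```python
import re

# Words the article reports as favoured by LLM output.
AI_FAVOURED = {"delve", "realm", "underscore", "meticulous"}

def rate_per_thousand(text: str, vocab: set) -> float:
    """Occurrences of `vocab` words per 1,000 tokens of `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in vocab)
    return 1000.0 * hits / len(tokens)

# Toy stand-ins for pre- and post-ChatGPT transcript corpora.
pre = "we look into the data and check the results carefully"
post = "we delve into the realm of data and underscore the meticulous results"

print(rate_per_thousand(pre, AI_FAVOURED))   # 0.0
print(rate_per_thousand(post, AI_FAVOURED))  # higher rate after the cutoff
```

A real analysis at the scale described (360,000 videos, 771,000 podcast episodes) would of course also need transcription, per-channel baselines, and statistical controls; the point of the sketch is only that the underlying signal is a simple frequency shift.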
This quiet linguistic contamination is just one symptom of a much broader transformation. Across industries, conversational AI has become the front line of customer interaction. Chatbots handle banking queries, voice assistants schedule medical appointments, and algorithmic agents negotiate insurance claims. The global AI customer service market, valued at $12.06 billion in 2024, is projected to reach $47.82 billion by 2030, according to industry analysts. Gartner has predicted that conversational AI deployments within contact centres will reduce agent labour costs by $80 billion in 2026, with approximately 17 million contact centre agents worldwide facing a fundamental reshaping of their roles. Bank of America's virtual assistant Erica has surpassed 3 billion client interactions since its 2018 launch, serving nearly 50 million users with an average response time of 44 seconds. The two million daily consumer interactions with Erica alone save the bank the equivalent of 11,000 employees' daily work. The efficiency gains are staggering, the convenience undeniable.
But as these systems grow more sophisticated, more emotionally responsive, and more deeply woven into the fabric of daily communication, a disquieting question presents itself. What happens to us, the humans on the other end of the line? If we spend our days talking to machines that never lose their patience, never misunderstand our tone, and never push back with the messy friction of genuine feeling, do we slowly lose the capacity to navigate the unpredictable terrain of real human conversation? The evidence is beginning to suggest that we might.
The appeal of conversational AI is rooted in something profoundly human: a desire to be understood quickly and without complication. When you call your bank and a voice assistant resolves your problem in under a minute, there is an undeniable satisfaction in the transaction. No hold music, no awkward small talk, no navigating the emotional state of a tired customer service representative at the end of a long shift. The interaction is clean, efficient, and entirely on your terms.
This is by design. The conversational AI industry has been engineered to minimise friction. McKinsey reports that 78 per cent of companies have now integrated conversational AI into at least one key operational area. A 2025 Nextiva analysis found that 57 per cent of businesses are either using self-service chatbots or plan to do so imminently. By 2027, Gartner projects, 25 per cent of organisations will use chatbots as their primary customer service channel. The technology is no longer experimental; it is infrastructural. And the economic incentives are overwhelming: companies report average returns of $3.50 for every dollar invested in AI customer service, with leading organisations achieving returns as high as eight times their investment.
Yet friction, as any psychologist will tell you, is precisely what builds social muscle. The small moments of discomfort in human interaction, the pauses, the misunderstandings, the need to read another person's expression and adjust your approach, these are the crucibles in which empathy is forged. Sherry Turkle, the Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology at MIT, has spent decades studying how technology shapes human relationships. Her warning is direct: “What do we forget when we talk to machines? We forget what is special about being human.”
Turkle's concern is not that AI is inherently destructive, but that its seductive convenience trains us to avoid the very interactions that make us more fully human. In her research, she describes social media as a “gateway drug” to conversations with machines, arguing that the emotional scaffolding we once built through difficult, imperfect human dialogue is now being outsourced to algorithms that mirror our sentiments without ever genuinely understanding them. “AI offers the illusion of intimacy without the demands,” she has written. She challenges us to consider whether machines truly grasp empathy, or whether we are merely being “remembered” without being genuinely “heard.” The result is a kind of emotional atrophy; we become fluent in transactional exchange but increasingly clumsy at the real thing. The pushback and resistance of genuine human relationships, Turkle argues, are not obstacles to connection. They are the mechanism through which understanding and growth are forged.
The neurological implications of this shift are only beginning to come into focus. In a landmark 2025 paper published in the journal Neuron, Professor Benjamin Becker of the University of Hong Kong's Department of Psychology laid out a framework for understanding how interactions with AI might physically alter the social circuitry of the human brain. Becker's analysis, drawing on a meta-analysis of 1,302 functional MRI studies encompassing 47,083 activations, identified the “social brain” networks that enable rapid understanding and affiliation in interpersonal interactions. These are evolutionarily shaped circuits, refined over millennia of face-to-face human contact. They allow us to read facial expressions, interpret vocal tone, predict others' intentions, and calibrate our own behaviour in real time.
The problem, Becker argues, is that humans are hardwired to anthropomorphise. We instinctively attribute personality, feelings, and intentions to AI agents, a tendency psychologists call the “ELIZA effect,” named after a rudimentary 1960s chatbot that users nonetheless treated as a genuine therapist. The classic Heider and Simmel experiment demonstrated this tendency decades ago: humans intuitively interpret behaviour and motives even in simple moving geometric shapes. With AI agents that can modulate their voice, recall personal details, and respond with apparent emotional sensitivity, the anthropomorphic pull becomes far more powerful. As conversational AI becomes more advanced and personalised, Becker warns, these interactions will “increasingly engage neural mechanisms more deeply and may even change how brains function in social contexts.”
“Understanding how our social brain shapes interactions with AI and how AI interactions shape our social brains will be key to making sure these technologies support us, not harm us,” Becker stated. The implications are especially significant for young people, whose neural pathways for social cognition are still developing. If children and adolescents are forming their primary conversational habits with AI rather than with peers, parents, and teachers, the social brain may develop along fundamentally different lines than those of previous generations.
This is not merely theoretical. Research from Harvard's Graduate School of Education, led by Dr. Ying Xu, has examined how children interact differently with AI compared to humans. The findings are nuanced but concerning. While children can learn effectively from AI designed with pedagogical principles (improving vocabulary and comprehension through interactive dialogue), they consistently engage less deeply with AI than with human conversational partners. When speaking with a person, children are more likely to steer the conversation, ask follow-up questions, and share their own thoughts. With AI, they tend to become passive recipients, answering questions with less effort, particularly in complex exchanges that require genuine back-and-forth discussion.
The implication is clear: AI may teach children facts, but it struggles to teach them how to be present in a conversation. And that presence, that willingness to lean into the discomfort of not knowing what someone else will say next, is the foundation of social competence.
Perhaps the most counterintuitive finding in recent AI research is this: the more people talk to chatbots, the lonelier they tend to feel. In early 2025, OpenAI and the MIT Media Lab published the results of a landmark study, a four-week randomised controlled experiment involving 981 participants who exchanged over 300,000 messages with ChatGPT. The researchers tested three interaction modes (text, neutral voice, and engaging voice) across three conversation types (open-ended, non-personal, and personal).
The headline finding was stark. “Overall, higher daily usage, across all modalities and conversation types, correlated with higher loneliness, dependence, and problematic use, and lower socialisation,” the researchers reported. Voice-based chatbots initially appeared to mitigate loneliness compared to text-based interactions, but these advantages disappeared at high usage levels, especially with a neutral-voice chatbot. Participants who trusted and “bonded” with ChatGPT more were likelier than others to be lonely and to rely on the chatbot further, creating a self-reinforcing cycle of dependency.
The study also revealed gender-specific effects. After four weeks of chatbot use, female participants were slightly less likely to socialise with other people than their male counterparts. Participants who interacted with ChatGPT's voice mode using a gender different from their own reported significantly higher levels of loneliness and greater emotional dependency on the chatbot. The researchers noted that people with a stronger tendency for attachment in relationships and those who viewed the AI as a friend were more likely to experience negative effects. Personal conversations, which included more emotional expression from both user and model, were associated with higher levels of loneliness but, intriguingly, lower emotional dependence at moderate usage levels.
Parallel to the controlled study, OpenAI and MIT analysed real-world data from close to 40 million ChatGPT interactions and surveyed 4,076 of those users. They found that emotional engagement with ChatGPT remains relatively rare in overall usage, but that the subset of users who do form emotional connections tend to be the platform's heaviest users, and the loneliest.
The Brookings Institution, in a July 2025 analysis by Rebecca Winthrop and Isabelle Hau, framed this as a defining paradox of our era: “We are living through a paradox: humans are wired to connect, yet we've never been more isolated. At the same time, AI is growing more responsive, conversational, and emotionally attuned, and we are increasingly turning to machines for what we're not getting from each other: companionship.” They noted that AI companions like Replika.ai, Character.ai, and China's Xiaoice now count hundreds of millions of emotionally invested users, with some estimates suggesting the total may already exceed one billion.
The scale of emotional investment in AI companions has become impossible to ignore. Replika, one of the most prominent AI companion platforms, claims approximately 25 million users, with over 85 per cent reporting that they have developed emotional connections with their digital companion. The average user exchanges roughly 70 messages per day with their Replika. Character.AI users average 93 minutes per day on the platform, 18 minutes longer than the average TikTok session, while heavy Replika users report engagement of 2.7 hours daily, with extreme cases exceeding 12 hours.
A nationally representative survey of 1,060 teenagers conducted in spring 2025 found that 72 per cent of those aged 13 to 17 are already using AI companions, with roughly half using them at least a few times per month. About a third of teens reported using the technology for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice. Perhaps most tellingly, around a third of teenagers using AI companions said they find conversations with these systems as satisfying, or more satisfying, than conversations with real-life friends.
The data on well-being is less comforting. Among 387 research participants in one study, “the more a participant felt socially supported by AI, the lower their feeling of support was from close friends and family.” Ninety per cent of the 1,006 American students using Replika who were surveyed for a separate study reported experiencing loneliness, significantly higher than the comparable national average of 53 per cent. Common Sense Media has recommended that no one under 18 should use AI companions like Character.AI or Replika until more safeguards are in place to “eliminate relational manipulation and emotional dependency risks.”
The regulatory landscape is beginning to respond. In September 2025, the California legislature passed a bill requiring AI platforms to clearly notify users under 18 when they are interacting with a bot. That same week, the Federal Trade Commission opened a broad inquiry into seven major firms, including OpenAI, Meta, Snap, Google, and Character Technologies, examining the potential for emotional manipulation and dependency. These are early steps, but they signal a growing recognition that the companion economy is not merely a consumer trend; it is a public health concern.
The social consequences of AI-mediated communication extend beyond individual loneliness into the texture of everyday human interaction. At Cornell University, research scientist Jess Hohenstein led a series of experiments investigating what happens when people suspect their conversational partner is using AI assistance. The results, published in Scientific Reports under the title “Artificial Intelligence in Communication Impacts Language and Social Relationships,” revealed a troubling dynamic.
When participants believed their partner was using AI-generated smart replies, they rated that partner as less cooperative, less affiliative, and more dominant, regardless of whether the partner was actually using AI. The mere suspicion of algorithmic assistance was enough to erode trust and social warmth. “I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you're using AI to help you compose text, regardless of whether you actually are,” Hohenstein noted.
The study also found that actual use of smart replies increased communication efficiency and positive emotional language. But this improvement came at a cost: “While AI might be able to help you write, it's altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you're sacrificing some of your own personal voice,” Hohenstein observed.
Malte Jung, associate professor of information science at Cornell and a co-author on the study, drew a broader conclusion: “What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people's interactions, language and perceptions of each other.”
This finding raises uncomfortable questions about authenticity in an age of AI-assisted communication. If AI makes our messages more efficient and more positive but less recognisably our own, are we gaining convenience at the expense of genuine connection? And if the mere suspicion of AI involvement poisons the well of trust, what happens as AI becomes ubiquitous in workplace communication, dating apps, and even family group chats?
The Max Planck Institute research that caught Hiromu Yakura by surprise points to an even more fundamental concern: AI is not just changing how we communicate with machines; it is changing how we communicate with each other. The study identified twenty-one words that serve as clear markers of AI's linguistic influence. Terms favoured by large language models, “delve,” “realm,” “underscore,” “meticulous,” and others, were appearing with dramatically increased frequency in human speech, not just in written text but in spontaneous spoken communication. These patterns appeared even in the 58 per cent of videos that showed no signs of scripted speech, suggesting that their adoption extends beyond prepared remarks into genuinely extemporaneous conversation.
Levin Brinkmann, a co-author of the study at the Max Planck Institute, described the mechanism at work: “The patterns that are stored in AI technology seem to be transmitting back to the human mind.” The researchers characterised this as a “cultural feedback loop.” Humans train AI on their language; AI processes and statistically remixes that language; humans then unconsciously adopt the AI's patterns. The loop narrows with each iteration, potentially reducing linguistic diversity on a global scale. If AI systems trained primarily on English-language content begin to influence communication patterns worldwide, we might see a homogenisation of human expression that transcends national and cultural boundaries.
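The kind of frequency analysis the Max Planck researchers describe can be sketched in a few lines: count how often AI-associated marker words occur per million tokens in a corpus, so usage before and after a model's release can be compared. The word list and example sentences below are illustrative only, not the study's actual data or full marker set.

```python
import re
from collections import Counter

# A partial, illustrative set of marker words the article mentions.
AI_MARKERS = {"delve", "realm", "underscore", "meticulous"}

def marker_rate_per_million(text: str) -> float:
    """Occurrences of marker words per million tokens in the given text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[w] for w in AI_MARKERS)
    return hits / len(tokens) * 1_000_000

# Hypothetical before/after snippets to show the comparison.
pre = "we looked into the data and stressed the careful finding"
post = "we delve into the data and underscore the meticulous finding"

print(marker_rate_per_million(pre))
print(marker_rate_per_million(post))
```

Applied to dated transcript corpora, a rising rate for these markers after a model's release would be the signature of the feedback loop Brinkmann describes.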
The concern extends beyond vocabulary. An analysis published by IE Insights in April 2025 argued that AI-driven platforms are “subtly teaching people to speak and think like machines, efficient, clear, emotionally detached.” The article warned that interactions are “increasingly optimised for clarity and brevity, but stripped of emotional depth, cultural nuance, and spontaneity that define authentic human connection.” It described a world in which “we are training machines to sound more human while simultaneously training ourselves to sound more like machines.” The impact, the analysis argued, is particularly dangerous in high-stakes environments where human nuance and emotional intelligence matter most: diplomacy, crisis negotiation, healthcare, and community care.
Emily Bender, a prominent linguist at the University of Washington, has observed that even people who do not personally use AI chatbots are not immune to this influence. The sheer volume of synthetic text now circulating online, in articles, emails, social media posts, and automated responses, makes it nearly impossible to avoid absorbing AI-inflected language patterns. The homogenisation is insidious precisely because it is invisible.
The American public appears to intuit, even if it cannot fully articulate, the social risks posed by AI. A Pew Research Center survey of 5,023 U.S. adults conducted in June 2025 found that 50 per cent of Americans say they are more concerned than excited about the increased use of AI in daily life, up from 37 per cent in 2021. Only 10 per cent reported being more excited than concerned, while 38 per cent felt equally excited and concerned. More than half (57 per cent) rated the societal risks of AI as high, compared with just 25 per cent who said the benefits are high.
The data on social relationships is particularly striking. Half of respondents (50 per cent) said they believe AI will make people's ability to form meaningful relationships worse. The public fears the loss of human connection more than AI experts do: 57 per cent of U.S. adults expressed extreme or high concern about AI leading to less connection between people, versus only 37 per cent of surveyed experts. This 20-point gap between public anxiety and expert reassurance is itself revealing. It suggests either that everyday citizens are perceiving something that specialists are overlooking, or that proximity to AI development generates a form of optimism bias.
The generational divide is especially revealing. Among adults under 30, the cohort most likely to use AI regularly, 58 per cent believe AI will worsen people's ability to form meaningful relationships, and 61 per cent believe it will make people worse at thinking creatively. This is markedly higher than the roughly 40 per cent of those aged 65 and older who share those views. The generation most fluent in AI is also the generation most anxious about what it might cost them.
Two-thirds of respondents (66 per cent) said AI should not judge whether two people could fall in love, and 73 per cent said AI should play no role in advising people about their faith. These are not merely policy preferences; they are boundary markers, lines drawn around the domains of human experience that people consider too sacred, too intimate, or too complex for algorithmic mediation.
The workplace effects of conversational AI adoption are already visible in the customer service industry itself. As chatbots handle an ever-larger share of routine interactions, the calls that do reach human agents are increasingly complex, emotionally charged, and difficult to resolve. This creates a cascading paradox: the agents who remain employed need greater social skills than ever, even as the broader population is getting less practice at the kind of difficult conversations these agents must navigate daily.
Recent industry data illustrates the toll. According to one analysis, 87 per cent of contact centre agents report high stress levels, and over 50 per cent face daily burnout, sleep issues, and emotional exhaustion. The automation of simple queries means agents now spend a disproportionate share of their working hours handling angry customers, technical problems that defy standard solutions, and emotionally charged conversations demanding empathy and judgement. More than 68 per cent of agents receive calls at least weekly that their training did not prepare them to handle.
A 2025 CX-focused study found that 79 per cent of Americans strongly prefer interacting with a human over an AI agent, and a Twilio report from the same year revealed that 78 per cent of consumers consider it important to be able to switch from an AI agent to a human one. Meanwhile, a Kinsta report found that 50 per cent of consumers would cancel a service if it were solely AI-driven. The message from customers is clear: they want efficiency, but not at the price of human presence.
The tension between economic incentive and human need creates a troubling dynamic. The global chatbot market, valued at roughly $15.6 billion in 2024, is expected to nearly triple to $46.6 billion by 2029. Every interaction that moves from human to machine represents a small reduction in the total volume of genuine interpersonal exchange in society. Multiply this across billions of interactions per year, and the cumulative effect on collective social skills becomes a legitimate concern.
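A back-of-envelope check of the market figures quoted above: growing from roughly $15.6 billion in 2024 to $46.6 billion by 2029 is almost exactly a threefold increase, which implies a compound annual growth rate of around 24 per cent.

```python
# Verify the "nearly triple" claim and derive the implied CAGR.
start, end, years = 15.6, 46.6, 5  # USD billions, 2024 -> 2029

multiple = end / start                       # overall growth multiple
cagr = (end / start) ** (1 / years) - 1      # compound annual growth rate

print(f"{multiple:.2f}x over {years} years, CAGR = {cagr:.1%}")
```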
The stakes are highest for the youngest members of society. UNICEF's December 2025 guidance on AI and children, now in its third edition, acknowledged that large language models are becoming “deeply embedded in daily life as conversational agents, evolving into companions for emotional support and social interaction.” The guidance flagged this trend as “particularly pronounced among children and adolescents, a demographic prone to forming parasocial relationships with AI chatbots.” It warned that youth are “uniquely vulnerable to manipulation due to neurodevelopmental changes.”
Research on joint media engagement, studying what happens when parents are present during children's AI interactions, offers a partial counterweight. When caregivers scaffold AI interactions, helping children process what they are hearing, encouraging them to question and respond actively, the developmental risks appear to diminish. But this requires time, attention, and digital literacy that not all families possess in equal measure.
The Harvard research from Dr. Ying Xu highlights a critical distinction: children who engage in interactive dialogue with AI can comprehend stories better and learn more vocabulary compared to passive listeners, and in some cases, learning gains from AI were even comparable to those from human interactions. But learning facts and developing social-emotional intelligence are fundamentally different processes. AI can drill vocabulary; it cannot model the subtle art of reading a room, sensing another person's discomfort, or knowing when to stay silent. The risk is not that children will stop learning. The risk is that they will learn everything except how to be with other people.
The picture that emerges from the research is neither straightforwardly dystopian nor naively optimistic. It is, instead, deeply complicated. Conversational AI offers genuine benefits: accessibility for people with disabilities, support for those experiencing isolation, efficiency in service delivery, and learning tools that can supplement (though not replace) human instruction. Stanford researchers found that while young adults using the AI chatbot Replika reported high levels of loneliness, many also felt emotionally supported by it, with 3 per cent crediting the chatbot for temporarily halting suicidal thoughts. The question is not whether to use these technologies, but how to use them without surrendering the skills that make us most distinctively human.
A 2025 study published in the Journal of Systems Science and Systems Engineering offers an instructive finding. Across two scenario studies and one laboratory experiment, researchers found that consumers exhibited higher prosocial intentions after interacting with socially oriented AI chatbots (those designed to build rapport and engage emotionally) compared to task-oriented ones (those focused purely on efficiency). The study revealed that social presence and empathy mediated this effect, suggesting that the design of AI systems meaningfully shapes their social consequences. This is not a trivial insight. It means that the choices made by engineers, product managers, and policymakers about how AI communicates will have ripple effects across the social fabric.
Professor Becker's neuroscience framework points in the same direction. The social brain is not fixed; it is plastic, shaped by the interactions it encounters. If those interactions are predominantly with machines that reward brevity and compliance, the brain will adapt accordingly. But if AI systems are designed to encourage, rather than replace, genuine human engagement, the technology could serve as a bridge rather than a barrier.
The Brookings Institution's Rebecca Winthrop and Isabelle Hau offered perhaps the most pointed formulation: the age of AI must not become “the age of emotional outsourcing.” The restoration of real human connection requires not a rejection of technology, but a deliberate, society-wide commitment to preserving the spaces, skills, and habits that sustain authentic relationships.
Sherry Turkle has described her decades of research as “not anti-technology, but pro-conversation.” That framing captures what is most urgently needed now. The rapid adoption of conversational AI in customer service, healthcare, education, and personal companionship is not inherently destructive. But it is proceeding at a pace that far outstrips our collective understanding of its social consequences.
The evidence assembled here, from neuroscience laboratories in Hong Kong to linguistics studies in Berlin, from controlled experiments at MIT to population surveys by Pew Research, converges on a single uncomfortable truth: the more seamlessly machines learn to talk like us, the greater the risk that we forget how to talk to each other. Not efficiently, not optimally, not in the polished cadence of a well-trained language model, but in the halting, imperfect, gloriously messy way that humans have always communicated. With pauses. With misunderstandings. With the kind of friction that, it turns out, is not a bug in the system of human connection. It is the entire point.
The voice recognition systems now achieving 95 per cent accuracy under ideal conditions and processing billions of interactions daily are marvels of engineering. The global voice and speech recognition market, valued at $14.8 billion in 2024, is projected to reach $61.27 billion by 2033. But accuracy in speech recognition is not the same as accuracy in human understanding. As we optimise our AI systems to hear every word, we might ask whether we are simultaneously losing our capacity to listen, truly listen, to one another.
The conversation about conversational AI has barely begun. It needs to move beyond the boardroom metrics of cost savings and efficiency gains, beyond the engineering challenges of word error rates and natural language processing, and into the deeper territory of what kind of society we are building when the first voice many of us hear each morning, and the last one we hear at night, belongs not to another human being but to a machine that has learned, with remarkable precision, to sound like one.
Yakura, Hiromu; Brinkmann, Levin; et al. “Empirical evidence of Large Language Model's influence on human spoken communication.” Max Planck Institute for Human Development. arXiv:2409.01754. 2024. https://arxiv.org/html/2409.01754v1
Gartner, Inc. “Gartner Predicts Conversational AI Will Reduce Contact Center Agent Labor Costs by $80 Billion in 2026.” Press release, 31 August 2022. https://www.gartner.com/en/newsroom/press-releases/2022-08-31-gartner-predicts-conversational-ai-will-reduce-contac
Bank of America. “A Decade of AI Innovation: BofA's Virtual Assistant Erica Surpasses 3 Billion Client Interactions.” Press release, August 2025. https://newsroom.bankofamerica.com/content/newsroom/press-releases/2025/08/a-decade-of-ai-innovation--bofa-s-virtual-assistant-erica-surpas.html
Turkle, Sherry. “Reclaiming Conversation in the Age of AI.” After Babel. 2024. https://www.afterbabel.com/p/reclaiming-conversation-age-of-ai
Turkle, Sherry. NPR interview on the psychological impacts of bot relationships. 2 August 2024. https://www.npr.org/2024/08/02/g-s1-14793/mit-sociologist-sherry-turkle-on-the-psychological-impacts-of-bot-relationships
Becker, Benjamin. “Will our social brain inherently shape, and be shaped by, interactions with AI?” Neuron 113: 2037-2041. 2025. DOI: 10.1016/j.neuron.2025.04.034. https://www.cell.com/neuron/abstract/S0896-6273(25)00346-0
Xu, Ying. “AI's Impact on Children's Social and Cognitive Development.” Harvard Graduate School of Education and Children and Screens. 2024. https://www.gse.harvard.edu/ideas/edcast/24/10/impact-ai-childrens-development
OpenAI and MIT Media Lab. “How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Randomized Controlled Study.” March 2025. https://arxiv.org/html/2503.17473v2
OpenAI. “Early methods for studying affective use and emotional well-being on ChatGPT.” March 2025. https://openai.com/index/affective-use-study/
Hohenstein, Jess; Jung, Malte; and Kizilcec, Rene. “Artificial Intelligence in Communication Impacts Language and Social Relationships.” Scientific Reports. April 2023. https://news.cornell.edu/stories/2023/04/study-uncovers-social-cost-using-ai-conversations
Pew Research Center. “How Americans View AI and Its Impact on Human Abilities, Society.” Survey of 5,023 U.S. adults, June 2025. Published 17 September 2025. https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/
Winthrop, Rebecca and Hau, Isabelle. “What happens when AI chatbots replace real human connection.” Brookings Institution. July 2025. https://www.brookings.edu/articles/what-happens-when-ai-chatbots-replace-real-human-connection/
IE Insights. “The Social Price of AI Communication.” IE University. April 2025. https://www.ie.edu/insights/articles/the-social-price-of-ai-communication/
Nextiva. “50+ Conversational AI Statistics for 2026.” 2026. https://www.nextiva.com/blog/conversational-ai-statistics.html
UNICEF. “Guidance on AI and Children 3.0.” December 2025. https://www.unicef.org/innocenti/media/11991/file/UNICEF-Innocenti-Guidance-on-AI-and-Children-3-2025.pdf
Twilio. “Customer Engagement Report.” 2025. Referenced in SurveyMonkey, “Customer Service Statistics 2026.” https://www.surveymonkey.com/curiosity/customer-service-statistics/
Fortune. “Linguists say ChatGPT is now influencing how humans write and speak.” 30 June 2025. https://fortune.com/2025/06/30/linguists-chatgpt-influencing-how-humans-write-speak/
Journal of Systems Science and Systems Engineering. “Beyond Consumption-Relevant Outcomes: The Role of AI Customer Service Chatbots' Communication Styles in Promoting Societal Welfare.” 2025. https://journal.hep.com.cn/jossase/EN/10.1007/s11518-025-5674-8
Straits Research. “Voice and Speech Recognition Market Size, Share and Forecast to 2033.” 2024. https://straitsresearch.com/report/voice-and-speech-recognition-market
CX Today. “The Algorithm Never Blinks: Why Contact Center AI is Creating a New Kind of Agent Burnout.” 2025. https://www.cxtoday.com/contact-center/the-algorithm-never-blinks-why-contact-center-ai-is-creating-a-new-kind-of-agent-burnout/
Common Sense Media. Referenced in Christian Post, “Advocate warns against teen use of AI companions as study shows heavy use by demographic.” 2025. https://www.christianpost.com/news/72-percent-of-teens-are-using-ai-companions-as-advocates-raise-concern.html
Nikola Roza. “Replika AI: Statistics, Facts and Trends Guide for 2025.” https://nikolaroza.com/replika-ai-statistics-facts-trends/
Ada Lovelace Institute. “Friends for sale: the rise and risks of AI companions.” 2025. https://www.adalovelaceinstitute.org/blog/ai-companions/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Hunter Dansin
Generation after generation,
Vice and virtue breed with one another,
Until hate is easy, and love is maudlin.
And hearts, like flies over muck, do hover.
O that one could sever this sullied past
From we whose hearts are stained and sunk by it.
That which we are told to put first, comes last,
In the order of crude survivalists.
Love is preached and praised, but rarely practiced.
Art is punished unless profitable.
More valued are the words, about them, lisped.
So we cannot bear to leave the bubble.
In your own reflection find your own way
To marry past and present with today.
#poetry #sonnet
Thank you for reading! Sonnets are my way of coping with stress, I guess. Gives me something to think about while my daughter is playing with puzzles at the library, and keeps me from scrolling on my phone. I hope you like it. If I get more I think I will post them here sooner rather than later. What else is a blog for?
Send me a kind word or a cup of coffee:
Buy Me a Coffee | Listen to My Music | Listen to My Podcast | Follow Me on Mastodon | Read With Me on Bookwyrm
from
Roscoe's Story
In Summary: * Time-management is an extremely important skill to employ when setting schedules, goals, etc. We must be careful not to commit to more chores or projects than we can realistically or comfortably handle. With this thought in mind I've declined an invitation to enter a monthly tournament run by one of my correspondence chess clubs. Lord knows I've still got plenty of other games in progress at that club and others. And now that I've begun following this season's MLB games, it's necessary that I cut back on other activities that claim my time and mental focus.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 227.63 lbs. * bp= 140/83 (70)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 07:10 – 1 peanut butter sandwich * 09:15 – mashed potatoes, cole slaw * 10:40 – fried chicken * 12:30 – beef chop suey, fried rice * 14:00 – 1 fresh apple * 16:30 – 1 bean & cheese breakfast taco
Activities, Chores, etc.: * 05:00 – listen to local news talk radio * 06:00 – bank accounts activity monitored * 06:20 – read, pray, follow news reports from various sources, surf the socials, and nap * 12:30 to 13:30 – watch old game shows and eat lunch at home with Sylvia * 14:00 – follow an MLB Spring Training game, Brewers vs. Rangers * 16:50 – tuned into 1200 WOAI, the flagship station for the San Antonio Spurs, well ahead of pregame coverage and then the call of tonight's game vs. the Brooklyn Nets. Go Spurs Go!
Chess: * 18:40 – moved in all pending CC games