Want to join in? Respond to our weekly writing prompts, open to everyone.
from Atmósferas
Just as it sounds, unaltered. In its strike, in the ripples it makes, just as it is: simple perfection.
Letting the mind fall, rest. Simply what happens.
Another drop, and another. The mountain, the house, the origin: the light that kindles form, and your thoughts, drops as well.
In its strike, another drop.
Altering nothing, rest.

I started The Catechetic Converter a year ago. And I feel an obligation to write something in honor of that milestone, celebrating the fact that I’ve had my own website for a full year.
The post that has the most views is my first one, which is about Linux. And Linux and this site have an intertwined relationship in that my switch to Linux helped inspire me to get away from “Big Tech” in other ways. In fact, it was (I think) a post on Westenberg that inspired me to start the blog. The idea of having my own little corner of the web, not mediated by some corporation—and not built to make money—was appealing. Just a place to stick my random thoughts and ideas, putting them “out there” in the ether to see where they land, what they inspire… I loved that.
And Linux and this post are kind of intertwined because I have spent the past week staying up far too late most nights (in violation of one of my Lenten disciplines to go to bed by 10:30) trying to get a custom firmware to run on an MP3 player.
See, continuing the spirit of moving away from Big Tech, I have started disentangling myself from having everything on a phone. I asked for a Sony Cyber-shot F707 digital camera for Christmas (a model I had when it was brand new and stupidly donated to Goodwill or whatever several years back). And then I bought an Innioasis Y1, which hearkens back to iPods of yore (with a click wheel and everything). This is a device that folks like to tinker with as well, and I learned that people had managed to get a bespoke firmware known as Rockbox (originally developed for early Archos players and later famous on old iPods) to run on the device, improving its functionality in numerous ways. So I planned to do this.
After what seemed like months, the device finally arrived. The standard, out-of-the-box firmware was fine, if a little rough. But the filing system for finding my music was wanting and so I decided to give RockBox a try because it offers more refinements in this area (plus a TON of fun custom themes for the device). Doing this requires downloading and running a program known as the Innioasis Updater, which was developed primarily for Windows and Mac but also includes a Linux version that is overtly said to be “unofficial” with warnings that I would be “on my own” with this. I got the sense that this would be a challenge.
I’ll spare you too many details, but I had to download another tool called MTKClient, which is written in Python, and run a ton of terminal commands to get it working. It didn’t help that the installation guides were written using LLMs and I needed to switch back and forth between two of them to get all the necessary steps right (the “official” one on GitHub failed to note the need to change directories in a couple of key places). I wound up needing certain drivers and having to write custom scripts. At one point I managed to accidentally wipe out my terminal configuration thanks to forgetting to prepend “eval” to a command. Then I also managed to lock my machine (in this case a 2011-era Mac running Linux Mint) into constantly trying to download Android Platform Tools from a broken mirror of a repository, which taught me a whole range of new commands for fully purging a faulty download. After successfully installing both programs, I found that the updater would not properly read my device.
I attempted to install the program using my wife’s Windows 11 laptop and was reminded why I’ve spent over twenty years hating Microsoft.
But GitHub forums came to the rescue (where I also learned that the updater was vibe-coded using LLMs, which probably explains a lot) and I got Rockbox to run on the device. It now has a theme that looks like it belongs on the first Macintosh (because even though we might have broken up, I still carry parts of her that are now parts of me). I’m listening to Maggie Rogers on the device as I write this, right after I celebrated this accomplishment with the Geto Boys’ “Damn It Feels Good To Be A Gangsta.”
***
This morning at my parish’s Bible Study, one of my parishioners noted that I tend to have a lot of information and ephemera in my head about any number of things related to Christianity. “I tend to think of things more simply,” he said.
I told him that I also value simplicity, but I come to that simplicity through learning and accumulating knowledge about what I’m doing and believing. For me it’s like a bell curve by way of zen. What I mean is that the zen monk may take a guy out of the mud, put him on the stool for years and teach him koans and sutras, get him to the verge of enlightenment only to then throw him back in the mud because the guy needs to learn that enlightenment can be found in the mud. In other words, I like taking things apart just to get back to where I started because I now understand that start so much better.
Tinkering and futzing with my computer speaks to this because it helps me to consider the complexity behind simple things. Like right now I’m putting letters together into words on a document. But there is an astounding number of calculations taking place to make this happen. The words you read on your screen are the result of carefully managed electrical currents running on circuit boards and through cables connected to liquid crystal fields that display what you’re seeing. And there is also an incomprehensible number of electrical charges going on among the synapses in your brain to not only cause you to see these words, but to interpret them as things that cause you to feel things and think other things.
In the Bible Study we looked at Jesus’ discussion with Nicodemus in the third chapter of John’s gospel. In that passage Jesus says that everyone born of the Spirit is like the wind (in the Greek language of John’s gospel there is a triple meaning of Spirit/wind/breath that Jesus is playing with here). The wind connects. The wind moves. This speaks to the complex connections between things, connections made by God. Connections where God can be found. And like the wind, once God shows up you know it.
***
I’m not really sure where I’m going with this. I guess I’m just on the other side of the bell curve, back where I started. Putting out thoughts and words into the ether to see where they land. To see what they inspire. Which is another wind-related word, by the way. I’m writing this on a miraculous piece of technology and you’re reading it on one equally so. In between us is a dense web of complexity and connection—including both electricity and wind. We can take a look at that complexity, investigate it, see how it runs. But in the end we come back to where we began:
A writer and a reader, brought together by some wind. A wind holy and mysterious.
***
The Rev. Charles Browning II is the rector of Saint Mary’s Episcopal Church in Honolulu, Hawai’i. He is a husband, father, surfer, and frequent over-thinker. Follow him on Mastodon and Pixelfed.
#Linux #Christianity #Thought #Random #Jesus #Technology
Young people today can’t even jump over a giraffe. At your age, I used a long pole I made from oak wood and managed to clear three giraffes in ten seconds. My friend Edgard got as far as jumping five giraffes, timed. You can’t even hop over a toad, and if one comes near you, you get scared.
I’m telling you this because you’ve caught hold of a sorrow and you won’t come out of it. Stuck there like a screw. Son, we boys of the old days would pick up ten sorrows a week and go jumping them one by one until we left them behind. I once managed to clear six sorrows in a single day. Your uncle Matracas jumped eight. And you see one and get stuck.
At your age, we boys in the village would lift our spirits by throwing stones at the people going down the rapids of the Zot river in their canoes. I once tore an ear off the great-uncle of a friend we called Cornetas. Very hard to hit them, because in the village they knew every trick.
-Dad, remember that you were born back there, and those were other times. Here in Los Angeles life is different: sorrows are savored and, if possible, we make a living off them.
from Iain Harper's Blog
In March 2023, GPT-4 could identify prime numbers with 97.6% accuracy. By June, that figure had cratered to 2.4%. Not a rounding error, not a minor regression, but a 95-point collapse on the same task with the same prompts. If a bridge lost 95% of its load-bearing capacity in three months, someone would go to prison. In AI, the vendor posts a changelog and moves on.
This pattern has repeated with depressing regularity across every frontier provider. Models ship to applause and enterprise contracts get signed on the strength of benchmark screenshots, and then something changes. The model you evaluated is no longer the model answering your customers, and nobody tells you until your production workflow starts producing garbage.
Researchers at Stanford and UC Berkeley tracked this drift formally, comparing GPT-3.5 and GPT-4 snapshots from March and June 2023 across seven tasks. The results were bad enough to make the researchers themselves flinch. GPT-4’s ability to generate directly executable code dropped from 52% to 10%. Its willingness to follow chain-of-thought prompting, one of the most widely used techniques for improving accuracy, degraded without explanation.
“The magnitude of the changes in the LLMs’ responses surprised us,” James Zou, a Stanford professor and co-author, told The Register. The team’s conclusion was blunt. The behaviour of the “same” LLM service can shift substantially in weeks, and nobody outside the provider knows when or why.
This wasn’t a one-off result that got debated and forgotten. The OpenAI developer forums have become a rolling graveyard of complaints. In September 2025, users running GPT-4.1 reported severe intelligence degradation within 30 days of launch, with complex tool calls and multi-step instructions suddenly failing. Similar threads appeared for GPT-4 Turbo in May 2025. The pattern never varies, and by now it has become depressingly predictable. Works brilliantly at launch, degrades silently, users scramble to figure out what broke.
There are at least four mechanisms that can degrade a deployed model, and most frontier providers are using all of them simultaneously.
Quantisation is the most technically straightforward of the four, and the easiest to understand. A model trained in 16-bit or 32-bit floating-point precision gets compressed to 8-bit or 4-bit integers for serving. The arithmetic is simple: a model stored in FP16 needs roughly two bytes per parameter, so a 70-billion-parameter model demands about 140GB of VRAM just for weights. Quantise to 4-bit and you cut that to around 35GB, enough to run on hardware that costs a fraction as much.
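That back-of-the-envelope calculation is easy to sanity-check (a sketch that counts weight storage only, ignoring activations, KV cache, and per-layer scale factors):

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights alone, in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

fp16_gb = weight_memory_gb(70e9, 16)  # 140.0 GB
int4_gb = weight_memory_gb(70e9, 4)   # 35.0 GB
```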
The trade-off is supposed to be minimal, and Red Hat’s analysis of over 500,000 evaluations found that 8-bit and 4-bit quantised models showed “very competitive accuracy recovery” on most benchmarks, especially for larger models. But that phrase “most benchmarks” is doing heavy lifting. Quantisation works by rounding, and rounding destroys outlier values. The weights that fire rarely but matter enormously for edge-case reasoning are exactly the weights that get flattened first. For standard tasks you barely notice the difference, but for the specific hard problems your production system was built to handle, the gap can be catastrophic. One developer reported that dynamic quantisation of a 3B-parameter model dropped accuracy from 65.6% to 32.3%, a halving that no benchmark average would predict.
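The outlier effect is easy to see with a toy round-to-nearest quantiser (purely illustrative; production schemes use per-channel scales and outlier-aware methods such as GPTQ or AWQ, which soften but do not eliminate the problem):

```python
def quantise_clipped(weights, bits, clip):
    """Symmetric round-to-nearest quantisation with a clipping range.

    The clip is chosen to give fine resolution to the bulk of the
    weights -- which means anything outside it gets flattened.
    """
    levels = 2 ** (bits - 1) - 1          # 7 representable magnitudes at 4-bit
    scale = clip / levels
    return [
        max(-levels, min(levels, round(w / scale))) * scale
        for w in weights
    ]

# Mostly small weights, plus one rare but high-impact outlier.
w = [0.01, -0.02, 0.015, -0.008, 2.5]
q = quantise_clipped(w, bits=4, clip=0.05)
# The small weights survive roughly intact; the 2.5 outlier is
# crushed down to 0.05 -- a 50x error on exactly the weight that mattered.
```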
Mixture-of-experts routing is the more interesting culprit, and the one providers talk about least. DeepSeek’s V3, for example, has 671 billion total parameters but only activates about 37 billion per token. The economics are irresistible because you get the capacity of a massive model with the inference cost of a much smaller one. But the router decides which experts handle which queries, and routing decisions are probabilistic. A query that activated your model’s strongest expert subnetwork at launch might get routed differently after an update to the routing logic, or after the provider adjusts load balancing to handle peak traffic. The user sees the same model name in the API response. The actual computation behind it may have changed entirely.
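A toy router makes that fragility concrete (a hypothetical sketch: real MoE routers are learned layers trained with load-balancing losses, but the failure mode is the same):

```python
def route_top1(logits, serving_bias=None):
    """Pick the expert with the highest score. The optional serving-side
    bias models a provider rebalancing load toward cheaper experts."""
    if serving_bias is not None:
        logits = [l + b for l, b in zip(logits, serving_bias)]
    return max(range(len(logits)), key=lambda i: logits[i])

# The same query produces the same router scores before and after.
query_logits = [2.0, 1.9, 0.5]          # expert 0 narrowly preferred

expert_at_launch = route_top1(query_logits)                          # expert 0
expert_after_rebalance = route_top1(query_logits,
                                    serving_bias=[-0.05, 0.1, 0.0])  # expert 1
# Same model name in the API response; a different subnetwork did the work.
```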
Distillation and model substitution is the elephant in the room that everyone suspects but nobody can prove definitively. Rumours have circulated since mid-2023 that OpenAI routes some queries to smaller, cheaper models behind the same API endpoint. The Gleech.org 2025 AI retrospective put it plainly: “True frontier capabilities are likely obscured by systematic cost-cutting (distillation for serving to consumers, quantisation, low reasoning-token modes, routing to cheap models).” GPT-4.5 was retired after just three months, presumably because the inference costs were unsustainable, even though it still ranked in the top five on LMArena for hallucination reduction nine months later. The model that performed best got killed because it was too expensive to run.
Safety tuning and RLHF adjustments create the subtlest form of drift. When OpenAI tightens content filters or adjusts the model’s tendency to refuse certain queries, those changes ripple through the entire behaviour space. The Stanford study found that GPT-4 became less willing to explain why it refused sensitive questions, switching from detailed explanations to terse “Sorry, I can’t answer that” responses. The model may have become safer by one measure, but it simultaneously became less transparent and less useful for legitimate applications that happened to brush against the updated boundaries.
Running frontier models is staggeringly expensive, and every provider is under pressure to reduce cost-per-token. The maths, as one industry analysis noted, resembles building more fuel-efficient engines and then using the efficiency gains to build monster trucks. Token prices have dropped by a factor of 1,000 in three years, but reasoning models now generate thousands of internal tokens before producing a single visible output, and 99% of demand shifts to the newest model the moment it ships.
Providers respond by doing what any business would do. They optimise for throughput and margin, quantising the weights and routing easy queries to cheaper subnetworks while distilling the flagship into something that passes the benchmarks but costs a tenth as much to serve. The individual techniques are all defensible, but stacked together and applied silently, they create a system where the model’s advertised performance diverges from its delivered performance over time.
DeepSeek made this trade-off explicit and turned it into a business strategy. Its V3 model serves inference at roughly 90% below comparable OpenAI and Anthropic rates, and the MoE architecture that enables this pricing is openly documented. Whatever you think of the approach, at least the engineering trade-offs are visible. The problem is worse when providers make the same trade-offs quietly, behind an API that returns the same model identifier regardless of what actually computed the response.
The practical upshot is unpleasant but straightforward. If your application depends on consistent model behaviour, you are building on sand that shifts without warning. The Stanford researchers recommended continuous monitoring, and they were right, but monitoring alone doesn’t solve the problem, because it tells you something broke without stopping it from breaking.
Pinning to a specific model snapshot helps, where providers offer it, but even snapshots get deprecated. OpenAI maintains them for a few months and then requires developers to migrate. The careful evaluation you ran against the March snapshot becomes irrelevant when you’re forced onto the June version and nobody can tell you exactly what changed.
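A minimal drift canary can at least tell you when behaviour moved, even if it cannot stop it from moving (a sketch: `call_model` stands in for whatever wrapper you have around the provider's API, and the suite, baseline, and threshold are all illustrative, not a vendor interface):

```python
# Fixed prompts with known answers, scored the same way on every run.
CANARY_SUITE = [
    ("Is 17077 prime? Answer yes or no.", "yes"),   # 17077 is prime
    ("Is 20024 prime? Answer yes or no.", "no"),    # 20024 is even
]
BASELINE_PASS_RATE = 1.0   # measured against the snapshot you evaluated
ALERT_THRESHOLD = 0.10     # absolute drop you tolerate before paging someone

def drift_check(call_model):
    """Return (pass_rate, should_alert) for one canary run."""
    passed = sum(
        1 for prompt, expected in CANARY_SUITE
        if expected in call_model(prompt).lower()
    )
    rate = passed / len(CANARY_SUITE)
    return rate, (BASELINE_PASS_RATE - rate) > ALERT_THRESHOLD

# Stub of a model that has drifted into answering "no" to everything:
rate, alert = drift_check(lambda prompt: "No.")
# rate == 0.5, alert == True
```

Run against the live endpoint on a schedule; the point is not that two prompts prove anything, but that the comparison is against your own frozen baseline rather than the provider's changelog.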
The deeper issue is one of trust and transparency. When a model provider updates a live model, they are unilaterally changing the behaviour of every application built on top of it. That is not a software update but an undocumented API change, the kind that would trigger outrage in any other engineering discipline. Imagine if AWS silently swapped your database engine for a cheaper one that was “approximately equivalent” on standard benchmarks, and you can begin to see how the AI industry has somehow normalised something that would be career-ending negligence anywhere else.
The model you benchmarked, the one that earned the contract, that impressed the board, that your engineers spent weeks building prompts and evaluation harnesses around, is a snapshot of a moving target. Quantisation shaves off the edges while routing sends your queries to whichever expert subnetwork happens to be cheapest that millisecond, and safety updates redraw the boundaries of what the model will and won’t do. None of it shows up in the model name string your application receives in the API response.
Somewhere in a data centre, the accountants and the alignment researchers are both pulling the same model in different directions, one toward cheaper inference and the other toward tighter guardrails, and the engineers who built their products on last month’s version are left checking the forums to figure out why everything stopped working on a Tuesday.
from Unvarnished diary of a lill Japanese mouse
JOURNAL 27 February Mat, sabre, bokken: a kenjutsu lesson
So I explained to these already advanced students why, in kenjutsu, we don't cut mats. I teach kenjutsu, which is the whole set of combat techniques of the traditional Japanese warriors. I do mean combat: it means you're supposed to have an opponent, not a motionless mat. To cut a mat, you use the blade from the middle of its length onward, so you have to get close to the mat. Right there, an opponent is not going to let you come that close without doing anything. But let's suppose. You draw, you take the sabre in both hands, most of the time you strike hard, driving with the legs... you risk jamming the blade. You're dead. If you do cut, you finish with the blade pointing down, leaning slightly forward; to get back into guard you have to step back two paces... I won't draw you a picture: at every phase of the action you put yourself in serious danger in front of an opponent. In kenjutsu you keep a distance of one arm's length plus one blade's length. In a single motion you draw, you cut with the point, two or three centimetres of blade at most, and you return to guard immediately without having to step back. If you have judged the distance well (that is easily learned), you have cut the carotid. That is enough; if necessary you can deliver a second blow. No need for exceptional strength; it's enough to have the right movement. That's the difference, and that's why we don't amuse ourselves cutting mats. I gave a demonstration, on a mat as it happens. The cut ran across the whole diameter, about 3 cm deep, roughly half a second by eye. It's because I'm trained in kenjutsu that I morally beat my brother from the very first exchange of our duel. We had bokken, of course, but I used mine like a sabre: in the first exchange, with the point of my bokken, I touched the tendons of his hand. With a sabre, the fight would have been over. He wanted to break my bones; the whole difference is there.
Of course it's clearer when I can demonstrate, but I think it's understandable even here. My students understood.
from An Open Letter
I learned that I'm supposed to move on. And I learned a couple different ways of doing that; one of the ways was to act like they had died and grieve that. But I guess it's pretty hard because I know she's not dead. I've been watching a good amount of videos on a different set of topics: things from her perspective, things from my perspective, things about relationships in the future, stuff like that. I also learned that I over-intellectualized the break up, and because of that I was able to shield myself from grief. And so I'm back to letting myself feel that grief.
I was watching a video on YouTube of an old lady reading something called "Let Them". Just let them. And I think it's a pretty simple concept that would benefit me a lot to really internalize. I spent a lot of time in the relationship and I put up with a lot of stuff because I really wanted things to work, for the sake of things working. I think I also took on the role of a caretaker a lot, and I tried to fix her. I think I took a lot of responsibility and told myself that I had a lot of agency over the things she was kind of deficient in, and instead of trying to move on, I almost made her a pet project, trying to make her want to change and become the person I hoped I could spend a future with. But you cannot make a horse drink water. All you can do is lead it to it. I think there's a certain kind of grief in accepting the fact that I cannot save her, and additionally accepting the fact that if she chooses to get better, maybe I can take some amount of credit for being part of that catalyst, but at the same time it's not because of me.
It was a really weird thing, when I was using ChatGPT as a sounding board, one of the questions it asked me was how would I feel if in two years she got better emotionally. Like she had healed and fixed a lot of the issues that caused our relationship to fail. And weirdly I didn't feel good about it and I don't really know why if I'm being honest. I think the low hanging fruit I would guess is that I would be upset that I didn't get to experience that version of her. And I guess another part of me would feel like I tried really hard to give her that grace and give her the tools to fix her issues, but she just didn't. And then in this hypothetical, she then did. It's a weird thing because I've said a lot that I want the best for her and I want her life to go well, but when I think about her in a relationship with another person, especially if it's recent, it hurts. I think that's also just a very natural thing of course but it still hurts. I think I want to know that I was special to her, and I guess part of me is still hanging onto those words that she would always tell me of how I was the one and how she doesn't know how she could live without me. But I guess that's the caretaker. I think I also thought a lot about the fact that part of the reason why I felt like she loved me a lot was because I took care of her in those ways. I would give her a lot of grace, I would tolerate a lot of things, I would regulate her emotions, I would try to fix her where I could. And I remember that in the most recent fuck up, I had a really weird thought that I didn’t like it. My brain told me at some point that she was going to make up for this so much, and I would essentially receive so much love and affection from that. Almost like my needs, and even just my wants would be met, because she almost owed me in a way. And that thought disgusted me immediately even in the moment. That's not at all what love should be like and I really don't think that's how I viewed things either. 
I really hope not.
I started to cry when I was driving home from hanging out with a friend, and I kind of triggered it on my own. I put my hand off to the side like I would when we were driving, and I would hold her thighs. And then I would put my hand where I would've held her hand on the center console. I told myself that I lost my passenger princess. And that was enough to make me start crying again. I went to the food place that we went to together a lot when I first moved here. I thought about how we parked in almost the same spot and she would get into my car and we would eat our food while watching a YouTube video together. But it's almost like I've run out of tears. I heaved and I cried, but not many tears came out.
I cried a lot yesterday when I put some of the last keepsakes into the trash bag that I have now stored into the shed. I also put lemon, which was the stuffed animal that she bought to cuddle at my place. And it felt kind of fucked because lemon kind of became my stuffed animal in a way. I've never really been super attached to a stuffed animal, but part of me feels really guilty for keeping it in that shed, especially when I've cuddled it so many nights. Maybe the kindest thing that I could do is rehome lemon. Part of me wants to of course keep lemon, and it's weird because it doesn't even feel like it's super strongly tied to her, like lemon is its own thing in my mind. But then I also think about all of the photos and all of the times that she cuddled lemon. All the times that I would see her sleeping so peacefully cuddling it. You know it would make me sad sometimes because she would cuddle lemon and Hash would be trying to snuggle up with her on the side. And I would always wonder how she could choose stuffed animal over Hash. Sometimes she did cuddle Hash though and it made me really happy to see that. And just like that, I've started to cry again. I feel a lot of obligation towards my dog, Hash. And I really want to make sure that any partner I have in the future is loved by him, and absolutely loves him back. Honestly, a part of me feels like another reason why I'm glad I made this decision was because when she was breaking up with me and going through my house with her roommates, she said bye-bye Hash in a very light tone, like she had emotionally distanced herself from him. And I just don't understand how you could love a dog and dismiss them so easily. Her roommates even were making jokes about stealing him. And how their cat would beat him in a fight. I don't care how much she was mad at me, or how much she needed the relationship to end. She should never have even considered any kind of malintent towards him. That's my baby. 
And at the end of the day, I really do mean that way more than whenever I would say that towards her. Hash should never have been disregarded or caught up in the conflict we had. And I think that's enough of a reason for me to understand that maybe she wasn't the one for me. And it hurts a lot because I think Hash is like me in a lot of ways. I went through my camera roll and I found photos of her with Hash sleeping under her lap while she was doing homework. And there's so many photos of them together. And Hash really loved her so much, just like I did. But I think sometimes there were just periods where, for whatever reason, she wouldn't be well, and she would hurt us. I think at the end of the day, I really do deserve someone who wouldn't do that to me. I deserve to be loved, in a gentle way. In a way where I don't need to take care of someone else and keep them emotionally grounded. In a way that I don't need to convince them or explain to them how they've hurt me, but rather it comes from a place of compassion and curiosity. And I'm not saying that E didn't love me at all. But I don't think she loved me in the healthiest of ways, and I also agree that I don't think I loved her in those same ways. I also don't know if I believe that love conquers all. Because I think if she truly loved me, she would've helped herself. She would've done the work to overcome the problems that she had; at the same time, maybe those problems were just more than her love could handle. And so she did love me, but I think she was struggling a lot. And maybe the kindest thing she could've done in that situation was to let me go. And the problem is I didn't want to let go. And so I think the even kinder thing she did for me was to take it so far that I would have no choice. And maybe that was what I needed. I really hope that in my future relationships I'm able to set boundaries, and that I'm able to take time and make sure that the person I choose to be with next is someone who is good for me.
from EpicMind
Why do some people rise in organisations while others do not, even though they are at least as professionally competent? This question comes up regularly: in my teaching, in conversations with managers, in discussions about career paths. Many implicitly assume that quality prevails in the long run. Whoever delivers the better analyses, thinks more sharply, works more carefully will sooner or later also lead. It is not that simple.
An MIT study that was recently reported on offers an instructive finding here. Several investigations showed that people who completed structured debate training were more likely to move into leadership roles later on. The decisive mechanism was not expertise but an increase in so-called assertiveness: the ability to communicate clearly, directly, and steadfastly. Assertiveness does not mean aggressiveness. It is not about talking others down or acting dominant. What is meant is the ability to present one's own position intelligibly, to take objections on board, and still not cave in.
The study thereby makes visible something many know from practice: #leadership emerges in social interactions. Not in perfect concept papers, but in meetings, negotiations, and conflict situations. Whoever stays visible in such moments is more likely to be perceived as capable of leading. That does not mean this person is automatically the better leader. But they are more likely to be chosen.
Organisations have to decide whom to entrust with responsibility. These decisions are not based solely on objective performance data. They rest on perception: Who comes across as composed? Who stays calm under pressure? Who can defend a position even when facing headwind?
The MIT results suggest that exactly these factors play a systematic role. Debate training does not primarily change thinking; it changes how one appears in social space. And that appearance influences one's chances of advancement. Which shows: it is not enough to have good ideas. You also have to be able to hold them in dialogue.
Here a point comes into play that many find irritating. When I prepare aspiring managers for the oral communication exam within the SVF certificate, I am regularly asked what this format is even for. The exam consists of a short preparation phase followed by a 15-minute dialogue with two experts who deliberately take the opposing position. So no lecture and no memorising, but a conversation against headwind.
At first glance this looks like a rhetorical duel. On closer inspection, however, it maps a typical leadership situation: you must develop, structure, and defend a position, and at the same time listen, react, and stay calm. Precisely the abilities that, according to the MIT study, are linked to leadership emergence. The exam does not measure knowledge but the ability to remain visible and argumentatively effective under social pressure. That is no accident. Leadership does not take place in monologue.
At this point a nuanced assessment matters to me. The study shows that assertive communication increases one's chances of advancement. It says nothing about whether these people are the most effective leaders in the long run. Here lies a tension. Organisations risk favouring those who come across especially forcefully, while reflective, quiet, or strongly cooperative personalities receive less attention. Visibility is not the same as quality.
The oral exam, likewise, does not measure "good leadership" in its full breadth. It measures one precondition for being noticed in leadership situations at all. Listening, empathy, strategic thinking, and integrative ability are not comprehensively tested there. But whoever cannot defend a position clearly will find it hard to bring those other qualities to bear effectively. Visibility is no substitute for leadership; it is an entry ticket.
Against this background I consider the format well chosen. It forces candidates into a realistic interaction situation. It tests steadfastness without disrespect. It demands structure under time pressure. It requires presence. And it confronts them with a fact that applies in everyday working life anyway: leadership means taking a stand in controversial conversations. Whoever does not train this ability will hardly be able to summon it spontaneously at work.
It is not always the best ideas that rise. Often it is those who can visibly defend their ideas against contradiction. The MIT study provides an empirical basis for this. Leadership emerges in conversation, not in thought alone.
The oral communication exam in the SVF certificate maps exactly this reality. It does not simply test knowledge but social effectiveness. And it reminds us that professional competence without communicative steadfastness rarely suffices in organisations.
If you are preparing for such an exam, do not think of it as a rhetorical trial of strength. Think of it as a training ground for visibility. Develop clarity in your argumentation, stay respectful in disagreement, and hold your position when headwind comes. Leadership does not begin with power. It begins with not falling silent at the decisive moment.
Image source: Anton Hickel (1745–1798): The House of Commons, National Portrait Gallery, London, public domain.
Disclaimer: Parts of this text were revised with DeepL Write (proofreading and editing). Google's NotebookLM was used for research in the works/sources mentioned and in my notes.
Topic #Erwachsenenbildung | #Coaching
from Lastige Gevallen in de Rede
Rituals around sketched lines
The bell declares the state of emergency waking in time in the bedstead rising earlier than a bird doing penance a firmly outlined course announcing what it matters to you the deeds cling to commands clearing all that comes down as entitled judging all that can for dangers threatening with loose atoms halving what they must desire to offer every restriction tightly secured the trading house in between sanctified the offerings given to the gap a day is done and that is that