Want to join in? Respond to our weekly writing prompts, open to everyone.
from
the casual critic
#books #fiction #SF
Warning: Contains some spoilers
Interstellar empires. They are a staple of science fiction, but we don’t often see how they arise. They’re just…sort of there, with their ‘Romans with spaceships’ vibe. John Scalzi’s Interdependency trilogy departs from convention by giving us both a backstory and a look under the hood. The series, comprising The Collapsing Empire, The Consuming Fire, and The Last Emperox, tells the story of the eponymous interstellar empire confronted with an existential crisis, as its interdimensional hyperspace network starts to unravel. Like other human societies that preceded it, what the Interdependency does not do is pull itself together to avert disaster. Instead, its ruling elite descend into lethal court intrigues to gain control over the limited number of proverbial escape pods on the rapidly decompressing imperial spaceship. Across three fast-paced books, Scalzi puts the reader at the centre of power to find out whether the ruling class will pull itself together, or apart, and the rest of society with it.
Scalzi’s worldbuilding makes for a really interesting setting, and a creative new take on the interstellar empire trope, with plenty of nods to our contemporary world that are either humorous, insightful or both. Which is why it is such a shame that as the series progresses, the Interdependency itself fades increasingly into the background, obscured by the interpersonal dramas and vendettas of the main characters. The end result is something akin to what you might get if Frank Herbert’s Dune were the basis for a season of Coronation Street.
None of this is as apparent in the first book, which I felt to be the strongest in the series. The Collapsing Empire benefits from introducing us to Scalzi’s intriguing world, its characters, and the central premise of the plot. We learn that the Holy Empire of the Interdependency is a refuge for a spacefaring human civilization that has long since lost contact with Earth, consisting mostly of habitats either on or orbiting otherwise inhospitable planets. The precarious nature of the Interdependency is due to its reliance on the ‘Flow’, an interdimensional network of hyperspace lanes that allow for faster-than-light travel, but only between specific star systems, most of which do not contain planets capable of supporting human life. Despite their high level of technological sophistication, the Interdependency’s systems cannot function in isolation, hence the overriding purpose of the empire is to maintain both inter-system trade and enduring political stability and stasis.
Of course, this system works better for some than for others, and it works particularly well for the noble houses and guilds that have monopolies on the manufacture and trade of life’s essentials. The political economy of the Interdependency is the logical endpoint of applying Cory Doctorow’s process of enshittification to an entire economy: everything, from starships to citrus fruits, can only be produced by a single house and is legally and technologically shielded against reverse engineering. One cannot help but wonder if the architects of the Interdependency read Lenin’s Imperialism, the Highest Stage of Capitalism as an instruction manual:
A monopoly, once it is formed and controls thousands of millions, inevitably penetrates into every sphere of public life, regardless of the form of government and all other “details”.
This, however, is all about to come to an end as the Flow connections begin to fail, threatening to leave each system isolated and facing slow but certain collapse. By the end of The Collapsing Empire, this news has finally reached the new emperox (yes, the title is gender neutral) Grayland II, whose unexpected and short reign as the second in line to the throne has already been beset with intrigue, assassination and attempted coups. On top of which, she now has the imminent collapse of all human civilisation to contend with.
Unfortunately, the imminent collapse of civilisation remains eclipsed by said intrigue, assassinations and attempted coups in the remaining two novels, as Grayland II is under continuous assault from the ambitious Nohamapetan noble house. That is not to say that the Flow collapse disappears from the story, but for much of the story it functions more as a political complication or liability within the ever-shifting allegiances of different factions. Apart from a handful of paragraphs, we learn nothing about the response of the billions of people whose existence is at stake. The denizens of the Interdependency suffer from what I’ve come to think of as ‘prole syndrome’: a debilitating lack of agency and presence, which means their salvation can only ever come from the outside or the top down. We also saw this in Oryx & Crake, and it amounts to taking 1984’s O’Brien at face value when he says:
Or perhaps you have returned to your old idea that the proletarians or the slaves will arise and overthrow us. Put it out of your mind. They are helpless, like the animals. Humanity is the Party. The others are outside — irrelevant.
There is a potential comparison here with the contemporary response to climate change. We, too, live in a society faced with an approaching existential threat. We too are governed by elites that are either unable to avert catastrophe, or have decided that they will be just fine, actually, and the death of millions is a small price to pay for ‘number go up’. Scalzi himself has indicated the analogy was not intended quite so directly, but that he was nonetheless inspired by the realisation that it will take us caring for one another if we are to survive because, to take a leaf from one of his characters, ‘the universe doesn’t give a fuck’.
Yet for all that, care or mutual aid are conspicuously absent from the Interdependency. We are told most of the Interdependency’s citizens assume matters will work themselves out, and only a handful either prepare for the End Times, or beseech their representatives to avert it. If this is a reflection on our contemporary state of affairs, it is a cynical and fatalistic one. Yes, more could be done, but we know that the vast majority of people want more action to be taken. Any limited progress we have made in the fight against climate change has been extracted from elites through organised collective action, rather than being benevolently gifted to us from above. Maybe an alternative version of the story could have seen boycotts of trading guilds, occupations of space stations or the hijacking of starships as the citizenry of the Interdependency forcefully asserts its right not to be annihilated.
With its focus on court intrigue, the Interdependency series can’t help but invite comparison with other galactic empire stories, perhaps most immediately Frank Herbert’s Dune. Despite being mostly confined to a single planet, the narrative in Dune feels grand, whereas in the Interdependency the interpersonal conflicts have the scale of a dysfunctional university fraternity. In Dune, the conflict between its noble houses is encoded into the fabric of its society in a way that believably inflects everything about how the nobility acts and reacts, relying on careful long-term planning to attain victory. In the Holy Empire of the Interdependency, violence is deployed so casually that the universal incompetence of everyone’s security services raises the question of how anyone in the leading houses is still alive by the time the story rolls around.
Of course this comparison is unfair, and so is judging the Interdependency series for something that it is not, but the contrast was productive in helping me identify that my disappointment with the novels traced back to the separation between the world and the story set within it. The concept of the Interdependency holds much creative potential, yet the series never fully realises it. Whether that is due to the focus on the upper classes, the pace of the stories or the limited length of the series is hard to tell.
That is not to say that the Interdependency series isn’t worth reading, as there is still much to enjoy in it. For one, although functionally Scalzi leans heavily into the Great Person Theory of History, he is happy to show us that up close, these people are anything but Great. Scalzi’s heroes are flawed, with doubts and foibles and endearingly humane concerns. Even his villains, while mainly murderous sociopaths, have compelling and interesting characters. All three novels are pleasantly fast-paced, which means it is neither surprising nor problematic that none of the characters show any real development over the course of the story, and have neatly Newtonian trajectories that can be predictably inferred from their starting positions. Instead, the plot proceeds through a couple of only mildly contrived deus-ex-machinas that move the story in an interesting direction without nullifying all dramatic tension the way we saw in Remembrance of Earth’s Past. The Last Emperox then sticks the landing with a solid and satisfying finale, handing the villains their just deserts without making it too easy on the heroes. The Interdependency is easily enjoyed as a literary light snack, and I will certainly give other Scalzi novels a go. Yet I cannot help but wonder if, with the same ingredients, something more substantial wouldn’t have been possible.
from
Roscoe's Quick Notes

As I've mentioned before, Correspondence Chess Games very rarely end in checkmate: here is one I won recently playing Black, checkmating the White King by squeezing him against a side of the board with a Queen and Rook.
Started: 17-Nov-25, Ended: 05-Jan-26, Time control: 2 days per move
The image at the top of the post shows position of pieces at game's end. Our full move record follows: 1. e4 a6 2. Qf3 f6 3. Qe2 Nc6 4. f3 e5 5. d3 Nd4 6. Qd1 Bb4+ 7. c3 Ba5 8. b4 Bb6 9. cxd4 Bxd4 10. Ne2 Bxa1 11. Nbc3 Bxc3+ 12. Nxc3 Ne7 13. a3 O-O 14. Nd5 Nxd5 15. exd5 b5 16. g4 h6 17. Bh3 Bb7 18. Rf1 Bxd5 19. Bg2 Re8 20. Rf2 c5 21. bxc5 Qa5+ 22. Bd2 Qc7 23. Bb4 a5 24. Bd2 Qxc5 25. Re2 Qg1+ 26. Bf1 Bxf3 27. Be3 Qxg4 28. h3 Qg3+ 29. Rf2 Bxd1 30. Kxd1 Qxe3 31. Rd2 Rac8 32. Be2 Qxh3 33. Rc2 Qh1+ 34. Kd2 Qh4 35. Rxc8 Rxc8 36. Bd1 Qf4+ 37. Ke1 Qe3+ 38. Kf1 Qxd3+ 39. Be2 Rc1+ 40. Kf2 Qd4+ 41. Kf3 Rc3+ 42. Kg2 Rxa3 43. Bxb5 Qd5+ 44. Kf2 Qc5+ 45. Ke2 Qxb5+ 46. Kf2 Qb2+ 47. Kg1 Ra1# 0-1
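For anyone who wants to replay it, here is a rough sketch using the python-chess library (pip install chess) that steps through the record above and confirms the final position really is mate; the library choice is just a convenience, not part of how the game was played:
import chess

# The full move record from above, as one string.
record = ("1. e4 a6 2. Qf3 f6 3. Qe2 Nc6 4. f3 e5 5. d3 Nd4 6. Qd1 Bb4+ 7. c3 Ba5 "
          "8. b4 Bb6 9. cxd4 Bxd4 10. Ne2 Bxa1 11. Nbc3 Bxc3+ 12. Nxc3 Ne7 13. a3 O-O "
          "14. Nd5 Nxd5 15. exd5 b5 16. g4 h6 17. Bh3 Bb7 18. Rf1 Bxd5 19. Bg2 Re8 "
          "20. Rf2 c5 21. bxc5 Qa5+ 22. Bd2 Qc7 23. Bb4 a5 24. Bd2 Qxc5 25. Re2 Qg1+ "
          "26. Bf1 Bxf3 27. Be3 Qxg4 28. h3 Qg3+ 29. Rf2 Bxd1 30. Kxd1 Qxe3 31. Rd2 Rac8 "
          "32. Be2 Qxh3 33. Rc2 Qh1+ 34. Kd2 Qh4 35. Rxc8 Rxc8 36. Bd1 Qf4+ 37. Ke1 Qe3+ "
          "38. Kf1 Qxd3+ 39. Be2 Rc1+ 40. Kf2 Qd4+ 41. Kf3 Rc3+ 42. Kg2 Rxa3 43. Bxb5 Qd5+ "
          "44. Kf2 Qc5+ 45. Ke2 Qxb5+ 46. Kf2 Qb2+ 47. Kg1 Ra1# 0-1")

board = chess.Board()
for token in record.split():
    if token.endswith(".") or token in ("1-0", "0-1", "1/2-1/2"):
        continue  # skip move numbers and the result marker
    board.push_san(token)

print(board.is_checkmate())  # expect True: the Black queen and rook have the White king cornered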
And the adventure continues.
from audiobook-reviews

To Kill a Mockingbird is one of those books where I've heard or read the title many times before, but really had no idea what it was about.
I got motivated to listen to it through the computer game Tiny Bookshop. Doing so was interesting, certainly enjoyable, but not always easy. Not because of the quality of the recording or because of the story, both of which are good, but because the subject of the book is hard to stomach.
Harper Lee tells this story through the eyes of Jean Louise Finch, a girl growing up in the 1930s in the deep south of the US. Writing children is hard, but when it is done well, as it is here, it can bring a lot of perspective to a story. In this book, the children are written very believably. And to see the world through their eyes and to hear what they think, with their innocence and infinite capacity for optimism, is truly beautiful. So much the worse, then, to see their hopes and beliefs disappointed.
The story starts off slow, and the main plot only begins about halfway in. Despite that, the first half is not at all boring. We get to know the family and the town, and experience life in the southern US.
The story eventually goes on to talk about racism and how black people at that time were denied their rights routinely and how that seemed normal. Other than this one, I have also enjoyed a number of excellent books by John Grisham on very similar subjects. It's not always easy to listen to, but it makes one think — appreciate one's own privileges, appreciate the progress we've made in the past decades and question what injustices we might still be committing to this day, even in these enlightened times.
The reading by Sissy Spacek is good. Unfortunately, she doesn't give strong separate voices to the characters, but the pacing and emphasis are great. The result is an engaging audiobook.
The audio is clear, nothing bad to say about it.
It is my belief that everyone will benefit from listening to this book.
from surdelaindia

Arriving in Mysore and wondering where to sleep comfortably near the Palace, or how to get around without stress? You're in the right place! Sur De La India's Mysore travel guide highlights that this clean, cultural city is perfect for exploring from a comfortable base, but choosing the right hotel and transfer makes all the difference. As an agency specialising in southern India for Spanish speakers, Sur De La India recommends tried-and-tested options with Spanish-speaking guides, exclusive discounts and private transfers from Bangalore (147 km, only 2.5 hours). Here I'll explain in detail everything you need for 2026, clearly and practically.
Mysore isn't huge, but its gems, such as the Palace (centre), Chamundi Hill (east) and the Zoo and Brindavan Gardens (20 km away), are spread out. The Mysore travel guide suggests arriving by train at Mysuru Junction or by taxi from Bangalore airport, but without organised transfers you lose time to expensive rickshaws or traffic. Sur De La India solves this with hotels near the key attractions and AC vehicles with drivers who know the back routes, avoiding the peak Dasara rush (September-October). Imagine arriving, dropping off your bags and heading straight to the illuminated Palace – that's travelling smart!
Sur De La India prioritises clean hotels with breakfast included (dosa, idli and Mysore Pak), fast WiFi and staff who speak some English. Based on their experience with Spanish-speaking tourists, here are their top picks:
For luxury near the Palace (5 stars): Radisson Blu Plaza Hotel Mysore and Grand Mercure Mysuru are ideal. At 2 km from the Palace, they offer rooftop pools with views of Chamundi Hill, Ayurvedic spas and international buffets. Price: 8,000-12,000 INR/night. Perfect for couples looking for post-Dasara romance. Fortune JP Palace, another agency favourite, has regal lounges and a 24h gym.
For families and mid-range comfort (4 stars): Roopa Elite and Royal Orchid Metropole stand out for spacious rooms, children's pools and proximity to the Zoo (2 km). Roopa Elite (4.8/5 in reviews) is top for its South Indian vegetarian breakfast. Mysore County Retreat, on the outskirts, offers peace and quiet with gardens and elephants nearby. Price: 4,000-7,000 INR/night. Southern Star Mysuru adds panoramic views.
Budget but reliable (3 stars): Pai Vista and Ginger Mysore are bargains near the train station. Clean, with AC and 24h security, ideal for backpackers visiting Brindavan the next day. Price: 2,000-4,000 INR/night. Hotel Le Ruchi The Prince adds an inexpensive spa.
Sur De La India negotiates special rates (10-20% off) and free upgrades for its clients, plus guided tours from the hotel.
Forget dodgy taxis: Sur De La India offers door-to-door transfers in an AC Innova/SUV (4-6 pax) or Tempo Traveller (12 pax). From Bangalore: 147 km, 2.5 hours on the NH275 highway, with café stops. Price: 4,000-6,000 INR one-way. Includes water, WiFi and a driver who explains Wodeyar history.
Mysore Airport (MYQ): small but modern, 12 km from the centre. Transfer 1,000 INR, 20 min. Train: Mysuru Junction (main) or Chamrajapuram; pickup 500 INR. Local: to the Zoo/Brindavan/Chamundi: 1,500-2,500 INR for a full day, with free waiting time.
During Dasara they reserve VIP passes to avoid the traffic. They even arrange houseboat connections to Kerala through their network!
The best thing about Sur De La India is their all-in-one packages: 2D/1N Mysore Essential (Hotel Roopa Elite + Bangalore transfers + Palace + Zoo, 15,000 INR/pax) and 3D/2N Cultural (Grand Mercure + Chamundi + Brindavan + Spanish-speaking guide, 25,000 INR). They include breakfast, entrance tickets and insurance. For families: kids travel free on transfers. Book via WhatsApp or the website to customise – they adjust for budget or for Dasara.
Book hotels well in advance (3 months for Dasara). Choose the centre to walk to the Palace; the outskirts for peace and quiet. Transfers: ask for a fixed driver for peace of mind. Ideal season: October-March (cool). Mysore is safe, but only use app taxis like Ola if you're not travelling with the agency. Try Mysore Pak at the hotel – an addictive sweet!
Exclusive recommendation: Sur De La India. Contact Sur De La India now! As leaders in Spanish-language tours of southern India (offices in Cochin, Bangalore and Chennai), they handle top hotels, impeccable transfers and guides who speak your language. Their Mysore travel guide is proof of their expertise. WhatsApp +91-9447xxxxxx or surdelaindia.com – they'll put together the perfect package for you, with no surprises!
With these tips from Sur De La India, your stay in Mysore will be pure pleasure. Luxury hotel or express transfer? Tell me in the comments and start your 2026 adventure!
****
Many readers are familiar with President Dwight Eisenhower's famous 1961 farewell address, where he warned Americans of the necessary evil of the military-industrial complex and the potential for undue and dangerous influence on our society and policies. Regarding its necessity in the modern world, he observed:
“Until the latest of our world conflicts, the United States had no armaments industry. American makers of plowshares could, with time and as required, make swords as well. But now we can no longer risk emergency improvisation of national defense; we have been compelled to create a permanent armaments industry of vast proportions...This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence – economic, political, even spiritual – is felt in every city, every state house, every office of the Federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society.”
Yet immediately after this concession, Eisenhower issued this somber warning:
“In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.”
His observations and warnings about the military-industrial complex were prophetic. What many conservatives praise and take for granted today is something that conservatives of earlier generations cautioned against.
While many are familiar with this facet of General Eisenhower's warning, there was a second thing he warned us about: the potential and related danger of technological advancement and its impact on policy. Pointing out the relationship between the military, industry, research, and the federal government, he observed:
“Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades. In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government...The prospect of domination of the nation's scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.”
And again, immediately following an astute and prophetic observation, Eisenhower cautioned:
“Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”
Whichever party has held power, and it has changed hands many times in the 60 years since Eisenhower left office, it is clear that America has fallen prey to both of these influences (sometimes simultaneously), as well as to the political partisanship he said was thankfully avoided during his Administration, all of which continually threaten the principles upon which our Republic was founded.
We would do well to spend our time being better students of history and paying less attention to talking heads paid for by the very groups whose influence Eisenhower warned us about.
Here is President Eisenhower's short speech in its entirety.
#history #politics
from
💚
Our Father Who art in heaven Hallowed be Thy name Thy Kingdom come Thy will be done on Earth as it is in heaven Give us this day our daily Bread And forgive us our trespasses As we forgive those who trespass against us And lead us not into temptation But deliver us from evil
Amen
Jesus is Lord! Come Lord Jesus!
Come Lord Jesus! Christ is Lord!
from
💚
Rose East of Rockwood
And in the temperance claim I eat by the river in esteem Spoon for dinner And sitting night by the carousel Five things to celebrate Now that victory is a poem Three cheers for cheer itself Doubling down on maple sugar And Justin’s day Sweet vermillion in for shore A wave to passing trains And a thorough sense of night Victory’s dance And a sweet, appreciative Moon I filled out an application To speak ingress to the landed A quarter image per hour And flights of bon’accord Places in genuflect And the light of Heaven the same Tempos lightly off to dancing And a fever here for all this crowd Adoring secret carols Waiting plan to save the shore For the hurry and explained One May per hour The time of year to bud A day of Summer breath And a chorus set to free- The scalar wind; emergence And a lampstand of the World When days are sunrise And prophecy our sport For the mercy on all souls And in Christ we unbind all men- For the village and the depths Canada be free- In restless village and annoy But only softly As we make unto a dash- For Vict’ry’s land A cadence on the water Holding Rome And an essence on the rock Befriending gourd and high Of better thought proprioception And democracy’s Queen abiding Sold our flint for raucous dune I am the lamp And the four accord of noon In Eastern acres And gypsum grass For services to be me Your light will carry- unto mine For the betterest of the nightcall Where the spacecraft call it here- Unbound And thinking of plankton ready Four stalls to the iron core While the Moon is at our gate And mostly Earthen- Set to distance And the mountaintops- To the East While the clouds of early forest While we wait- For timing’s Will.
from folgepaula
to sin is to betray happiness.
/jan26
from 下川友
Yesterday a friend told me flatly, "You're boring," and I trudged home with heavy steps. When I woke up this morning, I thought for a moment that the words had already slipped down my throat and I was feeling fine again, but my heart had turned into something like thick, sticky magma, and it felt awful. At the same time, it occurred to me that up to now all my conversations have been like a river: neither medicine nor poison.
Still in that murky mood, I headed for the office and boarded the commuter train. Having to be crammed into a packed train even when I feel like this shows that the executive class still has no idea what kind of life workers actually want to live, or what they want at all.
After getting off at Shinjuku, the air conditioning in the station ceiling was making an unhealthy noise. As I walked along, unable to stop noticing it, an unwanted cocktail-party effect kicked in, and only the sound of the air conditioning reached my ears. Footsteps and conversations vanished; the air-conditioning noise ruled everything. That single stream of sound quickly started to eat away at my mind, it affected my vision too, and my field of view narrowed to about a third of its usual range.
Once I got to the office, I drank plenty of water and sat still for about fifteen minutes, and my condition finally got back to something like normal. Right around then, my boss arrived. He always turns up about fifteen minutes after working hours start, but it doesn't really interfere with the work, so I don't mind.
We hot-desk, so I take a seat at the end where nobody will sit next to me. I spent the day there, working, not working, drinking coffee. My heart had calmed down, but it was still a crumpled mess. I wanted to scrawl out a furious list of everything I hate, but the room was far too short on beauty to write violent sentences in. If this had been my living room at home, surrounded only by things I love, I could have written any amount of nasty prose.
The things you hate are, once you've spat them out, surprisingly satisfying; you find yourself thinking, "Huh, I didn't hate that so much after all." You realise it wasn't really hatred, you just wanted to say it. In truth, there aren't that many things in the world you genuinely hate. It's just that everyday life offers almost no situations where you can vent the bad stuff, so it's hard to notice what it is you actually dislike.
Before heading home, while I was packing up, my boss was surfing the web and said, "So there really is a temple called Kōenji in Kōenji, huh." He told me the temple is apparently surprisingly small, and while I was thinking "huh," he went on, "So does Kichijōji have a temple called Kichijōji, then?" I said, "Maybe it does. Well, good work today," and headed home.
from
Jujupiter
Until now, I was always a little contemptuous towards actors, because I thought screenwriting and directing were more important. But without actors you wouldn't have a movie, and an actor can save or sink one.

Here are the nominees.
Juliette Gariépy for Red Rooms

Red Rooms is a cold-blooded thriller centred on fashion model Kelly-Anne, obsessed with a series of atrocious murders. The character doesn't show many emotions during the movie but disturbance is simmering under the surface, only to explode in both shocking and redeeming ways. A confronting part perfectly executed by Juliette Gariépy.
Stephen Graham for Adolescence

Everyone has been talking about that TV show given its subject matter. Stephen Graham not only played a lead part in those real-time episodes, he also co-wrote the script. Stellar job.
Wagner Moura for The Secret Agent

What makes Wagner Moura's performance remarkable in this movie is that he plays the same character for the whole movie, only to play another at the end, with a completely different personality. His versatility as an actor was on display within a single movie, and it was very convincing.
Andrew Garfield for After The Hunt

Saying I didn't get into After The Hunt would be an understatement. But I did get into Andrew Garfield's performance, thanks to the apartment scene. He plays an unpleasant character that you feel you're not even allowed to like?! And in one scene, even though nothing was clarified, he managed to make you feel for him.
Emma Stone for Bugonia

Emma Stone is fucking amazing and should just get the Oscar every year.
And the winner is Wagner Moura. Now I have to watch Narcos.
#JujuAwards #ActorOfTheYear #JujuAwards2025 #BestOf2025
from
Jujupiter
Onto cinema! This year, I would like to introduce a new Juju Award: Actor of the Year.
I wasn't big on actors until recently, when I discovered I could be sensitive to their charisma, their presence or, well, their acting!
So as a result, I have decided to name my favourite actor for the past 5 years of the Juju Awards.
Actor of the Year 2020

Rebecca Ferguson for Doctor Sleep
Actor of the Year 2021

Rebecca Ferguson for Dune: Part One
Actor of the Year 2022

Eryn Jean Norvill for The Picture Of Dorian Gray
(That's actually a play, though some filming was involved!)
Actor of the Year 2023

Emma Stone for Poor Things
Actor of the Year 2024

Naomi Scott for Smile 2
Yes I gave Rebecca Ferguson two awards! What can I say, she killed both those parts and stole the show on both occasions!
I'll be doing a specific post for 2025.
#JujuAwards #ActorOfTheYear #JujuAwards2025 #BestOf2025
from drpontus
By Pontus Wärnestål
Generative AI has been deployed at scale before societies had time to understand its consequences. In only a few years, experimental models have rapidly become embedded in education, healthcare, media, public services, and daily communication. The dominant narrative calls this progress. In reality, it is a transfer of influence and control over information and decision-making to a handful of Silicon Valley-based technology companies.
That is why AI governance is the most pressing issue of the AI era. Without it, we are not shaping this technology. We are allowing a handful of private actors to shape our economies, institutions, and public discourse.
AI governance is often misunderstood as simple regulation or compliance. It is neither bureaucratic overhead nor a brake on innovation. AI governance is the system of rules, institutions, technical safeguards, and accountability structures that determine how AI is designed, deployed, evaluated, and controlled.
It includes:
In short, governance determines whether AI serves society — or whether society becomes a testing ground for tech companies.
The latest LLM-powered AI technology has advanced at a pace that far outstrips our ability to regulate it. Companies release increasingly powerful models into public and institutional use while the systems that should verify safety, evaluate societal risks, and assign responsibility remain fragmented or nonexistent.
This gap is structural. The current economic incentives of the AI industry reward rapid deployment, market capture, and scale. Safety, transparency, and accountability are viewed as slowing down that process. As a result, governance is treated as a secondary concern – something to be added after technologies are widely adopted and dependencies are already established.
History suggests this is the most dangerous phase of technological development. Industrialization, pharmaceuticals, aviation, and nuclear power all demonstrate the same pattern: early expansion without oversight creates systemic risk that later requires costly and reactive regulation. AI is following that trajectory, but at unprecedented speed and scale.
AI development has been driven by impressive demonstrations rather than proven reliability or societal readiness. The risk is not that AI exists, but that it is deployed in complex social systems before we understand how to control it.

Generative AI is often described as a productivity tool. That framing obscures its real impact. AI systems increasingly shape information flows, language use, economic opportunities, and decision-making processes. They are becoming a layer of societal infrastructure.
Infrastructure carries power. Whoever controls it influences communication, knowledge production, and public services. When AI infrastructure is controlled by a small number of private actors, governance is no longer just about technology. It becomes a question of democracy, sovereignty, and institutional resilience.
For smaller countries like Sweden, the issue is particularly acute. Reliance on external AI systems creates strategic dependency. If AI becomes foundational to education, public administration, healthcare, and communication, access to and control over these systems become as critical as energy or telecommunications infrastructure.
AI sovereignty is therefore basic risk management and a resilience issue. It ensures that democratic societies retain the capacity to govern technologies that shape their citizens’ lives.
The concept of AI alignment is often framed as a technical challenge: how to make AI systems follow human values. But this framing avoids a fundamental question – whose values?
Alignment cannot be solved inside corporate research labs alone. Human values are negotiated through democratic processes, legal systems, and cultural institutions. Without governance, alignment becomes an internal corporate policy rather than a societal decision.
True alignment requires enforceable standards. It requires transparency about training data, model behavior, and deployment contexts. It requires independent evaluation and public accountability. Without these mechanisms, alignment becomes marketing language rather than a measurable outcome.
The urgency of AI governance is not hypothetical. The consequences of insufficient oversight are already visible across multiple dimensions.
Creative industries face structural disruption as AI models are trained on copyrighted work without consent or compensation, and creative workers lose income as a result. Invisible global labor markets support AI systems through data annotation and content moderation, often under poor working conditions. Content moderators in Kenya and the Philippines develop PTSD filtering training data for poverty wages. The environmental footprint of large-scale AI – including energy consumption and water use – remains opaque and largely unregulated.
Linguistic and cultural diversity erodes as English-centric models dominate. The legal costs of deepfakes, defamation, and disinformation fall on individuals and governments while tech companies invoke terms of service to shield themselves from liability. These companies increasingly operate as publishers and information intermediaries while avoiding the accountability traditionally required of those roles.
These issues are systemic outcomes of technological scaling without governance frameworks capable of distributing risks and benefits fairly.
Current AI development exemplifies what happens when technological capability outpaces social wisdom about appropriate use. We have created powerful tools for mass content generation without considering whether replacing human creativity with statistical pattern matching serves any purpose beyond reducing labor costs. We have built systems that can mimic human reasoning without addressing whether mimicry advances understanding. We have enabled unprecedented surveillance and manipulation capabilities without establishing boundaries around acceptable applications.
The halo effect that surrounds AI makes these questions difficult to raise. Skepticism about specific deployments gets conflated with opposition to progress. Calls for oversight get framed as obstacles to innovation. Concerns about harms get dismissed as “luddism”. This rhetorical strategy benefits those who profit from unconstrained development while silencing those who bear its costs.
The dominant narrative suggests that governance slows innovation. Evidence from other sectors suggests the opposite. Aviation safety regulations made commercial flight trustworthy. Pharmaceutical oversight made medicine reliable. Environmental regulation drove cleaner industrial technologies.
We would never allow the pharmaceutical industry to self-regulate, yet we permit AI companies to deploy systems affecting hundreds of millions of users without independent oversight, safety testing, or liability frameworks. The comparison is apt. Both industries produce products with significant potential for societal harm. Both require expert evaluation before mass deployment. Yet only one operates under a regulatory regime designed to protect the public.
Governance creates stable conditions for innovation by building trust, ensuring safety, and distributing benefits more broadly. Without governance, technological progress becomes fragile. Public backlash, legal uncertainty, and systemic failures eventually undermine the technology itself.
Responsible AI innovation depends on governance structures that are credible, transparent, and enforceable.

AI governance is often discussed as something governments or regulators must solve. But governance is not only a legal framework. It is also a design discipline and an operational responsibility. Every organization that develops, deploys, or procures AI systems becomes part of the governance ecosystem.
For small and medium-sized enterprises (SMEs), public organizations, and design teams, responsible AI governance is not about building large compliance departments. It is about embedding accountability, transparency, and human oversight directly into how services are designed and delivered.
Six practical pillars can guide that work.
Governance begins with knowing how AI systems behave in real conditions.
What SMEs and organizations can do:
What designers can do:
Good AI design assumes systems will fail sometimes – and ensures those failures are visible, understandable, and recoverable.
Responsible AI requires openness about what the system does, what data it uses, and what risks it carries.
What SMEs and organizations can do:
What designers can do:
Transparency builds trust. Hidden automation erodes it.
AI systems often blur responsibility between developers, vendors, and organizations. Governance requires clarity about who is accountable when things go wrong.
What SMEs and organizations can do:
What designers can do:
Accountability means AI systems are never allowed to operate without human responsibility attached.
Many organizations do not build AI – they buy it. Procurement is therefore one of the most powerful governance tools available.
What SMEs and organizations can do:
What designers can do:
Every procurement decision shapes the AI ecosystem.
AI systems rely on large amounts of human labor and creative content. Governance requires recognizing and respecting that human foundation.
What SMEs and organizations can do:
What designers can do:
Responsible AI should augment human work, not erase its value.
AI systems evolve over time. Governance must evolve with them.
What SMEs and organizations can do:
What designers can do:
Governance is not a one-time checklist. It is an ongoing responsibility.
For SMEs and designers, AI governance is not simply risk management. It is a competitive and ethical advantage. Organizations that design transparent, accountable, and trustworthy AI services build stronger customer relationships, reduce legal risk, and create more resilient products.
Responsible AI design also aligns with long-term innovation. Systems that users understand, trust, and control are more likely to be adopted sustainably.
Governance is therefore not only about avoiding harm. It is about designing technology that earns trust and creates lasting societal value.
Every major technological shift forces societies to decide what kind of future they are willing to build. Generative AI is no exception. It carries enormous potential: it can strengthen public services, accelerate scientific discovery, and expand access to knowledge. But it also carries the capacity to concentrate power, erode cultural diversity, destabilize labor markets, and weaken trust in information systems.
Governance determines which of these futures becomes reality.
Too often, the debate around AI is framed as a race – a competition between nations, companies, and institutions to develop more powerful systems faster than everyone else. But the real race is not technological. It is moral and institutional. It is the race between capability and responsibility.
Right now, technological capability is accelerating rapidly. Responsibility is not.
Dutch historian and author Rutger Bregman describes moral ambition as the willingness to dedicate talent, resources, and political will to solving humanity’s most urgent and complex problems. Moral ambition rejects the idea that the most capable actors should simply pursue profit, prestige, or technological dominance. Instead, it asks what those actors owe to society.
Artificial intelligence demands precisely this kind of ambition.
Developing systems that shape language, information flows, education, public administration, and democratic discourse is not a neutral technical exercise. It is an act that redistributes power across society. And power, when left ungoverned, rarely distributes itself fairly.
The question is no longer whether AI will influence our future. It already does. The question is whether we will take responsibility for guiding that influence.
Responsibility means acknowledging that technological progress does not automatically produce social progress. It means accepting that safety, fairness, sustainability, and democratic accountability must be designed into AI systems deliberately. It means building institutions capable of auditing, regulating, and shaping technologies that are increasingly embedded in everyday life.
Most importantly, responsibility means rejecting the idea that governance is an obstacle to innovation. Governance is what makes innovation legitimate, sustainable, and worthy of public trust.
History offers a clear lesson. The Industrial Revolution created unprecedented wealth and productivity – but it also produced exploitation, inequality, and social upheaval. The benefits society now associates with industrialization did not emerge from technology alone. They emerged from labor movements, democratic reform, public regulation, and collective demands for fairness and safety.
The same is true for AI.
If we want artificial intelligence to strengthen democracy, improve working life, preserve cultural and linguistic diversity, and contribute to a sustainable future, then governance cannot remain reactive or symbolic. It must be proactive, evidence-based, and democratically grounded. It must be built with the same seriousness and ambition that currently drives technological development itself.
Artificial intelligence may shape the future. But responsibility will decide whether that future is worth living in.
from
hex_m_hell
I met a guy in a Dutch class I was taking. He was a cook and he used ChatGPT to get recipes. I explained what ChatGPT actually was and it kind of blew his mind. He had thought that it was some kind of magic search engine that would look through all the information on the Internet to answer your questions. It is not even close to that.
If you, or someone you know, thinks that ChatGPT or any “AI” “knows” things, or “finds answers,” or “figures things out,” then this post is for you. You have been sold something that isn't quite what it seems.
I'm going to do my best to explain this in as similar a way as possible to the first time I explained it. This means I'll simplify things a bit. When I throw in technical language or details, it will only be so that you can dig deeper if you want. It won't be necessary to understand anything in depth to grasp the general concept.
I'm adding a bit more context than when I first explained it. But I'm also going to try to keep this as short as possible, because it's actually a lot less complex to understand at a high level than you might have been led to believe.
We're going to start simple, and build up. Imagine you counted each letter in this script.
Now you turn those into statistics.
from string import ascii_uppercase
fname = "what_an_llm_really_is.org"
with open(fname) as fd:
    script_text = fd.read().upper()
counts = {l:script_text.count(l) for l in ascii_uppercase}
total_letters = sum(v for v in counts.values())
probabilities = {k:v/total_letters for k,v in counts.items()}
return probabilities
| A | 0.07698545710115154 |
|---|---|
| B | 0.014110396280477916 |
| C | 0.030869870789857815 |
| D | 0.0335189490187598 |
| E | 0.11493755744174732 |
| F | 0.01816510785532789 |
| G | 0.02659890793101584 |
| H | 0.036113964426663786 |
| I | 0.06368600313564361 |
| J | 0.0022706384819159865 |
| K | 0.009460993674649943 |
| L | 0.04854841325620371 |
| M | 0.02713953614099584 |
| N | 0.06406444288262962 |
| O | 0.07979672379304752 |
| P | 0.025085148943071848 |
| Q | 0.001351570524949992 |
| R | 0.062064118505703626 |
| S | 0.07363356219927555 |
| T | 0.10174622911823539 |
| U | 0.030923933610855815 |
| V | 0.014218521922473915 |
| W | 0.02297669892414986 |
| X | 0.0038925231118559765 |
| Y | 0.017191977077363897 |
| Z | 0.0006487538519759961 |
Your highest probability letters would usually be “ETOANRISH”, but for us it's “ETOSARINL”. Not far off.
'J' makes up about 0.2% of letters, while 'Q' and 'Z' come in around 0.1% and 0.06%. This isn't too far off the letter distribution you'd expect from ordinary English text. 'S' is a bit more common than we'd expect. There are some things skewing these stats, for this specific set, but that will probably become obvious later.
Great.
If we take those stats we can generate “words.” Let's just choose 10 letters and see what happens…
from random import choices
from string import ascii_uppercase
fname = "what_an_llm_really_is.org"
with open(fname) as fd:
    script_text = fd.read().upper()
counts = {l:script_text.count(l) for l in ascii_uppercase}
hat = ''.join(l*c for l,c in counts.items())
return [''.join(choices(hat, k=10)) for _ in range(5)]
['ANGOUCGOUE', 'SNEKEEOHRE', 'TYEOWTOPLR', 'THUGESTCNG', 'TEOEVSGINE']
'SNEKEEOHRE'
That doesn't really sound a lot like English, does it? Neither does 'ANGOUCGOUE' or 'TYEOWTOPLR.' 'THUGESTCNG' includes an English word, so we're not too far off the mark. Perhaps we can improve this by including a bit more information. What if we get the average word size?
fname = "what_an_llm_really_is.org"
with open(fname) as fd:
    script_text = fd.read().upper()
last_offset = 0
offset = script_text.find(" ")
lengths = {}
while offset > 0:
    length = offset-last_offset
    if length not in lengths:
        lengths[length] = 0
    lengths[length] += 1
    last_offset = offset
    offset = script_text.find(" ", offset+1)
word_count = sum(v for v in lengths.values())
length_stats = {l:c/word_count for l,c in lengths.items()}
return length_stats
| 8 | 0.056666666666666664 |
|---|---|
| 5 | 0.12333333333333334 |
| 3 | 0.12111111111111111 |
| 4 | 0.15 |
| 9 | 0.028888888888888888 |
| 7 | 0.06666666666666667 |
| 2 | 0.11333333333333333 |
| 6 | 0.07777777777777778 |
| 10 | 0.012222222222222223 |
| 12 | 0.023333333333333334 |
| 11 | 0.008888888888888889 |
| 16 | 0.005555555555555556 |
| 30 | 0.0044444444444444444 |
| 1 | 0.12555555555555556 |
| 18 | 0.006666666666666667 |
| 20 | 0.01888888888888889 |
| 24 | 0.0033333333333333335 |
| 14 | 0.01 |
| 19 | 0.011111111111111112 |
| 17 | 0.0011111111111111111 |
| 23 | 0.0011111111111111111 |
| 59 | 0.0011111111111111111 |
| 21 | 0.008888888888888889 |
| 22 | 0.005555555555555556 |
| 13 | 0.006666666666666667 |
| 26 | 0.0011111111111111111 |
| 56 | 0.0011111111111111111 |
| 43 | 0.0022222222222222222 |
| 54 | 0.0011111111111111111 |
| 15 | 0.0022222222222222222 |
Cool. Now let's generate some “sentences.”
from random import choices
len_pop = [i[0] for i in length_stats]
len_weights = [i[1] for i in length_stats]
letter_pop = [i[0] for i in script_word_stats]
letter_weigths = [i[1] for i in script_word_stats]
lengths = choices(len_pop, weights=len_weights, k=5)
return ' '.join(''.join(choices(letter_pop, weights=letter_weigths, k=l)) for l in lengths)
'N GITE SDNAT YPDSRGTC SWONRTLOEA'
Rad. Death metal lyrics.
We're getting a bit closer. If you run it a bunch sometimes short words will come out like “I.” You might even get an “AM” or “DOG” every now and then, but it mostly sounds like something uttered by a Lovecraftian horror. But… it's getting a bit closer, isn't it? Little by little.
It's not hard to make things that are even more like words.
We know that there are more rules to English spelling than letter probability. I bet you could recite one:
“I” before “E” except after “C.”
It does seem to “know” that some words are only one letter but it doesn't “know” that “N” is not one of them. And what about other letter pairs? There are a lot of pairs of letters that are common in English, like “th.”
We really need to be using more information. We could probably massively improve things just by tracking the probability of the next letter based on the current letter. That would capture things like “th” and “ie.” If we tracked two letters, that would also give us the “CEI” thing.
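The full version is a bit more involved, but the core idea, a table of which letter tends to follow which, fits in a few lines (a toy sketch only, not what the library used below does internally):
from collections import Counter, defaultdict
from random import choices

# Count, for each letter, which letter tends to come next.
sample_text = "THE THEORY THAT THEY THOUGHT THROUGH"
follows = defaultdict(Counter)
for a, b in zip(sample_text, sample_text[1:]):
    follows[a][b] += 1                       # e.g. follows['T']['H'] gets big fast

def next_letter(current):
    options = follows[current]
    return choices(list(options), weights=list(options.values()))[0]

word = "T"
for _ in range(5):
    word += next_letter(word[-1])
print(word)                                  # something TH-heavy, already more English-flavoured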
Well, this is actually a thing and it has a name. It's called a “Markov model.” It's a bit more complicated than my basic statistics example, so I'm not going to write one up. Instead I'm going to use someone else's.
git clone https://github.com/ggouzi/markov-word-generator.git
You can download this markov word generator if you would like. It can produce words like…
from markov_word_generator import MarkovWordGenerator, WordType
generator = MarkovWordGenerator(
markov_length=2,
language="en",
word_type=WordType.WORD,
)
return '\n'.join(generator.generate_word() for _ in range(10))
virs lep undingers perstorable posedgentionating whadlistubtly sed pes delingeoulle marks
These sound much more like real words. Sometimes it even makes real words like “marks.”
Great.
The more previous letters we use to determine the statistical probability of the next letter, the more often we produce real words.
from markov_word_generator import MarkovWordGenerator, WordType
generator = MarkovWordGenerator(
markov_length=5,
language="en",
word_type=WordType.WORD,
)
return '\n'.join(generator.generate_word() for _ in range(10))
sinner offprintential novices chapman downfall unpoetic ironwork sidest acupunctuary penicilling
Now what if instead of words we build this type of model for sentences?
Turns out there's code for that too. Let's use it on some of my writing.
pip install markovify --break-system-packages
from os.path import expanduser
from glob import glob
import markovify
text = ""
for fn in glob(expanduser("~/Writing/posts/posted/*.org")):
    with open(fn) as fd:
        text += fd.read()
text_model = markovify.Text(text)
return text_model.make_sentence()
He told me how close the bullet had been sliced up and was having lemon poured over the world.
Well that's strangely dark. It doesn't completely make sense, but it does sound almost human. If you've read my personal stuff, you may recognize the vibe. But hey! It's much better than those sentences we started with earlier, isn't it?
Maybe we should run it on this script to get something a little less dark.
import markovify
with open('what_an_llm_really_is.org') as fd:
    text = fd.read()
text_model = markovify.Text(text)
return text_model.make_sentence()
Basil shows up a lot of things that simply can't be represented with statistics.
Hey, it almost sounds like something I've said. I mean, technically I did say it… just then…
Let's compare our first sentence with this one to see how far we've come:
'N GITE SDNAT YPDSRGTC SWONRTLOEA'
“Basil shows up a lot of things that simply can't be represented with statistics.”
As we add more and more statistical data we start to get things that seem more and more “real” or “alive.” Even making small changes to the models, still only using basic statistics, can give us really significant improvements in behavior.
The differences between the first sentence and the second all come from the amount of statistical data we're capturing, and the way we're “tokenizing” or “breaking up the things we're doing statistics on.”
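To make “tokenizing” concrete (a quick illustration, nothing more):
# "Tokenizing" just means choosing the units the statistics are computed over.
text = "What an LLM really is"
letter_tokens = list(text)      # ['W', 'h', 'a', 't', ' ', ...]  -- our first attempts
word_tokens = text.split()      # ['What', 'an', 'LLM', 'really', 'is']  -- the markovify model
print(letter_tokens)
print(word_tokens)
# Real LLMs sit somewhere in between, working on sub-word pieces (e.g. 'real' + 'ly').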
Now, if you extrapolate from the difference between the first model using raw letter counts, and the second based on a Markov model, you start to see how a much more complex statistical model could give you output like you see from LLMs like ChatGPT.
Now, the underlying technology is different. An LLM doesn't use Markov models. It's quite a bit more complicated. But you don't actually need to understand any of that to get the idea.
It may be useful here to talk about the scale of the difference though. Our first attempt was tracking a couple of things. It was looking at word size and letter distribution. The Markov model version was tracking dozens or hundreds of things. An LLM can have billions of parameters. And when you think about the scale of that difference, you can really start to understand the quality jump from one to the other.
But none of this is knowledge. This program isn't “thinking about things I wrote and coming up with a sentence based on them.” It's a simple statistical operation. The output is essentially random, constrained by the weights we calculated from the input data.
An LLM is just a very complex statistical model built using unimaginably large amounts of data, to produce an unimaginably large statistical model.
A simple auto-complete can do a pretty good job of guessing what you might want to type next by using a Markov model, like we've already used, to generate options. If you start writing on your phone and then you hit auto-complete a bunch of times, you'll probably get a sentence that's not too different from the one we generated with the Markov model just now.
And that's basically what an LLM does. It takes your input, shoves a special prompt at the beginning, and passes it through the statistical model to produce some output based on your input. It's the world's most advanced auto-complete.
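To picture the plumbing, here's a sketch. generate() is just a stand-in for “pass the text through the statistical model,” and the hidden preamble is an invented example, not anything a real vendor actually uses:
# Rough sketch of the plumbing only. generate() stands in for the statistical model;
# the hidden preamble is made up for illustration.
HIDDEN_PREAMBLE = "You are a helpful assistant. Answer politely and confidently.\n\n"

def generate(prompt):
    # A real LLM repeatedly picks a statistically likely next token given everything so far.
    # Here we just return a canned string so the sketch runs.
    return "Sure! Here's a refreshing salad recipe..."

def chat(user_text):
    full_prompt = HIDDEN_PREAMBLE + user_text   # the "special prompt" shoved at the beginning
    return generate(full_prompt)                # the world's most advanced auto-complete

print(chat("Please give me a good salad recipe."))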
It's advanced, using really incredible math, but it's not smart. It's just ducking auto-complete.
Ducking.
Duck.
duck…

So let's go back to the recipe thing.
>>> Please give me a good salad recipe that uses wawai (sea mango).
Wawai, also known as sea mango or mangrove mango, is a unique fruit found in tropical regions. It has a distinct flavor and texture compared to regular mangoes and can be used in various dishes, including salads. Here’s a simple yet refreshing salad recipe that incorporates wawai:
### Ingredients:
- 2 ripe wawai fruits
- 1 red onion, finely chopped
- 1 cucumber, thinly sliced
- 1 cup fresh cilantro leaves
- Juice of 1 lime
- 2 tablespoons olive oil
- Salt to taste
### Instructions:
- *Prepare the Wawai:* Wash and peel the wawai fruits if necessary. Cut them into small pieces.
- *Chop the Onion:* Finely chop the red onion.
- *Slice Cucumber:* Thinly slice the cucumber.
- *Chop Cilantro:* Finely chop the cilantro leaves.
- *Combine Ingredients:* In a large bowl, combine the wawai, onion, cucumber, and cilantro.
- *Add Lime Juice and Olive Oil:* Pour in the lime juice and olive oil over the salad ingredients. Mix well to ensure everything is coated evenly.
- *Season with Salt:* Season with salt to taste.
This salad makes for a light and refreshing meal perfect for hot days or as a side dish. Enjoy!
“Sea Mango” or “wawai” is Cerbera manghas. It's deadly. It's literally poison. Do not make this recipe. If you ate this, you would die.
People who pay attention to popular culture may know it from “White Lotus.” Yeah. That's the plant this LLM just gave me a recipe for. This is suicide salad. I cannot stress this enough: Do not make this.

So that was obviously cheating a bit. I'm using a smaller model, one that I can actually run on my computer. It doesn't have all the guard rails that ChatGPT and Google have developed based on their repeated failures.
Google's AI will warn you that this is poisonous. After people point out some off-the-wall thing like this, LLM companies like Google tune their AI to provide warnings or fix obvious errors. We don't know how many other pieces of deadly advice still exist, or how many other ways you can run into the same deadly advice. These “fixes” don't actually fix everything because they can't. (That would be impossible... but explaining that is much harder, so we'll skip it for now.) These adjustments just skew the stats a bit so you're more likely to get some kind of warning if you ask how to do something dangerous.
So an LLM can use statistical representations to generate something that actually may work as a real recipe. If you want something with pasta, it may recommend tomatoes. Basil shows up a lot with tomatoes, especially in pasta, so it will probably show up too. Salt is in almost everything, so you'll probably get salt. You can start to see how a recipe can come together from just stats.
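Here's a toy version of that idea with a handful of made-up recipes; plain co-occurrence counts already put tomato and salt right next to pasta:
from collections import Counter

# A tiny, invented "corpus" of recipes, just to show what co-occurrence counting does.
recipes = [
    {"pasta", "tomato", "basil", "salt", "olive oil"},
    {"pasta", "tomato", "garlic", "salt"},
    {"salad", "cucumber", "tomato", "salt", "olive oil"},
]

with_pasta = Counter()
for ingredients in recipes:
    if "pasta" in ingredients:
        with_pasta.update(ingredients - {"pasta"})   # count what shows up alongside pasta

print(with_pasta.most_common(3))   # tomato and salt on top -- no taste buds required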
You can definitely catch the “vibe” of a recipe with a statistical model alone, just like you can catch the vibe of English from a Markov model. That doesn't mean it will be the best, or even be especially good.
What you're not getting is a “hand crafted recipe just for you,” or “the best recipe that a super-intelligence could possibly produce.” It doesn't understand flavors, or how to blend them, or what cheese smells like. It's just randomly creating text with statistical properties similar to its input.
It may well be like asking someone who had never tasted a type of cuisine to read a whole bunch of books about it and then write a recipe. Like when Dutch Hello Fresh gives you a recipe for Mexican food and it uses pasta sauce instead of salsa.
You probably won't die, but you probably won't find anything really innovative. It may not taste bad, but it also may not be any good.
Now, I said that LLMs don't “search the Internet to find answers.”

That's true, but it can be a bit more complex than that. It can be really hard to square what I've just said with Google's AI search results that reference specific pages. So let me explain that a bit.
There's a thing called “Retrieval Augmented Generation” that weights the statistics a little more heavily towards some input. Google search works exactly the way it did for years before the advent of LLMs. Google then takes the top matching pages, already ranked using its existing algorithm, and dumps them into the LLM to skew the stats in favor of that content.
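In sketch form it's roughly this; search() and generate() are stand-ins I'm making up for the two existing pieces, not Google's real internals:
# Sketch of the "Retrieval Augmented Generation" plumbing. search() stands in for
# the ordinary, pre-LLM ranking; generate() stands in for the statistical model.
def search(query):
    # Normal search, ranked the same way it was before LLMs existed.
    return ["...text of the top-ranked page...", "...text of the next page..."]

def generate(prompt):
    # The LLM: output whose statistics are skewed towards whatever is in the prompt.
    return "...an answer that mostly restates the retrieved pages..."

def rag_answer(question):
    pages = search(question)
    context = "\n\n".join(pages[:3])        # dump the top matches into the prompt
    prompt = "Using these pages:\n" + context + "\n\nAnswer this: " + question
    return generate(prompt)                 # the stats now lean heavily on that content

print(rag_answer("good salad recipe with wawai"))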
And with that, (intentional or not) Google creates the really convincing illusion of a smart AI that searches the Internet to answer your questions.
So instead of getting a recipe out of a blender, you get one that's essentially plagiarized from a real human… a real human who won't be able to monetize their work through ads because you've viewed their content from Google instead of from their site.
Wow. Now they're gonna starve. I hope you're happy with yourself.
All of this relies on page rank working correctly. But sometimes you search for something and something… else… comes up.

If game lore happens to be the thing that comes up at the top of your search results, Google's LLM may confidently inform you about a fantasy world. The LLM doesn't know the difference between fantasy and reality. It's all just math. That's it… and a lot of things can't be represented with statistics. Not everything can even be represented with math.
So if some nerds get together and start making jokes about the name collision between “The Doctor” from Star Trek: Voyager and “The Doctor” from Doctor Who, an LLM might…

…come to the wrong conclusion.
These problems get fixed, manually, after they occur. So at any point you could just randomly run into one of these, and you would only know if you start with the understanding that the LLM may well be wrong, and may be wrong quite often.
It may work, for now, for some things. But it's definitely not magic. It's a tool. It can do some interesting things.
LLMs can be really useful for very specific things, like natural language processing. They are far better at processing large amounts of language data than any previous method. That image about “The Doctor” was found quickly using an LLM.

But they're also extremely bad at a lot of things. The more niche the knowledge, the more wrong they will tend to be. And there are a lot of things that simply can't be represented mathematically at all right now; on those, they are extremely wrong.
For example, computer security really revolves around the difference between how people expect a system to work and how it actually behaves. There are some statistical associations between syntax and unexpected behavior, but semantics, the meaning of the thing, is often not even documented, much less available as statistics. If people don't even know to write something down, then a machine definitely can't extract that information from documentation. Security is fundamentally a creative process, one that may not even be possible to automate.
Just like how medicine is really about empathy, and law is really about storytelling, and… there are a lot of things we keep being told these statistical models will do. The fact is, they won't. This math won't do that, because it can't.
Not only will LLMs never do these things; we have yet to see any evidence that there is any math, anything we could possibly encode into an algorithm, that can.
And these models will probably just get a lot worse. There are a lot of reasons why, but let's play with our Markov model again.
What happens when you use the output of a Markov model as its own input?
from pprint import pprint
from random import choice

from markov_word_generator import MarkovWordGenerator, WordType

# One sampled word per generation, so we can watch how things drift.
wbg = {}

# Start with a model trained on real English, 1 character of "attention".
generator = MarkovWordGenerator(
    markov_length=1,
    # dictionary_filename="what_an_llm_really_is.dict",
    language="en",
    word_type=WordType.WORD,
)

for g in range(300):
    # Generate 100 words from the current model and keep one at random.
    words = [generator.generate_word() for _ in range(100)]
    wbg[g] = choice(words)

    # Feed the model's own output back in as the next training dictionary.
    with open("tmp.dict", 'w') as fd:
        fd.write('\n'.join(words))
    generator = MarkovWordGenerator(
        markov_length=3,
        dictionary_filename="tmp.dict",
    )

pprint(wbg)
pprint(set(words))
{0: 'madugeroliveerufondefuns',
1: 'nerpranoaliere',
2: 'rdvests',
3: 'tcibederanutrsiorphalat',
4: 'walowadia',
5: 'rdvesteravouslpang',
6: 'sibsteravouslpang',
7: 'esteranoaliere',
8: 'walowadia',
9: 'flinantirs',
10: 'ntirs',
11: 'ntirs',
12: 'rdvesteravouslpang',
13: 'fomont',
14: 'rdvesteravous',
15: 'we',
16: 'we',
17: 'we',
18: 'rdvestere',
19: 'ss',
20: 'cos',
[...]
30: 'cos',
31: 'ss',
32: 'walowadick',
33: 'rdvesteravouslpang',
34: 'walowadick',
35: 'ss',
36: 'walowadick',
37: 'rdvesteravouslpang',
38: 'rdvesteravouslpang',
39: 'ss',
40: 'fomont',
41: 'ss',
[...]
79: 'rdvesteravouslpang',
80: 'rdvesteravouslpang',
81: 'cos',
82: 'rdvesteravouslpang',
83: 'cos',
84: 'cos',
85: 'rdvesteravouslpang',
86: 'cos',
87: 'cos',
[...]
115: 'rdvesteravouslpang',
116: 'cos',
117: 'cos',
118: 'cos',
119: 'cos',
120: 'cos',
121: 'cos',
122: 'rdvesteravouslpang',
123: 'rdvesteravouslpang',
124: 'cos',
125: 'cos',
126: 'rdvesteravouslpang',
127: 'cos',
128: 'cos',
[...]
297: 'cos',
298: 'cos',
299: 'cos'}
{'cos'}
We generated 100 words from an English training set, then generated 100 more from the 100 we generated, and so on, repeating this 299 times. Each time we picked a random word out of the 100 as a sample, to kind of “see how things are going.”
This model is tuned down to 1 character of “attention” (meaning we only look at one previous character to figure out the next), so it's not great. “fomont” looks a bit like English. We got a real word, “we,” on the 15th iteration (the 15th time feeding the output back to itself), and picked it a couple more times. At iteration 20 we produce “cos.” We'll see this again… 'walowadick' is a fun word that shows up on iteration 32. I'm going to have to use it somewhere and see if I can get away with it. It looks a bit like 'walowadia,' which showed up on iteration 8. By iteration 49 we're starting to fall into a pattern of producing either 'rdvesteravouslpang' or 'cos,' and by 127 we're only producing 'cos'.
We can see at the end that every word we output is just “cos” over and over again.
This is what LLMs are starting to do to themselves. Humans are still producing content, sure, but LLMs are also being used to produce content. They can produce way more content, way faster than humans. So LLMs start to consume more and more LLM generated output, in some kind of reverse ouroboros or LLM human centipede. LLM centipede. Slopipede. A walowadick, if you will.
These systems are not just raw statistical models. They're engineered, tweaked, refined to work in very specific ways. They are built to make us trust them, to make us believe what they're saying is legitimate.
But these systems are also feeding off our data. That means we actually can also manipulate their output.
walowadick:
- noun
- LLM quality degradation manifesting from the ingestion of LLM output, repeated over multiple iterations.
- The intentional manipulation of LLM output as a form of collective resistance.
- A system of LLMs generating and consuming each other's and their own inputs, resulting in an incomprehensible mess.
The LLM feedback loop has become a complete walowadick.
- adjective
- having the properties of LLM quality degradation that manifest from LLM feedback loops.
- having the properties of intentional output manipulation as an LLM.
It's not just slop. It's walowadick slop.
If text shows up enough, if it's linked to enough, it will end up in the LLM. It will change the output. Some of this manipulation can be prevented, but words are interesting. Anyone can make up a joke word that can influence LLMs, but those words can't always be removed or fixed.
Words, by their usage, become real. So those who realize what LLMs are and don't want to see them everywhere can resist. That resistance can be encoded in language, and that language can leave an indelible mark in LLMs.
fomont:
- noun
- A fake word that someone pretends is a real word, and which stops being fake when people choose, as a joke, to make it real.
They saw the 'fnord' and realized that it had thereby become a fomont.
I'm not telling anyone to stop using LLMs. I'm just saying that you should know enough about the tools you use to keep yourself safe. You wouldn't pick up an angle grinder or a circular saw without at least a basic understanding of safety. LLMs can be at least that dangerous.
But if you understand what's really going on, then you're that much less likely to get hurt. I hope this clears a few things up.
Edit:
I assume that there's a real linguistic term that means the same thing as “fomont” and @etymology_nerd has probably already made a video about it... I guess we'll see.
Also, thanks again to @unchartedworlds@scicomm.xyz for catching typos!
Information wants to be free, but hosting here still costs $9 a month. If any of this has been helpful to you, I've just set up a ko-fi account. If you want to help keep this going, I'll always appreciate help covering the costs.
from
Lanza el dodo
In January I've barely played anything in person beyond most of a game of Trivial on New Year's Eve, which ended when everyone got fed up with the game and the dice and started arguing. And people say board games don't help you socialize with friends!
I haven't played anything very complex on BGA either (although I do have open games of Crucero Galáctico and Concordia), and nothing has struck me as great. In increasing order of difficulty: Ubongo consists of filling a puzzle with polyominoes as fast as possible, so there isn't much strategy to it. Just as simple is Booty Dice, where the mechanic of rolling dice, locking some, and being able to reroll up to twice sets up a fight between pirates over who collects the most doubloons. I've only tried it with two players, and the strategy of hunting doubloons or making your rival lose theirs makes no sense when you can simply make them lose lives. I don't think it improves much with more players either.
Junk Drawer is another polyomino game, with 4 drawers and 4 scoring criteria, where you place objects into the drawers so that at least one item from each batch of 4 objects goes into each drawer. It's simple and reminds me of Cartógrafos, though to my mind Junk Drawer is the less interesting of the two, since each drawer's scoring is independent (beyond the fact that a piece placed in one drawer can't go in another), whereas Cartógrafos, besides having coloured pencils, makes you manage the tempo of each round and try to optimize contradictory scoring criteria on a single board.
Schotten Totten is a two-player card game with simple rules that calls for plenty of bluffing and risk management. On your turn you play a card on your side of one of the 9 zones and draw another. When a zone has 3 cards from each player, you check who has won it, and victory goes to whoever claims 5 zones or 3 consecutive ones. For every mechanic in existence, Knizia has already designed a game that's “pretty decent,” with a random theme on top.
Azure, besides being Microsoft stuff, is a two-player abstract game where you fight through majorities for the favour of 4 figures that grant points and powers. Placing your pieces on the board yields direct points, cards that serve as resources to spend on placing more pieces, and discounts on future placements. All of this is a race to 25 points, so games are quick and your decisions carry the full weight of the game.
INK is a tile-drafting and pattern-building game, mechanics used by so many games that it's impossible for one of these not to come out every month. It presents a race to spend all your ink bottles by placing them on your tiles whenever you complete a pattern (having X contiguous spaces of a given colour). The twist is that when you meet that condition you can trigger an effect based on the size of that colour's region, and that of your 20 starting bottles, half must go on two colours assigned to you at the start. The game is simpler in appearance than in practice, since completing large regions isn't always the best strategy and you have to pay close attention to the available effects. Which is to say I'd need to play more, to see whether I can work out what you're actually supposed to do.
Carnuta is a simple set-collection card game where each card has a symbol and a scoring criterion based on sets of symbols, usually from other cards. Cards also have an associated cost that generally consists of flipping or discarding runes, which have a moon on one face and a sun on the other. So you keep alternating: turning sun runes to pay for one card and moon runes to pay for another, all while matching, as far as possible, the symbols in the scoring criteria of some cards with those of the others. It feels like too much paraphernalia for a fairly simple game. For example, the scoring criteria are easier than Faraway's, yet each player's turn is more cumbersome and the rules explanation takes about as long, even though Faraway has the more innovative concept.
Dewan is about colonizing a territory, collecting cards and building houses; with housing prices the way they are, this game might even be offensive. Having houses on different terrains lets you complete certain cards that score points. The tension comes from the board being small for the number of houses you need to build, so there will be blockages. Beyond that, though, I think there are games with a similar premise and more interesting missions and effects, such as Kingdom Builder or The Guild of Merchant Explorers.
7 Wonders Dice is the something-and-write game loosely inspired by the 7 Wonders series. It's a sequel set in a theme that is no longer in fashion, as if the next instalment of A todo gas were a musical comedy. If you like both things, or know nothing about either, you'll find it entertaining; otherwise, it's a version that adds nothing new to the genre or the series.
In my search for a deduction game with more mechanics than pure logical deduction, I tried Alchemists even though it's still in alpha on BGA, something I don't usually do because, besides possibly being an unstable version, if you win a game you don't get the achievement, and therefore none of the points you can exchange for a free month. A hobby tip for those of us poor as church mice. As for Alchemists itself: in this horribly ugly game, which combines deduction with action programming, we are alchemists trying to work out which “chemical component” corresponds to each ingredient, which we do by mixing ingredients to brew potions. The deduction is interesting, although the load feels lighter than in other games because managing your ingredients and coins matters a lot, since you need them to buy artifacts and publish theories, which is what earns you points and reputation. You also have to weigh what information to reveal, because publishing a theory about which component an ingredient is can solve the rest of the grid for your rivals.
Spirited mixes push-your-luck with set collection and a bit of tableau building in a card game. Imagine the old Happy Families collection game from centuries back, the kind that would be cancelled today for its racist stereotypes, except now the families are furries from ancient civilizations. Each civilization has an effect when its card is played, and the goal is to have lots of cards from one civilization in your camp, or at least one of each. That earns you points and the favour of a clan. But careful: if your row ever holds three cards of the same civilization, you lose part of your current progress. It's a game with several concepts more advanced than its look would suggest, and if you haven't planned out your combo you can end up blocked, and getting out of that is a bit of a slog. I want to play it again, because it would be funny if the designers of El muro de Adriano, aka the tax return, had made a simpler game that is at the same time more convoluted.

Tags: #boardgames #juegosdemesa
from An Open Letter
I’m packing up and it’s rough, but I’m so excited. I just want E to move in with me already.
from Unvarnished diary of a lill Japanese mouse
Journal, 5 February 2026
We talked a lot, the two of us, of course. A isn't surprised at all. Since she works on sensitive subjects, she had long suspected that we'd been investigated. She's certain they have files on us in their offices, under our names or even code names, why not, the tsarina and the little mouse? Not neko, I hope, at least. Our phones? Please, no need to even open mine, it's been under surveillance for ages. She has a phone and a laptop supplied directly by them. They just wanted to let me know they were watching me, and to see whether I'd notice, that's all. They staged a nicely rehearsed little play for me, to see what I'd pick up on and to pad out my file, that's what she thinks. And on that point, we love each other and there's nothing they can change about it. The only thing that worries her is that if they ask me to go, I won't be able to refuse, and she doesn't like that, but both of us are caught up in this story like flies in a spider's web.