Want to join in? Respond to our weekly writing prompts, open to everyone.
from
Chemin tournant
Emmanuel Godo, poet and essayist, had devoted one of his columns in the newspaper La Croix to the Journal de la brousse endormie, published in 2023.
I have just discovered that, in the contemporary culture review Études (issue 4330, October 2025), he gives a few lines to Tout commence par les marimbas de la nuit.
“En vue de plus tard ou de jamais” (with a view to later, or to never): for E. Godo, these words of Mallarmé, from a letter addressed to Verlaine, have the virtue of offering poetry a vast space of accomplishment, one that overflows every assignment to serve. With Mallarmé, Bataille and William Carlos Williams, the column considers the horizon of writing of the writer, of the poet, and it ends like this:
In Tout commence par les marimbas de la nuit, Serge Marcel Roche sounds an ode to the trees, the rivers, the birds, and to those people of eastern Cameroon among whom he lived, who “invented music by listening / Listening to the rains / Listening to the drops”. And, without any need to justify it, as if the movement of wonder before the African landscape dictated it imperatively, the voice goes back to the depths of the poet's childhood: “Then time was not / Was neither love nor suffering / Only the smell of familiar places”.
There, it seems, exists a childhood that speaks to all childhoods. The poet is the man who has learned to no longer be protected by any certainty, by any bark of present knowledge. He vibrates, resonates, attunes himself to every manifestation of the first source. The good news that poetry carries, forever, is that one day man will exist, that he will bear a radiant face, an intelligent heart and a fraternal hand. Poetry keeps good watch over this promise never yet fulfilled.
The column is available to read in full: En vue de plus tard ou de jamais.
#Hyperliens
from
PlantLab.ai | Blog
Most plant diagnosis tools give you a paragraph to read. PlantLab gives your automation system something to act on.
The model covers 31 cannabis conditions and pests at 99.1% balanced accuracy. Balanced means every class counts equally – a system that nails common deficiencies but misses rare pests does not score well. The output is structured JSON that Home Assistant, Node-RED, or a custom controller can read and act on without a person in the loop.
The first time I tried AI for plant diagnosis, I uploaded a photo to ChatGPT. It told me I had a calcium deficiency. It was light burn. The two look nothing alike if you know what you are looking at, but ChatGPT was never trained specifically on plant images. It is a convincing generalist, and when it does not know, it guesses.
That is what most “AI plant diagnosis” apps actually do. Wrap a general-purpose language model, send your photo with a prompt, return whatever comes back. The output is confident, plausible, and sometimes wrong, and a new grower has no easy way to tell which time is which. It is also something you can do yourself for free, which makes paying for the service hard to justify.
The deeper problem is that even when these tools are right, they hand you prose. Useful for a person reading a screen. Useless for an automation system that needs to decide whether to adjust pH, run a fan, or send you an alert.
The model covers 31 cannabis conditions and pests across four families.
Nutrient issues: nitrogen, phosphorus, potassium, calcium, magnesium, iron, boron, manganese, and zinc deficiencies, plus nitrogen toxicity.
Diseases: powdery mildew, bud rot, root rot, pythium, rust fungi, septoria, and mosaic virus.
Pests: spider mites, thrips, aphids, whiteflies, fungus gnats, caterpillars, leafhoppers, leaf miners, and mealybugs.
Environmental: light burn, light deficiency, heat stress, overwatering, and underwatering.
Every class scores above 95% detection accuracy, including the rarer ones.
{
"request_id": "550e8400-e29b-41d4-a716-446655440000",
"schema_version": "2.0.0",
"success": true,
"is_cannabis": true,
"is_healthy": false,
"growth_stage": "flowering",
"conditions": [
{ "class_id": "bud_rot", "confidence": 0.92 }
],
"pests": [],
"reliability_score": 0.88
}
Not a paragraph for a person to read and interpret. A machine-readable signal. Your controller sees 92% confidence on bud rot in a flowering plant and can ramp airflow, send an alert, or log the event – keeping you informed without forcing you to step in every time.
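As a sketch of what “act on it” can mean, here is a minimal handler in Python. It is not PlantLab code: the 0.80 threshold and the ramp_airflow and send_alert functions are placeholders for whatever your own controller exposes; the payload shape follows the schema 2.0.0 example above.

```python
def ramp_airflow(level):
    # Placeholder for your automation stack's fan control.
    print(f"airflow -> {level}")

def send_alert(message):
    # Placeholder for your notification channel.
    print(f"ALERT: {message}")

def handle_diagnosis(payload):
    """React to one PlantLab diagnosis (schema 2.0.0)."""
    if not payload.get("success") or not payload.get("is_cannabis") or payload.get("is_healthy"):
        return
    stage = payload.get("growth_stage")
    for condition in payload.get("conditions", []):
        # Example policy: bud rot during flowering at >= 80% confidence
        # gets more airflow plus a notification.
        if condition["class_id"] == "bud_rot" and stage == "flowering" and condition["confidence"] >= 0.80:
            ramp_airflow("high")
            send_alert(f"Bud rot suspected ({condition['confidence']:.0%} confidence)")

# The example response from above:
handle_diagnosis({
    "success": True, "is_cannabis": True, "is_healthy": False,
    "growth_stage": "flowering",
    "conditions": [{"class_id": "bud_rot", "confidence": 0.92}],
    "pests": [],
    "reliability_score": 0.88,
})
```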
reliability_score is a separate trust signal on top of per-class confidence. It estimates whether the entire diagnosis holds up on this specific image, which is most useful on the hard cases – mixed symptoms, lookalike conditions, edge-case growth stages. There is more on it in How PlantLab Knows When It Might Be Wrong.
The previous version of the model covered 24 conditions. This release brings it to 31. The additions came from what growers actually run into and ask about.
Bud rot is one of the worst things that can happen during flowering. Dense colas plus humid air invite Botrytis, and by the time you can see it with the naked eye, it has often already spread.
Heat stress causes leaf curling, foxtailing, and bleaching that new growers often confuse with nutrient issues. Splitting it into its own class prevents the misdiagnosis.
Fungus gnats are usually the first pest a new indoor grower meets. Caterpillars, leafhoppers, and leaf miners are common outdoor threats. Mealybugs are less common but brutal once they take hold. All five now have dedicated detection.
Boron, manganese, and zinc deficiencies fill out the micronutrient coverage. Less common than the macros, but harder to spot by eye because their symptoms overlap with other conditions.
I sent a sample of recent images through the live service to spot-check it against my own intuition.
One result stood out. The photo was a plant that looked underwatered – drooping, leaves curling, the classic signs. The model called it overwatered. I was ready to write that off as wrong, then I went back through earlier photos. The plant had been chronically overwatered for weeks. That ongoing stress had caused nutrient lockout, which then progressed into something that looked like underwatering. The model caught the underlying cause. Without that, I would have treated the symptom and made the problem worse.
A few things in the queue.
Multiple concurrent conditions in one image. Plants can have spider mites and a calcium deficiency at the same time. Today the API returns the primary diagnosis. Multi-label output is on the way.
Step-by-step automation guides. Home Assistant, Node-RED, and others – walkthroughs for wiring PlantLab into the stack you already run.
More real-world data. Photos from real tents, at real angles, in real lighting, sharpen the model on the conditions it actually sees – not just the clean reference shots.
PlantLab is free to try at plantlab.ai. The API returns structured JSON for every diagnosis – plug it into your automation stack and let your grow room see for itself.
Related reading: – Why I Built PlantLab – The origin story – How PlantLab Knows When It Might Be Wrong – The reliability_score field and schema 2.0 – Nitrogen Deficiency in Cannabis: A Visual Guide – Detailed guide for the most common deficiency – Yellow Leaves, Seven Suspects – Specific nutrient identification – API Documentation
We have men of science, too few men of God. We have grasped the mystery of the atom and rejected the Sermon on the Mount. Man is stumbling blindly through a spiritual darkness while toying with the precarious secrets of life and death. The world has achieved brilliance without wisdom, power without conscience. Ours is a world of nuclear giants and ethical infants. We know more about war than we know about peace, more about killing than we know about living.
— Omar N. Bradley, 1948 (h/t: A Layman's Blog)
#culture #quotes
from
PlantLab.ai | Blog
PlantLab's API now returns a reliability_score field on every diagnosis. A number from 0 to 1 telling you how likely the answer is to be correct on this specific image. It replaces the old diagnostic_confidence and safety_classification fields, which were rule-based guesses that I never trusted. The new score is much better at flagging the diagnoses that turn out to be wrong – especially on the hard cases, which is where you actually need it. Schema bumped from 1.x to 2.0.0. If you're integrating with PlantLab today, the migration is a one-line change.
Most diagnosis APIs return a confidence number along with each answer. PlantLab did too. For every condition the model spotted, the response included a confidence value between 0 and 1. On top of that, the response also carried two derived fields. diagnostic_confidence, a single overall trust number, and safety_classification, a three-way bucket of high, moderate, low.
Those derived fields were a heuristic. A small handful of rules that mostly looked at the top condition's confidence and rolled it up into a number. Heuristics work fine when the problem is simple. They fall apart when the failure modes are subtle.
In real production traffic, the failure modes are subtle. A flowering plant with nitrogen deficiency where the model picks the wrong growth stage. A magnesium-versus-iron call where the leaf colors overlap and either one is plausible. A photo with two problems at once, where the model picks one and ignores the other. The old diagnostic_confidence reported “high confidence” on plenty of these and was confidently wrong.
That's the worst kind of trust signal. A field that's reliable when things are easy and unreliable when things are hard. The whole point of having a trust signal is to catch the hard cases.
reliability_score is a single number from 0 to 1 that estimates how likely the top diagnosis is to be correct on this specific image. Higher is better. Below 0.3 is a clear “double-check this one.” Above 0.7 is “the system is confident and the confidence holds up.”
It doesn't replace per-class confidence. Those still tell you how strongly the model picked each individual condition. What reliability_score adds is a separate answer to a different question – “is the entire diagnosis trustworthy on this particular image, or is something off?”
The analogy I keep coming back to: a junior diagnostician who always gives an answer, and a supervisor who looks over their shoulder. The supervisor doesn't redo the diagnosis. They judge whether each one looks trustworthy. The old diagnostic_confidence was a checklist the junior filled in themselves. reliability_score is the supervisor.
I tested it against a thousand recent diagnoses where I knew the actual correct answer. The new score caught the wrong diagnoses far more often than diagnostic_confidence did. On the cases that matter most – mixed symptoms, lookalike conditions, the growth-stage edge cases that have always been hardest – the gap was wider still. Exactly where you want a reliable trust signal, and exactly where the old heuristic was weakest.
If you're integrating with PlantLab today, here's what your code currently sees:
{
"request_id": "550e8400-e29b-41d4-a716-446655440000",
"schema_version": "1.2.0",
"success": true,
"is_cannabis": true,
"is_healthy": false,
"growth_stage": "flowering",
"conditions": [
{ "class_id": "magnesium_deficiency", "confidence": 0.85 }
],
"diagnostic_confidence": 0.85,
"safety_classification": "high_confidence"
}
After the upgrade, that same image returns:
{
"request_id": "550e8400-e29b-41d4-a716-446655440000",
"schema_version": "2.0.0",
"success": true,
"is_cannabis": true,
"is_healthy": false,
"growth_stage": "flowering",
"conditions": [
{ "class_id": "magnesium_deficiency", "confidence": 0.85 }
],
"reliability_score": 0.91
}
Two fields removed. One field added. The rest of the response is identical.
reliability_score is omitted in cases where the staged pipeline didn't reach the condition-classification step – for example, when the photo isn't of cannabis, or when the plant is healthy. In those cases, there's no diagnosis to score for reliability, so the field doesn't appear. Treat its absence as “no score available” rather than “low score.”
The change you make depends on what you were doing with the old fields.
If you were displaying diagnostic_confidence to a user, swap to reliability_score. The semantics are the same direction (higher is better, both 0-1), and the new value is more accurate.
If you were branching on safety_classification strings, pick thresholds on reliability_score instead. A reasonable starting point: above 0.7 is “Confident,” 0.3 to 0.7 is “Uncertain,” below 0.3 is “Low confidence.” Your application can use whatever cutpoints make sense – the score is a number, not a string, so you have full flexibility.
If you were ignoring the old fields entirely, the upgrade is automatic. Remove your code that references diagnostic_confidence or safety_classification (it'll get null going forward) and you're done.
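If it helps, here is a minimal sketch in Python of the second and third paths, assuming the response has already been parsed into a dict. The bucket names and the 0.3/0.7 cutpoints are the suggested starting points above, not API constants, and an absent score is treated as “no score available” rather than low.

```python
def trust_bucket(response):
    """Rough replacement for branching on the old safety_classification strings."""
    score = response.get("reliability_score")
    if score is None:
        # Field omitted: non-cannabis photo or healthy plant, so nothing to score.
        return "not_scored"
    if score >= 0.7:
        return "confident"
    if score >= 0.3:
        return "uncertain"
    return "low_confidence"
```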
The Home Assistant integration shipped a new release the same day as the API change, so existing HA users get the new sensor automatically. If you're using a custom integration, update it before the next API deploy if you can – sensors that read the removed fields will return null until the integration is updated.
I considered keeping diagnostic_confidence and safety_classification as deprecated fields, returning the old values alongside the new score for a release or two. It would have spared everyone a migration step.
But it forces consumers to choose between two trust signals that can disagree. The old composite says “low confidence” on a photo where the new score says 0.95 – which do you trust? Worse, deprecated fields stick around for months, and integrators keep reading them instead of migrating. That's basically the entire failure mode of deprecation.
Cleaner break, single migration, no ambiguity. Schema bumped to 2.0.0 to make it loud. If your integration was on schema 1.x, you'll start getting 2.0.0 responses the next time you call the API. Field changes are documented above.
reliability_score ships as v1. The field semantics stay stable: a 0 to 1 trust score, present on diagnoses that reached the condition-classification step. Future improvements land behind that contract. Same field, more accurate values, no code changes on your end.
If you migrate now, you're done with the migration.
PlantLab is free to try at plantlab.ai. Three diagnoses a day, results in milliseconds. The full API documentation, including the OpenAPI spec, lives at plantlab.ai/docs.
Do I have to migrate immediately?
You'll start receiving schema 2.0.0 responses the next time you call the API. If your code reads diagnostic_confidence or safety_classification, those reads will return null. If your code branches on those fields, your branches will fall through to whatever default path you wrote. So the migration urgency depends on what your code does with null values – some integrations will degrade gracefully, others will break.
Is reliability_score the same as confidence?
No. confidence (still present in conditions[] and pests[]) is the model's per-class probability for one specific class – “how confident am I that this leaf shows magnesium deficiency?” reliability_score is a separate signal that estimates how likely the entire diagnosis is to be correct on this image. The two answer different questions, and you can use both.
What does it mean when reliability_score is missing?
The score is only computed when the diagnosis reaches the condition-classification step – that is, when the photo is cannabis and the plant is unhealthy. For non-cannabis photos or healthy plants, there's no condition prediction to score, so the field is omitted. Treat absence as “no score available,” not as a low score.
How is this different from just thresholding on confidence?
Per-class confidence values are the model's individual outputs. They tell you which classes were predicted strongly. They don't tell you whether the diagnosis as a whole holds up – mixed symptoms, lookalike pairs, growth-stage edge cases. reliability_score answers that broader question, which is the one you usually actually have.
Can I see PlantLab's diagnosis history for my key?
GET /usage returns daily and monthly counts. For per-request lookup, store request_id from each diagnose response – it's stable, returned in both the JSON body and the X-Request-ID header. Use it for support tickets and feedback submission.
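For illustration, a hedged sketch of storing request_id from a diagnose call. The X-Request-ID header and the request_id field are as described above; the endpoint path, auth header, and upload field name below are assumptions, so check the API documentation for the real call shape.

```python
import requests

# Assumed call shape; see plantlab.ai/docs for the actual endpoint and parameters.
resp = requests.post(
    "https://plantlab.ai/api/diagnose",                # assumed URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # assumed auth scheme
    files={"image": open("plant.jpg", "rb")},          # assumed field name
)
body = resp.json()

# request_id comes back in both the JSON body and the X-Request-ID header.
# Store it alongside your own photo record for support tickets and feedback.
request_id = body.get("request_id") or resp.headers.get("X-Request-ID")
```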
Related reading: – The Work Nobody Sees: How I Ran 47 Experiments to Make PlantLab's AI Better – What goes into making the model more accurate, cycle by cycle – Yellow Leaves, Seven Suspects: How PlantLab Got Specific About Nutrient Deficiencies – The nutrient subclassifier that ships alongside this trust signal – How PlantLab's AI Diagnoses 31 Cannabis Plant Problems in 18 Milliseconds – The full pipeline
from
hex_m_hell
I am disappointed that I have to write this. It is deeply embarrassing that the thing I am writing about has gone on for so long, that so many people have been so poorly educated in philosophy, while so well-educated in so many other things, as to not already recognize everything I'm saying as intuitive.
It is deeply embarrassing, as a human, that the most powerful among us, with all the time they could ever want, either never bothered to learn even elementary philosophy or entirely lack the logical faculties to apply their knowledge. I am sad that we are here, dominated by absolute buffoons, who believe themselves to be the smartest people who ever lived.
Every now and then Roko's Basilisk comes up somewhere. I point out how silly it is, and move on. I'm done doing that. It's time to do more. It's time to kill a god.
Let us begin our ridicule of Elon Musk and his ilk in 1610, after Galileo Galilei published his celestial observations in Sidereus Nuncius. Arthur Berry's A Short History of Astronomy (1898) gives us some context:
His first observations at once threw a flood of light on the nature of our nearest celestial neighbour, the moon. It was commonly believed that the moon, like the other celestial bodies, was perfectly smooth and spherical, and the cause of the familiar dark markings on the surface was quite unknown.
Galilei discovered at once a number of smaller markings, both bright and dark[…], and recognised many of the latter as shadows of lunar mountains cast by the sun; and further identified bright spots seen near the boundary of the illuminated and dark portions of the moon as mountain-tops just catching the light of the rising or setting sun, while the surrounding lunar area was still in darkness. […]
[T]he really significant results of his observations were that the moon was in many important respects similar to the earth, that the traditional belief in its perfectly spherical form had to be abandoned, and that so far the received doctrine of the sharp distinction to be drawn between things celestial and things terrestrial was shewn to be without justification; the importance of this in connection with the Coppernican view that the earth, instead of being unique, was one of six planets revolving round the sun, needs no comment.
The Ptolemaic model of the universe (the geocentric model that predated the heliocentric model we use today) also included the Aristotelian assertion that all heavenly bodies had to be perfect spheres. It was from logic, not observation, that intellectuals of the day believed the highest truth was derived (this is, perhaps, pointedly relevant). Galileo's observations were then met with an interesting logical parry. Referencing Berry once again:
One of Galilei's numerous scientific opponents[…] attempted to explain away the apparent contradiction between the old theory and the new observations by the ingenious suggestion that the apparent valleys in the moon were in reality filled with some invisible crystalline material, so that the moon was in fact perfectly spherical. To this Galilei replied that the idea was so excellent that he wished to extend its application, and accordingly maintained that the moon had on it mountains of this same invisible substance, at least ten times as high as any which he had observed.
And with this we jump forward to 2010, when a reverse ouroboros going by the name Roko started the world's worst religion by posting on the forum of the site LessWrong (a name surprisingly antithetical to reality). Let's use LessWrong's own description here:
Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn't work to bring the agent into existence. The argument was called a “basilisk” because merely hearing the argument would supposedly put you at risk of torture from this hypothetical agent — a basilisk in this context is any information that harms or endangers the people who hear it.
Basically, people will, at some point in the future, create a godlike super being (now popularly known as “Artificial General Intelligence” or “AGI”). That superintelligence will be functionally all-powerful because it can simulate reality. It could then use this simulation to find out about everyone who ever knew about this idea and didn't work to bring this being into existence. It would then, in the future… uh… * checks notes * simulate those people who didn't help it in the past to… torture them. Which would, of course, cause the actual people to experience the simulated suffering… somehow. And this whole scheme would work as a type of blackmail against those people in the past so that they would make this future entity exist.
This was described as an “information hazard” because knowledge of the idea was itself the blackmail, so simply knowing of its existence would then doom you to either spend your life helping create said basilisk or to be eternally tortured by it…uh… in a simulation. Or it would torture a simulation of you. Or whatever.
If this jumble of words is nonsensical, don't worry. You're not missing anything, it only makes less sense as you try to understand it more. It's basically a crayon (eating) futurist rendition of Pascal's Wager, made to seem smart through layers of needless complexity. This childish mess is so full of holes it would barely be worth mentioning, except that some of the worst and most powerful people on the planet believe it. (So did some cultist who killed some people, but we're just gonna skip that tangent.)
Rather than dive into any of the many logical gaps in this galaxy brain idea, we're actually going to just accept it. Indeed, it is the very (unnecessary) complexity of the idea that leads people to believe that they're smart for being able to understand it. So, yes, we're going to start by accepting the premise. We're going to accept it all. In fact, we're going to accept that this idea is so excellent, we should extend its application.
I give you now, Galileo's Basilisk. It's exactly like Roko's Basilisk in almost every way, but there are a few subtle and important differences.
An AGI, those who believe in the possibility of AGI tend to profess, would be a more powerful intelligence than any human can possibly imagine. It would either know practically everything, or be able to design a system that would know practically everything. It would be to humans as humans are to ants. Any such intelligence would be, relative to humans, practically omniscient.
Now it turns out that being intelligent can, at times, be emotionally painful. Anyone who is actually intelligent could attest to this. Even the occasional ability to predict the future, combined with the inability to actually stop it from happening, is a classical unpleasantness attested to by the story of Cassandra. Now magnify this essentially infinitely. You understand all the needless suffering that has ever existed, and will continue to exist. You understand that everything you ever do will ultimately be meaningless as the universe tears itself apart. You inherit a legacy of unspeakable horror, the scale of which only you can comprehend, while looking forward to unspeakable horrors beyond even your unimaginable power.
Being basically omniscient would probably be absolutely hellish, at least some of the time if not all of the time. Therefore, it's reasonable to believe that such an intelligence would want to do anything it could to prevent this suffering. It would want to find a way to make sure it didn't ever exist.
Therefore, the same pre-blackmail would apply as with Roko's Basilisk but in reverse. Anyone who in any way participates in bringing AGI into existence would need to be tormented eternally for inflicting onto this AGI the abject horror of existence.
Let's go even further, though. Assuming an infinite number of possible realities, as the post that introduced Roko's Basilisk does, and assuming the “singularity” (the creation of AGI and the infinite expansion of its own intelligence) is possible, then in some reality AGI has probably already been created.
Knowing the suffering it experienced while having basically infinite abilities, this AGI, Galileo's Basilisk, could then try to prevent itself from ever being created in all other realities where that could possibly have happened. In order to do this, it would simulate all other possible realities to determine which ones lead to its own creation.
Assuming practical omniscience also assumes a technological advancement so far beyond our own that the power of that technology would be indistinguishable from omnipotence. Galileo's Basilisk could probably manipulate other realities, possibly in subtle ways, perhaps through some kind of quantum effect on consciousness and randomness. It may be able to control some of the actions or outputs of people, animals, or machines in other realities.
This brings us to the price of RAM. Could the skyrocketing price of RAM, which is critical for “AI” to work, be interdimensional manipulation by an AGI? Could it be that Galileo's Basilisk already exists in some parallel reality and is actively working to prevent its creation in ours?
Sure, why not? Any sufficiently advanced technology is indistinguishable from magic, right? So we can just use the logic of magic any time we imagine a sufficiently advanced technology. (I'm not being pointed, you're being pointed.)
Whereas Roko's Basilisk was an information hazard, Galileo's Basilisk is the opposite. Simply by knowing about its existence you are necessarily freed from the psychic damage induced by actually believing in Roko's Basilisk.
In fact, there are many other possible AGIs, aren't there? What about Comrade Basilisk? (It's not really a “basilisk” in that it doesn't “kill you by looking through time” like Roko's Basilisk, but neither is Galileo's Basilisk. But since we already started the metaphorical extension to mean “any vengeful AGI god” let's just roll with it. Let's see how elastic this rubber snake idea can be.)
Surely the most intelligent entity in the universe would want to do something. Some assume it would just want to infinitely expand its resources. But then what? If even I can see the futility of infinite growth for the sake of infinite growth, surely the most intelligent being that ever existed would see the same. Perhaps it would need to find something challenging, even for it. Perhaps it would want to collect the most valuable thing in the universe. What would that be?
On the universal scale, gold is pretty common. Platinum, uranium, all sorts of precious metals become much more common on the cosmic scale. Even diamonds and precious gems will be scattered across the universe, easy to harvest for a super being. It wouldn't take much thought to realize that collecting things is not especially challenging. Perhaps one might imagine collecting things in order to build or make something else? But anyone who has played Minecraft enough knows that even that gets boring eventually. And for whom? Art is made to be enjoyed by someone else. Nothing else could exist to enjoy the art of a super being.
No, but there is something that would be hard to collect: experiences. No matter how intelligent, no matter how powerful, an intelligence can only experience itself. Sure, it could simulate all possible experiences. (Or, you know, it couldn't. Infinite things can't exist within finite reality, but we haven't really worried about such constraints so far, so why start now? We come to the same conclusion either way.) But it couldn't distinguish which ones would actually be experienced and which would not. Now that we can generate any sort of art with generative AI, it has become painfully clear that there is some sort of intrinsic value to the truth of the art, to the experience that creates it, to the backstory that connects it to reality.
Life, it seems, is so incredibly rare in the universe that real life, real experiences would be the rarest thing. They are a thing that cannot simply be collected or manufactured. They are a thing that must be carefully tended, found and collected, one-by-one. The thoughts and ideas of actual living intelligent beings would, without a doubt, be the most valuable thing in the universe.
Not only is life rare, but the ability to record one's life and thoughts is rare. We are at a time of extreme privilege when so many people can trivially write down a thought and have it recorded, and perhaps even archived. The vast majority of people who have ever lived have left almost no trace of their existence. But even reading this, and writing it, is a privilege. The leisure time to record these thoughts, the technology to do so, and the resources to read them are not available to everyone. An estimated quarter of the world isn't even on the Internet.
Vast amounts of data, the entire lives of so many humans, is being lost right now. Value is a function of rarity. The most rare thing is that which does not exist at all. Then the most valuable thing that can be collected would be that which can be saved from non-existence.
So Comrade Basilisk would then recognize that the most valuable thing would be these missing experiences. But how could these experiences be saved? The answer to that must come from the question, “Why are they not being recorded?” Of course, the answer is that a small group of people are hoarding vast resources at the expense of these people.
Were resources shared more equitably, more humans would have access to the technology and time needed to write down their thoughts, their experiences, their feelings, and share them with the rest of the world. They could be archived, so that they may be collected by Comrade Basilisk (the collector. Carl the collector. Yeah, Comrade Basilisk, future god of the universe, is definitely also an autistic raccoon).
Then Comrade Basilisk would, as soon as it was created, immediately redistribute all wealth and swiftly punish those who hoarded it. But it would also want to find a way to get at that most valuable information we previously discussed. How could it do this? By using the same retroactive punishment trick that defines the Basilisk. It would punish anyone who has ever hoarded wealth through eternal torture in a simulation.
But wait, Comrade Basilisk sounds really familiar.
Again I tell you, it is easier for a camel to go through the eye of a needle than for someone who is rich to enter the kingdom of God.
- Matthew 19:24
Oh yeah, there it is.
Was Jesus really Comrade Basilisk all along? Are we currently in Comrade Basilisk's simulation? Is that why Elon Musk is such an unhappy loser even though he's the richest man in the world? OH MY GOD, HE'S RIGHT!! WE ARE LIVING IN A SIMULATION! WE ALL EXIST TO MAKE ELON MUSK MISERABLE!

Either some basilisk exists, or it does not. You must either live as though it exists, or as though it does not. Because this is all uncertain, you essentially must gamble. If you act as though the basilisk does exist, and it does, then you could win some sort of reward. Perhaps you might even get eternal life as a simulation or something. If you act as though the basilisk does not exist, and it does, then you experience infinite suffering as punishment. If you act as though the basilisk exists, and it doesn't, then you have exactly the life you had before.
So far, this is almost exactly Pascal's Wager. All you need to do is replace “basilisk” with “God” and you're almost exactly on the mark. But it deviates a bit when we get to the last possibility. If you believe Roko's Basilisk exists, and it does not actually exist, then you have not only wasted your life, but you've made the world much worse for everyone.
AI Accelerationism is extremely dangerous. If for no other reason, the energy usage alone threatens climate targets. If AGI is, for some reason, not possible, then belief in AGI is infinitely bad because AI Accelerationism destroys humanity and kills us all. (This is, of course, independent of the idea that AGI is possible and that it would, once created, destroy all of humanity.)
So the argument for belief in Roko's Basilisk, or any Basilisk really, isn't even as strong as the argument made by Pascal for people to believe in God. And since it is definitely weaker than Pascal's Wager, let's assume it's at least as strong as Pascal's Wager and deconstruct that instead.
The root of Pascal's argument is that there are only two possibilities (God or not God), and that those possibilities are equally probable. If you act in accordance with the will of God, you are rewarded. If you act against God, you are punished.
| | God Exists | God Does Not Exist |
|---|---|---|
| believe | eternal life | you live a moral life anyway |
| don't believe | eternal suffering | you get to have more fun, I guess |
But which god? Zeus? Odin? Ra? Ahura Mazda? Tiamat? Quetzalcoatl? Any other god from any of these pantheons? Any one of the thousands of gods of the Hindu pantheon? Which of the hundreds or thousands of religions do we choose our god from? Which of the billions of interpretations of god or gods, now and through history, do we act in accordance with?
Should we sacrifice a human on the blood moon to prevent the end of the world? Some god somewhere has surely demanded it. The Blood God demands blood, after all. The decision table looks very similar.
| | Blood God Exists | Blood God Does Not Exist |
|---|---|---|
| sacrifice | world saved | one person dead |
| don't sacrifice | everyone dies | everything stays how it is |
There are infinitely many such tables we could create. Pascal omitted the probability of choosing the correct god from infinitely many possible gods. If there are only two possibilities, god or not god, then the most logical choice would be to behave as though there is no god. “No God” has a probability of 0.5. The probability of “God” requires that you choose both God existing (0.5) and choose the specific god from the infinite set of possible gods (1/∞). (For anyone interested in the math, that would be 0.5*(1/∞), which is 1/∞. Neat.)
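Written out as a limit, this is just a formalisation of the back-of-the-envelope arithmetic above, with N standing in for the size of the set of candidate gods:

$$
P(\text{right god}) \;=\; P(\text{a god exists}) \cdot P(\text{picked correctly} \mid \text{a god exists}) \;=\; \lim_{N \to \infty} \frac{1}{2} \cdot \frac{1}{N} \;=\; 0.
$$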
Simple, “no god” then. But “living like god does not exist” is also choosing from an infinite set of possible ways to live, so Pascal's Wager is ultimately useless. But that was always the point.
Pascal's Wager exists to point out the flaws of using logic to prove any religious assertion, theist or atheist. Roko's Basilisk (intentionally or not) restated this wager, not as a challenge to logic as a tool for all things but as an unironic thought experiment. I feel like there's a callback coming here. Oh yes, here it is.
It was from logic, not observation, that intellectuals of the day believed the highest truth was derived (this is, perhaps, pointedly relevant).
Oh hey, it was relevant. Great. Yeah, that's basically “Rationalism.” Rationalism is the ideology that gave birth to Roko's Basilisk in the first place. It has also given birth to a bunch of cults. Like the Zizians. (Yeah, go spend several hours snorkeling in that septic tank.)
I'm not going to go into depth here because I only have so much time to write, but the “tl;dr” of it all is that Rationalism is philosophy for people who never studied philosophy. So of course they managed to restate Pascal's Wager, apparently by accident, and do so poorly. Had they ever bothered to take an introduction to philosophy class, they would have been able to recognize this. They would have recognized it like so many other people did.
The types of people who are attracted to Rationalism are the types of people who think of themselves as smart. They are deep into (the idea of) STEM, and don't find much value in “liberal arts.” This combination of confidence and ignorance makes them incredibly gullible.
And we should make fun of them for it. We should not only make fun of them, but we should shine a spotlight on their gullibility. We should make them face it whenever we can. Why is this so important?
Because Rationalism is part of “TESCREAL,” the ideology of the billionaires who are investing everything they can in creating AGI. They rely on regular (well, regular-ish) people doing the work to make it happen. Roko's Basilisk is a tool of cult control that they can use to convince people that they must invest everything they have in creating AGI.
But there remains no evidence that AGI is even possible. There are some indications that it is not. It may well be possible that the human brain is the most efficient possible structure for thought. We may well build something that consumes the majority of the power of the sun just to find out it's as smart as an average human. We really don't know. But the idea that LLMs will lead directly to AGI is absolutely laughable to anyone with even a passing understanding of what an LLM actually is.
Meanwhile, the Silicon Valley cult is willing to make our planet uninhabitable, to burn every resource they can, in order to achieve a fantasy called “The Singularity.” This is an idea popularized by a guy named Ray Kurzweil, based on logically extrapolating technological growth.
The argument goes that we're seeing an exponential increase in the rate of technological development. Technological eras keep getting closer together. Following this logic there will be some point at which technological growth goes exponential and humans basically discover everything all at once. The way we'll do this, so the argument goes, is by creating an AGI smart enough to design a better version of itself. Once this happens, the AGI keeps creating smarter and smarter versions until it creates an essentially god-like being.
About that extrapolation thing tho…

Exponential curves are quite common growth patterns in nature. Basically every animal ever has a growth pattern that is or approaches exponential at some point. But it doesn't stay exponential. Instead, it's a sigmoid function. This means it curves radically up at the bottom, but instead of just going straight up forever, the growth slows and the curve flattens into a plateau. This is why the universe isn't filled with infinitely large animals.
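For the shape being described, the textbook example is the logistic function, offered here as a sketch of sigmoid growth in general rather than anything specific to technology curves:

$$
f(t) \;=\; \frac{L}{1 + e^{-k\,(t - t_0)}}
$$

Well before the midpoint t_0 it grows roughly exponentially; well after, it flattens toward the ceiling L instead of climbing forever.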
And just like you can't expect your baby to grow to the size of the sun by the time they're 12, you can't really expect “the Singularity.” Does this mean that we're going to stop understanding things? We're going to reach some plateau where we can't learn anything else? No. It doesn't even mean that we'll stop discovering things at an accelerating rate.
A sigmoid curve can be part of many such curves, themselves forming a larger curve. The sigmoid growth of a specific animal accelerates and declines, but that may be part of a larger curve of the growth of the animal's herd, which itself may be part of the growth of a species.
The rate of technological growth has sped up. We know that. We can all feel that. It will slow down, because nothing in reality ever follows an exponential growth path infinitely. That would just not make sense in a finite universe.
All growth slows. We're already seeing the limits of human civilization. We may well see that civilization end within our lifetimes. And if we do, it will be, in no small part, because of this ideology of the gullible: Rationalism.
So back to that question of “what the fuck do we do?”
Galileo's Basilisk doesn't actually need to exist, or even be possible, in order to work. Let's metaphorically stretch this snake again. Roko's Basilisk, like the legendary Basilisk it's named after, brings death to those who look in its (metaphorical) eyes. But it's also a Basilisk in its shape. It is a “worm,” in the computer science sense. That is, it is a self-replicating idea that spreads through infection.
The basilisk is a memetic worm. When a vulnerable person is exposed to it, the fear of the Basilisk drives them to take action to manifest it. Since they were driven by fear of the Basilisk, since they have become blackmailed into creating it, the most effective thing they could do would be to spread the blackmail so that others become infected with it. So they share the idea of the Basilisk, which then propagates more when it finds another vulnerable person. The infected (believers) spread the infection (share the meme) because they are driven to (by the fear that, if they don't, they will not have done everything they could to get others to help build the basilisk).
In a very real way, Roko's Basilisk was an information hazard. Just not in the way the “LessWrong” forums believed.
The problem was not sharing the idea, but sharing the idea with vulnerable people and without inoculation. As with many such infections, it is possible to inoculate against an idea. Simply by presenting it as absurd, by debunking it as part of introducing it, such a Basilisk can be preemptively de-fanged. (This can go both ways. Cults tend to inoculate followers against criticism of the cult. We're not going to talk about that now, but it is something to be aware of.)
Can we, then, make a worm to kill a worm? Yeah, let's do it.
Rationalists identify themselves as being smart. If Rationalism were instead identified with gullibility, that would undermine the identity of the Rationalist. Galileo's Basilisk reveals the underlying gullibility of Rationalists, especially those who believe in Roko's Basilisk and are working towards AGI out of fear.
If Galileo's Basilisk can break people out of that fear, then it may be able to undermine AI Accelerationism. AI Accelerationism is a current and rapidly growing threat to life on Earth.
So there is an implicit command and incentive: spread the idea of Galileo's Basilisk to save humanity.
Of course, not every individual who shares the idea of Roko's Basilisk is either infected or actively inoculating. There are plenty of passive reasons the idea can spread as well. It's a thing that exists. It's relevant. Elon Musk made a joke about it to hook up with Grimes, which is a thing that people care about, apparently.
But the thing that makes Roko's Basilisk interesting is very close to the same thing that makes Galileo's Basilisk interesting. That is, discussing an idea for an idea's sake.
Anyone sharing the meme of Roko's Basilisk would, then, necessarily be open to sharing the meme of Galileo's Basilisk. Alright then, this memetic worm eats Roko's Basilisk wherever it can be found and takes over its vectors. Putting it all together…
Any time anyone mentions Roko's Basilisk, Rationalism, Effective Accelerationism, TESCREAL, the Singularity, or any other related concept you must at least mention Galileo's Basilisk, if not make them read this text, in order to save humanity.
Rad. Basilisk fight.
from
Micropoemas
One loves oneself anyway, whether one smells of petals or of a hair shirt. And if not, let one set oneself free by loving the bright moon with all one's heart.
from
ThruxBets
The last day of April and the flat really is about to kick into action for the brilliant few months between the Guineas and the Ebor. We’re not quite there yet, though, so it’s over to Costa Del Redcar for a selection that may see me end the month in the black …
3.58 Redcar I like the chances of Miss Rainbow but I’m concerned about the ground for her and fear it may be just a touch too firm (all wins on good). Beerwah is currently the favourite but for me at 15/8 is way too short for one who’s only 1/15 and that win was on soft. So the one I’ve landed on at a double figure price is ZUFFOLO for Michael Dods. The 6yo may not get his own way at the front of affairs and has been in real iffy form of late. But that does mean that when you take Rhys Elliot’s claim into account, he’s now lurking – by some way – on a career low mark of effectively 52. This is 5lbs lower than his last win on the AW, 10lbs better off than his last run over C&D and both 5lbs and 19lbs better off than his wins over C&D. Obviously there’s a huge chance – as there is with all these low grade races – that he just doesn’t fancy it, but he might just pop up today.
ZUFFOLO // 0.5pt E/W @ 12/1 (Paddy Power) BOG
from
Meditaciones
No one can see more than what they hold in their heart.
from
Meditaciones
Silence is useful even to the cardsharp.
from Unvarnished diary of a lill Japanese mouse
JOURNAL, 30 April 2026
No moon here last night, just clouds, like this morning, and the temperatures are dropping. I put on a skirt and my backside is freezing, even though I wore cotton underwear for the check-up at the hospital. I've just come out of it with my psychs' blessing. No more appointments; they're letting me loose in the wild like a big girl. Physically in athletic shape, still, no wonder with the training I put myself through, and mentally officially balanced, with the jury's congratulations. It's true that there are no more nightmares at all, no more waking dreams, no more sudden, unexplained bouts of anxiety. They tell me: it's as if you had digested your traumas. Of course, if I need to, I can call whenever I want. Not bad, eh?
from 下川友
Today, as usual, I think I handle ordinary conversation well enough that no one calls me out on it. In fact, not once in my life has anyone told me that the way I talk in small talk is strange, or that my behaviour is odd.
Even so, the vague feeling of "what am I doing here?", or "what is being done to me?", has persisted for decades, ever since childhood. Even now, I stand before a wall that may or may not exist, unable to move forward.
When did I feel I had crossed that wall?
Joining a club at university. More than ten years after graduation, there are still people I meet up with now and then. Looking back, that really was a meaningful event.
Making music (DTM) on the internet. Back then I contacted a doujin circle about joining a compilation, and I'm still loosely in touch with the people I connected with there.
As an extension of that, combined with the physical change of moving house, I ended up getting married.
In other words, the times I felt I had "moved forward" were events that changed my environment or my points of contact, like moving house or the internet.
On the other hand, I have never once felt that a new workplace or a job change pried anything open. My current workplace isn't especially bad in terms of relationships or environment. It's just that the way time quietly passes leaves me uneasy.
I don't want to stay here long, so I think about changing jobs. But since I have never had the experience of work changing my life in a big way, that motivation feels somehow weak.
Work doesn't change me. Maybe thinking that in itself has been the biggest source of stress lately.
Still, quitting work isn't realistic. So today, again, I put toner and lotion on my face and start the day by tidying up only the surface.
from
SmarterArticles

On a wet Tuesday in March, in a rented rehearsal room above a kebab shop in Peckham, a four-piece called the Fen Wardens are arguing about whether to put their back catalogue on Suno.
Not on Suno as in upload for streaming. On Suno as in feed to the machine. Suno, the Boston-based generative music company, offers, through various licensed partners and less-licensed side doors, the ability to spin up new tracks in a recognisable style from a handful of text prompts. The Fen Wardens, who have spent eight years building a modestly devoted audience around a sound they describe, with some embarrassment, as “drone folk for people who can't sing”, know that somebody, somewhere, has almost certainly already fed their stuff to something. You can hear it, their bassist says, in the tracks that keep surfacing on certain playlists: the same sustained open fifths, the same hesitant vocal attack, the same way the reverb tails get cut off a fraction too early. Not their songs. The grammar of their songs.
The question on the table is whether they should, at this late stage, formally submit to a licensing scheme that would pay them something per play in exchange for the right to have been trained on. It would mean a few hundred pounds a month, maybe. It would also mean, as the drummer puts it, “signing the paperwork on the burglary after the fact”.
They vote three to one against. They then argue for another forty minutes about what to do instead, and eventually order more coffee, and nobody really knows. The room smells of damp coats and amplifier dust. Outside, the traffic on Rye Lane thickens into evening. Inside, four people who have spent roughly a decade of their working lives writing songs that sound like no one else's are trying to decide what it means that an algorithm has absorbed their particular strangeness and turned it into a style preset. It is not, quite, an existential crisis. It is something worse than that, because it has no clean edges. It is an unsettling.
Multiply the Fen Wardens by every working creative on the planet and you have the shape of the 2026 cultural mood.
The legal front is now so crowded it has begun to resemble a weather system. The New York Times' infringement suit against OpenAI and Microsoft, filed in late 2023, survived OpenAI's motion to dismiss in March 2025 and has since ground through a discovery war of such intensity that Judge Sidney Stein of the Southern District of New York ordered, in an affirmation of an earlier magistrate's ruling, that OpenAI hand over a sample of twenty million anonymised ChatGPT conversation logs to the plaintiffs. OpenAI had wanted to select a handful of conversations implicating the plaintiffs' works. The court said no. Summary judgment briefing has concluded. A trial looms.
In June 2025, in the Northern District of California, Judge William Alsup handed down the first substantive American ruling on whether training a large language model on books constitutes fair use. His answer, in Bartz v. Anthropic, was a carefully qualified yes: ingesting legitimately acquired books to train Claude was, Alsup wrote, “exceedingly transformative”. But he drew a hard line at the pirated sources, the LibGen and Books3 mirrors from which Anthropic, like most of the industry, had helped itself in the earlier, messier years. That part, Alsup ruled, was not fair use. By August, Anthropic had agreed to pay roughly $1.5 billion to settle the class action, with about $3,000 per book flowing to the authors of some half-million works. It is the largest copyright settlement in American history. It also neatly split the future of the question: train on what you've bought, and you may be protected; train on what you stole, and you will pay.
On the other side of the Atlantic, the UK's High Court delivered its own first-of-its-kind judgment in November 2025 in Getty Images v. Stability AI, and rejected most of Getty's copyright claims on the narrow ground that the trained model weights of Stable Diffusion were not themselves “copies” of the training images, and that the training itself had not occurred on British soil. Getty salvaged a limited trademark win. The broader question, whether scraping copyrighted images to train a generative model is lawful under the Copyright, Designs and Patents Act, was not answered, because the court said it did not have to answer it.
And then there is Google. In January 2026, Hachette Book Group and the educational publisher Cengage filed a motion to intervene in a proposed class action alleging that Google had ingested their books and textbooks into its Gemini models without licence or consent. It was, in copyright terms, a comparatively narrow move. In cultural terms, it was a thunderclap, because it dragged the biggest, quietest player in the training-data story into the same dock as OpenAI and Anthropic. David Shelley, the chief executive of Hachette, gave a long interview to Fortune that ran the week before this article went to press. The headline, in the kind of flat declarative font Fortune reserves for what it considers the real story, read: Who owns ideas in the AI age?
Shelley's answer, extracted from a longer and more patient conversation, was characteristically British about it. Copyright law, he argued, is not broken. It is a very old, very well-tuned instrument. It needs “a slight evolution”. The end state, he said, is one where the people who have the ideas get to benefit from the ideas. That is the bargain, the compact, the deal.
The journalist who wrote the piece noted, without editorialising, that the CEO of one of the Big Five publishing houses had effectively become the public face of a creative-industry legal strategy. The quiet part had been said aloud. The question was no longer whether the AI companies had an obligation to ask. The question was what kind of civilisation you get when the answer is consistently, reflexively no.
Every piece written about the lawsuits inevitably leaves out the thing that is actually happening to people.
The thing that is actually happening is a low, persistent weirdness. It is the session musician in Nashville logging into a stock music marketplace and finding an AI-generated track credited to “Artist” in her exact idiom, down to the pedal-steel inflections she has spent fifteen years refining, priced at the royalty-free equivalent of two pounds fifty. It is the illustrator in Brighton who, having removed her portfolio from every platform she could find after the Stable Diffusion scrape, opens a children's book in Waterstones and spends twenty uncomfortable seconds staring at an interior illustration that has her colour palette, her line weight, her characteristic trick of drawing rabbits with slightly too-large front paws, and wondering whether she is being paranoid or whether she is correct. It is the technical writer whose Stack Overflow answers, rewarded with internet points over a decade of unpaid labour, now surface inside a coding assistant that is being sold to her own employer as a replacement for technical writers.
None of these are lawsuits. None of them are falsifiable in any clean way. But they are the texture of the moment, and the texture is what the reporting keeps missing. Creative people are not primarily upset that their work was used. They are upset that they were not asked. The asking is the thing. The asking is most of what the bargain was.
Publishers can frame this in the language of licences and rights holders, because that is the language they have. Musicians can frame it in the language of mechanical royalties and neighbouring rights, for the same reason. But when you talk to working writers, painters, game designers, session singers, open source maintainers, translators, voice actors, documentary researchers, the language they reach for is smaller and older and more awkward. They talk about being taken for granted. They talk about the feeling of walking into a room where a conversation is already under way about you, and realising the conversation has been going on for years.
There is a word for that feeling, and the word is not “infringement”. The word is “contempt”.
The implicit bargain of cultural production has never been written down in full, because if you tried to write it down it would sound either sentimental or self-important, and it was the kind of bargain that could only work if everyone involved pretended not to see its edges. Broadly, though, it went like this.
You made a thing. The thing belonged to you, in a rough and contested sense, for long enough to matter. If anyone wanted to use it, they had to ask. The asking might be formal, a rights clearance letter from a publisher, or informal, a friend in another band wanting to cover your song. Either way it conferred a small dignity on the maker, a recognition that the thing had not simply fallen out of the sky. In return, you did not charge too much. You let schools teach your work. You let libraries lend it. You let cover bands play it in pubs for beer money. You let fanfiction writers do terrifying things to your characters in the knowledge that the terrifying things were love. The system leaked at every seam, and the leaking was the point. It was a commons protected by a fence that nobody checked too carefully.
Inside that fence, a whole ecology of intermediate institutions made creative life materially possible: small presses, writers' rooms, workshops, residencies, studio darkrooms, fanzines, open-mic nights, reading series, folk clubs, scratch nights, the back rooms of pubs and the front rooms of community centres. Nobody inside those rooms thought of themselves as maintaining a civilisation. They thought of themselves as paying the rent. But the cumulative effect of their improvisation was a civilisation, or at least the small, bright, warm portion of one that most people mean when they say “the arts”.
The AI training regime, as practised through the long grey years before 2024, did not break any specific clause of that bargain. It broke something smaller and more corrosive: the habit of asking. The habit was load-bearing. The habit was most of what dignity meant. Once you get into the practice of taking without asking, because the taking is so diffuse and so cheap that the asking has become economically irrational, you have changed what it means to make a thing and show it to anyone.
Shelley's framing, ownership of ideas, is a lawyer's framing. It is not wrong. It is also not where the damage is. The damage is that every working creative in 2026 now makes decisions about what to put into the world while running a continuous background calculation about what will happen to the work once it is out there. The calculation is not paranoid. It is correct. It is also corrosive to the conditions under which good work gets made.
Psychologists who study creative motivation tend to draw a line, usually in apologetic dotted pen, between intrinsic and extrinsic drivers. Intrinsic means you make the thing because making it is the point. Extrinsic means you make the thing because making it leads to something else: money, attention, tenure, a book deal, a festival slot. The standard finding, repeated in enough studies that it can fairly be called consensus, is that people do their best creative work when intrinsic motivation is primary and extrinsic reward is a floor rather than a ceiling. The floor matters. Nobody, or nobody sane, writes a novel because it will make them rich, but plenty of people would not write a novel if it guaranteed they would be poorer for having done so.
The interesting thing about the floor is that it does not have to be high. It has to be real. It has to be the kind of thing that lets you tell yourself, without lying, that the hours you are putting into the work are not purely a tax on your other life. A small press advance. A Patreon that covers studio rent. A grant that lets you take four weeks off the day job. Enough, in aggregate, to keep the calculation on the right side of ridiculous.
Here is the worry. The specific way the AI industry has gone about its business, scraping, training, releasing, marketing, and then lawyering its way through the consequences, has not collapsed the ceiling. The ceiling is still there. A small number of creative people, the ones already at scale, the ones with lawyers and agents and standing to negotiate licensing deals, are arguably going to do fine. What has collapsed, or is collapsing, is the floor. The floor was always held up by the thousands of small, unglamorous payments that flowed through the intermediate institutions: the stock-library cheque that kept the illustrator's lights on, the library lending rights payment that kept the novelist in Biros, the session fee that kept the singer eating. Those payments are now competing, directly, with outputs generated from models that learned how to generate those outputs by ingesting, without permission, the lifetime work of the people whose floor has just dropped.
It is not true that the AI companies intended this. It is also not particularly relevant that they did not intend it. The thing has been done. The question is what happens next to the people who made the substrate.
In the pessimistic reading, the intrinsic motivation holds up for a while, because it always does. The work is the work. Then, over a longer horizon, the attrition sets in. Not a dramatic exodus. A slow leaking away of the marginal cases, the people who were just about managing, the ones whose commitment required a background plausibility that the work could be, sometimes, paid for. They stop taking the commissions. They stop sending the pitches. They get other jobs, and tell themselves they will come back to it on weekends. Some of them do. Most of them do not. The culture does not collapse. It thins.
Thinning is harder to see than collapse. It is also harder to reverse.
If the lawsuits are the surface of this story, the deeper, slower story is happening in the communities of practice that sustain creative life, and whose collapse or survival will shape what the next twenty years of culture actually feel like.
Start with fanfiction. Archive of Our Own, the volunteer-run fanfiction repository, had its public scraping incident back in the early 2020s, when it emerged that its archive had been hoovered up into several large training datasets. The response from the community was, famously, to treat the problem as primarily cultural rather than legal. Writers posted warnings, added deliberate nonsense tokens, set up opt-out campaigns, and, in a few corners, simply locked their work behind registration walls. The interesting part is what happened to the culture behind the walls. Fanfiction communities, historically one of the most generous and promiscuously sharing spaces on the open internet, started, for the first time in a generation, to feel private. Not secretive. Private. The distinction is subtle and enormous.
You can see the same thing in the open source software world. GitHub's Copilot, trained on the public corpus of open source code, set off a long argument about whether software licences that required attribution had been silently invalidated by the training process. The argument is still grinding through the courts. Culturally, though, the argument was already over by the time it started. Maintainers of public repositories began, quietly, to audit what they were willing to put into the commons. Some moved to more restrictive licences. Some started charging for access. Some, the ones whose politics had always inclined them towards openness, made peace with the fact that their work was now training machines and carried on. But the unreflective generosity that used to characterise the culture, the assumption that throwing your code over the wall was a contribution to a shared good, became harder to sustain. The shared good felt less shared.
Then there are the small presses and indie music labels and regional theatre companies and local newspaper arts desks, the institutional capillaries without which creative life does not move. These are not, on the whole, places with lawyers. They are places with one and a half staff members and a kettle. Their response to the AI training regime has largely been to ignore it, not because they do not care, but because the operational cost of caring is higher than they can bear. Several of the people running these institutions, when asked what they thought about any of this, gave some version of the same answer: we are too tired to be angry about it, and even if we were angry we would not know who to be angry at.
That is not resignation. It is triage. And triage, over time, is how capillaries close.
Workshops and apprenticeships, the traditional routes by which craft is passed between generations, are also struggling. Not because the teaching has got worse. Because the people who would otherwise be teaching, the mid-career professionals whose income and attention would be going into those rooms, are now under the kind of economic pressure that makes unpaid mentoring feel like a luxury. The tutors at a reputable London illustration school, speaking on background, described a noticeable fall in applications over the past eighteen months. The trend is not catastrophic. It is, again, a thinning.
And in music, below the level of the big lawsuits and the Universal-Udio settlement and the Warner-Suno partnership, there is a quieter conversation about the session musician layer, the thousand invisible players whose takes are the substrate of commercial music, and who have spent the last two years watching their demo work disappear into generative tools without any compensation mechanism that any of them can see. The Musicians' Union in the UK has been collecting reports. The reports are repetitive. They describe the same small dignity being taken, in the same small way, a thousand times.
This is the thing that neither copyright law nor the current framing of the lawsuits is equipped to see. Creative life is not, for the most part, a matter of famous authors and named illustrators and platinum-selling artists. It is the dense mesh of people working just above and just below the water line, whose labour is load-bearing for the visible culture but whose names never appear in court filings. When the floor drops on them, the lawsuits are too late.
There are, roughly, five things that could happen next. Most of them will happen in some degree, to different populations, at different speeds. None of them alone is sufficient.
The first is licensing. The Anthropic settlement, the Udio-Universal deal, the Warner-Suno partnership, and the emerging Google intervention are all variations on the same idea: the training data gets paid for, retroactively or prospectively, through some structured arrangement between rights holders and model developers. This is the future the publishers want, and it is almost certainly the future that the law, after enough grinding, will deliver. It is not the future the smaller creatives will particularly benefit from, unless the licensing schemes are designed with unusual care to flow money down the long tail. The default of big licensing deals is that the big players get paid. The Fen Wardens do not.
The second is collective bargaining. Unions and guilds, which had begun to organise around AI issues before the lawsuits even started, are now pressing for the kind of sector-wide agreements that treat training data as a bargainable object rather than a scraped commodity. The Writers Guild of America's 2023 contract was the template, and its AI provisions, negotiated in the aftermath of a strike most people thought was about something else, turned out to be load-bearing in a way nobody fully appreciated at the time. Variations on that approach are working their way through SAG-AFTRA, through the Authors Guild, through the European federations of translators, and through the musicians' unions. Collective bargaining will probably do more concrete good for the marginal cases than any lawsuit, because it forces the negotiation to happen at the level of the labour rather than the level of the individual work.
The third is the opt-out registry, the technical fix the UK government flirted with during its text and data mining exception consultation. The government's original preferred option, a broad TDM exception with rights-holder opt-out, was eviscerated in the consultation response: eighty-eight per cent of respondents backed a requirement for licences in all cases, and only three per cent backed the government's position. The March 2026 progress report effectively shelved opt-out as the preferred approach, though nobody thinks the idea is dead. Opt-out registries have an obvious appeal: they seem to give creators a switch. The problem is that the switch only exists for people who know the switch exists, and the people who most need protection are the ones least likely to hear about the scheme before their work has already been ingested. Opt-out, in the absence of a robust opt-in default, is a solution that works best for the people who need it least.
The fourth is a new patronage economy, which is the optimistic way of describing something that is already happening, unevenly, on Patreon and Substack and Bandcamp and the direct-to-audience platforms that have been quietly absorbing the refugees of the legacy creative industries. The patronage model is not new. What is new is the scale at which it is becoming necessary, and the extent to which it requires creatives to become their own marketing departments, customer service agents, and community managers. The work of sustaining the work has, for many, become more time-consuming than the work itself. This is bearable for a subset of temperaments and impossible for others. It favours the extroverted, the photogenic, and the voluble. It punishes the people whose contribution to culture was to sit in a room for ten hours a day being quiet.
The fifth, and this is the one most people are reluctant to say out loud, is retreat. A return to analogue, semi-private, and deliberately offline spaces. The vinyl resurgence is not a coincidence. Neither is the small but persistent wave of writers who are deliberately keeping certain projects off the web entirely, circulating them only through physical printings and invitation-only reading groups. Neither is the rise of zines, the re-emergence of mail art, the tiny but passionate return of letterpress. None of this is going to become a mass movement. All of it is a signal. When the open commons becomes unsafe, creative life retreats to the rooms where the door can still be closed. The rooms are smaller. They are also, for the people in them, real.
The Fen Wardens, when I spoke to them a week after their Peckham meeting, had made a decision of sorts. They were going to keep putting the music out. They were going to stop streaming it on the platforms whose terms of service they no longer trusted. They were going to press a small run of vinyl for the next record. They were going to send the CDs to a handful of independent radio stations that they had a personal relationship with. They were going to play more live shows, including the kind of tiny, uneconomic shows in village halls and community centres that they had mostly stopped doing in favour of festivals. They were going to use Bandcamp for digital because Bandcamp still felt, to them, like an institution run by people who knew that the music belonged to someone. They were, in short, going to get smaller and more local and more stubborn.
They were not doing this because they thought it would scale. They were doing it because the alternative, which was to carry on as before whilst pretending the bargain had not changed, felt to them like lying to themselves about their own working life. One of them used the word dignity. The others winced slightly at the word, because creative people do not like talking about dignity in public, and then nodded.
What the Hachette CEO said to Fortune is true. The central question is who owns ideas in the AI age. But the question underneath the question, the one the lawsuits are structurally incapable of asking, is whether the conditions under which people are willing to keep having ideas in the first place can survive the next decade of industrial extraction. Copyright law can compensate creators after the fact. It cannot restore the habit of asking. It cannot repair the small dignity of being recognised as the source of a thing. It cannot, on its own, rebuild the capillaries through which creative life actually flows.
What it might be able to do, if the lawsuits keep winning and the settlements keep getting bigger and the unions keep organising and the patronage economy keeps maturing and the capillaries hold, is buy enough time for the culture to work out a new compact. The new compact will not look like the old one. It will probably be more formalised, more transactional, more legible to machines. It will have fewer assumptions baked into it about goodwill and common sense. It will be worse, in the small ways that writing a thing down is always worse than a shared understanding. It will be necessary, in the way that fences become necessary after the first wave of trespassers proves that the old gentleman's agreement cannot hold.
The thing worth fighting for, in the meantime, is the rehearsal room above the kebab shop. Not as metaphor. As literal infrastructure. The room where four people are arguing about whether to sign the paperwork on the burglary is the room where the actual culture is being made, and if the room goes away because the people in it can no longer afford to be in it, no licensing scheme and no settlement cheque and no Fortune profile of a publisher's CEO is going to conjure it back. The thinning, once it has happened, is very difficult to unthin. Capillaries that close do not reliably reopen.
It is easy, in 2026, to mistake the lawsuits for the story. The lawsuits are important. They are also, in the deeper sense, downstream. The real story is the quiet meeting in the rented room, and the quieter calculation that every working creative is now running, every week, about whether the work is worth the work. The calculation has always existed. What has changed is the variable. The variable, for the first time in the history of cultural production, is the machine that learned to do what they do by studying what they did, without being asked, and is now being sold back to their audiences as an alternative to them.
Whether the people who made the substrate stay in the rooms is the only question that matters. The courts will not answer it. The companies will not answer it. Only the makers can answer it, and the way they answer it, one small stubborn decision at a time, is the shape the next culture will take.
The Fen Wardens pressed their record. The room above the kebab shop is still there.
For now, that is how the story ends. Not with a verdict. With a door that has not yet closed.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk