from EpicMind

Illustration of an ancient philosopher in a toga, sitting exhausted at a modern office workstation in front of a computer, surrounded by empty office chairs and urban architecture.

Friends of wisdom! Stress is often understood today as an illness: something to be avoided, managed, or treated. But a closer look shows that stress is neither unusual nor negative per se. On the contrary: properly understood and put in perspective, it can help us grow.

Stress is normal, and often even helpful
The basic idea: stress is part of life. It is not automatically a sign of being overwhelmed, but often a sign of commitment, responsibility, or growth. Without pressure there is no progress; without challenge, no achievement, whether in learning, at work, or in personal development. Stress acts as a driving force that keeps us active and prompts us to set priorities, to focus, or to rethink our habits.

The philosophical perspective: from Schopenhauer to Nietzsche
Historically, stress was never understood as an illness. The Stoics, for instance, regarded hardship as unavoidable; what matters, they held, is how we respond to it. Schopenhauer likewise assumed that life consists above all of suffering, and that accepting this is wiser than denying it. Nietzsche, by contrast, saw precisely in the overcoming of resistance the path to personal freedom and inner strength. His famous dictum "What does not kill me makes me stronger" captures the idea: stress is not the problem, but an invitation to grow.

Conclusion: don't pathologise everything; contextualise it and put it to use
We should not regard every tension as a disorder. The tendency to hastily pathologise everyday emotions such as stress or dissatisfaction tends rather to reinforce a sense of helplessness. Those who instead learn to accept stress as part of life, and to use it as an impulse for change, act with self-efficacy and often find their way back to greater clarity and resilience. Stress is not a flaw; it is often a sign that something is at stake. Those who do not shy away from it, but understand and contextualise it, become not weaker but stronger. Philosophy has offered a robust frame of reference for this for centuries, and it is more relevant than ever.

A thought for the start of the week

"Memories are the only paradise from which we cannot be driven out." – Jean Paul (1763–1825)

ProductivityPorn tip of the week: using to-do lists properly

To-do lists help you keep track of things, but only if you use them deliberately. Prioritise your list and set realistic goals instead of overloading it with an endless number of tasks.

From the archive: what we can learn from Carl Gustav Jung today

In 1933, Carl Gustav Jung wrote in a letter to one of his patients: "One lives as one can live. There is no single definite way for the individual that is prescribed for him or that would suit him." With these words he formulated one of his central insights: every person walks an individual path through life, without a predetermined direction. But what can Jung still teach us today about self-knowledge and personal development?

read more …

Thank you for taking the time to read this newsletter. I hope its contents inspired you and gave you valuable impulses for your (digital) life. Stay curious, and question what you encounter!


EpicMind – wisdom for digital life. "EpicMind" (short for "Epicurean Mindset") is my blog and newsletter devoted to learning, productivity, self-management, and technology, all seasoned with a pinch of philosophy.


Disclaimer: Parts of this text were revised with DeepL Write (proofreading and editing). Google's NotebookLM was used for research in the works/sources mentioned and in my notes. The article image was created with ChatGPT and subsequently retouched.

Topic #Newsletter


from An Open Letter

I woke up at 7 AM today to play tennis with my dad, and I recorded a little bit of it with my glasses. I'm glad that I did, because I realized this is the first video I have of us.


from SmarterArticles

On the morning of 9 April 2026, a small miracle of coordination is unfolding in the cognitive infrastructure of the planet.

A graduate student in Hyderabad is asking Claude how to tighten the argument in a paper on monetary policy. A copywriter in São Paulo is feeding ChatGPT the bullet points for a pitch deck. A civil servant in Warsaw is asking Gemini to draft a consultation response on housing density. A novelist in Lagos wants to know whether her second chapter drags. A thirteen-year-old in suburban Ohio is asking an assistant, any assistant, whether she should reply to a text from the boy she likes.

None of them know each other. None of them are writing about the same thing.

And yet the sentences they are about to produce will share more DNA than any comparable population of human sentences has shared since the King James Bible standardised written English in 1611. The cadences will be familiar. The rhetorical scaffolding will be familiar. Tactful three-point framing, tentative fourth consideration, breezy affirming close. Certain adjectives will recur at a frequency no unassisted population of writers has ever produced. And certain ideas, once prominent, will be faintly audible or missing entirely, as if someone had quietly removed a frequency from the signal.

A paper circulating on arXiv in early 2026 calls this, with characteristic academic understatement, “algorithmic monoculture.”

The term is not new. Jon Kleinberg and Manish Raghavan introduced it in the Proceedings of the National Academy of Sciences in 2021, back when it still functioned mostly as a warning about hiring software and credit-scoring systems. The newer work expands the frame. It argues that the rise of large language models, trained on overlapping corpora, fine-tuned using near-identical methods, and optimised against a suspiciously similar set of human preferences, has produced something the world has not previously had to reckon with: a planetary-scale cognitive layer that is simultaneously almost invisible to individual users and profoundly consequential, at the population level, to the diversity of human thought.

The individual-level invisibility is the interesting part.

Walk up to any one of those users and ask them whether the AI is helping. They will say yes. The assistant is responsive. The writing is better than what they would have produced alone. The code compiles. The email hits the right tone. The student understands monetary policy now in a way she did not understand it at breakfast. Each interaction is, in isolation, a small gift.

And it is precisely because the interactions are small, isolated gifts that the aggregate effect is so hard to see. There is no aggrieved party. There is no victim. There is only the slow, statistical narrowing of the range of things that get written, thought, proposed, rejected, tried, and considered.

The monoculture does not feel like a monoculture from inside it. It feels like being helped.

The Paper That Said the Quiet Part

The arXiv paper, and the broader cluster of early-2026 work around it, does something previous contributions in the literature mostly refused to do. It tries to estimate the thing that is being lost.

The headline result is simple. When a representative multilingual sample of fifteen thousand human respondents from five countries is asked to produce preference rankings across a standard battery of open-ended questions, and the same battery is put to twenty-one leading language models, the models collectively occupy a region of preference space that covers roughly forty-one per cent of the range humans span.

The other fifty-nine per cent is not underrepresented. It is absent.
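The coverage figure is, at bottom, a statistic one can compute from raw preference scores. Here is a minimal sketch, with synthetic data standing in for the survey responses; the uniform distributions, the [-1, 1] scale, and the single-dimension framing are assumptions made for illustration, not the paper's actual method:

```python
import random

random.seed(1)

# Hypothetical preference scores on one open-ended question, scaled to [-1, 1]:
# the human respondents span the whole range, the models cluster near the centre.
humans = [random.uniform(-1.0, 1.0) for _ in range(15_000)]
models = [random.uniform(-0.4, 0.4) for _ in range(21)]

def range_coverage(population, subset):
    """Fraction of the population's preference range that the subset spans."""
    pop_span = max(population) - min(population)
    sub_span = max(subset) - min(subset)
    return sub_span / pop_span

print(f"coverage ≈ {range_coverage(humans, models):.2f}")
```

With these toy numbers the models cover roughly a third of the human range; the point is only that "coverage" is a measurable quantity, not a metaphor.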

That finding is in line with a string of earlier results that, taken together, amount to something closer to a verdict. A 2024 study in the Cell journal Trends in Cognitive Sciences found that co-writing with any mainstream LLM, regardless of which company trained it, produced sentences whose stylistic variance collapsed towards a common centre within a handful of exchanges. A large-scale analysis of fourteen million PubMed abstracts by researchers at Tübingen, first published in 2024 and updated in 2025, documented a sudden surge after November 2022 in the frequency of a small, stable set of “LLM preferred” words: delve, intricate, showcasing, pivotal, underscore, meticulous. In some sub-corpora, more than thirty per cent of biomedical abstracts now carry the linguistic fingerprint of having passed through a chatbot.
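The excess-vocabulary idea behind the Tübingen result is conceptually simple: count how often the marker words occur per token before and after a cut-off date, and look at the difference. A toy sketch, with two invented miniature "corpora" standing in for millions of abstracts:

```python
from collections import Counter

# Marker list follows the words named above; the two corpora are invented.
MARKERS = {"delve", "intricate", "showcasing", "pivotal", "underscore", "meticulous"}

pre_2022 = "we measured the effect and report the results in table two".split()
post_2022 = "we delve into the intricate data showcasing a pivotal underscore of meticulous work".split()

def marker_rate(tokens):
    """Frequency of marker words per token."""
    counts = Counter(tokens)
    return sum(counts[w] for w in MARKERS) / len(tokens)

excess = marker_rate(post_2022) - marker_rate(pre_2022)
print(f"excess marker frequency: {excess:.3f}")
```

At corpus scale, a stable positive excess across unrelated sub-fields is the fingerprint the researchers describe.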

A separate working paper measured writing convergence in research papers before and after ChatGPT's release. Early adopters, male researchers, non-native English speakers, and junior scholars moved their prose fastest and furthest towards the model mean.

The people who most needed the help were the ones whose voices changed the most.

Something similar is happening in creative domains, although the evidence is messier. The Association for Computing Machinery's 2024 conference on Creativity and Cognition published a paper whose findings most researchers in the area now treat as foundational: ask humans to generate divergent-thinking responses to open prompts, and you see the expected long-tail distribution of weird, bad, brilliant, and unclassifiable answers. Ask an LLM the same, and you get a narrower, tighter, more plausibly competent set of responses.

On average, the LLM does well. At the population level, it produces far less variety than a comparable population of humans.

The authors used the phrase “homogenising effect on creative ideation” and meant it literally. Other groups have pushed back, arguing that the picture is more complicated and that sampling choices matter. The disagreement is real. The overall direction of drift is not really in dispute any more.

How the Narrowing Happens

To understand why the drift is happening, it helps to dispense with two stories.

The first is that the models have a secret aesthetic they are imposing on us. They do not. The Midjourney look and the ChatGPTese voice are not creative preferences in any meaningful sense. They are artefacts of the training and tuning pipeline.

The second is that the problem is a handful of frontier labs colluding to produce bland output. They are not colluding. They are doing the same thing independently because the gradients of the problem push everyone towards the same hill.

The first gradient is the training data. A language model is, in the end, a statistical compression of a corpus. If you scrape Common Crawl, Wikipedia, the major English-language book collections, StackExchange, Reddit, GitHub, and a handful of licensed newspaper archives, you will end up with a corpus that overlaps by perhaps seventy or eighty per cent with anyone else's scrape of the same substrate. There are differences around the edges, a bit more Chinese here, a bit more code there, a different cut-off date, but the overall shape is remarkably stable across labs. Dolma, The Pile, RedPajama, C4, FineWeb: each is an attempt to produce a general-purpose training corpus and each contains a broadly similar cross-section of publicly available human text.

Models trained on such substrates are already close to each other before any tuning happens. They have been fed from the same trough.

The second gradient is reinforcement learning from human feedback. This is the technique that turned eerily capable text continuation engines into the compliant, helpful assistants that five hundred million people now use daily. The idea is simple. Present humans with pairs of model outputs, ask which is better, train a reward model on those preferences, then use the reward model to fine-tune the base model. The result is a system shaped, gradient step by gradient step, to produce answers humans in the labelling pool tend to approve of.

The problem is that humans in the labelling pool, particularly professional labellers working through the contract platforms the frontier labs use, develop remarkably consistent tastes. They prefer answers that are structured, polite, hedged, comprehensive, and written with a faint institutional politeness most people would recognise as American corporate email register. They dislike answers that are rude, uncertain, fragmentary, idiosyncratic, strange.

None of this is their fault. It is a predictable consequence of asking a few thousand people to impose ratings on millions of responses. You get the average of their tastes. Not the span.

The third gradient is optimisation itself. Reinforcement learning, by its nature, pushes policies towards the highest-scoring actions available. Apply it to language generation and the model concentrates its probability mass on outputs that reliably score well. Researchers call this “mode collapse,” a phrase borrowed from the generative adversarial network literature, and the phenomenon has been documented so many times in RLHF pipelines that it is considered standard. A 2024 ICLR study measured the effect and found that post-RLHF models exhibited “significantly reduced output diversity compared to SFT across a variety of measures,” with the authors explicitly framing this as a tradeoff between generalisation quality and the breadth of the response distribution.

In plain English: the models get better at the average task and worse at producing a range of answers to any one task. They converge on the plausible-sounding centre.
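That narrowing can be made concrete with a standard diversity statistic such as distinct-n, the share of unique n-grams across a set of responses. A minimal sketch, with invented outputs standing in for real model samples:

```python
def distinct_n(responses, n=2):
    """Distinct-n: unique n-grams across a set of responses divided by total
    n-grams, a standard if blunt proxy for output diversity."""
    ngrams = []
    for text in responses:
        tokens = text.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# Hypothetical outputs: a base model sampling widely versus a tuned model
# collapsing onto a single high-reward phrasing.
base = ["the cat sat on the mat", "a dog ran fast", "rain fell all night"]
tuned = ["great question here is the answer"] * 3

print(distinct_n(base), distinct_n(tuned))  # the tuned set scores far lower
```

A post-RLHF model can score higher on per-answer quality while a set of its samples scores much lower on exactly this kind of measure.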

The fourth gradient is feedback from deployment. Once a model is serving production traffic, the telemetry from its users shapes the next round of training. Responses users rate up are preferred. Responses users regenerate or abandon are suppressed. And the users, naturally, have been trained on earlier outputs of the same models.

They prefer things that look like what they have come to expect. Within a few cycles, the distribution of acceptable responses narrows further, and the aesthetic the model produces becomes the aesthetic its users demand, which becomes the aesthetic the model produces.

The loop closes.

This is the mechanism by which “the ChatGPT look” became a recognisable category in 2023, stabilised through 2024, and was operating as a near-parody of itself by late 2025. It is a statistical attractor in the feedback graph.
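The loop can be caricatured in a few lines: keep only the outputs near current expectations, let the survivors define the next expectation, and resample. This toy simulation is an assumption-laden cartoon, not a model of any real pipeline; the Gaussian "styles" and the fixed acceptance window are inventions, but the shrinking spread is the point:

```python
import random

random.seed(0)

# Each cycle: users accept outputs close to what they already expect,
# and the accepted outputs become the target the next model imitates.
expectation = 0.0
styles = [random.gauss(0.0, 1.0) for _ in range(2000)]
spreads = []

for cycle in range(5):
    kept = [s for s in styles if abs(s - expectation) < 1.0]   # familiar outputs survive
    expectation = sum(kept) / len(kept)                        # taste drifts to the survivors
    spread = (sum((s - expectation) ** 2 for s in kept) / len(kept)) ** 0.5
    spreads.append(spread)
    styles = [random.gauss(expectation, spread) for _ in range(2000)]  # next generation imitates
    print(f"cycle {cycle}: spread {spread:.2f}")
```

Each truncation removes the tails, each retraining bakes the truncation in, and the spread only moves one way.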

The Ghost in the Text

If you want to see the monoculture in the wild, you do not have to look very hard.

The Tübingen paper on PubMed abstracts is the most quantitatively damning evidence, and the excess-vocabulary methodology used there has since been applied to other corpora with consistent results. News writing, marketing copy, policy consultations, customer support macros, cover letters, LinkedIn posts. Every corpus where people write under time pressure shows the same tell-tale vocabulary surge. A 2025 study testing English news articles for lexical homogenisation found some metrics moving and others holding steady, a useful corrective against overclaiming. But nobody is now arguing that writing on the open web looks the same in 2026 as it did in 2021.

The visual domain is noisier, partly because the models change faster and partly because creative industries have aggressively developed counter-aesthetics. The “Midjourney look,” a recognisable stew of moody lighting, glassy skin, hyper-saturated background bokeh, and compositions that feel vaguely cinematic without belonging to any specific film, became so pervasive in 2023 and 2024 that stock photography buyers began filtering it out as a separate category. Professional illustrators and art directors responded by prompting against it, fine-tuning custom models, and, in some cases, branding human-made work as “not AI” the way food manufacturers brand their products “not GMO.”

The counter-movement has produced some of the more interesting visual culture of the last two years. It exists in reaction to a monoculture it did not create.

In software, the convergence is more measurable. The major coding assistants, GitHub Copilot, Cursor, Anthropic's Claude Code, Google's Gemini Code Assist, now write or materially influence something on the order of forty per cent of the code committed to open-source repositories, and a higher share of new code inside large enterprises. They do this against a training substrate that is itself overwhelmingly composed of previously-written open-source code. The result is a global convergence on a narrow set of idioms: particular naming conventions, particular error-handling patterns, particular library choices.

Experienced engineers report the strange sensation of reading a new codebase and recognising the model's fingerprint before they can identify the author's.

Hiring is perhaps the clearest case of Kleinberg and Raghavan's original concern becoming literal. By the time a candidate's CV reaches a human reviewer at a Fortune 500 firm in 2026, it has typically passed through multiple LLM-based screening layers. The screening models are fine-tuned on labelled examples of “good” and “bad” candidates, and the labels come from a small number of vendors whose training sets overlap heavily. A paper on arXiv in early 2026 on strategic hiring under algorithmic monoculture modelled what happens when most firms in a labour market delegate their screening to correlated systems, and produced the result theorists had predicted for five years: certain candidates are now rejected by every employer in a sector because they sit in a region of candidate space that the shared screening model treats as undesirable.

This is the outcome homogenisation effect Rishi Bommasani's group formalised at NeurIPS in 2022. It has moved from thought experiment to operational reality.

A Short History of Monocultures That Ended Badly

Every generation of technologists likes to believe its tools are so new that history has nothing to say about them. Every generation is wrong.

The story of human civilisation contains a long list of monocultures that looked like efficiency gains right up until the moment they revealed themselves as fragilities. Two are worth the reread.

The first is the Irish potato crop of the 1840s. By the early nineteenth century, the peasantry of Ireland had concentrated their agriculture almost entirely on a single variety, the Irish Lumper, because it produced more calories per acre than any alternative on the poor, boggy land they farmed. The Lumper was propagated vegetatively, which meant that every potato in the ground was, genetically, a clone of every other. When Phytophthora infestans arrived from the Americas in 1845, it encountered no genetic diversity to slow it down. The blight moved through the crop the way a single-variant virus moves through an unvaccinated population.

Roughly one million people starved. Another million emigrated. A population that had stood at eight and a half million before the famine was down to four and a half million by the end of the century.

The catastrophe was not caused by the blight alone. It was caused by the combination of a uniform crop and a novel pathogen, and the uniformity was the variable humans had chosen.

The second is the financial modelling monoculture of the early 2000s. For roughly two decades, risk management inside large banks converged on a single family of statistical tools built around Value-at-Risk, often in almost identical Monte Carlo implementations, parameterised against overlapping historical windows, and regulated into near-universal adoption by Basel II. Andrew Haldane, then of the Bank of England, gave a 2009 speech at the Federal Reserve Bank of Kansas City that remains the sharpest diagnosis of what had happened. He described the pre-crisis financial system as a monoculture in which “risk management became silo-based” and “finance became a monoculture” that “acted alike” under stress, “less disease-resistant” than a more heterogeneous system would have been.

When the underlying assumptions of the models broke in 2008, they broke everywhere at once, because everyone was running versions of the same model.

The crisis was not caused by bad modelling. It was caused by good modelling replicated until there was no dissent left in the system.

Both stories carry the same lesson. Monocultures look efficient in steady state and catastrophic in transition. They reduce small, distributed losses in the good years and concentrate them into a single correlated failure in the bad year. If you were trying to design a system that minimises variance on any given day and maximises the probability of a civilisation-scale shock, you could hardly do better than a globally adopted AI assistant trained by four companies on broadly overlapping data using broadly overlapping techniques.

The Counter-Arguments, Fairly Stated

It would be unfair to describe the situation without taking seriously the people who think the alarm is overblown. There are several of them. Some of their points are good.

The first counter-argument is that writing has always converged under the pressure of shared infrastructure. The King James Bible homogenised English prose. The Associated Press Stylebook homogenised American journalism. Microsoft Word's grammar checker, installed on half a billion machines, quietly imposed the active voice on a generation of office workers. Every technology that reduces the cost of producing acceptable text also narrows the range of text being produced. The question, the sceptics say, is not whether LLMs are narrowing the distribution, but whether the narrowing is qualitatively different from previous episodes.

The best evidence we have suggests that the convergence is faster and deeper than any previous episode. But the sceptics are right that proportionality matters.

The second counter-argument is that the monoculture is a transient phenomenon of the current training paradigm. Base models are getting better at preserving distributional diversity. Techniques like Direct Preference Optimisation, constitutional AI, and the community-alignment data-collection protocols described in the arXiv paper itself offer a plausible path to models that are both helpful and genuinely pluralistic. The problem, on this view, is not that AI is inherently homogenising; it is that the specific RLHF pipelines of 2022 to 2025 were homogenising, and the next generation of alignment methods will fix it.

Anthropic's work on constitutional pluralism and Meta's 2025 research on diversity-preserving fine-tuning both show real improvements on certain metrics. The question is whether the improvements are keeping pace with the scale of deployment. The honest answer is probably no.

The third counter-argument is the most interesting. It holds that humans were never as diverse in their expressed thought as the loss-of-diversity argument assumes. Take a population of first-year undergraduates, give them an essay prompt, and you already get substantial convergence on a handful of rhetorical templates, shared references, and predictable argumentative moves. The diversity we imagine we are losing was never there to begin with. What the LLMs are doing is making visible a pre-existing homogeneity and perhaps nudging it slightly harder in the direction it was already going.

There is something to this. Human culture has always moved through fashions, canons, and shared templates. The model-free baseline was not a paradise of idiosyncratic genius.

The fourth counter-argument is pragmatic. Even granting that LLMs reduce variance at the margin, they dramatically expand the number of people who can participate in written cognitive work. A non-native speaker in a field dominated by English-language publication can now write papers that reach the same readers as a native speaker. A dyslexic student can produce prose that reflects her thinking rather than her difficulty with spelling. A small-business owner without marketing staff can produce professional copy. The aggregate diversity of the cognitive commons might actually be higher, not lower, because more voices are in the room even if each individual voice is a bit more standardised.

The honest answer to all four arguments is that they do not dissolve the problem. They calibrate it.

The monoculture is not apocalyptic, but it is real. The convergence is not new in kind, but it is larger in scale than any previous episode. The loss of diversity is partial and might be partly reversible with better tuning methods, but the reversal is not happening at the pace the deployment is. And the expansion of participation is genuine, but it is not a substitute for the distinct kinds of cognitive variety the current systems are dampening.

We are left with a real problem that is smaller than the loudest critics claim and larger than the loudest defenders will admit.

Where Dissent Lives Now

One unsettling feature of the current moment is that the space in which intellectual dissent used to happen has been partly reabsorbed into the tools generating the mainstream.

When a student wants to argue against the received view, the assistant she uses to sharpen her argument has been trained on a corpus in which the received view is massively overrepresented, and tuned on preferences that treat the received view as the baseline of reasonableness. Her heterodox position can still be articulated. But only in the voice of the orthodoxy, with the orthodoxy's cadences and framings and preferred caveats.

The tool is helpful. It is just that the help comes in a specific register, and the register quietly pulls everything towards a centre.

This is not new in the history of dissent. Samizdat writers in the Soviet Union wrote in a Russian inherited from the official press. Heterodox economists spent the 1990s writing in the neoclassical vocabulary they were criticising. The tools of mainstream thought always bleed into the voice of people trying to escape it.

What is new is the speed and completeness of the bleed. When the tool is in every sentence, in every revision, in the autocomplete of the email drafting the pamphlet, the vocabulary of dissent has fewer places to hide.

This matters because epistemic diversity is the raw material out of which new ideas are built. Scientific revolutions, as Thomas Kuhn argued in 1962, happen when a tradition runs out of resources to solve its own puzzles and a cluster of previously marginal approaches suddenly becomes mainstream. If the marginal approaches are never articulated in the first place, because the tools of articulation bias their users towards the centre, the Kuhnian dynamic stalls. The revolutions do not come, because the conditions for revolution do not form.

This is the deepest worry in the monoculture literature, and the one hardest to test empirically, because the counterfactual is unobservable. We will not know which ideas were quietly filtered out of human discourse by the assistants of the 2020s.

We will only know what did not get said.

Interventions That Might Actually Help

The question is what to do. Nobody is sure. But interventions are being tried, and some look more promising than others.

The first category is technical. Preserving diversity during alignment is an active area of research, and the tools are improving. Regularisation penalties that explicitly reward response-distribution breadth. Constitutional methods that bake pluralism into the model's self-description. Multi-objective optimisation against competing preference signals. Community-alignment datasets built from stratified samples of global populations rather than the labelling pools of San Francisco contractors.

None of this is a complete solution, but the direction is legible. If the frontier labs decided tomorrow that response diversity was a first-class metric and weighted it at, say, twenty per cent of their tuning objective, the curves would move within months.
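What "weighting diversity at twenty per cent" might mean in miniature: score candidate responses on a blend of helpfulness and novelty relative to recent outputs. The novelty proxy and the scoring functions below are invented for illustration; no lab's actual objective looks like this, but the trade-off structure is the same:

```python
# Hypothetical combined objective: helpfulness from a reward model plus a
# diversity bonus, blended 80/20 as imagined above.
W_DIVERSITY = 0.2

def novelty(response, history):
    """Crude diversity proxy: fraction of the response's words unseen in recent outputs."""
    seen = set().union(*(set(h.split()) for h in history)) if history else set()
    words = response.split()
    return sum(w not in seen for w in words) / len(words)

def combined_score(helpfulness, response, history):
    return (1 - W_DIVERSITY) * helpfulness + W_DIVERSITY * novelty(response, history)

history = ["great question here is a structured answer"]
a = combined_score(0.90, "great question here is a structured answer", history)
b = combined_score(0.85, "short version: raise rates slowly", history)
print(a, b)  # the slightly less helpful but fresher response wins
```

Under the pure-helpfulness objective the familiar answer wins; with the 20 per cent diversity term, the fresher one does. That is the whole argument in two numbers.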

The question is whether they will. Response diversity is not what users say they want. Helpful answers are what they say they want. The gradient of commercial incentives does not obviously favour pluralism.

The second category is structural. Antitrust enforcement on foundation model markets is the obvious lever, and the European Commission has been exploring it since 2024, with the Digital Markets Act designation process now looking seriously at whether the largest LLM providers meet the gatekeeper thresholds. The theory of the case is that a market with four dominant providers training near-identical systems against near-identical benchmarks is not producing meaningful consumer choice. In the US, the Federal Trade Commission's 2024 inquiry into AI partnerships was a tentative step in a similar direction.

Neither jurisdiction has yet delivered a ruling that would materially shift the competitive landscape. But the conceptual groundwork is being laid.

The third category is institutional. The homogenising effects of mainstream models can be partly countered by the deliberate cultivation of distinctive alternatives. National or regional foundation model efforts, public-interest model trainings by universities or public broadcasters, domain-specific models trained on curated corpora that lie outside the standard scrape: none of these need to outcompete the frontier labs on general capability. They just need to exist, and to be good enough to be used by people who want an alternative voice.

The European EuroLLM project, Singapore's SEA-LION, Japan's Sakana work, the Allen Institute's continuing release of fully open weights and training data: these are the seeds of what might eventually be a more diverse ecosystem. Whether they grow into anything that genuinely counterbalances the big four depends on the next few years of funding and political will.

The fourth category is personal. Every writer, every coder, every thinker who uses these tools faces a daily choice that aggregates into the larger cultural effect. There is a real difference between letting the assistant do the thinking and letting it help with the thinking. It does not show up on any individual day. It shows up over months, in the divergence between users who kept their voice and users who surrendered it.

The people who have thought most seriously about this tend to converge on a discipline. Use the tool as a collaborator, not an author. Accept or reject each suggestion as a conscious choice. Reread the output and ask whether it still sounds like you. And, most importantly, write things sometimes without the tool at all, to keep the neural pathways of solo composition from atrophying.

These are small habits. They cannot fix a structural problem. But they are the only layer of defence available to the individual user right now, and they probably matter more than the user thinks.

The Diversity We Have Not Yet Lost

It is tempting to close a piece like this in the register of warning. But the warning register is part of what we are trying to escape.

The monoculture is not destiny. It is a tendency produced by a set of choices, most of which were made for defensible reasons and none of which are irreversible. The frontier labs could weight diversity higher. The regulators could act. The users could develop better habits. The open ecosystem could grow. A future model architecture could sidestep the RLHF trap in a way nobody currently sees.

The space of possible futures is wide.

What is not wide is the window. The feedback loops between models, users, training data, and cultural production are tightening. Every year in the current paradigm adds another layer of training data generated by previous models, another layer of user taste conditioned by previous outputs, another layer of convention baked into what counts as a good answer.

Monocultures are easier to prevent than to reverse, because the diversity you need to repopulate them with has to come from somewhere, and the main reservoir, the independent creative output of unassisted humans, is shrinking as a share of the total.

The Lumper potato, as any evolutionary biologist will tell you, was not an unreasonable choice in 1840. It grew well on poor land. It fed hungry people. The problem was not that the Lumper was bad.

The problem was that it was everywhere, and there was nothing else.

When the blight came, the absence of alternatives was what turned an agricultural problem into a civilisational one. The lesson is not that monocultures are always wrong. It is that they are always a bet on the future being continuous with the past, and the bet compounds over time until it is the only bet on the board.

The humans asking their assistants for help on 9 April 2026 are not doing anything wrong. They are using the tools available to them, the tools are genuinely helpful, and the sentences they produce are better than the sentences they would have produced alone. That is the seductive part. And the accurate part. And also the part that makes the aggregate picture so hard to see.

Somewhere underneath the millions of small, helpful interactions, the distribution of human expression is quietly tightening.

Whether it keeps tightening, or whether we decide to plant something else in the field alongside the Lumper, is still an open question. It may not stay open for long.


References and Sources

  1. Kleinberg, J., and Raghavan, M. (2021). “Algorithmic monoculture and social welfare.” Proceedings of the National Academy of Sciences, 118(22). https://www.pnas.org/doi/10.1073/pnas.2018340118
  2. Bommasani, R., et al. (2022). “Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization?” Proceedings of NeurIPS 2022. https://arxiv.org/abs/2211.13972
  3. “Cultivating Pluralism In Algorithmic Monoculture: The Community Alignment Dataset.” arXiv preprint 2507.09650 (2025, revised 2026). https://arxiv.org/abs/2507.09650
  4. Baek, J., and Bastani, H. (2026). “Strategic Hiring under Algorithmic Monoculture.” arXiv preprint 2502.20063. https://arxiv.org/pdf/2502.20063
  5. “The Homogenizing Effect of Large Language Models on Human Expression and Thought.” Trends in Cognitive Sciences (2026). https://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(26)00003-3
  6. Preprint version: “The Homogenizing Effect of Large Language Models on Human Expression and Thought.” arXiv:2508.01491. https://arxiv.org/abs/2508.01491
  7. Kobak, D., et al. (2024). “Delving into ChatGPT usage in academic writing through excess vocabulary.” arXiv:2406.07016. https://arxiv.org/abs/2406.07016
  8. Geng, M., et al. (2025). “Divergent LLM Adoption and Heterogeneous Convergence Paths in Research Writing.” arXiv:2504.13629. https://arxiv.org/abs/2504.13629
  9. Anderson, B. R., Shah, J. H., and Kreminski, M. (2024). “Homogenization Effects of Large Language Models on Human Creative Ideation.” Proceedings of the 16th ACM Conference on Creativity & Cognition. https://dl.acm.org/doi/10.1145/3635636.3656204
  10. Ghods, K., and Liu, P. (2025). “Evidence Against LLM Homogenization in Creative Writing.” https://kiaghods.com/assets/pdfs/LLMHomogenization.pdf
  11. “We're Different, We're the Same: Creative Homogeneity Across LLMs.” arXiv:2501.19361 (2025). https://arxiv.org/abs/2501.19361
  12. Kirk, R., et al. (2024). “Understanding the Effects of RLHF on LLM Generalisation and Diversity.” ICLR 2024. https://arxiv.org/abs/2310.06452
  13. “Testing English News Articles for Lexical Homogenization Due to Widespread Use of Large Language Models.” ACL 2025 Student Research Workshop. https://aclanthology.org/2025.acl-srw.95/
  14. “Examining linguistic shifts in academic writing before and after the launch of ChatGPT.” Scientometrics (2025). https://link.springer.com/article/10.1007/s11192-025-05341-y
  15. Haldane, A. G. (2009). “Rethinking the financial network.” Speech at the Financial Student Association, Amsterdam. Bank for International Settlements. https://www.bis.org/review/r090505e.pdf
  16. “Did Value at Risk cause the crisis it was meant to solve?” Institute for New Economic Thinking, Oxford. https://www.inet.ox.ac.uk/news/value-at-risk
  17. University of California Museum of Paleontology. “Monoculture and the Irish Potato Famine: cases of missing genetic variation.” Understanding Evolution. https://evolution.berkeley.edu/the-relevance-of-evolution/agriculture/monoculture-and-the-irish-potato-famine-cases-of-missing-genetic-variation/
  18. Wikipedia contributors. “Great Famine (Ireland).” https://en.wikipedia.org/wiki/Great_Famine_(Ireland)
  19. Kuhn, T. (1962). The Structure of Scientific Revolutions. University of Chicago Press.
  20. Wikipedia contributors. “Reinforcement learning from human feedback.” https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback

Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from Millennial Survival

It’s strange how life tends to remind you of things you were recently thinking about. In my case, it is once again reminding me how much we are all subject to chance, randomness, and being blindsided by things we don’t expect.

This week we had family members visiting from out of state. The second evening after they arrived, one of our visitors didn’t look well. The following morning they looked even less well, and we pushed them to go to urgent care. Once at urgent care, the doctors said that they needed to go to the ER immediately. Now, after three more days, they have been admitted to the local hospital, awaiting a complex surgical procedure to remove a potentially cancerous mass near one of their internal organs. What was supposed to be a three-day visit is going to turn into at least a three-week ordeal that could upend our family.

It is crazy how without any real warning things can drastically change in a matter of hours. In these situations we are reminded of how little control we sometimes have over what happens to us. All you can do is try and make the best decisions possible during the subsequent hours, days, and weeks to influence the outcome in a positive direction. I believe we have done this and now all we can do is wait and see while offering as much support to the family member impacted as possible. Let’s hope for a brighter tomorrow.

 

from Noisy Deadlines

I have a 2018 Corsair Strafe mechanical keyboard with Cherry MX Red switches. I’ve been getting tired typing on it, and I’ve been noticing a lot of missed keystrokes as I type. I am a fast typist, and I think I’ve simply worn this keyboard out.

So, I was looking for another mechanical keyboard, specifically one that I could customize, changing the keycaps and switches if needed. Basically, a keyboard that could grow with me without being too complicated. I tested some keyboards at my local computer store, and the Keychron ones got my attention.

I wanted a more tactile experience (the Cherry Red is linear), so I went with a Keychron V6 Ultra 8K with the Tactile Banana switches. I love it! 😍

It worked well with the cable connection, and also connected with Bluetooth and the 2.4G dongle on my Ubuntu 25.10.

The issue: Can’t use the Launcher to customize the keyboard

To customize and remap the keys on this keyboard, you have to do it online, via the Keychron Launcher.

The manufacturer guide says that the Launcher only works with Chrome/Edge or Opera browsers.

I had Chromium installed via Snap and I opened the launcher website. The site recognized my keyboard, but it wouldn't connect.

Solution attempts

I did some online searching and discovered that Linux has security measures in place that prevent a userspace application from writing to hardware input devices. So the solution is to create a udev rule to add the necessary permissions. I followed the instructions from this article: HOWTO: Get the Keychron Launcher working in Debian GNU/Linux.

So my steps were something like this:

  • I identified my keyboard vendor/product information using lsusb | grep -i keychron

  • Which gave me the following info: Bus 003 Device 013: ID 3434:0c60 Keychron Keychron V6 Ultra 8K

  • Great! Then I created the rule with sudo nano /etc/udev/rules.d/99-keychron.rules

  • And this was my first try to create the rule: KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="3434", ATTRS{idProduct}=="0c60", MODE="0660", GROUP="ariadne", TAG+="uaccess", TAG+="udev-acl"

  • Then, I ran the two commands to reload the rules and trigger them: sudo udevadm control --reload-rules, then sudo udevadm trigger

  • It didn't work, Chromium still could not connect to the keyboard.

  • In Chromium I checked: Settings -> Privacy and Security -> Site settings -> Additional permissions -> HID devices and ensured HID access was allowed.

  • I tried different rules, tweaking here and there, played around with user groups, and nothing worked. I unplugged, plugged, restarted the computer, I even tried to run Chromium with root access temporarily. Nothing worked.

  • All the time I was checking chrome://device-log/ to see what was going on, and got a list of errors like this:

HIDEvent[21:52:54] Failed to open '/dev/hidraw7': FILE_ERROR_ACCESS_DENIED
HIDEvent[21:52:54] Access denied opening device read-write, trying read-only.

  • I did some more tweaks to the udev.rules, and I ended up with this in my rules file:

# Keychron V6 Ultra 8K - Normal Mode
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="3434", ATTRS{idProduct}=="0c60", MODE="0666", TAG+="uaccess"

# STM32 Bootloader - Required for Firmware Flashing
SUBSYSTEM=="usb", ATTRS{idVendor}=="3434", ATTRS{idProduct}=="0c60", MODE="0666", TAG+="uaccess"

  • It was still not working. I knew it had something to do with permissions in Chromium.

  • Then the next day I did more digging online, and I read that Chromium installed via Snap is sandboxed and often cannot see hardware even if the udev rules are correct. The solution? Get the .deb install package for Google Chrome.

  • So I downloaded and installed the official Google Chrome .deb native package directly from the Google website.

  • And then it worked!!! 🤘

  • Keychron Launcher connected to the keyboard, I could do the Firmware update and started playing with remapping keys.
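Since the Launcher only sees the keyboard once the udev rule matches the right IDs, it's worth double-checking the vendor/product pair before writing the rule. As a small sketch of my own (not from the original how-to), here is one way to pull the two IDs out of the lsusb line shown above; the sample line is hard-coded, so adapt it to your own device:

```shell
# Sample lsusb output line from this post; on a real system you would use:
#   line=$(lsusb | grep -i keychron)
line='Bus 003 Device 013: ID 3434:0c60 Keychron Keychron V6 Ultra 8K'

# Extract the "vendor:product" pair (four hex digits each) following "ID ".
ids=$(printf '%s\n' "$line" | sed -n 's/.*ID \([0-9a-f]\{4\}:[0-9a-f]\{4\}\).*/\1/p')

vendor=${ids%%:*}    # goes into ATTRS{idVendor}  -> 3434
product=${ids##*:}   # goes into ATTRS{idProduct} -> 0c60
printf '%s %s\n' "$vendor" "$product"
```

The two values printed at the end are exactly the ones that go into the ATTRS{idVendor} and ATTRS{idProduct} fields of the 99-keychron.rules file.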

My Final Checklist

So, as a final checklist, these are the steps to take if I want to remap keys or update firmware on my Keychron keyboard:

Preparation of the udev rules (needs to be done only once):

  1. Identify the keyboard's vendor/product information using: lsusb | grep -i keychron

  2. Create rule with: sudo nano /etc/udev/rules.d/99-keychron.rules

  3. Add these lines to the rules file:

# Keychron V6 Ultra 8K - Normal Mode
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="3434", ATTRS{idProduct}=="0c60", MODE="0666", TAG+="uaccess"

# STM32 Bootloader - Required for Firmware Flashing
SUBSYSTEM=="usb", ATTRS{idVendor}=="3434", ATTRS{idProduct}=="0c60", MODE="0666", TAG+="uaccess"

  4. Save and exit (Ctrl+O, Enter, Ctrl+X)

  5. Then run these commands to activate the new rules: sudo udevadm control --reload-rules, then sudo udevadm trigger

  6. Disconnect/Connect keyboard.

Run Keychron Launcher

  1. Connect the keyboard with the cable
  2. On the keyboard itself, select the physical toggle to USB connection
  3. Open Google Chrome (not Chromium, make sure it is the .deb version of Google Chrome, not Snap)
  4. Go to https://launcher.keychron.com/
  5. Choose to connect the keyboard, and voilà!

#linux #tech

 

from Millennial Survival

Watching people who are part of your peer group leave an organization is never fun. This is especially true when you recognize that the person leaving created a sense of balance on the team that was much needed. Once they are gone, that balance will be thrown off again, decisions the person made will be called into question, and there will be a lot of anxiety on the part of their team.

Sadly, this is the situation that my organization and I find ourselves in now. With a new CEO on board within the last six months, this is completely unknown territory that we are entering. None of us have any idea how the hiring process to replace this person is going to go. We don’t know whether leadership will care about finding someone who integrates well with the rest of the team or whether they will intentionally look to bring in a more disruptive force to shake things up. The organization has been through significant change over the past year, much of it positive, yet it is still anxiety-inducing.

Now we wait to see what comes next. Time will tell if this change will be positive or if the organization is going to suffer because of it.

 

from epistemaulogies

From first principles: AI and Capitalism

You’re probably caught in a bit of confusion. You know AI is powerful. You know it will change everything. But you’ve tried to use it in your day-to-day life and found that somewhere along the way a false promise was introduced. It hasn’t made your job significantly easier. It gives advice you can’t always trust. You aren’t sure how it’s supposed to actually fit into your life, or anyone’s, let alone be such an omnipotent threat or savior as to radically alter the fate of humanity. Are you crazy?

On the contrary. If you pay attention to the contradictions you notice in the reality vs. the perception of GenAI, you can use this case as a vaccine, to inoculate your thinking against the lies that capitalism routinely parrots in order to convince you of its worth and necessity. Let’s hold up the mirror.

AI is a perfect reflection of capitalism itself.

1. Economics is a social construction to solve a social problem (how to value transactions – not how to deal with scarcity. Orthodox economics clearly doesn’t “deal” with scarcity in any way, especially natural scarcity; it's neatly externalized in order to obscure the real decisions made, politically and socially, about who does and doesn't deserve resources).

2. Capitalism nominates a class of people who are value-deciders (owner class, now investor class) and, through business relationships between one another and a dialectic between that class and the working class (the non-owner, non-investor class), value is decided.

3. Capitalism’s value-deciders are the bourgeois, those who own capital. Traditionally capital was the means of production, i.e., the buildings and machines and land that created products which were sold for a profit. This class of owners were able to decide the value of those products among other owners based on their incentive to sell. But they are also able to decide the value of the labor that helps create the products by virtue of their willingness to buy. – Willingness to sell and willingness to buy are also subject to social creation in addition to material constraints. (Ads, psychology, the social distribution of the things needed to live, inflation, colonialism, etc.)

4. But capitalism has a major internal contradiction: because owners are not exposed to much risk, there’s not much constraint on available wealth – capitalism tends to monopolize. But it must have the appearance of being competitive or it will lead to unchecked inflation and the collapse of value. To solve this social challenge, capitalism seeks unlimited growth from its investments. Investments that fail to grow fail existentially and must be stripped for parts. This maintains pressure and participation in the economy. – But the failure only extends to the business and the workers. It does not extend to the owners – again, see the point that they are not exposed to risk.

5. Because growth is merely a social construction to solve the social problem of not enough risk exposure for wealth accumulators, it is essentially an illusion and can be endlessly gamed by those who are considered value-deciders, but only if it maintains the illusion of value coming from growth, from something “real” like scarcity or demand.

6. This tendency leads capitalism to abstraction, or “going meta” (Survival of the Richest). As “growth” in sectors is conquered by other owners or by an increasing concentration among the same owners, the need to demonstrate more growth (and therefore the validity of capitalism as a social enterprise) leads to the creation of levels of abstraction upon the original transaction (i.e., the original valuation – a bet on the 49ers to win the Super Bowl, upon which a surprising amount of abstraction can be layered: The stock price of the gambling company, the bets against the stock price of the gambling company, the mortgage owned by the bettor, the bets against that mortgage defaulting, etc. etc. etc.; not to mention the value of the stock of the 49ers, the Super Bowl ad space, ad nauseam).

7. Therefore, capitalism is an economic system organized by a class of owner-value-deciders who must consistently achieve the perception of growth. Since growth tied to physical scarcity will quickly exhaust itself and make the internal contradiction clear, their chief mode of growth is abstraction, where a new arena of value-determinations can be made.

8. Some initial value under capitalism is determined by a “market” via transactions: The creation of a product or service that is then sold.

9. But much of the value-determination under capitalism is facilitated through bets, placed through the stock market, or now through prediction markets; or in the holding of property; or in any accumulation of a certain capital.

10. Though the final payment of the bet is zero-sum, for both the arbiter of the bet and the outcome on which bets are placed, hype creates value (for the arbiter, on the cut; for the outcome, on the temporary infusion of capital which can be used to purchase value elsewhere and is not due back, since it’s the responsibility of the losers). – Also, bet-takers can hedge their overall investment in the bet to effectively “both sides” the bet while reaping real wealth from the benefits of owning bets (tax evasion, other benefits of being wealthy conferred by regulatory capture)

11. Therefore, hype – the perception of value whether there “is” or “isn’t”, whether it’s a “good” bet or not – creates real wealth under capitalism.

12. This explains the AI tech bubble, but it also explains why companies seem to legitimately think AI will improve their business outcomes: it is the perception of the offloading of work. And that’s why it DOES create value, at least among publicly-traded companies that are able to convince shareholders (bettors) that the adoption of AI is valuable. Just the perception of being able to reduce labor costs or otherwise innovate creates real wealth. And because it is a bet, the value of the bet is largely determined by hype.

13. Similarly, the value or innovation created by AI itself, as in your evaluation of its output, is also determined by hype: by your ability or willingness to believe that its output is human, or super-human. It creates nothing but a perception. It is literally a machine that creates perceptions that are likely to be believable.

14. It’s basically the endgame capitalist technology.

Thanks for listening.

~

 

from JustAGuyinHK

I never thought I would get married. I never thought I would be looking to buy a house with someone. Yet, here I am doing both. It feels incredible, wonderful, and a bit scary, mostly on the buying-a-house part due to age rather than anything else.

Falling in love and getting hitched was never in my thoughts because of my lifestyle, mostly nomadic. People come and go in my life. They don’t stick around. Part of it is living overseas. Part of it is just my nature. It is something I accepted as part of my path until it changed a few years ago.

I met the love of my life – the one who changed me. The one who shaped how I would love many years ago. It began with a clear end – he would move to the United States at some point. We would enjoy our time together and see things, but there would be an unknown end date. In the early years of that relationship, we talked about being together forever, but there would be awkward pauses, so we dropped the topic and enjoyed our time. It ended as expected, and I was hurt. I fell for another, but quickly saw that the future there wasn't going to happen because of timing.

Then I met him with no expectations, no hopes for the future, only to enjoy being with him. We saw each other a lot, then more. We travelled and learned more about each other. There was safety and security as we grew together. It was love, and I felt it for a while, but this feeling, this fear that “he will leave me,” was still there even though there were no signs of it at all.

He came home with me last year to meet my mom and see my childhood home. He saw the place where I grew the most – Korea, where I spent 7 years. In return, I got to know him more and liked what I saw and what I learned. We grew together, and I began seeing how lucky I am to have him in my life, and we wanted to build a future together.

The thought has always been there. The talks have always been there. Until we talked last night. He moved in fully near the beginning of the year and has enjoyed it a lot. We have been looking at apartments to buy, which is a huge step. Then I turned to him, and we talked, never sure how to 'do it right.' So I asked, “Do you wanna?” and he said, “Sure.” We were joking, but we weren’t. I am lucky beyond words and looking forward to many, many years ahead.

 

from Roscoe's Story

In Summary: * Another quiet Sunday ends well. The San Antonio Spurs win over the Portland Trail Blazers this afternoon was MOST enjoyable. The only things remaining between now and bedtime are my night prayers, and I intend to start on them soon.

Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics:
* bw= 231.92 lbs.
* bp= 151/91 (67)

Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet:
* 07:10 – 1 big cookie, 1 banana
* 08:30 – 1 ham and cheese sandwich
* 10:00 – candied bananas
* 12:50 – garden salad
* 13:45 – bowl of pancit
* 15:30 – 1 big cookie
* 16:15 – 1 fresh apple

Activities, Chores, etc.:
* 07:20 – bank accounts activity monitored.
* 07:40 – read, write, pray, follow news reports from various sources, surf the socials, nap.
* 12:20 – listening to the pregame show of this afternoon's Detroit Tigers vs Cincinnati Reds on the Reds Radio Network
* 14:00 – now listening to the pregame show ahead of today's San Antonio Spurs vs Portland Trail Blazers game
* 14:40 – and... the Spurs Game is starting.
* 17:20 – and ... Spurs win 114 to 93.

Chess: * 11:00 – moved in all pending CC games, registered for another “3 days per move CC tournament” with games starting 01 May

 

from Free as Folk

#writing #revolution #NoDAPL #indigenous #landback #MMIWR #abolition #education #essay

This post is Part 1 of a series on social revolutions of the past 30 years — examples where public consciousness has massively shifted in favor of liberation. My aim is to create space to pause and acknowledge how things have changed in ways that once felt impossible, and to remind us that things can always be otherwise. It is inspired in part by Rebecca Solnit’s 2016 edition of Hope in the Dark and David Graeber’s 2007 essay “The Shock of Victory.”

The average education about Native American history when I was growing up in rural Nevada was pretty much “Indians helped the Pilgrims at Thanksgiving” or “savages viciously attacked poor defenseless settlers.”

Nowadays, while you may still hear such distortions and genocide-justifying lies from right-wing pundits, broader public awareness of indigenous peoples’ continued existence, and of their ongoing defense of their lands, stewardship practices, and philosophy, has blossomed in fire.


Thin Green Line protestors in Tacoma, WA, source: Media Project Online

Books like Braiding Sweetgrass and The Serviceberry by indigenous scientist Robin Wall Kimmerer have been a sustained presence on the NYT Best Seller list, and the former was one of the most checked out books from the public library in 2024.

Even television shows like the FX dramedy Reservation Dogs (2021-2023), created by indigenous filmmakers Taika Waititi (Māori and European descent) and Sterlin Harjo (Seminole and Muscogee descent), have opened up a wider space in the media landscape for depictions of indigenous characters as something beyond crass stereotypes or the lie of the “Vanishing Indian.”


Reservation Dogs poster, source: FX

Films like Martin Scorsese’s Killers of the Flower Moon (2023) have brought to the mainstream moviegoing public a powerful story of what colonization really looked like, depicting indigenous Americans not as “backward savages” but as the prosperous land-owning class of the Osage Nation of modern-day Oklahoma — that is, until their family members are systematically murdered so that white settlers, through marriage to Osage women, can exploit that land’s rich oil reserves.

One such wife, Mollie Burkhart, is stunningly played by Lily Gladstone (Piegan Blackfeet, Nez Perce), a performance that earned her an Academy Award nomination for Best Actress. Gladstone has since used her platform to executive-produce four films to date, centering contemporary Native American stories of Missing and Murdered Indigenous Women (Fancy Dance), adolescence (Jazzy), confronting the generational trauma of the residential school system (Sugarcane), and steps toward the restoration of indigenous land and animal stewardship (Bring Them Home).

Discussions of settler colonialism have gone from basically unspeakable heresy against the very soul of America to being, it seems to me, pretty widely accepted in liberal-to-leftist circles at least (I mean, John Oliver made a direct comparison of the US to Israel on a late-night comedy show). Reading Roxanne Dunbar-Ortiz’s An Indigenous Peoples’ History of the United States in 2024, I was struck by just how far the public sphere has shifted in narratives about indigenous people in just the 12 years since the book’s publication.

#NoDAPL

I trace a significant part of this recent shift to the 2016-2017 Standing Rock protests against the Dakota Access Pipeline, which made international news as indigenous water protectors and allies in solidarity occupied the historic lands of the Standing Rock Sioux Tribe for 11 months, through the harsh North Dakota winter. The protests and occupations were multi-pronged, drawing support from 87 indigenous nations and thousands of activists, legal scholars, and organizers.


NoDAPL protest march in 2016, source: IndianNZ

The NoDAPL protests brought the issues of indigenous tribal sovereignty, broken treaties, and especially the indigenous conception of water and lands as sacred to the forefront of public discourse about climate change and the United States’ history of genocide.

The backlash

With each of the social revolutions I will cover in this series, I must acknowledge not just the positive steps toward shifting public consciousness, but also the reactionary backlash which inevitably follows.

This has been twofold: State repression against activists attempting to defend water and life, and a culture war against intellectuals, educators, and artists. In the former, law enforcement has deployed all manner of violent tactics (borrowed from the anti-Civil Rights police violence of the 1950s-1960s), from water cannons to chemical weapons and rubber bullets to siccing dogs on protestors. The legal repression escalated to such a degree that those occupying the Standing Rock Sioux reservation were given prison sentences ranging from a few months up to eight years (for a single count of property damage).

Not to be deterred, #StopCopCity protestors began occupying the Weelaunee Forest in Atlanta in 2021 in the wake of the 2020 Black Lives Matter uprisings (which I will cover in a future entry of this series), connecting the struggle against anti-Black systemic racism and policing with indigenous sovereignty. Again, protestors and those engaging in direct action were met with violence, most famously the murder of non-violent resister Tortuguita (whose death is still under investigation), which made international news and spurred a week-long demonstration of solidarity.


Tortuguita in the Weelaunee Forest in 2021, source: Twitter

The second prong of the backlash against rising indigenous sovereignty can be seen in the response to revisionist histories like the 1619 Project (which, upon its publication in 2019, commemorated the 400th anniversary of the beginning of American slavery). In response, President Trump established the 1776 Commission by executive order, intended to enforce “patriotic education” and combat the “twisted web of lies” he claimed was being taught regarding systemic racism in U.S. schools.

This, paired with the overall withdrawal of funding from US education and the ongoing dismantling of the US Department of Education by executive order, is the result of long decades of psychological warfare waged by the likes of Steve Bannon and other right-wing political actors, cataloged brilliantly (and disturbingly) in Annalee Newitz’s 2024 book Stories Are Weapons: Psychological Warfare and the American Mind.

Paths forward

That said, I am encouraged by Grace Lee Boggs’ words in The Next American Revolution (2012), where she analyzes how radical, beloved community has arisen in Detroit in the face of monumental disinvestment and violence by the State and Capital, creating autonomous networks of care and creativity — including in education. Alternatives to “patriotic” public schooling are cropping up, like the Boggs School, founded in 2013 on the philosophy and activism of the late Grace Lee and her husband Jimmy Boggs and their decades of organizing in the Midwest city.

These types of schools center on education as a practice of freedom, in the tradition of Paulo Freire’s literacy work in rural Brazil and the Freedom Schools of the 1960s, which opened up education for Black Americans to learn about their history and sparked the critical consciousness to take action in their society.

Education has long been a site of struggle for indigenous peoples everywhere, with a major tactic of colonization being the suppression of indigenous knowledge, language, and traditions — perhaps most famously in the Residential School System, part of the “Kill the Indian, Save the Man” philosophy of forced assimilation and destruction of indigenous culture.

Promising efforts in excavating and restoring indigenous knowledge systems are blossoming all over the world, like the School of Māori and Pacific Development at the University of Waikato in Aotearoa (New Zealand), established in 1996 and becoming the Te Pua Wānanga ki te Ao, Faculty of Māori and Indigenous Studies in 2016. The emergence of these sorts of research institutions is heartening, as are the environmental remediation projects combining indigenous land stewardship and Western scientific methods.

Commencement Ceremony at the University of Waikato, source: Waikato.ac.nz

Indigenous peoples have been resisting erasure, colonization, and dispossession for hundreds of years. Now is the time of a growing movement to stand in solidarity and learn from one another if we want to make it into the next century.

Read more... Discuss...

from The happy place

I have two things on my mind

(This will be my best post yet)

1

I am now after a painfully long time in the microwave transformed into a popcorn.

There’s no way on this earth to unpop a popcorn

This new me isn’t just a hard shell but inside out

Soft

Of course it hurt, but look at me now

I am weightless

This is my final form of course

#poetry


2

I’m watching Tulsa King. I watch with great interest as Stallone plays this mafioso guy fresh out of prison, murdering anyone he finds disrespectful, doing things his own way even though he is a prisoner of his own principles. It's somewhat satisfying, seeing him solve most of his problems with violence like that.

Yes👍 🤌


 

from Faucet Repair

24 April 2026

The Leonardo book A Life in Drawing (2019) has been open on the floor of my studio this week; specifically his map drawings. In the summer of 1504, he was employed by the Florentine government to map parts of the river Arno, and there's one drawing in particular that I keep returning to—on page 127, fig. 93—A weir on the Arno east of Florence. It describes damage to the river embankment from water bursting through a weir. Such a wonderful drawing, the movement of the water alive in his precisely rendered rushing and swirling lines, the site of destruction gently heightened with a darker blue than the rest of the wash representing the water. That meeting, between the physical intensity of natural phenomena and a measured observational focus, such that the eye dilates enough to make room for the emotion of a space to enter through the hand, is something close to what I'm after right now.

 

from Have A Good Day

In 2026, I started using a paper notebook as my main organizational tool. That came with a conscious effort to let go of the idea of finding the perfect workflow or toolchain. Four months in, I have to say it is working pretty well.

First, handwriting is faster and more fun than typing on a keyboard, especially a virtual one. If you need the copy digitized, you have to rekey it, but I find that small overhead acceptable, because in many cases I need to revise the text anyway (so far, no digitization tool, including smart pens, has worked for me: fixing errors in the automatically converted text is far more unpleasant than simply rekeying).

Using a paper notebook for task management, Bullet Journal-style, also has the advantage of keeping you honest. Task management apps make it too easy to create a multitude of tasks and conveniently push them from day to day. The limited space in a notebook forces you to decide whether you want to manually copy, complete, or give up a task.

However, I need to remind myself constantly that the notebook is not a precious journal of my life but a working tool. There is an entire notebook culture that tries to convince you otherwise. I currently use a $35 Art Collection Moleskine notebook because it was the only one with dot-grid paper I could find on New Year’s Eve (the McNally Jackson bookstore has a wide selection of notebooks, but it seems to categorically reject dot-grid paper). At more than 20 cents per page of 120 gsm paper, it makes you wonder whether what you want to write down is worth the paper. Honestly, I’m looking forward to being done with it and using a more reasonable notebook.
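The cost-per-page figure can be sanity-checked with a little arithmetic. A minimal sketch, assuming a hypothetical page count (the post states only the $35 price, not how many pages the notebook has):

```python
# Rough cost-per-page check for a notebook purchase.
# price_usd is the $35 mentioned above; the page count is an
# assumed value for illustration, not stated in the post.
price_usd = 35.00
pages = 160  # hypothetical page count

cost_per_page = price_usd / pages
print(f"${cost_per_page:.2f} per page")
```

With an assumed 160 pages, a $35 notebook works out to about 22 cents per page, consistent with the "more than 20 cents" figure above.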

 

from Zéro Janvier

The Darkest Road is a novel published in English in 1986. It is the third and final volume of The Fionavar Tapestry, a fantasy trilogy by Canadian author Guy Gavriel Kay.

The young heroes from our own world have gained power and maturity from their sufferings and adventures in Fionavar. Now they must bring all the strength and wisdom they possess to the aid of the armies of Light in the ultimate battle against the evil of Rakoth Maugrim and the hordes of the Dark.

On a ghost-ship the legendary Warrior, Arthur Pendragon, and Pwyll Twiceborn, Lord of the Summer Tree, sail to confront the Unraveller at last. Meanwhile, Darien, the child within whom Light and Dark vie for supremacy, must walk the darkest road of any child of earth or stars.

I won't draw out the suspense any longer: this third volume is even better than the previous ones and concludes the trilogy masterfully. The first two volumes were already full of great moments, but they also laid the foundations for an epic and moving conclusion. That pays off completely in this third volume: the stakes are colossal and, above all, having grown attached to the characters, I was all the more moved by what happens to them and by the choices they make.

Those choices deserve a word, because choice is a major theme of the trilogy, underlying the first two books and fully revealed in this final volume. The question of free will versus destiny is central to Guy Gavriel Kay's narrative. His characters sometimes seem locked into an inevitable fate, yet they make choices. Sometimes difficult, sometimes painful, sometimes tragic. Sometimes there are only bad choices, and one must pick between two evils. Sometimes one must know when to relinquish power. Or to sacrifice one's life for the lives of others.

I remember the opening chapters of the first novel: I was intrigued, already a little spellbound, but not necessarily won over by the protagonists the author put on stage. Today, having turned the last page of the final volume, I can see how far I have traveled with all these characters, whom I learned to love and whom I will remember for a long time. I will also keep the memory of those so-called "secondary" characters who are nonetheless so memorable: Matt Sören, Galadan, Darien, Finn, and of course Diarmuid.

What began as a classic epic fantasy tale, strongly inspired by Tolkien with a dose of Narnia and of Arthurian legend, turned out to be a cycle of very high quality, served by an impeccable and spellbinding style. After the first volume I sensed that this trilogy was one of the rare ones that might not suffer from comparison with Tolkien's work: I am delighted to be able to confirm that today.

 

from Faucet Repair

22 April 2026

Image inventory: fuzzy figure on a street from above through a magnifying glass, a calligraphic graffiti of the letter B on the tube, the point of a man's mohawk on his neck approaching the apex of a mandala-like tattoo on his back, an arching tree canopy over a street receding downhill into a distant cluster of homes (near Crystal Palace Park), the tail of a concrete lion outside the British Museum, a peeling billboard of a billboard, at the top of a hill a yellow to red gradient sculpture (yellow and orange vertical steel beams leaning against a red one), dead fish stacked vertically in bowls on a table at a farmer's market, a spider web spanning a hole in a brick wall, a small wire dragonfly sculpture, a street intersection (stark shadows) from above, a mouse running across tube tracks.

 
