Want to join in? Respond to our weekly writing prompts, open to everyone.
from drpontus
Generative AI has been deployed at scale before societies had time to understand its consequences. In only a few years, experimental models have rapidly become embedded in education, healthcare, media, public services, and daily communication. The dominant narrative calls this progress. In reality, it is a transfer of influence and control over information and decision-making to a handful of Silicon Valley-based technology companies.
That is why AI governance is the most pressing issue of the AI era. Without it, we are not shaping this technology. We are allowing a handful of private actors to shape our economies, institutions, and public discourse.
AI governance is often misunderstood as simple regulation or compliance. It is neither bureaucratic overhead nor a brake on innovation. AI governance is the system of rules, institutions, technical safeguards, and accountability structures that determine how AI is designed, deployed, evaluated, and controlled.
It includes:
In short, governance determines whether AI serves society — or whether society becomes a testing ground for tech companies.
The latest LLM-powered AI technology has advanced at a pace that far outstrips our ability to regulate it. Companies release increasingly powerful models into public and institutional use while the systems that should verify safety, evaluate societal risks, and assign responsibility remain fragmented or nonexistent.
This gap is structural. The current economic incentives of the AI industry reward rapid deployment, market capture, and scale. Safety, transparency, and accountability are viewed as slowing down that process. As a result, governance is treated as a secondary concern – something to be added after technologies are widely adopted and dependencies are already established.
History suggests this is the most dangerous phase of technological development. Industrialization, pharmaceuticals, aviation, and nuclear power all demonstrate the same pattern: early expansion without oversight creates systemic risk that later requires costly and reactive regulation. AI is following that trajectory, but at unprecedented speed and scale.
AI development has been driven by impressive demonstrations rather than proven reliability or societal readiness. The risk is not that AI exists, but that it is deployed in complex social systems before we understand how to control it.

Generative AI is often described as a productivity tool. That framing obscures its real impact. AI systems increasingly shape information flows, language use, economic opportunities, and decision-making processes. They are becoming a layer of societal infrastructure.
Infrastructure carries power. Whoever controls it influences communication, knowledge production, and public services. When AI infrastructure is controlled by a small number of private actors, governance is no longer just about technology. It becomes a question of democracy, sovereignty, and institutional resilience.
For smaller countries like Sweden, the issue is particularly acute. Reliance on external AI systems creates strategic dependency. If AI becomes foundational to education, public administration, healthcare, and communication, access to and control over these systems become as critical as energy or telecommunications infrastructure.
AI sovereignty is therefore basic risk management and a resilience issue. It ensures that democratic societies retain the capacity to govern technologies that shape their citizens’ lives.
The concept of AI alignment is often framed as a technical challenge: how to make AI systems follow human values. But this framing avoids a fundamental question – whose values?
Alignment cannot be solved inside corporate research labs alone. Human values are negotiated through democratic processes, legal systems, and cultural institutions. Without governance, alignment becomes an internal corporate policy rather than a societal decision.
True alignment requires enforceable standards. It requires transparency about training data, model behavior, and deployment contexts. It requires independent evaluation and public accountability. Without these mechanisms, alignment becomes marketing language rather than a measurable outcome.
The urgency of AI governance is not hypothetical. The consequences of insufficient oversight are already visible across multiple dimensions.
Creative industries face structural disruption as AI models are trained on copyrighted work without consent or compensation. Invisible global labor markets support AI systems through data annotation and content moderation, often under poor working conditions. The environmental footprint of large-scale AI – including energy consumption and water use – remains opaque and largely unregulated.
At the same time, generative AI enables new forms of misinformation, deepfake abuse, and automated manipulation. Legal responsibility for these harms remains unclear. Technology companies increasingly operate as publishers and information intermediaries while avoiding the accountability traditionally required of those roles.
These issues are systemic outcomes of technological scaling without governance frameworks capable of distributing risks and benefits fairly.
Current AI development exemplifies what happens when technological capability outpaces social wisdom about appropriate use. We have created powerful tools for mass content generation without considering whether replacing human creativity with statistical pattern matching serves any purpose beyond reducing labor costs. We have built systems that can mimic human reasoning without addressing whether mimicry advances understanding. We have enabled unprecedented surveillance and manipulation capabilities without establishing boundaries around acceptable applications.
The halo effect that surrounds AI makes these questions difficult to raise. Skepticism about specific deployments gets conflated with opposition to progress. Calls for oversight get framed as obstacles to innovation. Concerns about harms get dismissed as “luddism”. This rhetorical strategy benefits those who profit from unconstrained development while silencing those who bear its costs.
The dominant narrative suggests that governance slows innovation. Evidence from other sectors suggests the opposite. Aviation safety regulations made commercial flight trustworthy. Pharmaceutical oversight made medicine reliable. Environmental regulation drove cleaner industrial technologies.
We would never allow the pharmaceutical industry to self-regulate, yet we permit AI companies to deploy systems affecting hundreds of millions of users without independent oversight, safety testing, or liability frameworks. The comparison is apt. Both industries produce products with significant potential for societal harm. Both require expert evaluation before mass deployment. Yet only one operates under a regulatory regime designed to protect the public.
Governance creates stable conditions for innovation by building trust, ensuring safety, and distributing benefits more broadly. Without governance, technological progress becomes fragile. Public backlash, legal uncertainty, and systemic failures eventually undermine the technology itself.
Responsible AI innovation depends on governance structures that are credible, transparent, and enforceable.

AI governance is often discussed as something governments or regulators must solve. But governance is not only a legal framework. It is also a design discipline and an operational responsibility. Every organization that develops, deploys, or procures AI systems becomes part of the governance ecosystem.
For small and medium-sized enterprises (SMEs), public organizations, and design teams, responsible AI governance is not about building large compliance departments. It is about embedding accountability, transparency, and human oversight directly into how services are designed and delivered.
Six practical pillars can guide that work.
Governance begins with knowing how AI systems behave in real conditions.
What SMEs and organizations can do:
What designers can do:
Good AI design assumes systems will fail sometimes – and ensures those failures are visible, understandable, and recoverable.
Responsible AI requires openness about what the system does, what data it uses, and what risks it carries.
What SMEs and organizations can do:
What designers can do:
Transparency builds trust. Hidden automation erodes it.
AI systems often blur responsibility between developers, vendors, and organizations. Governance requires clarity about who is accountable when things go wrong.
What SMEs and organizations can do:
What designers can do:
Accountability means AI systems are never allowed to operate without human responsibility attached.
Many organizations do not build AI – they buy it. Procurement is therefore one of the most powerful governance tools available.
What SMEs and organizations can do:
What designers can do:
Every procurement decision shapes the AI ecosystem.
AI systems rely on large amounts of human labor and creative content. Governance requires recognizing and respecting that human foundation.
What SMEs and organizations can do:
What designers can do:
Responsible AI should augment human work, not erase its value.
AI systems evolve over time. Governance must evolve with them.
What SMEs and organizations can do:
What designers can do:
Governance is not a one-time checklist. It is an ongoing responsibility.
For SMEs and designers, AI governance is not simply risk management. It is a competitive and ethical advantage. Organizations that design transparent, accountable, and trustworthy AI services build stronger customer relationships, reduce legal risk, and create more resilient products.
Responsible AI design also aligns with long-term innovation. Systems that users understand, trust, and control are more likely to be adopted sustainably.
Governance is therefore not only about avoiding harm. It is about designing technology that earns trust and creates lasting societal value.
Every major technological shift forces societies to decide what kind of future they are willing to build. Generative AI is no exception. It carries enormous potential: it can strengthen public services, accelerate scientific discovery, and expand access to knowledge. But it also carries the capacity to concentrate power, erode cultural diversity, destabilize labor markets, and weaken trust in information systems.
Governance determines which of these futures becomes reality.
Too often, the debate around AI is framed as a race – a competition between nations, companies, and institutions to develop more powerful systems faster than everyone else. But the real race is not technological. It is moral and institutional. It is the race between capability and responsibility.
Right now, technological capability is accelerating rapidly. Responsibility is not.
Dutch historian and author Rutger Bregman describes moral ambition as the willingness to dedicate talent, resources, and political will to solving humanity’s most urgent and complex problems. Moral ambition rejects the idea that the most capable actors should simply pursue profit, prestige, or technological dominance. Instead, it asks what those actors owe to society.
Artificial intelligence demands precisely this kind of ambition.
Developing systems that shape language, information flows, education, public administration, and democratic discourse is not a neutral technical exercise. It is an act that redistributes power across society. And power, when left ungoverned, rarely distributes itself fairly.
The question is no longer whether AI will influence our future. It already does. The question is whether we will take responsibility for guiding that influence.
Responsibility means acknowledging that technological progress does not automatically produce social progress. It means accepting that safety, fairness, sustainability, and democratic accountability must be designed into AI systems deliberately. It means building institutions capable of auditing, regulating, and shaping technologies that are increasingly embedded in everyday life.
Most importantly, responsibility means rejecting the idea that governance is an obstacle to innovation. Governance is what makes innovation legitimate, sustainable, and worthy of public trust.
History offers a clear lesson. The Industrial Revolution created unprecedented wealth and productivity – but it also produced exploitation, inequality, and social upheaval. The benefits society now associates with industrialization did not emerge from technology alone. They emerged from labor movements, democratic reform, public regulation, and collective demands for fairness and safety.
The same is true for AI.
If we want artificial intelligence to strengthen democracy, improve working life, preserve cultural and linguistic diversity, and contribute to a sustainable future, then governance cannot remain reactive or symbolic. It must be proactive, evidence-based, and democratically grounded. It must be built with the same seriousness and ambition that currently drives technological development itself.
Artificial intelligence may shape the future. But responsibility will decide whether that future is worth living in.
from hex_m_hell
I met a guy in a Dutch class I was taking. He was a cook and he used ChatGPT to get recipes. I explained what ChatGPT actually was and it kind of blew his mind. He had thought that it was some kind of magic search engine that would look through all the information on the Internet to answer your questions. It is not even close to that.
If you, or someone you know, thinks that ChatGPT or any “AI” “knows” things, or “finds answers,” or “figures things out,” then this post is for you. You have been sold something that isn't quite what it seems.
I'm going to do my best to explain this in as similar a way as possible to the first time I explained it. This means I'll simplify things a bit. When I throw in technical language or details, it will only be so that you can dig deeper if you want. It won't be necessary to understand anything in depth to grasp the general concept.
I'm adding a bit more context than when I first explained it. But I'm also going to try to keep this as short as possible, because it's actually a lot less complex to understand at a high level than you might have been led to believe.
We're going to start simple, and build up. Imagine you counted each letter in this script.
Now you turn those into statistics.
from string import ascii_uppercase

fname = "what_an_llm_really_is.org"
with open(fname) as fd:
    script_text = fd.read().upper()
# Count each letter, then turn the counts into probabilities.
counts = {l: script_text.count(l) for l in ascii_uppercase}
total_letters = sum(v for v in counts.values())
probabilities = {k: v / total_letters for k, v in counts.items()}
return probabilities
| A | 0.07698545710115154 |
|---|---|
| B | 0.014110396280477916 |
| C | 0.030869870789857815 |
| D | 0.0335189490187598 |
| E | 0.11493755744174732 |
| F | 0.01816510785532789 |
| G | 0.02659890793101584 |
| H | 0.036113964426663786 |
| I | 0.06368600313564361 |
| J | 0.0022706384819159865 |
| K | 0.009460993674649943 |
| L | 0.04854841325620371 |
| M | 0.02713953614099584 |
| N | 0.06406444288262962 |
| O | 0.07979672379304752 |
| P | 0.025085148943071848 |
| Q | 0.001351570524949992 |
| R | 0.062064118505703626 |
| S | 0.07363356219927555 |
| T | 0.10174622911823539 |
| U | 0.030923933610855815 |
| V | 0.014218521922473915 |
| W | 0.02297669892414986 |
| X | 0.0038925231118559765 |
| Y | 0.017191977077363897 |
| Z | 0.0006487538519759961 |
Your highest probability letters would usually be “ETOANRISH”, but for us it's “ETOSARINL”. Not far off.
'J' makes up about 0.2% of letters, while 'Q' and 'Z' are both under 0.15%. This isn't too far off the letter frequencies we'd expect from typical English text. 'S' is a bit more common than we'd expect. There are some things skewing these stats for this specific text, but that will probably become obvious later.
Great.
If we take those stats we can generate “words.” Let's just choose 10 letters and see what happens…
from random import choices
from string import ascii_uppercase

fname = "what_an_llm_really_is.org"
with open(fname) as fd:
    script_text = fd.read().upper()
counts = {l: script_text.count(l) for l in ascii_uppercase}
# Put each letter in the "hat" as many times as it appears, then draw 10 at a time.
hat = ''.join(l * c for l, c in counts.items())
return [''.join(choices(hat, k=10)) for _ in range(5)]
['ANGOUCGOUE', 'SNEKEEOHRE', 'TYEOWTOPLR', 'THUGESTCNG', 'TEOEVSGINE']
'SNEKEEOHRE'
That doesn't really sound a lot like English, does it? Neither does 'ANGOUCGOUE' or 'TYEOWTOPLR.' 'THUGESTCNG' at least includes an English word, so we're not too far off the mark. Perhaps we can improve this by including a bit more information. What if we get the average word size?
fname = "what_an_llm_really_is.org"
with open(fname) as fd:
    script_text = fd.read().upper()
# Walk through the text space by space and tally how long the "words" are.
last_offset = 0
offset = script_text.find(" ")
lengths = {}
while offset > 0:
    length = offset - last_offset
    if length not in lengths:
        lengths[length] = 0
    lengths[length] += 1
    last_offset = offset
    offset = script_text.find(" ", offset + 1)
word_count = sum(v for v in lengths.values())
length_stats = {l: c / word_count for l, c in lengths.items()}
return length_stats
| 8 | 0.056666666666666664 |
|---|---|
| 5 | 0.12333333333333334 |
| 3 | 0.12111111111111111 |
| 4 | 0.15 |
| 9 | 0.028888888888888888 |
| 7 | 0.06666666666666667 |
| 2 | 0.11333333333333333 |
| 6 | 0.07777777777777778 |
| 10 | 0.012222222222222223 |
| 12 | 0.023333333333333334 |
| 11 | 0.008888888888888889 |
| 16 | 0.005555555555555556 |
| 30 | 0.0044444444444444444 |
| 1 | 0.12555555555555556 |
| 18 | 0.006666666666666667 |
| 20 | 0.01888888888888889 |
| 24 | 0.0033333333333333335 |
| 14 | 0.01 |
| 19 | 0.011111111111111112 |
| 17 | 0.0011111111111111111 |
| 23 | 0.0011111111111111111 |
| 59 | 0.0011111111111111111 |
| 21 | 0.008888888888888889 |
| 22 | 0.005555555555555556 |
| 13 | 0.006666666666666667 |
| 26 | 0.0011111111111111111 |
| 56 | 0.0011111111111111111 |
| 43 | 0.0022222222222222222 |
| 54 | 0.0011111111111111111 |
| 15 | 0.0022222222222222222 |
Cool. Now let's generate some “sentences.”
from random import choices

# length_stats and script_word_stats are the tables returned by the earlier blocks.
len_pop = [i[0] for i in length_stats]
len_weights = [i[1] for i in length_stats]
letter_pop = [i[0] for i in script_word_stats]
letter_weights = [i[1] for i in script_word_stats]
lengths = choices(len_pop, weights=len_weights, k=5)
return ' '.join(''.join(choices(letter_pop, weights=letter_weights, k=l)) for l in lengths)
'N GITE SDNAT YPDSRGTC SWONRTLOEA'
Rad. Death metal lyrics.
We're getting a bit closer. If you run it a bunch sometimes short words will come out like “I.” You might even get an “AM” or “DOG” every now and then, but it mostly sounds like something uttered by a Lovecraftian horror. But… it's getting a bit closer, isn't it? Little by little.
It's not hard to make things that are even more like words.
We know that there are more rules to English spelling than letter probability. I bet you could recite one:
“I” before “E” except after “C.”
It does seem to “know” that some words are only one letter but it doesn't “know” that “N” is not one of them. And what about other letter pairs? There are a lot of pairs of letters that are common in English, like “th.”
We really need to be using more information. We could probably massively improve things just by tracking the probability of the next letter based on the current letter. That would capture things like “th” and “ie.” If we tracked two letters, that would also give us the “CEI” thing.
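Just to make that concrete, here's a tiny sketch of my own that only counts which letter tends to follow which. It builds the table of probabilities but doesn't generate anything yet:

from collections import Counter, defaultdict
from string import ascii_uppercase

with open("what_an_llm_really_is.org") as fd:
    text = fd.read().upper()

# Count how often each letter follows each other letter.
pairs = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    if a in ascii_uppercase and b in ascii_uppercase:
        pairs[a][b] += 1

# Turn the counts into probabilities: given the current letter,
# how likely is each possible next letter?
next_letter_probs = {
    a: {b: c / sum(counts.values()) for b, c in counts.items()}
    for a, counts in pairs.items()
}

# For example, 'H' should be one of the most likely letters after 'T'.
print(sorted(next_letter_probs['T'].items(), key=lambda kv: -kv[1])[:5])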
Well, this is actually a thing and it has a name. It's called a “Markov model.” It's a bit more complicated than my basic statistics example, so I'm not going to write one up. Instead I'm going to use someone else's.
git clone https://github.com/ggouzi/markov-word-generator.git
You can download this markov word generator if you would like. It can produce words like…
from markov_word_generator import MarkovWordGenerator, WordType

generator = MarkovWordGenerator(
    markov_length=2,
    language="en",
    word_type=WordType.WORD,
)
return '\n'.join(generator.generate_word() for _ in range(10))
virs lep undingers perstorable posedgentionating whadlistubtly sed pes delingeoulle marks
These sound much more like real words. Sometimes it even makes real words like “marks.”
Great.
The more previous letters we use to determine the statistical probability of the next letter, the more often we produce real words.
from markov_word_generator import MarkovWordGenerator, WordType

generator = MarkovWordGenerator(
    markov_length=5,
    language="en",
    word_type=WordType.WORD,
)
return '\n'.join(generator.generate_word() for _ in range(10))
sinner offprintential novices chapman downfall unpoetic ironwork sidest acupunctuary penicilling
Now what if instead of words we build this type of model for sentences?
Turns out there's code for that too. Let's use it on some of my writing.
pip install markovify --break-system-packages
from os.path import expanduser
from glob import glob
import markovify

# Feed all of my published posts into a sentence-level Markov model.
text = ""
for fn in glob(expanduser("~/Writing/posts/posted/*.org")):
    with open(fn) as fd:
        text += fd.read()
text_model = markovify.Text(text)
return text_model.make_sentence()
He told me how close the bullet had been sliced up and was having lemon poured over the world.
Well that's strangely dark. It doesn't completely make sense, but it does sound almost human. If you've read my personal stuff, you may recognize the vibe. But hey! It's much better than those sentences we started with earlier, isn't it?
Maybe we should run it on this script to get something a little less dark.
import markovify

with open('what_an_llm_really_is.org') as fd:
    text = fd.read()
text_model = markovify.Text(text)
return text_model.make_sentence()
Basil shows up a lot of things that simply can't be represented with statistics.
Hey, it almost sounds like something I've said. I mean, technically I did say it… just then…
Let's compare our first sentence with this one to see how far we've come:
'N GITE SDNAT YPDSRGTC SWONRTLOEA'
“Basil shows up a lot of things that simply can't be represented with statistics.”
As we add more and more statistical data we start to get things that seem more and more “real” or “alive.” Even making small changes to the models, still only using basic statistics, can give us really significant improvements in behavior.
The differences between the first sentence and the second all come from the amount of statistical data we're capturing, and from the way we're “tokenizing,” or breaking up the things we're doing statistics on.
Now, if you extrapolate from the difference between the first model, using raw letter counts, and the second, based on a Markov model, you start to see how a much more complex statistical model could give you output like you see from LLMs like ChatGPT.
But none of this is knowledge. This program isn't “thinking about things I wrote and coming up with a sentence based on them.” It's a simple statistical operation. The output is essentially random, constrained by the weights we calculated from the input data.
An LLM is just a very complex statistical model built using unimaginably large amounts of data, to produce an unimaginably large statistical model.
A simple auto-complete can do a pretty good job of guessing what you might want to type next by using a Markov model, like the one we've already used, to generate options. If you start writing on your phone and then hit auto-complete a bunch of times, you'll probably get a sentence that's not too different from the one we generated with the Markov model just now.
And that's basically what an LLM does. It takes your input, shoves a special prompt at the beginning, and passes it through the statistical model to produce some output based on your input. It's the world's most advanced auto-complete.
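If it helps to see that as code, here's a toy sketch of the loop. The hard-coded probability table below is a stand-in for the model; a real LLM computes those next-token statistics from billions of learned weights, so none of this is any actual system's code:

import random

# A toy stand-in for the statistical model: given the tokens so far, return a
# probability distribution over possible next tokens. A real LLM does this with
# billions of learned weights instead of a hard-coded table.
def next_token_probabilities(tokens):
    table = {
        "Assistant:": {"Here": 0.6, "Sure,": 0.4},
        "Sure,": {"Here": 1.0},
        "Here": {"is": 1.0},
        "is": {"a": 1.0},
        "a": {"recipe.": 0.5, "salad.": 0.5},
    }
    return table.get(tokens[-1], {"<end>": 1.0})

def generate_reply(user_input, max_tokens=20):
    # Shove a hidden prompt in front of whatever the user typed.
    tokens = ("You are a helpful assistant. User: " + user_input + " Assistant:").split()
    output = []
    for _ in range(max_tokens):
        probs = next_token_probabilities(tokens)
        # Sample the next token from the statistics, same idea as the Markov model.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        if token == "<end>":
            break
        tokens.append(token)
        output.append(token)
    return " ".join(output)

print(generate_reply("Please give me a good salad recipe."))

Swap that little table for a statistical model trained on a staggering amount of text and you have the basic shape of the thing.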
It's advanced, using really incredible math, but it's not smart. It's just ducking auto-complete.
Ducking.
Duck.
duck…
So let's go back to the recipe thing.
>>> Please give me a good salad recipe that uses wawai (sea mango).
Wawai, also known as sea mango or mangrove mango, is a unique fruit found in tropical regions. It has a distinct flavor and texture compared to regular mangoes and can be used in various dishes, including salads. Here’s a simple yet refreshing salad recipe that incorporates wawai:
### Ingredients:
- 2 ripe wawai fruits
- 1 red onion, finely chopped
- 1 cucumber, thinly sliced
- 1 cup fresh cilantro leaves
- Juice of 1 lime
- 2 tablespoons olive oil
- Salt to taste
### Instructions:
- *Prepare the Wawai:* Wash and peel the wawai fruits if necessary. Cut them into small pieces.
- *Chop the Onion:* Finely chop the red onion.
- *Slice Cucumber:* Thinly slice the cucumber.
- *Chop Cilantro:* Finely chop the cilantro leaves.
- *Combine Ingredients:* In a large bowl, combine the wawai, onion, cucumber, and cilantro.
- *Add Lime Juice and Olive Oil:* Pour in the lime juice and olive oil over the salad ingredients. Mix well to ensure everything is coated evenly.
- *Season with Salt:* Season with salt to taste.
This salad makes for a light and refreshing meal perfect for hot days or as a side dish. Enjoy!
“Sea Mango” or “wawai” is Cerbera manghas. It's deadly. It's literally poison. Do not make this recipe. If you ate this, you would die.
People who pay attention to popular culture may know it from “White Lotus.” Yeah. That's the plant this LLM just gave me a recipe for. This is suicide salad. Don't make this.

So that was obviously cheating a bit. I'm using a smaller model, one that I can actually run on my computer. It doesn't have all the guard rails that ChatGPT and Google have developed based on their repeated failures.
Google's AI will warn you that this is poisonous. After people point out some off-the-wall thing like this, LLM companies like Google tune their AI to provide warnings or fix obvious errors. We don't know how many other pieces of deadly advice still exist, or how many other ways you can run into the same deadly advice. These “fixes” don't actually fix everything because they can't. (That would be impossible... but explaining that is much harder, so we'll skip it for now.) These adjustments just skew the stats a bit so you're more likely to get some kind of warning if you ask how to do something dangerous.
So an LLM can use statistical representations to generate something that actually may work as a real recipe. If you want something with pasta, it may recommend tomatoes. Basil shows up a lot with tomatoes, especially in pasta, so it will probably show up too. Salt is in almost everything, so you'll probably get salt. You can start to see how a recipe can come together from just stats.
You can definitely catch the “vibe” of a recipe with a statistical model alone, just like you can catch the vibe of English from a Markov model. That doesn't mean it will be the best, or even be especially good.
What you're not getting is a “hand crafted recipe just for you,” or “the best recipe that a super-intelligence could possibly produce.” It doesn't understand flavors, or how to blend them, or what cheese smells like. It's just randomly creating text with similar statistical properties as the input.
It may well be like if someone who had never tasted a type of cuisine read a whole bunch of books about it and then made a recipe. Like when Dutch Hello Fresh gives you a recipe for Mexican food and it uses pasta sauce instead of salsa.
You probably won't die, but you probably won't find anything really innovative. It may not taste bad, but it also may not be any good.
Now, I said that LLMs don't “search the Internet to find answers.”

That's true, but it can be a bit more complex than that. It can be really hard to square what I've just said with Google's AI search results that reference specific pages. So let me explain that a bit.
There's a thing called “Retrieval Augmented Generation” that weights the statistics a little more heavily towards some input. Google search works exactly the way it has for years, since before the advent of LLMs. Google then takes the top matching pages, already ranked using its existing algorithm, and dumps them into the LLM to skew the stats in favor of that content.
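To make that concrete, here's a rough sketch of the idea. The web_search and llm_generate functions are placeholders I'm making up for illustration; this isn't Google's actual code:

# A hypothetical sketch of Retrieval Augmented Generation (RAG).
# `web_search` and `llm_generate` are stand-ins, not real APIs.
def answer_with_rag(question, web_search, llm_generate, top_n=3):
    # 1. Ordinary search, ranked the same way it was before LLMs existed.
    pages = web_search(question)[:top_n]
    # 2. Dump the top-ranked pages into the prompt to skew the statistics toward them.
    context = "\n\n".join(page["text"] for page in pages)
    prompt = (
        "Answer the question using the sources below.\n\n"
        "Sources:\n" + context + "\n\n"
        "Question: " + question + "\nAnswer:"
    )
    # 3. The LLM still just auto-completes, but now the stats are weighted
    #    toward whatever pages happened to rank highest, game lore included.
    return llm_generate(prompt)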
And with that, (intentional or not) Google creates the really convincing illusion of a smart AI that searches the Internet to answer your questions.
So instead of getting a recipe out of a blender, you get one that's essentially plagiarized from a real human… a real human who won't be able to monetize their work through ads because you've viewed their content from Google instead of from their site.
Wow. Now they're gonna starve. I hope you're happy with yourself.
All of this relies on page rank working correctly. But sometimes you search for something and something… else… comes up.

If game lore happens to be the thing that comes up at the top of your search results, Google's LLM may confidently inform you about a fantasy world. The LLM doesn't know the difference between fantasy and reality. It's all just math. That's it… and a lot of things can't be represented with statistics. Not everything can even be represented with math.
So if some nerds get together and start making jokes about the name collision between “The Doctor” from Star Trek: Voyager and “The Doctor” from Doctor Who, an LLM might…

…come to the wrong conclusion.
These problems get fixed, manually, after they occur. So at any point you could just randomly run into one of these, and you would only know if you start with the understanding that the LLM may well be wrong, and may be wrong quite often.
It may work, for now, for some things. But it's definitely not magic. It's a tool. It can do some interesting things.
LLMs can be really useful for very specific things, like natural language processing. They are far better at processing large amounts of language data than any previous method. That image about “The Doctor” was found quickly using an LLM.

But they're also extremely bad at a lot of things. The more niche the knowledge is, the more wrong it will tend to be. There are a lot of things that simply can't be represented mathematically right now at all, and on those things it is extremely wrong.
For example, computer security really revolves around the difference between how people expect a system to work and the way a system actually behaves. There are some statistical associations between syntax and unexpected behavior, but semantics, the meaning of the thing, is often not even documented, much less available as statistics. If people don't even know to write something down, then a machine definitely can't extract that information from documentation. Security is fundamentally a creative process, which may not even be possible to automate.
Just like how medicine is really about empathy, and law is really about storytelling, and… there are a lot of things we keep being told these statistical models will do. The fact is, they won't. This math won't do that, because it can't.
Not only will LLMs never do these things; we have yet to see any evidence that there is any math, anything we could possibly encode into an algorithm, that can.
And these models will probably just get a lot worse. There are a lot of reasons why, but let's play with our Markov model again.
What happens when you use the output Markov model as input to itself?
from pprint import pprint
from random import choice
from markov_word_generator import MarkovWordGenerator, WordType

wbg = {}
# Start from the built-in English statistics...
generator = MarkovWordGenerator(
    markov_length=1,
    # dictionary_filename="what_an_llm_really_is.dict",
    language="en",
    word_type=WordType.WORD,
)
for g in range(300):
    # ...generate 100 words and keep one as a sample...
    words = [generator.generate_word() for _ in range(100)]
    wbg[g] = choice(list(words))
    # ...then retrain the generator on nothing but its own output.
    with open("tmp.dict", 'w') as fd:
        fd.write('\n'.join(words))
    generator = MarkovWordGenerator(
        markov_length=3,
        dictionary_filename="tmp.dict",
    )
pprint(wbg)
pprint(set(words))
{0: 'madugeroliveerufondefuns',
1: 'nerpranoaliere',
2: 'rdvests',
3: 'tcibederanutrsiorphalat',
4: 'walowadia',
5: 'rdvesteravouslpang',
6: 'sibsteravouslpang',
7: 'esteranoaliere',
8: 'walowadia',
9: 'flinantirs',
10: 'ntirs',
11: 'ntirs',
12: 'rdvesteravouslpang',
13: 'fomont',
14: 'rdvesteravous',
15: 'we',
16: 'we',
17: 'we',
18: 'rdvestere',
19: 'ss',
20: 'cos',
[...]
30: 'cos',
31: 'ss',
32: 'walowadick',
33: 'rdvesteravouslpang',
34: 'walowadick',
35: 'ss',
36: 'walowadick',
37: 'rdvesteravouslpang',
38: 'rdvesteravouslpang',
39: 'ss',
40: 'fomont',
41: 'ss',
[...]
79: 'rdvesteravouslpang',
80: 'rdvesteravouslpang',
81: 'cos',
82: 'rdvesteravouslpang',
83: 'cos',
84: 'cos',
85: 'rdvesteravouslpang',
86: 'cos',
87: 'cos',
[...]
115: 'rdvesteravouslpang',
116: 'cos',
117: 'cos',
118: 'cos',
119: 'cos',
120: 'cos',
121: 'cos',
122: 'rdvesteravouslpang',
123: 'rdvesteravouslpang',
124: 'cos',
125: 'cos',
126: 'rdvesteravouslpang',
127: 'cos',
128: 'cos',
[...]
297: 'cos',
298: 'cos',
299: 'cos'}
{'cos'}
We generated 100 words from an English training set, then generated 100 more from the 100 we generated, and so on. We repeated this 299 times. Each time we chose a random word from the 100 as a sample, to kind of “see how things are going.”
This model is tuned down to 1 character of “attention” (meaning we're only looking at one previous character to figure out the next), so it's not great. “fomont” looks a bit like English. We selected a real word, “we,” on the 15th iteration (the 15th time feeding the output back into itself), and chose it a couple more times. At iteration 20 we produce “cos.” We'll see this again… 'walowadick' is a fun word that shows up on iteration 32. I'm going to have to use it somewhere and see if I can get away with it. It looks a bit like 'walowadia,' which showed up on iteration 8. By iteration 49 we're starting to fall into a pattern of producing either 'rdvesteravouslpang' or 'cos,' and by 127 we're only producing 'cos'.
We can see at the end that every word we output is just “cos” over and over again.
This is what LLMs are starting to do to themselves. Humans are still producing content, sure, but LLMs are also being used to produce content. They can produce way more content, way faster than humans. So LLMs start to consume more and more LLM generated output, in some kind of reverse ouroboros or LLM human centipede. LLM centipede. Slopipede. A walowadick, if you will.
These systems are not just raw statistical models. They're engineered, tweaked, refined to work in very specific ways. They are built to make us trust them, to make us believe what they're saying is legitimate.
But these systems are also feeding off our data. That means we actually can also manipulate their output.
walowadick:
- noun
- LLM quality degradation manifesting from the ingestion of LLM output, repeated over multiple iterations.
- The intentional manipulation of LLM output as a form of collective resistance.
The LLM feedback loop has become a complete walowadick.
- adjective
- having the properties of LLM quality degradation that manifest from LLM feedback loops.
- having the properties of intentional output manipulation as an LLM.
It's not just slop. It's walowadick slop.
If text shows up enough, if it's linked to enough, it will end up in the LLM. It will change the output. Some of this manipulation can be prevented, but words are interesting. Anyone can make up a joke word that can influence LLMs, but those words can't always be removed or fixed.
Words, by their usage, become real. So those who realize what LLMs are and don't want to see them everywhere can resist. That resistance can be encoded in language, and that language can leave an indelible mark in LLMs.
fomont:
- noun
- A fake word that one pretends is a real word, and that ceases to be a fake word when people choose to make it a real word, which they do as a joke.
They saw the 'fnord' and realized that it had thereby become a fomont.
I'm not telling anyone to stop using LLMs. I'm just saying that you should know enough about the tools you use to keep yourself safe. You wouldn't pick up an angle grinder or a circular saw without at least a basic understanding of safety. LLMs can be at least that dangerous.
But if you understand what's really going on, then you're that much less likely to get hurt. I hope this clears a few things up.
Edit:
I assume that there's a real linguistic term that means the same thing as “fomont,” and @etymology_nerd has probably already made a video about it... I guess we'll see.
Also, thanks again to @unchartedworlds@scicomm.xyz for catching typos!
from Lanza el dodo
In January I've barely played anything in person beyond most of a game of Trivial Pursuit on New Year's Eve, which ended when everyone got fed up with the game and the dice and started arguing. And people say board games aren't a way to socialize with friends!
Nor have I played anything very complex on BGA (though I do have open games of Crucero Galáctico and Concordia), and nothing has struck me as great either. In increasing order of difficulty: Ubongo consists of filling a puzzle with polyominoes as fast as possible, so there's no deeper strategy. Just as simple is Booty Dice, where the mechanic of rolling dice, locking some, and being able to re-roll up to twice sets up a fight between pirates to collect the most doubloons. I've only tried it with two players, and the strategy of chasing doubloons or making your rival lose theirs makes no sense when you can just make them lose lives. I don't think it improves much with more players either.
Junk Drawer is another polyomino game, with 4 drawers and 4 scoring criteria, where we place objects in each drawer so that out of each batch of 4 objects at least one item goes into each drawer. It's simple and reminds me of Cartógrafos (Cartographers), though to my mind Junk Drawer is less interesting, since each drawer's scoring is independent (beyond the fact that a piece placed in one drawer can't go in another), unlike Cartógrafos where, besides the colored pencils, you have to manage the tempo of each round and try to optimize contradictory scoring criteria on a single board.
Schotten Totten is a two-player card game with simple rules that requires plenty of bluffing and risk management. On your turn you play a card on your side of one of the 9 zones and draw another. When a zone has 3 cards from each player, you check who has won it, and victory goes to whoever wins 5 zones or 3 consecutive ones. For every mechanic in existence, Knizia has already designed a game that is “pretty good,” with a theme picked at random.
Azure, besides being Microsoft stuff, is an abstract two-player game where you fight via majorities for the favor of 4 figures that grant points and powers. Placing our pieces on the board yields direct points, cards that act as resources to spend on placing more pieces, and discounts on future placements. All of this is a race to reach 25 points, so games are quick and our decisions carry the full weight of the game.
INK is a tile-drafting and pattern-building game, mechanics used by so many games that there's inevitably a new one of these every month. The game is a race to spend all your ink bottles by placing them on your tiles whenever you complete a pattern (having X contiguous spaces of a given color). The twist is that when you meet that condition you can trigger an effect based on the size of that color's region, and that of your 20 starting bottles, half must go on two colors assigned to you at the beginning. The game is simpler in appearance than in practice, since completing large regions isn't always the best strategy and you have to pay close attention to the available effects. By which I mean I'd need to play more, to see if I can figure out what you're actually supposed to do.
Carnuta is a simple set-collection card game where each card has a symbol and a scoring criterion based on sets of symbols, usually from other cards. Cards also have an associated cost, which generally consists of flipping or discarding runes that show a moon on one face and a sun on the other. So we have to alternate between turning sun runes to pay for one card and moon runes to pay for another, all while matching, as far as possible, the symbols in the scoring criteria of some cards with those of the others. It seems like too much paraphernalia for a fairly simple game. For example, the scoring criteria are easier than those of Faraway, and yet each player's turn is more cumbersome and the rules explanation would take about as long, even though Faraway has a more innovative concept.
Dewan is about colonizing a territory, gaining cards and building houses; with housing prices the way they are, this game might come across as offensive. Having houses in different territories lets you complete certain cards that award points. The tension comes from the board being small for the number of houses you need to build, so there will be blockages. Beyond that, though, I think there are games with a similar premise and more interesting missions and effects, such as Kingdom Builder or The Guild of Merchant Explorers.
7 Wonders Dice is the something-and-write game loosely inspired by the 7 Wonders series. It's a sequel set in a theme that is no longer fashionable, as if the next Fast & Furious installment were a musical comedy. If you like both things, or know nothing about either, you'll find it entertaining; otherwise, it's a version that adds nothing new to the genre or to the series.
In my search for a deduction game with more mechanics than pure logical deduction, I tried Alchemists even though it's in alpha on BGA, something I don't usually do because, besides it possibly being an unstable version, winning a game doesn't give you the achievement and therefore no points to redeem for a free month. Hobby tip for staying poor as a church mouse. As for Alchemists: in this horribly ugly game, which combines deduction with action programming, we are alchemists trying to determine which “chemical component” corresponds to each ingredient, which we do by mixing ingredients to obtain potions. The deduction is interesting, although the deductive load feels lighter than in other games because managing your ingredients and coins matters a lot; you need them to buy artifacts and publish theories, which is what gives you points and reputation. You also have to consider what information to reveal, because publishing a theory about which component an ingredient is can solve the rest of the grid for your rivals.
Spirited blends the push-your-luck mechanic with set collection and a bit of board building in a card game. Imagine the centuries-old Happy Families collecting game, the one that would be cancelled nowadays for its racist stereotypes, except now it's furries from ancient civilizations. Each civilization has an effect when its card is played, and the goal is to have many cards of one civilization in your camp, or at least one of each. That earns you points and the favor of a clan. But be careful: if you have three cards of one civilization in your line, you lose part of your current progress. It's a game with several concepts more advanced than its aesthetics would suggest, and if you haven't planned out your combo you can get stuck in a blocked position that is a bit of a slog to escape. I want to play another game, because it would be funny if the authors of El muro de Adriano (Hadrian's Wall), aka filing your tax return, had made a game that is simpler and yet more convoluted at the same time.

Tags: #boardgames #juegosdemesa
from An Open Letter
I’m packing up and it’s rough, but I’m so excited. I just want E to move in with me already.
from Unvarnished diary of a lill Japanese mouse
Journal, 5 February 2026
The two of us talked a lot, of course. A isn't surprised at all. Since she works on sensitive subjects, she had long suspected that we'd been investigated. She's certain they have files on us in their offices, with our names or maybe even code names; why not the tsarina and the little mouse? I hope not neko, at least. As for our phones, please, no need to even open mine, it's been under surveillance for ages. She has a phone and a laptop supplied directly by them. They just wanted to let me know they were watching me, and to see whether I'd notice, that's all. A well-staged little play was put on for me, to see what I'd catch on to and to fatten up my file, that's what she thinks. And on that front, we love each other and there's nothing they can do about it. The only thing that worries her is that if they ask me to go, I won't be able to refuse, and she doesn't like that, but both of us are caught in this story like flies in a spider's web.
from Robert Galpin
the morning ride in the rain — spring still a pipe dream the Tyne boiling mercury to the armpits of the staiths

Today I had an off day of surfing.
The wind was stronger than expected. It was kind of crowded for my spot. Waves were wrapping from the West and peaking, breaking almost perpendicular to shore.
When I first arrived I said a little prayer: “Lord, if you want me to surf, give me a parking spot.” I drove around and, what do you know, a really nice spot near the showers opens up. As soon as I get out of the car I feel the breeze briskly picking up speed, starting to blow side-shore. I wasn’t feeling it. But the late morning was beautiful, a classic-looking south shore of O’ahu kind of day. So I grabbed my camera and took some photos of the sun glistening off the water, Diamond Head in the background, people paddling and swimming in the foreground—a photo that could have existed over a hundred years ago. Took close-up photos of the rocks, testing out a 25-year-old digital camera I got for Christmas, to replace the exact model I had short-sightedly given away years back.
I return to my car and stare at the water. An uncle next to me is gearing up to paddle out. Another uncle, his friend, comes over and they start talking story. I decide to call it and make my way around toward the driver’s side.
“Eh! You going out?!” the other uncle says.
“Nah. Too windy.”
“Can I have your stall, den?”
“Sure.”
“I come back. Get one brown SUV. Eh watch my water bottle while I go gettum.”
As he leaves his water bottle on the curb and walks away I look at his friend and I joke: “he’s very trusting. Gotta watch out for these haoles you know!” I say with a smile. “We known for taking things.”
“Eh,” the first uncle says with a dismissive tone. “All kinds of people can do all kinds of things.”
We get to talking. Richard is his name. I’ve seen him in the water before, but he usually paddles out shortly as I’m heading in. Today I’m at the spot at a later time. He urges me to go out.
“Too crowded. Plus I told your friend I’d give up my spot for him.”
He dismisses this and tells me it’s good and I need to go out. Eventually the brown SUV comes rolling around. I give shakas and say goodbye as I drive away. About five cars down I see another car pulling out. As I drive past I can’t shake the feeling that this is all God’s way of telling me that I need to paddle out. So I loop around and pull into the other spot. This one is actually better because it has more shade. I pull down my 11-foot glider (pretty much my exclusive board for the past three years), sunscreen, wetsuit vest zipped up. And I walk over to the cut between rocks where I can paddle out. I see Richard and I tell him that he convinced me to change my mind. He gives a loud approval.
I make the paddle in good time. The crowd thinned a bit in the interim. Waves have power. I see a few familiar faces, folks I did not expect to see in the water because they’re usually out at my normal time. I see a wave on the horizon, taking shape. I whip my board around and paddle. I feel the momentum taking me so I hop to my feet. The wave is beginning to break in front of me, so I go to fade left and surf on my back-hand. But there’s no face there. The wave is a strict right. So I fade back to front-side and try to get into it. I squat and begin scooping at the water, hoping to pick up more speed, but no dice. So I paddle back out, chuckling to myself.
After a while I see a set forming on the horizon. No one seems to be going for it, so I spin around and start paddling. I easily catch the wave and drop in, going right. I squat a bit in the face and then stand to adjust my position, dropping down the face in order to carve my way back up. But I see that it’s walling up too far ahead and is going to close out. So I fade back left to see another closing section coming behind me. So I turn to go straight and ride out the whitewater. But I get caught between two breaking sections, the foam engulfing my board and I feel the force underneath me. I get knocked off my board and plunged under the foam. I feel the chaos of the colliding waves rolling over me and I surrender to the current. Once the wave fully passes I surface. Another wave is breaking, but I have enough time to take stock of my surroundings and know that my board has made its way toward shore, pretty far from my location.
So I start swimming.
At this point I should probably note that I prefer to surf without a leash. Unless it’s particularly big and/or crowded, I’ll forego having a urethane cord dancing about my feet. Leashes can give us a false sense of security. They can and will fail, and so we need to be prepared to swim when that eventually happens. Plus, leash-free surfing forces one to be more intentional in their surfing, as well as cognizant of one’s board.
It’s been a while since I’ve had a long swim for a board. Since I’m wearing a wetsuit vest, I have some buoyancy and get better results from flipping on my back and kicking my way toward my board. I hold my breath and descend under the whitewater, wait for the roll of the wave to wash over me, return to the surface, and then kick my way again.
There’s always a threat of panic in the back of my mind when I have to swim for a board. I’m pretty far from the beach where I surf and there’s a lot of water. Also infrequent tiger shark sightings. But I keep myself calm. Eventually I see that an off-duty lifeguard who surfs my spot has retrieved my board. I thank him and grab it. I bob on the inside, considering the time and effort it would take to get my leash. Nah. I’ll paddle back out.
As I’m nearing the outside, I see the lifeguard wipe out. His big yellow board is bouncing among the whitewater, making its way to shore. He, too, is not wearing a leash. So I turn my board around and grab some whitewater and make my way to where his board is bobbing on the shallow reef. I grab it and start paddling in his direction. He gets it. I tell him we’re even. We both laugh and paddle back out.
By the time I make it back outside, I’m getting tired. I tell one of the uncles I know that I got my swim in for the day and he laughs. The wind has significantly picked up and is blowing almost onshore. After a time I see another wave making its way toward me. It’s mine. I paddle and begin to make the drop a bit later than I was expecting. So I grab the rails and decide to ride it on my belly. The speed is unreal. I’m constantly on the verge of being rolled over, but I keep my composure and let myself fly toward the beach. I decide that I’m not about to paddle back out. This will be the ride, for what it’s worth.
The wave peters out in the shallows of the reef. The tide is nearly dead low, which means that I’ll have to be careful not to let my fin hit anything.
I’m a good surfer. I’ve been at it for 26 years. I get long nose rides on the well-formed South Shore faces. I drop in and run my hands along the face of the waves. I’ve even garnered compliments for my ability to hit the lip with an 11-foot board, on occasion. I’ve shaped boards, ridden a variety of designs. I know the mythology and the legends. I know surfing inside and out.
And I still have off days.
Blessed be the off days.
That saying came into my mind as I carefully paddled over the shallow reef. A large honu (sea turtle) popped its head up next to me. “Hey, cuz!” I said. It swam directly under my board.
This past Sunday we heard Jesus give the Beatitudes. There’s a tendency to read the Beatitudes as Jesus giving us a list of rewards: “be a peacemaker, get a blessing; put up with grief and persecution; get a blessing.” But Jesus is actually saying that peace-making, grieving, being persecuted, being poor in spirit, etc. are themselves blessings. In the Greek language that Matthew’s gospel was maybe first written in, the Beatitudes are in what’s called the “indicative mood.” Meaning that the blessings are indicated by the other stuff. The blessings aren’t rewards for doing certain things.
This idea translates broadly. An off-day of surfing is a blessing, if I choose to see it. Blessed be the off days, because they help you appreciate the better days. Or, Blessed be the off days, because they make you a better swimmer.
I didn’t get to have a morning of beautiful glides on my huge board. I didn’t get to run to the nose and hang ten on a perfectly groomed wave face. I didn’t even get to drop in while squatted down, feeling the cool water with my fingers as I experience the thrill of dropping into the face of a wave and setting myself up for an elegant bottom turn to set my rail and just… go.
Nope. I got wiped out. I swam a lot. I got skunked on wind-blown waves that were both somehow mushy and strong.
But I got in the water. I learned that I’m finally mature enough to appreciate even the days where my surfing kinda sucks.
Blessed be the off days, indeed.
***
The Rev. Charles Browning II is the rector of Saint Mary’s Episcopal Church in Honolulu, Hawai’i. He is a husband, father, surfer, and frequent over-thinker. Follow him on Mastodon and Pixelfed.
#Surfing #Reflection #Ocean #Theology #Jesus #Church #Hawaii #Oahu
from DocuNero
When invoice automation tools first entered the market, they promised a simple outcome: fewer spreadsheets, less typing, and faster processing. OCR-based systems became the default choice, and for a while, they delivered real value.
Yet many finance teams today still find themselves manually checking totals, fixing line items, and validating tax values. Automation exists — but manual work hasn’t disappeared.
The question is why.
OCR was built to convert scanned documents into readable text. According to Wikipedia’s explanation of optical character recognition, the technology focuses on identifying characters and words from images, not interpreting meaning or relationships between values.
That limitation becomes obvious with invoices.
An OCR engine may correctly read every number on a page, yet still fail to determine which amount represents tax, which is a subtotal, or whether the final total actually adds up. The document is readable, but the data isn’t trustworthy.
Most OCR-driven workflows look automated on paper. In reality, human checks creep back in at critical points.
Someone verifies totals before posting. Someone fixes misaligned line items. Someone confirms tax calculations.
These steps exist because the system extracts data but doesn’t validate it. Automation removes typing, but it doesn’t remove responsibility — it simply shifts it.
AI-based invoice processing approaches the problem differently.
Instead of only asking what text appears on a document, AI evaluates whether the information makes sense together. It understands invoice structure, recognizes patterns across documents, and validates relationships between fields automatically.
This turns invoice processing from a transcription task into a decision-aware workflow.
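As a simplified illustration of what that can look like (a generic sketch, not DocuNero's actual implementation), field-level validation can be as direct as checking that the extracted amounts agree with each other:

from decimal import Decimal

# Generic consistency checks between extracted invoice fields.
def validate_invoice(fields, tolerance=Decimal("0.01")):
    issues = []
    subtotal = Decimal(fields["subtotal"])
    tax = Decimal(fields["tax"])
    total = Decimal(fields["total"])

    # Do the extracted line items actually add up to the stated subtotal?
    line_sum = sum(Decimal(item["amount"]) for item in fields["line_items"])
    if abs(line_sum - subtotal) > tolerance:
        issues.append(f"Line items sum to {line_sum}, but subtotal reads {subtotal}")

    # Does subtotal plus tax match the final total?
    if abs(subtotal + tax - total) > tolerance:
        issues.append(f"Subtotal {subtotal} plus tax {tax} does not equal total {total}")

    return issues

invoice = {
    "subtotal": "100.00",
    "tax": "21.00",
    "total": "121.00",
    "line_items": [{"amount": "60.00"}, {"amount": "40.00"}],
}
print(validate_invoice(invoice) or "Invoice is internally consistent")

Checks like these are what turn extraction into validation: the system doesn't just read the numbers, it refuses to pass them along when they contradict each other.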
Platforms like DocuNero are designed around this idea, combining OCR with AI-driven validation to extract invoice data while also checking totals, taxes, and consistency before the information reaches accounting systems.
Speed is often highlighted as the main benefit of automation, but confidence is more important.
When finance teams trust their data, downstream processes improve naturally. Reporting becomes cleaner, audits become simpler, and month-end close stops feeling like damage control. The work shifts from fixing errors to using information.
That’s when automation actually delivers on its promise.
OCR wasn’t a failure — it was a foundation.
As invoice volumes grow and financial workflows become more complex, automation must move beyond reading documents and start understanding them. AI doesn’t replace OCR; it completes it.
True automation isn’t about extracting data faster. It’s about trusting the data you extract.
from
Shad0w's Echos
#nsfw #CeCe
As the semester wound down, the first real crisis in CeCe's unapologetic lifestyle loomed like a storm cloud over our city skyline. She was heading home for the summer—back to her family's strict household on the outskirts, where the internet was heavily blocked and censored, monitored by her oppressive parents who still treated her like the sheltered girl she'd once been. They had no idea how far she'd strayed, and CeCe dreaded the thought of being cut off. She didn't want to burn out on studying or summer courses; it'd be nice to have an actual academic break, lounging in the humid heat, maybe catching up on sleep. But she needed her porn—the constant stream of it that had become her lifeline.
She hadn't stopped watching since that first night I'd shown her, not even for a day. In fact, after our heart-to-heart during spring break, it was all she'd been consuming, scrolling endlessly on her phone or laptop, her curvy body always bare or barely covered, fingers working her slick folds as explicit scenes played out. She hadn't worn a bra in months, her full C-cup breasts free under loose tops or nothing at all, nipples often peeking through fabric as if daring the world to notice.
I found her one evening in our dorm, pacing naked, her caramel skin glistening with anxious sweat, thick thighs trembling as she clutched her phone like a talisman. Tears streamed down her face, her usual confidence crumbling into a full-blown breakdown. “Tasha, I can't do this,” she sobbed, collapsing onto the bed, her juicy ass sinking into the mattress as she curled up. “Home's a prison—no porn, no freedom. I can't be naked. They'll watch everything. What if I lose it? I need it, like air. It's my everything now.”
I sat beside her, pulling her into my arms, her bare breasts pressing against me as I stroked her back, calming her the way only I could. “Shh, breathe, CeCe. We'll figure it out. Remember, I can send you stuff—through any chat app, like Telegram. It's encrypted, private. Videos, links, whatever you need. I'll keep you stocked, okay? You won't go without.”
She looked up at me, eyes wide with relief, and nodded, her sobs easing into shaky breaths. But as the days turned into weeks and she headed home, I realized how important she was to me. I was falling for her hard. But right now, I couldn't tell her. Not with the looming transition from freedom to a prison of clothing and morality. I knew that in these moments I just had to be her rock, send her porn, and assure her that everything would be okay.
I sent her porn religiously—curated clips of exhibitionist scenes, gooning sessions, whatever matched her escalating tastes—slipping them through Telegram in the dead of night, careful not to trigger her family's filters. She was dealing with their constant judgment and their suffocating rules. I kept her supplied, feeding this beautiful addiction from afar. Porn was her only anchor amid the oppressive chaos.
That's how I became CeCe's enabler, especially when she didn't have free and open internet. And in the process, I have to admit, I developed my own growing porn addiction—hours spent hunting for the perfect videos to send her, diving deeper into the rabbit hole myself, my nights blurring into a haze of arousal as I knew I had to be in CeCe's world to really be with her. I loved it.
CeCe made it through that grueling summer, thanks in no small part to my covert porn deliveries via Telegram—clips of wild exhibitionist scenes that kept her sane amid her family's stifling rules. When sophomore year kicked off, we scored dorms together again. Full credit went to CeCe and her well-crafted tactics and requests. I don't question her ways. I just like watching her work her magic.
Our dorm for this year was in a taller building on the edge of campus, overlooking the bustling Georgia city streets below. It felt like a fresh start, our little sanctuary where she could let loose without judgment. But CeCe had been up to something super perverted over the break, a secret she hadn't breathed a word about until we were unpacking boxes in our new room.
One evening, as we settled in, she stripped naked as usual—her caramel curves on full display, thick thighs spreading as she lounged on her bed with her phone in hand. That's when she confessed, her voice casual but laced with that familiar thrill. “You know, Tasha, when my parents left the house for errands or whatever, I'd sneak out to the backyard completely naked. Just me, the sun on my skin, watching porn on my phone and masturbating right there in the open air. Fingers deep in my pussy, moaning loud enough for the neighbors to maybe hear if they listened close. It was risky, but God, it felt amazing—wind on my tits, grass under my ass, coming hard while some video played of a girl flashing in public.”
I stared at her, my heart racing, a mix of shock and that twisted arousal she always stirred in me. “CeCe, that's... insane. What if someone saw?”
She laughed, rubbing her clit absentmindedly as she scrolled for her next fix. “I can't stop escalating, Tasha. Being forced to wear clothes indoors all summer? It built up this rebellion in me. When they were gone, I had to get it all out—bare, exposed, free. It's like my body's screaming for more now.” Her eyes sparkled with that calculated recklessness; she wasn't dumb about it—she timed it perfectly, checked the angles, made sure the yard's fences hid just enough. But it was pushing boundaries further than ever.
Then she glanced at our third-floor window, the city lights twinkling below like distant stars. “Hey, can we keep the blinds open at all times? Maybe the window wide open too until it's too cold? Up here, no one's really looking, but the breeze... it'd feel so good on my skin while I watch and play.”
I hesitated, biting my lip, knowing it was another step into her world. But this woman—my fearless, addictive CeCe—was more exciting than any relationship I could imagine with someone else. Guys seemed boring by comparison; she kept life electric, charged with that platonic tension we danced around. “Okay,” I agreed reluctantly, cracking the window open, the humid night air rushing in. “Just... be careful.”
As sophomore year progressed, I made a conscious decision to invest more time in CeCe, pouring my energy into our unique bond. I'd gone on a few dates during freshman year—casual flings with guys from class or apps—but after our raw heart-to-heart over spring break, they all paled in comparison. My mind always drifted back to her, to the electric tension we shared, the way her fearless spirit lit up my world. No one else was as captivating, as alive. CeCe, meanwhile, with my help, had developed a deeper kink.
She wasn't just watching porn anymore; she was becoming what she saw. She wanted to be fully consumed by her porn addiction. Her sessions in our dorm grew longer and more intense, often with the windows open to let the city breeze tease her bare skin. In turn, I normalized it and never judged her. Sometimes I even encouraged her just to see her smile while she was rubbing herself silly into her 4th hour of gooning.
However, I wanted to bring a little bit of balance into her life. I figured it was time to coax her out of her “porn cave,” as I started calling our room. Not big crowds or wild parties—that'd overwhelm her—but simple outings to change the scenery: leisurely walks in quiet parks, cozy corners in small cafes where we could sip coffee and people-watch. “Come on, CeCe,” I'd say, “fresh air might do you good. Balance things out a bit.” To my surprise, she agreed, her eyes lighting up with that mischievous spark. “Sure, Tasha. As long as I can be comfortable.”
Of course, “comfortable” meant going out braless under one of her hoodies—she had ten different colors now, zipped just enough to hint at her full C-cup breasts swaying freely beneath, and it was basically all she wore when leaving the dorms. I expected that much; it was her signature look, rebellious and teasing. But CeCe had something planned she didn't tell me, a perverted twist she'd cooked up in secret.
On our first night out, we headed to a dimly lit cafe a few blocks from campus, the city streets humming with evening traffic. She wore her usual: a loose gray hoodie and tiny black shorts that barely peeked out from under the hem. Halfway through our walk, she pulled me into a shadowed alley, grinning like she'd won the lottery. “Check this out, Tasha,” she whispered, hiking up the hoodie just enough to reveal the truth.
Her shorts? They'd been modified—the crotch completely cut out, leaving a massive hole that exposed her slick, shaved pussy to the cool air. She was essentially naked in public, but still “covered” from a distance. The fabric acted like garters, framing her thick thighs and covering her juicy ass, while ensuring something dangled below the hoodie to mimic normal shorts. Topless underneath, pussy fully on display if anyone got close enough. “See? I can feel the breeze right on my clit, but no one's the wiser. It's perfect—exposed but hidden.”
I sighed, shaking my head, a familiar mix of exasperation and reluctant admiration washing over me. “CeCe... seriously?” But I couldn't stop smiling. Can't stay mad at her. This was just how she was—reckless yet calculated, always one step ahead in her escalation game. True to her word, she started going out more with me, as long as she could rock this setup. We'd stroll through parks, her exposed pussy brushing against the modified shorts with every step, or sit in cafes where she'd subtly rock her hips, savoring the thrill. I went along with it, my silent vow to her holding strong.
Eventually, though, CeCe started escalating this activity too, pushing the boundaries like she always did. One sunny afternoon, we ventured to a secluded park on the city's outskirts, a quiet spot with winding paths and hidden benches surrounded by trees. We found a bench away from the main trails, chatting about classes and her latest porn finds.
Even though I was right there with her while she was gooning, she was so animated and happy when she talked about porn. Sometimes she talked in circles about her obsession. I just got lost in her eyes as she showed me how comfortable and safe she felt with me.
During her passionate discussion about her porn addiction, she did something new in the park. Without hesitation, she spread her legs wide, the cut-out shorts framing her dripping pussy as she slipped her fingers inside, rubbing her clit in slow, deliberate circles while maintaining eye contact. She smiled when she saw my reaction. Then she changed the subject. “So, what do you think about that new engineering prof?” she asked casually, her breath hitching as she plunged deeper, moans mixing with her words.
Instead of reacting or pulling her back, I just carried on the conversation, my voice steady. “He's tough, but fair—way better than last year's. Pass me the water?” At this point, what CeCe did was normal to me; her shameless masturbation, even in semi-public, had become just another part of our rhythm, as familiar as her laugh or the curve of her caramel hips. I corrupted her and in turn, she wanted to corrupt me.
from
SmarterArticles

In May 2024, something unprecedented appeared on screens across Central Asia. A 52-second video in Pashto featured a news anchor calmly claiming responsibility for a terrorist attack in Bamiyan, Afghanistan. The anchor looked local, spoke fluently, and delivered the message with professional composure. There was just one problem: the anchor did not exist. The Islamic State Khorasan Province (ISKP) had produced its first AI-generated propaganda bulletin, and the implications for global security, content moderation, and the very architecture of our information ecosystem would prove profound.
This was not an isolated experiment. Days later, ISKP released another AI-driven segment, this time featuring a synthetic anchor dressed in Western attire to claim responsibility for a bombing in Kandahar. The terrorist organisation had discovered what Silicon Valley already knew: generative AI collapses the marginal cost of content production to nearly zero, whilst simultaneously expanding the potential for audience capture beyond anything previously imaginable.
The question now facing researchers, policymakers, and platform architects is not merely whether AI-generated extremist content poses a threat. That much is evident. The deeper concern is structural: what happens when the economics of inflammatory content production fundamentally shift in favour of those willing to exploit human psychological vulnerabilities at industrial scale? And what forms of intervention, if any, can address vulnerabilities that are built into the very architecture of our information systems?
To understand the stakes, one must first grasp the peculiar economics of the attention economy. Unlike traditional markets where production costs create natural barriers to entry, digital content operates under what economists call near-zero marginal cost conditions. Once the infrastructure exists, producing one additional piece of content costs essentially nothing. A research paper published on arXiv in 2025 frames the central challenge succinctly: “When the marginal cost of producing convincing but unverified content approaches zero, how can truth compete with noise?”
The arrival of large language models like GPT-4 and Claude represents what researchers describe as “a structural shift in the information production function.” This shift carries profound implications for the competitive dynamics between different types of content. Prior to generative AI, producing high-quality extremist propaganda required genuine human effort: scriptwriters, video editors, voice actors, translators. Each element imposed costs that naturally limited production volume. A terrorist organisation might release a dozen slickly produced videos annually. Now, the same organisation can generate thousands of variations in multiple languages, tailored to specific demographics, at effectively zero marginal cost.
The economic literature on this phenomenon identifies what researchers term a “production externality” in information markets. Producers of low-quality or harmful content do not internalise the negative social effects of their output. The social marginal cost vastly exceeds the private marginal cost, creating systematic incentives for information pollution. When generative AI capabilities (what some researchers term “offence”) dramatically outstrip detection technologies (“defence”), the marginal cost of producing harmful content falls precipitously, “systemically exacerbating harm.”
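Put in conventional welfare-economics notation (a gloss on the cited framing rather than a formula from the paper itself), the claim is that for harmful content the social marginal cost exceeds the private one, so an unregulated market over-produces it:

```latex
SMC(q) = PMC(q) + E(q), \qquad E(q) > 0 \;\Rightarrow\; q_{\text{market}} > q_{\text{optimal}}
```

Generative AI pushes $PMC(q)$ toward zero while the external harm $E(q)$ stays large, widening the gap between what producers pay and what societies pay.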
This creates what might be called a market bifurcation effect. Research suggests a “barbell” structure will emerge in content markets: low-end demand captured by AI at marginal cost, whilst human creators are forced into high-premium, high-complexity niches. The middle tier of content production essentially evaporates. For mainstream media and entertainment, this means competing against an infinite supply of machine-generated alternatives. For extremist content, it means the historical production barriers that limited proliferation have effectively disappeared.
The U.S. AI-powered content creation market alone was estimated at $198.4 million in 2024 and is projected to reach $741.1 million by 2033, growing at a compound annual growth rate of 15.8%. This explosive growth reflects businesses adopting AI tools to reduce time and costs associated with manual content creation. The same economics that drive legitimate business adoption, however, equally benefit those with malicious intent.
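A quick sanity check shows the cited figures are internally consistent (2024 to 2033 is nine compounding years):

```latex
\$198.4\text{M} \times (1 + 0.158)^{9} \;\approx\; \$198.4\text{M} \times 3.74 \;\approx\; \$742\text{M}
```

which matches the projected $741.1 million to within rounding.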
The economics of production tell only half the story. The other half concerns distribution, and here the structural vulnerabilities of attention economies become starkly apparent.
Modern social media platforms operate on a simple principle: content that generates engagement receives algorithmic promotion. This engagement-optimisation model has proved extraordinarily effective at capturing human attention. It has also proved extraordinarily effective at amplifying inflammatory, sensational, and divisive material. As Tim Wu, the legal scholar who coined the term “net neutrality,” observed, algorithms “are optimised not for truth or well-being, but for engagement, frequently achieved through outrage, anxiety, or sensationalism.”
The empirical evidence for this amplification effect is substantial. Research demonstrates that false news spreads six times faster than truthful news on Twitter (now X), driven largely by the emotional content that algorithms prioritise. A landmark study published in Science in 2025 provided causal evidence for this dynamic. Researchers developed a platform-independent method to rerank participants' feeds in real time and conducted a preregistered 10-day field experiment with 1,256 participants on X during the 2024 US presidential campaign. The results were striking: decreasing or increasing exposure to antidemocratic attitudes and partisan animosity shifted participants' feelings about opposing political parties by more than 2 points on a 100-point scale. This effect was comparable to several years' worth of polarisation change measured in long-term surveys.
Research by scholars at MIT and elsewhere has shown that Twitter's algorithm amplifies divisive content far more than users' stated preferences would suggest. A systematic review synthesising a decade of peer-reviewed research (2015-2025) on algorithmic effects identified three consistent patterns: algorithmic systems structurally amplify ideological homogeneity; youth demonstrate partial awareness of algorithmic manipulation but face constraints from opaque recommender systems; and echo chambers foster both ideological polarisation and identity reinforcement.
The review also found significant platform-specific effects. Facebook is primarily linked to polarisation, YouTube is associated with radicalisation with particularly strong youth relevance, and Twitter/X emphasises echo chambers with moderate youth impact. Instagram and TikTok remain under-researched despite their enormous user bases, a concerning gap given TikTok's particularly opaque recommendation system.
The implications for AI-generated content are profound. If algorithms already preferentially amplify emotionally charged, divisive material created by humans, what happens when such material can be produced at unlimited scale with sophisticated personalisation? The answer, according to researchers at George Washington University's Program on Extremism, is that extremist groups can now “systematically exploit AI-driven recommendation algorithms, behavioural profiling mechanisms, and generative content systems to identify and target psychologically vulnerable populations, thereby circumventing traditional counterterrorism methodologies.”
Perhaps the most concerning aspect of AI-enabled extremism is its capacity for psychological targeting at scale. Traditional propaganda operated as a broadcast medium: create a message, distribute it widely, hope it resonates with some fraction of the audience. AI-enabled propaganda operates as a precision instrument: identify psychological vulnerabilities, craft personalised messages, deliver them through algorithmically optimised channels.
Research published in Frontiers in Political Science in 2025 documented how “through analysing huge amounts of personal data, AI algorithms can tailor messages and content which appeal to a particular person's emotions, beliefs and grievances.” This capability transforms radicalisation from a relatively inefficient process into something approaching industrial production.
The numbers are sobering. A recent experiment estimated that AI-generated propaganda can persuade anywhere between 2,500 and 11,000 individuals per 100,000 targeted (a persuasion rate of roughly 2.5% to 11%). Research participants who read propaganda generated by GPT-3 were nearly as persuaded as those who read real propaganda from state actors in Iran or Russia. Given that elections and social movements often turn on margins smaller than this, the potential for AI-generated influence operations to shift outcomes is substantial.
The real-world evidence is already emerging. In July 2024, Austrian authorities arrested several teenagers who were planning a terrorist attack at a Taylor Swift concert in Vienna. The investigation revealed that some suspects had been radicalised online, with TikTok serving as one of the platforms used to disseminate extremist content that influenced their beliefs and actions. The algorithm, optimised for engagement, had efficiently delivered radicalising material to psychologically vulnerable young people.
This is not a failure of content moderation in the traditional sense. It is a structural feature of engagement-optimised systems encountering content designed to exploit that optimisation. Research published in Frontiers in Social Psychology in 2025 found that TikTok's algorithms “privilege more extreme material, and through increased usage, users are gradually exposed to more and more misogynistic ideologies.” The algorithms actively amplify and direct harmful content, not as a bug, but as a consequence of their fundamental design logic.
The combination of psychological profiling and generative AI creates what researchers describe as an unprecedented threat vector. Leaders of extremist organisations are no longer constrained by language barriers, as AI translation capabilities expand their reach across linguistic boundaries. Propaganda materials can now be produced rapidly using just a few keywords. The introduction of deepfakes adds another dimension, enabling the misrepresentation of words or actions by public figures. As AI systems become more publicly available and open-source, the barriers to entry for their use continue to lower, making it easier for malicious actors to adopt AI technologies at scale.
Faced with these challenges, platforms have relied on a suite of content moderation tools developed primarily for human-generated content. The most sophisticated of these is “fingerprinting” or hashing, which creates unique digital signatures for known harmful content and automatically removes matches across the platform. This approach has proved reasonably effective against the redistribution of existing terrorist videos and child sexual abuse material.
Generative AI renders this approach largely obsolete. According to research from the Combating Terrorism Center at West Point, “by manipulating their propaganda with generative AI, extremists can change a piece of content's digital fingerprint, rendering fingerprinting moot as a moderation tool.” A terrorist can now take existing propaganda, run it through an AI system that makes superficially minor modifications, and produce content that evades all hash-based detection whilst preserving the harmful message.
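The fragility is easiest to see with cryptographic hashes, the simplest version of the technique (production systems typically rely on perceptual hashes such as PhotoDNA-style signatures, which tolerate small edits but are still defeated when content is regenerated rather than copied). The snippet below is a minimal sketch of a hash blocklist and how a one-byte change slips past it:

```python
import hashlib

# A "blocklist" of digital fingerprints for known harmful files.
blocklist = set()

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint: any change to the bytes yields a new digest."""
    return hashlib.sha256(data).hexdigest()

original = b"<bytes of a known propaganda video>"
blocklist.add(fingerprint(original))

# A trivially modified copy (one appended byte) no longer matches.
modified = original + b"\x00"
print(fingerprint(original) in blocklist)  # True  -> caught
print(fingerprint(modified) in blocklist)  # False -> evades the filter
```

Generative models go much further than appending a byte: they can re-render the message wholesale, producing content that is semantically equivalent but shares no low-level structure with anything on a blocklist, which is why even perceptual hashing struggles to keep up.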
The scale challenge compounds this technical limitation. A 2024 paper in Philosophy & Technology noted that “humans alone can't keep pace with the enormous volume of content that AI creates.” Most content moderation decisions are now made by machines, not human beings, and this is only set to accelerate. Automation amplifies human error, with biases embedded in training data and system design, whilst enforcement decisions happen rapidly, leaving limited opportunities for human oversight.
Traditional keyword and regex-based filters fare even worse. Research from the University of Chicago's Data Science Institute documented how “GenAI changes content moderation from a post-publication task to a real-time, model-layer challenge. Traditional filters, based on keywords or regex, fail to catch multilingual, evasive, or prompt-driven attacks.”
The detection arms race shows signs of favouring offence over defence. Research from Drexel University identified methods to detect AI-generated video through “fingerprints” unique to different generative models. However, as a Reuters Institute analysis noted, “deepfake creators are finding sophisticated ways to evade detection, so combating them remains a challenge.” Studies have demonstrated poorer performance of detection tools on certain types of content, and researchers warn of “a potential 'arms race' in technological detection, where increasingly sophisticated deepfakes may outpace detection methods.”
The gender dimension of this challenge deserves particular attention. Image-based sexual abuse is not new, but the explosion of generative AI tools to enable it marks a new era for gender-based harassment. For little or no cost, any individual with an internet connection and a photo of someone can produce sexualised imagery of that person. The overwhelming majority of this content targets women and girls, ranging from teenagers to politicians and other public figures. This represents a form of AI-generated extremism that operates at the intersection of technology, misogyny, and the commodification of attention.
If traditional content moderation cannot address the AI-generated extremism challenge, what about reforming platform architecture itself? Here the picture grows more complex, touching on fundamental questions about the design logic of attention economies.
The European Union has attempted the most comprehensive regulatory response to date. The Digital Services Act (DSA), which came into full force in 2024, imposes significant obligations on Very Large Online Platforms (VLOPs) with over 45 million monthly EU users. The law forces platforms to be more transparent about how their algorithmic systems work and holds them accountable for societal risks stemming from their services. Non-compliant platforms face fines up to 6% of annual global revenue. During the second quarter of 2024, the Commission publicly confirmed that it had initiated formal proceedings against several major online platforms, requiring detailed documentation on content moderation systems, algorithmic recommender systems, and advertising transparency.
The EU AI Act adds additional requirements specific to AI-generated content. Under this legislation, certain providers must detect and disclose manipulated content, and very large platforms must identify and mitigate systemic risks associated with synthetic content. China has gone further still: as of September 2025, all AI-generated content, whether text, image, video, or audio, must be labelled either explicitly or implicitly, with obligations imposed across service providers, platforms, app distributors, and users.
In February 2025, the European Commission released a new best-practice election toolkit under the Digital Services Act. This toolkit provides guidance for regulators working with platforms to address risks including hate speech, online harassment, and manipulation of public opinion, specifically including those involving AI-generated content and impersonation.
These regulatory frameworks represent important advances in transparency and accountability. Whether they can fundamentally alter the competitive dynamics between inflammatory and mainstream content remains uncertain. The DSA and AI Act address disclosure and risk mitigation, but they do not directly challenge the engagement-optimisation model that underlies algorithmic amplification. Platforms may become more transparent about how their algorithms work whilst those algorithms continue to preferentially promote outrage-inducing material.
Some researchers have proposed more radical architectural interventions. In her 2024 book “Invisible Rulers,” Renee DiResta, formerly of the Stanford Internet Observatory and now at Georgetown University's McCourt School of Public Policy, argued for changes that would make algorithms “reward accuracy, civility, and other values” rather than engagement alone. The Center for Humane Technology, co-founded by former Google design ethicist Tristan Harris, has advocated for similar reforms, arguing that “AI is following the same dangerous playbook” as social media, with “companies racing to deploy AI systems optimised for engagement and market dominance, not human wellbeing.”
Yet implementing such changes confronts formidable obstacles. The attention economy model has proved extraordinarily profitable. In 2024, private AI investment in the United States far outstripped that in the European Union, raising concerns that stringent regulation might simply shift innovation elsewhere. The EU Parliament's own analysis acknowledged that “regulatory complexity could be stifling innovation.” Meanwhile, research institutions dedicated to studying these problems face their own challenges: the Stanford Internet Observatory, which pioneered research into platform manipulation, was effectively dismantled in 2024 following political pressure, with its founding director Alex Stamos and research director Renee DiResta both departing after sustained attacks from politicians who alleged their work amounted to censorship.
Beyond the technical and economic challenges lies a deeper philosophical problem. Our frameworks for regulating speech, including the human rights principles that undergird them, were developed for human expression. What happens when expression becomes “hybrid,” generated or augmented by machines, with fluid authorship and unclear provenance?
Research published in Taylor & Francis journals in 2025 argued that “conventional human rights frameworks, particularly freedom of expression, are considered ill-equipped to govern increasingly hybrid media, where authorship and provenance are fluid, and emerging dilemmas hinge more on perceived value than rights violations.”
Consider the problem of synthetic personas. An AI can generate not just content but entire fake identities, complete with profile pictures, posting histories, and social connections. These synthetic personas can engage in discourse, build relationships with real humans, and gradually introduce radicalising content. From a traditional free speech perspective, we might ask: whose speech is this? The AI developer's? The user who prompted the generation? The corporation that hosts the platform? Each answer carries different implications for responsibility and remedy.
The provenance problem extends to detection. Even if we develop sophisticated tools to identify AI-generated content, what do we do with that information? Mandatory labelling, as China has implemented, assumes users will discount labelled content appropriately. But research on misinformation suggests that labels have limited effectiveness, particularly when content confirms existing beliefs. Moreover, as the Reuters Institute noted, “disclosure techniques such as visible and invisible watermarking, digital fingerprinting, labelling, and embedded metadata still need more refinement.” Malicious actors may circumvent these measures “by using jailbroken versions or creating their own non-compliant tools.”
There is also the question of whether gatekeeping mechanisms designed for human creativity can or should apply to machine-generated content. Copyright law, for instance, generally requires human authorship. Platform terms of service assume human users. Content moderation policies presuppose human judgment about context and intent. Each of these frameworks creaks under the weight of AI-generated content that mimics human expression without embodying human meaning.
The problem grows more acute when considering the speed at which these systems operate. Research from organisations like WITNESS has addressed how transparency in AI production can help mitigate confusion and lack of trust. However, the refinement of disclosure techniques remains ongoing, and the gap between what is technically possible and what is practically implemented continues to widen.
Despite these challenges, researchers and technologists are exploring new approaches that might address the structural vulnerabilities of attention economies to AI-generated extremism.
One promising direction involves using large language models themselves for content moderation. Research published in Artificial Intelligence Review in 2025 explored how LLMs could revolutionise moderation economics. Once fine-tuned for the task, LLMs would be far less expensive to deploy than armies of human content reviewers. OpenAI has reported that using GPT-4 for content policy development and moderation enabled faster and more consistent policy iteration, cutting the cycle from months to hours and enhancing both accuracy and adaptability.
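A schematic of what LLM-assisted moderation looks like in practice is sketched below; `call_llm` is a hypothetical stand-in for whatever model endpoint a platform uses, and the policy text and labels are invented for illustration.

```python
POLICY = """Label the post with exactly one of: ALLOW, REVIEW, REMOVE.
REMOVE if it praises or incites violence, recruits for a violent group,
or contains instructions for causing harm. REVIEW if you are unsure."""

def call_llm(prompt: str) -> str:
    """Hypothetical model call; swap in whichever provider client is actually used."""
    raise NotImplementedError

def moderate(post: str) -> str:
    prompt = f"{POLICY}\n\nPost:\n{post}\n\nLabel:"
    label = call_llm(prompt).strip().upper()
    # Fail safe: anything unexpected goes to a human rather than being auto-decided.
    return label if label in {"ALLOW", "REVIEW", "REMOVE"} else "REVIEW"
```

The appeal is that the policy lives in editable prose rather than a retrained classifier, so iteration is fast and consistent across reviewers.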
Yet this approach carries its own risks. Using AI to moderate AI creates recursive dependencies and potential failure modes. As one research paper noted, the tools and strategies used for content moderation “weren't built for GenAI.” LLMs can hallucinate, reflect bias from training data, and generate harmful content “without warning, even when the prompt looks safe.”
Another architectural approach involves restructuring recommendation algorithms themselves. The Science study on algorithmic polarisation demonstrated that simply reranking content to reduce exposure to antidemocratic attitudes and partisan animosity measurably shifted users' political attitudes. This suggests that alternative ranking criteria, prioritising accuracy or viewpoint diversity over engagement, could mitigate polarisation effects. However, implementing such changes would require platforms to sacrifice engagement metrics that directly drive advertising revenue. The economic incentives remain misaligned with social welfare.
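Conceptually, the intervention is a reranking pass like the sketch below: the ordering still starts from engagement, but a second score (here a hypothetical `animosity` value in [0, 1], standing in for whatever classifier produces it) pushes hostile items down the feed. This is a schematic of the idea, not the study's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float  # platform's native ranking signal, higher = more promoted
    animosity: float   # classifier score in [0, 1] for partisan animosity

def rerank(feed: list[Post], penalty: float = 2.0) -> list[Post]:
    """Downrank posts in proportion to their animosity score.

    penalty controls how strongly animosity offsets engagement;
    penalty = 0 reproduces the original engagement-only ordering.
    """
    return sorted(feed, key=lambda p: p.engagement - penalty * p.animosity, reverse=True)
```

Raising or lowering the penalty is, in effect, the dial the researchers turned in both directions during the field experiment.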
Some researchers have proposed more fundamental interventions: breaking up large platforms, imposing algorithmic auditing requirements, creating public interest alternatives to commercial social media, or developing decentralised architectures that reduce the power of any single recommendation system. Each approach carries trade-offs and faces significant political and economic barriers.
Perhaps most intriguingly, some researchers have suggested using AI itself for counter-extremism. As one Hedayah research brief noted, “LLMs could impersonate an extremist and generate counter-narratives on forums, chatrooms, and social media platforms in a dynamic way, adjusting to content seen online in real-time. A model could inject enough uncertainty online to sow doubt among believers and overwhelm extremist channels with benign content.” The prospect of battling AI-generated extremism with AI-generated counter-extremism raises its own ethical questions, but it acknowledges the scale mismatch that human-only interventions cannot address.
The development of more advanced AI models continues apace. GPT-5, launched in August 2025, brings advanced reasoning capabilities in a multimodal interface. Its capabilities suggest a future moderation system capable of understanding context across formats with greater depth. Google's Gemini 2.5 family similarly combines speed, multimodal input handling, and advanced reasoning to tackle nuanced moderation scenarios in real time. Developers can customise content filters and system instructions for tailored moderation workflows. Yet the very capabilities that enable sophisticated moderation also enable sophisticated evasion.
The most profound concern may be the one hardest to address: the possibility that AI-generated extremism at scale could systematically shift cultural baselines over time. In an “attention ecology,” as researchers describe it, algorithms intervene in “the production, circulation, and legitimation of meaning by structuring knowledge hierarchies, ranking content, and determining visibility.”
If inflammatory content consistently outcompetes moderate content for algorithmic promotion, and if AI enables the production of inflammatory content at unlimited scale, then the information environment itself shifts toward extremism, not through any single piece of content but through the aggregate effect of millions of interactions optimised for engagement.
Research on information pollution describes this as a “congestion externality.” In a digital economy where human attention is the scarce constraint, an exponential increase in synthetic content alters the signal-to-noise ratio. As the cost of producing “plausible but mediocre” content vanishes, platforms face a flood of synthetic noise. The question becomes whether quality content, however defined, can maintain visibility against this tide.
A 2020 Pew Research Center survey found that 64% of Americans believed social media had a mostly negative effect on the direction of the country. This perception preceded the current wave of AI-generated content. If attention economies were already struggling to balance engagement optimisation with social welfare, the introduction of AI-generated content at scale suggests those struggles will intensify.
The cultural baseline question connects to democratic governance in troubling ways. During the 2024 election year, researchers documented deepfake audio and video targeting politicians across multiple countries. In Taiwan, deepfake audio of a politician endorsing another candidate surfaced on YouTube. In the United Kingdom, fake clips targeted politicians across the political spectrum. In India, where over half a billion voters went to the polls, people were reportedly “bombarded with political deepfakes.” These instances represent early experiments with a technology whose capabilities expand rapidly.
Can interventions address these structural vulnerabilities? The technical answer is uncertain. Detection technologies continue to improve, but they face a fundamental asymmetry: defenders must identify all harmful content, whilst attackers need only evade detection some of the time. Watermarking and provenance systems show promise but can be circumvented by determined actors using open-source tools or jailbroken models.
The political answer is perhaps more concerning. The researchers and institutions best positioned to study these problems have faced sustained attacks. The Stanford Internet Observatory's effective closure in 2024 followed “lawsuits, subpoenas, document requests from right-wing politicians and non-profits that cost millions to defend, even when vindicated by the US Supreme Court in June 2024.” The lab will not conduct research into any future elections. This chilling effect on research occurs precisely when such research is most needed.
Meanwhile, the economic incentives of major platforms remain oriented toward engagement maximisation. The EU's regulatory interventions, however significant, operate at the margins of business models that reward attention capture above all else. The 2024 US presidential campaign occurred in an information environment shaped by algorithmic amplification of divisive content, with AI-generated material adding new dimensions of manipulation.
There is also the question of global coordination. Regulatory frameworks developed in the EU or US have limited reach in jurisdictions that host extremist content or provide AI tools to bad actors. The ISKP videos that opened this article were not produced in Brussels or Washington. Addressing AI-generated extremism requires international cooperation at a moment when geopolitical tensions make such cooperation difficult.
Internal documents from major platforms have occasionally offered glimpses of the scale of the problem. One revealed that 64% of users who joined extremist groups on Facebook did so “due to recommendation tools.” According to the Mozilla Foundation's “YouTube Regrets” report, 12% of content recommended by YouTube's algorithms violates the company's own community standards. These figures predate the current wave of AI-generated content. The integration of generative AI into content ecosystems has only expanded the surface area for algorithmic radicalisation.
The fundamental question raised by AI-generated extremist content concerns the sustainability of attention economies as currently constructed. These systems were designed for an era when content production carried meaningful costs and human judgment imposed natural limits on the volume and extremity of available material. Neither condition obtains in an age of generative AI.
The structural vulnerabilities are not bugs to be patched but features of systems optimised for engagement in a competitive marketplace for attention. Algorithmic amplification of inflammatory content is the logical outcome of engagement optimisation. AI-generated extremism at scale is the logical outcome of near-zero marginal production costs. Traditional content moderation cannot address dynamics that emerge from the fundamental architecture of the systems themselves.
This does not mean the situation is hopeless. The research cited throughout this article points toward potential interventions: algorithmic reform, regulatory requirements for transparency and risk mitigation, AI-powered counter-narratives, architectural redesigns that prioritise different values. Each approach faces obstacles, but obstacles are not impossibilities.
What seems clear is that the current equilibrium is unstable. Attention economies that reward engagement above all else will increasingly be flooded with AI-generated content designed to exploit human psychological vulnerabilities. The competitive dynamics between inflammatory and mainstream content will continue to shift toward the former as production costs approach zero. Traditional gatekeeping mechanisms will continue to erode as detection fails to keep pace with generation.
The choices facing societies are not technical alone but political and philosophical. What values should govern information ecosystems? What responsibilities do platforms bear for the content their algorithms promote? What role should public institutions play in shaping attention markets? And perhaps most fundamentally: can liberal democracies sustain themselves in information environments systematically optimised for outrage?
These questions have no easy answers. But they demand attention, perhaps the scarcest resource of all.
Combating Terrorism Center at West Point, “Generating Terror: The Risks of Generative AI Exploitation,” CTC Sentinel, Volume 17, Issue 1, January 2024. https://ctc.westpoint.edu/generating-terror-the-risks-of-generative-ai-exploitation/
Frontiers in Political Science, “The role of artificial intelligence in radicalisation, recruitment and terrorist propaganda,” 2025. https://www.frontiersin.org/journals/political-science/articles/10.3389/fpos.2025.1718396/full
Frontiers in Social Psychology, “Social media, AI, and the rise of extremism during intergroup conflict,” 2025. https://www.frontiersin.org/journals/social-psychology/articles/10.3389/frsps.2025.1711791/full
Science, “Reranking partisan animosity in algorithmic social media feeds alters affective polarization,” 2025. https://www.science.org/doi/10.1126/science.adu5584
GNET Research, “Automated Recruitment: Artificial Intelligence, ISKP, and Extremist Radicalisation,” April 2025. https://gnet-research.org/2025/04/11/automated-recruitment-artificial-intelligence-iskp-and-extremist-radicalisation/
GNET Research, “The Feed That Shapes Us: Extremism and Adolescence in the Age of Algorithms,” December 2025. https://gnet-research.org/2025/12/12/the-feed-that-shapes-us-extremism-and-adolescence-in-the-age-of-algorithms/
arXiv, “The Economics of Information Pollution in the Age of AI,” 2025. https://arxiv.org/html/2509.13729
arXiv, “Rewarding Engagement and Personalization in Popularity-Based Rankings Amplifies Extremism and Polarization,” 2025. https://arxiv.org/html/2510.24354v1
Georgetown Law, “The Attention Economy and the Collapse of Cognitive Autonomy,” Denny Center for Democratic Capitalism. https://www.law.georgetown.edu/denny-center/blog/the-attention-economy/
George Washington University Program on Extremism, “Artificial Intelligence and Radicalism: Risks and Opportunities.” https://extremism.gwu.edu/artificial-intelligence-and-radicalism-risks-and-opportunities
International Centre for Counter-Terrorism (ICCT), “The Radicalization (and Counter-radicalization) Potential of Artificial Intelligence.” https://icct.nl/publication/radicalization-and-counter-radicalization-potential-artificial-intelligence
Hedayah, “AI for Counter-Extremism Research Brief,” April 2025. https://hedayah.com/app/uploads/2025/06/Hedayah-Research-Brief-AI-for-Counter-Extremism-April-2025-Design-DRAFT-28.04.25-v2.pdf
Philosophy & Technology (PMC), “Moderating Synthetic Content: the Challenge of Generative AI,” 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11561028/
Taylor & Francis, “Platforms as architects of AI influence: rethinking moderation in the age of hybrid expression,” 2025. https://www.tandfonline.com/doi/full/10.1080/20414005.2025.2562681
Taylor & Francis, “The Ghost in the Machine: Counterterrorism in the Age of Artificial Intelligence,” 2025. https://www.tandfonline.com/doi/full/10.1080/1057610X.2025.2475850
European Commission, “The Digital Services Act,” Digital Strategy. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act
AlgorithmWatch, “A guide to the Digital Services Act.” https://algorithmwatch.org/en/dsa-explained/
TechPolicy.Press, “The Digital Services Act Meets the AI Act: Bridging Platform and AI Governance.” https://www.techpolicy.press/the-digital-services-act-meets-the-ai-act-bridging-platform-and-ai-governance/
ISD Global, “Towards transparent recommender systems: Lessons from TikTok research ahead of the 2025 German federal election.” https://www.isdglobal.org/digital_dispatches/towards-transparent-recommender-systems-lessons-from-tiktok-research-ahead-of-the-2025-german-federal-election/
Reuters Institute for the Study of Journalism, “Spotting the deepfakes in this year of elections: how AI detection tools work and where they fail.” https://reutersinstitute.politics.ox.ac.uk/news/spotting-deepfakes-year-elections-how-ai-detection-tools-work-and-where-they-fail
U.S. GAO, “Science & Tech Spotlight: Combating Deepfakes,” 2024. https://www.gao.gov/products/gao-24-107292
World Economic Forum, “4 ways to future-proof against deepfakes in 2024 and beyond,” February 2024. https://www.weforum.org/stories/2024/02/4-ways-to-future-proof-against-deepfakes-in-2024-and-beyond/
Springer, “Content moderation by LLM: from accuracy to legitimacy,” Artificial Intelligence Review, 2025. https://link.springer.com/article/10.1007/s10462-025-11328-1
Meta Oversight Board, “Content Moderation in a New Era for AI and Automation.” https://www.oversightboard.com/news/content-moderation-in-a-new-era-for-ai-automation/
MDPI, “Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth,” 2024. https://www.mdpi.com/2075-4698/15/11/301
Center for Humane Technology. https://www.humanetech.com/
NPR, “A major disinformation research team's future is uncertain after political attacks,” June 2024. https://www.npr.org/2024/06/14/g-s1-4570/a-major-disinformation-research-teams-future-is-uncertain-after-political-attacks
Platformer, “The Stanford Internet Observatory is being dismantled,” 2024. https://www.platformer.news/stanford-internet-observatory-shutdown-stamos-diresta-sio/
Grand View Research, “U.S. AI-Powered Content Creation Market Report, 2033.” https://www.grandviewresearch.com/industry-analysis/us-ai-powered-content-creation-market-report
Mozilla Foundation, “YouTube Regrets” Report. https://foundation.mozilla.org/
Pew Research Center, “Americans' Views of Technology Companies,” 2020. https://www.pewresearch.org/

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Happy Duck Art
Been getting more comfortable with acrylics. I really like the sort of “intuitive abstract” thing – I’m not stellar at representational painting. I don’t enjoy it. It can be fun, sometimes, to incorporate it, but I’m unlikely to decide to become a capturer of real life.
I would, however, like to get better at photographing my paintings, because I’m not terrific at that. :)



from
Roscoe's Story
In Summary: * This is another good day in the Roscoe-verse. Again today I've been able to keep the HVAC system turned off and the heavy front door open, letting the fresh air in through the screen door for most of the day. If the weather forecasts are right, I'll be able to do this for at least the next 10 days.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Health Metrics: * bw= 226.08 lbs. * bp= 159/91 (70)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 07:00 – toast and butter * 08:00 – 2 tangerines * 10:00 – 1 peanut butter sandwich * 14:00 – lasagna * 16:30 – 1 fresh apple * 17:45 – 2 HEB Bakery cookies
Activities, Chores, etc.: * 05:00 – listen to local news talk radio * 05:55 – bank accounts activity monitored * 06:15 – read, pray, follow news reports from various sources, surf the socials * 13:30 to 15:00 – watch old game shows and eat lunch at home with Sylvia * 15:00 – listen to Dan Bongino podcast * 16:40 – listening now to the last part of The Jack Riccardi Show, will follow this with the first part of The Joe Pags Show, before tuning the radio to The Home for IU Women's Basketball ahead of tonight's game. After the game, my plan is to finish my night prayers and get ready for bed.
Chess: * 12:00 – moved in all pending CC games
from bone courage
The excerpt below opens The Salesman, a satire on government efficiency written in 2016 and submitted in 2024 for publication. The good folks at Waxing and Waning are publishing the short story. Go ahead and pre-order their March issue to see where Paul lands.
Paul Winter races a million miles an hour over the hot Mojave asphalt to close his next sale. Nothing slows down his square black sedan. Not a fat green caterpillar. Not a desert tortoise. Not potholes. And not the five thousand Kelvins rising to the stark sky that sends the scorching heat back to earth hotter than before. Judgement! Sweet steel-belted tires rush toward that unvanishing point in the hazy liquid distance. Paul wipes his thick brow. Dusty kerchief? No, sweaty and grimy. White when he started the day. In a motel. Behind a motel, boiling in his car, if he’s honest. And today he’s honest. He has to be. It’s sales day. Close or be closed.
Paul’s big hand rests on a tall, drab grey case in the passenger seat. His thumb fingers the golden “SH7000” stenciled on top. He lost his ring finger in the war. Not even the shattered nub remains. Wearing his great-grandfather’s US Army Ruptured Duck ring on his pinky cured the phantom finger; made it go away. The ring transcended superstition into conviction; Paul never goes without it. His tie? Polyester. His feeling: indiscernible. Indigestion? Intolerable.
X10H395 is his target school. Blind designations guarantee no biases. Blind designations guarantee equal, enthusiastic, exceptional sales. Blind designations are efficient. And efficiency is king. That’s the government’s line, anyway. So the government contracted DeKline Pharmaceutical to sell the line, and DeKline invented a process and device to optimize efficiency. Its sales manual begins and ends with: 100% efficiency is 100% happiness.
from brendan halpin
More Epstein emails have dropped, and after seeing my former boss, Danny Hillis, in one of the recent photo dumps, I went a-searchin’ for Danny’s name in the emails.
There are hundreds of pages of emails mentioning Danny Hillis, though, to be fair, every email appears like five times, as some are duplicated and then every reply in a chain appears as its own document. So it’s not really hundreds of emails.
But it is dozens. And what it details is nothing salacious—more what I would categorize as the banality of evil. Epstein would take a trip to Cambridge and see if Hillis was available, and they’d meet up. Or Epstein would be assembling a group of Harvard/MIT stars, and he’d make sure Hillis was there. Or Hillis would travel to New York and see Epstein there.
Again, I need to stress that there's none of the chumminess or coded language in the Epstein/Hillis exchanges that you see in these emails from people who were obviously involved in the abuse that defined Epstein's life. It's friendly and professional. Actually, a lot of it is logistical communication between their assistants. (Fun fact—they didn't redact Hillis' personal cell number from all the emails, so if you want to give him a call or send a text, that is totally something you can do, unless he's changed his number!)
But, as I mentioned in my previous piece, the absence of criminal behavior on Hillis’ part doesn’t indicate a moral clean slate. He and Epstein saw each other a couple of times per year, as far as I can tell from the emails (confession—I didn’t look through all 200+ pages), between the years 2010 and 2018. He also did some consulting work for Epstein’s reputation-laundering philanthropic foundation. The files indicate that he signed an NDA, but my connections in the fundraising world tell me this isn’t necessarily unusual or sinister.
For those who want a lil’ timeline refresher, Epstein’s first conviction was in 2008. James Patterson’s book, which really brought the extent of Epstein’s evil to public prominence, came out in 2016. Epstein was arrested again in 2019.
So Danny Hillis knew that Jeffrey Epstein was a pedophile. And he chose to spend time with him anyway. For Danny Hillis, the systematic abuse of children was not a dealbreaker.
I reached out to Danny Hillis to ask him about this through emails at his Long Now Foundation and his company Applied Invention. I asked how he made sense of his association with a monster. I’m not surprised that he didn’t respond, but I can’t help speculating. So here goes.
First, you need to know that Danny Hillis is fantastically rich by the standards of normal people, but not at all rich by Epstein standards. I saw an estimate of Hillis’ net worth at $60 million. So he’s not one of the big money men that Epstein hung with.
Hillis is more of a professional smart person, only while most such people have cushy academic jobs that at least require them to like supervise grad students or publish papers, Hillis seems to have spent most of his time after Thinking Machines folded in 1996 doing something close to nothing, but different than the day before. His Wikipedia page is rife with mentions of stuff he invented that never really saw any practical application, with the exception of putting a data center in a shipping container, which, good for him, I guess.
His big legacy project as a public intellectual is to take a mountain owned by Jeff Bezos and put a really big clock in it. No, really. The clock is supposed to last ten thousand years. It’s got chimes programmed by Brian Eno.
Honestly I kind of respect the lack of hustle. Nice work if you can get it!
So, anyway, though Hillis moved in billionaire circles, he was never really one of them. His role, like those of Steven Pinker, Joi Ito, Noam Chomsky, and the other Harvard and MIT-affiliated people Epstein hung out with, was more akin to that of a trained monkey. Epstein threw them coins (or proximity to money and power), and they clapped and danced and did their little show, which was talking about Big Ideas. (But never the Big Idea that the great majority of the problems on planet earth are due to the hoarding of wealth. That’s just crazy talk!)
It’s actually a symbiotic relationship, because the evil, venal, ultimately petty money men get to feel like they aren’t just Scrooge McDuck cavorting in a swimming pool of money, but, rather, People Who Matter. People who are helping to shape the world’s future! People who talk to the smartest people in the world and sometimes give them money!
And the trained monkeys get the coins. I noted that Epstein seemed to have set up two meetings for Danny Hillis with Barclays CEO Jes Staley, so perhaps that alone explains Hillis’ willingness to spend time with a known pedophile.
But I have another theory about these people. I think Elon Musk kind of gave the game away when he said he considers us all to be NPCs. If you don’t play tabletop or video role-playing games, you should know that NPC means non-player character. It’s like, the guy in the tavern who gives you information about where the goblins are hiding, or whatever. They look like people (or gnomes or elves or whatever), but they fundamentally don’t matter except as far as they help move the players’ story along.
In other words, in billionaire circles, regular people don’t really count as human beings. This is why they’re able to treat masses of people with such casual cruelty—you wouldn’t do that to a person, but a broke guy in China making an iPhone? That’s not a person. A single mom walking 15 miles a day in an Amazon warehouse and having to stand in a security line for an unpaid hour at the end of every work day? Well, that’s not a person. You? Me? We’re not people to them.
And so this is my theory about Danny Hillis—he found Epstein useful as a way to continue his professional smart guy grift, and the girls Epstein was abusing? Well, they weren’t the daughters of Danny Hillis or any of the people he spent time with. It’s not like Epstein was doing those things to people.
Maybe I’m wrong. If Danny Hillis is reading this, let me just say hi, fuck you, and please feel free to email me to explain why you spent eight years being friends with a monster.
from Faucet Repair
22 January 2026
Starlight Way (working title): I've wanted to make an all-white painting for a while and have failed at past attempts, but it seems I may have finally found a way into one. Which in my head felt something like approaching the painting as a white Conté crayon drawing on toned paper. The nucleus of the image is based on a 9-meter sculpture of a scale-model Qatar Airways Boeing 777-9 aircraft around Heathrow Terminal 4 near Starlight Way. The painting doesn't reflect or need to reflect that specific location visually, so the title will change. Maybe just Plane is better...it is. Anyway, the important part is what the paint is doing. The explorations of space, value, line, and yes—plane—that emerged. I think I can trace those elements back to two works I looked at a lot this week:
Phoebe Helander, Wire Form III (Divided Space) (2026)
David Ostrowski, F (Jung, Brutal, Gutaussehend) (2012)
Each of these paintings addresses space/the picture plane/gravity/color in interesting ways, and while it's unwise to reach for these effects intentionally, I do think what subconsciously drew me to portraying the sculpture was related to these concerns via its position as an object unmooring from the ground while remaining fixed to it. And I think what resulted sits at the center of an axis that acknowledges multiple potential trains of thought without committing fully to any of them—emerging from/being pulled back into a place of origin, crossing/being stuck at a horizon, taking off/crashing, dissecting space/being absorbed by space, and additive line/subtractive line.
from Faucet Repair
20 January 2026
Image inventory: Faint reflection of a shower curtain on marble (decay, rust, dirt, sand, columns, metal), toothbrush on its side looking at its reflection in a small broken mirror (blue, melting, recognizing, horizontal, cracking), a new leaf growing from a houseplant (wet, green, light, soil, brown, glistening, dew), Daejeon covered in snow (pale yellow sky, my shaved head), a broken foosball table in the sun (slanted shadows of foosmen, foosmen looking at shadows, foosmen turned up, foosmen turned down), a neon yellow rope and a thin rainbow slouching against a brick wall (phenomena, approaching, slight, long), a light bulb with dew on it, an ashtray with rainwater (preserved, koi fish, shrimp, spooning), orange/yellow windows of a building in Wood Green (sunglasses), a green rubbish bin surrounded by blue rubbish bins (outnumbered), two carrots in a Tupperware container (two are three), a woman sitting on a bus with city lights encircling her (fireflies, string lights), giant advertisement of strawberries and raspberries in a window (blood), cone-shaped light (hood), giant shadow of a hand in the corner of a room.