Want to join in? Respond to our weekly writing prompts, open to everyone.
from Unvarnished diary of a lill Japanese mouse
Journal, 5 February 2026
We talked a lot, the two of us, of course. A isn't surprised at all. Since she works on sensitive subjects, she had long suspected that we'd been investigated. She's certain they have files on us in their offices, under our names, or even code names: why not "the tsarina" and "the little mouse"? Not "neko," I hope. Our phones? Please. No need to break into mine; it's been under surveillance for ages. She has a phone and a laptop supplied directly by them. They just wanted to let me know they were watching me, and to see whether I'd notice, that's all. A well-staged little comedy was played out for me, to see what I'd pick up on and to fatten my file; that's what she thinks. And through all of it, we love each other, and there's nothing they can do about that. The only thing that worries her is that if they ask me to go, I won't be able to refuse, and she doesn't like that. But both of us are caught in this story like flies in a spider's web.
from Robert Galpin
the morning ride in the rain — spring still a pipe dream the Tyne boiling mercury to the armpits of the staiths

Today I had an off day of surfing.
The wind was stronger than expected. It was kind of crowded for my spot. Waves were wrapping from the West and peaking, breaking almost perpendicular to shore.
When I first arrived I said a little prayer: “Lord, if you want me to surf, give me a parking spot.” I drove around and, what do you know, a really nice spot near the showers opened up. As soon as I got out of the car I felt the breeze briskly picking up speed, starting to blow side-shore. I wasn’t feeling it. But the late morning was beautiful, a classic-looking south shore of O’ahu kind of day. So I grabbed my camera and took some photos of the sun glistening off the water, Diamond Head in the background, people paddling and swimming in the foreground: a photo that could have existed over a hundred years ago. Took close-up photos of the rocks, testing out a 25-year-old digital camera I got for Christmas, to replace the exact same model I had short-sightedly given away years back.
I return to my car and stare at the water. An uncle next to me is gearing up to paddle out. Another uncle, his friend, comes over and they start talking story. I decide to call it and make my way around toward the driver’s side.
“Eh! You going out?!” the other uncle says.
“Nah. Too windy.”
“Can I have your stall, den?”
“Sure.”
“I come back. Get one brown SUV. Eh watch my water bottle while I go gettum.”
As he leaves his water bottle on the curb and walks away, I look at his friend and joke with a smile: “He’s very trusting. Gotta watch out for these haoles, you know! We known for taking things.”
“Eh,” the first uncle says with a dismissive tone. “All kinds of people can do all kinds of things.”
We get to talking. Richard is his name. I’ve seen him in the water before, but he usually paddles out just as I’m heading in. Today I’m at the spot later than usual. He urges me to go out.
“Too crowded. Plus I told your friend I’d give up my spot for him.”
He dismisses this and tells me it’s good and I need to go out. Eventually the brown SUV comes rolling around. I give shakas and say goodbye as I drive away. About five cars down I see another car pulling out. As I drive past I can’t shake the feeling that this is all God’s way of telling me that I need to paddle out. So I loop around and pull into the other spot. This one is actually better because it has more shade. I pull down my 11-foot glider (pretty much my exclusive board for the past three years), apply sunscreen, and zip up my wetsuit vest. And I walk over to the cut between rocks where I can paddle out. I see Richard and tell him that he convinced me to change my mind. He gives a loud approval.
I make the paddle in good time. The crowd thinned a bit in the interim. Waves have power. I see a few familiar faces, folks I did not expect to see in the water because they’re usually out at my normal time. I see a wave on the horizon, taking shape. I whip my board around and paddle. I feel the momentum taking me so I hop to my feet. The wave is beginning to break in front of me, so I go to fade left and surf on my back-hand. But there’s no face there. The wave is a strict right. So I fade back to front-side and try to get into it. I squat and begin scooping at the water, hoping to pick up more speed, but no dice. So I paddle back out, chuckling to myself.
After a while I see a set forming on the horizon. No one seems to be going for it, so I spin around and start paddling. I easily catch the wave and drop in, going right. I squat a bit in the face and then stand to adjust my position, dropping down the face in order to carve my way back up. But I see that it’s walling up too far ahead and is going to close out. So I fade back left to see another closing section coming behind me. So I turn to go straight and ride out the whitewater. But I get caught between two breaking sections, the foam engulfing my board and I feel the force underneath me. I get knocked off my board and plunged under the foam. I feel the chaos of the colliding waves rolling over me and I surrender to the current. Once the wave fully passes I surface. Another wave is breaking, but I have enough time to take stock of my surroundings and know that my board has made its way toward shore, pretty far from my location.
So I start swimming.
At this point I should probably note that I prefer to surf without a leash. Unless it’s particularly big and/or crowded, I’ll forgo having a urethane cord dancing about my feet. Leashes can give us a false sense of security. They can and will fail, so we need to be prepared to swim when that eventually happens. Plus, leash-free surfing forces you to be more intentional in your surfing, and more cognizant of your board.
It’s been a while since I’ve had a long swim for a board. Since I’m wearing a wetsuit vest, I have some buoyancy, and I get better results by flipping on my back and kicking my way toward my board. I hold my breath and descend under the whitewater, wait for the roll of the wave to wash over me, return to the surface, and then kick my way again.
There’s always a threat of panic in the back of my mind when I have to swim for a board. I’m pretty far from the beach where I surf and there’s a lot of water. There are also infrequent tiger shark sightings. But I keep myself calm. Eventually I see that an off-duty lifeguard who surfs my spot has retrieved my board. I thank him and grab it. I bob on the inside, considering the time and effort it would take to get my leash. Nah. I’ll paddle back out.
As I’m nearing the outside, I see the lifeguard wipe out. His big yellow board is bouncing among the whitewater, making its way to shore. He, too, is not wearing a leash. So I turn my board around and grab some whitewater and make my way to where his board is bobbing on the shallow reef. I grab it and start paddling in his direction. He gets it. I tell him we’re even. We both laugh and paddle back out.
By the time I make it back outside, I’m getting tired. I tell one of the uncles I know that I got my swim in for the day and he laughs. The wind has significantly picked up and is blowing almost onshore. After a time I see another wave making its way toward me. It’s mine. I paddle and begin to make the drop a bit later than I was expecting. So I grab the rails and decide to ride it on my belly. The speed is unreal. I’m constantly on the verge of being rolled over, but I keep my composure and let myself fly toward the beach. I decide that I’m not about to paddle back out. This will be the ride, for what it’s worth.
The wave peters out in the shallows of the reef. The tide is nearly dead low, which means that I’ll have to be careful not to let my fin hit anything.
I’m a good surfer. I’ve been at it for 26 years. I get long nose rides on the well-formed South Shore faces. I drop in and run my hands along the face of the waves. I’ve even garnered compliments for my ability to hit the lip with an 11-foot board, on occasion. I’ve shaped boards, ridden a variety of designs. I know the mythology and the legends. I know surfing inside and out.
And I still have off days.
Blessed be the off days.
That saying came into my mind as I carefully paddled over the shallow reef. A large honu (sea turtle) popped its head up next to me. “Hey, cuz!” I said. It swam directly under my board.
This past Sunday we heard Jesus give the Beatitudes. There’s a tendency to read the Beatitudes as Jesus giving us a list of rewards: “be a peacemaker, get a blessing; put up with grief and persecution, get a blessing.” But Jesus is actually saying that peace-making, grieving, being persecuted, being poor in spirit, etc. are themselves blessings. In the Greek that Matthew’s gospel was perhaps first written in, the Beatitudes are in what’s called the “indicative mood,” meaning the blessedness is stated as a present fact, indicated by those very conditions. The blessings aren’t rewards for doing certain things.
This idea translates broadly. An off-day of surfing is a blessing, if I choose to see it. Blessed be the off days, because they help you appreciate the better days. Or, Blessed be the off days, because they make you a better swimmer.
I didn’t get to have a morning of beautiful glides on my huge board. I didn’t get to run to the nose and hang ten on a perfectly groomed wave face. I didn’t even get to drop in while squatted down, feeling the cool water with my fingers, experiencing the thrill of committing to the face of a wave and setting myself up for an elegant bottom turn to set my rail and just… go.
Nope. I got wiped out. I swam a lot. I got skunked on wind-blown waves that were both somehow mushy and strong.
But I got in the water. I learned that I’m finally mature enough to appreciate even the days where my surfing kinda sucks.
Blessed be the off days, indeed.
***
The Rev. Charles Browning II is the rector of Saint Mary’s Episcopal Church in Honolulu, Hawai’i. He is a husband, father, surfer, and frequent over-thinker. Follow him on Mastodon and Pixelfed.
#Surfing #Reflection #Ocean #Theology #Jesus #Church #Hawaii #Oahu
from DocuNero
When invoice automation tools first entered the market, they promised a simple outcome: fewer spreadsheets, less typing, and faster processing. OCR-based systems became the default choice, and for a while, they delivered real value.
Yet many finance teams today still find themselves manually checking totals, fixing line items, and validating tax values. Automation exists — but manual work hasn’t disappeared.
The question is why.
OCR was built to convert scanned documents into readable text. According to Wikipedia’s explanation of optical character recognition, the technology focuses on identifying characters and words from images, not interpreting meaning or relationships between values.
That limitation becomes obvious with invoices.
An OCR engine may correctly read every number on a page, yet still fail to determine which amount represents tax, which is a subtotal, or whether the final total actually adds up. The document is readable, but the data isn’t trustworthy.
Most OCR-driven workflows look automated on paper. In reality, human checks creep back in at critical points.
Someone verifies totals before posting. Someone fixes misaligned line items. Someone confirms tax calculations.
These steps exist because the system extracts data but doesn’t validate it. Automation removes typing, but it doesn’t remove responsibility — it simply shifts it.
AI-based invoice processing approaches the problem differently.
Instead of only asking what text appears on a document, AI evaluates whether the information makes sense together. It understands invoice structure, recognizes patterns across documents, and validates relationships between fields automatically.
This turns invoice processing from a transcription task into a decision-aware workflow.
Platforms like DocuNero are designed around this idea, combining OCR with AI-driven validation to extract invoice data while also checking totals, taxes, and consistency before the information reaches accounting systems.
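The idea of validating relationships between extracted fields, rather than just reading them, can be made concrete. The sketch below is purely illustrative and not DocuNero's actual implementation: it assumes an invoice reduces to line items, a subtotal, a tax amount, and a total, and simply checks that those figures agree before the record is posted.

```python
from decimal import Decimal

def validate_invoice(line_items, subtotal, tax, total, tolerance=Decimal("0.01")):
    """Check that extracted invoice fields are mutually consistent.

    line_items: list of (quantity, unit_price) pairs.
    Returns a list of human-readable issues; an empty list means the
    figures agree and the invoice can skip manual review.
    """
    issues = []

    # Do the line items actually add up to the stated subtotal?
    computed_subtotal = sum(Decimal(str(q)) * Decimal(str(p)) for q, p in line_items)
    if abs(computed_subtotal - subtotal) > tolerance:
        issues.append(f"line items sum to {computed_subtotal}, but subtotal reads {subtotal}")

    # Does subtotal plus tax match the stated total?
    if abs(subtotal + tax - total) > tolerance:
        issues.append(f"subtotal + tax = {subtotal + tax}, but total reads {total}")

    return issues

# A consistent invoice passes; a mis-read total is flagged.
ok = validate_invoice([(2, "10.00"), (1, "5.50")],
                      Decimal("25.50"), Decimal("2.55"), Decimal("28.05"))
bad = validate_invoice([(2, "10.00"), (1, "5.50")],
                       Decimal("25.50"), Decimal("2.55"), Decimal("29.05"))
```

Even this toy check catches the failure mode described above: an OCR engine that reads every digit correctly except one in the total would produce a document that looks fine but fails the arithmetic, routing it to review instead of straight into the accounting system.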
Speed is often highlighted as the main benefit of automation, but confidence is more important.
When finance teams trust their data, downstream processes improve naturally. Reporting becomes cleaner, audits become simpler, and month-end close stops feeling like damage control. The work shifts from fixing errors to using information.
That’s when automation actually delivers on its promise.
OCR wasn’t a failure — it was a foundation.
As invoice volumes grow and financial workflows become more complex, automation must move beyond reading documents and start understanding them. AI doesn’t replace OCR; it completes it.
True automation isn’t about extracting data faster. It’s about trusting the data you extract.
from Shad0w's Echos
#nsfw #CeCe
As the semester wound down, the first real crisis in CeCe's unapologetic lifestyle loomed like a storm cloud over our urban city skyline. She was heading home for the summer—back to her family's strict household on the outskirts, where the internet was heavily blocked and censored, monitored by her oppressive parents who still treated her like the sheltered girl she'd once been. They had no idea how far she'd strayed, and CeCe dreaded the thought of being cut off. She didn't want to burn out on studying or summer courses; it'd be nice to have an actual academic break, lounging in the humid heat, maybe catching up on sleep. But she needed her porn—the constant stream of it that had become her lifeline.
She hadn't stopped watching since that first night I'd shown her, not even for a day. In fact, after our heart-to-heart during spring break, it was all she'd been consuming, scrolling endlessly on her phone or laptop, her curvy body always bare or barely covered, fingers working her slick folds as explicit scenes played out. She hadn't worn a bra in months, her full C-cup breasts free under loose tops or nothing at all, nipples often peeking through fabric as if daring the world to notice.
I found her one evening in our dorm, pacing naked, her caramel skin glistening with anxious sweat, thick thighs trembling as she clutched her phone like a talisman. Tears streamed down her face, her usual confidence crumbling into a full-blown breakdown. “Tasha, I can't do this,” she sobbed, collapsing onto the bed, her juicy ass sinking into the mattress as she curled up. “Home's a prison—no porn, no freedom. I can't be naked. They'll watch everything. What if I lose it? I need it, like air. It's my everything now.”
I sat beside her, pulling her into my arms, her bare breasts pressing against me as I stroked her back, calming her the way only I could. “Shh, breathe, CeCe. We'll figure it out. Remember, I can send you stuff—through any chat app, like Telegram. It's encrypted, private. Videos, links, whatever you need. I'll keep you stocked, okay? You won't go without.”
She looked up at me, eyes wide with relief, and nodded, her sobs easing into shaky breaths. But as the days turned into weeks and she headed home, I realized how important she was to me. I was falling for her, hard. But right now, I couldn't tell her. Not with the looming transition from freedom to a prison of clothing and morality. I knew in these moments I just had to be her rock, send her porn, and assure her that everything would be okay.
I sent her porn religiously: curated clips of exhibitionist scenes, gooning sessions, whatever matched her escalating tastes, slipping them through Telegram in the dead of night, careful not to trigger her family's filters. She was dealing with their constant judgment and their suffocating rules. I kept her supplied, feeding this beautiful addiction from afar. Porn was her only anchor amid the oppressive chaos.
That's how I became CeCe's enabler, especially when she didn't have free and open internet. And in the process, I have to admit, I developed my own growing porn addiction—hours spent hunting for the perfect videos to send her, diving deeper into the rabbit hole myself, my nights blurring into a haze of arousal as I knew I had to be in CeCe's world to really be with her. I loved it.
CeCe made it through that grueling summer, thanks in no small part to my covert porn deliveries via Telegram—clips of wild exhibitionist scenes that kept her sane amid her family's stifling rules. When sophomore year kicked off, we scored dorms together again. Full credit went to CeCe and her well-crafted tactics and requests. I don't question her ways. I just like watching her work her magic.
Our dorm for this year was in a taller building on the edge of campus, overlooking the bustling Georgia city streets below. It felt like a fresh start, our little sanctuary where she could let loose without judgment. But CeCe had been up to something super perverted over the break, a secret she hadn't breathed a word about until we were unpacking boxes in our new room.
One evening, as we settled in, she stripped naked as usual—her caramel curves on full display, thick thighs spreading as she lounged on her bed with her phone in hand. That's when she confessed, her voice casual but laced with that familiar thrill. “You know, Tasha, when my parents left the house for errands or whatever, I'd sneak out to the backyard completely naked. Just me, the sun on my skin, watching porn on my phone and masturbating right there in the open air. Fingers deep in my pussy, moaning loud enough for the neighbors to maybe hear if they listened close. It was risky, but God, it felt amazing—wind on my tits, grass under my ass, coming hard while some video played of a girl flashing in public.”
I stared at her, my heart racing, a mix of shock and that twisted arousal she always stirred in me. “CeCe, that's... insane. What if someone saw?”
She laughed, rubbing her clit absentmindedly as she scrolled for her next fix. “I can't stop escalating, Tasha. Being forced to wear clothes indoors all summer? It built up this rebellion in me. When they were gone, I had to get it all out—bare, exposed, free. It's like my body's screaming for more now.” Her eyes sparkled with that calculated recklessness; she wasn't dumb about it—she timed it perfectly, checked the angles, made sure the yard's fences hid just enough. But it was pushing boundaries further than ever.
Then she glanced at our third-floor window, the city lights twinkling below like distant stars. “Hey, can we keep the blinds open at all times? Maybe the window wide open too until it's too cold? Up here, no one's really looking, but the breeze... it'd feel so good on my skin while I watch and play.”
I hesitated, biting my lip, knowing it was another step into her world. But this woman—my fearless, addictive CeCe—was more exciting than any relationship I could imagine with someone else. Guys seemed boring by comparison; she kept life electric, charged with that platonic tension we danced around. “Okay,” I agreed reluctantly, cracking the window open, the humid night air rushing in. “Just... be careful.”
As sophomore year progressed, I made a conscious decision to invest more time in CeCe, pouring my energy into our unique bond. I'd gone on a few dates during freshman year—casual flings with guys from class or apps—but after our raw heart-to-heart over spring break, they all paled in comparison. My mind always drifted back to her, to the electric tension we shared, the way her fearless spirit lit up my world. No one else was as captivating, as alive. CeCe, meanwhile, with my help, had developed a deeper kink.
She wasn't just watching porn anymore; she was becoming what she saw. She wanted to be fully consumed by her porn addiction. Her sessions in our dorm grew longer and more intense, often with the windows open to let the city breeze tease her bare skin. In turn, I normalized it and never judged her. Sometimes I even encouraged her, just to see her smile while she was rubbing herself silly into her fourth hour of gooning.
However, I wanted to bring a little bit of balance into her life. I figured it was time to coax her out of her “porn cave,” as I started calling our room. Not big crowds or wild parties—that'd overwhelm her—but simple outings to change the scenery: leisurely walks in quiet parks, cozy corners in small cafes where we could sip coffee and people-watch. “Come on, CeCe,” I'd say, “fresh air might do you good. Balance things out a bit.” To my surprise, she agreed, her eyes lighting up with that mischievous spark. “Sure, Tasha. As long as I can be comfortable.”
Of course, “comfortable” meant going out braless under one of her hoodies—she had ten different colors now, zipped just enough to hint at her full C-cup breasts swaying freely beneath, and it was basically all she wore when leaving the dorms. I expected that much; it was her signature look, rebellious and teasing. But CeCe had something planned she didn't tell me, a perverted twist she'd cooked up in secret.
On our first night out, we headed to a dimly lit cafe a few blocks from campus, the city streets humming with evening traffic. She wore her usual: a loose gray hoodie and tiny black shorts that barely peeked out from under the hem. Halfway through our walk, she pulled me into a shadowed alley, grinning like she'd won the lottery. “Check this out, Tasha,” she whispered, hiking up the hoodie just enough to reveal the truth.
Her shorts? They'd been modified—the crotch completely cut out, leaving a massive hole that exposed her slick, shaved pussy to the cool air. She was essentially naked in public, but still “covered” from a distance. The fabric acted like garters, framing her thick thighs and covering her juicy ass, while ensuring something dangled below the hoodie to mimic normal shorts. Topless underneath, pussy fully on display if anyone got close enough. “See? I can feel the breeze right on my clit, but no one's the wiser. It's perfect—exposed but hidden.”
I sighed, shaking my head, a familiar mix of exasperation and reluctant admiration washing over me. “CeCe... seriously?” But I couldn't stop smiling. Can't stay mad at her. This was just how she was—reckless yet calculated, always one step ahead in her escalation game. True to her word, she started going out more with me, as long as she could rock this setup. We'd stroll through parks, her exposed pussy brushing against the modified shorts with every step, or sit in cafes where she'd subtly rock her hips, savoring the thrill. I went along with it, my silent vow to her holding strong.
Eventually, though, CeCe started escalating this activity too, pushing the boundaries like she always did. One sunny afternoon, we ventured to a secluded park on the city's outskirts, a quiet spot with winding paths and hidden benches surrounded by trees. We found a bench away from the main trails, chatting about classes and her latest porn finds.
Even though I was right there with her while she was gooning, she was so animated and happy when she talked about porn. Sometimes she talked in circles about her obsession. I just got lost in her eyes as she showed me how comfortable and safe she felt with me.
During her passionate discussion about her porn addiction, she did something new in the park. Without hesitation, she spread her legs wide, the cut-out shorts framing her dripping pussy as she slipped her fingers inside, rubbing her clit in slow, deliberate circles while maintaining eye contact. She smiled when she saw my reaction. Then she changed the subject. “So, what do you think about that new engineering prof?” she asked casually, her breath hitching as she plunged deeper, moans mixing with her words.
Instead of reacting or pulling her back, I just carried on the conversation, my voice steady. “He's tough, but fair—way better than last year's. Pass me the water?” At this point, what CeCe did was normal to me; her shameless masturbation, even in semi-public, had become just another part of our rhythm, as familiar as her laugh or the curve of her caramel hips. I corrupted her and in turn, she wanted to corrupt me.
from SmarterArticles

In May 2024, something unprecedented appeared on screens across Central Asia. A 52-second video in Pashto featured a news anchor calmly claiming responsibility for a terrorist attack in Bamiyan, Afghanistan. The anchor looked local, spoke fluently, and delivered the message with professional composure. There was just one problem: the anchor did not exist. The Islamic State Khorasan Province (ISKP) had produced its first AI-generated propaganda bulletin, and the implications for global security, content moderation, and the very architecture of our information ecosystem would prove profound.
This was not an isolated experiment. Days later, ISKP released another AI-driven segment, this time featuring a synthetic anchor dressed in Western attire to claim responsibility for a bombing in Kandahar. The terrorist organisation had discovered what Silicon Valley already knew: generative AI collapses the marginal cost of content production to nearly zero, whilst simultaneously expanding the potential for audience capture beyond anything previously imaginable.
The question now facing researchers, policymakers, and platform architects is not merely whether AI-generated extremist content poses a threat. That much is evident. The deeper concern is structural: what happens when the economics of inflammatory content production fundamentally shift in favour of those willing to exploit human psychological vulnerabilities at industrial scale? And what forms of intervention, if any, can address vulnerabilities that are built into the very architecture of our information systems?
To understand the stakes, one must first grasp the peculiar economics of the attention economy. Unlike traditional markets where production costs create natural barriers to entry, digital content operates under what economists call near-zero marginal cost conditions. Once the infrastructure exists, producing one additional piece of content costs essentially nothing. A research paper published on arXiv in 2025 frames the central challenge succinctly: “When the marginal cost of producing convincing but unverified content approaches zero, how can truth compete with noise?”
The arrival of large language models like GPT-4 and Claude represents what researchers describe as “a structural shift in the information production function.” This shift carries profound implications for the competitive dynamics between different types of content. Prior to generative AI, producing high-quality extremist propaganda required genuine human effort: scriptwriters, video editors, voice actors, translators. Each element imposed costs that naturally limited production volume. A terrorist organisation might release a dozen slickly produced videos annually. Now, the same organisation can generate thousands of variations in multiple languages, tailored to specific demographics, at effectively zero marginal cost.
The economic literature on this phenomenon identifies what researchers term a “production externality” in information markets. Producers of low-quality or harmful content do not internalise the negative social effects of their output. The social marginal cost vastly exceeds the private marginal cost, creating systematic incentives for information pollution. When generative AI capabilities (what some researchers term “offence”) dramatically outstrip detection technologies (“defence”), the marginal cost of producing harmful content falls precipitously, “systemically exacerbating harm.”
This creates what might be called a market bifurcation effect. Research suggests a “barbell” structure will emerge in content markets: low-end demand captured by AI at marginal cost, whilst human creators are forced into high-premium, high-complexity niches. The middle tier of content production essentially evaporates. For mainstream media and entertainment, this means competing against an infinite supply of machine-generated alternatives. For extremist content, it means the historical production barriers that limited proliferation have effectively disappeared.
The U.S. AI-powered content creation market alone was estimated at $198.4 million in 2024 and is projected to reach $741.1 million by 2033, growing at a compound annual growth rate of 15.8%. This explosive growth reflects businesses adopting AI tools to reduce time and costs associated with manual content creation. The same economics that drive legitimate business adoption, however, equally benefit those with malicious intent.
The economics of production tell only half the story. The other half concerns distribution, and here the structural vulnerabilities of attention economies become starkly apparent.
Modern social media platforms operate on a simple principle: content that generates engagement receives algorithmic promotion. This engagement-optimisation model has proved extraordinarily effective at capturing human attention. It has also proved extraordinarily effective at amplifying inflammatory, sensational, and divisive material. As Tim Wu, the legal scholar who coined the term “net neutrality,” observed, algorithms “are optimised not for truth or well-being, but for engagement, frequently achieved through outrage, anxiety, or sensationalism.”
The empirical evidence for this amplification effect is substantial. Research demonstrates that false news spreads six times faster than truthful news on Twitter (now X), driven largely by the emotional content that algorithms prioritise. A landmark study published in Science in 2025 provided causal evidence for this dynamic. Researchers developed a platform-independent method to rerank participants' feeds in real time and conducted a preregistered 10-day field experiment with 1,256 participants on X during the 2024 US presidential campaign. The results were striking: decreasing or increasing exposure to antidemocratic attitudes and partisan animosity shifted participants' feelings about opposing political parties by more than 2 points on a 100-point scale. This effect was comparable to several years' worth of polarisation change measured in long-term surveys.
Research by scholars at MIT and elsewhere has shown that Twitter's algorithm amplifies divisive content far more than users' stated preferences would suggest. A systematic review synthesising a decade of peer-reviewed research (2015-2025) on algorithmic effects identified three consistent patterns: algorithmic systems structurally amplify ideological homogeneity; youth demonstrate partial awareness of algorithmic manipulation but face constraints from opaque recommender systems; and echo chambers foster both ideological polarisation and identity reinforcement.
The review also found significant platform-specific effects. Facebook is primarily linked to polarisation, YouTube is associated with radicalisation with particularly strong youth relevance, and Twitter/X emphasises echo chambers with moderate youth impact. Instagram and TikTok remain under-researched despite their enormous user bases, a concerning gap given TikTok's particularly opaque recommendation system.
The implications for AI-generated content are profound. If algorithms already preferentially amplify emotionally charged, divisive material created by humans, what happens when such material can be produced at unlimited scale with sophisticated personalisation? The answer, according to researchers at George Washington University's Program on Extremism, is that extremist groups can now “systematically exploit AI-driven recommendation algorithms, behavioural profiling mechanisms, and generative content systems to identify and target psychologically vulnerable populations, thereby circumventing traditional counterterrorism methodologies.”
Perhaps the most concerning aspect of AI-enabled extremism is its capacity for psychological targeting at scale. Traditional propaganda operated as a broadcast medium: create a message, distribute it widely, hope it resonates with some fraction of the audience. AI-enabled propaganda operates as a precision instrument: identify psychological vulnerabilities, craft personalised messages, deliver them through algorithmically optimised channels.
Research published in Frontiers in Political Science in 2025 documented how “through analysing huge amounts of personal data, AI algorithms can tailor messages and content which appeal to a particular person's emotions, beliefs and grievances.” This capability transforms radicalisation from a relatively inefficient process into something approaching industrial production.
The numbers are sobering. A recent experiment estimated that AI-generated propaganda can persuade anywhere between 2,500 and 11,000 individuals per 100,000 targeted. Research participants who read propaganda generated by GPT-3 were nearly as persuaded as those who read real propaganda from state actors in Iran or Russia. Given that elections and social movements often turn on margins smaller than this, the potential for AI-generated influence operations to shift outcomes is substantial.
The real-world evidence is already emerging. In July 2024, Austrian authorities arrested several teenagers who were planning a terrorist attack at a Taylor Swift concert in Vienna. The investigation revealed that some suspects had been radicalised online, with TikTok serving as one of the platforms used to disseminate extremist content that influenced their beliefs and actions. The algorithm, optimised for engagement, had efficiently delivered radicalising material to psychologically vulnerable young people.
This is not a failure of content moderation in the traditional sense. It is a structural feature of engagement-optimised systems encountering content designed to exploit that optimisation. Research published in Frontiers in Social Psychology in 2025 found that TikTok's algorithms “privilege more extreme material, and through increased usage, users are gradually exposed to more and more misogynistic ideologies.” The algorithms actively amplify and direct harmful content, not as a bug, but as a consequence of their fundamental design logic.
The combination of psychological profiling and generative AI creates what researchers describe as an unprecedented threat vector. Leaders of extremist organisations are no longer constrained by language barriers, as AI translation capabilities expand their reach across linguistic boundaries. Propaganda materials can now be produced rapidly using just a few keywords. The introduction of deepfakes adds another dimension, enabling the misrepresentation of words or actions by public figures. As AI systems become more publicly available and open-source, the barriers to entry for their use continue to lower, making it easier for malicious actors to adopt AI technologies at scale.
Faced with these challenges, platforms have relied on a suite of content moderation tools developed primarily for human-generated content. The most sophisticated of these is “fingerprinting” or hashing, which creates unique digital signatures for known harmful content and automatically removes matches across the platform. This approach has proved reasonably effective against the redistribution of existing terrorist videos and child sexual abuse material.
Generative AI renders this approach largely obsolete. According to research from the Combating Terrorism Center at West Point, “by manipulating their propaganda with generative AI, extremists can change a piece of content's digital fingerprint, rendering fingerprinting moot as a moderation tool.” A terrorist can now take existing propaganda, run it through an AI system that makes superficially minor modifications, and produce content that evades all hash-based detection whilst preserving the harmful message.
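The fragility of exact-match fingerprinting is easy to demonstrate. The sketch below is illustrative Python, not any platform's actual system: it uses a plain cryptographic digest as the “fingerprint,” and shows that altering a single byte of a file produces an entirely different digest. This is why production systems favour perceptual hashes (which tolerate small changes), and why generative rewriting, which changes far more than one byte, can defeat even those.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest used as an exact-match content fingerprint."""
    return hashlib.sha256(content).hexdigest()

original = b"Known extremist propaganda video bytes..."
# One changed byte -- e.g. a single re-encoded pixel -- is enough to alter it
modified = b"Known extremist propaganda video bytes,.."

print(fingerprint(original) == fingerprint(original))  # True: exact copies are caught
print(fingerprint(original) == fingerprint(modified))  # False: any change evades the match
```

Hash databases such as those shared through industry consortia work well precisely because redistributed terrorist videos used to be byte-for-byte copies; AI modification removes that assumption.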
The scale challenge compounds this technical limitation. A 2024 paper in Philosophy & Technology noted that “humans alone can't keep pace with the enormous volume of content that AI creates.” Most content moderation decisions are now made by machines, not human beings, and this is only set to accelerate. Automation amplifies human error, with biases embedded in training data and system design, whilst enforcement decisions happen rapidly, leaving limited opportunities for human oversight.
Traditional keyword and regex-based filters fare even worse. Research from the University of Chicago's Data Science Institute documented how “GenAI changes content moderation from a post-publication task to a real-time, model-layer challenge. Traditional filters, based on keywords or regex, fail to catch multilingual, evasive, or prompt-driven attacks.”
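The weakness of keyword filtering can be shown in a few lines. The example below is a deliberately naive filter of the kind the researchers describe, not a real moderation system; the blocklist terms are hypothetical. Trivial character substitution or spacing defeats the word-boundary match, and these are the crudest evasions available, well short of prompt-driven or multilingual rewriting.

```python
import re

# A naive keyword blocklist (illustrative only; terms are hypothetical)
BLOCKLIST = re.compile(r"\b(attack|bomb)\b", re.IGNORECASE)

def is_flagged(text: str) -> bool:
    """Flag text containing a blocklisted keyword as a whole word."""
    return BLOCKLIST.search(text) is not None

print(is_flagged("plan the attack at dawn"))       # True: literal keyword matches
print(is_flagged("plan the att4ck at dawn"))       # False: digit substitution slips through
print(is_flagged("plan the a t t a c k at dawn"))  # False: spacing defeats the match
```

An LLM rewriting the same message into fresh phrasing each time never needs to reuse a blocklisted string at all, which is why the researchers describe moderation as moving to the model layer.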
The detection arms race shows signs of favouring offence over defence. Research from Drexel University identified methods to detect AI-generated video through “fingerprints” unique to different generative models. However, as a Reuters Institute analysis noted, “deepfake creators are finding sophisticated ways to evade detection, so combating them remains a challenge.” Studies have demonstrated poorer performance of detection tools on certain types of content, and researchers warn of “a potential 'arms race' in technological detection, where increasingly sophisticated deepfakes may outpace detection methods.”
The gender dimension of this challenge deserves particular attention. Image-based sexual abuse is not new, but the explosion of generative AI tools to enable it marks a new era for gender-based harassment. For little or no cost, any individual with an internet connection and a photo of a person can produce sexualised imagery of that person. The overwhelming majority of this content targets women and girls, ranging from teenagers to politicians and other public figures. This represents a form of AI-generated extremism that operates at the intersection of technology, misogyny, and the commodification of attention.
If traditional content moderation cannot address the AI-generated extremism challenge, what about reforming platform architecture itself? Here the picture grows more complex, touching on fundamental questions about the design logic of attention economies.
The European Union has attempted the most comprehensive regulatory response to date. The Digital Services Act (DSA), which came into full force in 2024, imposes significant obligations on Very Large Online Platforms (VLOPs) with over 45 million monthly EU users. The law forces platforms to be more transparent about how their algorithmic systems work and holds them accountable for societal risks stemming from their services. Non-compliant platforms face fines up to 6% of annual global revenue. During the second quarter of 2024, the Commission publicly confirmed that it had initiated formal proceedings against several major online platforms, requiring detailed documentation on content moderation systems, algorithmic recommender systems, and advertising transparency.
The EU AI Act adds additional requirements specific to AI-generated content. Under this legislation, certain providers must detect and disclose manipulated content, and very large platforms must identify and mitigate systemic risks associated with synthetic content. China has gone further still: as of September 2025, all AI-generated content, whether text, image, video, or audio, must be labelled either explicitly or implicitly, with obligations imposed across service providers, platforms, app distributors, and users.
In February 2025, the European Commission released a new best-practice election toolkit under the Digital Services Act. This toolkit provides guidance for regulators working with platforms to address risks including hate speech, online harassment, and manipulation of public opinion, specifically including those involving AI-generated content and impersonation.
These regulatory frameworks represent important advances in transparency and accountability. Whether they can fundamentally alter the competitive dynamics between inflammatory and mainstream content remains uncertain. The DSA and AI Act address disclosure and risk mitigation, but they do not directly challenge the engagement-optimisation model that underlies algorithmic amplification. Platforms may become more transparent about how their algorithms work whilst those algorithms continue to preferentially promote outrage-inducing material.
Some researchers have proposed more radical architectural interventions. In her 2024 book “Invisible Rulers,” Renee DiResta, formerly of the Stanford Internet Observatory and now at Georgetown University's McCourt School of Public Policy, argued for changes that would make algorithms “reward accuracy, civility, and other values” rather than engagement alone. The Center for Humane Technology, co-founded by former Google design ethicist Tristan Harris, has advocated for similar reforms, arguing that “AI is following the same dangerous playbook” as social media, with “companies racing to deploy AI systems optimised for engagement and market dominance, not human wellbeing.”
Yet implementing such changes confronts formidable obstacles. The attention economy model has proved extraordinarily profitable. In 2024, private AI investment in the United States far outstripped that in the European Union, raising concerns that stringent regulation might simply shift innovation elsewhere. The EU Parliament's own analysis acknowledged that “regulatory complexity could be stifling innovation.” Meanwhile, research institutions dedicated to studying these problems face their own challenges: the Stanford Internet Observatory, which pioneered research into platform manipulation, was effectively dismantled in 2024 following political pressure, with its founding director Alex Stamos and research director Renee DiResta both departing after sustained attacks from politicians who alleged their work amounted to censorship.
Beyond the technical and economic challenges lies a deeper philosophical problem. Our frameworks for regulating speech, including the human rights principles that undergird them, were developed for human expression. What happens when expression becomes “hybrid,” generated or augmented by machines, with fluid authorship and unclear provenance?
Research published in Taylor & Francis journals in 2025 argued that “conventional human rights frameworks, particularly freedom of expression, are considered ill-equipped to govern increasingly hybrid media, where authorship and provenance are fluid, and emerging dilemmas hinge more on perceived value than rights violations.”
Consider the problem of synthetic personas. An AI can generate not just content but entire fake identities, complete with profile pictures, posting histories, and social connections. These synthetic personas can engage in discourse, build relationships with real humans, and gradually introduce radicalising content. From a traditional free speech perspective, we might ask: whose speech is this? The AI developer's? The user who prompted the generation? The corporation that hosts the platform? Each answer carries different implications for responsibility and remedy.
The provenance problem extends to detection. Even if we develop sophisticated tools to identify AI-generated content, what do we do with that information? Mandatory labelling, as China has implemented, assumes users will discount labelled content appropriately. But research on misinformation suggests that labels have limited effectiveness, particularly when content confirms existing beliefs. Moreover, as the Reuters Institute noted, “disclosure techniques such as visible and invisible watermarking, digital fingerprinting, labelling, and embedded metadata still need more refinement.” Malicious actors may circumvent these measures “by using jailbroken versions or creating their own non-compliant tools.”
There is also the question of whether gatekeeping mechanisms designed for human creativity can or should apply to machine-generated content. Copyright law, for instance, generally requires human authorship. Platform terms of service assume human users. Content moderation policies presuppose human judgment about context and intent. Each of these frameworks creaks under the weight of AI-generated content that mimics human expression without embodying human meaning.
The problem grows more acute when considering the speed at which these systems operate. Research from organisations like WITNESS has addressed how transparency in AI production can help mitigate confusion and lack of trust. However, the refinement of disclosure techniques remains ongoing, and the gap between what is technically possible and what is practically implemented continues to widen.
Despite these challenges, researchers and technologists are exploring new approaches that might address the structural vulnerabilities of attention economies to AI-generated extremism.
One promising direction involves using large language models themselves for content moderation. Research published in Artificial Intelligence Review in 2025 explored how LLMs could revolutionise moderation economics. Once fine-tuned for the task, LLMs would be far less expensive to deploy than armies of human content reviewers. OpenAI has reported that using GPT-4 for content policy development and moderation enabled faster and more consistent policy iteration, reduced from months to hours, enhancing both accuracy and adaptability.
Yet this approach carries its own risks. Using AI to moderate AI creates recursive dependencies and potential failure modes. As one research paper noted, the tools and strategies used for content moderation “weren't built for GenAI.” LLMs can hallucinate, reflect bias from training data, and generate harmful content “without warning, even when the prompt looks safe.”
Another architectural approach involves restructuring recommendation algorithms themselves. The Science study on algorithmic polarisation demonstrated that simply reranking content to reduce exposure to antidemocratic attitudes and partisan animosity measurably shifted users' political attitudes. This suggests that alternative ranking criteria, prioritising accuracy or viewpoint diversity over engagement, could mitigate polarisation effects. However, implementing such changes would require platforms to sacrifice engagement metrics that directly drive advertising revenue. The economic incentives remain misaligned with social welfare.
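The logic of such an intervention can be sketched in a few lines. This is a toy model, not the method used in the Science study: the `animosity` field stands in for a hypothetical classifier score for partisan animosity, and the scoring function and weights are assumptions for illustration. The point is structural: the ranking objective, not the content pool, determines what rises to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    engagement: float  # predicted engagement, 0..1
    animosity: float   # hypothetical classifier score for partisan animosity, 0..1

def rerank(feed: list[Post], penalty: float = 0.5) -> list[Post]:
    """Rank by predicted engagement minus a tunable penalty on animosity,
    rather than by engagement alone."""
    return sorted(feed, key=lambda p: p.engagement - penalty * p.animosity, reverse=True)

feed = [
    Post("outrage", engagement=0.9, animosity=0.8),
    Post("news",    engagement=0.7, animosity=0.1),
]
print([p.id for p in rerank(feed, penalty=0.0)])  # ['outrage', 'news']: pure engagement
print([p.id for p in rerank(feed, penalty=0.5)])  # ['news', 'outrage']: penalised ranking
```

Even in this toy form, the trade-off the article describes is visible: any nonzero penalty demotes the highest-engagement item, which is exactly the revenue cost platforms resist.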
Some researchers have proposed more fundamental interventions: breaking up large platforms, imposing algorithmic auditing requirements, creating public interest alternatives to commercial social media, or developing decentralised architectures that reduce the power of any single recommendation system. Each approach carries trade-offs and faces significant political and economic barriers.
Perhaps most intriguingly, some researchers have suggested using AI itself for counter-extremism. As one Hedayah research brief noted, “LLMs could impersonate an extremist and generate counter-narratives on forums, chatrooms, and social media platforms in a dynamic way, adjusting to content seen online in real-time. A model could inject enough uncertainty online to sow doubt among believers and overwhelm extremist channels with benign content.” The prospect of battling AI-generated extremism with AI-generated counter-extremism raises its own ethical questions, but it acknowledges the scale mismatch that human-only interventions cannot address.
The development of more advanced AI models continues apace. GPT-5, launched in August 2025, brings advanced reasoning capabilities in a multimodal interface. Its capabilities suggest a future moderation system capable of understanding context across formats with greater depth. Google's Gemini 2.5 family similarly combines speed, multimodal input handling, and advanced reasoning to tackle nuanced moderation scenarios in real time. Developers can customise content filters and system instructions for tailored moderation workflows. Yet the very capabilities that enable sophisticated moderation also enable sophisticated evasion.
The most profound concern may be the one hardest to address: the possibility that AI-generated extremism at scale could systematically shift cultural baselines over time. In an “attention ecology,” as researchers describe it, algorithms intervene in “the production, circulation, and legitimation of meaning by structuring knowledge hierarchies, ranking content, and determining visibility.”
If inflammatory content consistently outcompetes moderate content for algorithmic promotion, and if AI enables the production of inflammatory content at unlimited scale, then the information environment itself shifts toward extremism, not through any single piece of content but through the aggregate effect of millions of interactions optimised for engagement.
Research on information pollution describes this as a “congestion externality.” In a digital economy where human attention is the scarce constraint, an exponential increase in synthetic content alters the signal-to-noise ratio. As the cost of producing “plausible but mediocre” content vanishes, platforms face a flood of synthetic noise. The question becomes whether quality content, however defined, can maintain visibility against this tide.
A 2020 Pew Research Center survey found that 64% of Americans believed social media had a mostly negative effect on the direction of the country. This perception preceded the current wave of AI-generated content. If attention economies were already struggling to balance engagement optimisation with social welfare, the introduction of AI-generated content at scale suggests those struggles will intensify.
The cultural baseline question connects to democratic governance in troubling ways. During the 2024 election year, researchers documented deepfake audio and video targeting politicians across multiple countries. In Taiwan, deepfake audio of a politician endorsing another candidate surfaced on YouTube. In the United Kingdom, fake clips targeted politicians across the political spectrum. In India, where over half a billion voters went to the polls, people were reportedly “bombarded with political deepfakes.” These instances represent early experiments with a technology whose capabilities expand rapidly.
Can interventions address these structural vulnerabilities? The technical answer is uncertain. Detection technologies continue to improve, but they face a fundamental asymmetry: defenders must identify all harmful content, whilst attackers need only evade detection some of the time. Watermarking and provenance systems show promise but can be circumvented by determined actors using open-source tools or jailbroken models.
The political answer is perhaps more concerning. The researchers and institutions best positioned to study these problems have faced sustained attacks. The Stanford Internet Observatory's effective closure in 2024 followed “lawsuits, subpoenas, document requests from right-wing politicians and non-profits that cost millions to defend, even when vindicated by the US Supreme Court in June 2024.” The lab will not conduct research into any future elections. This chilling effect on research occurs precisely when such research is most needed.
Meanwhile, the economic incentives of major platforms remain oriented toward engagement maximisation. The EU's regulatory interventions, however significant, operate at the margins of business models that reward attention capture above all else. The 2024 US presidential campaign occurred in an information environment shaped by algorithmic amplification of divisive content, with AI-generated material adding new dimensions of manipulation.
There is also the question of global coordination. Regulatory frameworks developed in the EU or US have limited reach in jurisdictions that host extremist content or provide AI tools to bad actors. The ISKP videos that opened this article were not produced in Brussels or Washington. Addressing AI-generated extremism requires international cooperation at a moment when geopolitical tensions make such cooperation difficult.
Internal documents from major platforms have occasionally offered glimpses of the scale of the problem. One revealed that 64% of users who joined extremist groups on Facebook did so “due to recommendation tools.” According to the Mozilla Foundation's “YouTube Regrets” report, 12% of content recommended by YouTube's algorithms violates the company's own community standards. These figures predate the current wave of AI-generated content. The integration of generative AI into content ecosystems has only expanded the surface area for algorithmic radicalisation.
The fundamental question raised by AI-generated extremist content concerns the sustainability of attention economies as currently constructed. These systems were designed for an era when content production carried meaningful costs and human judgment imposed natural limits on the volume and extremity of available material. Neither condition obtains in an age of generative AI.
The structural vulnerabilities are not bugs to be patched but features of systems optimised for engagement in a competitive marketplace for attention. Algorithmic amplification of inflammatory content is the logical outcome of engagement optimisation. AI-generated extremism at scale is the logical outcome of near-zero marginal production costs. Traditional content moderation cannot address dynamics that emerge from the fundamental architecture of the systems themselves.
This does not mean the situation is hopeless. The research cited throughout this article points toward potential interventions: algorithmic reform, regulatory requirements for transparency and risk mitigation, AI-powered counter-narratives, architectural redesigns that prioritise different values. Each approach faces obstacles, but obstacles are not impossibilities.
What seems clear is that the current equilibrium is unstable. Attention economies that reward engagement above all else will increasingly be flooded with AI-generated content designed to exploit human psychological vulnerabilities. The competitive dynamics between inflammatory and mainstream content will continue to shift toward the former as production costs approach zero. Traditional gatekeeping mechanisms will continue to erode as detection fails to keep pace with generation.
The choices facing societies are not technical alone but political and philosophical. What values should govern information ecosystems? What responsibilities do platforms bear for the content their algorithms promote? What role should public institutions play in shaping attention markets? And perhaps most fundamentally: can liberal democracies sustain themselves in information environments systematically optimised for outrage?
These questions have no easy answers. But they demand attention, perhaps the scarcest resource of all.
Combating Terrorism Center at West Point, “Generating Terror: The Risks of Generative AI Exploitation,” CTC Sentinel, Volume 17, Issue 1, January 2024. https://ctc.westpoint.edu/generating-terror-the-risks-of-generative-ai-exploitation/
Frontiers in Political Science, “The role of artificial intelligence in radicalisation, recruitment and terrorist propaganda,” 2025. https://www.frontiersin.org/journals/political-science/articles/10.3389/fpos.2025.1718396/full
Frontiers in Social Psychology, “Social media, AI, and the rise of extremism during intergroup conflict,” 2025. https://www.frontiersin.org/journals/social-psychology/articles/10.3389/frsps.2025.1711791/full
Science, “Reranking partisan animosity in algorithmic social media feeds alters affective polarization,” 2025. https://www.science.org/doi/10.1126/science.adu5584
GNET Research, “Automated Recruitment: Artificial Intelligence, ISKP, and Extremist Radicalisation,” April 2025. https://gnet-research.org/2025/04/11/automated-recruitment-artificial-intelligence-iskp-and-extremist-radicalisation/
GNET Research, “The Feed That Shapes Us: Extremism and Adolescence in the Age of Algorithms,” December 2025. https://gnet-research.org/2025/12/12/the-feed-that-shapes-us-extremism-and-adolescence-in-the-age-of-algorithms/
arXiv, “The Economics of Information Pollution in the Age of AI,” 2025. https://arxiv.org/html/2509.13729
arXiv, “Rewarding Engagement and Personalization in Popularity-Based Rankings Amplifies Extremism and Polarization,” 2025. https://arxiv.org/html/2510.24354v1
Georgetown Law, “The Attention Economy and the Collapse of Cognitive Autonomy,” Denny Center for Democratic Capitalism. https://www.law.georgetown.edu/denny-center/blog/the-attention-economy/
George Washington University Program on Extremism, “Artificial Intelligence and Radicalism: Risks and Opportunities.” https://extremism.gwu.edu/artificial-intelligence-and-radicalism-risks-and-opportunities
International Centre for Counter-Terrorism (ICCT), “The Radicalization (and Counter-radicalization) Potential of Artificial Intelligence.” https://icct.nl/publication/radicalization-and-counter-radicalization-potential-artificial-intelligence
Hedayah, “AI for Counter-Extremism Research Brief,” April 2025. https://hedayah.com/app/uploads/2025/06/Hedayah-Research-Brief-AI-for-Counter-Extremism-April-2025-Design-DRAFT-28.04.25-v2.pdf
Philosophy & Technology (PMC), “Moderating Synthetic Content: the Challenge of Generative AI,” 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11561028/
Taylor & Francis, “Platforms as architects of AI influence: rethinking moderation in the age of hybrid expression,” 2025. https://www.tandfonline.com/doi/full/10.1080/20414005.2025.2562681
Taylor & Francis, “The Ghost in the Machine: Counterterrorism in the Age of Artificial Intelligence,” 2025. https://www.tandfonline.com/doi/full/10.1080/1057610X.2025.2475850
European Commission, “The Digital Services Act,” Digital Strategy. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act
AlgorithmWatch, “A guide to the Digital Services Act.” https://algorithmwatch.org/en/dsa-explained/
TechPolicy.Press, “The Digital Services Act Meets the AI Act: Bridging Platform and AI Governance.” https://www.techpolicy.press/the-digital-services-act-meets-the-ai-act-bridging-platform-and-ai-governance/
ISD Global, “Towards transparent recommender systems: Lessons from TikTok research ahead of the 2025 German federal election.” https://www.isdglobal.org/digital_dispatches/towards-transparent-recommender-systems-lessons-from-tiktok-research-ahead-of-the-2025-german-federal-election/
Reuters Institute for the Study of Journalism, “Spotting the deepfakes in this year of elections: how AI detection tools work and where they fail.” https://reutersinstitute.politics.ox.ac.uk/news/spotting-deepfakes-year-elections-how-ai-detection-tools-work-and-where-they-fail
U.S. GAO, “Science & Tech Spotlight: Combating Deepfakes,” 2024. https://www.gao.gov/products/gao-24-107292
World Economic Forum, “4 ways to future-proof against deepfakes in 2024 and beyond,” February 2024. https://www.weforum.org/stories/2024/02/4-ways-to-future-proof-against-deepfakes-in-2024-and-beyond/
Springer, “Content moderation by LLM: from accuracy to legitimacy,” Artificial Intelligence Review, 2025. https://link.springer.com/article/10.1007/s10462-025-11328-1
Meta Oversight Board, “Content Moderation in a New Era for AI and Automation.” https://www.oversightboard.com/news/content-moderation-in-a-new-era-for-ai-automation/
MDPI, “Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth,” 2024. https://www.mdpi.com/2075-4698/15/11/301
Center for Humane Technology. https://www.humanetech.com/
NPR, “A major disinformation research team's future is uncertain after political attacks,” June 2024. https://www.npr.org/2024/06/14/g-s1-4570/a-major-disinformation-research-teams-future-is-uncertain-after-political-attacks
Platformer, “The Stanford Internet Observatory is being dismantled,” 2024. https://www.platformer.news/stanford-internet-observatory-shutdown-stamos-diresta-sio/
Grand View Research, “U.S. AI-Powered Content Creation Market Report, 2033.” https://www.grandviewresearch.com/industry-analysis/us-ai-powered-content-creation-market-report
Mozilla Foundation, “YouTube Regrets” Report. https://foundation.mozilla.org/
Pew Research Center, “Americans' Views of Technology Companies,” 2020. https://www.pewresearch.org/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from Happy Duck Art
Been getting more comfortable with acrylics. I really like the sort of “intuitive abstract” thing – I’m not stellar at representational painting. I don’t enjoy it. It can be fun, sometimes, to incorporate it, but I’m unlikely to decide to become a capturer of real life.
I would, however, like to get better at photographing my paintings, because I’m not terrific at that. :)



from Roscoe's Story
In Summary: * This is another good day in the Roscoe-verse. Again today I've been able to keep the HVAC system turned off and the heavy front door open, letting the fresh air in through the screen door for most of the day. If the weather forecasts are right, I'll be able to do this for at least the next 10 days.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Health Metrics: * bw= 226.08 lbs. * bp= 159/91 (70)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 07:00 – toast and butter * 08:00 – 2 tangerines * 10:00 – 1 peanut butter sandwich * 14:00 – lasagna * 16:30 – 1 fresh apple * 17:45 – 2 HEB Bakery cookies
Activities, Chores, etc.: * 05:00 – listen to local news talk radio * 05:55 – bank accounts activity monitored * 06:15 – read, pray, follow news reports from various sources, surf the socials * 13:30 to 15:00 – watch old game shows and eat lunch at home with Sylvia * 15:00 – listen to Dan Bongino podcast * 16:40 – listening now to the last part of The Jack Riccardi Show, will follow this with the first part of The Joe Pags Show, before tuning the radio to The Home for IU Women's Basketball ahead of tonight's game. After the game, my plan is to finish my night prayers and get ready for bed.
Chess: * 12:00 – moved in all pending CC games
from bone courage
The excerpt below opens The Salesman, a satire on government efficiency written in 2016 and submitted in 2024 for publication. The good folks at Waxing and Waning are publishing the short story. Go ahead and pre-order their March issue to see where Paul lands.
Paul Winter races a million miles an hour over the hot Mojave asphalt to close his next sale. Nothing slows down his square black sedan. Not a fat green caterpillar. Not a desert tortoise. Not potholes. And not the five thousand Kelvins rising to the stark sky that sends the scorching heat back to earth hotter than before. Judgement! Sweet steel-belted tires rush toward that unvanishing point in the hazy liquid distance. Paul wipes his thick brow. Dusty kerchief? No, sweaty and grimy. White when he started the day. In a motel. Behind a motel, boiling in his car, if he’s honest. And today he’s honest. He has to be. It’s sales day. Close or be closed.
Paul’s big hand rests on a tall, drab grey case in the passenger seat. His thumb fingers the golden “SH7000” stenciled on top. He lost his ring finger in the war. Not even the shattered nub remains. Wearing his great-grandfather’s US Army Ruptured Duck ring on his pinky cured the phantom finger; made it go away. The ring transcended superstition into conviction; Paul never goes without it. His tie? Polyester. His feeling: indiscernible. Indigestion? Intolerable.
X10H395 is his target school. Blind designations guarantee no biases. Blind designations guarantee equal, enthusiastic, exceptional sales. Blind designations are efficient. And efficiency is king. That’s the government’s line, anyway. So the government contracted DeKline Pharmaceutical to sell the line, and DeKline invented a process and device to optimize efficiency. Its sales manual begins and ends with: 100% efficiency is 100% happiness.
from brendan halpin
More Epstein emails have dropped, and after seeing my former boss, Danny Hillis, in one of the recent photo dumps, I went a-searchin’ for Danny’s name in the emails.
There are hundreds of pages of emails mentioning Danny Hillis, though, to be fair, every email appears like five times, as some are duplicated and then every reply in a chain appears as its own document. So it’s not really hundreds of emails.
But it is dozens. And what it details is nothing salacious—more what I would categorize as the banality of evil. Epstein would take a trip to Cambridge and see if Hillis was available, and they’d meet up. Or Epstein would be assembling a group of Harvard/MIT stars, and he’d make sure Hillis was there. Or Hillis would travel to New York and see Epstein there.
Again, I need to stress that there’s none of the chumminess or coded language in the Epstein/Hillis exchanges that you see in these emails from people who were obviously involved in the abuse that defined Epstein’s life. It’s friendly and professional. In fact, a lot of it is logistical communication between their assistants. (Fun fact—they didn’t redact Hillis’ personal cell number from all the emails, so if you want to give him a call or send a text, that is totally something you can do, unless he’s changed his number!)
But, as I mentioned in my previous piece, the absence of criminal behavior on Hillis’ part doesn’t indicate a moral clean slate. He and Epstein saw each other a couple of times per year, as far as I can tell from the emails (confession—I didn’t look through all 200+ pages), between the years 2010 and 2018. He also did some consulting work for Epstein’s reputation-laundering philanthropic foundation. The files indicate that he signed an NDA, but my connections in the fundraising world tell me this isn’t necessarily unusual or sinister.
For those who want a lil’ timeline refresher, Epstein’s first conviction was in 2008. James Patterson’s book, which really brought the extent of Epstein’s evil to public prominence, came out in 2016. Epstein was arrested again in 2019.
So Danny Hillis knew that Jeffrey Epstein was a pedophile. And he chose to spend time with him anyway. For Danny Hillis, the systematic abuse of children was not a dealbreaker.
I reached out to Danny Hillis to ask him about this through emails at his Long Now Foundation and his company Applied Invention. I asked how he made sense of his association with a monster. I’m not surprised that he didn’t respond, but I can’t help speculating. So here goes.
First, you need to know that Danny Hillis is fantastically rich by the standards of normal people, but not at all rich by Epstein standards. I saw an estimate of Hillis’ net worth at $60 million. So he’s not one of the big money men that Epstein hung with.
Hillis is more of a professional smart person, but while most such people have cushy academic jobs that at least require them to, like, supervise grad students or publish papers, Hillis seems to have spent most of his time after Thinking Machines folded in 1996 doing something close to nothing, but different than the day before. His Wikipedia page is rife with mentions of stuff he invented that never really saw any practical application, with the exception of putting a data center in a shipping container, which, good for him, I guess.
His big legacy project as a public intellectual is to take a mountain owned by Jeff Bezos and put a really big clock in it. No, really. The clock is supposed to last ten thousand years. It’s got chimes programmed by Brian Eno.
Honestly I kind of respect the lack of hustle. Nice work if you can get it!
So, anyway, though Hillis moved in billionaire circles, he was never really one of them. His role, like those of Steven Pinker, Joi Ito, Noam Chomsky, and the other Harvard and MIT-affiliated people Epstein hung out with, was more akin to that of a trained monkey. Epstein threw them coins (or proximity to money and power), and they clapped and danced and did their little show, which was talking about Big Ideas. (But never the Big Idea that the great majority of the problems on planet earth are due to the hoarding of wealth. That’s just crazy talk!)
It’s actually a symbiotic relationship, because the evil, venal, ultimately petty money men get to feel like they aren’t just Scrooge McDuck cavorting in a swimming pool of money, but, rather, People Who Matter. People who are helping to shape the world’s future! People who talk to the smartest people in the world and sometimes give them money!
And the trained monkeys get the coins. I noted that Epstein seemed to have set up two meetings for Danny Hillis with Barclay’s president Jes Staley, so perhaps that alone explains Hillis’ willingness to spend time with a known pedophile.
But I have another theory about these people. I think Elon Musk kind of gave the game away when he said he considers us all to be NPCs. If you don’t play tabletop or video role-playing games, you should know that NPC means non-player character. It’s like, the guy in the tavern who gives you information about where the goblins are hiding, or whatever. They look like people (or gnomes or elves or whatever), but they fundamentally don’t matter except as far as they help move the players’ story along.
In other words, in billionaire circles, regular people don’t really count as human beings. This is why they’re able to treat masses of people with such casual cruelty—you wouldn’t do that to a person, but a broke guy in China making an iPhone? That’s not a person. A single mom walking 15 miles a day in an Amazon warehouse and having to stand in a security line for an unpaid hour at the end of every work day? Well, that’s not a person. You? Me? We’re not people to them.
And so this is my theory about Danny Hillis—he found Epstein useful as a way to continue his professional smart guy grift, and the girls Epstein was abusing? Well, they weren’t the daughters of Danny Hillis or any of the people he spent time with. It’s not like Epstein was doing those things to people.
Maybe I’m wrong. If Danny Hillis is reading this, let me just say hi, fuck you, and please feel free to email me to explain why you spent eight years being friends with a monster.
from Faucet Repair
22 January 2026
Starlight Way (working title): I've wanted to make an all-white painting for a while and have failed at past attempts, but it seems I may have finally found a way into one. Which in my head felt something like approaching the painting as a white Conté crayon drawing on toned paper. The nucleus of the image is based on a 9-meter sculpture of a scaled-model Qatar Airways Boeing 777-9 aircraft near Heathrow Terminal 4 by Starlight Way. The painting doesn't reflect or need to reflect that specific location visually, so the title will change. Maybe just Plane is better...it is. Anyway, the important part is what the paint is doing. The explorations of space, value, line, and yes—plane—that emerged. I think I can trace those elements back to two works I looked at a lot this week:
Phoebe Helander, Wire Form III (Divided Space) (2026) David Ostrowski, F (Jung, Brutal, Gutaussehend) (2012)
Each of these paintings addresses space/the picture plane/gravity/color in interesting ways, and while it's unwise to reach for these effects intentionally, I do think what subconsciously drew me to portraying the sculpture was related to these concerns via its position as an object unmooring from the ground while remaining fixed to it. And I think what resulted sits at the center of an axis that acknowledges multiple potential trains of thought without committing fully to any of them—emerging from/being pulled back into a place of origin, crossing/being stuck at a horizon, taking off/crashing, dissecting space/being absorbed by space, and additive line/subtractive line.
from Faucet Repair
20 January 2026
Image inventory: Faint reflection of a shower curtain on marble (decay, rust, dirt, sand, columns, metal), toothbrush on its side looking at its reflection in a small broken mirror (blue, melting, recognizing, horizontal, cracking), a new leaf growing from a houseplant (wet, green, light, soil, brown, glistening, dew), Daejeon covered in snow (pale yellow sky, my shaved head), a broken foosball table in the sun (slanted shadows of foosmen, foosmen looking at shadows, foosmen turned up, foosmen turned down), a neon yellow rope and a thin rainbow slouching against a brick wall (phenomena, approaching, slight, long), a light bulb with dew on it, an ashtray with rainwater (preserved, koi fish, shrimp, spooning), orange/yellow windows of a building in Wood Green (sunglasses), a green rubbish bin surrounded by blue rubbish bins (outnumbered), two carrots in a Tupperware container (two are three), a woman sitting on a bus with city lights encircling her (fireflies, string lights), giant advertisement of strawberries and raspberries in a window (blood), cone-shaped light (hood), giant shadow of a hand in the corner of a room.
from Lastige Gevallen in de Rede
We promise you a great future but don't yet know when if it isn't soon then surely some next time we picture hordes of adepts hanging on your lips a stage for you alone with freshly worn slippers on it followers galore applause for every fart all the important people always want you at their side and at the Chinese takeaway you'll get a bit more than just sambal
maar for now you must still fetch and pay for your fresh sprouts yourself and for now no one will shower you with compliments you are no more than a member of the community a person waiting for success at the bottom of the stairs
but later bells will ring and banners will wave when you arrive the whole fifth critique performed by the classical keyboard octet falls silent your name will be even more than holy if you are missing from breakfast one day people suffer pain
but for now you must make an appointment with us yourself and in your presence we will constantly heave heavy sighs as if you were the most tiring person we know and let it show that we do not yet hold you in anywhere near such high esteem
but keep it up and we may come to regret our sighs flee from our smug old ego and breathe much more, and differently, in your presence and be disappointed whenever you're off with someone else again
but for the time being you must sign here in triplicate then we'll give you some grub for self-preservation and you'll be grateful to us and so not the other way around and if you walk away from this we won't look back at you
from
💚
Our Father Who art in heaven Hallowed be Thy name Thy Kingdom come Thy will be done on Earth as it is in heaven Give us this day our daily Bread And forgive us our trespasses As we forgive those who trespass against us And lead us not into temptation But deliver us from evil
Amen
Jesus is Lord! Come Lord Jesus!
Come Lord Jesus! Christ is Lord!
from
💚
Return of the economy goods Five dreams And none were war A secretive someday This times the light And is our country A nation under the Sun Millions taking care A cannonball as they say With misery but to no effect Staying in the hue Of dollarest men A duty to implore Pieces by the steer And five dimes at King World- One for Oprah And a thousand dreams flowing river Mercy goes round when you know Brailled measure for the day we don’t decorate The odyssey of Woman-child Third in her class but says thanks Good parents and a past get No reprisal for the generation Thirty years and solitude This empathy is yours All deliveries final To who speak eloquent or not A share in everybody And a home remembered wealth The currents have a way Out of the desert- And into the eyes- of Oprah Winfrey Tearship dawn And at our best Heaven to victory repeating.
from eivindtraedal

It is interesting to see how the media bows to a bit of victim-playing and a chorus of wailing from the right. NRK is now issuing an apology to Asle Toje. They can answer for their own journalism, but I don't see that the press in general has much to apologize for. The only regrettable thing now would be if they stopped digging. Because there is more to ask about here. Toje's stories don't quite add up:
On Saturday, Toje told Aftenposten that he had met Steve Bannon only once, that he could not recall having spoken with him about Listhaug, and that he had had no other contact with Bannon. But in the Epstein files we can read that Steve Bannon claims he emails Toje "five times a day". Asked a follow-up question by Aftenposten, Toje then replied, "I may have been on a mailing list".
On Wolfgang Wee's podcast two days ago, Toje suddenly admits that he has talked with Bannon about meeting Sylvi Listhaug. In the new version of the story, it is now Bannon who proposes a meeting with Listhaug. That could well be true, since in 2018 Bannon was on a charm offensive across Europe. Here Toje also changes the story about the email correspondence: "It's possible we emailed some things about the piece I wrote in Dagens Næringsliv about China policy".
In a matter of days, then, Toje has gone from having met Bannon only once, not remembering any conversation about Listhaug, and having had no other contact, to having met Bannon, having talked with Bannon about meeting Listhaug, and then having emailed with Bannon.
It is curious that everyone exposed in one way or another in the latest Epstein leak has such terribly poor memory. If I had exchanged several emails with as notorious a figure as Bannon, I would remember it very clearly.
Why is all this interesting? Well, one reason to keep digging into this connection is Toje's role in public life. As recently as 22 December, Toje published a much-debated op-ed in Aftenposten that opens with the rhetorical question of whether the Trump administration doesn't have a point when it says Europe is on the verge of collapse, with reference to the USA's new "security strategy".
This "security strategy" reads as if it were penned by Bannon himself, and Toje stands ready to "understand" it and show understanding for it. Just as he has been eager to "understand" both Putin and Trump in various ways in recent years, with a great deal of understanding.
It belongs to the story that Toje himself described Bannon as a fascist as recently as last year. That would suggest he has formed a fairly clear picture of the man. But if the Nobel Committee's deputy chair has not told the truth about his contact with this MAGA ideologue, a contact evidently established with the Nobel Committee post as the ticket in, then the public deserves to know more about it.
And to be crystal clear: this is not about Epstein. He has only been a middleman in this context. It is about Toje's contact with Bannon.
(and yes: if you want to hear more about Bannon, check out this week's MAGAwatch!)