Want to join in? Respond to our weekly writing prompts, open to everyone.
from folgepaula
sure come over let's roll a joint and chill you can spread yourself over the couch pillows sure come over we can dream with eyes wide open and I can stare at you for hours while you breathe the air from my nose holding that trembling pause that sits between the lips to lips and we both feel that stupid organic electricity so much so that now you need to distract and start counting my freckles I hate when you do that until you try to connect the dots in the hope of revealing the image of some mythological being perhaps finding a map of a submerged treasure but all you will find and believe me when I say this all you will find are some random shapes that do not provide you any clues do not provide you any directions it only says yes I am here.
/oct 25
from folgepaula
become the orgasm you are.
/jan 26
from folgepaula
I sleep with myself, lying in fetal position I sleep with myself, spooning my soul I sleep with myself. I sleep with myself hugging myself, there's no night so long I do not sleep with myself. like a troubadour holding his lute I sleep with myself. under a starry night I sleep with myself, while others are born or die or have birthdays I sleep with myself. sometimes I fall asleep with myself still wearing my reading glasses. but even in the dark I know I am sleeping with myself. and whoever wants to sleep with me, will have to sleep by my side.
/dec 25
from EpicMind
Friends of wisdom! A new month has begun, and the newsletter is already in its fifth issue. Today I show why daydreaming is anything but useless.
Daydreaming was long considered unproductive mind-wandering. Recent neuroscientific studies show, however, that precisely this mental drifting helps the brain learn and forge creative connections. Behind it lie three distinct kinds of curiosity: epistemic curiosity, which strives for new knowledge; divergent curiosity, which playfully explores possibilities; and affective curiosity, which processes emotional stimuli. While laboratory mice reactivated earlier stimuli “in anticipation”, studies in humans showed that quiet waking phases strengthen memory and promote creative thinking.
These forms of curiosity emerge especially when the mind is not deliberately steered: in the shower, on a walk, or during monotonous tasks. In such moments the so-called “default mode network” becomes active, a brain network that recalls memories, plays through future scenarios, and simulates new solutions. Someone who seems to be drifting off is thus often in a cognitive state of heightened connectivity between planning, memory, and imagination: a perfect environment for developing ideas.
The effect shows in everyday life too: people who deliberately take short thinking breaks tolerate pain better, reduce stress hormones, and solve problems more creatively. Companies that introduce “mind-wander breaks” report more efficient meetings. Instead of fighting mind-wandering, it pays to allow it deliberately, as a quiet engine for learning, creativity, and emotional balance.
“The sole cause of man’s unhappiness is that he does not know how to stay quietly in his room.” – Blaise Pascal (1623–1662)
If a task takes less than two minutes, do it immediately. This keeps small to-dos from piling up into a huge mountain and saves you time on follow-up work later.
Have you ever wondered whether there is a connection between your preferred time of day and your mental performance? New research suggests that “owls”, people who are active at night and go to bed late, tend to perform better on cognitive tests than “larks”, who rise early and are most productive in the morning.
Thank you for taking the time to read this newsletter. I hope its contents inspired you and gave you valuable impulses for your (digital) life. Stay curious and question what you encounter!
EpicMind – Wisdom for Digital Life. “EpicMind” (short for “Epicurean Mindset”) is my blog and newsletter devoted to learning, productivity, self-management, and technology, all seasoned with a pinch of philosophy.
Disclaimer: Parts of this text were revised with DeepL Write (proofreading and editing). NotebookLM by Google was used for research in the works/sources mentioned and in my notes. The article image was created with ChatGPT and then post-processed.
Topic #Newsletter
Anonymous
How Long Does Xanax Last in the Brain and Central Nervous System?
Xanax (alprazolam) is a benzodiazepine commonly prescribed for anxiety and panic disorders. Many people ask how long it really lasts—not just how long they feel calmer, but how long it remains active in the brain and central nervous system (CNS). The answer depends on how the drug works, how the body processes it, and individual factors like dose and metabolism.
How Xanax Works in the Brain
Xanax enhances the activity of GABA (gamma-aminobutyric acid), a neurotransmitter that slows down brain signaling. By increasing GABA’s calming effect, Xanax reduces excessive neuronal firing, which eases anxiety, muscle tension, and panic symptoms.
Once absorbed, Xanax tablets quickly cross the blood–brain barrier, which is why their effects are felt relatively fast compared with some other anxiety medications.
Onset, Peak, and Duration of CNS Effects
Immediate-release (IR) Xanax follows a fairly predictable timeline:
Onset in the brain: ~30–60 minutes after ingestion
Peak CNS effects: ~1–2 hours
Noticeable calming effects: ~4–6 hours
Residual CNS activity: up to 12 hours or longer in some people
Extended-release (XR) Xanax releases the drug slowly, leading to:
A smoother onset
Lower peak intensity
CNS effects that may last 10–24 hours
Even after the noticeable calming effect fades, low levels of alprazolam can still influence brain activity.
How Long Xanax Stays in the Brain vs. the Body
Xanax has an average half-life of about 11 hours in healthy adults, which determines how long the drug continues to influence the brain and central nervous system.
After roughly 11 hours (one half-life), about 50% of the drug remains active in the body and brain.
After approximately 24 hours, a smaller but still measurable amount can remain present.
Complete elimination from the CNS may take 2–4 days, depending on individual factors such as metabolism, age, and liver function.
Even when Xanax no longer feels strongly active, subtle effects on attention, reaction time, and mood may persist while the drug is still in the brain.
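The timeline above follows from simple first-order (exponential) elimination. A minimal sketch, assuming a constant ~11-hour half-life and ignoring individual variation in metabolism:

```python
def fraction_remaining(hours_elapsed: float, half_life_hours: float = 11.0) -> float:
    """Fraction of an initial dose still present, assuming first-order
    (exponential) elimination with a fixed half-life. This is an
    illustrative model only, not dosing guidance."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Illustrative checkpoints for the ~11 h average half-life cited above:
print(fraction_remaining(11))   # one half-life: half the dose remains
print(fraction_remaining(24))   # roughly a fifth remains after a day
print(fraction_remaining(72))   # ~3 days: about 1% remains
```

Note how this matches the article's figures: ~50% after 11 hours, a smaller but measurable amount after 24 hours, and near-complete clearance within a few days.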
Factors That Affect How Long Xanax Lasts in the CNS
Several variables influence duration and intensity:
Dosage: Higher doses last longer and have stronger CNS effects
Frequency of use: Regular use can lead to accumulation in the brain
Age: Older adults often clear Xanax more slowly
Liver function: Xanax is metabolized in the liver; impairment prolongs effects
Body composition: Fat tissue can temporarily store benzodiazepines
Drug interactions: Alcohol, opioids, or other sedatives can intensify and extend CNS effects
Cognitive and Neurological Effects Over Time
While active in the CNS, Xanax may cause:
Drowsiness or sedation
Slower reaction time
Impaired memory or concentration
Reduced alertness
With repeated or long-term use, the brain can adapt to the presence of Xanax, which may lead to tolerance (needing more for the same effect) and dependence.
What Happens as Xanax Wears Off in the Brain?
As alprazolam levels decline:
GABA activity returns toward baseline
Anxiety symptoms may gradually reappear
Some people experience rebound anxiety or restlessness
Abrupt discontinuation after regular use can cause withdrawal symptoms because the brain has adjusted to the drug’s presence.
Key Takeaways
Xanax begins acting in the brain within 30–60 minutes
Peak CNS effects occur around 1–2 hours
Calming effects usually last 4–6 hours, but brain activity changes can last longer
Complete clearance from the brain may take several days
Duration varies based on dose, formulation, metabolism, and individual health factors
from tomson darko
Don't believe people who say that “negativity” makes you negative. Complaining is good. You need to ventilate the space up there in your head properly.
So start a complaint list.
It works very simply. Before you go to sleep, open your notebook, take the cap off a marker, set down a bullet, and begin.
Complain about all your insecurities, irritations, remarkable moments, stupid moments, funny moments, and other observations of the day. In a list.
Yes, I know, you're not supposed to be a whiner. You shouldn't be too negative. But it's precisely with our best friends that we complain freely, because they don't judge.
Right?
Paper is your best friend.
Paper doesn't judge.
So complain more often.
Get life's frustration out of your system.
Big complaints. Small complaints.
You'll see that in a few months, when you leaf back through it, it reads like poetry. It makes you chuckle. It puts things in perspective. It brings back memories. It shows the temporary discomfort of life.
Well then. Here's a complaint list of mine:
Curious to see your complaint list for today.
from tomson darko
The dumbest thing about having a mental disorder is that it never fully goes away after you've recovered from it.
It's the kind of scar that happily reopens every once in a while.
On some days I can still feel enormously tense for no reason, just because I have to leave the house.
Like an echo from a time when, for years, I was tense every single day about going out the door.
Although this tension doesn't feel exactly like it did back then, and I know I'm in control, it's still no fun.
What I do in moments like this is reach for my mental emergency kit.
This kit contains the following:
My playlist holds just one song. I press the repeat button and then start the song. And from the first notes, my breathing automatically slows down.
The song is called The Lotus Eaters, by the band Dead Can Dance.
(I could easily talk about their music for hours.)
Don't trust the technology. Also make sure you actually own the song, on a CD or as a digital file. Don't be caught off guard when you panic somewhere in the mountains without 5G and need your calming song. Make sure it's stored physically on your phone.
I know, for example, that some people put on my podcast to breathe their way through a panic attack. Very good. Be prepared. Make playlists of your favourite episodes and download them.
I have my favourite book in physical, digital, and audiobook form. There's a copy next to my computer, on the bookshelf in my study, and in the cabinet in the living room. So I always have immediate access to it, everywhere.
It's the book by Thomas Moore (1940), with a double o, called Dark Nights of the Soul.
When my perfectionism about my writing drives me crazy, I open Werner Herzog's (1942) book A Guide for the Perplexed to read his radical vision of art.
It's enormously inspiring.
Just as I put on the film Palm Springs (2020) when I really can't sleep and my thoughts are driving me mad. It's my comfort film. I can't get enough of it.
Now an assignment for you:
Your next trauma is always closer than you think.
Be prepared.
from tomson darko
In the final season of the crime series Breaking Bad (2008), Walter White finally sees who he was really doing it all for.
No.
He started a drug empire for himself.
Purely for himself.
He was just too selfish to see it.
For seasons on end he rationalised all his evil deeds, including murder, destruction, and flooding the streets with his home-made drugs, telling himself he was doing it for others.
Who could object to a little ambition to leave your family without worries when you're dying of cancer?
Binge-watch an American real-life game show for an afternoon and you'll be flooded with people who compete to make their families proud and hopefully come back with lots of money, so that people will like them.
Nobody says: “I'm standing here because I think my life will feel less worthless if people recognise me on the street from TV.”
I also always wonder who the people going to the gym in January are doing it for. I didn't see them in the months before. And after February I won't see them again.
So who are they doing it for?
That “new year, new me” motivational tailwind isn't enough for anyone to build a new routine. In the end it has to come entirely from within yourself, and that apparently isn't happening.
According to comedian and self-help guru Jimmy Carr (1972), this problem is easy to tackle.
Don't ask “what someone does in life”, but “why you do what you do”.
What?
No. Why.
If the what and the why aren't the same, an existential crisis is just a matter of time, because you no longer know what it's all about.
I'd like to add the question of WHO you're doing it for.
Who says your belly isn't allowed to exist? Society? That one person you want to impress? Your mother's voice from your childhood?
Or does the problem lie deeper?
Why do you keep bringing unhealthy food into the house? Why do you keep snacking?
What emptiness are you trying to fill?
Or is it just simpler than that?
You like it and you have no self-discipline?
I'm not judging.
But you think I am. That I'm judging.
Why do you think that?
Do I remind you of your father?
A self-crisis like that isn't a bad thing.
Many more will follow in your life. Melancholy will come looking for you regularly.
To get yourself back in line. To figure out what your next step in life will be. And we want to understand that step. That's why the “why” question is so important.
So that one day you can conclude what Walter White concluded in the final season:
‘I did it for me. I was good at it. And I was really… I was alive.’
Love,
tomson darko
from Nerd for Hire
195 pages · Edited by Susan Kaye Quinn (2024)
Read this if you like: Solarpunk, short speculative fiction, inventive worldbuilding
tl;dr summary: Collection of shorts that each imagine a hopeful future in a different way

One of the coolest things about science fiction is its capacity to inspire actual change in the real world by showing people what's possible. There are plenty of examples of technology imagined in the pages of a novel and later created in a physical form. And it can have a similar impact on how people think and view the world. This is one of the reasons I've been getting more into solarpunk. It's an antidote for that overwhelmed hopelessness that sets in when I think about climate change and how incredibly fucked the planet is. Stories might not be able to solve the whole problem, but I like reading about one potential way that we could sidestep disaster (or at least be okay, still, after the disaster happens). I also feel a smidge of hope knowing that someone has put a solution out into the world that, just maybe, someone else can run with to make some aspect of reality just a little bit better.
All of the stories in this collection nailed that spirit of solarpunk. There are elements of nearly every story that are depressing or downright terrifying on their own: a flooded future Rio de Janeiro; a derelict space station populated by giant centipedes; an invisible beast trapped behind a city fence. But these potential downers serve as jumping off points for very hopeful narratives, and that ability to find the points of light hidden inside something very dark is really what's so magical about solarpunk in general for me.
Getting a bit more specific with this collection, I found the worldbuilding to be on point across the stories. They were consistently able to ground the reader in a very vivid setting without bogging it down with tons of description. “The Doglady and the Rainstorm” especially is a perfect example of how using the right specific details can build an entire world in just a few paragraphs. The opening image is the main character, Joseane, boating down a street-turned-canal between vertical gardens, accompanied by the buzz of pollinator drones. By the second page, I'm fully anchored in a place and can place it roughly in the semi-near future, and I'm ready to move with Joseane through this world.
“Centipede Station” is another one where the worldbuilding is on-point and accomplished very quickly through choice descriptions (it also just might be my favorite, though that's a hard choice because all of the stories have their strong points). In the first two paragraphs, we understand both that Pebble and Moss are on a space station, and that we're dealing with something more organic than what you might expect from that setting. It's not easy to both establish and subvert a reader's expectations that quickly without giving them whiplash. It's accomplished here by keeping the language straightforward and the characters central. It doesn't start with a zoomed-out view of the station—it starts with two characters huddled around a campfire, then extends outward to the centipedes chittering in the dark, then finally hovers over them to show the context.
Something else I enjoyed about both of these stories is how they defied reader expectations. I've read stories set in flooded cities before, but never one that focuses on a dogwalker as a protagonist like “The Doglady and the Rainstorm” does. Telling the story through that unexpected perspective, and then adding in the surprise wrinkle that she has a paralyzing fear of rainstorms, makes the familiar trope feel completely new. With “Centipede Station” I was able to see the ending coming from fairly early on, but in that “I feel smart” way, not a way that kills the tension. Instead, it shifts the tension, away from a fear of the centipedes and the environment and more toward the other layers, like how the characters are dealing with their grief while navigating this alien landscape.
As a cryptid fanatic, “What Kind of Bat Is This?” was pretty much written for me to love it. It has a somewhat familiar setup: a scientist, Bree, accidentally falls into an unsearched cave and in the process stumbles across a long-hidden form of life. What I like is how this premise is balanced against Bree's relationship with Izzy, a former colleague. The emotional arc isn't just the combination of fear and excitement associated with discovering a new species, but also Bree's complicated relationship with Izzy, and the way it sours what should otherwise be an incredible discovery. I also like the way it comes around at the end, with the ultimate conclusion being one of sharing the joy and working together, which is a spirit I think is central to the idea of solarpunk in general.
As for which stories in the collection carry the most helpful, in-the-moment beneficial message, I think “A Merger in Corn Country” tops the list. The premise of this one, in brief, is that a sustainable commune buys the farm next to an old, set-in-his-ways corn farmer. And spoiler: it doesn't veer into the obvious conflict you'd expect from that set-up. Instead, it's generally a warm-fuzzy feel-good romp, and I enjoyed every minute of it. The voice really helped with this. It's charming and folksy without veering into caricature. There isn't a ton of conflict or tension in the story, but in this case, I feel like that's the point. In a way, the reader's expectations generate the main tension. I, at least, just kept waiting for the other shoe to drop and for there to be some big fight and falling-out between Dennis and the commune, and I was glad to be proven wrong and have the story go a different direction.
“The Park of the Beast” has the darkest, most ominous feel of all of them. At first, I would say there's a more dystopian vibe to the caged-off substation than the utopian world most of the stories inhabit. But there's a quirkiness to the voice that lightens it, pushing it more toward absurdism than horror. The beast is frightful, but described in a way that makes it feel like something to root for more than something to fear. Maybe I'm reading into this, but it felt to me like the narrator was eager for the day the beast would break free of its cage, like it would be the force that freed all of them from a larger fear. I liked that inversion of the expected, and that's what kept it feeling like it belonged in this collection.
I think it's fitting that “The Park of the Beast” comes after “Ancestors, Descendants” because it's another one that starts off feeling more dystopian. It opens on the protagonist alone in his town after the rest of its inhabitants abandoned it; when he does find more people, they're not exactly friendly. This turns in the second half, though, and ultimately the arc is one of finding strength in community. From a craft standpoint, I was impressed by how “Ancestors, Descendants” covered such a broad span of time and still gave the reader characters to care about. It's written as a kind of diptych of flash pieces, each with its own arc. That's an unusual format but one that works for this story because it keeps the characters completely separate in the reader's mind, and lets them fully sink into each world and get to stay with each character for a whole stretch.
“Coriander” feels like an excellent anchor for the collection for multiple reasons. For one, I like the symmetry of starting and ending with flooded worlds. In the case of “Coriander”, that world is the city where the narrator's great-grandmother (her ah-zho) grew up. The narrator is there to connect with her lineage, and that mission also puts the story in conversation with “Ancestors, Descendants.”
I often focus on the individual stories in a collection, but the overall reading experience is important, too, and Bright Green Futures does an excellent job in that regard. The variety of settings and voices keeps each story feeling fresh, but they have enough of a thematic thread running through them that they feel like they all belong together. I'd definitely give this collection a strong recommend for anyone who wants to dig deeper into the solarpunk genre.
See similar posts:
#BookReviews #ShortStory #Solarpunk #SciFi #WorldBuilding
from 💚
Our Father Who art in heaven Hallowed be Thy name Thy Kingdom come Thy will be done on Earth as it is in heaven Give us this day our daily Bread And forgive us our trespasses As we forgive those who trespass against us And lead us not into temptation But deliver us from evil
Amen
Jesus is Lord! Come Lord Jesus!
Come Lord Jesus! Christ is Lord!
from 💚
The Victory of Notre Dame
In places West of here The champions of change forego And Justice have- in the forecast More than supple rain And the Earth to be our Water Justice Rome Things of portence view Assaults in trial and view The subtle preoccupation In time a bliss report To barren optics and unclear The victim path But we were weary in support I’m seeing other paths- this victory due In time will change the atmosphere And messy wars to challenge- the natural view It’s favoured here Loving simple mountains Bare on top And robots receiving oil Never in need of oil affects the better- and their heir We suit as human And playing for Paris go This is us- til Tuesday And victory again For subtle paths we wonder Can calm assaulted be But through blue brine Epistles of the few And Justice Marry The solemn maybe Clearing chance For our message- End of oil.
from 💚
Portence Few and Fame
The nuts to be elastic By the shouts of providence sound Accostive by the exit For poem in right The day of this is landing To single Heaven be No more chairs for the good This standing genre In your time a life unmet Misery hall and syncopath Burdens to the relate A travel beam to Scotland And sure by day That nothing goes alone Beams of twer And quanticy unveered Simple analog with sound A German woman For all this salad hers Beamed from Upper Clement’s I miss her And strike the laser red Hues of green escape And socio here for day In England’s news A rose to North Korea But not Embargo seeks the cheese And daybreak machine Forgotten rose A 404 not found.
from 💚
The Copland Esteem
And ready count So surely but found in Water The way we spoke- Insular and twenty A horticulture view There were shouts antagonistic And lighting events Perfect for our madness And twiddled watch Given unto her- Our single wonder omen Pittances review This time a fortune year Filing by friend and mercy wonder The Economy at bay Simple is our home And distance new The times repeat for reason Architectures real And accelerants to know What does happen is A soul between for just And higher Earth will be our Rome And Easter boom Places counted fifty Insolvent doom But borne of people And day within The certain one Would capital acclaim The victory won And new to the famous A fact in charter case Facts above in view To rectify the past And present call Our Wonder.
from 💚
⭐️✨
The star between the Sun Affording life ever after Insoluble fiber within And sympathy on the grass And hewing poems Distributing other worldly A day’s quota Blinking at your fear Erasing it Then folding new To your preferences of heat, and light And mercy on all men Blinking at thirty watts And visible down the street Cousin mayhem And five points shelter Going it alone By five and her between This substance Q And matte effects of dew For foreign friends And very last frostbite Amen to you- clemency be Above of here Forever.
from SmarterArticles
The numbers paint a picture of paradox. According to the 2025 Stack Overflow Developer Survey, 84% of developers now use or plan to use AI tools in their development process. Yet in that same survey, more developers actively distrust the accuracy of AI tools (46%) than trust them (33%). Only a fraction, just 3%, report highly trusting AI output. The most experienced developers show even deeper scepticism, with senior engineers recording the lowest “highly trust” rate at 2.6% and the highest “highly distrust” rate at 20%.
This trust deficit exists for good reason. GitClear's analysis of 211 million changed lines of code reveals that code churn, the percentage of lines reverted or updated less than two weeks after being authored, has climbed from 5.5% in 2020 to 7.9% in 2024. The percentage of code associated with refactoring has plummeted from 25% in 2021 to less than 10% in 2024, whilst duplicated code blocks have increased eightfold. For the first time in GitClear's measurement history, developers are pasting code more often than they are refactoring or reusing it.
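GitClear's churn metric counts lines that are reverted or updated within two weeks of being authored. As a rough illustration of how such a metric is computed (this is a simplified sketch, not GitClear's actual pipeline, and the per-line record format here is hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical record format: for each authored line, when it was written
# and when (if ever) it was next modified or reverted.
line_history = [
    {"authored": datetime(2024, 3, 1), "next_change": datetime(2024, 3, 8)},   # churned
    {"authored": datetime(2024, 3, 1), "next_change": None},                   # survived
    {"authored": datetime(2024, 3, 2), "next_change": datetime(2024, 4, 1)},   # changed, but late
    {"authored": datetime(2024, 3, 5), "next_change": datetime(2024, 3, 12)},  # churned
]

def churn_rate(lines, window=timedelta(days=14)):
    """Share of authored lines changed again within the window,
    mirroring the two-week churn definition cited above."""
    churned = sum(
        1 for ln in lines
        if ln["next_change"] is not None
        and ln["next_change"] - ln["authored"] <= window
    )
    return churned / len(lines)

print(churn_rate(line_history))  # 2 of 4 lines changed within 14 days -> 0.5
```

A rising value of this ratio, from 5.5% to 7.9% across a whole codebase, is what GitClear reads as early evidence that more code is being written, and then promptly rewritten.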
The 2025 DORA Report, titled “State of AI-assisted Software Development” and surveying nearly 5,000 technology professionals globally, confirms what many engineering leaders suspected: higher AI adoption correlates with higher levels of software delivery instability, even whilst improving outcomes at nearly every other level. As the DORA researchers concluded, AI accelerates software development, but that acceleration can expose weaknesses downstream. Without robust control systems, an increase in change volume leads to instability.
This is not a story about whether AI coding assistants work. They demonstrably do. GitHub reports developers code 55% faster with Copilot. Microsoft's AI-powered code review assistant now supports over 90% of pull requests across the company, impacting more than 600,000 PRs monthly. The question is not capability but control: what structural mechanisms prevent this accelerated code production from accumulating technical debt that will cripple systems for years to come?
For decades, software development bottlenecks resided in implementation. Teams had more ideas than they could code, more designs than they could build, more requirements than they could fulfil. AI coding assistants have relocated that constraint. The bottleneck is no longer writing code; it is verifying it.
Qodo's research on the state of AI code quality in 2025 captures this shift precisely. AI coding agents increased output by 25 to 35%, but most review tools do not address the widening quality gap. The consequences include larger pull requests touching multiple architectural surfaces, growing merge queues, regressions appearing across shared libraries and services, and senior engineers spending more time validating AI-authored logic than shaping system design.
Engineering leaders report that review capacity, not developer output, has become the limiting factor in delivery. When 80% of PRs proceed without any human comment or review because an AI review tool has been enabled, the question becomes what quality signals are being missed. The 2025 Stack Overflow survey found that the biggest single frustration, cited by 66% of developers, is dealing with AI solutions that are almost right but not quite. The second biggest frustration, reported by 45%, is that debugging AI-generated code takes more time than expected.
The traditional approach to code quality, manual review by experienced developers, cannot scale to meet AI-accelerated output. Code review times have ballooned by approximately 91% in teams with high AI usage. The human approval loop has become the chokepoint. Yet eliminating human review entirely creates precisely the conditions for technical debt accumulation that organisations seek to avoid.
GitHub's latest Octoverse data shows monthly code pushes climbed past 82 million, merged PRs reached 43 million, and 41% of new code originated from AI-assisted generation. AI adoption reached 84% of all developers. The delivery side of the pipeline is expanding, yet the code review stage remains tied to the same human capacity limits it had five years ago.
The term “vibe coding” was introduced by Andrej Karpathy in February 2025 and named the Collins English Dictionary Word of the Year for 2025. It describes an approach where developers accept AI-generated code without fully comprehending its functionality, leading to undetected bugs, errors, or security vulnerabilities. Whilst this approach may be suitable for prototyping or throwaway weekend projects as Karpathy originally envisioned, it poses significant risks in professional settings where a deep understanding of the code is crucial for debugging, maintenance, and security.
The numbers tell a sobering story. A quarter of the Y Combinator Winter 2025 startup batch have 95% of their codebases generated by AI. According to YC managing partner Jared Friedman, “It's not like we funded a bunch of non-technical founders. Every one of these people is highly technical, completely capable of building their own products from scratch. A year ago, they would have built their product from scratch but now 95% of it is built by an AI.”
Yet by the fourth quarter of 2025, the industry began experiencing what experts call the “Vibe Coding Hangover.” In September 2025, Fast Company reported that senior software engineers were citing “development hell” when working with AI-generated vibe-code. A study by METR found that applications built purely through vibe coding were 40% more likely to contain critical security vulnerabilities, such as unencrypted databases.
The risks manifested dramatically in real incidents. In May 2025, Lovable, a Swedish vibe coding application, was reported to have security vulnerabilities in the code it generated, with 170 out of 1,645 Lovable-created web applications having issues that would allow personal information to be accessed by anyone.
Simon Willison, a respected voice in the developer community, stated plainly: “Vibe coding your way to a production codebase is clearly risky. Most of the work we do as software engineers involves evolving existing systems, where the quality and understandability of the underlying code is crucial.”
YC general partner Diana Hu offered a nuanced perspective: “You have to have the taste and enough training to know that an LLM is spitting bad stuff or good stuff. In order to do good vibe coding, you still need to have taste and knowledge to judge good versus bad.” This requirement for human judgement becomes the foundation for understanding why structural controls matter.
Understanding how AI-generated code creates technical debt requires examining the specific mechanisms at work. Ana Bildea, writing on AI technical debt, identifies three primary vectors: model versioning chaos caused by the rapid evolution of code assistant products, code generation bloat, and organisational fragmentation as independent groups adopt different models and approaches.
These vectors interact multiplicatively. A team using GPT-4 generates code in one style whilst another team using Claude generates code in a different style. Both integrate with legacy systems that have their own conventions. The AI tools optimise for immediate task completion without awareness of broader architectural context. Each generated component may function correctly yet contradict established patterns elsewhere in the codebase.
The Ox Security report titled “Army of Juniors: The AI Code Security Crisis” outlined ten architecture and security anti-patterns commonly found in AI-generated code. The research found that AI-generated code introduced 322% more privilege escalation paths and 153% more design flaws compared to human-written code. AI-generated code is highly functional but systematically lacking in architectural judgement.
Forrester predicts that by 2025, more than 50% of technology decision-makers will face moderate to severe technical debt, with that number expected to hit 75% by 2026 due to AI's rapid growth. CodeScene's research demonstrates that unhealthy code exhibits 15 times more defects, requires twice the development time, and creates 10 times more delivery uncertainty compared to healthy code.
The financial implications compound over time. CISQ estimates that nearly 40% of IT budgets are spent maintaining technical debt. A Stripe report found developers spend, on average, 42% of their work week dealing with technical debt and bad code. If AI assistance accelerates code production without corresponding attention to code quality, it simply accelerates technical debt accumulation.
The problem extends beyond individual codebases. As more AI-generated code enters training datasets, a troubling feedback loop emerges. Georgetown University's Center for Security and Emerging Technology identified this as one of three broad risk categories: downstream cybersecurity impacts including feedback loops where insecure AI-generated code gets incorporated into training data for future models. The technical debt of today becomes embedded in the AI assistants of tomorrow.
The solution emerging across successful engineering organisations involves mandatory architectural review before AI implements major changes, combined with automated quality gates that treat AI-generated code as untrusted by default.
CodeScene's approach implements guardrails across three dimensions: code quality, code familiarity, and test coverage. Every line of code, whether AI-generated or handwritten, undergoes automated review against defined quality standards. Their CodeHealth Monitor detects over 25 code smells including complex methods and god functions. When an AI or a human introduces issues, the monitor flags them instantly before the code reaches the main branch.
The quality dimension requires teams to define their code quality standards explicitly and automate enforcement via pull request reviews. A 2023 study found that popular AI assistants generate correct code in only 31.1% to 65.2% of cases. Similarly, CodeScene's Refactoring vs. Refuctoring study found that AI breaks code in two out of three refactoring attempts. These statistics make quality gates not optional but essential.
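To make the idea of an automated quality gate concrete, here is a minimal sketch of PR-level checks that treat every function the same way regardless of who, or what, wrote it. The thresholds, metric names, and smell definitions are illustrative assumptions for this example, not CodeScene's actual metrics or API.

```python
# Minimal sketch of a pull-request quality gate. Thresholds and smell
# definitions are illustrative assumptions, not a real analyser's rules.

def check_function(name: str, lines: int, branches: int,
                   max_lines: int = 50, max_branches: int = 8) -> list[str]:
    """Flag simple 'code smell' indicators for one function."""
    issues = []
    if lines > max_lines:
        issues.append(f"{name}: too long ({lines} > {max_lines} lines)")
    if branches > max_branches:
        issues.append(f"{name}: too complex ({branches} > {max_branches} branches)")
    return issues

def quality_gate(functions: dict[str, tuple[int, int]]) -> tuple[bool, list[str]]:
    """Run every function through the same checks, AI-generated or not."""
    all_issues: list[str] = []
    for name, (lines, branches) in functions.items():
        all_issues.extend(check_function(name, lines, branches))
    return (len(all_issues) == 0, all_issues)
```

The point of the sketch is the enforcement model: the gate runs on every pull request and blocks the merge when any check fails, rather than relying on reviewers to notice bloat by eye.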
The familiarity dimension addresses knowledge preservation. When AI generates code that bypasses human understanding, organisations lose institutional knowledge about how their systems work. Research from the 2024 DORA report reveals that 39% of respondents reported little to no trust in AI-generated code. This distrust correlates with experience level, and guardrails should institutionalise this scepticism by requiring review from developers familiar with affected areas before AI-generated changes merge.
The third dimension emphasises test coverage. Comprehensive test suites serve as executable documentation of expected behaviour. When AI-generated code breaks tests, the violation becomes immediately visible. When tests pass, developers gain confidence that at least basic correctness has been verified.
The most successful teams have developed heuristics for balancing AI automation with human oversight. The emerging consensus suggests using AI for 40 to 60% of review tasks, including syntax checking, pattern matching, and basic security analysis, whilst reserving human review for critical paths. The guidance is clear: never automate 100% of code review. Human oversight remains essential for business logic, architecture, and compliance.
This balance requires explicit frameworks. Organisations should establish clear workflows that combine automated checks with manual oversight, promoting accountability and reducing the risk of introducing subtle errors. Effective code review blends the speed and consistency of AI with the judgement and creativity of human engineers.
Certain domains demand particular caution. Teams should never skip human review for AI-generated code touching authentication, payments, or security. Most advanced AI code reviews catch 90% of bugs, making them highly accurate for common issues. However, they work best with human oversight for complex business logic and architectural decisions.
The statistics on developer experience with AI underscore why this balance matters. According to Qodo's research, 76.4% of developers fall into what they term the “red zone,” experiencing frequent hallucinations with low confidence in shipping code. Approximately 25% estimate that one in five AI suggestions contain factual errors. These numbers make the case for structured human oversight inescapable.
A three-layer architecture has emerged as a recommended starting point: real-time IDE feedback for immediate guidance, PR-level analysis for comprehensive review, and periodic architectural reviews for systemic evaluation. Organisations should configure these layers deliberately, maintain human oversight for critical paths, and tune for false positives over time.
The challenge of integrating AI code review into CI/CD pipelines whilst maintaining both velocity and quality has driven significant innovation in 2025. The goal is not to slow development but to prevent instability from propagating into production systems.
Microsoft's approach demonstrates what enterprise-scale integration can achieve. Their AI-powered code review assistant started as an internal experiment and now supports over 90% of PRs across the company. Per early experiments and data science studies, 5,000 repositories onboarded to their AI code reviewer observed 10 to 20% median PR completion time improvements. The key insight is that AI review does not replace human review but augments it, catching mechanical issues so human reviewers can focus on design intent and system behaviour.
The risk-tiered approach borrowed from regulatory frameworks proves particularly effective. Minimal-risk applications like AI-generated marketing copy or meeting summaries receive light-touch reviews: automated quality checks, spot-checking by subject matter experts, and user feedback loops. High-risk applications involving financial decisions, security-critical systems, or architectural changes receive mandatory human review, audit trails, bias testing, and regular validation.
Guardian Life Insurance exemplifies this approach. Operating in a highly regulated environment, their Data and AI team codified potential risk, legal, and compliance barriers and their mitigations. They created two tracks for architectural review: a formal architecture review board for high-risk systems and a fast-track review board for lower-risk applications following established patterns.
The most effective teams have shifted routine review load off senior engineers by automatically approving small, low-risk, well-scoped changes whilst routing schema updates, cross-service changes, authentication logic, and contract modifications to human reviewers. This selective automation maintains velocity for routine changes whilst ensuring human judgement for consequential decisions.
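This routing logic can be sketched in a few lines. The path patterns and the size threshold below are assumptions invented for illustration; a real system would derive them from the organisation's own risk taxonomy.

```python
# Illustrative sketch of selective review routing: small, low-risk,
# well-scoped changes auto-approve, while consequential paths always
# go to a human. Markers and the size limit are assumed values.

HIGH_RISK_MARKERS = ("auth", "payments", "schema", "migrations", "contracts")

def route_pr(changed_files: list[str], lines_changed: int,
             small_change_limit: int = 50) -> str:
    """Return 'human-review' or 'auto-approve' for a pull request."""
    for path in changed_files:
        if any(marker in path for marker in HIGH_RISK_MARKERS):
            return "human-review"      # consequential domains: always a human
    if lines_changed > small_change_limit:
        return "human-review"          # large diffs need human judgement
    return "auto-approve"              # small, well-scoped, low risk
```

Notice that risk trumps size: a five-line change to authentication logic is still routed to a human, which is exactly the asymmetry the teams above describe.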
A well-governed AI code review system preserves human ownership of the merge button whilst raising the baseline quality of every PR, reduces back-and-forth, and ensures reviewers only engage with work that genuinely requires their experience.
Perhaps the most significant methodological shift of 2025 has been the rise of specification-driven development (SDD), a paradigm that uses well-crafted software requirement specifications as prompts for AI coding agents to generate executable code. Thoughtworks describes this as one of the key new AI-assisted engineering practices of the year.
The approach explicitly separates requirements analysis from implementation, formalising requirements into structured documents before any code generation begins. GitHub released Spec Kit in September 2025, an open-source toolkit providing templates and workflows for this approach. The framework structures development through four distinct phases: Specify, Plan, Tasks, and Implement.
In the Specify phase, developers capture user journeys and desired outcomes. This is not about technical stacks or application design. It focuses on experiences and what success looks like: who will use the system, what problem it solves, how users will interact with it, and what outcomes matter. In the Plan phase, developers encode their desired stack, architecture, and constraints.
The Tasks phase breaks specifications into focused, reviewable work units. Each task solves a specific piece of the puzzle and enables isolated testing and validation. Only in the Implement phase do AI agents begin generating code, now guided by clear specifications and plans rather than vague prompts.
At its core, spec coding aims for orchestral precision: 95% or higher accuracy in implementing specs on the first attempt, with code that is error-free and unit tested. Humans craft the “what”, focusing on user stories or natural-language descriptions of desired outcomes, whilst setting “how” guardrails for everything from security to deployment.
The problems with AI coding that SDD addresses stem from the fact that vibe coding is too fast, spontaneous, and haphazard. Because it is so easy for AI to generate demonstrable prototypes, many people overlook the importance of good engineering practices, resulting in too much unmaintainable, defective, one-off code. It is important to bring serious requirements analysis, prudent software design, necessary architectural constraints, and human-in-the-loop governance into the picture.
As one commentator observed: if everyone is writing three times more code and your baseline is one security vulnerability per thousand lines, you have tripled your security vulnerabilities. The only way out is guardrails that work at the speed of AI-assisted development.
The inherent tension in AI code generation lies in applying probabilistic systems to tasks that demand determinism. A banking system cannot approximate correct behaviour; it must be precisely correct. Yet large language models produce outputs that vary with temperature settings, prompt phrasing, and training data distributions.
Deterministic guardrails address this tension by wrapping probabilistic generation in verification layers that enforce correctness regardless of how the code was produced. These include tests, smarter linters, and observability infrastructure that provides reliable behavioural data for verification. AI changes how developers write and review code, increasing the need for strict verification.
Guardrails improve overall system accuracy and reliability through deterministic formatting. By enforcing a schema, organisations ensure the LLM output is always in a format like JSON or XML that code can parse. Self-healing pipelines in many guardrail libraries can automatically re-ask the LLM to fix its own mistakes if the first response fails validation.
The critical insight is that specifications without verification are just documentation. The power of spec-driven development comes from continuous validation through test-driven guardrails. Each acceptance criterion in a specification becomes a test case. As the agent implements, tests validate correctness against spec requirements.
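As a small worked example of an acceptance criterion becoming a test: suppose a spec states that discounts must never push a price below zero. `apply_discount` is a hypothetical function an agent would implement against that spec, not code from any tool mentioned above.

```python
# Sketch: acceptance criteria from a hypothetical spec, expressed
# directly as tests that validate any implementation an agent produces.

def apply_discount(price: float, percent: float) -> float:
    """Candidate implementation the tests verify against the spec."""
    return max(0.0, price * (1 - percent / 100))

def test_discount_never_negative():
    # Criterion: the final price is never below zero, even at 150% off.
    assert apply_discount(10.0, 150) == 0.0

def test_discount_applies_percentage():
    # Criterion: a 20% discount on 50 yields 40.
    assert apply_discount(50.0, 20) == 40.0
```

If the agent later regenerates `apply_discount`, the same tests re-run automatically, which is the continuous validation the paragraph above describes.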
Tessl, which launched spec-driven development tools in 2025, captures intent in structured specifications so agents can build with clarity and guardrails. Their Spec Registry, available in open beta, and Tessl Framework represent a growing ecosystem of tools designed to bring determinism to probabilistic code generation.
OpenSpec, another entrant in this space, adds a lightweight specification workflow that locks intent before implementation, giving developers deterministic, reviewable outputs. The common thread across these tools is the recognition that AI generation must be bounded by human-specified constraints to produce reliable results.
Whilst LLMs are not deterministic in the way traditional code generation, compilation, or automated test execution are, clear specifications can still reduce model hallucinations and produce more robust code.
The challenge facing organisations is designing review workflows robust enough to ensure quality and manage risks, yet flexible enough to enable innovation and capture productivity gains. The worst outcome is replacing the implementation bottleneck with a review bottleneck that negates all productivity advantages.
The DORA Report 2025 identifies seven AI capabilities that distinguish high-performing teams: a clear and communicated AI stance, providing organisational clarity on expectations and permitted tools; healthy data ecosystems, with quality, accessible internal data; AI-accessible internal data, enabling context integration beyond generic assistance; strong version control practices, with mature development workflows and rollback capabilities; working in small batches, maintaining incremental change discipline; a user-centric focus, preserving product strategy clarity despite accelerated velocity; and quality internal platforms, providing the technical foundations that enable scale.
The platform engineering connection proves particularly important. The DORA Report found 90% of organisations now have platform engineering capabilities, with a direct correlation between platform quality and AI's amplification of organisational performance. As the report states, a high-quality platform serves as the distribution and governance layer required to scale the benefits of AI from individual productivity gains to systemic organisational improvements.
Qodo's platform approach demonstrates how this works in practice. Their context engine can index dozens or even thousands of repositories, mapping dependencies and shared modules so review agents can see cross-repo impacts. Standards can be defined once, covering security rules, architecture patterns, and coding conventions, then applied consistently across teams, services, and repositories.
The technical mechanisms for preventing AI-generated technical debt matter little if organisations cannot enforce them consistently across distributed or rapidly growing engineering teams. This requires building AI governance as core infrastructure rather than treating it as policy documentation.
As organisations prepare for 2026, one operational fact is clear: AI increases the rate of code production, but PR review capacity controls the rate of safe code delivery. Companies that invest upfront in defining clear controls and guardrails will unlock transformative productivity gains. Those that do not will find AI amplifies whatever dysfunction already exists.
The regulatory landscape adds urgency to this capability building. The EU AI Act's high-risk provisions take effect on 2 August 2026, with penalties reaching 35 million euros or 7% of global annual turnover for prohibited AI practices. Enterprises must demonstrate full data lineage tracking, human-in-the-loop checkpoints for workflows impacting safety or financial outcomes, and risk classification tags labelling each model with its risk level, usage context, and compliance status.
Key obligations taking effect in August 2026 include full requirements for high-risk AI systems spanning risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.
KPMG reports that half of executives plan to allocate 10 to 50 million dollars in the coming year to secure agentic architectures, improve data lineage, and harden model governance. Cybersecurity is identified as the single greatest barrier to achieving AI strategy goals by 80% of leaders surveyed, up from 68% in Q1.
McKinsey's research on the agentic organisation emphasises that governance cannot remain a periodic, paper-heavy exercise. As agents operate continuously, governance must become real time, data driven, and embedded, with humans holding final accountability. Just as DevSecOps embedded automated checks into digital delivery, agentic organisations will embed control agents into workflows.
The shift from implementation bottleneck to review bottleneck requires organisations to build new capabilities systematically. Review capacity is not merely a process to be optimised but an organisational muscle to be developed.
Engineering leaders report several patterns that distinguish successful capability building. First, they treat AI-generated code as untrusted by default. Every piece of generated code passes through the same quality gates as human-written code: automated testing, security scanning, code review, and architectural assessment.
Second, they institutionalise the scepticism that senior engineers bring naturally. Research shows that when teams report considerable productivity gains, 70% also report better code quality, a 3.5x increase over stagnant teams. The correlation is not coincidental. Teams seeing meaningful results treat context as an engineering surface, determining what should be visible to the agent, when, and in what form.
Third, they invest in platform capabilities that distribute review capacity beyond senior engineers. The traditional model where senior engineers review all significant changes cannot scale. Instead, organisations encode institutional knowledge into automated review processes and apply it consistently, even as code volume and change velocity increase.
Fourth, they maintain humans as the ultimate arbiters of quality. The 2025 Stack Overflow survey found that in a future with advanced AI, the number one reason developers would still ask a person for help is “when I don't trust AI's answers”, cited by 75% of respondents. This positions human developers as the final checkpoint, not because humans are always right but because humans bear accountability for outcomes.
Fifth, they establish clear governance frameworks. Reviewer trust erodes if the AI comments on style whilst humans chase regressions. Organisations should define a RACI matrix (Responsible, Accountable, Consulted, Informed) for each feedback category.
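One way to encode such a matrix is as plain configuration that review tooling consults. The categories and role names below are hypothetical examples, not drawn from a specific organisation's framework.

```python
# Hypothetical RACI assignment per review feedback category, so the AI
# handles style while humans own regressions and architecture.

RACI = {
    "style":        {"responsible": "ai-reviewer",    "accountable": "author"},
    "security":     {"responsible": "security-team",  "accountable": "security-lead"},
    "regressions":  {"responsible": "human-reviewer", "accountable": "tech-lead"},
    "architecture": {"responsible": "human-reviewer", "accountable": "architect"},
}

def owner_for(category: str) -> str:
    """Who acts first on a piece of review feedback; default to a human."""
    return RACI.get(category, {"responsible": "human-reviewer"})["responsible"]
```

Making the assignment explicit is what prevents the trust erosion described above: nobody is left guessing whether a style nit or a regression is theirs to chase.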
The evidence increasingly suggests that disciplined adoption of AI coding assistants produces dramatically different outcomes than undisciplined adoption. Teams with strong code review processes experience quality improvements when using AI tools, whilst those without see quality decline. This amplification effect makes thoughtful implementation essential.
The DORA Report 2025 captures this dynamic with precision: AI does not fix a team; it amplifies what is already there. Strong teams use AI to become even better and more efficient. Struggling teams find that AI only highlights and intensifies their existing problems.
The implications for organisational strategy are significant. Before expanding AI adoption, organisations should assess their existing control systems. Do automated testing practices provide adequate coverage? Do version control workflows support rapid rollback? Do platform capabilities enable consistent enforcement across teams? Do review processes scale with output volume?
If the answer to these questions is no, then accelerating AI adoption simply accelerates the manifestation of existing weaknesses. The technical debt will accumulate faster. The instability will manifest sooner. The costs will compound more rapidly.
Conversely, organisations that invest in these foundational capabilities before or alongside AI adoption position themselves to capture genuine productivity gains without corresponding quality degradation. The 10 to 20% improvement in PR completion time that Microsoft observed was not achieved by eliminating review but by making review more efficient whilst maintaining standards.
Looking forward, the challenges of managing AI-generated code will intensify rather than diminish. Gartner predicts that 40% of enterprise applications will embed AI agents by the end of 2026, up from less than 5% in 2025. The transition from AI as coding assistant to AI as autonomous agent raises the stakes for governance and control mechanisms.
Investment and engineering capacity are focused on production-grade, orchestrated agents: systems that can be governed, monitored, secured, and integrated at scale. Leaders are converging on platform standards that consistently manage identity and permissions, data access, tool catalogues, policy enforcement, and observability.
The multi-agent coordination challenge introduces new governance requirements. When multiple AI agents collaborate on complex tasks, the potential for emergent behaviour that violates architectural constraints increases. The mechanisms that work for single AI assistants may prove insufficient for agent swarms.
By 2030, CIOs forecast that 75% of IT work will involve human-AI collaboration, with 25% being fully autonomous AI tasks. Software engineering will be less about writing code and more about orchestrating intelligent systems. Engineers who adapt to these changes, embracing AI collaboration whilst maintaining quality discipline, will thrive.
The question is not whether AI will transform software development. It already has. GitHub's latest Octoverse numbers show monthly code pushes climbed past 82 million, merged PRs reached 43 million, and 41% of new code originated from AI-assisted generation. AI adoption reached 84% of all developers.
The question is whether this transformation will create systems that remain maintainable, secure, and adaptable, or whether it will produce a generation of codebases that no one fully understands and no one can safely modify. The difference lies entirely in the structural controls that organisations implement.
Specification-driven development transforms AI from a source of unpredictable output variance into a reliable implementation tool by establishing constraints before generation begins. Deterministic guardrails ensure that probabilistic outputs meet deterministic requirements before reaching production. Tiered review processes scale human oversight by matching review intensity to risk level. Platform engineering distributes enforcement capability across the organisation rather than concentrating it in senior engineers.
These mechanisms are not optional refinements. They are the difference between AI that amplifies capability and AI that amplifies chaos. The organisations that treat them as infrastructure rather than aspiration will lead the next generation of software development. Those that do not will spend their futures paying down the technical debt that undisciplined AI adoption creates.
The tools are powerful. The velocity is real. The productivity gains are achievable. But only for organisations willing to pay the discipline premium that sustainable AI adoption demands.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Brand New Shield
The Fediverse Of Football? Could such a thing exist?
With all the news going on currently and how it is tying back to “The Shield” in the worst ways imaginable, maybe instead of just one league we have regional leagues all on equal footing, where the league champs advance to a playoff to see who the champion of them all is. I mean, why not, right? The Bowl of Bowls, so to speak. It'd be fun and exciting, and it would add a flavor to professional sports that we don't currently have on this side of the proverbial pond.
Also, by decentralizing the power structure to that degree, you don't have one body ruling over everything, because if one governing body has too much power and the wrong people rise to the top of said governing body, bad things happen. It's literally the history of sports organizations around the world. NFL, FIFA, PGA, IOC: you name the sport and the organization, there have been shenanigans with unruly people at the top doing bad things. Single ownership models of several regional leagues that are linked together through some sort of football confederation, with that confederation holding a playoff to determine the champion of champions? By making said leagues regional you'd lower travel costs. You could theoretically buy buses and make most travel a bus trip instead of buying plane tickets over and over again for cross-country flights and such. There are definite advantages to this idea; some details would just need to be fleshed out to make it a fully functional business proposal of sorts.
How many leagues, and how many teams per league (all leagues in such an arrangement would need the same number of teams), is something that would have to be determined. The opportunity is definitely there with the appropriate resources (which I currently do not have, full disclosure). Getting funding, distribution, and what have you would be discussions for another time. The A7FL is the league that operates closest to the vision I am writing about here, but they have “league owners” and a more vertical power structure than I am envisioning. The leagues should be the true power centers here, not the overarching body.
What if the “Brand New Shield” isn't just one organization but multiple organizations who operate on the same platform and use a more decentralized, confederated model than most professional sports? It's definitely an interesting idea to think about and something that is definitely going to be written about more on this blog.