from hex_m_hell

So I have a sourdough starter now. At 8 days old it's doing all the things it's supposed to do. It smells good. It floats. It doubles when I feed it.

You have to feed a sourdough starter. It's a living community that you care for, like some kind of strange collective pet that you also eat. I've grown plenty of plants, and mushrooms. I had a water kefir culture that I used partially for the yeast to make bread. I also had a regular milk kefir culture that turned milk into a soft cheese about once a week. The whey is perfect to kick-start pickles, and I would pickle anything I could. I once pickled chokecherries with salt and anise to get something that was probably about as passable a substitute for li hing mui as one could make from all-local Washington state ingredients. We had chickens that gave us eggs. I also pickled those. They were amazing.

I miss all of that, so now I'm starting again small. We have our little plants in the apartment, clippings of anything I can grow in water. I had a water kefir, but it was never quite the same. The yeast was strong but the bacteria were weak, so the flavors never developed the way I had hoped. I gave up because I'm not interested in sugar beer.

But I've been slowly tending these jars of flour and water over the last week or so, watching them grow and change, until I have something I can use. All of this, mushrooms, plants, pickles, chickens, requires attention and flexibility. We didn't use any (human-made) chemicals on our plants; our whole approach was informed by permaculture. All of this forced me to slow down and observe, to just be present with the natural cycles of things.

Last night was the first time I was able to use my starter, or at least tried to, to make bread (instead of relying on commercial yeast). Each time you feed the culture, you discard some. At a certain point that discard goes into your cooking. I hadn't really thought about what I was going to do with the culture. See, I hadn't really been using commercial yeast when I was in the US, but I've been forced to use it since I got here. I just wanted to change that. So here I was with a bubbling jar of starter, trying to figure out exactly what to do with it. So I watched a video about it, and it reminded me of all of this.

Throughout the video she's just kind of feeling things out. Nothing is exact. She's interacting with this thing, this culture, that she's gotten to know over the past 11 years. It's not something that can easily be taught, because it's something one learns to feel through experience. The chickens, the mushrooms, the plants, all the various cultures and things, they all have a life of their own.

They don't fall into easy and predictable schedules. They don't conform to mechanization. They refuse rigid timetables. They are alive. They respond to the weather, the humidity, the temperature. They defy the exact rigidity that we, humans, are forced into at work.

There's something about that connection, to food, to life. There's something about a fluid involvement that stands in stark contrast to tables of fake metrics, concocted to pad resumes, that obscure the complexities of the world.

Maybe that's the thing I have against buying yeast. It has been bred to be regular and predictable, and in doing so has lost the chaos and complexity that forces you to understand it rather than control it. Control. This is the inescapable anathema of the capitalist life that brings death itself. It is for this purpose that our lives are fractured, we are isolated, and we are algorithmically fed mind-melting garbage until we snap. And yet, that control is an illusion.

There is something about being in a different relationship with food or medicine. Food, that thing that is so central to our lives, that we experience so often as forgettable transactions. But it becomes so visceral when you are part of it. What an interesting coincidence that capitalism has so prioritized separating us from this experience, enclosing the commons, removing our self-sufficiency, severing our relationship with our food, that we may be forced into the factories to become more predictable, controllable, quantifiable… domesticated.

Some wild things are poorly adapted to domestication. Some things cannot be quantified, contained, controlled. I didn't always feel the walls, but, interestingly, it's through food that I feel them most intensely now.


from nieuws van children for status

Casa Legal is an association of lawyers that, at a fraction of the cost to the State, offers a successful answer to the call in the 2020 federal coalition agreement to provide, through pilot projects, better legal support to victims of violence. Competition! cries the controlling monopoly of the Orde van Vlaamse Balies, its Bars and its Legal Aid Bureaus (BJBs). That, they insist, must and will be stopped, with SLAPP cases 8504 and 8574 before none other than the Constitutional Court! Given all the previous parliamentary and other inquiries, the State knows there is a problem with integrated support for victims. You can deploy all the psychologists and social workers in the world, but as long as their support is not backed up by sound legal assistance, we keep mopping with the tap running.

The Flemish lawyers complain bitterly about being badly paid. Their interest groups – the Legal Aid Bureaus, the Bars, the Orde van Vlaamse Balies – feel that, poor things, lawyers are paid too little. Yet they line up to defend perpetrators tooth and nail, as if their own lives depended on it.

Victims, and the services that support them, are literally out on the street.

In 2020, the coalition agreement stated that the government wanted pilot projects to offer victims better legal support.

Casa Legal was born: a pilot project designed for national roll-out for the benefit of victims. With their non-OVB approach, they helped more victims of violence than anyone in Belgium ever had before, at a fraction of the budget. Including victims of sexual violence against minors.

When Casa Legal made public preliminary results so successful that they defied every money-grubbing Flemish lawyer's imagination, the OVB launched SLAPP proceedings against Casa Legal before none other than the Constitutional Court. Case numbers 8504 and 8574.

The financial interests of the OVB members' State-funded monopoly, who raise their middle finger at victims, were publicly made to look ridiculous. In a way only lawyers can manage, the OVB went into an ego cramp: Casa Legal may not and shall not exist unless the OVB can control it financially from within its monopoly.

Parliament bellows, hear the cowbells ring, but in concrete terms it does nothing except move air.

While we wait for the OVB and its Bars to release the stranglehold they have been keeping on the Constitutional Court and the rule of law for six weeks now, so that the proceedings can move forward in the interest of all victims, read here how you can support Casa Legal.

You can of course also tug at the ears of our donkeys in the Chamber by writing to them, in particular the clappers of the justice committee. The more people who do, the better.


all information on this site, such as but not limited to documents and/or audio recordings and/or video recordings and/or photos, was created and/or collected and published in the interest of justice, society, and the Universal Right to Truth

children for status is an independent collective that documents and denounces, in a solution-oriented way, the State's culpable negligence regarding sexual violence against minors and child trafficking


from Prdeush

Everyone in Zmrdovec farts. People fart in the pub, on the doorstep, at partings, and instead of greetings. Mostly quickly, sloppily, and without ambition. But few suspect that some of the old men take it seriously. So seriously that they founded a secret society over it. Not for power. For quality.

The Society for Quality Farting consists of two old men and, wonder of wonders, one stag. They meet in an inconspicuous clearing where farting is done not for effect but with deliberation. Every fart is carefully dosed. When it's good, nobody says a word. When it isn't, an awkward silence falls, in which everyone realizes it didn't come off. The stag is in charge of depth and woodland character; the old men maintain stability and consistency. It's teamwork. And it works.

Tension arose the moment an owl wanted to join. She flew in silently, released a fart full of confidence… and waited for recognition. But the fart was squeaky and chaotic, and it fell apart before it could settle. One old man cleared his throat, the other looked away, and the stag just turned his antlers. The verdict was clear. She didn't make the cut. The owl flew off offended, and ever since she has farted out of spite beneath the old men's windows and pressed her big backside against their shutters.

A tomcat watches all of this from a distance. The old men don't know about him, and it's better that way. If they found out that someone could see how they keep the best farts for themselves and give the public only the average ones, there would be trouble. But the tomcat keeps quiet. He knows the world holds together precisely thanks to these little frauds.


from Nyfiken

I've never really understood why you need a charging box for your electric car. To me it long felt like one of those things that just appeared when EVs became common, a bit like an expensive accessory everyone says you have to have. At the same time, I've read and heard everywhere that it's dangerous to charge directly from a regular wall socket, and somewhere in there my confusion began. What is actually so different about a charging box?

When I started reading up a bit more, I realized it's not about the electricity itself, but about how the charging is controlled. A regular socket is made for temporary appliances, not for delivering a high load for many hours at a stretch. When you charge an EV, it often draws a lot of current for a long time, and then the socket, plugs, and cables can get hot without anyone really noticing. The charging box, on the other hand, is built precisely for this. It talks to the car, makes sure current only flows when everything is connected correctly, and can shut off immediately if something looks wrong. That's when it clicked for me: the safety lies in the monitoring and the communication, not just in the cord.

I've also come to understand that the charging box often has built-in protections you would otherwise have to add in the electrical panel. It can, for example, detect fault currents that a regular socket doesn't care about, and it can adjust the charging so that the house's electrical system isn't overloaded. It feels quite reassuring to know something is keeping watch in the background, especially when the car is charging at night and nobody is awake.

When I think about what to consider when choosing a charging box, it feels less mysterious now than before. For me it's largely about it suiting how I live and how I use the car. If you have a limited main fuse, load balancing can be important, so the car doesn't take all the power while you're cooking or showering. It also seems wise to choose something that is approved and installed by an electrician, even if shortcuts are tempting. I've realized the charging box isn't there to complicate things, but to make something that is actually quite advanced both safe and convenient in everyday life.

Once I'd accepted why the charging box exists, I started getting stuck on all the choices around it, things I at first thought were excessive but that actually affect daily life more than you'd expect. One such thing is whether or not to have a fixed cable. Instinctively I thought a fixed cable is most convenient: you just grab the cord and plug in the car without digging anything out of the trunk. At the same time, there's something appealing about a box without a fixed cable, especially if you want things a bit tidier or if several different cars will use the same charger. Then you don't have to wonder whether the cable fits them all, but you have to live with plugging and unplugging your own cord every time. For me it became clear that both options are good, just in different ways, and that the choice is more about how lazy or orderly you are than about technology.

Then there's the whole smart side of charging boxes, which I at first dismissed as unnecessary hassle. But the more I think about electricity prices that jump up and down, and about how much a car actually draws, the more logical it feels. Being able to steer the charging to the hours when electricity is cheapest feels almost like a given, especially if the car is sitting still all night anyway. I also like the idea of following the charging in an app, not to stare at graphs every day, but to keep track of what it actually costs and how much electricity the car uses over time.

At the same time, I notice it's easy to get lost among features that sound smarter than they perhaps are in practice. I've got the feeling that the most important thing is that the basics work reliably, that the connection doesn't act up, and that the box doesn't become useless if the manufacturer stops updating its app. There's something reassuring about solutions that don't require you to log in or update all the time to do what they're supposed to, even if smart features can certainly be a plus.

I've also started thinking longer term. Maybe I'll change cars in the future, maybe the electricity contract changes, maybe I'll want to tie the charging to solar panels or something else down the line. Then it feels sensible to choose a charging box that isn't locked to a single solution, but can be adapted without having to replace everything. For me, the charging box has gone from being a mysterious gadget you apparently must have to being a fairly central part of how the whole household's electricity is actually used. And that wasn't nearly as uninteresting as I first thought.


from G A N Z E E R . T O D A Y

Lost most of the morning to kitchen counter drama: it arrived with the wrong specs, and rather than work quickly to cut and deliver a counter with the accurate specs, company personnel are instead dicking around trying to find someone to blame.

Egypt is plagued by terrible project management issues. So bad is the project management culture that you're bound to experience abhorrent inefficiencies with almost any outfit you work with, no matter how big or small.

We have a long way to go to course-adjust in the old motherland.

In other news, my body has adapted to the new schedule, and I now auto-awake at 6:00am without the need for alarm clocks or any other interventions.

#journal


from An Open Letter

I love being able to spend my days with E. Four months flew by in the blink of an eye, mostly because she has felt like she’s been with me my whole life.


from eivindtraedal

Two manipulated images shared by Trump overnight: one shows the USA having annexed Canada, Greenland, and Venezuela; the other shows Trump, Rubio, and Vance planting the American flag on Greenland. The USA has acquired a mad emperor. He is a deeply unserious man, but he means what he says.

The map in the image on the left is interesting. What does it remind you of? I have spent many hours of my life on historical strategy games on the PC that revolve around clicking on a map and making it change color. Civilization, Europa Universalis, Crusader Kings, and so on. The symbolism, the rhetoric, and especially the use of memes by the Trump administration show that they are shaped by this culture.

The mentality of Trump and his circle is most reminiscent of having played one of those strategy games on the PC, won every objective, and started to get bored. Why not just start a few wars and see how much of the map you can bring under your control?

After winning global hegemony through cultural and economic dominance and strong alliances, the USA has apparently grown tired of a world in which the entire game is rigged in its favor, and decided to tear the whole thing down. A demented and totally unpredictable president, surrounded by spineless lackeys, is in the process of turning the USA from a global superpower into a local bully.

The most striking and frightening thing about Trump right now is his contempt for Europe. Here, apparently, no winning strategy exists. If we flatter him, it provokes his contempt. If we resist, it provokes his wrath. But by putting up resistance and imposing costs on the USA for its behavior, we can help wake up the American public and the sleeping Congress. Last night's demented memes suggest, at any rate, that there is little hope of getting through to Trump himself.


from Thoughts on Nanofactories

Nanofactories – theoretical matter printers, like Star Trek’s Replicators – will be revolutionary once they exist. Any material thing could be printed at will, by anyone. But we don’t need to wait for that future to get great use out of them. Even as conceptual models in thought experiments today, they provide deeply interesting value.

That is what this blog is about: Considering the Cultural Effects of Matter Printers.

Note that I am emphasizing the cultural effects. We could speculate about the actual mechanics of Nanofactories, but I’ll leave that for someone else’s blog. I am more interested in the Science Fiction approach: where future technology is used as a conceptual tool to help us think more effectively about realities today.

By the time Nanofactories actually come around, we will have considered their cultural effects deeply and widely, and can put what we’ve learned to use.

And so, I welcome you to join me on this journey through Thoughts on Nanofactories.


from FEDITECH

The technology industry often moves in predictable cycles: first the euphoria of discovery, then the complexity of real-world integration. For OpenAI, 2026 will be a decisive milestone in that timeline. According to a recent blog post by Sarah Friar, the company's Chief Financial Officer, the goal is no longer just to impress with the raw power of its models, but to focus on practical adoption. In other words, the American firm now seeks to close the gap between what artificial intelligence can do in theory and how people actually use it day to day.

Her analysis aims to be clear-eyed. The opportunity at hand is both immense and immediate. It no longer lies solely in consumer-facing conversational chatbots. The future is playing out in critical sectors such as healthcare, scientific research, and the enterprise world. In those fields, better artificial intelligence translates directly into better operational and human outcomes.

This shift toward concrete utility is the heart of the message, titled "a company that grows with the value of intelligence." Since the launch of ChatGPT, OpenAI has evolved at lightning speed, going from research lab to global technology giant. Active-user metrics, both daily and weekly, keep hitting all-time highs. This success rests on what Friar describes as a virtuous circle (or flywheel) linking computing power, cutting-edge research, finished products, and monetization. But this growth engine has a cost, and it is astronomical.

To maintain its leading position, OpenAI is investing massively. As of last November, the company had already made infrastructure commitments of roughly $1.4 trillion. That dizzying figure illustrates the economic reality of modern AI: securing world-class computing power requires planning several years ahead.

Growth, however, is never perfectly linear. There are periods when server capacity outstrips usage, and others when demand saturates supply. To navigate these murky waters, OpenAI is adopting strict financial discipline. The strategy is to keep a light balance sheet, favoring partnerships over direct ownership of infrastructure and structuring flexible contracts with various hardware suppliers. Capital is thus committed in tranches, in response to real demand signals, avoiding locking in the future more than necessary.

The evolution of usage inevitably drives the evolution of the business model. OpenAI recently announced that advertising will soon arrive on its platform, and it has launched the more affordable "ChatGPT Go" subscription. But according to Friar, again, the future will go well beyond what the company currently sells.

As artificial intelligence spreads into scientific research, drug discovery, the management of energy systems, and financial modeling, new business models will emerge. We could see licensing schemes, agreements based on intellectual property, and above all, outcome-based pricing. The idea is to share the value created by AI, rather than simply sell access. That is how the internet evolved, and artificial intelligence will likely follow the same path.

Finally, this practical adoption could soon take physical form. In partnership with the legendary designer Jony Ive, OpenAI is working on dedicated hardware devices, the first of which could be unveiled later this year. That would mark the final stage of the 2026 strategy: getting AI out of our screens and integrating it, practically and tangibly, into our reality.


from Expert Travel App Development That Increases Bookings

Hire Skilled Australian Developers for Faster Delivery

If you want predictable timelines and high-quality results, hire skilled Australian developers who work like your in-house team. You get vetted talent, clear communication, modern tech expertise, and seamless collaboration to reduce risk, meet deadlines, and scale your product confidently while keeping development smooth and stress-free. Explore more: Hire Dedicated Developers


from tomson darko

Your feelings are like the weather.

It can pour. It can storm. The sun can shine so hard that the lack of shade drives you mad. But one thing is certain.

Nobody can change the weather.

You can dress for it, though.

Adjust your daily rhythm to how you feel.

Gloomy days call for hot tea, a blanket, and a loving voice toward yourself.

Happy days call for dancing to music, calling people, and baking your favorite cake.

For years, a friend and I have been talking in seasons.

It’s not: how are you? But: what season are you in?

High summer?

Bleak winter?

Or one of those cold winter days with a blue sky?

It adds a deeper layer to the start of the conversation.

Without falling back on a simple answer like ‘Fine’, when what you actually mean is: ‘For days I’ve been feeling alone, ignored, worthless, premenstrual, and fat, and I long for my urn’.

But opening a conversation like that is a bit much too.

Hence.

The seasons.

I have yet to come across a better metaphor than describing your feelings through the seasons.

Though I’d happily make a case for using films as metaphors too.

  • ‘I’m doing so well that I’m simultaneously afraid of an iceberg. Like in the film Titanic (1997).’
  • ‘My emotions are stuck in an endless loop. Like Bill Murray reliving his day in Groundhog Day (1993).’
  • ‘I’m just like Frodo with his ring. The burden of guilt is so heavy and endless. But I know that one day I can throw it into the volcano.’

The trouble with films, though, is that not everyone has seen the same films or sees the metaphor in them.

The seasons we all know.

(Unless you’ve lived your whole life at the equator. Then it’s always summer, and that sounds like an optimist, and those you’d best avoid in your life. Distrust the optimist! Because there is no shadow at the equator. And if you can’t see your own shadow, do you really know yourself well enough? Are you aware enough of the dark forces lurking inside you?)

=

Comedian, writer, actor, presenter, mental-health advocate, AI critic, Brit, and more: Stephen Fry (1957) takes this even further.

Fry has struggled with depression his whole life.

He describes his feelings as the weather. And that has far-reaching consequences for how you look at yourself.

The following comes from an interview on the podcast The Diary of a CEO from December 2022:

‘The weather is real. You can’t say: “Oh, it isn’t really snowing, there’s no blizzard outside, so I’ll just put on a T-shirt.” You have to accept that the weather is real. But you also have to accept that you didn’t cause it. I didn’t make it snow. It’s just there.

‘And you don’t have to think: “Well, that’s it then, it’s going to snow forever, it will always be cold.”

‘No, it will pass. The weather has nothing to do with you.

‘You can’t make it stop, and it’s not your fault that it’s there.’

Four key insights from Fry’s weather metaphor:

  1. Your feelings are real, even if you think they aren’t.
  2. It’s not your fault that you feel the way you do.
  3. Dress for the weather. In other words: adapt to your own feelings.
  4. It will pass on its own.

Yes.

It will pass on its own.

As I always say: after sunshine there’s always a downpour.

But seriously.

Next time, try asking someone what the weather or season is like inside their head.

What season is it in your head right now?


from tomson darko

Sometimes the worries in your life pile up so high that keeping them in perspective no longer works.

At the mere thought of your problems, your heart rate takes on the tempo of an EDM track. You can barely fall asleep at night. It feels as if everything keeps getting worse and worse. While many of the problems you have right now are temporary discomfort. And by temporary I mean a year at most.

Your worries are smaller than you think.

Time to apply a lesson in perspective to yourself.

This method works surprisingly well.

The only downside is that you won’t realize it until a year from now.

Okay.

Are you ready?

Do this.

  1. Open the calendar on your phone.
  2. Go to today’s date, but one year from now.
  3. Schedule an appointment with yourself at 9 in the morning.
  4. In the notes field, list all the worries you have at this moment.
  5. From a leaking tap, to the stress in your job, to the unanswered texts from a good friend. Don’t think about it too long. Write down whatever comes to mind.
  6. Make sure you’ll get a notification for this appointment in a year.
  7. Now close your calendar.

What will happen in a year is this:

  • Some of your worries will have resolved themselves.
  • Some of your worries you will have solved yourself.
  • Some of your worries will still be there, but will feel different.

What you’ll come to see is that your worries pass more quickly than you think.

You’ll realize that some worries feel bigger than they are. And that some worries may stay with you for a longer time.

It gives you a better sense of time, and perhaps also a bit more trust in things you have no control over.

This method, copyright protected, I discovered myself in the aftermath of a depression. The problems just kept piling up. A move full of problems, work problems, love problems, mental-health problems. Car problems, no-bike-anymore problems, mortgage-hassle problems. When I pulled the plug from the full bathtub of my temporary little house, I saw my life disappear down the drain with it. When I watched a bird pull an earthworm out of the ground, I thought: it has already solved its only problem of the day.

Keeping perspective is simply hard when you’re stressed. And it gets more complicated when there are too many stressors. So: schedule that appointment with yourself a year from now.

What you can also do is copy the text from the notes field of your calendar and paste it into a notes app now. Make sure each worry gets one of those square checkboxes in front of it that you can tick.

In the months that follow, look at this list now and then and tick off your problems.

It’s enormously motivating, I can tell you from my own experience.

love,

tomson


from hamsterdam

Wake Up For What?

One of the more surprising occurrences over the past 10 years of politics was friends and acquaintances who were pro-Bernie Sanders later becoming either Trump supporters or seemingly sympathetic to Trump.

Over the course of many conversations with one such friend, I discovered that he believed America is broken beyond repair and that the election of Trump might, in his words, “serve as a catalyst for the fall of the two-party system.”

This is a dangerous gamble for two reasons.

First, it misdiagnoses the problem. America was not broken beyond repair. Yes, we had serious challenges—inequality, the cost of housing, institutional distrust, a feckless congress—but we also had a functioning democracy, the rule of law, and a robust economy. Change was possible through the only good mechanism human civilization has invented: democracy. The complaint that “Americans care about things politicians don't act on” is not proof that democracy failed—it's proof that people weren't voting based on what they claimed to care about. Hoping Trump will shock the conscience into action is not using democracy effectively; it's abandoning it.

Second, it underestimates the risk. Trump is not a controlled burn. He can do enormous and irreparable harm—to our democratic institutions, to the rule of law, to a world order that created the most prosperous and free era in human history. Betting on catastrophe as a catalyst assumes you can walk up to the edge of the abyss, peer over, and step back enlightened. History suggests otherwise: the abyss often gazes back into you.

For the sake of argument, let's hope that Trump does not lead us into the abyss, and that an overwhelming majority of Americans see the importance of “fixing our problems” as a result of walking up to the edge of authoritarianism, that they “wake up” as my friend says. Even if all of this comes to pass, we are left with a twist on the immortal words of Lil Jon, “Wake Up For What?”

The assumption that Trump will shock people into waking up and dismantling the two-party system misreads what his voters actually want. Research shows Trump's coalition is not a unified movement but a fragmented alliance of groups with distinct identities, competing priorities, and clashing worldviews. Their top priorities are concrete and personal—the economy (93%), immigration (82%), the cost of living, anti-woke, abortion, etc. There is no alignment on structural political reform. When pollsters ask Americans about third parties, 58% say one is needed, but this reflects frustration, not commitment: Republicans' support for a third party actually dropped from 58% to 48% once Trump consolidated power. People say they want change, but research consistently finds that they are not aligned on the type of change they want, and they often simply want their team to win more completely.

Trump is already doing irreparable harm to our country and our values, and it's unclear if enough people will wake up fast enough to stop him from doing even greater harm. But as the data shows, even if they do wake up, they won't wake up to the same vision. There is no unified “aha” moment waiting on the other side of this chaos—just millions of people, still wanting different things, still needing to be persuaded.

That's the part this theory skips over. Democracy is not a vending machine where you insert a sufficient crisis and out comes reform. It's the long, frustrating work of changing minds one at a time. That work was available to us before Trump. It will be waiting for us after—if we're lucky enough to still have the institutions that make it possible.

Hoping for a collective awakening is not a strategy. The only way out is the way we should have been going all along: showing up, persuading people, voting like it matters. Because it does. It always did.

 

from SmarterArticles

The promise was seductive: AI that writes code faster than any human, accelerating development cycles and liberating engineers from tedious boilerplate. The reality, as thousands of development teams have discovered, is considerably more complicated. According to the JetBrains State of Developer Ecosystem 2025 survey of nearly 25,000 developers, 85% now regularly use AI tools for coding and development. Yet Stack Overflow's 2025 Developer Survey reveals that only 33% of developers trust the accuracy of AI output, down from 43% in 2024. More developers actively distrust AI tools (46%) than trust them.

This trust deficit tells a story that productivity metrics alone cannot capture. While GitHub reports developers code 55% faster with Copilot and McKinsey studies suggest tasks can be completed twice as quickly with generative AI assistance, GitClear's analysis of 211 million changed lines of code reveals a troubling counter-narrative. The percentage of code associated with refactoring has plummeted from 25% in 2021 to less than 10% in 2024. Duplicated code blocks increased eightfold. For the first time in GitClear's measurement history, copy-pasted lines exceeded refactored lines.

The acceleration is real. So is the architectural degradation it enables.

What emerges from this data is not a simple story of AI success or failure. It is a more nuanced picture of tools that genuinely enhance productivity when deployed with discipline but create compounding problems when adopted without appropriate constraints. The developers and organisations navigating this landscape successfully share a common understanding: AI coding assistants require guardrails, architectural oversight, and deliberate workflow design to deliver sustainable value.

The Feature Creep Accelerator

Feature creep has plagued software development since the industry's earliest days. Wikipedia defines it as the excessive ongoing expansion or addition of new features beyond the original scope, often resulting in software bloat and over-complication rather than simple design. It is considered the most common source of cost and schedule overruns and can endanger or even kill products and projects. What AI coding assistants have done is not create this problem, but radically accelerate its manifestation.

Consider the mechanics. A developer prompts an AI assistant to add a user authentication feature. The AI generates functional code within seconds. The developer, impressed by the speed and apparent correctness, accepts the suggestion. Then another prompt, another feature, another quick acceptance. The velocity feels exhilarating. The Stack Overflow survey confirms this pattern: 84% of developers now use or plan to use AI tools in their development process. The JetBrains survey reports that 74% cite increased productivity as AI's primary benefit, with 73% valuing faster completion of repetitive tasks.

But velocity without direction creates chaos. Google's 2024 DORA report found that while AI adoption increased individual output (21% more tasks completed and 98% more pull requests merged), organisational delivery metrics remained flat. More alarmingly, AI adoption correlated with a 7.2% reduction in delivery stability. The 2025 DORA report confirms this pattern persists: AI adoption continues to have a negative relationship with software delivery stability. As the DORA researchers concluded, speed without stability is accelerated chaos.

The mechanism driving this instability is straightforward. AI assistants optimise for immediate task completion. They generate code that works in isolation but lacks awareness of broader architectural context. Each generated component may function correctly yet contradict established patterns elsewhere in the codebase. One function uses promises, another async/await, a third callbacks. Database queries are parameterised in some locations and built from concatenated strings in others. Error handling varies wildly between endpoints.
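
The query inconsistency in particular is more than a style nit. A minimal sketch using Python's standard sqlite3 module (with a hypothetical toy users table) shows how the concatenated variant, sitting alongside parameterised queries in the same codebase, also opens an injection hole:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # String concatenation: inconsistent with the parameterised
    # style used elsewhere, and vulnerable to SQL injection.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterised query: the driver handles escaping.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Both behave identically for benign input...
assert find_user_unsafe(conn, "alice") == find_user_safe(conn, "alice")

# ...but a crafted input defeats only the concatenated version.
payload = "x' OR '1'='1"
assert find_user_safe(conn, payload) == []        # no such user
assert find_user_unsafe(conn, payload) == [(1,)]  # injection returns all rows
```

Both functions pass a casual review when read in isolation, which is exactly why this kind of inconsistency survives until an automated gate catches it.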

This is not a failing of AI intelligence. It reflects a fundamental mismatch between how AI assistants operate and how sustainable software architecture develops. The Qodo State of AI Code Quality report identifies missing context as the top issue developers face, reported by 65% during refactoring and approximately 60% during test generation and code review. Only 3.8% of developers report experiencing both low hallucination rates and high confidence in shipping AI-generated code without human review.

Establishing Effective Guardrails

The solution is not to abandon AI assistance but to contain it within structures that preserve architectural integrity. CodeScene's research demonstrates that unhealthy code exhibits 15 times more defects, requires twice the development time, and creates 10 times more delivery uncertainty compared to healthy code. Their approach involves implementing guardrails across three dimensions: code quality, code familiarity, and test coverage.

The first guardrail dimension addresses code quality directly. Every line of code, whether AI-generated or handwritten, undergoes automated review against defined quality standards. CodeScene's CodeHealth Monitor detects over 25 code smells including complex methods and God functions. When AI or a human introduces issues, the monitor flags them instantly before the code reaches the main branch. This creates a quality gate that treats AI-generated code with the same scrutiny applied to human contributions.

The quality dimension requires teams to define their code quality standards explicitly and automate enforcement via pull request reviews. A 2023 study found that popular AI assistants generate correct code in only 31.1% to 65.2% of cases. Similarly, CodeScene's Refactoring vs. Refuctoring study found that AI breaks code in two out of three refactoring attempts. These statistics make quality gates not optional but essential.

The second dimension concerns code familiarity. Research from the 2024 DORA report reveals that 39% of respondents reported little to no trust in AI-generated code. This distrust correlates with experience level: senior developers show the lowest “highly trust” rate at 2.6% and the highest “highly distrust” rate at 20%. These experienced developers have learned through hard experience that AI suggestions require verification. Guardrails should institutionalise this scepticism by requiring review from developers familiar with affected areas before AI-generated changes merge.

The familiarity dimension serves another purpose: knowledge preservation. When AI generates code that bypasses human understanding, organisations lose institutional knowledge about how their systems work. When something breaks at 3 a.m. and the code was generated by an AI six months ago, can the on-call engineer actually understand what is failing? Can they trace through the logic and implement a meaningful fix without resorting to trial and error?

The third dimension emphasises test coverage. The Ox Security report titled “Army of Juniors: The AI Code Security Crisis” identified 10 architecture and security anti-patterns commonly found in AI-generated code. Comprehensive test suites serve as executable documentation of expected behaviour. When AI-generated code breaks tests, the violation becomes immediately visible. When tests pass, developers gain confidence that at least basic correctness has been verified.

Enterprise adoption requires additional structural controls. The 2026 regulatory landscape, with the EU AI Act's high-risk provisions taking effect in August and penalties reaching 35 million euros or 7% of global revenue, demands documented governance. AI governance committees have become standard in mid-to-large enterprises, with structured intake processes covering security, privacy, legal compliance, and model risk.

Preventing Architectural Drift

Architectural coherence presents a distinct challenge from code quality. A codebase can pass all quality metrics while still representing a patchwork of inconsistent design decisions. The term “vibe coding” has emerged to describe an approach where developers accept AI-generated code without fully understanding it, relying solely on whether the code appears to work.

The consequences of architectural drift compound over time. A September 2025 Fast Company report quoted senior software engineers describing “development hell” when working with AI-generated code. One developer's experience became emblematic: “Random things are happening, maxed out usage on API keys, people bypassing the subscription.” Eventually: “Cursor keeps breaking other parts of the code,” and the application was permanently shut down.

Research examining ChatGPT-generated code found that only five out of 21 programs were initially secure when tested across five programming languages. Missing input sanitisation emerged as the most common flaw, while Cross-Site Scripting failures occurred 86% of the time and Log Injection vulnerabilities appeared 88% of the time. These are not obscure edge cases but fundamental security flaws that any competent developer should catch during code review.

Preventing this drift requires explicit architectural documentation that AI assistants can reference. A recommended approach involves creating a context directory containing specialised documents: a Project Brief for core goals and scope, Product Context for user experience workflows and business logic, System Patterns for architecture decisions and component relationships, Tech Context for the technology stack and dependencies, and Progress Tracking for working features and known issues.

This Memory Bank approach addresses AI's fundamental limitation: forgetting implementation choices made earlier when working on large projects. AI assistants lose track of architectural decisions, coding patterns, and overall project structure, creating inconsistency as project complexity increases. By maintaining explicit documentation that gets fed into every AI interaction, teams can maintain consistency even as AI generates new code.
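
As a sketch of how a Memory Bank might be wired into a workflow, the snippet below concatenates whatever context documents exist into a preamble for every AI interaction. The file names and directory layout are illustrative assumptions, not a fixed standard:

```python
import tempfile
from pathlib import Path

# Hypothetical Memory Bank layout, one markdown file per document type.
CONTEXT_FILES = [
    "project_brief.md",
    "product_context.md",
    "system_patterns.md",
    "tech_context.md",
    "progress.md",
]

def build_prompt_context(context_dir: str) -> str:
    """Concatenate every existing context document into a preamble
    that gets prepended to each AI prompt."""
    parts = []
    for name in CONTEXT_FILES:
        path = Path(context_dir) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)

# Demo with a throwaway directory holding a single document.
tmp = tempfile.mkdtemp()
Path(tmp, "project_brief.md").write_text("Goals: ship v1.")
preamble = build_prompt_context(tmp)
```

Missing documents are simply skipped, so the preamble grows with the project rather than requiring everything up front.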

The human role in this workflow resembles a navigator in pair programming. The navigator directs overall development strategy, makes architectural decisions, and reviews AI-generated code. The AI functions as the driver, generating code implementations and suggesting refactoring opportunities. The critical insight is treating AI as a junior developer beside you: capable of producing drafts, boilerplate, and solid algorithms, but lacking the deep context of your project.

Breaking Through Repetitive Problem-Solving Patterns

Every developer who has used AI coding assistants extensively has encountered the phenomenon: the AI gets stuck in a loop, generating the same incorrect solution repeatedly, each attempt more confidently wrong than the last. The 2025 Stack Overflow survey captures this frustration, with 66% of developers citing “AI solutions that are almost right, but not quite” as their top frustration. Meanwhile, 45% report that debugging AI-generated code takes more time than expected. These frustrations have driven 35% of developers to turn to Stack Overflow specifically after AI-generated code fails.

The causes of these loops are well documented. VentureBeat's analysis of why AI coding agents are not production-ready identifies brittle context windows, broken refactors, and missing operational awareness as primary culprits. When AI exceeds its context limit, it loses track of previous attempts and constraints. It regenerates similar solutions because the underlying prompt and available context have not meaningfully changed.

Several strategies prove effective for breaking these loops. The first involves starting fresh with new context. Opening a new chat session can help the AI think more clearly without the baggage of previous failed attempts in the prompt history. This simple reset often proves more effective than continued iteration within a corrupted context.

The second strategy involves switching to analysis mode. Rather than asking the AI to fix immediately, developers describe the situation and request diagnosis and explanation. By doing this, the AI outputs analysis or planning rather than directly modifying code. This shift in mode often reveals the underlying issue that prevented the AI from generating a correct solution.

Version control provides the third strategy. Committing a working state before adding new features or accepting AI fixes creates reversion points. When a loop begins, developers can quickly return to the last known good version rather than attempting to untangle AI-generated complexity. Frequent checkpointing makes the decision between fixing forward and reverting backward much easier.

The fourth strategy acknowledges when manual intervention becomes necessary. One successful workaround involves instructing the agent not to read the file and instead requesting it to provide the desired configuration, with the developer manually adding it. This bypasses whatever confusion the AI has developed about the file's current state.

The fifth strategy involves providing better context upfront. Developers should always copy-paste the exact error text or describe the wrong behaviour precisely. Giving all relevant errors and output to the AI leads to more direct fixes, whereas leaving it to infer the issue can lead to loops.

These strategies share a common principle: recognising when AI assistance has become counterproductive and knowing when to take manual control. The 90/10 rule offers useful guidance. AI currently excels at planning architectures and writing code blocks but struggles with debugging real systems and handling edge cases. When projects reach 90% completion, switching from building mode to debugging mode leverages human strengths rather than fighting AI limitations.

Leveraging Complementary AI Models

The 2025 AI landscape has matured beyond questions of whether to use AI assistance toward more nuanced questions of which AI model best serves specific tasks. Research published on ResearchGate comparing Gemini 2.5, Claude 4, LLaMA 4, GPT-4.5, and DeepSeek V3.1 concludes that no single model excels at everything. Each has distinct strengths and weaknesses. Rather than a single winner, the 2025 landscape shows specialised excellence.

Professional developers increasingly adopt multi-model workflows that leverage each AI's advantages while avoiding their pitfalls. The recommended approach matches tasks to model strengths: Gemini for deep reasoning and multimodal analysis, GPT series for balanced performance and developer tooling, Claude for long coding sessions requiring memory of previous context, and specialised models for domain-specific requirements.

Orchestration platforms have emerged to manage these multi-model workflows. They provide the integration layer that routes requests to appropriate models, retrieves relevant knowledge, and monitors performance across providers. Rather than committing to a single AI vendor, organisations deploy multiple models strategically, routing queries to the optimal model per task type.
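
At its core, the routing layer can be a simple dispatch table. The sketch below is a hypothetical illustration; the task types and model families are assumptions drawn from the pattern above, not vendor guidance:

```python
# Illustrative task-to-model routing table.
ROUTES = {
    "deep_reasoning": "gemini",  # deep reasoning, multimodal analysis
    "tooling": "gpt",            # balanced performance, developer tooling
    "long_session": "claude",    # long sessions needing prior context
}

def route(task_type: str, default: str = "gpt") -> str:
    """Pick a model family for a task type, falling back to a default."""
    return ROUTES.get(task_type, default)
```

A production orchestration layer would add knowledge retrieval, cost and latency budgets, and per-provider monitoring on top of this lookup, but the routing decision itself stays this small.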

This multi-model approach proves particularly valuable for breaking through architectural deadlocks. When one model gets stuck in a repetitive pattern, switching to a different model often produces fresh perspectives. The models have different training data, different architectural biases, and different failure modes. What confuses one model may be straightforward for another.

The competitive advantage belongs to developers who master multi-model workflows rather than committing to a single platform. This represents a significant shift in developer skills. Beyond learning specific AI tools, developers must develop meta-skills for evaluating which AI model suits which task and when to switch between them.

Mandatory Architectural Review Before AI Implementation

Enterprise teams have discovered that AI output velocity can exceed review capacity. Qodo's analysis observes that AI coding agents increased output by 25-35%, but most review tools do not address the widening quality gap. The consequences include larger pull requests, architectural drift, inconsistent standards across multi-repository environments, and senior engineers buried in validation work instead of system design. Leaders frequently report that review capacity, not developer output, is the limiting factor in delivery.

The solution emerging across successful engineering organisations involves mandatory architectural review before AI implements major changes. The most effective teams have shifted routine review load off senior engineers by automatically approving small, low-risk, well-scoped changes while routing schema updates, cross-service changes, authentication logic, and contract modifications to human reviewers.

AI review systems must therefore categorise pull requests by risk and flag unrelated changes bundled in the same pull request. Selective automation of approvals under clearly defined conditions maintains velocity for routine changes while ensuring human judgment for consequential decisions. AI-assisted development now accounts for nearly 40% of all committed code, making these review processes critical to organisational health.
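
A minimal version of that risk routing might look like the sketch below, where the high-risk path prefixes and the size threshold are assumptions a team would tune to its own repositories:

```python
# Hypothetical prefixes for changes that always need a human reviewer:
# schema migrations, authentication logic, and service contracts.
HIGH_RISK_PATHS = ("migrations/", "auth/", "contracts/")

def classify_pr(changed_files: list[str], lines_changed: int) -> str:
    """Auto-approve small, low-risk changes; route consequential
    ones (by path or by size) to human review."""
    if any(f.startswith(HIGH_RISK_PATHS) for f in changed_files):
        return "human_review"
    if lines_changed > 200:  # illustrative size threshold
        return "human_review"
    return "auto_approve"
```

The size check doubles as a flag for unrelated changes bundled into one pull request, since those tend to arrive as oversized diffs.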

The EU AI Act's requirements make this approach not merely advisable but legally necessary for certain applications. Enterprises must demonstrate full data lineage tracking (knowing exactly what datasets contributed to each model's output), human-in-the-loop checkpoints for workflows impacting safety, rights, or financial outcomes, and risk classification tags labelling each model with its risk level, usage context, and compliance status.

The path toward sustainable AI-assisted development runs through consolidation and discipline. Organisations that succeed will be those that stop treating AI as a magic solution for software development and start treating it as a rigorous engineering discipline requiring the same attention to process and quality as any other critical capability.

Safeguarding Against Hidden Technical Debt

The productivity paradox of AI-assisted development becomes clearest when examining technical debt accumulation. An HFS Research and Unqork study found that while 84% of organisations expect AI to reduce costs and 80% expect productivity gains, 43% report that AI will create new technical debt. Top concerns include security vulnerabilities at 59%, legacy integration complexity at 50%, and loss of visibility at 42%.

The mechanisms driving this debt accumulation differ from traditional technical debt. AI technical debt compounds through three primary vectors. Model versioning chaos results from the rapid evolution of code assistant products. Code generation bloat emerges as AI produces more code than necessary. Organisational fragmentation develops as different teams adopt different AI tools and workflows. These vectors, coupled with the speed of AI code generation, interact to cause exponential growth.

SonarSource's August 2025 analysis of thousands of programming tasks completed by leading language models uncovered what researchers describe as a systemic lack of security awareness. The Ox Security report found AI-generated code introduced 322% more privilege escalation paths and 153% more design flaws compared to human-written code. AI-generated code is highly functional but systematically lacking in architectural judgment.

The financial implications are substantial. By 2025, CISQ estimates nearly 40% of IT budgets will be spent maintaining technical debt. A Stripe report found developers spend, on average, 42% of their work week dealing with technical debt and bad code. AI assistance that accelerates code production without corresponding attention to code quality simply accelerates technical debt accumulation.

The State of Software Delivery 2025 report by Harness found that contrary to perceived productivity benefits, the majority of developers spend more time debugging AI-generated code and more time resolving security vulnerabilities than before AI adoption. This finding aligns with GitClear's observation that code churn, defined as the percentage of code discarded less than two weeks after being written, has nearly doubled from 3.1% in 2020 to 5.7% in 2024.

Safeguarding against this hidden debt requires continuous measurement and explicit debt budgeting. Teams should track not just velocity metrics but also code health indicators. The refactoring rate, clone detection, code churn within two weeks of commit, and similar metrics reveal whether AI assistance is building sustainable codebases or accelerating decay. If the current trend continues, GitClear believes it could soon bring about a phase change in how developer energy is spent, with defect remediation becoming the leading day-to-day developer responsibility rather than developing new features.

Structuring Developer Workflows for Multi-Model Effectiveness

Effective AI-assisted development requires restructuring workflows around AI capabilities and limitations rather than treating AI as a drop-in replacement for human effort. The Three Developer Loops framework published by IT Revolution provides useful structure: a tight inner loop of coding and testing, a middle loop of integration and review, and an outer loop of planning and architecture.

AI excels in the inner loop. Code generation, test creation, documentation, and similar tasks benefit from AI acceleration without significant risk. Development teams spend nearly 70% of their time on repetitive tasks instead of creative problem-solving, and AI handles approximately 40% of the time developers previously spent on boilerplate code.

The middle loop requires more careful orchestration. AI can assist with code review and integration testing, but human judgment must verify that generated code aligns with architectural intentions.

The outer loop remains primarily human territory. Planning, architecture, and strategic decisions require understanding of business context, user needs, and long-term maintainability that AI cannot provide.

The workflow implications are significant. Rather than using AI continuously throughout development, effective developers invoke AI assistance at specific phases while maintaining manual control at others. During initial planning and architecture, AI might generate options for human evaluation but should not make binding decisions. During implementation, AI can accelerate code production within established patterns. During integration and deployment, AI assistance should be constrained by automated quality gates that verify generated code meets established standards.

Context management becomes a critical developer skill. The METR 2025 study that found developers actually take 19% longer when using AI tools attributed this primarily to context management overhead. The study examined 16 experienced open-source developers with an average of five years of prior experience with the mature projects they worked on. Before completing tasks, developers predicted AI would speed them up by 24%. After experiencing the slowdown firsthand, they still reported believing AI had improved their performance by 20%. The objective measurement showed the opposite.

The context directory approach described earlier provides one structural solution. Alternative approaches include using version-controlled markdown files to track AI interactions and decisions, employing prompt templates that automatically include relevant context, and establishing team conventions for what context AI should receive for different task types. The specific approach matters less than having a systematic approach that the team follows consistently.

Real-World Implementation Patterns

The theoretical frameworks for AI guardrails translate into specific implementation patterns that teams can adopt immediately. The first pattern involves pre-commit hooks that validate AI-generated code against quality standards before allowing commits. These hooks can verify formatting consistency, run static analysis, check for known security vulnerabilities, and enforce architectural constraints. When violations occur, the commit is rejected with specific guidance for resolution.
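
As a toy illustration of that first pattern, the gate below scans a staged diff for two forbidden patterns. The regex rules are deliberately simplistic assumptions; a real hook would invoke the team's actual formatter, linter, and security scanner:

```python
import re

# Illustrative constraint list mapping a pattern to a rejection reason.
FORBIDDEN = {
    r"execute\([^,)]*\+": "string-concatenated SQL query",
    r"print\(": "stray debug print",
}

def gate(diff_text: str) -> list[str]:
    """Return the violations found in a staged diff.
    An empty list means the commit may proceed."""
    return [
        reason for pattern, reason in FORBIDDEN.items()
        if re.search(pattern, diff_text)
    ]
```

Wired into a pre-commit hook, a non-empty return value rejects the commit and prints each reason as the specific guidance for resolution the pattern calls for.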

The second pattern involves staged code review with AI assistance. Initial review uses AI tools to identify obvious issues like formatting violations, potential bugs, or security vulnerabilities. Human reviewers then focus on architectural alignment, business logic correctness, and long-term maintainability. This two-stage approach captures AI efficiency gains while preserving human judgment for decisions requiring context that AI lacks.

The third pattern involves explicit architectural decision records that AI must reference. When developers prompt AI for implementation, they include references to relevant decision records. The AI then generates code that respects documented constraints. This requires discipline in maintaining decision records but provides concrete guardrails against architectural drift.

The fourth pattern involves regular architectural retrospectives that specifically examine AI-generated code. Teams review samples of AI-generated commits to identify patterns of architectural violation, code quality degradation, or security vulnerability. These retrospectives inform adjustments to guardrails, prompt templates, and review processes.

The fifth pattern involves model rotation for complex problems. When one AI model gets stuck, teams switch to a different model rather than continuing to iterate with the stuck model. This requires access to multiple AI providers and skills in prompt translation between models.
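
The rotation logic itself can be trivial. In this hypothetical sketch, `models` is an ordered preference list and `attempt` stands in for whatever function calls a given provider, returning None on failure:

```python
def solve_with_rotation(models, attempt, max_tries_per_model=2):
    """Try each model a bounded number of times; rotate to the next
    one instead of iterating forever with a stuck model."""
    for model in models:
        for _ in range(max_tries_per_model):
            result = attempt(model)
            if result is not None:
                return model, result
    return None, None

# Demo: the first model always fails, the second succeeds.
calls = []
def attempt(model):
    calls.append(model)
    return "ok" if model == "b" else None

assert solve_with_rotation(["a", "b"], attempt) == ("b", "ok")
assert calls == ["a", "a", "b"]
```

The bounded retry count is the important design choice: it encodes the decision to stop iterating with a stuck model rather than leaving that judgment to in-the-moment frustration.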

Measuring Success Beyond Velocity

Traditional development metrics emphasise velocity: lines of code, commits, pull requests merged, features shipped. AI assistance amplifies these metrics while potentially degrading unmeasured dimensions like code quality, architectural coherence, and long-term maintainability. Sustainable AI-assisted development requires expanding measurement to capture these dimensions.

The DORA framework has evolved to address this gap. The 2025 report introduced rework rate as a fifth core metric precisely because AI shifts where development time gets spent. Teams produce initial code faster but spend more time reviewing, validating, and correcting it. Monitoring cycle time, code review patterns, and rework rates reveals the true productivity picture that perception surveys miss.

Code health metrics provide another essential measurement dimension. GitClear's analysis tracks refactoring rate, code clone frequency, and code churn. These indicators reveal whether codebases are becoming more or less maintainable over time. When refactoring declines and clones increase, as GitClear's data shows has happened industry-wide, the codebase is accumulating debt regardless of how quickly features appear to ship. The percentage of moved or refactored lines decreased dramatically from 24.1% in 2020 to just 9.5% in 2024, while lines classified as copy-pasted or cloned rose from 8.3% to 12.3% in the same period.
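
Teams can approximate a GitClear-style churn figure from their own history. The sketch below assumes the log has already been parsed into (line_id, added_at, deleted_at) records; that input shape is an assumption for illustration, not a real git API:

```python
from datetime import datetime, timedelta

def churn_rate(changes, window_days=14):
    """Fraction of added lines deleted again within the window.
    `changes` is a list of (line_id, added_at, deleted_at_or_None),
    where deleted_at is None for lines that still survive."""
    if not changes:
        return 0.0
    window = timedelta(days=window_days)
    churned = sum(
        1 for _, added_at, deleted_at in changes
        if deleted_at is not None and deleted_at - added_at <= window
    )
    return churned / len(changes)
```

Tracked weekly alongside refactoring rate and clone frequency, a rising value flags decay long before it surfaces as defect volume.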

Security metrics deserve explicit attention given AI's documented tendency to generate vulnerable code. Georgetown University's Center for Security and Emerging Technology identified three broad risk categories: models generating insecure code, models themselves being vulnerable to attack and manipulation, and downstream cybersecurity impacts including feedback loops where insecure AI-generated code gets incorporated into training data for future models.

Developer experience metrics capture dimensions that productivity metrics miss. The Stack Overflow survey finding that 45% of developers report debugging AI-generated code takes more time than expected suggests that velocity gains may come at the cost of developer satisfaction and cognitive load. Sustainable AI adoption requires monitoring not just what teams produce but how developers experience the production process.

The Discipline That Enables Speed

The paradox of AI-assisted development is that achieving genuine productivity gains requires slowing down in specific ways. Establishing guardrails, maintaining context documentation, implementing architectural review, and measuring beyond velocity all represent investments that reduce immediate output. Yet without these investments, the apparent gains from AI acceleration prove illusory as technical debt accumulates, architectural coherence degrades, and debugging time compounds.

The organisations succeeding with AI coding assistance share common characteristics. They maintain rigorous code review regardless of code origin. They invest in automated testing proportional to development velocity. They track quality metrics alongside throughput metrics. They train developers to evaluate AI suggestions critically rather than accepting them reflexively.

These organisations have learned that AI coding assistants are powerful tools requiring skilled operators. In the hands of experienced developers who understand both AI capabilities and limitations, they genuinely accelerate delivery. Applied without appropriate scaffolding, they create technical debt faster than any previous development approach. Companies implementing comprehensive AI governance frameworks report 60% fewer hallucination-related incidents compared to those using AI tools without oversight controls.

The 19% slowdown documented by the METR study represents one possible outcome, not an inevitable one. But achieving better outcomes requires abandoning the comfortable perception that AI automatically makes development faster. It requires embracing the more complex reality that speed and quality require continuous, deliberate balancing.

The future belongs to developers and organisations that treat AI assistance not as magic but as another engineering discipline requiring its own skills, processes, and guardrails. The best developers of 2025 will not be the ones who generate the most lines of code with AI, but the ones who know when to trust it, when to question it, and how to integrate it responsibly. The tools are powerful. The question is whether we have the discipline to wield them sustainably.




Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from Florida Homeowners Association Terror

When I moved into my new house, Florida’s minimum wage was $7.25 per hour. Of course if I had been making that, I would never have been able to rent or buy a home in Hillsborough County. I would have had to live in Polk, Hernando, or the Dominican Republic—all of which I did consider at some point!

I was making okay money when I closed on my home. The HOA fees were about $50 per month. That included lawn service provided that you did not fence in your yard (per the rules “back then”). And during the time that I have been here, I went from a decent salary to a pretty good one, making more than the median household income in my area. (I did it, Mom and Dad!) But since many people, myself included, are merely one illness/firing/accident away from poverty, a good salary is not enough of a buffer. I managed to have illnessES, firingS, and accidentS…sometimes all in the same year. You need a great salary and great savings to make it in these suburban streets.

Florida’s minimum wage is now $14 per hour and will increase to $15 in the fall. My HOA fees increased to $103 effective this month (January 2026). It has been creeping each year. We did get an increase in services when security was added in 2020 to combat car thievery. I guess that was nice [for the people who prefer to leave their car doors unlocked at night]. But something about a 100% increase in a decade does not sit right with me. And what happened to that nature trail around the lakes?

 

Join the writers on Write.as.
