Want to join in? Respond to our weekly writing prompts, open to everyone.
from
Roscoe's Quick Notes
Today brings two road races to fans of the sport. First up, from the INDYCAR Racing Series, is the Firestone Grand Prix of St. Petersburg. And later this afternoon, from the NASCAR Cup Series, we'll have the DuraMAX Texas Grand Prix.
I'll be bringing both races in from a local TV station OTA, via my old rabbit-ears antenna, rather than streaming over the Internet, thereby avoiding the annoying “buffering” that comes with extended Internet streaming.
And the adventure continues.
from
Ira Cogan
Cory Doctorow once again helps me make sense of the world
Fascinating read from Ars Technica about Wikipedia having to blacklist an archiving site via waxy
Perhaps you heard about an AI agent publishing a hit piece on an open source maintainer. I can't even put into words the implications of something like this. Our tech overlords are ruining the world. also via waxy
Wandering Arrow's latest Lobster Liberation Report.
An aside about how the truth matters and facts matter. I appreciate the Clintons right now. This isn't a commentary on whether their politics are good or bad. It's a commentary on something a lot of people don't seem to care about or appreciate. The gist is they were like: Oh, you have questions? Swear me in so I can talk about it under oath. Well, I appreciate that.
from
intueor
On an otherwise rather dull, ordinary Tuesday before the Christmas break, I attended a meeting at work. An Outlook invitation and a PowerPoint presentation in the big meeting room. But what I had expected to be a fairly boring event wound me up more and more. It was a meeting about how we are to use AI chatbots in the workplace going forward, and afterwards I was left with a feeling I still find hard to pin down, though above all I now understand that I was angry.
As chance would have it, I work in the part of the public sector where pay and bonus negotiations are held every autumn, so in the run-up I had thought a great deal about what I could do over the next few years to position myself well, and those considerations were fresh in my mind. As the meeting progressed, I could note that practically every point I had singled out as one of my future competencies was something this chatbot was supposed to handle instead. It was presented as happy news, but I could only read it as a conflict beginning right there, and it made me angry. It was a challenge to my chances of building a career and securing a good position from which to negotiate pay. To make matters worse, layoffs of more than 600 state employees had just been announced (my own office was spared), and in several places it had been said that this wasn't such a big problem for the state's level of service, because productivity would rise with AI. My mood was well and truly punctured when we were told that, thanks to the chatbot, it won't matter so much in the future if an employee leaves the workplace. Again presented as happy news, as if that would simply be great. But that is a catastrophe, I thought, because if I have one interest as a wage earner, it is that it matters if I quit.
I admit I may be a bit more critical of all this than average. In a way I may even have sought out the confrontation, waited for it to arise. That is because I have long followed the more critical tech news in general, and in particular have become fascinated by the independent journalist Ed Zitron, who covers the AI industry from his base in the US through a newsletter and a podcast. Zitron uses an interesting device that often fails, but which he masters quite well: angry indignation. It is plain to read, and especially to hear in his podcast, that he is furious, but in a way that doesn't compromise him; instead it only makes him more credible. Probably because it is authentic, but also because he knows how to explain why he is angry; the anger doesn't stop him from formulating coherent analyses and arguments. For me it has been liberating, because it can be cathartic to watch others be angry in an authentic way. Cathartic to hear that it is okay to be angry about things you genuinely find outrageous.
Zitron claims that large parts of the American AI industry are lying, and that it hurts their customers. Both the companies' immediate customers, who buy a chatbot from OpenAI or Microsoft, graphics cards from Nvidia and so on, and the many ordinary people whose investments are tied up in the American stock market, for example via equity funds, which has entered a bubble because of the AI companies' overvalued share prices. Zitron also has a theory, if you can call it that, of why they lie: they make money from it. None of the big AI companies currently makes money on its customers, since it costs more to generate all the chat messages users want from their chatbots than the companies can take in through an ordinary monthly subscription. In some cases it can be fine for a business to lose money on its customers, especially since these are still relatively young companies in growth, so for a while it can make sense to spend more than you earn in order to establish yourself in the market, and then start earning once you have built a solid customer base. The problem, however, is that they have no strategy for turning this around: there is no real prospect of them becoming profitable. It is a problem they keep pushing ahead of them and don't seem able to solve. This means the big AI companies' primary income is investment in the company itself, and that means they constantly have to maintain a narrative that the really big breakthrough is just around the corner. For a long time, for instance, they all talked about "Artificial General Intelligence", and shares in OpenAI were sold on the promise that it would be the first company to achieve this obscure phenomenon, in which chatbots would go from just being chatbots to somehow being "more" intelligent, able to do something other than just be a chatbot.
Something fairly meaningless, yet something people have invested billions in, and which several CEOs spoke about in earnest. But today everyone has dropped the idea.
This can largely happen because the media can't manage to ask critical questions, because they don't understand this dynamic. An example is my own father. He is a journalist, and a while ago he visited Silicon Valley along with some other journalists. He came back declaring that this AI "is going to change everything". I was not so convinced; to me he sounded more like someone who had found religion. He told me they had met "the guy who made Second Life". Philip Rosedale, as he is called, had a hit with the computer program Second Life back in the 2000s, where you could "live" in a virtual second world, practically speaking a kind of computer game. Second Life is characteristic of the kind of product this industry lives off, because it is fairly irrelevant today. It peaked during the 2000s, and although it actually surprised me to learn that the servers are still running to this day, on a weekday evening I could only find about ten people online, as far as I could make sense of the interface. So the guy who created Second Life has not built a company with lasting sales success. Despite my fairly anti-capitalist leanings, I have a certain respect for companies that create good workplaces and decent products, and that over the years keep customers and employees happy, but that is not the case here. Instead he created a game that was hyped for a fairly short period and made his money on that hype. The reason is that in Second Life you could buy things from the game's developer which you could then resell in a semi-open economy inside the game itself. In a way he was ahead of his time, for cosmetic items in computer games are now an enormous business. Many players therefore believed they were buying an investment object, because they thought Second Life would be the next big thing.
It is quite clear today that it did not become the next big thing, but enough people believed it for long enough that they spent a lot of money in its online shop. If you read Philip Rosedale's website today (that's the guy behind it, in case you'd forgotten), you can see that he has tried to repeat the trick and hit the next big thing, but without success; he has founded both a now-defunct VR start-up and an AI ditto.
To put it a little pointedly, everyone thinks a bit like him today. Everyone has been casino-brainwashed, because for many years now the really big money has been made through the stock market. The great dream is no longer to sit in an office corridor with dark mahogany panelling, politely greeting your employees and customers like Mads Skjern in Matador or Waage Sandø's patriarch Kaj Holger in Krøniken; that is, the old, conservative story of a business community that supports society. No, the dream of the modern businessman is to sell his stake in some start-up at the right moment, and not care how things go afterwards. Elon Musk may have grown wealthy selling cars, but he became the world's richest man on his shares.
First of all, this means that lying becomes a winning strategy. You don't depend on long-term relationships anyway (there is no long term at all), so you might as well lie. Tell some bullshit about your products: that your car can drive itself, that your chatbot is becoming self-aware, that the chatbot can cure cancer, or whatever. The markets react positively to that sort of thing, at least until the day they no longer do, when the bubble bursts. But if you just sell your shares before then, you needn't care.
What does this have to do with me? In a sense, nothing. It is not a big American company supplying the chatbot I am to use at work, because Trump (luckily for our public IT) has been an idiot, and the parts of the state that handle confidential information no longer dare use American chatbots. That is not purely good, though, because in practice my agency has chosen a double-up model, both paying for ChatGPT subscriptions for everyone and paying for smaller solutions for special tasks. But at the same time it has everything to do with me, because the mindset the big American companies have managed to breed also exists in Denmark.
As part of their strategy for attracting more investment in themselves, the American AI companies have built up a kind of FOMO. Even my father has swallowed the story that AI is going to change everything, and that the world faces a huge upheaval. That story comes in both a dystopian and a utopian version: AI will save the world and we will all live in super luxury, or some Terminator scenario in which AI kills us all. Both are equally idiotic, but what they share is that they are nourished, at least publicly, by virtually all the big players in the AI industry. It is an effective advertising strategy, because if AI is going to transform the world in a completely decisive way, then it is rational to invest your savings in the companies developing AI. It is even rational to invest more than the sober models of sensible investing suggest, for those models cannot price in the miracle artificial intelligence will bring to the world, for instance the aforementioned AGI.
In that way it closely resembles "Pascal's wager", a famous trick of reasoning formulated by the philosopher Blaise Pascal back in the 17th century. Pascal's wager holds that it is right to live as a Christian, for if God doesn't exist after all, you have merely lived a slightly duller life than you otherwise would have, for example by only being able to eat fish during Lent or abstaining from gay sex; whereas if God does in fact exist, you stand to earn an infinite reward in heaven. The point of the wager is that it is rational for you to give up rather little value by living as a Christian rather than an atheist in order to win an infinitely high value by getting into heaven. The analogy with AI is that it seems rational to give up relatively little value (your savings, which you were going to invest anyway) for the chance of winning an almost infinite value when AI becomes superintelligent or takes over the world or some such. That is why it makes sense to many people to invest disproportionately in AI. It also explains the apparent paradox of AI company bosses talking in all seriousness about their own product, AI, perhaps wiping out humanity.
In general, much of what surrounds AI is just marketing, and it makes you want to throw up that people don't get it. Take the very name "artificial intelligence". People ask, naively: What kind of intelligence is it, exactly? What does AI teach us about intelligence? Is AI conscious? And so on. These are idiotic questions, because "artificial intelligence" is quite obviously a performative utterance. Roughly speaking, sentences can be divided into declaratives and performatives. Declarative sentences affirm a fact, like "the Danish king is named Frederik". Performative sentences, by contrast, express a wish that something be true, as when King Frederik says, "God save Denmark." It is not a fact that the technology is intelligent; rather, by calling your product "artificial intelligence" you express a wish that others should see your chatbot as intelligent.
This narrative about AI creates stress among otherwise decent people. Middle managers everywhere read this crap on LinkedIn or in uncritical media, get stressed, and hastily launch an AI product in their own organization without quite having thought it through. And that is the whole point; it is a deliberate marketing strategy. In the public sector it has resulted in plans to "free up" (a terribly imprecise term) at least 10,000 public-sector positions, but potentially up to 100,000. Every middle manager in the entire public sector is now busy trying to cram AI products into everything, because they can work out that they will be forced to if they don't do it themselves. Right now AI looks like an offer you can't refuse.
A good example of why this is dangerous is the story of the world's allegedly richest man, Elon Musk, and his role in the American government with his DOGE programme. Its explicit purpose was to make the American state more efficient by using artificial intelligence, primarily chatbots. The programme threatened a long list of civil servants in the American state, and fired many of them too. But now the programme has been shut down again, and on the surface it was no success. If you look at statistics on spending in the American federal administration, it did not manage to bring spending down; if anything, spending appears to have risen under President Trump. Yet in reality it succeeded quite well, because it has had a major power-political consequence. Many American media outlets have described how civil servants were confused and feared for their jobs. At the same time, at every level of power in the US, rights and principles are being violated. Trump and his cronies break the law constantly, and they get away with it because there is often nobody in the civil service who says no.
It is of course on a completely different scale, but I feel a certain sting of the same fear and insecurity. That is my problem, of course, but then again, I'm the one writing this blog. At the same time, I would argue it is a problem for society. One of the big findings of the so-called Magtudredningen 2.0, a study of the conditions of democracy in Denmark, was that things stand worse than in the first study from around the turn of the millennium, and one of the reasons is that the civil service has been weakened relative to the politicians, especially those in government. Introducing AI pushes that development even faster in the wrong direction. In the end it harms our rights as citizens: we get worse services and a system less able to safeguard our rights. The winners are the political elite, whose power goes less challenged, and capital, which stands to make money and whose position of power relative to workers is improved.
Thanks for reading. It dawns on me that I have leapt into a new genre: political analysis. It did turn out a bit too long and too messy this time, though. I think that's because I think a lot about AI, and quite obviously am not done thinking about it. I write one text a month, that is my dogma, and then it damn well won't be equally good every time. But I have a very clear idea for next month, and I can promise it will be non-artificially intelligent!
from
Rippple's Blog

Stay entertained thanks to our Weekly Tracker giving you next week's Anticipated Movies & Shows, Most Watched & Returning Favorites, and Shows Changes & Popular Trailers.
Mercy (=)
Shelter (new)
28 Years Later: The Bone Temple (=)
The Housemaid (-2)
Marty Supreme (-1)
The Bluff (new)
Zootopia 2 (-2)
Predator: Badlands (-1)
One Mile: Chapter One (new)
The Wrecking Crew (-4)
A Knight of the Seven Kingdoms (=)
The Pitt (=)
The Rookie (=)
Bridgerton (new)
Paradise (new)
Shrinking (-2)
The Night Agent (+1)
Hijack (-2)
Star Trek: Starfleet Academy (-2)
Fallout (-5)

Hi, I'm Kevin 👋. I make apps and I love watching movies and TV shows. If you like what I'm doing, you can buy one of my apps, download and subscribe to Rippple for Trakt, or just buy me a ko-fi ☕️.
from
hex_m_hell
The Fear does not stalk its prey with cunning and stealth, as the great cat. Nor does it hunt in packs like dogs. Nor does it spy its prey from a distance and loose arrows by surprise like man, though it is summoned by one who was once a man.
It does not rely on speed, nor silence, nor endurance, nor planning, for it is something else entirely.
The Fear comes with a bellowing roar and a fearsome visage it seeks not to hide, but calls attention to more and more as it moves closer. For it does not come for flesh, but catches the eyes of its victim and feasts on the terror.
When eyes are locked, in a fatal trap, mesmerized in pools of fire, it creeps ever so slightly closer. It grows louder and louder with each slow step. Often a victim could turn and run, could escape, if they could only break the gaze. For so long as any look into the eyes of this terrible creature, no movement is possible.
The Fear does not only hunt the solitary, but may be set on a village or town. Consuming its victims one by one, draining them to collapse, its power over successive victims grows stronger as they watch its slow horror.
Many will fall to their knees at the din and the fury hoping to beg themselves free, finding themselves saved for later as they feed the great monster their neighbors.
But those who know the Fear, who understand it, can escape those eyes of growing fire and raise a spear, or fork, or quiver of three arrows, and march towards it. Those who do may break its spell, that others may too rise and give chase.
And when those spears and forks find their mark, and when those arrows land, the wise who stood together find the cursed creature, manifestation of terror, was, all along, only a phantasm of light and shadow.
from
Ira Cogan
The weather was nice yesterday, so I took a walk along the East River on the Brooklyn side in Williamsburg. I then had a drink at The Turkey's Nest.
from 下川友
There's a girl I once fell for at first sight. She was wandering around near the entrance of a Don Quijote store, wearing dinosaur slippers. Her hair was a brown bob, and I couldn't really see her face, but just the overall impression of her was enough to make me fall.
Wanting to see her again, I walked around that area a few times, but in the end we never crossed paths. Then again, even if I did run into her again, I have a feeling I wouldn't recognize her unless she was wearing the dinosaur slippers.
What drew me in was the way she moved. The sight of her shuffling along in little steps to match the slippers was unbearably cute. I think that alone was enough to make me fall for her.
Even so, I still want to see her. So I asked a friend who works at a merchandise maker to teach me how slippers are made. I figured that if I made dinosaur slippers and sold them online, she might react to them on social media.
"She already has a dinosaur, so wouldn't a different animal be better?"
Some might say so. But I believe she's someone with slightly unusual tastes.
So I thought she might buy one more pair, a different dinosaur, and line them up side by side.
If she turned out to be that kind of girl, I'd be happy.
I finished the design, delivered it to the maker, had 100 pairs of dinosaur slippers made, and put them up for sale online. Apparently they caught on in some corner of the internet, and 96 pairs sold within a week.
But nowhere could I find a post that seemed to be from her.
The slippers she'd been wearing were green dinosaurs, designed so that the toes formed the dinosaur's mouth. Later my friend told me, "A dinosaur's mouth usually opens upward, so wasn't that a crocodile?" But I couldn't have cared less. To me, those were unmistakably "dinosaur slippers."
In the end, she never showed up on social media. I got no leads at all.
"There are two samples left, so let's wear them together." So said my friend, and the two of us decided to wear matching dinosaur slippers.
from targetedjaidee
Sleepless nights.
I've noticed that I have been waking up more in the early mornings/late nights. I wonder sometimes if I should up the dosage on my medication with my doctor or not. Something to think about.
How are you? How did you sleep? I am feeling somewhat better about my situation. How have you been, being a TI? I really want to connect with others like myself. It has been very difficult to find a group of people that I can trust and a group of people where I can share my experiences without being judged or laughed at.
I am hoping this space is not like that. And even if it becomes a place where gangstalkers appear and start doing their thing: f*ck off, mate. Fr. I actually woke up this morning wanting to incorporate God's word in my posts. I am simply sharing His word, His promises, and what helps me. If it offends some of you, I do apologize (not my intention). Also, negativity is NOT helpful; if you don't have anything nice to say, simply don't say it.
I wanted to ask some of my fellow TIs if they have experienced electronic harassment/monitoring lately? I have been. I had an app randomly show up on my phone yesterday, downloaded and everything, ready for use (lol). I screenshotted it and then deleted the app; however, it's the invasion of privacy that I cannot stand. It makes me feel as though I cannot escape these people, nor can I figure out how to feel safe within my own space. I have mentioned that this is the largest form of human trafficking in today's world, simply because they can see, hear, and know what is happening in my life.
I wanted to figure out who my handler is. I have mentioned that it could be my spouse or their ex. It is the nature in which things went down/have been going down. I wanted to mention something else that has happened to me over the course of years: every single medical professional I have been seen by, or met, or has treated me for something, suddenly retires after having me as a patient, or they change practices. Without letting me know, they just leave. Isn't that interesting? I find it very interesting. I personally think that they botch my treatment plans & then get paid to leave me alone. Quite literally I have been seen by over 20 different kinds of physicians in the last 6 years, all of them retired/left.
Partly why I think I am being trafficked is because of the access to medical treatment that I have been experiencing. I have veteran insurance (grateful for this), & because I don't really have the resources to pay my copayments all the time (due to all this), I find it difficult to meet with a medical professional and KEEP them as a PCP or whatever. I got a bill from a medical professional's office yesterday in the mail & I never went to see them. I had made an appointment, yes, but I never met with them. And the nurses never got back to me either to send some information. I have noticed that everywhere I go for treatment, the office “cannot verify my benefits”. Also, something I have noted. So basically, they are making me rack up medical debt so I cannot continue to meet with the doctor and receive the healthcare I need. I am very much aware of what is happening to me. They know that I know that they know that I know (lol).
I am going to share a verse this morning that God has placed on my heart to share with you all:
“When my mother & father forsake me, the LORD will pick me up.” Psalm 27:10
I share this verse to give other TIs hope. Whatever you believe in, your higher power will pick you up. I have learned this to be true, especially since my own mother and father DID this exact thing. I think it is very interesting, honestly. People will flip on anyone to make sure their pockets are full.
By God's grace, I am saved today. I am thriving, feeling alive and well. Nothing will stop me, except negativity. That always gets my energy levels down. This entire experience has propelled me to go back to school to become a paralegal. I would like to get my foot in the door somehow by working at a law firm dealing with the kinds of things I have experienced/have been experiencing. I WILL make it. And I know that is something that pisses these mfs off: I just do not give up & I stay positive.
I hope you all have a blessed morning. Remember: you are amazing, beautiful, and blessed. Don't let these animals win by any means. Stay aware of your surroundings, stay hydrated, & know that we will come out the other side of this.
Jaide owwt*
from An Open Letter
I went to a golf range today for the first time! It turns out there’s one really close to my new house, and I went with a mixture of old friends and new friends. Last night I also went out clubbing, and while it wasn’t exactly the greatest experience, I still went, which is really nice. I just got home and it’s about 2 AM, because we stayed up playing board games together and it was really fun. I’ve never even been a board game person until recently, and honestly I really fucking enjoy it.
I wanted to start off this post by saying at least something not about the breakup, even though it kind of is in a way. I did have a couple different moments where I essentially just broke down into tears. But also, I think for the first day, I woke up and my first thought was not of her. I spent a lot of time thinking about how it felt like there were two versions of her in my head. One of them was the one that was not exactly the ideal partner for me, someone who also crossed a lot of boundaries and did a lot of hurt. That’s the version of her that I recognize was not a good relationship for me. I’m very thankful to that person for letting me make my mistakes, showing me my struggles, and ultimately making the decision for me, which made sure I didn’t drag the situation on longer than it should have been. But there’s also the other version of her, the one that I felt safe with, the one that I remember in my arms, and the one that I remember all of these beautiful cherished memories with. And the difficult part is reconciling with the fact that both of those people are the same. It’s weird because it feels like I can’t hold both of those truths at the same time: I can either mourn the fact that I’ve lost this innocent, pure person who made me feel so incredibly safe, or I can mourn all of the bad things that happened and the incredibly difficult and painful portions of the entire process. But I can’t seem to recognize both of those at the same time in the same person. I talked with N for a while today, because I think he’s a very smart person, and he gave me some interesting thoughts on it. One is that I can rationalize negative behavior away, but I cannot do the same thing to positive memories. There’s no justification or understanding I need in order to recognize how much I appreciated and really savored certain moments. I don’t need to be convinced to accept or even want those moments, because I already do inherently.
And so I think the problem becomes that I can intellectualize away my grief in a way, but I also cannot help but face my grief without being able to intellectualize it. And I guess what I kind of realized while driving home is that the key element I’m missing is just time. I think that’s the short answer, and the longer answer is understanding and embracing the fact that as time goes on, I will recognize that my life does not necessarily get worse. There will be a lot of things that I will miss, of course, but there will be a lot of things that I also do not miss. Life has a way of filling in these vacuums, and if I allow it to, it really does become something beautiful. Sometimes I just have to remind myself that it really can be that simple. A beautiful thing about free will is the ability to just try different things with relatively no consequences. There’s no real consequence in any meaningful way to going into a new social situation, or to trying to socialize in a group where I felt irrationally unsafe. For example, I’m kind of afraid of men, and that does put me off socializing in a couple different avenues. But today I went. And I had a great time. And maybe there are a couple other hobbies that I’m afraid of, or that I’ve tried without being wildly successful, but I can always go back and do them again, and I think I will be surprised by the success that I see. I think a lot about that one: how there is a life that I’ve always wanted, and I will make it mine.
I have always thought that lizards are special. Many people probably think so. A few years ago, walking with my wife, we saw two of them cross a wall from one side to the other in a park near a river. They were green and blue; we watched them shoot past like arrows, and even so they had time to look at us. Though this is something subjective, we both agreed that they expressed sympathy toward us.
I also have an indelible memory from my early childhood. One day I found a lizard on the wall of my bedroom, facing my bed. It looked at me and I was able to hold its gaze, which was tiny but sharp. It looked me over from top to bottom and told me: "You're a good boy. You seem intelligent too." Although I could understand it, I didn't risk greeting it back, because my Lizardish pronunciation is poor. But I did smile, and I set out a glass of water and a cookie for it. It knew they were for him. I know it drank the water when I turned off the light, and it may have nibbled at the cookie. It must have eaten other things, since it came and went from the yard. Nobody else saw it. Once we had settled into the kind of trust that closeness brings, we had brief conversations in Spanish. I would say, for example, "Hide, someone's coming," and it would, though only at the very last minute. It was daring, skilled at appearing and disappearing. It may have known how to camouflage itself.
Before long I stopped seeing it. I wished it the best. And if you're reading me: don't forget that you are part of my most beautiful memories.
from folgepaula
What is worth knowing
When he stepped into the room, there was a brief hesitation in him, a pause, as though he were listening for something only he could hear. I happened to look around the table of five, and a smile slipped out of me before I could think. That was enough. He chose his place beside me as if the decision had already been written in him. I did not expect anything from the evening, yet I felt a quiet warmth bloom in me as I watched him. His gentleness toward me wasn’t lost on anyone; it moved through the group like a small, unspoken ripple. He didn’t dress his words in any sort of ceremony; before long he leaned in slightly and said, – “I hope, from now on, we’ll always be of one mind.” I nodded and laughed with a soft “Of course,” as if offering the usual joke that would normally preface any of our conversations, while pouring a suspicious amount of olive oil over my pizza, my eyes fixed anywhere but on his. He continued, soft but intentional: – “I was hoping tonight might bring us closer. That the evening wouldn’t pass without letting us enter into something more of a... real conversation.” I kept staring at my plate, eventually turning my face in his direction: – “Well, I don't know why I would speak of something not authentic. It felt to me as though I was served just enough to stay quiet. And as much as I don’t believe anyone would be foolish enough to provoke things merely for his own amusement, had I known all of this sooner, my estimation of such a person might have been of... I don't know, a menace.” – “A menace,” he laughed. “Oh, I love that word.
How hard it is to be believed in some cases, and how impossible in others.” He went on: “And although, in this case, I'd be the last one advocating for another man, I believe this is typical of someone who feels more than he acknowledges.” I shifted, angling my body subtly toward him, my gaze lingering on his eyes, a full sentence: – “Why are you saying that to me now?” – “Because that's been my lesson since I've known you, and by now I can tell you only what is worth knowing.”
/2025
Insphex adds the Hexdump item to the File Browser menu to view the hex dump of the selected files. The initial implementation called the public API for adding commands at the top level of the menu.
Later, to move the item into the See submenu that groups various file viewing commands, I resorted to list surgery, as the API doesn't support submenus. The problem is that internal system details can and do change, which is exactly what happened to the File Browser menu and led to an Insphex load error.
I fixed the issue by reverting the public API call and now the item is back at the top level of the menu.
Insphex is a hex dump tool similar to the Linux command hexdump. I wrote it in Common Lisp on Medley Interlisp.
#insphex #CommonLisp #Interlisp #Lisp
from targetedjaidee
I think the hardest part of realizing that I am a TI is the fact that I cannot trust my “family”. My parents have created falsified documentation to utilize against me. It is very interesting to watch all of this unfold. I wonder why or how my blood decided to turn their backs on me and publicly humiliate me.
My spouse's ex is an interesting individual. I met my spouse in 2020, and I quickly learned that this person (the ex) would stop at nothing to harm those that go against their views. Let me explain:
2020 was a strange year for everyone, I believe. However, being that I met my spouse that year, I was in heaven. We clicked instantly & made plans for a future early on. As 2020 progressed, I learned my spouse had a past; they were honest about it. Well, I looked past it. I did not care, it had nothing to do with their present, and I was very optimistic (still am). This did not sit well with their ex-spouse. The fact that my spouse had moved on drove the ex to actually reach out to me and try to convince me of a story that I just FELT in my gut was not right. Something about their version of events just made me feel like it was made up, or at best, not entirely true.
When this person reached out to me to impose their side of this “story”, I simply stated that I did not want to know, nor did it concern me. I then went on to tell them that I was late for work and would reach out to them later in the day (little white lie). Well? Right before I blocked this individual, I got hit with a Pink Floyd lyric type of response, “I am just tending to the dead roses in my garden”. (-_–) Be that as it may, still not my problem.
Following this incident came a series of what I now know to be “infiltrations” of my circle; this person reached out to my siblings & sent police reports, background checks, etc. This obviously scared my family. They started to dislike my partner. Around mid-2020, this individual then proceeded to call CPS on my spouse and me, and because of their attempt at getting me away from my children, I had to be supervised with them for 72 hours. I was clean at the time, so it did not make sense.
Eventually, this person took to social media to openly express hatred & obsessive behaviors towards my spouse & me. Constant, and I mean constant, posts about what a “deadbeat parent” my spouse was, how much they disdained our love connection, pretty much creating an atmosphere of defamation and slander around us. These actions drove my spouse and me to shut down social media for 4 years. Off the map, things still continued. This person had infiltrated our circle of “friends” for information and exhibited stalker-like behavior. In 2024, once my spouse and I decided to get back on the map, things only escalated. Through the platform known as “Facebook”, we learned that we had a “mole” in the friends list. Every time I or my spouse would post, it would end up as a screenshot of the post with the ex posting their own thoughts & opinions about our life together above said screenshot.
See, this wasn't envy or jealousy. This was pure coveting of my life. I had my boys, hardworking wealthy parents, and a wonderful marriage. I had purchased multiple properties within two years, and it pissed this person OFF. I mean it pissed them off enough to recruit what are called “flying monkeys” to do their bidding in the smear campaign and stalking against us. I came to learn in 2025 the level of disdain and absolute obsession this individual(s) had/have over me.
Remember in my previous post, I mentioned that our neighborhood had been in on our gangstalking? Well, EVERYONE I had ever come into contact with basically sold their effing souls for a quick buck in order to hurt me. The entire neighborhood, gas station workers, employees of MAJOR companies, joined in on gangstalking me. Former acquaintances started to fall off slowly.
That is how this works: they infiltrate your circles, learn about your insecurities, they study how you move, what you eat and listen to, who you talk to, what you watch, literally create a profile against you to then share false narratives about you & make people feel like they are doing the American thing by defending the community from you...but you have not done anything. They blacklist you. Try and get help? They won't help you and they will pay randos to harass you. They will pay anyone and everyone to make sure you stay under their thumb.
They hack your devices. Yes, they remotely access your electronics and stalk you through there. They will try and make you feel insecure & afraid, but I believe that there are some parameters in place so that we don't physically get hurt. The idea though is to make you afraid; that's one of the objectives I believe.
Well, back to the ex. I recently (within the last 90 days) posted on my story on the same platform, the actual police report from this incident they claimed happened so many years ago. And since that day, they have been absolutely silent, watching my every move online, along with their flying monkeys. SO many fake profiles pop up on my “people you may know” and I just screenshot it and save it for my case.
I mentioned that I posted in a TI group recently; I posted a public record photo, without doxxing information of the person I was posting about, and again, this was a public forum for TIs. Well, within about 2 hours, my post makes it back to one of my gangstalkers. They not only JOINED the TI group, they actually decided to slander & defame me online some more. So getting paid to f*ck with me wasn't enough, they had to attack me online. (lol)
I cannot make sh*t up, man. We have ZERO friends in common, so why would “people be sending you MY posts”. (-_–) The main goal of my gangstalking is to make me feel less than, like I would imagine other TIs experience. I still remember the moments when my spouse was actively making me feel like I was unwanted. Fr.
So? Now, like I mentioned, I do not care if they want to be with me or not, or if I am “unwanted” by anyone at that. Why? Because these people want to break me down. Oh yeah, “break down” was one of the little phrases that these people used on me to literally break down my psyche. Hyper aware of these two words. insert eye roll.
In another post I will post my experiences with petty thugs & drg dealers that were squatting at my place for some time. Also compensated to fck with me and my psyche. I cannot make this stuff up guys, this is literally my reality.
Again, I hope normies learn that this program exists and that they too should be aware of any “weird synchronicities” in their own lives. And most importantly: Fellow TIs? I hear you. I see you. I believe in you and your stories. Keep speaking up and DO NOT give up. We have got this & will come out the other side. I just know it.
Jaide owwt*
from
comfyquiet
People of Orphalese, Beauty is life when life unveils her holy face. But you are life and you are the veil. Beauty is eternity gazing at itself in a mirror.
But you are eternity and you are the mirror.
from
comfyquiet
Clear Is Kind. Unclear Is Unkind.
From article: https://brenebrown.com/articles/2018/10/15/clear-is-kind-unclear-is-unkind/ And book: Working Backwards by Tim Duggan
from
SmarterArticles

Here is a troubling scenario that plays out more often than scientists would like to admit: a research team publishes findings claiming 95 per cent confidence that air pollution exposure reduces birth weights in a particular region. Policymakers cite the study. Regulations follow. Years later, follow-up research reveals the original confidence interval was fundamentally flawed, not because the researchers made an error, but because the statistical methods they relied upon were never designed for the kind of data they were analysing.
This is not a hypothetical situation. It is a systemic problem affecting environmental science, epidemiology, economics, and climate research. When data points are spread across geographic space rather than collected independently, the mathematical assumptions underlying conventional confidence intervals break down in ways that can render those intervals meaningless. The gap between what statistics promise and what they actually deliver has remained largely invisible to policymakers and the public, hidden behind technical language and the presumed authority of numerical precision.
A team of researchers at the Massachusetts Institute of Technology has now developed a statistical method that directly confronts this problem. Their approach, published at the Conference on Neural Information Processing Systems in 2025 under the title “Smooth Sailing: Lipschitz-Driven Uncertainty Quantification for Spatial Association,” offers a fundamentally different way of thinking about uncertainty when analysing spatially dependent data. The implications extend far beyond academic statistics journals; they touch on everything from how we regulate industrial pollution to how we predict climate change impacts to how we rebuild public trust in scientific findings.
The foundation of modern statistical inference rests on assumptions about independence. When you flip a coin one hundred times, each flip does not influence the next. When you survey a thousand randomly selected individuals about their voting preferences, one person's response (in theory) does not affect another's. These assumptions allow statisticians to calculate confidence intervals that accurately reflect the uncertainty in their estimates.
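The machinery being described is compact. As a minimal sketch (plain Python, with simulated coin-flip data invented for the example), this is the textbook 95 per cent interval whose validity rests entirely on independence:

```python
import math
import random

def mean_ci_95(xs):
    """Textbook 95% confidence interval for a mean; its validity
    rests entirely on the observations being independent."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return m - half, m + half

random.seed(0)
flips = [random.randint(0, 1) for _ in range(100)]  # 100 independent coin flips
lo, hi = mean_ci_95(flips)
print(f"estimated P(heads): [{lo:.3f}, {hi:.3f}]")
```

With a hundred independent flips the interval lands comfortably around one half; the next sections describe what happens when the independence behind that `1.96 * sqrt(var / n)` term quietly fails.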
The mathematical elegance of these methods has driven their adoption across virtually every scientific discipline. Researchers can plug their data into standard software packages and receive confidence intervals that appear to quantify exactly how certain they should be about their findings. The 95 per cent confidence interval has become a ubiquitous fixture of scientific communication, appearing in everything from pharmaceutical trials to climate projections to economic forecasts.
But what happens when your data points are measurements of air quality taken from sensors scattered across a metropolitan area? Or unemployment rates in neighbouring counties? Or temperature readings from weather stations positioned along a coastline? In these cases, the assumption of independence collapses. Air pollution in one city block is correlated with pollution in adjacent blocks. Economic conditions in Leeds affect conditions in Bradford. Weather patterns in Brighton influence readings in Worthing.
Waldo Tobler, a cartographer and geographer working at the University of Michigan, articulated this principle in 1970 with what became known as the First Law of Geography: “Everything is related to everything else, but near things are more related than distant things.” This observation, rooted in common sense about how the physical world operates, poses a profound challenge to statistical methods built on the assumption that observations are independent.
The implications of Tobler's Law extend far beyond academic geography. When a researcher collects data from locations scattered across a landscape, those observations are not independent samples from some abstract distribution. They are measurements of a spatially continuous phenomenon, and their values depend on their locations. A temperature reading in Oxford tells you something about the temperature in Reading. A housing price in Islington correlates with prices in neighbouring Hackney. An infection rate in one postal code relates to rates in adjacent areas.
Tamara Broderick, an associate professor in MIT's Department of Electrical Engineering and Computer Science, a member of the Laboratory for Information and Decision Systems, an affiliate of the Computer Science and Artificial Intelligence Laboratory, and senior author of the new research, explains the problem in concrete terms. “Existing methods often generate confidence intervals that are completely wrong,” she says. “A model might say it is 95 per cent confident its estimation captures the true relationship between tree cover and elevation, when it didn't capture that relationship at all.”
The consequences are not merely academic. Ignoring spatial autocorrelation, as researchers from multiple institutions have documented, leads to what statisticians call “narrowed confidence intervals,” meaning that studies appear more certain of their findings than they should be. This overconfidence can cascade through scientific literature and into public policy, creating a false sense of security about findings that may not withstand scrutiny.
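The overconfidence is easy to reproduce. The sketch below (plain Python, all numbers invented) generates data along a transect where neighbouring sites are strongly correlated, with a true mean of zero, then checks how often the textbook i.i.d. 95 per cent interval actually contains the truth:

```python
import math
import random

def naive_ci_covers_zero(xs):
    """True if the textbook i.i.d. 95% interval for the mean
    contains the true value (0 by construction below)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    return abs(m) <= 1.96 * math.sqrt(var / n)

random.seed(1)
RHO = 0.9          # neighbouring sites are strongly correlated
trials = 400
hits = 0
for _ in range(trials):
    x, xs = 0.0, []
    for _ in range(100):  # 100 sites along a transect; true mean is 0
        x = RHO * x + random.gauss(0.0, 1.0)
        xs.append(x)
    hits += naive_ci_covers_zero(xs)
coverage = hits / trials
print(f"nominal coverage 95%, actual coverage {coverage:.0%}")
```

Because the effective number of independent observations is far smaller than 100, the “95 per cent” interval covers the truth well under half the time, exactly the narrowing the researchers describe.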
The MIT research team, which included postdoctoral researcher David R. Burt, graduate student Renato Berlinghieri, and assistant professor Stephen Bates alongside Broderick, identified three specific assumptions that conventional confidence interval methods rely upon, all of which fail in spatial contexts.
David Burt, who received his PhD from Cambridge University where he studied under Professor Carl Rasmussen, brought expertise in Bayesian nonparametrics and approximate inference to the project. His background in Gaussian processes and variational inference proved essential in developing the theoretical foundations of the new approach.
The first assumption is that source data is independent and identically distributed. This implies that the probability of including one location in a dataset has no bearing on whether another location is included. But consider how the United States Environmental Protection Agency positions its air quality monitoring sensors. These sensors are not scattered randomly; they are placed strategically, with the locations of existing sensors influencing where new ones are deployed. Urban areas receive denser coverage than rural regions. Industrial zones receive more attention than residential areas. The national air monitoring system, according to a Government Accountability Office report, has limited monitoring at local scales and in rural areas.
Research published in GeoHealth in 2023 documented systematic biases in crowdsourced air quality monitoring networks such as PurpleAir and OpenAQ. While these platforms aim to democratise pollution monitoring, their sensor locations suffer from what the researchers termed “systematic racial and income biases.” Sensors tend to be deployed in predominantly white areas with higher incomes and education levels compared to census tracts with official EPA monitors. Areas with higher densities of low-cost sensors tend to report lower annual average PM2.5 concentrations than EPA monitors in all states except California, suggesting that the networks are systematically missing the most polluted areas where vulnerable populations often reside. This is not merely an equity concern; it represents a fundamental violation of the independence assumption that undermines any confidence intervals calculated from such data.
The second assumption is that the statistical model being used is perfectly correct. This assumption, the MIT team notes, is never true in practice. Real-world relationships between variables are complex, often nonlinear, and shaped by factors that may not be included in any given model. When researchers study the relationship between air pollution and birth weight, they are working with simplified representations of extraordinarily complex biological and environmental processes. The true relationship involves genetics, maternal health, nutrition, stress, access to healthcare, and countless other factors that interact in ways no model can fully capture.
The third assumption is that source data (used to build the model) is similar to target data (where predictions are made). In non-spatial contexts, this can be a reasonable approximation. But in geographic analyses, the source and target data may be fundamentally different precisely because they exist in different locations. A model trained on air quality data from Manchester may perform poorly when applied to conditions in rural Cumbria, not because of any methodological error, but because the spatial characteristics of these regions differ substantially. Urban canyons trap pollution differently than open farmland; coastal areas experience wind patterns unlike inland valleys; industrial corridors have emission profiles unlike residential suburbs.
The MIT researchers frame this as a problem of “nonrandom location shift.” Training data and target locations differ systematically, and this difference introduces bias that conventional methods cannot detect or correct. The bias is not random noise that averages out; it is systematic error that compounds across analyses.
The MIT team's solution involves replacing these problematic assumptions with a different one: spatial smoothness, mathematically formalised through what is known as Lipschitz continuity.
The concept draws on work by the nineteenth-century German mathematician Rudolf Lipschitz. A function is Lipschitz continuous if there exists some constant that bounds how quickly the function can change. In plain terms, small changes in input cannot produce dramatically large changes in output. The function is “smooth” in the sense that it cannot jump erratically from one value to another. This property, seemingly abstract, turns out to capture something fundamental about how many real-world phenomena behave across space.
Applied to spatial data, this assumption translates to a straightforward claim: variables tend to change gradually across geographic space rather than abruptly. Air pollution levels on one city block are unlikely to differ dramatically from levels on the adjacent block. Instead, pollution concentrations taper off as one moves away from sources. Soil composition shifts gradually across a landscape. Temperature varies smoothly along a coastline. Rainfall amounts change progressively from one microclimate to another.
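The Lipschitz condition says there is some constant L with |f(x) − f(y)| ≤ L·|x − y| for every pair of locations. One thing that buys you, sketched below with invented readings, is that a handful of observations already pins down the smallest L consistent with the data:

```python
from itertools import combinations

def empirical_lipschitz(points):
    """Smallest Lipschitz constant consistent with observed
    (location, value) pairs: the steepest slope between any two."""
    return max(abs(fi - fj) / abs(xi - xj)
               for (xi, fi), (xj, fj) in combinations(points, 2)
               if xi != xj)

# hypothetical readings: (location in km, pollutant concentration)
readings = [(0.0, 10.0), (1.0, 12.0), (2.0, 11.0), (4.0, 15.0)]
L = empirical_lipschitz(readings)
print(f"data consistent with Lipschitz constant L = {L}")  # -> 2.0
```

A researcher with domain knowledge can also assert a larger L directly ("concentrations never change by more than so many units per kilometre"), which is how outside expertise enters the procedure.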
“For these types of problems, this spatial smoothness assumption is more appropriate,” Broderick explains. “It is a better match for what is actually going on in the data.”
This is not a claim that all spatial phenomena are smooth. Obvious exceptions exist: a factory fence separates clean air from polluted air; a river divides two distinct ecosystems; an administrative boundary marks different policy regimes; a geological fault line creates abrupt changes in soil composition. But for many applications, the smoothness assumption captures reality far better than the independence assumption it replaces. And critically, the Lipschitz framework allows researchers to quantify exactly how smooth they assume the data to be, incorporating domain knowledge into the statistical procedure.
The technical innovation involves decomposing the estimation error into two components. The first is a bias term that reflects the mismatch between where training data was collected and where predictions are being made. The method bounds this bias using what mathematicians call Wasserstein-1 distance, solved through linear programming. This captures the “transportation cost” of moving probability mass from source locations to target locations, providing a rigorous measure of how different the locations are. The second is a randomness term reflecting noise in the data, estimated through quadratic programming.
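In general the Wasserstein-1 bound requires a linear program, but in one dimension with equal-size samples the optimal transport pairs sorted points, so the distance collapses to a sort. A toy sketch with invented locations:

```python
def wasserstein1_1d(source, target):
    """Wasserstein-1 distance between two equal-size empirical
    distributions on a line: optimal transport matches sorted
    points, so the distance is the mean displacement after sorting."""
    assert len(source) == len(target)
    return sum(abs(s - t)
               for s, t in zip(sorted(source), sorted(target))) / len(source)

sensors = [0.0, 1.0, 2.0]      # where training data was collected (km)
predict_at = [1.0, 2.0, 3.0]   # where predictions are needed (km)
print(wasserstein1_1d(sensors, predict_at))  # every point shifts 1 km -> 1.0
```

The resulting number is the average distance the source locations must be "moved" to cover the target locations, which is precisely the quantity that scales the bias term.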
The final confidence interval combines these components in a way that accounts for unknown bias while maintaining the narrowest possible interval that remains valid across all feasible values of that bias. The mathematics are sophisticated, but the intuition is not: acknowledge that your data may not perfectly represent the locations you care about, quantify how bad that mismatch could be, and incorporate that uncertainty into your confidence interval.
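As a deliberately simplified sketch of that intuition (not the paper's exact construction, and with all numbers made up): if the unknown bias can be at most L times the transport distance between source and target locations, add that worst case to the usual noise term:

```python
import math

def widened_ci(estimate, se, lipschitz_L, w1_distance, z=1.96):
    """Schematic Lipschitz-driven interval: the usual z*se noise
    term, widened by a worst-case bias bound L * W1(source, target),
    so the interval stays valid for every feasible bias value."""
    half = lipschitz_L * w1_distance + z * se
    return estimate - half, estimate + half

# hypothetical: L = 2 units/km, locations differ by W1 = 1 km, se = 0.5
lo, hi = widened_ci(estimate=10.0, se=0.5, lipschitz_L=2.0, w1_distance=1.0)
print(f"interval [{lo:.2f}, {hi:.2f}]")  # -> [7.02, 12.98]
```

The interval is wider than the naive one, but honestly so: its extra width is exactly the price of admitting that the training locations are not the prediction locations.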
The approach also makes explicit something that conventional methods hide: the relationship between source data locations and target prediction locations. By requiring researchers to specify both, the method forces transparency about the inferential gap being bridged.
The MIT team validated their approach through simulations and experiments with real-world data. The results were striking in their demonstration of how badly conventional methods can fail.
In a single-covariate simulation comparing multiple methods for generating confidence intervals, only the proposed Lipschitz-driven approach and traditional Gaussian processes achieved the nominal 95 per cent coverage rate. Competing methods, including ordinary least squares with various standard error corrections such as heteroskedasticity-consistent estimators and clustered standard errors, achieved coverage rates ranging from zero to fifty per cent. In other words, methods that claimed 95 per cent confidence were wrong more than half the time. A 95 per cent confidence interval that achieves zero per cent coverage is not a confidence interval at all; it is a statistical artefact masquerading as quantified uncertainty.
A more challenging multi-covariate simulation involving ten thousand data points produced even starker results. Competing methods never exceeded thirty per cent coverage, while the Lipschitz-driven approach achieved one hundred per cent. The difference was not marginal; it was categorical. Methods that researchers routinely use and trust were failing catastrophically while the new approach succeeded completely.
The researchers also applied their method to real data on tree cover across the United States, analysing the relationship between tree cover and elevation. This application matters because understanding how environmental variables covary across landscapes informs everything from forest management to climate modelling to biodiversity conservation. Here again, the proposed method maintained the target 95 per cent coverage rate across multiple parameters, while alternatives produced coverage rates ranging from fifty-four to ninety-five per cent, with some failing entirely on certain parameters.
Importantly, the method remained reliable even when observational data contained random errors, a condition that accurately reflects real-world measurement challenges in environmental monitoring, epidemiology, and other fields. Sensors drift out of calibration; human observers make mistakes; instruments malfunction in harsh conditions. A method that fails under realistic measurement error would have limited practical value, however elegant its mathematical foundations.
While air quality monitoring provides a compelling example, the problems addressed by this research extend across virtually every domain that relies on geographically distributed data. The breadth of affected fields reveals how foundational this problem is to modern empirical science.
In epidemiology, spatial analyses are central to understanding disease patterns. Researchers use geographic data to study cancer clusters, track infectious disease spread, and investigate environmental health hazards. A 2016 study published in Environmental Health examined the relationship between air pollution and birth weight across Los Angeles County, using over nine hundred thousand birth records collected between 2001 and 2008. The researchers employed Bayesian hierarchical models to account for spatial variability in the effects, attempting to understand not just whether pollution affects birth weight on average but how that effect varies across different neighbourhoods. Even sophisticated approaches like these face the fundamental challenges the MIT team identified: models are inevitably misspecified, source and target locations differ, and observations are not independent.
The stakes in epidemiological research are particularly high. Studies examining links between highway proximity and dementia prevalence, air pollution and respiratory illness, and environmental exposures and childhood development all involve spatially correlated data. A study in Paris geocoded birth weight data to census block level, examining how effects differ by neighbourhood socioeconomic status and infant sex. Research in Kansas analysed over five hundred thousand births using spatiotemporal ensemble models at one kilometre resolution. When confidence intervals from such studies inform public health policy, the validity of those intervals matters enormously. If foundational studies overstate their certainty, policies may be based on relationships that are weaker or more variable than believed.
Economic modelling faces analogous challenges. Spatial econometrics, a field that emerged in the 1970s following work by Belgian economist Jean Paelinck, attempts to adapt econometric methods for geographic data. The field recognises that standard regression analyses can produce unstable parameter estimates and unreliable significance tests when they fail to account for spatial dependency. Researchers use these techniques to study regional economic resilience, the spatial distribution of wealth and poverty, and the effects of policy interventions that vary by location. The European Union relies on spatial economic analyses to allocate structural funds across member regions, attempting to reduce economic disparities between areas.
But as research published in Spatial Economic Analysis notes, ignoring spatial correlation can lead to “serious misspecification problems and inappropriate interpretation.” Models that fail to account for geographic dependencies may attribute effects to the wrong causes or estimate relationships with false precision. The finding that neighbouring regions tend to share economic characteristics, with high-growth areas clustered near other high-growth areas and low-growth areas similarly clustered, has profound implications for how economists model development and inequality.
Climate science faces perhaps the most consequential version of this challenge. Climate projections involve enormous spatial and temporal complexity, with multiple sources of uncertainty interacting across scales. A 2024 study published in Nature Communications examined how uncertainties from human systems (such as economic and energy models that project future emissions) combine with uncertainties from Earth systems (such as climate sensitivity and carbon cycle feedbacks) to affect temperature projections. The researchers found that uncertainty sources are not simply additive; they interact in ways that require integrated modelling approaches.
Current best estimates of equilibrium climate sensitivity, the amount of warming expected from a doubling of atmospheric carbon dioxide, range from approximately 2.5 to 4 degrees Celsius. This uncertainty has profound implications for policy, from carbon budgets to adaptation planning to the urgency of emissions reductions. Methods that improve uncertainty quantification for spatial data could help narrow these ranges or at least ensure that the stated uncertainty accurately reflects what is actually known and unknown. Climate models must work across spatial scales from global circulation patterns to regional impacts to local weather, each scale introducing its own sources of variability and uncertainty.
The timing of this methodological advance coincides with a broader crisis of confidence in scientific institutions. Data from the Pew Research Center shows that while trust in scientists remains higher than in many other institutions, it has declined since the Covid-19 pandemic. A 2024 survey of nearly ten thousand American adults found that seventy-four per cent had at least a fair amount of confidence in scientists, up slightly from seventy-three per cent the previous year but still below pre-pandemic levels.
A 2025 study surveying nearly seventy-two thousand people across sixty-eight countries, published by Cologna and colleagues, found that while seventy-eight per cent of respondents viewed scientists as competent, only forty-two per cent believed scientists listen to public concerns, and just fifty-seven per cent thought they communicate transparently. Scientists score high on expertise but lower on openness and responsiveness. This suggests that public scepticism is not primarily about competence but about communication and accountability.
More concerning are the partisan divides within individual countries. Research published in 2025 in Public Understanding of Science documented what the authors termed “historically unique” divergence in scientific trust among Americans. While scientists had traditionally enjoyed relatively stable cross-partisan confidence, recent years have seen that consensus fracture. The researchers found changes in patterns of general scientific trust emerging at the end of the Trump presidency, though it remains unclear whether these represent effects specific to that political moment or the product of decades-long processes of undermining scientific trust.
Part of this decline relates to how scientific uncertainty has been communicated and sometimes exploited. During the pandemic, policy recommendations evolved as evidence accumulated, a normal feature of science that nevertheless eroded public confidence when changes appeared inconsistent. Wear masks; do not wear masks; wear better masks. Stay six feet apart; distance matters less than ventilation. The virus spreads through droplets; actually, it spreads through aerosols. Each revision, scientifically appropriate as understanding improved, appeared to some observers as evidence of confusion or incompetence.
Uncertainty, properly acknowledged, can signal scientific honesty; poorly communicated, it becomes fodder for those who wish to dismiss inconvenient findings altogether. Research from PNAS Nexus in 2025 examined how uncertainty communication affects public trust, finding that effects depend heavily on whether the uncertainty aligns with recipients' prior beliefs. When uncertainty communication conflicts with existing beliefs, it can actually reduce trust. The implication is that scientists face a genuine dilemma: honest acknowledgement of uncertainty may undermine confidence in specific findings, yet false certainty ultimately damages the entire scientific enterprise when errors are eventually discovered.
The OECD Survey on Drivers of Trust in Public Institutions, published in 2024, found that only forty-one per cent of respondents believe governments use the best available evidence in decision making, and only thirty-nine per cent think communication about policy reforms is adequate. Evidence-based decision making is recognised as important for trust, but most people doubt it is actually happening.
Methods like the MIT approach offer a potential path forward. By producing confidence intervals that accurately reflect what is known and unknown, researchers can make claims that are more likely to withstand replication and scrutiny. Overstating certainty invites eventual correction; appropriately calibrated uncertainty builds durable credibility. When a study says it is 95 per cent confident, that claim should mean something.
The MIT research also connects to broader discussions about reproducibility in computational science. A 2020 article in the Harvard Data Science Review by Willis and Stodden examined seven reproducibility initiatives across political science, computer science, economics, statistics, and mathematics, documenting how “trust but verify” principles could be operationalised in practice.
The phrase “trust but verify,” borrowed from Cold War diplomacy, captures an emerging ethos in computational research. Scientists should be trusted to conduct research honestly, but their results should be independently verifiable. This requires sharing not just results but the data, code, and computational workflows that produced them. The National Academies of Science, Engineering, and Medicine defines reproducibility as “obtaining consistent results using the same input data, computational steps, methods, and code, and conditions of analysis.”
The replication crisis that emerged first in psychology has spread to other fields. A landmark 2015 study in Science by the Open Science Collaboration attempted to replicate one hundred psychology experiments and found that only thirty-six per cent of replications achieved statistically significant results, compared to ninety-seven per cent of original studies. Effect sizes in replications were, on average, half the magnitude of original effects. Nearly half of original effect sizes were outside the 95 per cent confidence intervals of the replication effect sizes, suggesting that the original intervals were systematically too narrow.
The problem is not limited to any single discipline. Mainstream biomedical and behavioural sciences face failure-to-replicate rates near fifty per cent. A 2016 survey of over fifteen hundred researchers published in Nature found that more than half believed science was facing a replication crisis. Contributing factors include publication bias toward positive results, small sample sizes, analytical flexibility that allows researchers to find patterns in noise, and, critically, statistical methods that overstate certainty.
Confidence intervals play a central role in this dynamic. As critics have noted, the “inadequate use of p-values and confidence intervals has severely compromised the credibility of science.” Intervals that appear precise but fail to account for data dependencies, model misspecification, or other sources of uncertainty generate findings that seem robust but cannot withstand replication attempts. A coalition of seventy-two methodologists has proposed reforms including using metrics beyond p-values, reporting effect sizes consistently, and calculating prediction intervals for replication studies.
The MIT method addresses one specific source of such failures. By providing confidence intervals that remain valid under conditions that actually occur in spatial analyses, rather than idealised conditions that rarely exist, the approach reduces the gap between claimed and actual certainty. This is not a complete solution to the reproducibility crisis, but it removes one barrier to credible inference.
Implementing the Lipschitz-driven approach requires researchers to specify a smoothness parameter, essentially a judgement about how rapidly the variable of interest can change across space. This introduces a form of subjectivity that some may find uncomfortable. The method demands that researchers make explicit an assumption that other methods leave implicit and, in practice, often violate.
In their tree cover analysis, the MIT team selected a Lipschitz constant implying that tree cover could change by no more than one percentage point per five kilometres. They arrived at this figure by balancing knowledge of uniform regions, where tree cover remains stable over large distances, against areas where elevation-driven transitions produce sharper gradients. Ablation studies showed that coverage remained robust across roughly one order of magnitude of variation in this parameter, providing some assurance that precise specification is not critical. Getting the constant approximately right matters; getting it exactly right does not.
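The role of the constant is easy to illustrate. If tree cover can change by at most K percentage points per kilometre, then a value carried from a monitored site to a location d kilometres away can be wrong by at most K × d. The sketch below applies that bound to hypothetical one-dimensional coordinates; it is a back-of-envelope version of the idea, not the paper's procedure.

```python
import numpy as np

K = 0.2  # Lipschitz constant: at most 0.2 percentage points of tree cover
         # per km, i.e. one point per five kilometres as in the MIT example

sources = np.array([0.0, 10.0])        # monitored locations (km, hypothetical)
targets = np.array([2.0, 4.0, 12.0])   # locations we want to infer about

# Worst-case error from predicting each target with its nearest source:
nearest = np.abs(targets[:, None] - sources[None, :]).min(axis=1)
per_target_bound = K * nearest
print("per-target bias bounds (pp):", per_target_bound)   # 0.4, 0.8, 0.4
print("bound on average bias (pp): ", per_target_bound.mean())
```

Doubling or halving K simply rescales these bounds, which is consistent with the ablation finding that coverage is robust to the constant's rough order of magnitude rather than its exact value.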
Nevertheless, the requirement for domain expertise represents a shift from purely data-driven approaches. Researchers must bring substantive knowledge to bear on their statistical choices, a feature that some may view as a limitation and others as an appropriate integration of scientific judgement with mathematical technique. The alternative, methods that make implicit assumptions about smoothness or ignore the problem entirely, is not actually more objective; it simply hides the assumptions being made.
The method also requires computational resources, though the authors have released open-source code through GitHub that implements their approach. The linear programming for bias bounds and quadratic programming for variance estimation can handle datasets of reasonable size on standard computing infrastructure. As with many advances in statistical methodology, adoption will depend partly on accessibility and ease of use.
For policymakers who rely on scientific research to inform decisions, these methodological advances have practical implications that extend beyond academic statistics.
Environmental regulations often rely on exposure-response relationships derived from epidemiological studies. Air quality standards, for instance, are based on evidence linking pollution concentrations to health outcomes. If confidence intervals from foundational studies are too narrow, the resulting regulations may be based on false certainty. A standard that appears well-supported by evidence may rest on studies whose confidence intervals were systematically wrong. Conversely, if uncertainty is properly quantified, regulators can make more informed decisions about acceptable risk levels and safety margins.
Climate policy depends heavily on projections that involve spatial and temporal uncertainty. The Paris Agreement's goal of limiting warming to 1.5 degrees Celsius above pre-industrial levels rests on scientific estimates of carbon budgets and climate sensitivity. Better uncertainty quantification could inform how much margin policymakers should build into their targets. If we are less certain about climate sensitivity than our confidence intervals suggest, that argues for more aggressive emissions reductions, not less.
Public health interventions targeting environmental exposures, from lead remediation to air quality standards to drinking water regulations, similarly depend on studies that correctly characterise what is known and unknown. A systematic review of air pollution epidemiology published in Environmental Health Perspectives noted that “the quality of exposure data has been regarded as the Achilles heel of environmental epidemiology.” Methods that better account for spatial dependencies in exposure assessment could strengthen the evidence base for protective policies.
The MIT research represents one contribution to a broader effort to improve the reliability of scientific inference. It does not solve all problems with confidence intervals, nor does it address other sources of the reproducibility crisis, from publication bias to inadequate sample sizes to analytical flexibility. But it does solve a specific, important problem that has long been recognised but inadequately addressed.
When data varies across space, conventional statistical methods produce confidence intervals that can be, in the researchers' words, “completely wrong.” Methods that claim 95 per cent coverage achieve zero per cent. Methods designed for independent data are applied to dependent data, producing precise-looking numbers that mean nothing. The new approach produces intervals that remain valid under realistic conditions, intervals that actually deliver the coverage they promise.
For researchers working with spatial data, the practical message is clear: existing methods for uncertainty quantification may significantly understate the true uncertainty in your estimates. Alternatives now exist that better match the structure of geographic data. Using them requires more thought about smoothness assumptions and more transparency about source and target locations, but the result is inference that can be trusted.
For consumers of scientific research, whether policymakers, journalists, or members of the public, the message is more nuanced. The confidence intervals reported in published studies are not all created equal. Some rest on assumptions that hold reasonably well; others rest on assumptions that may be grossly violated. Evaluating the credibility of specific findings requires attention to methodology as well as results. A narrow confidence interval is not inherently more reliable than a wide one; what matters is whether the interval accurately reflects uncertainty given the structure of the data.
The MIT team's work exemplifies a productive response to the reproducibility crisis: rather than simply lamenting failures, developing better tools that make future failures less likely. Science advances not just through new discoveries but through improved methods of knowing, methods that more honestly and accurately characterise the boundaries of human understanding.
In an era of declining trust in institutions and increasing polarisation over scientific questions, such methodological advances matter. Not because they eliminate uncertainty, which is impossible, but because they ensure that the uncertainty we acknowledge is real and the confidence we claim is warranted. The goal is not certainty but honesty about the limits of knowledge. Statistical methods that deliver this honesty serve not just science but the societies that depend on it.
MIT News. “New method improves the reliability of statistical estimations.” Massachusetts Institute of Technology, December 2025. https://news.mit.edu/2025/new-method-improves-reliability-statistical-estimations-1212
Burt, D.R., Berlinghieri, R., Bates, S., and Broderick, T. “Smooth Sailing: Lipschitz-Driven Uncertainty Quantification for Spatial Association.” Conference on Neural Information Processing Systems, 2025. arXiv:2502.06067. https://arxiv.org/abs/2502.06067
MIT CSAIL. “New method improves the reliability of statistical estimations.” https://www.csail.mit.edu/news/new-method-improves-reliability-statistical-estimations
MIT EECS. “New method improves the reliability of statistical estimations.” https://www.eecs.mit.edu/new-method-improves-the-reliability-of-statistical-estimations/
Tobler, W.R. “A Computer Movie Simulating Urban Growth in the Detroit Region.” Economic Geography, 1970. https://en.wikipedia.org/wiki/Tobler's_first_law_of_geography
Mullins, B.J. et al. “Data-Driven Placement of PM2.5 Air Quality Sensors in the United States: An Approach to Target Urban Environmental Injustice.” GeoHealth, 2023. https://pmc.ncbi.nlm.nih.gov/articles/PMC10499371/
Jerrett, M. et al. “Spatial variability of the effect of air pollution on term birth weight: evaluating influential factors using Bayesian hierarchical models.” Environmental Health, 2016. https://ehjournal.biomedcentral.com/articles/10.1186/s12940-016-0112-5
Mohai, P. et al. “Methodologic Issues and Approaches to Spatial Epidemiology.” Environmental Health Perspectives, 2008. https://pmc.ncbi.nlm.nih.gov/articles/PMC2516558/
Willis, C. and Stodden, V. “Trust but Verify: How to Leverage Policies, Workflows, and Infrastructure to Ensure Computational Reproducibility in Publication.” Harvard Data Science Review, 2020. https://hdsr.mitpress.mit.edu/pub/f0obb31j
Open Science Collaboration. “Estimating the reproducibility of psychological science.” Science, 2015. https://www.science.org/doi/10.1126/science.aac4716
Pew Research Center. “Public Trust in Scientists and Views on Their Role in Policymaking.” November 2024. https://www.pewresearch.org/science/2024/11/14/public-trust-in-scientists-and-views-on-their-role-in-policymaking/
Milkoreit, M. and Smith, E.K. “Rapidly diverging public trust in science in the United States.” Public Understanding of Science, 2025. https://journals.sagepub.com/doi/10.1177/09636625241302970
OECD. “OECD Survey on Drivers of Trust in Public Institutions – 2024 Results.” https://www.oecd.org/en/publications/oecd-survey-on-drivers-of-trust-in-public-institutions-2024-results_9a20554b-en.html
Chan, E. et al. “Enhancing Trust in Science: Current Challenges and Recommendations.” Social and Personality Psychology Compass, 2025. https://compass.onlinelibrary.wiley.com/doi/full/10.1111/spc3.70104
Nature Communications. “Quantifying both socioeconomic and climate uncertainty in coupled human–Earth systems analysis.” 2025. https://www.nature.com/articles/s41467-025-57897-1
Anselin, L. “Spatial Econometrics.” Handbook of Applied Economic Statistics, 1999. https://web.pdx.edu/~crkl/WISE/SEAUG/papers/anselin01_CTE14.pdf
Lipschitz Continuity. Wikipedia. https://en.wikipedia.org/wiki/Lipschitz_continuity
Burt, D.R. Personal website. https://davidrburt.github.io/
GitHub Repository. “Lipschitz-Driven-Inference.” https://github.com/DavidRBurt/Lipschitz-Driven-Inference
U.S. EPA. “Ambient Air Monitoring Network Assessment Guidance.” https://www.epa.gov/sites/default/files/2020-01/documents/network-assessment-guidance.pdf
Cologna, V. et al. “Trust in scientists and their role in society across 68 countries.” Nature Human Behaviour, 2025.
National Academies of Sciences, Engineering, and Medicine. “Reproducibility and Replicability in Science.” 2019. https://www.ncbi.nlm.nih.gov/books/NBK547523/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 · Email: tim@smarterarticles.co.uk