from Roscoe's Quick Notes

Today brings two Road Races to fans of the sport. First up from the INDYCAR Racing Series will be the Firestone Grand Prix of St. Petersburg. And later this afternoon we'll have from the NASCAR Cup Series, the DuraMAX Texas Grand Prix.

I'll be bringing in both races from a local TV station OTA, via my old rabbit-ears antenna, rather than streaming over the Internet, thereby avoiding that annoying “buffering” that comes with extended Internet streaming.

And the adventure continues.

Read more...

from Ira Cogan

Cory Doctorow once again helps me make sense of the world

Fascinating read from Ars Technica about Wikipedia having to blacklist an archiving site via waxy

Perhaps you heard about an AI agent publishing a hit piece on an open source maintainer. I can't even put into words the implications of something like this. Our tech overlords are ruining the world. also via waxy

Wandering Arrow's latest Lobster Liberation Report.

An aside about how the truth matters and facts matter. I appreciate the Clintons right now. This isn't a commentary on whether their politics are good or bad. It's a commentary about something a lot of people don't seem to care about or appreciate. The gist is they were like: Oh, you have questions? Swear me in so I can talk about it under oath. Well, I appreciate it.

Read more...

from st. lazarus

How Holocaust memory and cultivated extremism were institutionally converted into political immunity and regional fragmentation

The Parable of Dinah

Who Is the Victim in the Dinah Story? - TheTorah.com

In Genesis 34, Dinah, Jacob's daughter, goes out to see the women of the neighboring land. Shechem, son of Hamor – a local chieftain, the most powerful man in the region – sees her, takes her, and lies with her by force—a violation, a trauma, a wound. Her brothers seek to avenge her. But they do not act on grief alone. They act on calculation.

“Intermarry with us,” they tell Shechem's people. “Give us your daughters, take ours. But first—become like us. Submit to circumcision.”

Shechem agrees. The whole city agrees. They undergo the rite, entering into covenant with Israel's God. On the third day, while they recover, Simeon and Levi enter the city armed and slay every male. They plunder the houses, take the women and children captive.

Jacob rebukes them: “You have made me odious to the inhabitants of the land, and I am few in number. If they gather against me, I shall be destroyed.”

The brothers answer: “Should he treat our sister like a harlot?”

The moral reflex is sound. But the strategic architecture is cunning. The brothers manufactured the conditions for massacre—first inducing vulnerability, then exploiting it. They created the very danger they claimed to be avenging. The wound to Dinah became the pretext for eliminating Shechem entirely.

This is the Shechem Cycle: create dependency, induce vulnerability, strike, then invoke the original wound as moral insulation against criticism.

Israel's modern Shield-Conduit architecture operates by this same logic. The Shield is not the Holocaust itself—it is the institutional apparatus that converts Holocaust memory into political immunity, just as Simeon and Levi converted Dinah's trauma into carte blanche for massacre. The Conduit manufactures the “Shechems”—Islamist extremists cultivated to replace rational Arab nationalism—creating a threat that validates the Shield's “Western vanguard” narrative. The original trauma (the Holocaust, like Dinah's violation) is real and grievous. But the loop built atop it is artificial, self-reinforcing, and strategically engineered.

Israel is not merely defending against threats. It is systematically creating the conditions that make its defense appear the only moral option.


Thesis. The Revisionist Zionist lineage built an institutional apparatus—the Shield—that converts Holocaust memory into durable political immunity. That immunity sustains long-run leverage over U.S. foreign policy. The Conduit—a strategic pattern that replaces secular Arab nationalism with Islamist extremism—reframes regional conflict as civilizational rather than territorial, making the Shield’s “Western vanguard” narrative self-validating. Together, Shield and Conduit enable regional redesigns at reduced political cost.


Model

Framework:

5-player extensive-form game with evolving payoff structures across five eras.

Players:

  • Z = Zionist Movement (the strategists)
  • BE = British Empire (Pre-1945 hegemon)
  • US = United States of America (Post-1945 hegemon)
  • SA = Secular Arabs (the target for elimination)
  • IP = Islamist Proxy (the manufactured replacement)

Layers of Strategic Architecture — nested conditions that enable the game to proceed.

  • Layer 1 — Structural bankruptcy: Europe’s post-war exhaustion and dependency dynamics that shift hegemonic weight to the U.S., creating the unipolar opportunity space.
  • Layer 2 — Capture: durable influence over U.S. policy channels (Congress, executive, media, lobbying) that makes certain outcomes hard to reverse.
  • Layer 3 — The Shield: the institutional system that treats criticism of Israeli state action as morally illegitimate by routing it through Holocaust memory. The Conduit: the vector that replaces rational, negotiable secular Arab nationalism with Islamist extremism—reframing conflict from land and rights to civilization vs. barbarism, thereby validating the Shield.

L1 creates opportunity; L2 and L3 co-construct as mutually-reinforcing subsystems (Shield insulates Capture-in-progress; Capture enables harder Shield deployment). Downstream constraints emerge from this mutual stabilization.

Output:

Downstream policy outcomes—wars, occupations, alignments, territorial or regime changes.
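The framework above is descriptive rather than computational, but its bookkeeping can be made explicit. The following is a minimal sketch only: the player and layer names are taken from the lists above, while the `prerequisites` helper is my own illustrative assumption encoding the stated ordering ("L1 creates opportunity; L2 and L3 co-construct as mutually reinforcing subsystems"), not part of the model as written.

```python
from dataclasses import dataclass
from enum import Enum

class Player(Enum):
    """The five players, labeled as in the framework."""
    Z = "Zionist Movement"
    BE = "British Empire"
    US = "United States of America"
    SA = "Secular Arabs"
    IP = "Islamist Proxy"

@dataclass(frozen=True)
class Layer:
    index: int
    name: str
    role: str

# The three nested layers of strategic architecture, as listed above.
LAYERS = (
    Layer(1, "Structural bankruptcy", "creates the unipolar opportunity space"),
    Layer(2, "Capture", "durable influence over hegemon policy channels"),
    Layer(3, "Shield + Conduit", "moral insulation plus threat manufacture"),
)

def prerequisites(layer: Layer) -> tuple[Layer, ...]:
    """Hypothetical encoding of the stated dependency structure:
    L1 has no prerequisites; L2 and L3 each depend on L1 and on
    each other (mutual reinforcement)."""
    if layer.index == 1:
        return ()
    return tuple(l for l in LAYERS if l.index != layer.index)
```

Read this as a data dictionary for the eras that follow, nothing more; it deliberately encodes no payoffs and solves no game.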


Era I: The Demographic Deficit (1896-1941)

The Zionist project began with a structural deficit: establishing an ethno-state as a demographic minority. Jabotinsky's 1923 “Iron Wall” argued statehood required overwhelming asymmetric force to break indigenous resistance. Logic demanded absolute pragmatism across factional lines.

The 1933 Haavara Agreement (transferring 60,000 Jews/assets from Nazi Germany) demonstrated Labor Zionism's instrumental approach to rescue when aligned with territorial goals. The 1940 Patria bombing (killing 267 refugees to block British deportation) showed Haganah's willingness to sacrifice Jewish lives when the calculus demanded it. Both occurred under British Mandate authority – the BE, not the US, was the relevant hegemon.

The Madagascar Plan (1940) was a Nazi initiative to deport European Jews to the French colony; Zionist leaders monitored it as a potential (if desperate) alternative to extermination, but did not originate it.

Lehi's 1941 approach to Nazi Germany – proposing a military alliance against Britain in exchange for Jewish emigration to Palestine – represents Z's radical splinter faction operating independently of the Labor/Haganah mainstream. This distinction matters: where Labor pursued pragmatic rescue and territorial consolidation, Lehi pursued maximalist anti-imperial alignment even at the cost of ideological contamination.

Incubation and the Revisionist split (1896–1932)

  • 1896–97 — Herzl founds political Zionism. A project aimed at a legal territorial home, pursued through European imperial patronage.
  • 1917 — Balfour Declaration. Britain commits to a “national home” in Palestine, creating an early geopolitical foothold.
  • 1923 — Jabotinsky’s “Iron Wall.” A foundational Revisionist text: the native Arab population will not consent, so Zionism advances behind overwhelming force. (“Iron Wall” here means coercive security dominance as strategy.)
  • 1931 — Irgun (Etzel) founded. A militant Revisionist offshoot that rejects Haganah restraint in favor of armed action, including terror tactics. (Haganah: mainstream Jewish paramilitary; Irgun: breakaway hardline wing.)

Territorial solutions and radical escalation (1933–1941)

  • 1933-39 — Haavara (Transfer) Agreement. A deal between Zionist institutions and Nazi Germany enabling ~60,000 German Jews and assets to move to Palestine. (Haavara: a “transfer” mechanism for migration + capital.)
  • 1940 — Madagascar Plan. Nazi leadership explores mass deportation. Extermination is not yet state policy in this framing.
  • Nov 1940 — Patria bombing. Haganah detonates explosives on a refugee ship to prevent British deportation to Mauritius; the miscalculated blast kills 252-267 Jewish refugees. (Used here as a precedent claim: sacrificing Jewish lives for state-building objectives.)
  • Jan 1941 — Lehi’s Nazi alliance proposal. The Stern Gang submits a formal offer to a German diplomat in Ankara: ally with Nazi Germany, fight the British, and secure a totalitarian Jewish state. Ignored. (Lehi: small extremist Zionist militia.)

Era II: The Suez Pivot (1942-1961)

The 1942 Wannsee Conference formalized the Final Solution, signing the death warrant for Layer 1 (L1) - the decentralized network of European Jewish financial and cultural infrastructure accumulated over centuries. Six million dead. L1 was annihilated as a living system. Paradoxically, this apocalyptic loss generated something no diplomatic campaign could have manufactured: near-universal moral authority – the raw material from which Layer 3 (L3) would later be forged, and the leverage needed to begin constructing Layer 2 (L2).

But first, territory had to be secured through kinetic brutality. The 1946 King David Hotel bombing - Irgun's attack on the British administrative headquarters, killing 91 – shattered the Mandate's will to govern. The 1948 Deir Yassin massacre, in which Irgun and Lehi forces killed over 100 Palestinian villagers, was not merely an atrocity but a strategic signal: its broadcast triggered mass civilian flight across Palestine. The resulting Nakba - the ethnic cleansing of 750,000 Palestinians – secured the demographic conditions for the establishment of the State of Israel in May 1948. The Iron Wall's territorial objective was achieved.

Z quickly discovered that regional military supremacy had hard limits. The botched 1954 Lavon Affair (a false-flag bombing campaign in Egypt) and the 1956 Suez Crisis delivered a brutal lesson: tactical victories meant nothing if the global hegemon US could override the outcome with a single phone call. Eisenhower forced Israel, Britain, and France into humiliating withdrawal – the last time a US president would impose such costs on Israel. Suez was the pivot. Z diagnosed the structural vulnerability with precision: localized force fails against a hegemon's veto. Direct confrontation with US was unwinnable. The solution was not to fight the hegemon but to capture it.

L2 Foundation – The Capture Infrastructure: Post-Suez, Z began systematically constructing the machinery of US policy capture. AIPAC was founded in 1954 and began perfecting the “scorecard” system – tracking Congressional votes and making support for Israel a binary litmus test for campaign viability. The infrastructure operated through multiple channels:

  1. Congressional Capture via committee assignments (Foreign Relations, Armed Services) where pro-Israel commitments became prerequisites for leadership;

  2. Campaign Finance - bundling operations that made defiance electorally prohibitive;

  3. Media Infrastructure - cultivating editorial boards and broadcast networks where criticism became career-ending;

  4. Executive Penetration - placement of loyalists in key agencies (State, Defense, NSC).

This was L2 under construction: not yet total capture, but the ratchet mechanisms being installed.

The keystone was the 1961 Eichmann trial: Ben-Gurion captured Adolf Eichmann and televised his trial to a global audience, an act of theatrical statecraft that positioned Israel as the sole inheritor and voice of Holocaust memory. Trauma was centralized in the state apparatus. The trial, combined with Yad Vashem's expansion, the ADL's redefinition of antisemitism, and AIPAC's nascent campaign machinery, collectively constituted the Shield - the first operational component of what would become L3. The Shield rewired US's domestic political cost function: criticizing or constraining Israel was transformed from a standard foreign policy dispute into an act of moral deviance. L3 was now half-built; L2 capture was underway but incomplete.

Catastrophe and birth (1942–1948)

  • Jan 1942 — Wannsee Conference. Territorial expulsion is stymied by war; Nazi policy shifts to the “Final Solution.” In this model, the old European Jewish financial elite—the Layer-1 actors—is destroyed or looted.
  • Jul 1946 — King David Hotel bombing. Irgun (Begin) bombs British headquarters in Jerusalem, killing 91. (Framed as asymmetric terror intended to force imperial withdrawal.)
  • Apr 1948 — Deir Yassin massacre. Irgun and Lehi kill 100+ Arab civilians. Psychological terror triggers mass Palestinian flight during the Nakba. (Nakba: the 1948 catastrophe/expulsion and displacement.)
  • May 1948 — State of Israel established.

Statehood, false flags, and the Suez pivot (1950–1961)

  • 1950–51 — Baghdad bombings. Bombings target Iraqi Jewish establishments. This model cites evidence (e.g., Avi Shlaim and others) pointing to Zionist underground involvement: panic drives Iraqi Jews into emigrating for demographic bulk.
  • 1953 — Yad Vashem established. First formal institutionalization of Holocaust memory by the state. Seed of the Shield apparatus. (Yad Vashem: Israel’s national Holocaust memorial authority.)
  • 1954 — Lavon Affair (Operation Susannah). Proven Israeli false-flag operation: Egyptian Jews recruited to bomb U.S./British civilian sites in Egypt and blame the Muslim Brotherhood, aiming to keep Western troops in the Suez zone. (False flag: an operation designed to misattribute responsibility.) Conduit note: in this model, it functions as a proto-Conduit move—manufacturing an Islamist threat to secure Western alignment.
  • 1954 — AIPAC founded. Isaiah Kenen bypasses diplomacy to lobby Congress directly. Seed of the “vetocracy.” (Vetocracy: capacity to punish dissent and block policy change.)
  • 1956 — Suez Crisis (Protocol of Sèvres). A secret UK/France/Israel pact to seize the canal and topple Nasser. Eisenhower forces withdrawal via financial threats—framed here as the last time a U.S. president treated Israel as subordinate. British/French colonial power in the region breaks.
  • 1961 — Eichmann trial. Ben-Gurion captures Adolf Eichmann and televises the trial globally. Israel positions itself as the sole inheritor and voice of Holocaust memory; trauma is centralized in the state apparatus.

Claim: the moral Shield is forged.

Lesson claimed by this model: the U.S. is the only power that matters—accelerating the capture project.


Era III: The Conduit Gambit (1967-1996)

The 1967 conquest handed Z the territorial maximalism Jabotinsky had dreamed of – but also internalized a demographic time bomb. Millions of Palestinians in the West Bank and Gaza, organized under the secular PLO, presented the most dangerous kind of adversary: a negotiable one. SA (secular Arab nationalism) spoke in the language of rights, sovereignty, and international law – a framework the Western-aligned US understood and might eventually support. A successful two-state negotiation was Z's existential threat, not because it meant war, but because it meant permanent territorial compromise. The Iron Wall doctrine demanded that the indigenous population's will be broken, not accommodated.

Z's solution was a masterpiece of strategic subversion: replace the negotiable enemy with a non-negotiable one. In 1978, Israel's military governor licensed Sheikh Ahmed Yassin's Mujama al-Islamiya – the Muslim Brotherhood's Gaza branch – as a counterweight to the secular PLO. While US was simultaneously running Operation Cyclone ($3B+ to Afghan mujahideen), Z was conducting its own local experiment in Islamist cultivation. By 1987, Mujama had metastasized into Hamas – the Israeli-nurtured entity now rebranded as an implacable religious foe. This was the Conduit: the second half of L3, now completing the architecture.

The Oslo Accords of 1993 – negotiated from SA's weakened position after losing its Soviet patron – delivered a framework that Z could perpetually defer while Hamas suicide bombings provided the justification. Netanyahu's subsequent facilitation of Qatari cash transfers to Hamas formalized the Conduit as a permanent pipeline: ensuring that the most radical, religiously absolutist faction always had enough oxygen to prevent peace, but never enough power to threaten survival. The 1996 “Clean Break” paper – authored by Perle, Feith, and Wurmser for Netanyahu – codified the broader regional application: abandon containment, embrace fragmentation.

L2 Deepening – The Think Tank/Policy Pipeline: During this era, Z significantly expanded L2 capture through the neoconservative policy network. The 1976 founding of JINSA, the 1980s expansion of AEI's foreign policy shop, and the placement of allies in Reagan/Bush administrations created a direct pipeline from Likudnik strategic thinking to US executive decision-making. Key operatives (Perle, Wolfowitz, Feith, Wurmser) moved seamlessly between Israeli lobbying organizations and US defense/intelligence agencies. This was L2 operating at maximum efficiency: US policy formulation happening inside Z-designed frameworks, with dissent filtered out through the Shield mechanisms installed in Era II.

With both Shield and Conduit operational, L3 was now fully built. The stage was set to achieve Layer 2 (L2) - the total capture of US and the fragmentation of the entire region.

Construction of the unified architecture (1961–1981)

  • 1967 — Six-Day War. Israel captures the West Bank, Gaza, and Golan. In this model, the defeat breaks Nasserism—secular Arab nationalism that posed a rational, negotiable threat—creating a vacuum that religious extremism will fill. (Nasserism: pan-Arab secular nationalism associated with Gamal Abdel Nasser.)
  • 1974 — ADL publishes “The New Anti-Semitism.” Formally conflates anti-Zionism with antisemitism; criticism of the state is cast as a continuation of European Jew-hatred. Claim: the Shield gets its operating manual. (ADL: Anti-Defamation League.)
  • 1977 — Revisionists take power. Menachem Begin (ex-Irgun) is elected Prime Minister. Jabotinsky’s maximalist philosophy becomes governing doctrine; Likud rarely relinquishes power afterward.
  • 1978 — Camp David Accords. Israel neutralizes Egypt as a military threat. In this model, sidelining the primary secular Arab power clears the field and deepens the vacuum for religious extremism to fill.
  • 1978 — Israel backs Mujama al‑Islami. Israel registers Sheikh Yassin’s Islamist charity (precursor to Hamas) to split the secular PLO. Avner Cohen is often cited for the claim that Hamas was, in effect, cultivated as a counterweight. Claim: the Conduit is formalized. (PLO: Palestine Liberation Organization.)
  • 1979 — Operation Cyclone begins. U.S./Saudi intelligence funnels large sums to weaponize Wahhabi networks against the USSR in Afghanistan. In this model, a parallel Western project accelerates Conduit dynamics by building a global jihadi infrastructure later recycled across the region.

Vetocracy and the Conduit in action (1982–2000)

  • 1982 — Yinon Plan published. Oded Yinon argues (in the WZO journal Kivunim) that Israel’s survival requires balkanization of Iraq, Syria, Lebanon, and Egypt along sectarian lines. (Balkanization: fragmentation into smaller rival entities.)
  • 1982–84 — AIPAC perfects “vetocracy.” High-profile defeats of U.S. politicians (Findley ’82, Percy ’84) teach the rule: criticize Israel → Shield activates → primary challengers funded → career ends. Claim: the Layer-2.5 mechanism becomes operational.
  • 1987 — Hamas founded from Israeli‑nurtured Mujama. Islamist resistance replaces the secular PLO as the public face of Palestinian struggle. In this model, the conflict is deliberately reframed from national liberation to religious extremism.
  • 1996 — “A Clean Break.” Neocon policy paper for Netanyahu: remove Saddam and destabilize Syria—Yinon-style aims pursued through U.S. power.
  • 1996 — Netanyahu props up Hamas against the secular PA. The Conduit is treated here as actively managed: a divided Palestinian opposition prevents statehood by elevating an unacceptable negotiating partner. (PA: Palestinian Authority.)

Era IV: Full US Capture (2001-2022)

The L3 architecture reached its optimal Nash equilibrium in the aftermath of September 11. The traumatic event catalyzed a total alignment between US's grand strategy and Z's regional objectives, achieving the ultimate output: Layer 2 (L2) - the full capture of the US hegemon. Through the lens of the Global War on Terror, Z seamlessly reframed its localized territorial suppression as the vanguard of a civilizational struggle – “we are fighting your war” became the operating slogan. The Conduit's locally cultivated Islamists were mapped onto the global threat matrix; the Shield ensured no domestic political actor in US could question the alignment.

L2 Total Capture – The Operating Mechanisms: Post-9/11, the L2 infrastructure achieved complete lock-in. Congressional voting patterns show 90%+ pro-Israel uniformity across both parties – defection is statistically nonexistent. Key mechanisms include:

  1. Committee gatekeeping - leadership on Foreign Relations and Armed Services committees reserved for the most reliable allies;

  2. State-level coordination - governors compete to be “most pro-Israel” as pathway to national office;

  3. Media self-censorship - editorial rooms internalize Shield constraints, rendering external enforcement unnecessary;

  4. Academic policing - BDS suppression through state legislatures, donor pressure on universities;

  5. Executive-branch colonization - key NSC, State, and Defense positions staffed from a vetted pipeline.

The “Clean Break” team (Perle, Feith, Wurmser) occupied Under Secretary and Assistant Secretary positions, directly authoring the Iraq invasion rationale. This is L2 achieved: US policy formulation happening inside Z-designed frameworks, with execution via US military.

The hegemon US was weaponized to execute the Clean Break agenda, expending American blood and $8 trillion to systematically dismantle the remaining secular Arab states: Iraq (2003), Libya (2011), Syria (2011-). Saddam, Gaddafi, Assad – each a secular nationalist (SA), each replaced by sectarian chaos (IP). The Yinon Plan's vision of regional fragmentation along ethnic and confessional lines was realized not by Israeli troops, but by American ones. SA was eliminated as a strategic actor – Ba'ath destroyed, PLO reduced to a security subcontractor, secular Arab nationalism a museum piece.

During this era, Shield and Conduit operated as a flawless complementary system. The IHRA definition of antisemitism – pushed globally as a legal standard – upgraded the Shield from social taboo to quasi-legal prohibition, criminalizing structural critiques of the state. The 2018 US embassy move to Jerusalem demonstrated US's total capture. The Abraham Accords (2020) represented the zenith of L2: regional normalization achieved over the heads of the Palestinian population, effectively deleting them as a variable in the geopolitical matrix. The architecture appeared permanently self-sustaining.

The Shield–Conduit loop determines U.S. policy (2001–2022)

2001 — 9/11. U.S. foreign policy enters an era of intensified militarization. In this model, the Conduit’s geopolitical effect is realized: the conflict is reframed from land and rights (diplomatically winnable) to civilization vs. Islamic barbarism, validating the Shield’s “Western vanguard” narrative.

2003 — U.S. invasion of Iraq. Neocon advocacy drives removal of Saddam; Iraq fractures along sectarian lines. Claim: Layer-2 capture is in full operation. In this model, removing the secular Ba’athist regime clears the field for Al‑Qaeda in Iraq and later ISIS—Conduit logic repeating at scale. (Ba’athism: secular Arab nationalist political ideology.)

2011–15 — Arab Spring and Syrian destruction. Libya and Syria shatter; Israel strikes Syria with limited constraint. The Yinon map is treated here as essentially realized. Weakening secular regimes (e.g., Assad’s) clears the field for ISIS and al‑Nusra, sustaining the “barbaric periphery” used to justify permanent Western military presence.

2018 — U.S. embassy moved to Jerusalem; exit from the Iran Deal. Zenith of vetocracy in this framing.


Era V: The Limit Condition (2023-)

The fatal flaw of L3 was the assumption of perpetual control over nonlinear IP. October 7 exposed the catastrophic limit condition of the Conduit: the non-negotiable religious entity Z had cultivated for decades mutated beyond its design parameters and pierced the Iron Wall itself. Hamas – the organization Israeli military governors had licensed in 1978, that Netanyahu had sustained via Qatari cash to prevent Palestinian statehood – executed the deadliest attack on Israeli civilians in the state's history. The internal shock forced Z into a massive kinetic response that the architecture was never designed to justify at this scale.

The response triggered the simultaneous fracturing of both L3 components. The ICJ genocide case (filed by South Africa, supported by 50+ nations) represents the first successful structural challenge to the Shield since its institutionalization. The memorial consensus is cracking along generational and geographic fault lines: Western youth, the Global South, and even diaspora Jewish communities are decoupling criticism of the state from antisemitism – the precise conflation the Shield was engineered to enforce. More critically, L1's residual capital – the deep global sympathy rooted in the historical destruction of European Jewry – is being actively burned for short-term political cover. The last reserves of L1 are consumed to defend a collapsing L3.

L2 Under Pressure: The Gaza campaign has created unprecedented stress fractures in L2 capture. For the first time since Suez 1956, the uniformity of US Congressional support shows cracks – dissenting statements from Squad members, growing progressive caucus pressure, staffer protests. More significantly, executive-branch friction is visible: State Department officials resigning over Gaza policy, CIA assessments leaking that contradict Israeli claims, military leaders questioning open-ended support. The media self-censorship shield is cracking too – social media allows atrocity footage to bypass editorial gatekeepers. The L2 infrastructure was built for a different era of information control. Whether these cracks widen to breaches depends on whether the Shield can regenerate faster than it erodes.

Multipolar actors (China, BRICS) provide alternative institutional frameworks beyond US's veto, eroding the L2 capture structure. US faces an impossible choice: maintain loyalty to a rapidly delegitimizing proxy (hemorrhaging soft power) or breach the Shield for the first time since 1956. Z confronts an accelerating terminal paradox: maintaining the Iron Wall now requires levels of kinetic force that guarantee the collapse of the diplomatic immunity - L3 - that Z spent seventy years engineering. The architecture is consuming itself.

The unified architecture under stress (2023–2026)

Oct 2023 — October 7 attack and Gaza war. Layer-2 capture is framed here at peak intensity. Hamas—cast in this model as a cultivated counterweight—delivers catastrophic blowback. The Conduit bites its creator, exposing volatility at the core of the architecture.

2024 — Fracture of memorial consensus. South Africa brings an ICJ case; campus protests surge; ADL urges treating protests as antisemitic. Claim: the Shield begins to crack. (ICJ: International Court of Justice.)

2025–26 — Limit condition. Layer 1 erodes under debt and multipolar pressure. The Shield yields diminishing returns under algorithmic dissemination.


Conclusion

The Shield and the Conduit form a unified strategic architecture.

The Shield is not the Holocaust itself. It is the institutional apparatus built around Holocaust memory—heritable, self-defending, activated automatically when Israeli state action is challenged. In this model, any sufficiently large pogrom could have supplied the raw material; the Revisionist contribution is the capture mechanism.

The Conduit addresses what secular Arab nationalism posed in this model: rationality, negotiability, and potential Western sympathy. By replacing Nasserists, PLO secularists, and Ba’athists with cultivated Islamist alternatives, the conflict is reframed from land and rights to civilization vs. barbarism. That reframing makes the Shield’s “Western vanguard” narrative self-validating—Israel becomes the frontline against the extremism the Conduit produces.

Suez 1956 remains the pivot. It is framed here as the last time a U.S. president overrode Israeli interests, teaching the Revisionist project that capturing the U.S. hegemon was the sole strategic imperative. Everything after—AIPAC, the Shield, the Conduit—flows from that lesson. The question, in this model, is whether the Shield can survive the erosion of both its Layer-1 economic substrate and the blowback generated by the Conduit.

Read more...

from intueor

On an otherwise fairly dull and ordinary Tuesday before the Christmas break, I was at a meeting at work. An invitation in Outlook and a PowerPoint presentation in the big conference room. But what I expected to be a fairly boring event got me more and more worked up. It was a meeting about how we are going to use AI chatbots in the workplace in the future, and afterwards I was left with a feeling I still have trouble pinning down, but above all I now understand that I was angry.

As it happens, I work in the part of the public sector where negotiations over pay and bonuses are held every autumn, and in the run-up I had therefore been thinking a lot about what I could do over the next couple of years to position myself well, so those considerations were not far from my mind. As the meeting went on, I could see that virtually every one of the skills I had picked out as my future competencies was something this chatbot was supposed to handle instead. It was presented as happy news, but I could only read it as a conflict beginning right there, and it made me angry. It was a challenge to my chances of building a career and securing a good position to negotiate pay from. To make matters worse, layoffs of more than 600 state employees had just been announced (my own office was spared), and in several places it had been said that this was not such a big problem for the state's level of service, because productivity would rise with AI. My mood was well and truly punctured when we were told that, thanks to the chatbot, it would not matter so much in the future if an employee left the workplace. Again presented as happy news – as if that would simply be great. But that's a catastrophe, I thought, because if I have one interest as a wage earner, it is that it matters if I quit.

1.

I admit I am perhaps a bit more critical of all this than average. In a way I may even have sought out the confrontation, waited for it to arise. That is because I have for some time followed the more critical tech news in general, and in particular I have become fascinated by the independent journalist Ed Zitron, who covers the AI industry from his base in the US through a newsletter and a podcast. Zitron uses an interesting device that often misfires but that he handles rather well: angry indignation. It is plain to read – and especially to hear – in his podcast that he is furious, but in a way that does not compromise him and instead only makes him more credible. Probably because it is authentic, but also because he actually knows how to explain why he is angry; the anger does not stop him from formulating coherent analyses and arguments. For me this has been liberating, because it can be cathartic to watch other people be angry in an authentic way. Cathartic to hear that it is okay to be angry about things you genuinely find outrageous.

Zitron claims that large parts of the American AI industry are lying, and that their customers pay the price. Both the companies' immediate customers, who buy a chatbot from OpenAI or Microsoft, graphics cards from Nvidia, and so on, but also the many ordinary people whose investments are tied up in the American stock market, for instance through index funds, which has entered a bubble because of the AI companies' overvalued share prices. Zitron also has a theory, if you can call it that, for why they lie: they make money from it. None of the big AI companies currently makes money from its customers, since generating all the chat messages users want from their chatbots costs more than the companies can bring in through an ordinary monthly subscription. Losing money on your customers can be acceptable for a business, especially since these are still relatively young companies in a growth phase; it can make sense to spend more than you earn for a while in order to establish yourself in the market, and then start turning a profit once you've built a solid customer base. The problem is that they have no strategy for turning this around: there is no real prospect of them flipping to profitability. It's a problem they keep pushing ahead of themselves and don't seem able to solve. That means the big AI companies' primary income is investment in the companies themselves, which in turn means they constantly have to sustain a narrative that the great breakthrough is just around the corner. For a long time, for example, they all talked about "Artificial General Intelligence", and shares in OpenAI were sold on the promise that it would be the first company to achieve this obscure phenomenon, by which a chatbot would go from being just a chatbot to being somehow "more" intelligent, able to do something beyond chatting.
It's fairly meaningless, yet it's something people have invested billions in, and something several CEOs spoke about in earnest. Today, everyone has dropped the idea.

To a large extent this can happen because the media are unable to ask critical questions, because they don't understand this dynamic. A case in point is my own father. He's a journalist, and a while back he visited Silicon Valley with a group of other journalists. He came back declaring that this AI "is going to change everything". I wasn't so convinced; to me he mostly sounded like someone who had been converted. He told me they had met "the guy who made Second Life". Philip Rosedale, as he's called, had a hit with the computer program Second Life back in the 2000s, in which you could "live" in a virtual second world, practically speaking a kind of computer game. Second Life is characteristic of the kind of product this industry lives off, because it is fairly irrelevant today. It peaked during the 2000s, and although it actually surprised me to learn that the servers are still running to this day, on an ordinary weekday evening I could only find 10 people online, as far as I could make sense of the interface. So the guy who created Second Life has not built a company with lasting commercial success. Despite being fairly anti-capitalist in outlook, I have a certain respect for businesses that create good jobs and decent products, and that keep customers and employees happy over a span of years. That is not the case here. Instead he created a game that was hyped for a fairly short period, and then made his money off the hype. In Second Life you could buy items from the developer of the game that you then owned and could resell in a semi-open economy inside the game itself. In a way he was ahead of his time, since cosmetic items in computer games are now an enormous business. Many players thus believed they were buying an investment asset, because they thought Second Life would be the next big thing.
Today it's quite clear that it didn't become the next big thing, but enough people believed in it for long enough to spend a great deal of money in the online shop. If you read Philip Rosedale's website today (the guy behind it, in case you'd forgotten), you can see that he has tried to repeat the trick and hit the next big thing, without success: he has built both a now-defunct VR start-up and an AI one to match.

To put it a little pointedly, everyone thinks like him today. Everyone has been casino-brainwashed, and that's because for many years now the really big money has been made through the stock market. The great dream is no longer to sit in an office corridor panelled in dark mahogany, or to walk around politely greeting your employees and customers like Mads Skjern in Matador or Waage Sandø's patriarch Kaj Holger in Krøniken. That is, the old, conservative story of business life as a pillar of society. No, the dream for the modern businessman is to sell his stake in some start-up at exactly the right moment, and not care how things go afterwards. Elon Musk may well have grown wealthy selling cars, but he became the world's richest man on his shares.

This means, first of all, that lying becomes a winning strategy. You don't depend on long-term relationships anyway (there is no long term at all), so you might as well lie. Spout some bullshit about your products: that your car can drive itself, that your chatbot is becoming self-aware, that the chatbot can cure cancer, or whatever. The markets respond positively to that sort of thing, at least until the day they no longer do, when the bubble bursts. But if you just sell your shares before then, you needn't care.

2.

What does this have to do with me? In one sense, nothing. It isn't a big American company supplying the chatbot I'm supposed to use at work, because Trump (fortunately for our public IT) has been an idiot, and now no one dares use American chatbots in the parts of the state that handle confidential information. That isn't entirely good news, though, because in practice my agency has opted for a double-up model: paying for ChatGPT subscriptions for everyone while also paying for smaller solutions for special tasks. But at the same time it has everything to do with me, because the mindset the big American companies have managed to cultivate exists in Denmark too.

As part of their strategy for attracting more investment in themselves, the American AI companies have built up a kind of FOMO. Even my father has swallowed the story that AI is going to change everything, and that the world is on the brink of a huge upheaval. That story comes in both a dystopian and a utopian version: AI will save the world and we'll all live in super-luxury, or some Terminator scenario where AI kills us all. Both are equally idiotic, but what they share is that they are fed, at least publicly, by virtually all the big players in the AI industry. It's an effective advertising strategy, because if AI really is going to transform the world in some completely decisive way, then it's rational to invest your savings in the companies developing it. It's even rational to invest more than the sober models of sensible investing would suggest, since those models can't price in the miracle that artificial intelligence will bring to the world, for example the aforementioned AGI.

In that way it closely resembles "Pascal's wager", a famous gambit formulated by the philosopher Blaise Pascal back in the 17th century. Pascal's wager holds that the right thing is to live as a Christian, because if God doesn't exist after all, you've merely lived a slightly duller life than you otherwise would have, for instance by only being able to eat fish during Lent or abstaining from gay sex; whereas if God does in fact exist, you stand to collect an infinite reward in the kingdom of heaven. The point of the wager, then, is that it is rational for you to give up a fairly small amount of value by living as a Christian rather than an atheist, in order to win an infinitely large value by getting into heaven. The analogy with AI is that it seems rational to give up relatively little value (the savings you were going to invest anyway) for the chance of winning near-infinite value when AI becomes superintelligent or takes over the world or whatnot. That's why it makes sense to many people to invest disproportionately much in AI. It also explains the apparent paradox of AI company executives talking in complete seriousness about their own product, AI, possibly wiping out humanity.
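The wager is, at bottom, a one-line expected-value calculation. A minimal sketch with entirely made-up numbers, purely to illustrate the logic:

```python
# Pascal's wager as expected value: even a tiny probability of an
# enormous payoff can dominate a small, certain cost.
def expected_value(p_win: float, payoff: float, cost: float) -> float:
    """Expected net value of taking the bet."""
    return p_win * payoff - cost

# Hypothetical numbers: a 1% chance that AI delivers a 1000x return,
# versus the certain cost of tying up your savings (normalised to 1).
ev_bet = expected_value(p_win=0.01, payoff=1000, cost=1)
ev_skip = 0.0

print(ev_bet > ev_skip)  # the bet looks "rational" whenever p_win * payoff > cost
```

The trick, in both the theological and the financial version, is that the payoff is asserted rather than measured: make it large enough and any probability, however small, justifies the stake.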

By and large, much of what surrounds AI is just marketing, and it makes you want to throw up that people don't get it. Take the very name "artificial intelligence". People then naively ask: What kind of intelligence is it, exactly? What does AI teach us about intelligence? Is AI conscious? And so on. These are idiotic questions, because "artificial intelligence" is quite obviously a performative utterance. Roughly speaking, sentences can be divided into declarative and performative ones. Declarative sentences affirm a fact, such as "the Danish king is named Frederik". Performative sentences, by contrast, express a wish that something be true, as when King Frederik says, "God save Denmark." It is not a fact that the technology is intelligent; rather, by calling your product "artificial intelligence" you express a wish that others will see your chatbot as intelligent.

This narrative about AI creates stress among otherwise decent people. Middle managers everywhere read this garbage on LinkedIn or in uncritical media, get stressed, and hastily launch an AI product in their own organisation without quite thinking it through. And this is the whole point; it is a deliberate marketing strategy. In the public sector it has resulted in a plan to "free up" (a terribly imprecise term) at least 10,000 positions, potentially as many as 100,000. Every middle manager in the entire public sector is now busy trying to cram AI products into everything, because they can work out that they will be forced to if they don't do it themselves. Right now AI presents itself as an offer you can't refuse.

A good example of why this is dangerous is the story of the world's allegedly richest man, Elon Musk, and his role in the American government with his DOGE programme. Its explicit purpose was to make the American state more efficient by using artificial intelligence, primarily chatbots. The programme threatened a long list of civil servants in the American state, and fired many of them too. But the programme has now been shut down again, and on the surface it was not a success. If you look at statistics on spending in the American federal administration, it did not manage to bring spending down; if anything, the figures suggest spending has risen under President Trump. Yet in reality it succeeded rather well, because it has had a major power-political consequence. Many American media outlets have described how civil servants were confused and feared for their jobs. At the same time, at every level of power in the US, rights and principles are being violated. Trump and his cronies break the law constantly, and they get away with it because there is often no one left in the civil service who says no.

It's on an entirely different scale, of course, but I feel a certain sting of the same fear and uncertainty. That's my problem, naturally, but then again, I'm the one writing this blog. At the same time, I would argue it is a problem for society. One of the major findings of the so-called Magtudredningen 2.0, a study of the conditions of democracy in Denmark, was that things stand worse than at the time of the first study from around the turn of the millennium, and one of the reasons is that the civil service has grown weaker relative to the politicians, especially those in government. Introducing AI pushes that development even faster in the wrong direction. Ultimately it comes at the expense of our rights as citizens: we get worse services and a system less able to safeguard our rights. The winners are the political elite, whose power goes less challenged, and capital, which stands to make money and whose position of power relative to workers is improved.

3.

Thanks for reading. It dawns on me that I have leapt into a new genre: political analysis. It turned out a bit too long and too messy this time, though. I think that's because I think about AI a lot, and quite obviously I'm not done thinking about it. I write one piece a month, that's my rule, and then it damn well won't be equally good every time, will it. But I have a very clear idea for next month, and I can promise it will be non-artificially intelligent!

 
Read more...

from Rippple's Blog

Stay entertained thanks to our Weekly Tracker giving you next week's Anticipated Movies & Shows, Most Watched & Returning Favorites, and Shows Changes & Popular Trailers.

Anticipated Movies

Anticipated Shows

Returning Favorites

Most Watched Movies this Week

Most Watched Shows this Week


Hi, I'm Kevin 👋. I make apps and I love watching movies and TV shows. If you like what I'm doing, you can buy one of my apps, download and subscribe to Rippple for Trakt or just buy me a ko-fi ☕️.


 
Read more...

from hex_m_hell

The Fear does not stalk its prey with cunning and stealth, as the great cat. Nor does it hunt in packs like dogs. Nor does it spy its prey from a distance and loose arrows by surprise like man, though it is summoned by one who was once a man.

It does not rely on speed, nor silence, nor endurance, nor planning, for it is something else entirely.

The Fear comes with a bellowing roar and a fearsome visage it seeks not to hide, but calls attention to more and more as it moves closer. For it does not come for flesh, but catches the eyes of its victim and feasts on the terror.

When eyes are locked, in a fatal trap, mesmerized in pools of fire, it creeps ever so slightly closer. It grows louder and louder with each slow step. Often a victim could turn and run, could escape, if they could only break the gaze. For so long as any look into the eyes of this terrible creature, no movement is possible.

The Fear does not only hunt the solitary, but may be set on a village or town. Consuming its victims one by one, draining them to collapse, its power over successive victims grows stronger as they watch its slow horror.

Many will fall to their knees at the din and the fury hoping to beg themselves free, finding themselves saved for later as they feed the great monster their neighbors.

But those who know the Fear, who understand it, can escape those eyes of growing fire and raise a spear, or quiver of three arrows, and march towards it. Those who do may break its spell, that others may too rise and give chase.

And when those spears find their mark, and when those arrows land, the wise who stood together find the cursed creature, manifestation of terror, was, all along, only a phantasm of light and shadow.

 
Read more...

from 下川友

Long ago, there was a girl I fell for at first sight. She was wandering around near the entrance of Don Quijote, wearing dinosaur slippers. Her hair was a brown bob, and I couldn't see her face well, but the overall air of her alone was enough to make me fall for her.

Wanting to see her again, I walked around that area several times, but in the end we never met again. Or rather, even if I did run into her again, if she wasn't wearing the dinosaur slippers, I probably wouldn't recognise her.

What drew me in was the way she moved. The sight of her shuffling along in little steps to match the slippers was unbearably cute. I think that alone was what made me fall for her.

Even so, I still want to see her. So I asked a friend who works at a merchandise maker to teach me how slippers are made. I figured that if I made dinosaur slippers and sold them online, she might react on social media.

"She already has a dinosaur pair, so wouldn't a different animal be better?"
Some people might say that. But I believe she's the type with slightly unusual tastes. So I figured she might buy another pair, a different dinosaur, and line them up side by side. If she was that kind of girl, I thought, I'd be happy.

I finished the design, delivered it to the maker, had 100 pairs of dinosaur slippers made, and put them up for sale online. Apparently they caught on in some corner of the internet, and 96 pairs sold within a week.

But nowhere could I find a post that seemed to be from her.

The slippers she had been wearing were green dinosaurs, with the toe forming the dinosaur's mouth. Later my friend said, "A dinosaur's mouth usually opens upward; wasn't that a crocodile?" But I didn't care about that. To me, those were unmistakably "dinosaur slippers".

In the end, she never appeared on social media. I got no clue at all.

"We have two samples left, so let's wear them together." So said my friend, and the two of us decided to wear matching dinosaur slippers.

 
Read more…

from An Open Letter

I went to a golf range today for the first time! It turns out there’s one really close to my new house, and I went with a mixture of old friends and new friends. Yesterday night I also went out clubbing, and while it wasn’t exactly the greatest experience I still went which is really nice. Today I just got home and it’s about 2 AM, because we stayed up playing board games together and it was really fun. I’ve never even been a board game person until recently and honestly I really fucking enjoy it.

I wanted to start off this post by saying at least something not about the breakup, even though it kind of is in a way. I did have a couple different moments where I essentially just broke down into tears. But also I think for the first day I woke up and my first thought was not of her. I spent a lot of time thinking about how it felt like there were two versions of her in my head. One of them was the one that was not exactly the ideal partner for me, and someone that also crossed a lot of boundaries and caused a lot of hurt. That's the version of her that I recognize was not a good relationship for me. I'm very thankful to that person for letting me make my mistakes, showing me my struggles, and then ultimately making the decision for me, which made sure I didn't drag the situation on longer than it should have gone. But there's also the other version of her, which is the one that I felt safe with, the one that I remember in my arms, and the one that I remember all of these beautiful cherished memories with. And the difficult part is reconciling with the fact that both of those people are the same. It's weird because it feels like I can't hold both of those truths at the same time; I can either mourn the fact that I've lost this innocent pure person who made me feel so incredibly safe, or I can mourn all of the bad things that happened and the incredibly difficult and painful portions of the entire process. But I can't seem to recognize both of those at the same time in the same person. I talked with N for a while today because I think he's a very smart person, and he gave me some interesting thoughts on it. One thing is how I can rationalize negative behavior away, but I cannot do that same thing to positive memories. There's no justification or understanding I need in order to recognize how much I appreciated and really savored certain moments. I don't need to be convinced to accept or even want those moments, because I already do inherently.
And so I think the problem becomes that I can intellectualize away my grief in a way, but I also cannot help but face my grief without being able to intellectualize it. And I guess what I kind of realized while driving home is that the key element I'm missing is just time. I think that's the short answer, and the longer answer is understanding and embracing the fact that as time goes on I will recognize that my life does not necessarily get worse. There will be a lot of things that I will miss of course, but there will be a lot of things that I also do not miss. Life has a way of filling in these vacuums, and if I allow it to, it really does become something beautiful. Sometimes I just have to remind myself how it really can just be that simple. A beautiful thing about free will is the ability to just try different things with relatively no consequences. There's no real consequence in any meaningful way to going into a new social situation, or trying to socialize in a group I felt irrationally unsafe in. For example, I'm kind of afraid of men, and that does put me off of socialization in a couple different avenues. But today I went. And I had a great time. And maybe there's a couple other hobbies that I'm afraid of, or that I've tried and haven't been crazily successful at, but I can always go back and I can always do them again and I think I will be surprised with the success that I see. I think a lot about that one, of how there is a life that I've always wanted, and I will make it mine.

 
Read more...

from Crónicas del oso pardo

I have always thought lizards are special. Many people probably think so too. A few years ago, out walking with my wife, we saw two of them cross a wall from one side to the other in a park near a river. They were green and blue; we watched them shoot past like arrows, and even so they had time to look at us. Though this is something subjective, we both agreed that they expressed sympathy toward us.

I also have an indelible memory from my early childhood. One day I found a lizard on the wall of my bedroom, facing my bed. It looked at me and I managed to hold its gaze, which was tiny but sharp. It looked me up and down and said: "You're a good boy. You seem intelligent, too." Although I could understand it, I didn't risk greeting it back, because my Lizardish pronunciation is poor. But I did smile, and I set out a glass of water and a biscuit. It knew they were for it. I know it drank the water when I turned off the light, and it may have nibbled the biscuit. It must have eaten other things too, since it came and went from the courtyard. No one else ever saw it. Once we had settled into the familiarity that closeness brings, we had brief conversations in Spanish. I would tell it, for instance, "Hide, someone's coming," and it would, though only at the very last minute. It was daring, skilled at appearing and disappearing. It may have known how to camouflage itself.

After a short while I never saw it again. I wished it well. And if you're reading me now, don't forget that you are part of my most beautiful memories.

 
Read more...

from Paolo Amoroso's Journal

Insphex adds the Hexdump item to the File Browser menu to view the hex dump of the selected files. The initial implementation called the public API for adding commands at the top level of the menu.

To later move the item to the See submenu, which groups various file viewing commands, I resorted to list surgery, as the API doesn't support submenus. The problem is that internal system details can and do change, which is what happened to the File Browser menu and led to an Insphex load error.

I fixed the issue by reverting to the public API call, and now the item is back at the top level of the menu.

Insphex is a hex dump tool similar to the Linux command hexdump. I wrote it in Common Lisp on Medley Interlisp.

#insphex #CommonLisp #Interlisp #Lisp

 
Read more... Discuss...

from SmarterArticles

Here is a troubling scenario that plays out more often than scientists would like to admit: a research team publishes findings claiming 95 per cent confidence that air pollution exposure reduces birth weights in a particular region. Policymakers cite the study. Regulations follow. Years later, follow-up research reveals the original confidence interval was fundamentally flawed, not because the researchers made an error, but because the statistical methods they relied upon were never designed for the kind of data they were analysing.

This is not a hypothetical situation. It is a systemic problem affecting environmental science, epidemiology, economics, and climate research. When data points are spread across geographic space rather than collected independently, the mathematical assumptions underlying conventional confidence intervals break down in ways that can render those intervals meaningless. The gap between what statistics promise and what they actually deliver has remained largely invisible to policymakers and the public, hidden behind technical language and the presumed authority of numerical precision.

A team of researchers at the Massachusetts Institute of Technology has now developed a statistical method that directly confronts this problem. Their approach, published at the Conference on Neural Information Processing Systems in 2025 under the title “Smooth Sailing: Lipschitz-Driven Uncertainty Quantification for Spatial Association,” offers a fundamentally different way of thinking about uncertainty when analysing spatially dependent data. The implications extend far beyond academic statistics journals; they touch on everything from how we regulate industrial pollution to how we predict climate change impacts to how we rebuild public trust in scientific findings.

The Independence Illusion

The foundation of modern statistical inference rests on assumptions about independence. When you flip a coin one hundred times, each flip does not influence the next. When you survey a thousand randomly selected individuals about their voting preferences, one person's response (in theory) does not affect another's. These assumptions allow statisticians to calculate confidence intervals that accurately reflect the uncertainty in their estimates.

The mathematical elegance of these methods has driven their adoption across virtually every scientific discipline. Researchers can plug their data into standard software packages and receive confidence intervals that appear to quantify exactly how certain they should be about their findings. The 95 per cent confidence interval has become a ubiquitous fixture of scientific communication, appearing in everything from pharmaceutical trials to climate projections to economic forecasts.

But what happens when your data points are measurements of air quality taken from sensors scattered across a metropolitan area? Or unemployment rates in neighbouring counties? Or temperature readings from weather stations positioned along a coastline? In these cases, the assumption of independence collapses. Air pollution in one city block is correlated with pollution in adjacent blocks. Economic conditions in Leeds affect conditions in Bradford. Weather patterns in Brighton influence readings in Worthing.

Waldo Tobler, a cartographer and geographer working at the University of Michigan, articulated this principle in 1970 with what became known as the First Law of Geography: “Everything is related to everything else, but near things are more related than distant things.” This observation, rooted in common sense about how the physical world operates, poses a profound challenge to statistical methods built on the assumption that observations are independent.

The implications of Tobler's Law extend far beyond academic geography. When a researcher collects data from locations scattered across a landscape, those observations are not independent samples from some abstract distribution. They are measurements of a spatially continuous phenomenon, and their values depend on their locations. A temperature reading in Oxford tells you something about the temperature in Reading. A housing price in Islington correlates with prices in neighbouring Hackney. An infection rate in one postal code relates to rates in adjacent areas.

Tamara Broderick, an associate professor in MIT's Department of Electrical Engineering and Computer Science, a member of the Laboratory for Information and Decision Systems, an affiliate of the Computer Science and Artificial Intelligence Laboratory, and senior author of the new research, explains the problem in concrete terms. “Existing methods often generate confidence intervals that are completely wrong,” she says. “A model might say it is 95 per cent confident its estimation captures the true relationship between tree cover and elevation, when it didn't capture that relationship at all.”

The consequences are not merely academic. Ignoring spatial autocorrelation, as researchers from multiple institutions have documented, leads to what statisticians call “narrowed confidence intervals,” meaning that studies appear more certain of their findings than they should be. This overconfidence can cascade through scientific literature and into public policy, creating a false sense of security about findings that may not withstand scrutiny.
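How much this matters is easy to demonstrate with a toy simulation (my own sketch, not taken from the MIT paper): give readings along a transect of sensors an AR(1)-style spatial correlation, then check how often the textbook independence-based 95 per cent interval actually covers the true mean.

```python
import numpy as np

# Toy demonstration: spatially correlated "sensor" readings make the
# textbook 95% confidence interval, which assumes independence, cover
# the true mean far less often than advertised.
rng = np.random.default_rng(0)
n, rho, trials = 200, 0.9, 2000   # sensors per transect, neighbour correlation, repeats
true_mean, covered = 0.0, 0

for _ in range(trials):
    # AR(1)-style correlation along a line of sensors: each reading is
    # strongly tied to its neighbour; unit variance throughout.
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for i in range(1, n):
        x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * e[i]
    # Naive 95% CI that pretends the n readings are independent.
    half_width = 1.96 * x.std(ddof=1) / np.sqrt(n)
    covered += abs(x.mean() - true_mean) <= half_width

coverage = covered / trials
print(f"nominal 95% CI, actual coverage: {coverage:.0%}")  # well below 95%
```

With neighbour correlation of 0.9, the effective number of independent observations is a small fraction of the nominal 200, so the naive interval is several times too narrow and its real coverage collapses.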

Three Assumptions, All Violated

The MIT research team, which included postdoctoral researcher David R. Burt, graduate student Renato Berlinghieri, and assistant professor Stephen Bates alongside Broderick, identified three specific assumptions that conventional confidence interval methods rely upon, all of which fail in spatial contexts.

David Burt, who received his PhD from Cambridge University where he studied under Professor Carl Rasmussen, brought expertise in Bayesian nonparametrics and approximate inference to the project. His background in Gaussian processes and variational inference proved essential in developing the theoretical foundations of the new approach.

The first assumption is that source data is independent and identically distributed. This implies that the probability of including one location in a dataset has no bearing on whether another location is included. But consider how the United States Environmental Protection Agency positions its air quality monitoring sensors. These sensors are not scattered randomly; they are placed strategically, with the locations of existing sensors influencing where new ones are deployed. Urban areas receive denser coverage than rural regions. Industrial zones receive more attention than residential areas. The national air monitoring system, according to a Government Accountability Office report, has limited monitoring at local scales and in rural areas.

Research published in GeoHealth in 2023 documented systematic biases in crowdsourced air quality monitoring networks such as PurpleAir and OpenAQ. While these platforms aim to democratise pollution monitoring, their sensor locations suffer from what the researchers termed “systematic racial and income biases.” Sensors tend to be deployed in predominantly white areas with higher incomes and education levels compared to census tracts with official EPA monitors. Areas with higher densities of low-cost sensors tend to report lower annual average PM2.5 concentrations than EPA monitors in all states except California, suggesting that the networks are systematically missing the most polluted areas where vulnerable populations often reside. This is not merely an equity concern; it represents a fundamental violation of the independence assumption that undermines any confidence intervals calculated from such data.

The second assumption is that the statistical model being used is perfectly correct. This assumption, the MIT team notes, is never true in practice. Real-world relationships between variables are complex, often nonlinear, and shaped by factors that may not be included in any given model. When researchers study the relationship between air pollution and birth weight, they are working with simplified representations of extraordinarily complex biological and environmental processes. The true relationship involves genetics, maternal health, nutrition, stress, access to healthcare, and countless other factors that interact in ways no model can fully capture.

The third assumption is that source data (used to build the model) is similar to target data (where predictions are made). In non-spatial contexts, this can be a reasonable approximation. But in geographic analyses, the source and target data may be fundamentally different precisely because they exist in different locations. A model trained on air quality data from Manchester may perform poorly when applied to conditions in rural Cumbria, not because of any methodological error, but because the spatial characteristics of these regions differ substantially. Urban canyons trap pollution differently than open farmland; coastal areas experience wind patterns unlike inland valleys; industrial corridors have emission profiles unlike residential suburbs.

The MIT researchers frame this as a problem of “nonrandom location shift.” Training data and target locations differ systematically, and this difference introduces bias that conventional methods cannot detect or correct. The bias is not random noise that averages out; it is systematic error that compounds across analyses.

Enter Spatial Smoothness

The MIT team's solution involves replacing these problematic assumptions with a different one: spatial smoothness, mathematically formalised through what is known as Lipschitz continuity.

The concept draws on work by the nineteenth-century German mathematician Rudolf Lipschitz. A function is Lipschitz continuous if there exists some constant that bounds how quickly the function can change. In plain terms, small changes in input cannot produce dramatically large changes in output. The function is “smooth” in the sense that it cannot jump erratically from one value to another. This property, seemingly abstract, turns out to capture something fundamental about how many real-world phenomena behave across space.
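The definition can be explored numerically. The sketch below, a rough empirical estimate rather than a formal proof, approximates the smallest constant K satisfying |f(x) − f(y)| ≤ K·|x − y| over a grid of point pairs: the sine function is 1-Lipschitz everywhere, while the square root, whose slope grows without bound near zero, has no finite constant on an interval touching the origin.

```python
import numpy as np

def lipschitz_estimate(f, xs):
    # Largest observed ratio |f(x) - f(y)| / |x - y| over all grid pairs:
    # an empirical lower bound on the true Lipschitz constant.
    xs = np.asarray(xs)
    dx = np.abs(xs[:, None] - xs[None, :])
    df = np.abs(f(xs)[:, None] - f(xs)[None, :])
    mask = dx > 0
    return float(np.max(df[mask] / dx[mask]))

xs = np.linspace(0.001, 10, 1000)
k1 = lipschitz_estimate(np.sin, xs)   # close to 1: sin is 1-Lipschitz
k2 = lipschitz_estimate(np.sqrt, xs)  # large: sqrt's slope blows up near 0
print(round(k1, 3), round(k2, 1))
```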

Applied to spatial data, this assumption translates to a straightforward claim: variables tend to change gradually across geographic space rather than abruptly. Air pollution levels on one city block are unlikely to differ dramatically from levels on the adjacent block. Instead, pollution concentrations taper off as one moves away from sources. Soil composition shifts gradually across a landscape. Temperature varies smoothly along a coastline. Rainfall amounts change progressively from one microclimate to another.

“For these types of problems, this spatial smoothness assumption is more appropriate,” Broderick explains. “It is a better match for what is actually going on in the data.”

This is not a claim that all spatial phenomena are smooth. Obvious exceptions exist: a factory fence separates clean air from polluted air; a river divides two distinct ecosystems; an administrative boundary marks different policy regimes; a geological fault line creates abrupt changes in soil composition. But for many applications, the smoothness assumption captures reality far better than the independence assumption it replaces. And critically, the Lipschitz framework allows researchers to quantify exactly how smooth they assume the data to be, incorporating domain knowledge into the statistical procedure.

The technical innovation involves decomposing the estimation error into two components. The first is a bias term that reflects the mismatch between where training data was collected and where predictions are being made. The method bounds this bias using what mathematicians call the Wasserstein-1 distance, computed via linear programming. This captures the “transportation cost” of moving probability mass from source locations to target locations, providing a rigorous measure of how different the locations are. The second is a randomness term reflecting noise in the data, estimated through quadratic programming.
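The Wasserstein-1 computation can be illustrated directly. The sketch below, with made-up coordinates standing in for sensor locations, poses optimal transport as the standard linear programme (minimise total transport cost subject to mass-balance constraints) and solves it with SciPy's general-purpose LP solver; the paper's own implementation will differ in detail.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical source and target sensor locations (x, y in km).
src = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
tgt = np.array([[2.0, 0.0], [2.0, 1.0], [3.0, 0.0], [3.0, 1.0]])
a = np.full(len(src), 1.0 / len(src))  # uniform mass on source locations
b = np.full(len(tgt), 1.0 / len(tgt))  # uniform mass on target locations

# Cost matrix: Euclidean distance between every source/target pair.
cost = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)

# Linear programme over the transport plan T (flattened row-major):
# minimise sum_ij cost_ij * T_ij, with row sums a, column sums b, T >= 0.
n, m = len(src), len(tgt)
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0  # mass leaving source i equals a[i]
for j in range(m):
    A_eq[n + j, j::m] = 1.0           # mass arriving at target j equals b[j]
b_eq = np.concatenate([a, b])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
w1 = res.fun
print(round(w1, 4))
```

Here every unit of mass must travel at least two kilometres rightward, so the optimal transport cost, and hence the Wasserstein-1 distance, is 2.0: a single number summarising how far apart the two monitoring networks are.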

The final confidence interval combines these components in a way that accounts for unknown bias while maintaining the narrowest possible interval that remains valid across all feasible values of that bias. The mathematics are sophisticated, but the intuition is not: acknowledge that your data may not perfectly represent the locations you care about, quantify how bad that mismatch could be, and incorporate that uncertainty into your confidence interval.
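That intuition might be sketched as follows. This is a deliberately simplified schematic, not the paper's actual construction (which optimises the interval jointly): a worst-case bias bound simply widens a conventional noise-based interval additively. All numbers are arbitrary.

```python
def lipschitz_interval(estimate, bias_bound, noise_sd, z=1.96):
    # Schematic only: unknown-but-bounded bias cannot be averaged away,
    # so it adds to the sampling-noise half-width rather than shrinking
    # with sample size.
    half_width = bias_bound + z * noise_sd
    return estimate - half_width, estimate + half_width

# A point estimate of 3.0 with noise sd 0.25 and a bias bound of 0.5:
lo, hi = lipschitz_interval(estimate=3.0, bias_bound=0.5, noise_sd=0.25)
print(lo, hi)
```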

The approach also makes explicit something that conventional methods hide: the relationship between source data locations and target prediction locations. By requiring researchers to specify both, the method forces transparency about the inferential gap being bridged.

Validation Through Comparison

The MIT team validated their approach through simulations and experiments with real-world data. The results were striking in their demonstration of how badly conventional methods can fail.

In a single-covariate simulation comparing multiple methods for generating confidence intervals, only the proposed Lipschitz-driven approach and traditional Gaussian processes achieved the nominal 95 per cent coverage rate. Competing methods, including ordinary least squares with various standard error corrections such as heteroskedasticity-consistent estimators and clustered standard errors, achieved coverage rates ranging from zero to fifty per cent. In other words, methods that claimed 95 per cent confidence were wrong at least half the time. A 95 per cent confidence interval that achieves zero per cent coverage is not a confidence interval at all; it is a statistical artefact masquerading as quantified uncertainty.

A more challenging multi-covariate simulation involving ten thousand data points produced even starker results. Competing methods never exceeded thirty per cent coverage, while the Lipschitz-driven approach achieved one hundred per cent. The difference was not marginal; it was categorical. Methods that researchers routinely use and trust were failing catastrophically while the new approach succeeded completely.

The researchers also applied their method to real data on tree cover across the United States, analysing the relationship between tree cover and elevation. This application matters because understanding how environmental variables covary across landscapes informs everything from forest management to climate modelling to biodiversity conservation. Here again, the proposed method maintained the target 95 per cent coverage rate across multiple parameters, while alternatives produced coverage rates ranging from fifty-four to ninety-five per cent, with some failing entirely on certain parameters.

Importantly, the method remained reliable even when observational data contained random errors, a condition that accurately reflects real-world measurement challenges in environmental monitoring, epidemiology, and other fields. Sensors drift out of calibration; human observers make mistakes; instruments malfunction in harsh conditions. A method that fails under realistic measurement error would have limited practical value, however elegant its mathematical foundations.

From Sensors to Birth Certificates: Applications Across Domains

While air quality monitoring provides a compelling example, the problems addressed by this research extend across virtually every domain that relies on geographically distributed data. The breadth of affected fields reveals how foundational this problem is to modern empirical science.

In epidemiology, spatial analyses are central to understanding disease patterns. Researchers use geographic data to study cancer clusters, track infectious disease spread, and investigate environmental health hazards. A 2016 study published in Environmental Health examined the relationship between air pollution and birth weight across Los Angeles County, using over nine hundred thousand birth records collected between 2001 and 2008. The researchers employed Bayesian hierarchical models to account for spatial variability in the effects, attempting to understand not just whether pollution affects birth weight on average but how that effect varies across different neighbourhoods. Even sophisticated approaches like these face the fundamental challenges the MIT team identified: models are inevitably misspecified, source and target locations differ, and observations are not independent.

The stakes in epidemiological research are particularly high. Studies examining links between highway proximity and dementia prevalence, air pollution and respiratory illness, and environmental exposures and childhood development all involve spatially correlated data. A study in Paris geocoded birth weight data to census block level, examining how effects differ by neighbourhood socioeconomic status and infant sex. Research in Kansas analysed over five hundred thousand births using spatiotemporal ensemble models at one kilometre resolution. When confidence intervals from such studies inform public health policy, the validity of those intervals matters enormously. If foundational studies overstate their certainty, policies may be based on relationships that are weaker or more variable than believed.

Economic modelling faces analogous challenges. Spatial econometrics, a field that emerged in the 1970s following work by Belgian economist Jean Paelinck, attempts to adapt econometric methods for geographic data. The field recognises that standard regression analyses can produce unstable parameter estimates and unreliable significance tests when they fail to account for spatial dependency. Researchers use these techniques to study regional economic resilience, the spatial distribution of wealth and poverty, and the effects of policy interventions that vary by location. The European Union relies on spatial economic analyses to allocate structural funds across member regions, attempting to reduce economic disparities between areas.

But as research published in Spatial Economic Analysis notes, ignoring spatial correlation can lead to “serious misspecification problems and inappropriate interpretation.” Models that fail to account for geographic dependencies may attribute effects to the wrong causes or estimate relationships with false precision. The finding that neighbouring regions tend to share economic characteristics, with high-growth areas clustered near other high-growth areas and low-growth areas similarly clustered, has profound implications for how economists model development and inequality.

Climate science faces perhaps the most consequential version of this challenge. Climate projections involve enormous spatial and temporal complexity, with multiple sources of uncertainty interacting across scales. A 2025 study published in Nature Communications examined how uncertainties from human systems (such as economic and energy models that project future emissions) combine with uncertainties from Earth systems (such as climate sensitivity and carbon cycle feedbacks) to affect temperature projections. The researchers found that uncertainty sources are not simply additive; they interact in ways that require integrated modelling approaches.

Current best estimates of equilibrium climate sensitivity, the amount of warming expected from a doubling of atmospheric carbon dioxide, range from approximately 2.5 to 4 degrees Celsius. This uncertainty has profound implications for policy, from carbon budgets to adaptation planning to the urgency of emissions reductions. Methods that improve uncertainty quantification for spatial data could help narrow these ranges or at least ensure that the stated uncertainty accurately reflects what is actually known and unknown. Climate models must work across spatial scales from global circulation patterns to regional impacts to local weather, each scale introducing its own sources of variability and uncertainty.

The Trust Deficit in Science

The timing of this methodological advance coincides with a broader crisis of confidence in scientific institutions. Data from the Pew Research Center shows that while trust in scientists remains higher than in many other institutions, it has declined since the Covid-19 pandemic. A 2024 survey of nearly ten thousand American adults found that seventy-four per cent had at least a fair amount of confidence in scientists, up slightly from seventy-three per cent the previous year but still below pre-pandemic levels.

A 2025 study surveying nearly seventy-two thousand people across sixty-eight countries, published by Cologna and colleagues, found that while seventy-eight per cent of respondents viewed scientists as competent, only forty-two per cent believed scientists listen to public concerns, and just fifty-seven per cent thought they communicate transparently. Scientists score high on expertise but lower on openness and responsiveness. This suggests that public scepticism is not primarily about competence but about communication and accountability.

More concerning are the partisan divides within individual countries. Research published in 2025 in Public Understanding of Science documented what the authors termed “historically unique” divergence in scientific trust among Americans. While scientists had traditionally enjoyed relatively stable cross-partisan confidence, recent years have seen that consensus fracture. The researchers found changes in patterns of general scientific trust emerging at the end of the Trump presidency, though it remains unclear whether these represent effects specific to that political moment or the product of decades-long processes of undermining scientific trust.

Part of this decline relates to how scientific uncertainty has been communicated and sometimes exploited. During the pandemic, policy recommendations evolved as evidence accumulated, a normal feature of science that nevertheless eroded public confidence when changes appeared inconsistent. Wear masks; do not wear masks; wear better masks. Stay six feet apart; distance matters less than ventilation. The virus spreads through droplets; actually, it spreads through aerosols. Each revision, scientifically appropriate as understanding improved, appeared to some observers as evidence of confusion or incompetence.

Uncertainty, properly acknowledged, can signal scientific honesty; poorly communicated, it becomes fodder for those who wish to dismiss inconvenient findings altogether. Research from PNAS Nexus in 2025 examined how uncertainty communication affects public trust, finding that effects depend heavily on whether the uncertainty aligns with recipients' prior beliefs. When uncertainty communication conflicts with existing beliefs, it can actually reduce trust. The implication is that scientists face a genuine dilemma: honest acknowledgement of uncertainty may undermine confidence in specific findings, yet false certainty ultimately damages the entire scientific enterprise when errors are eventually discovered.

The OECD Survey on Drivers of Trust in Public Institutions, published in 2024, found that only forty-one per cent of respondents believe governments use the best available evidence in decision making, and only thirty-nine per cent think communication about policy reforms is adequate. Evidence-based decision making is recognised as important for trust, but most people doubt it is actually happening.

Methods like the MIT approach offer a potential path forward. By producing confidence intervals that accurately reflect what is known and unknown, researchers can make claims that are more likely to withstand replication and scrutiny. Overstating certainty invites eventual correction; appropriately calibrated uncertainty builds durable credibility. When a study says it is 95 per cent confident, that claim should mean something.

Computational Reproducibility and the Trust but Verify Imperative

The MIT research also connects to broader discussions about reproducibility in computational science. A 2020 article in the Harvard Data Science Review by Willis and Stodden examined seven reproducibility initiatives across political science, computer science, economics, statistics, and mathematics, documenting how “trust but verify” principles could be operationalised in practice.

The phrase “trust but verify,” borrowed from Cold War diplomacy, captures an emerging ethos in computational research. Scientists should be trusted to conduct research honestly, but their results should be independently verifiable. This requires sharing not just results but the data, code, and computational workflows that produced them. The National Academies of Sciences, Engineering, and Medicine defines reproducibility as “obtaining consistent results using the same input data, computational steps, methods, and code, and conditions of analysis.”

The replication crisis that emerged first in psychology has spread to other fields. A landmark 2015 study in Science by the Open Science Collaboration attempted to replicate one hundred psychology experiments and found that only thirty-six per cent of replications achieved statistically significant results, compared to ninety-seven per cent of original studies. Effect sizes in replications were, on average, half the magnitude of original effects. Nearly half of original effect sizes were outside the 95 per cent confidence intervals of the replication effect sizes, suggesting that the original intervals were systematically too narrow.

The problem is not limited to any single discipline. Mainstream biomedical and behavioural sciences face failure-to-replicate rates near fifty per cent. A 2016 survey of over fifteen hundred researchers published in Nature found that more than half believed science was facing a replication crisis. Contributing factors include publication bias toward positive results, small sample sizes, analytical flexibility that allows researchers to find patterns in noise, and, critically, statistical methods that overstate certainty.

Confidence intervals play a central role in this dynamic. As critics have noted, the “inadequate use of p-values and confidence intervals has severely compromised the credibility of science.” Intervals that appear precise but fail to account for data dependencies, model misspecification, or other sources of uncertainty generate findings that seem robust but cannot withstand replication attempts. A coalition of seventy-two methodologists has proposed reforms including using metrics beyond p-values, reporting effect sizes consistently, and calculating prediction intervals for replication studies.

The MIT method addresses one specific source of such failures. By providing confidence intervals that remain valid under conditions that actually occur in spatial analyses, rather than idealised conditions that rarely exist, the approach reduces the gap between claimed and actual certainty. This is not a complete solution to the reproducibility crisis, but it removes one barrier to credible inference.

Practical Considerations and Limitations

Implementing the Lipschitz-driven approach requires researchers to specify a smoothness parameter, essentially a judgement about how rapidly the variable of interest can change across space. This introduces a form of subjectivity that some may find uncomfortable. The method demands that researchers make explicit an assumption that other methods leave implicit (and often violated).

In their tree cover analysis, the MIT team selected a Lipschitz constant implying that tree cover could change by no more than one percentage point per five kilometres. They arrived at this figure by balancing knowledge of uniform regions, where tree cover remains stable over large distances, against areas where elevation-driven transitions produce sharper gradients. Ablation studies showed that coverage remained robust across roughly one order of magnitude of variation in this parameter, providing some assurance that precise specification is not critical. Getting the constant approximately right matters; getting it exactly right does not.
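The arithmetic of that choice is simple to make explicit. The snippet below, using the figures quoted above (one percentage point per five kilometres) with a hypothetical helper name, converts the stated bound into a Lipschitz constant in percentage points per kilometre and shows the implied cap on how much tree cover can differ between two locations a given distance apart.

```python
# Stated bound: tree cover changes by at most 1 percentage point per 5 km.
K = 1.0 / 5.0  # Lipschitz constant, in percentage points per kilometre

def max_change(d_km, k=K):
    # Cap on the tree-cover difference between locations d_km apart,
    # under the smoothness assumption |f(a) - f(b)| <= k * dist(a, b).
    return k * d_km

print(max_change(25))  # at most 5 percentage points over 25 km

# The ablation result: coverage held across roughly an order of magnitude
# of variation in the constant, so an approximate choice suffices.
for k in (K / 3, K, K * 3):
    print(round(max_change(25, k), 2))
```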

Nevertheless, the requirement for domain expertise represents a shift from purely data-driven approaches. Researchers must bring substantive knowledge to bear on their statistical choices, a feature that some may view as a limitation and others as an appropriate integration of scientific judgement with mathematical technique. The alternative, methods that make implicit assumptions about smoothness or ignore the problem entirely, is not actually more objective; it simply hides the assumptions being made.

The method also requires computational resources, though the authors have released open-source code through GitHub that implements their approach. The linear programming for bias bounds and quadratic programming for variance estimation can handle datasets of reasonable size on standard computing infrastructure. As with many advances in statistical methodology, adoption will depend partly on accessibility and ease of use.

Implications for Policy and Governance

For policymakers who rely on scientific research to inform decisions, these methodological advances have practical implications that extend beyond academic statistics.

Environmental regulations often rely on exposure-response relationships derived from epidemiological studies. Air quality standards, for instance, are based on evidence linking pollution concentrations to health outcomes. If confidence intervals from foundational studies are too narrow, the resulting regulations may be based on false certainty. A standard that appears well-supported by evidence may rest on studies whose confidence intervals were systematically wrong. Conversely, if uncertainty is properly quantified, regulators can make more informed decisions about acceptable risk levels and safety margins.

Climate policy depends heavily on projections that involve spatial and temporal uncertainty. The Paris Agreement's goal of limiting warming to 1.5 degrees Celsius above pre-industrial levels rests on scientific estimates of carbon budgets and climate sensitivity. Better uncertainty quantification could inform how much margin policymakers should build into their targets. If we are less certain about climate sensitivity than our confidence intervals suggest, that argues for more aggressive emissions reductions, not less.

Public health interventions targeting environmental exposures, from lead remediation to air quality standards to drinking water regulations, similarly depend on studies that correctly characterise what is known and unknown. A systematic review of air pollution epidemiology published in Environmental Health Perspectives noted that “the quality of exposure data has been regarded as the Achilles heel of environmental epidemiology.” Methods that better account for spatial dependencies in exposure assessment could strengthen the evidence base for protective policies.

Towards More Honest Science

The MIT research represents one contribution to a broader effort to improve the reliability of scientific inference. It does not solve all problems with confidence intervals, nor does it address other sources of the reproducibility crisis, from publication bias to inadequate sample sizes to analytical flexibility. But it does solve a specific, important problem that has long been recognised but inadequately addressed.

When data varies across space, conventional statistical methods produce confidence intervals that can be, in the researchers' words, “completely wrong.” Methods that claim 95 per cent coverage achieve zero per cent. Methods designed for independent data are applied to dependent data, producing precise-looking numbers that mean nothing. The new approach produces intervals that remain valid under realistic conditions, intervals that actually deliver the coverage they promise.

For researchers working with spatial data, the practical message is clear: existing methods for uncertainty quantification may significantly understate the true uncertainty in your estimates. Alternatives now exist that better match the structure of geographic data. Using them requires more thought about smoothness assumptions and more transparency about source and target locations, but the result is inference that can be trusted.

For consumers of scientific research, whether policymakers, journalists, or members of the public, the message is more nuanced. The confidence intervals reported in published studies are not all created equal. Some rest on assumptions that hold reasonably well; others rest on assumptions that may be grossly violated. Evaluating the credibility of specific findings requires attention to methodology as well as results. A narrow confidence interval is not inherently more reliable than a wide one; what matters is whether the interval accurately reflects uncertainty given the structure of the data.

The MIT team's work exemplifies a productive response to the reproducibility crisis: rather than simply lamenting failures, developing better tools that make future failures less likely. Science advances not just through new discoveries but through improved methods of knowing, methods that more honestly and accurately characterise the boundaries of human understanding.

In an era of declining trust in institutions and increasing polarisation over scientific questions, such methodological advances matter. Not because they eliminate uncertainty, which is impossible, but because they ensure that the uncertainty we acknowledge is real and the confidence we claim is warranted. The goal is not certainty but honesty about the limits of knowledge. Statistical methods that deliver this honesty serve not just science but the societies that depend on it.


References and Sources

  1. MIT News. “New method improves the reliability of statistical estimations.” Massachusetts Institute of Technology, December 2025. https://news.mit.edu/2025/new-method-improves-reliability-statistical-estimations-1212

  2. Burt, D.R., Berlinghieri, R., Bates, S., and Broderick, T. “Smooth Sailing: Lipschitz-Driven Uncertainty Quantification for Spatial Association.” Conference on Neural Information Processing Systems, 2025. arXiv:2502.06067. https://arxiv.org/abs/2502.06067

  3. MIT CSAIL. “New method improves the reliability of statistical estimations.” https://www.csail.mit.edu/news/new-method-improves-reliability-statistical-estimations

  4. MIT EECS. “New method improves the reliability of statistical estimations.” https://www.eecs.mit.edu/new-method-improves-the-reliability-of-statistical-estimations/

  5. Tobler, W.R. “A Computer Movie Simulating Urban Growth in the Detroit Region.” Economic Geography, 1970. https://en.wikipedia.org/wiki/Tobler's_first_law_of_geography

  6. Mullins, B.J. et al. “Data-Driven Placement of PM2.5 Air Quality Sensors in the United States: An Approach to Target Urban Environmental Injustice.” GeoHealth, 2023. https://pmc.ncbi.nlm.nih.gov/articles/PMC10499371/

  7. Jerrett, M. et al. “Spatial variability of the effect of air pollution on term birth weight: evaluating influential factors using Bayesian hierarchical models.” Environmental Health, 2016. https://ehjournal.biomedcentral.com/articles/10.1186/s12940-016-0112-5

  8. Mohai, P. et al. “Methodologic Issues and Approaches to Spatial Epidemiology.” Environmental Health Perspectives, 2008. https://pmc.ncbi.nlm.nih.gov/articles/PMC2516558/

  9. Willis, C. and Stodden, V. “Trust but Verify: How to Leverage Policies, Workflows, and Infrastructure to Ensure Computational Reproducibility in Publication.” Harvard Data Science Review, 2020. https://hdsr.mitpress.mit.edu/pub/f0obb31j

  10. Open Science Collaboration. “Estimating the reproducibility of psychological science.” Science, 2015. https://www.science.org/doi/10.1126/science.aac4716

  11. Pew Research Center. “Public Trust in Scientists and Views on Their Role in Policymaking.” November 2024. https://www.pewresearch.org/science/2024/11/14/public-trust-in-scientists-and-views-on-their-role-in-policymaking/

  12. Milkoreit, M. and Smith, E.K. “Rapidly diverging public trust in science in the United States.” Public Understanding of Science, 2025. https://journals.sagepub.com/doi/10.1177/09636625241302970

  13. OECD. “OECD Survey on Drivers of Trust in Public Institutions – 2024 Results.” https://www.oecd.org/en/publications/oecd-survey-on-drivers-of-trust-in-public-institutions-2024-results_9a20554b-en.html

  14. Chan, E. et al. “Enhancing Trust in Science: Current Challenges and Recommendations.” Social and Personality Psychology Compass, 2025. https://compass.onlinelibrary.wiley.com/doi/full/10.1111/spc3.70104

  15. Nature Communications. “Quantifying both socioeconomic and climate uncertainty in coupled human–Earth systems analysis.” 2025. https://www.nature.com/articles/s41467-025-57897-1

  16. Anselin, L. “Spatial Econometrics.” Handbook of Applied Economic Statistics, 1999. https://web.pdx.edu/~crkl/WISE/SEAUG/papers/anselin01_CTE14.pdf

  17. Lipschitz Continuity. Wikipedia. https://en.wikipedia.org/wiki/Lipschitz_continuity

  18. Burt, D.R. Personal website. https://davidrburt.github.io/

  19. GitHub Repository. “Lipschitz-Driven-Inference.” https://github.com/DavidRBurt/Lipschitz-Driven-Inference

  20. U.S. EPA. “Ambient Air Monitoring Network Assessment Guidance.” https://www.epa.gov/sites/default/files/2020-01/documents/network-assessment-guidance.pdf

  21. Cologna, V. et al. “Trust in scientists and their role in society across 68 countries.” Nature Human Behaviour, 2025.

  22. National Academies of Sciences, Engineering, and Medicine. “Reproducibility and Replicability in Science.” 2019. https://www.ncbi.nlm.nih.gov/books/NBK547523/


Tim Green, UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from Roscoe's Story

In Summary: * After a calm, restful Sunday I'm listening now to pregame coverage ahead of tonight's men's basketball game between the Baylor Bears and the UCF Knights. This will be my basketball game before bedtime. After spending 3 hours this afternoon listening to a Texas Rangers MLB Spring Training win over the LA Dodgers, I'm reminded how fortunate it is that I'm a radio sports listener. It allows me to read my prayers while listening to games. There's no way I could do that if I were trying to follow the games on TV.

Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics: * bw= 226.08 lbs. * bp= 155/91 (62)

Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet: * 07:10 – 1 Philly cheese steak sandwich * 09:10 – bowl of noodles with beef and onions * 12:55 – 4 HEB Bakery cookies * 13:55 – 1 fresh banana * 15:20 – 1 bowl of home made beef and vegetable soup * 16:20 – 2 more cookies

Activities, Chores, etc.: * 07:20 – bank accounts activity monitored * 07:30 – read, pray, follow news reports from various sources, surf the socials, listen to music, and nap * 13:10 – listening now to 105.3 The Fan – Dallas ahead of the MLB Spring Training Game between the Texas Rangers and the Los Angeles Dodgers. * 17:00 – and the Rangers win 7 to 6. * 17:15 – read, pray, follow news reports from various sources, surf the socials, * 18:40 – tuned into the Baylor Sports Media Network for pregame coverage ahead of tonight's men's basketball game between the Baylor Bears and the UCF Knights.

Chess: * 11:05 – moved in all pending CC games

 

from The happy place

The first spring rain is now falling onto the snow-covered lands, creating a dangerous surface.

Like a slush made of dirt water, ice concealed by shallow puddles

Moreover

Wet icicles falling from the roofs (all of them) onto this aforementioned ground are making the sidewalks more perilous than the roads they fringe, as the dangers there are everywhere.

And the gravel looks dirty and hard, yet it provides safety, of course it does.

And today I wore my sunglasses

Because the sun shines brighter now, it’s the spring sun.

Sometimes, however, the interesting thing happens indoors:

There was a big clog in the sewage behind the shower cubicle today.

It’s always been slow, but today it reached a point where something had to be done, as shower water spilled onto the bathroom floor.

I don’t think it’s ever been cleaned out before, because to move it, first the wall-mounted bathroom cabinet needs to be cleared and unmounted. Then there is enough room to move the cubicle to reveal the floor drain underneath. Like a 15-puzzle or a Tetris in reverse.

I was always curious to see how it looked behind and beneath this cubicle, and have been yearning to clean it thoroughly, because it looked just like I imagined it would with the black cluster of dust moths which were satisfying to clean out.

And

In the trap was the mother of all clogs, a fascinating very solid mass of god knows what, a last living (for I felt it was a living thing) remnant from the old lady, the previous owner who died in this apartment, and thousands of other things like a complete ecosystem or something with a color I haven’t seen before oddly fascinating:

I just drew it out with my bare hands — there weren’t any gloves nearby, and having my mind made up, I now felt an urgency to handle this promptly — feeling the surprisingly solid slimy mass with something hard like eggshells inside as I pulled it out and tossed it into the bin.

And now, having put it all together once more, water runs freely through the drain, better than I ever would’ve thought possible.

 
