from The happy place

It’s the busy week where I deliver some value here and there, eat candy out of a woven basket and just try to move forward one step at a time

I have two Umamusume horse girls now with S rating, I am getting the hang of it

Maybe this evening I will have a beer and light a fire in the fireplace

Yes

I feel myself drawn to the flames; they are dangerously warm and deadly, just like thousands of millions of other things

It’s all so fragile …

Do you believe in the afterlife?

I am not sure

And if there is a hell, I hope not…

I think the whole idea of Hell is generally unfair to neurotic people, who picture themselves burning in Hell for masturbating, while others walk the earth as terrible people, committing atrocities, never doubting for one second that heaven will wait for them

It’s not fair

This world

 
Read more... Discuss...

from The happy place

This Easter, the snow lay thick and wet like a cold blanket of misery. The rainy snow fell on my face and on my cheek it felt like icy tears.

And yes, the clouds finally gave way to let some sunshine through, but it will still take some time for all of the snow to melt.

But it feels easier today.

I even walk around with a vague smile on my face

And I think it’ll all work out in the end.

 

from Notes I Won’t Reread

Nothing happened today. Not even enough to complain about properly. Stayed away from social media. Not out of discipline, no; I just can't stand it. Same people repeating the same thoughts like they invented them. It's not even annoying anymore, just predictable. Like background noise you forget is there.

Routine (if you can call it that) is still the same. Work: get it done without thinking too much about it. Played a bit, more out of habit than interest. Zoned out for longer than I should have. Drinking whatever's around. Sleeping like it's an escape plan, not a necessity.

No highs, no lows. Just a flat line pretending to be a day.

Sincerely, Ahmed

 

from Kavânin-i Osmâniyye

There are valuable studies on the legal profession in the 19th-century Ottoman Empire. What I am curious about, however, is how the daily practice of lawyering actually worked.

Here, the Mehâkim-i nizamiye dâva vekilleri hakkında nizâmnâme ve ücret tarifesi (the regulation and fee tariff concerning attorneys before the nizamiye courts) can give us some idea. The fee tariff in particular offers insight into how the profession operated.

Roughly summarised (for courts of first instance):

  • Opinion (reyname) fee: 50 kuruş. As far as I can tell, this corresponds to today's consultation fee. Avukatın Kitabı (Özkent, p. 53), for example, lists it as a “consultation” (istişare) fee.
  • Petitions related to a case: 30 kuruş for petitions not exceeding one hundred and fifty words; beyond that, an extra 5 kuruş for each additional hundred words. A petition objecting to a judgment: 25 kuruş.
  • Hearing fees: 30 kuruş per hearing in cases decided finally (no appeal available), 50 kuruş in cases open to appeal.
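To put the petition rule in concrete terms, here is a small Python sketch of the fee calculation as I read the tariff. The function name is mine, and the tariff does not say how a partly used extra hundred words is counted, so rounding each started hundred up to a full hundred is an assumption.

```python
import math

def petition_fee(word_count: int) -> int:
    """Fee in kuruş for a case petition before a court of first instance,
    per the tariff as summarised above: 30 kuruş up to 150 words, then
    5 kuruş for each additional hundred words (each started hundred
    counted in full, which is an assumption)."""
    fee = 30
    if word_count > 150:
        fee += 5 * math.ceil((word_count - 150) / 100)
    return fee

print(petition_fee(150))  # 30 kuruş
print(petition_fee(450))  # 45 kuruş: three extra hundreds at 5 kuruş each
```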

What is interesting here is that while oral consultation today is, at least in theory, billed by the hour, with fees depending on the type of work performed, in the past the word count of the petition was the determining factor. Of course, how, and how far, this worked in practice, we do not yet know 😀

So what did these fees mean at the time?

A quick Google search leads us to Kemal Karpat's study 1880'de Kayseri Sancağı'nın Sosyal, Ekonomik ve İdari Durumu: İngiltere'nin Anadolu Konsolos Yardımcısı Lieutenant Ferdinant Bennet'in Raporu (Ekim 1880) (the social, economic and administrative situation of the Sanjak of Kayseri in 1880, based on the October 1880 report of Lieutenant Ferdinant Bennet, Britain's vice-consul for Anatolia; translated by Bayram Bayraktar).

On page 887, we see wages in Kayseri in 1880:

Continuing: on page 888, we see market prices in Kayseri.

So, by the attorney tariff, the 30 kuruş charged for the simplest petition would buy a pair of boots 😀. Likewise, attorney fees appear to sit well above the monthly/weekly labourers' wages given in the table above.

Of course, this comparison is not enough for a firm conclusion. The four-year gap between the sources, and inflation, would also need to be taken into account. Still, it helps put the tariff into context.

Going a little further, we come to GÜVEN, T., KARAOĞLU, Ö. (2020). VELİEFENDİ BASMA FABRİKASI'NDA İŞÇİ ÜCRETLERİ (1848-1876) (workers' wages at the Veliefendi calico-printing factory, 1848-1876). Abant İzzet Baysal Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 20(2), 389-412.

On page 402 we see that the average worker's wage at the Veliefendi printing factory in 1876 was 300 kuruş. That comes to the fee for ten short petitions 🙂.

To repeat: as far as I know, we still have no information on how far the tariff was actually applied.

 

from Skinny Dipping

[8.iv.26.f : Wednesday | A33] Since Monday, I have returned to, or restarted, a project?? … but not a project, more like a new way of life, a bilingual life, a life in which I read and write in both French and English … and even speak French from time to time, when I get the chance, and also (and this is very important) Japanese too, the language and the culture, along with an exploration of certain Japanese ideas that begin in haiku and Zen.

What follows is a record of this daily mode at the intersection of English, French, and Japan. I don't want to write about techniques for learning another language, because I am a beginner and I know nothing that you don't already know far better than I do … yes, my French is fractured too, but there is a reason for that, which I will reveal in good time, little by little … as they say: pardon my French.

and now, I begin …

a word of the day: galvauder, meaning to squander an advantage, a gift, or a quality through misuse; see also: souiller and clicher. / I found this word in an article on mingei by Takeaki Tajimi [here], in this sentence: “I wish to explore with you the true meaning of certain Japanese words, such as wabi-sabi, mono no aware, and yūgen, among others: words that are often debased (galvaudés), not only in France but also in Japan, where their usage has drifted away from its original meaning.”

It was the word wabi-sabi that brought me here … a reference to Japan and wabi-sabi in a conversation on YouTube [in Français Facile: see minute 15:00 here] between Juliette and Leo. Juliette said she would love to visit Japan. While listening to that conversation, I found it incredible just how many of the subjects they touched on in that brief conversation interest me: auteur films like those of Godard and Truffaut, the novels of Annie Ernaux, and even Japan and wabi-sabi!

Last September, Rachel (my wife) and I visited Manhattan for a few days, for a little holiday … September was month three of my year of mastering French [see §105 / #7936 for an explanation of my project to learn French fluently … the beginning of this blog, La pêche du jour] … and I was very taken with Japan; I had just begun studying Japanese as if I were a French speaker. On that little holiday I bought a pile of books, one of which was The Beauty of Everyday Things by Soetsu Yanagi, and when I heard Juliette (on the podcast) mention wabi-sabi, I remembered that book and the word coined by Soetsu …

Takeaki Tajimi: “The word mingei was coined by Yanagi Sōetsu (柳宗悦, 1889–1961), a major Japanese thinker of the twentieth century whose reflections focused mainly on craft. He was also behind the founding of the Nihon Mingeikan (Japan Folk Crafts Museum) in Komaba, Tokyo.”

in the end … I searched the web for information about mingei and wabi-sabi and found another vodcast, by Mylène Muller, entitled « Wabi-sabi : 5 idées concrètes que changeant la vie » (five concrete, life-changing ideas) [watch here] … I find it funny that all YouTubers promise total, very simple solutions for changing your life; for example, I found a vodcast by Matt Brooks-Green [here] on how keeping a journal can improve your mastery of any language … it's very simple: write in your notebook every day, better still if you write by hand, with a pen, on paper … evidently, writing is equivalent to speaking: et voilà! That was my intuition when I began my year of mastering French: if I write in French every day, I will learn it … it's only a matter of time.

 

from 下川友

I change from one train to another. A girl was talking about every single thing as if it had happened yesterday. As I watched her, rain began to fall at just the right moment. The instant she shouldered her backpack, the attention around us turned outward, and every colour went grey.

I take a seat on the train. As I am about to say the thing I had decided on, I remember a friend pointing out that my breathing reverses when I do. There are times when the act of remembering itself rises up as intense disgust. While I was at it, I also remembered that friend smoking a cigarette where the laundry hangs, giving his impressions of some internet show.

I walk home from the train. When I was young, there was a state in which my whole body was naturally involved when I spoke. A lie. There was never such a time. I have been old since I was young.

There are times when, by not choosing my words, the memory of the day we met drops away. That is a lie too. I regret not choosing my words, and so I remember it well.

There was a time when, the moment I picked up a paper bag, my relationship with the person I was dating collapsed all at once. A lie. I only wish I had some dramatic memory of a date.

I question myself, answer, and lie. It feels like giving myself counselling.

I come home. No one is there. The moment relief arrived, my body began going suddenly cold. Having only one facial expression matched the design of my own house rather well.

 

from Ira Cogan

The disappearing in-between by Abey Koshy Itty spoke to me. Here’s a quote from it:

Then there's infinite scroll, which was invented in 2006 by a designer named Aza Raskin.

His intent was simple: make browsing more seamless. But the feature removed every natural stopping point.

There's no bottom of the page. No moment where your brain gets a chance to ask, “do I actually want to keep going?”

Raskin has since expressed deep regret about his creation, estimating that infinite scrolling wastes roughly 200,000 human lifetimes per day.

Read that number again. 200,000 human lifetimes. Per day.

I don’t recall how I stumbled across this site and post. I sometimes (okay, always) have a lot of browser tabs open; I go through them and save what interests me, but I don’t always keep track of how I got there. Anyway, this is the kind of thing I think about often.

-Ira

 

from Crónicas del oso pardo

Swamiji, our beloved master, worked as a boy selling food in the streets of Mumbai until he could gather some money to begin his pilgrimage to Mount Kailash.

In a temple along the way, he heard a hymn in Sanskrit and, without understanding such sacred words, was enlightened by the grace of Lord Ganesha, the protective deity of the temple, who carried him to the feet of the Master.

Coming down from Mount Kailash, he reached a lake swarming with mosquitoes, where he had a vision of the suffering of the beings in the underworld; he wept and felt the urge to bring them relief. And so he appeared in that terrible realm.

In the dark dwelling of the tormented souls, the condemned noticed that the demons bowed and froze before Swamiji. Some therefore thought he must be a chief of demons. Whenever his gaze met that of some suffering spirit, the spirit vanished, so many believed he was sending them to the circles of the worst torments. Those who thought so avoided looking at him.

When Swamiji departed, the demons recovered their strength, and one of them said:

“What power this holy man has! When he looks at one of these wretches and even a shred of devotion remains in him, he sends him straight off to the celestial palaces.”

“Not to add fuel to the fire,” said another demon, “but he also took two colleagues with him.”

“Yes, that is what happens to those who doubt,” concluded the first.

 

from Micropoemas

Unchecked, it dynamites, covers up, pollutes, puts spokes in its own wheels (and dreams of touching the sky).

 

from Mitchell Report

I have been watching the Artemis II mission off and on. I saw these pictures on the NASA website, and here are a few that I really like. They definitely got me thinking.

I have always been fascinated by space and the Heavens. I would like to go to space, but not like we do today. If I went, I would want it to be on a Star Trek type shuttle or ship. Our spacecraft, much like our planes, are little more than thin tin cans.

Looking at these pictures really affected me. The Moon is very dead and very unwelcoming, and space is the same way. Then, seeing our planet “Earth” from that vantage point just shows the miracle God made for us and the love Jesus purchased for us. Why would you want to go anywhere else?

[Four NASA photographs: Earth from Orion, the full Moon, and two views of Earth over the lunar horizon. Captions:]

  • Hello, World: NASA astronaut and Artemis II Commander Reid Wiseman took this picture of Earth from the Orion spacecraft's window on April 2, 2026, after completing the translunar injection burn.
  • The Nearside of the Moon (April 4, 2026): a view of the nearside of the Moon, the side we always see from Earth. Some of the far side is visible as well, on the left edge, just beyond the black patch that is Orientale basin, a nearly 600-mile-wide crater that straddles the Moon's ne…
  • A Setting Earth (April 6, 2026): the lunar surface fills the frame in sharp detail, as seen during the Artemis II lunar flyby, while a distant Earth sets in the background. This image was captured at 6:41 p.m. EDT on April 6, 2026, just three minutes before the Orion spacecraft and…
  • Earthset: captured through the Orion spacecraft window at 6:41 p.m. EDT, April 6, 2026, during the Artemis II crew's flyby of the Moon.
Source: NASA — April 2026

I don't know how people can look at these incredible images and not think there is a grand designer. I am staying where we belong. Think about it: in the oceans or in space, you will always need a suit that could puncture, rupture, or run out of life-saving air or water. But God made our Earth its own spacesuit, one that replenishes the air and water we need.

These pictures are just beautiful. Space, the Moon, and Mars are places to visit for a day or two, but not places to live. It would be very isolating, even with other people. Look at that multicolored marble. It is home, and it is just beautiful.

#opinion #currentevents #inspiration

 

from Vino-Films

I watched as they walked together down a busy Brooklyn avenue.

They didn’t look like a couple. Just a respectful proximity.

What stayed with me was the grasp.

It was kind. Loving.

Something I hadn’t seen in a while.

A slightly hunched elderly woman, her frail, age-spotted hands with chipped pink fingernails, held the elbow of a much taller man.

He paced his stride to match hers.

She focused on her steps, carrying a quiet grace.

He didn’t look around.

Not to see who was watching, but to make sure she was okay.

I walked into a franchised burrito shop right after.

The feeling didn’t follow me in.

There was no line.

One customer already eating.

I didn’t feel welcomed.

Her expression said enough before she spoke.

She offered no guidance as I ordered.

“Well, it’s all written there.”

Flat. Unmoved.

A couple walked in behind me.

I suddenly felt exposed. Out of place.

“Don’t you prompt customers?” I asked.

She smiled.

It didn’t match the moment.

She finished the order. No mention of utensils. No effort.

I paid. Left. Hungry and irritated.

The couple behind me got the welcome I didn’t.

I called another location.

That led to the district manager.

She knew exactly who I was talking about.

Refund. Gift card.

But that wasn’t the point.

Maybe no one had slowed their stride for the employee in a long time.

All Social: https://beacons.ai/vinofilms

#brooklyn #ny #kindness #anger #vinofilms #vinofilmsarchives

 

from SmarterArticles

In November 2025, Yann LeCun walked into Mark Zuckerberg's office and told his boss he was leaving. After twelve years building Meta's AI research operation into one of the most respected in the world, the Turing Award winner had decided that the entire industry was heading in the wrong direction. Four months later, his new venture, Advanced Machine Intelligence Labs, announced the largest seed round in European startup history: $1.03 billion to build AI systems that do not merely predict the next word in a sentence, but understand how physical reality actually works.

The money is staggering. The ambition is larger. And the question it raises is one that should unsettle anyone paying attention: if we succeed in building machines that can model the physical world with superhuman fidelity, will we have any idea what those machines actually know?

Welcome to the age of world models, where the gap between what AI understands and what we understand about AI threatens to become the defining tension of the next decade.

A Turing Winner's Trillion-Dollar Heresy

LeCun has never been shy about his contrarian streak. Even whilst serving as Meta's chief AI scientist, he publicly and repeatedly argued that the industry's obsession with large language models was fundamentally misguided. “Scaling them up will not allow us to reach AGI,” he has said, a position that put him at odds with the prevailing orthodoxy at OpenAI, Google, and, increasingly, within his own employer. His departure, first confirmed in a December 2025 LinkedIn post, was not merely a career move. It was a declaration of intellectual war.

AMI Labs, headquartered in Paris with additional offices in New York, Montreal, and Singapore, is built around a deceptively simple thesis: real intelligence does not begin in language. It begins in the world. The company's technical foundation is LeCun's Joint Embedding Predictive Architecture, or JEPA, a framework he first proposed in a 2022 position paper titled “A Path Towards Autonomous Intelligence.” Where large language models like ChatGPT, Claude, and Gemini learn by predicting the next token in a sequence of text, JEPA learns by predicting abstract representations of sensory data. It does not try to reconstruct every pixel or predict every word. Instead, it learns to capture the structural, meaningful patterns that govern how environments behave and change over time.

The distinction matters enormously. LeCun has used the example of video prediction to illustrate the point: trying to forecast every pixel of a future video frame is computationally ruinous, because the world is full of chaotic, unpredictable details like flickering leaves, shifting shadows, and textured surfaces. A generative model wastes enormous capacity modelling this noise. JEPA sidesteps the problem entirely by operating in an abstract embedding space, focusing on the low-entropy, structural aspects of a scene rather than its surface-level chaos.

The $1.03 billion seed round, which values AMI at $3.5 billion pre-money, drew an extraordinary roster of backers. The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Additional investors include NVIDIA, Temasek, Samsung, Toyota Ventures, and Bpifrance, alongside individuals such as Jeff Bezos, Mark Cuban, and Eric Schmidt. LeCun initially sought approximately 500 million euros, according to a leaked pitch deck reported by Sifted. Demand far exceeded that figure.

Day-to-day operations are led by Alexandre LeBrun, the French entrepreneur who previously founded and ran Nabla, a medical AI startup. The leadership roster also includes Saining Xie, formerly of Google DeepMind, as chief science officer; Pascale Fung as chief research and innovation officer; Michael Rabbat as VP of world models; and Laurent Solly, Meta's former VP for Europe, as chief operating officer. LeCun himself serves as executive chairman whilst maintaining his professorship at New York University.

LeBrun has been candid about the timeline. “AMI Labs is a very ambitious project, because it starts with fundamental research,” he has said. “It's not your typical applied AI startup that can release a product in three months.” Within three to five years, LeCun has stated, the goal is to produce “fairly universal intelligent systems” capable of deployment across virtually any domain requiring machine intelligence. The initial commercial targets include healthcare, robotics, wearables, and industrial automation.

What World Models Actually Are (and Why They Change Everything)

To grasp why a billion dollars is flowing into world models, you need to understand what they are and why the current generation of AI systems falls short. A world model, in its simplest formulation, is an AI system designed to understand and predict how the physical world works. Gravity, motion, cause and effect, spatial relationships, object permanence: these are the kinds of knowledge that a world model attempts to internalise, not through explicit programming, but through learning from vast quantities of sensory data.

This is not an entirely new idea. The concept of internal models of reality has deep roots in cognitive science, where researchers have long argued that human intelligence depends on our brain's ability to simulate possible futures before we act. When you reach for a glass of water, you do not consciously calculate trajectories and grip forces. Your brain runs a rapid internal simulation, predicting what will happen and adjusting on the fly. World models attempt to give machines a similar capability.

Google DeepMind CEO Demis Hassabis, the 2024 Nobel laureate in Chemistry, has articulated the problem with current approaches in characteristically vivid terms. At the India AI Impact Summit in February 2026, he described today's AI systems as possessing “jagged intelligence,” explaining: “Today's systems can get gold medals in the International Maths Olympiad, really hard problems, but sometimes can still make mistakes on elementary maths if you pose the question in a certain way. A true general intelligence system shouldn't have that kind of jaggedness.” Large language models, Hassabis has argued, are ultimately sophisticated probability predictors. They do not genuinely understand the physical laws of the real world.

Fei-Fei Li, the Stanford professor often described as the “godmother of AI” for her foundational work on ImageNet, has put it even more bluntly. LLMs, she has said, are like “wordsmiths in the dark,” possessing elaborate linguistic ability but lacking spatial intelligence and physical experience. Her own company, World Labs, released its Marble world model in November 2025, capable of generating entire 3D worlds from a text prompt, image, video, or rough layout. World Labs is now reportedly in discussions at a $5 billion valuation after raising $230 million in funding.

The broader landscape is moving rapidly. Google DeepMind launched Genie 3, the first real-time interactive world model capable of generating navigable 3D environments at 24 frames per second, maintaining strict object permanence and consistent physics without a separate memory module. NVIDIA's Cosmos platform, announced at CES 2025 and trained on 9,000 trillion tokens drawn from 20 million hours of real-world data, has surpassed 2 million downloads. Waymo has built its autonomous vehicle world model on top of Genie 3, using it to train self-driving cars in simulated environments. Reports indicate that OpenAI triggered a “code red” response to Genie 3's capabilities, accelerating efforts to add spatial understanding to GPT-5.

Over $1.3 billion in funding flowed into world model startups in early 2026 alone. This is not a niche research interest. It is rapidly becoming the central front in the race towards more capable AI.

The Architecture of Understanding

AMI Labs' approach differs from its competitors in important ways. Where World Labs focuses on generating photorealistic 3D environments and DeepMind's Genie 3 emphasises interactive simulation, JEPA is fundamentally about learning representations rather than generating outputs.

The architecture works through a deceptively elegant mechanism. JEPA takes a pair of related inputs, such as consecutive video frames or adjacent image patches, and encodes each into an abstract representation using separate encoder networks. A predictor module then attempts to forecast the representation of the “target” input from the representation of the “context” input. Crucially, this prediction happens entirely in abstract embedding space, never at the level of raw pixels or tokens.

This creates what amounts to a learned physics engine. The system develops an internal model of how things relate to one another and how they change over time, without being burdened by the task of reconstructing surface-level details. An optional latent variable, often denoted as z, allows the model to account for inherent uncertainty, representing different hypothetical scenarios for aspects of the target that the context alone cannot determine.
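As a rough illustration only, the mechanism above can be sketched in a few lines of Python. Everything here is invented for shape-checking purposes: the dimensions, the single-layer `encoder` and `predictor` functions, and the random, untrained weights. Real JEPA systems use deep vision transformers and an exponential-moving-average target encoder, none of which appears below.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # Map a raw input (e.g. a flattened video frame) to an abstract embedding.
    return np.tanh(x @ W)

def predictor(s_context, z, V):
    # Forecast the target's embedding from the context embedding plus a
    # latent z covering what the context alone cannot determine.
    return np.tanh(np.concatenate([s_context, z]) @ V)

# Hypothetical sizes: 64-dim inputs, 16-dim embeddings, 4-dim latent.
d_in, d_emb, d_z = 64, 16, 4
W_ctx = rng.normal(size=(d_in, d_emb))     # context-encoder weights
W_tgt = rng.normal(size=(d_in, d_emb))     # target-encoder weights
V = rng.normal(size=(d_emb + d_z, d_emb))  # predictor weights

x_context = rng.normal(size=d_in)  # e.g. frame t
x_target = rng.normal(size=d_in)   # e.g. frame t+1
z = rng.normal(size=d_z)           # latent uncertainty variable

s_x = encoder(x_context, W_ctx)
s_y = encoder(x_target, W_tgt)
s_y_hat = predictor(s_x, z, V)

# The training signal lives entirely in embedding space: no pixel of the
# target frame is ever reconstructed.
loss = float(np.mean((s_y_hat - s_y) ** 2))
```

The point of the sketch is the last line: the loss compares embeddings to embeddings, which is what lets a JEPA-style model ignore flickering leaves and shifting shadows.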

Several variants already exist. I-JEPA learns by predicting representations of image regions from other regions, developing abstract understanding of visual scenes without explicit labels. V-JEPA extends this to video, predicting missing or masked parts of video sequences in representation space, pre-trained entirely with unlabelled data. VL-JEPA adds vision-language capability, predicting continuous embeddings of target texts rather than generating tokens autoregressively, achieving stronger performance with 50 per cent fewer trainable parameters.

The promise is tantalising. An AI system built on JEPA principles could, in theory, develop the kind of intuitive physical understanding that enables a child to predict that pushing a table will move the book sitting on it. It could reason about cause and effect, plan actions in the physical world, and adapt to novel situations without the brittleness that characterises current systems.

But there is a catch. And it is a significant one.

The Understanding Gap Widens

Here is the paradox at the heart of the world models revolution: the better these systems become at understanding physical reality, the harder they become for us to understand. We are constructing machines designed to build rich internal representations of how the world works, and we have strikingly little ability to inspect, interpret, or verify what those representations actually contain.

This is not a new problem, but world models threaten to make it dramatically worse. The interpretability challenges that plague current large language models are already formidable. Mechanistic interpretability, the effort to reverse-engineer neural networks into human-understandable components, has been recognised by MIT Technology Review as a “breakthrough technology for 2026.” Yet the field remains at what researchers describe as a critical inflection point, with genuine progress coexisting alongside fundamental barriers.

The core difficulty is what researchers call superposition. Because there are more features that a neural network needs to represent than there are dimensions available to represent them, the network compresses information in ways that produce polysemantic neurons, individual units that contribute to multiple, semantically distinct features. Understanding what a network “knows” requires disentangling this compressed representation, and the dominant tool for doing so, sparse autoencoders, faces serious unsolved problems. Reconstruction error remains stubbornly high, with 10 to 40 per cent performance degradation. Features split and absorb in unpredictable ways. And the results depend heavily on the specific dataset used.
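To make the sparse-autoencoder idea concrete, here is a minimal Python sketch under stated assumptions: the dimensions are made up, the weights are untrained, and the training objective (reconstruction error plus an L1 sparsity penalty) is only computed, not optimised. It illustrates the shapes involved, not any lab's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: features outnumber model dimensions four to one,
# mirroring the overcomplete dictionaries used against superposition.
d_model, d_features = 32, 128

W_enc = 0.1 * rng.normal(size=(d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = 0.1 * rng.normal(size=(d_features, d_model))

def sae(activation):
    # Encode a dense, polysemantic activation vector into a larger,
    # ideally sparse, set of candidate features, then decode it back.
    features = np.maximum(0.0, activation @ W_enc + b_enc)  # ReLU
    reconstruction = features @ W_dec
    return features, reconstruction

activation = rng.normal(size=d_model)  # stand-in for one residual-stream vector
features, recon = sae(activation)

# The two terms a real training loop would trade off:
l2 = float(np.sum((recon - activation) ** 2))  # reconstruction error
l1 = float(np.sum(np.abs(features)))           # sparsity penalty
```

In these terms, the stubborn reconstruction error mentioned above is the `l2` term refusing to approach zero once the sparsity pressure is doing its job.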

Anthropic, the AI safety company, has made mechanistic interpretability a central focus, extracting interpretable features from its Claude 3 Sonnet model using sparse autoencoders and publishing results showing features related to deception, sycophancy, bias, and dangerous content. Their attribution graphs, released in March 2025, can successfully trace computational paths for roughly 25 per cent of prompts. For the remaining 75 per cent, the computational pathways remain opaque.

A 2025 paper published at the International Conference on Learning Representations proved that many circuit-finding queries in neural networks are NP-hard, remain fixed-parameter intractable, and are inapproximable under standard computational assumptions. In plain language: for many of the questions we most urgently need to answer about what neural networks are doing, there may be no efficient algorithm that can provide the answer.

Now consider what happens when you move from language models to world models. JEPA operates in abstract embedding spaces that are, by design, removed from human-interpretable inputs and outputs. A language model at least traffics in words, which we can read. A world model's internal representations are abstract mathematical objects encoding relationships between physical phenomena. The interpretability challenge is not merely scaled up. It is qualitatively different.

The field is split on how to respond. Anthropic has set the ambitious goal of being able to “reliably detect most AI model problems by 2027.” Google DeepMind, meanwhile, has pivoted away from sparse autoencoders towards what it calls “pragmatic interpretability,” an acknowledgement that full mechanistic understanding of frontier models may be neither achievable nor necessary. Corti, a Danish AI company, has developed GIM (Gradient Interaction Modifications), a gradient-based method that has topped the Hugging Face Mechanistic Interpretability Benchmark, offering improved accuracy for identifying which components in a model are responsible for specific behaviours. But even these advances represent incremental progress against an exponentially growing challenge.

When Physics Engines Dream

The practical implications of AI systems that can simulate physical reality extend far beyond academic curiosity. Consider the domains AMI Labs is targeting: healthcare, robotics, wearables, and industrial automation. In each of these fields, the consequences of AI misunderstanding the physical world range from costly to catastrophic.

AMI Labs has already established a partnership with Nabla, the healthtech company LeBrun previously founded, providing a direct conduit to the healthcare sector. In medicine, the hallucinations that plague large language models are not merely embarrassing; they can be lethal. A world model that genuinely understands human physiology, drug interactions, and disease progression could revolutionise clinical decision-making. But the opacity of that understanding creates a novel kind of risk: a system that is right for reasons nobody can articulate, or wrong for reasons nobody can detect.

In robotics, world models promise to solve one of the field's most persistent bottlenecks. Training robots in the physical world is slow, expensive, and dangerous. World models enable training in simulation, where a robot can experience millions of scenarios in hours rather than years. NVIDIA's Cosmos platform already allows autonomous vehicle and robotics developers to synthesise rare, dangerous edge-case conditions that would be prohibitively risky to create in reality. But the fidelity of the simulation depends entirely on the accuracy of the world model, and verifying that accuracy requires understanding what the model has learned, which brings us back to the interpretability gap.

The autonomous vehicle industry illustrates the stakes with particular clarity. Waymo's decision to build its world model on Google DeepMind's Genie 3 represents a bet that AI-generated simulations can adequately capture the chaotic complexity of real-world driving. The potential benefits are enormous: safer vehicles, faster development cycles, dramatically reduced testing costs. The potential risks are equally significant. If the world model contains subtle errors in its understanding of physics (the way light refracts in rain, the friction coefficient of wet roads, the behaviour of pedestrians at unmarked crossings) those errors will be systematically baked into every vehicle trained on the simulation.

Governing What We Cannot See

The regulatory landscape is struggling to keep pace with these developments. The European Union's AI Act, the world's most comprehensive legal framework for artificial intelligence, entered into force in August 2024 and will be fully applicable by August 2026. Its risk-based classification system imposes graduated obligations based on potential harm, with penalties reaching up to 35 million euros or 7 per cent of global annual turnover for the most serious violations.

But the AI Act was designed primarily with current AI systems in mind. Its requirements for high-risk systems, including documented risk management, robust data governance, detailed technical documentation, automatic logging, human oversight, and safeguards for accuracy and robustness, assume a level of inspectability that world models may not provide. How do you document the risk management of a system whose internal representations of physical reality are abstract mathematical objects that resist human interpretation? How do you ensure “human oversight” of a physics simulation running in an embedding space that no human can directly perceive?

The European Council, on 13 March 2026, agreed a position to streamline rules on artificial intelligence, whilst the Commission's Digital Omnibus package, submitted in November 2025, proposed adjusting the timeline for high-risk system obligations. But these adjustments are largely procedural. The fundamental question of how to regulate AI systems whose internal workings are opaque to their creators remains unaddressed.

At the broader international level, the AI Impact Summit 2026 in New Delhi produced a Leaders' Declaration recognising that “AI's promise is best realised only when its benefits are shared by humanity.” The International Institute for Management Development's AI Safety Clock, which began at 29 minutes to midnight in September 2024, now stands at 18 minutes to midnight as of March 2026, reflecting growing expert concern about the pace of AI development relative to safety measures.

In the United States, the NIST AI Risk Management Framework and ISO/IEC 42001 provide voluntary guidelines, but nothing approaching the binding force of the EU's approach. China's own regulatory framework focuses on algorithmic transparency and content generation, but similarly lacks specific provisions for world models. The result is a patchwork of rules designed for yesterday's AI, applied to tomorrow's.

Voices From Both Sides of the Divide

The debate over world models and their implications has produced sharp divisions amongst the people who understand these systems best.

LeCun himself has been consistently dismissive of existential risk concerns. He has called discussion of AI-driven existential catastrophe “premature,” “preposterous,” and “complete B.S.,” arguing that superintelligent machines will have no inherent desire for self-preservation and that AI can be made safe through continuous, iterative refinement. His position is that the path to safety runs through open science and open source, not through restriction and secrecy. Staying true to this philosophy, AMI Labs has committed to publishing its research and releasing substantial parts of its codebase. “We will also make a lot of code open source,” LeBrun has confirmed.

Geoffrey Hinton, who shared the 2018 Turing Award with LeCun and Yoshua Bengio for their contributions to deep learning, occupies the opposite pole. The researcher often described as the “Godfather of AI” has warned that advanced AI will become “much smarter than us” and render controls ineffective. At the Ai4 conference in 2025, Hinton proposed a “mother AI” concept to safeguard against potential AI takeover scenarios. Their public disagreements have become one of the defining intellectual conflicts in the field.

The broader expert community is similarly divided. Roman Yampolskiy, a computer scientist at the University of Louisville known for his work on AI safety, estimates a 99 per cent chance of an AI-caused existential catastrophe. LeCun places that probability at effectively zero. A survey of AI experts published in early 2025 found that many researchers, while highly skilled in machine learning, have limited exposure to core AI safety concepts, and that those least familiar with safety research are also the least concerned about catastrophic risk.

AGI timeline estimates vary wildly. Elon Musk has predicted AGI by 2026. Dario Amodei, CEO of Anthropic, has suggested 2026 or 2027. NVIDIA CEO Jensen Huang places the date at 2029. LeCun himself has argued it will take several more decades for machines to exceed human intelligence. Gary Marcus, the cognitive scientist and persistent AI sceptic, has suggested the timeline could be 10 or even 100 years.

What is notable about the world models debate is that it cuts across these existing fault lines. You do not need to believe in imminent superintelligence to be concerned about the understanding gap. A world model does not need to be superintelligent to be dangerous if it is deployed in high-stakes domains whilst remaining fundamentally opaque. The risk is not necessarily that AI becomes too smart. It is that AI becomes smart enough to matter in ways we cannot verify.

Reading the Black Box, Through a Glass Darkly

The technical community has not been idle in the face of these challenges. New architectures and methods are emerging that offer at least partial responses to the interpretability crisis.

Kolmogorov-Arnold Networks, or KANs, represent a fundamentally different neural network architecture that decomposes higher-dimensional functions into one-dimensional functions, increasing interpretability and allowing scientists to identify important features, reveal modular structures, and discover symbolic formulae in scientific data. However, their interpretability diminishes as network size increases, presenting a familiar scalability challenge: the very systems we most need to understand are the ones that resist understanding most stubbornly.
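The decomposition idea behind KANs can be shown in miniature. The following is not the KAN architecture itself, just a hand-picked example of the Kolmogorov-Arnold principle it builds on: a multivariate function expressed using only one-dimensional functions and addition. For f(x, y) = x·y on positive inputs, choosing log as the inner function and exp as the outer one gives an exact decomposition; KANs generalise this by *learning* the univariate functions (as splines) on every edge of the network.

```python
import math

def phi(t):
    """Inner univariate function (the same one serves both inputs here)."""
    return math.log(t)

def Phi(s):
    """Outer univariate function."""
    return math.exp(s)

def kan_style_product(x, y):
    """Compute x * y using only one-dimensional functions and addition:
    f(x, y) = Phi(phi(x) + phi(y)) = exp(log x + log y)."""
    return Phi(phi(x) + phi(y))

# Because each building block is a plain 1-D function, it can be plotted,
# inspected, or matched against a symbolic formula -- the interpretability
# property the KAN literature emphasises.
print(kan_style_product(3.0, 4.0))  # 12.0, up to floating-point error
```

The appeal for scientists is that each edge function is a curve you can look at; the scaling problem noted above is that a large KAN contains thousands of such curves, and the whole is no longer readable even when every part is.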

The collaborative paper published in January 2025 by 29 researchers across 18 organisations established the field's consensus open problems for mechanistic interpretability. Core concepts like “feature” still lack rigorous mathematical definitions. Computational complexity results prove that many interpretability queries are intractable. And practical methods continue to underperform simple baselines on safety-relevant tasks.

There is also the question of whether full interpretability is even the right goal. Some researchers argue for a more pragmatic approach: rather than trying to understand everything a model knows, develop reliable methods for detecting when a model is likely to fail. This is the philosophy behind DeepMind's pivot to pragmatic interpretability and behind Hassabis's proposed “Einstein test” for AGI, which asks whether an AI system trained on all human knowledge up to 1911 could independently discover general relativity. If it cannot, Hassabis argues, it remains “a very sophisticated pattern matcher” regardless of its other capabilities.

LeCun, characteristically, sees the problem differently. He has argued that the architecture itself is the solution: by designing systems that learn structured, abstract representations rather than opaque statistical correlations, world models could ultimately be more interpretable than language models, not less. JEPA's operation in abstract embedding space is, in his view, a feature rather than a bug, because those embeddings encode the meaningful structural relationships that humans also rely on to understand the world, even if the format is different.

This is an optimistic reading. Whether it proves correct will depend on research that has not yet been conducted, using methods that have not yet been invented, applied to systems that have not yet been built. In the meantime, the money is flowing, the labs are hiring, and the world models are being trained.

Europe's Unlikely Gambit

There is a geopolitical dimension to this story that deserves attention. LeCun has stated that there “is certainly a huge demand from the industry and governments for a credible frontier AI company that is neither Chinese nor American.” AMI Labs, with its Paris headquarters and record-setting European seed round, is positioning itself to fill that void.

The timing is deliberate. The EU's AI Continent Action Plan, published in April 2025, aims to make Europe a global leader in AI whilst safeguarding democratic values. France's state investment bank Bpifrance is amongst AMI's backers. The company's open research commitment aligns with European regulatory philosophy, which emphasises transparency and accountability in ways that closed American labs like OpenAI and Anthropic have been criticised for resisting.

But Europe's track record in turning fundamental research into commercially dominant technology is, to put it diplomatically, mixed. AMI Labs' $1.03 billion seed round is enormous, but it pales beside the tens of billions flowing into American and Chinese AI labs. LeBrun has acknowledged the challenge, noting that AMI will prioritise quality over quantity in building its team across its four global locations. The question is whether a commitment to open science and European values can coexist with the scale of resources needed to compete at the frontier.

The second-largest seed round ever, raised by the American firm Thinking Machines Lab in June 2025 at $2 billion, provides a sobering comparison. The world models race is global, and capital alone will not determine the winner. But capital certainly helps.

Sleepwalking With Eyes Open

So, are we sleepwalking into a future where AI understands the world better than we do, without us understanding the AI? The honest answer is: we might be, but not in the way the question implies.

The framing of “sleepwalking” suggests unawareness, but the striking thing about the current moment is how many people are aware of the problem and how few solutions are available. The researchers building world models know that interpretability is an unsolved challenge. The regulators drafting AI governance frameworks know that their rules were designed for a different generation of technology. The investors writing billion-dollar cheques know that the commercial applications are years away and the fundamental research questions remain open.

The danger is not ignorance. It is a collective decision to proceed despite uncertainty, driven by competitive pressure, scientific ambition, and the genuine potential of these systems to solve real problems. When LeCun talks about world models revolutionising healthcare by eliminating the hallucinations that make LLMs dangerous in clinical settings, he is not wrong about the potential. When Hassabis describes the need for AI that can reason about physics rather than merely predicting word probabilities, he is identifying a real limitation of current systems. When Fei-Fei Li argues for spatial intelligence as the next frontier, she is pointing towards capabilities that could transform robotics, manufacturing, and scientific discovery.

But potential is not proof. And the understanding gap, the asymmetry between AI's growing capacity to model reality and our limited capacity to model the AI, is real and widening. Every billion dollars invested in making world models more capable should, in principle, be matched by investment in making them more transparent. The evidence suggests that ratio is nowhere close to balanced.

The world models era is not something that is coming. It is here. AMI Labs' billion-dollar bet, backed by some of the most sophisticated investors and researchers on the planet, is one data point amongst many. The question is not whether machines will learn to simulate physical reality. It is whether we will develop the tools to understand what they have learned before the consequences of not understanding become irreversible.

LeCun has said that within three to five years, AMI aims to produce “fairly universal intelligent systems.” The AI Safety Clock stands at 18 minutes to midnight. And the gap between what AI can model and what humans can comprehend about those models grows wider with every training run.

We are not sleepwalking. We are walking with our eyes open, into a future whose shape we can see but whose details remain, for now, profoundly and perhaps permanently, beyond our ability to fully perceive.

References

  1. TechCrunch, “Yann LeCun's AMI Labs raises $1.03B to build world models,” 9 March 2026. https://techcrunch.com/2026/03/09/yann-lecuns-ami-labs-raises-1-03-billion-to-build-world-models/

  2. TechCrunch, “Who's behind AMI Labs, Yann LeCun's 'world model' startup,” 23 January 2026. https://techcrunch.com/2026/01/23/whos-behind-ami-labs-yann-lecuns-world-model-startup/

  3. MIT Technology Review, “Yann LeCun's new venture is a contrarian bet against large language models,” 22 January 2026. https://www.technologyreview.com/2026/01/22/1131661/yann-lecuns-new-venture-ami-labs/

  4. Sifted, “Yann LeCun's AMI Labs raises $1bn in Europe's biggest seed round,” March 2026. https://sifted.eu/articles/yann-lecun-ami-labs-meta-funding-round-nvidia

  5. Crunchbase News, “Turing Winner LeCun's New 'World Model' AI Lab Raises $1B In Europe's Largest Seed Round Ever,” March 2026. https://news.crunchbase.com/venture/world-model-ai-lab-ami-raises-europes-largest-seed-round/

  6. TechCrunch, “Yann LeCun confirms his new 'world model' startup, reportedly seeks $5B+ valuation,” 19 December 2025. https://techcrunch.com/2025/12/19/yann-lecun-confirms-his-new-world-model-startup-reportedly-seeks-5b-valuation/

  7. Meta AI Blog, “V-JEPA: The next step toward advanced machine intelligence,” 2024. https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/

  8. Meta AI Blog, “I-JEPA: The first AI model based on Yann LeCun's vision for more human-like AI,” 2023. https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/

  9. Introl, “World Models Race 2026: How LeCun, DeepMind, and others compete,” 2026. https://introl.com/blog/world-models-race-agi-2026

  10. News9live, “India AI Impact Summit 2026: DeepMind CEO Demis Hassabis says current AI still 'Jagged' and learning,” February 2026. https://www.news9live.com/technology/artificial-intelligence/india-ai-summit-2026-deepmind-hassabis-ai-jagged-learning-2932470

  11. Storyboard18, “Demis Hassabis says AGI not here yet, calls current AI 'jagged intelligence,'” 2026. https://www.storyboard18.com/brand-makers/google-deepmind-ceo-says-agi-not-here-yet-calls-current-ai-jagged-intelligence-90028.htm

  12. European Commission, “AI Act: Shaping Europe's digital future,” 2024. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  13. European Council, “Council agrees position to streamline rules on Artificial Intelligence,” 13 March 2026. https://www.consilium.europa.eu/en/press/press-releases/2026/03/13/council-agrees-position-to-streamline-rules-on-artificial-intelligence/

  14. TIME, “Meta's AI Chief Yann LeCun on AGI, Open-Source, and AI Risk,” 2024. https://time.com/6694432/yann-lecun-meta-ai-interview/

  15. WebProNews, “Yann LeCun and Geoffrey Hinton Clash on AI Safety in 2025,” 2025. https://www.webpronews.com/yann-lecun-and-geoffrey-hinton-clash-on-ai-safety-in-2025/

  16. arXiv, “Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts,” February 2025. https://arxiv.org/html/2502.14870v1

  17. Transformer Circuits, “Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet,” 2024. https://transformer-circuits.pub/2024/scaling-monosemanticity/

  18. Springer Nature, “Recent Emerging Techniques in Explainable Artificial Intelligence,” 2025. https://link.springer.com/article/10.1007/s11063-025-11732-2

  19. Futurum Group, “Yann LeCun's AMI Raises $1BN Seed Round – Is the World Model Era Finally Here?” March 2026. https://futurumgroup.com/insights/yann-lecuns-ami-raises-1bn-seed-round-is-the-world-model-era-finally-here/

  20. The Next Web, “Yann LeCun just raised $1bn to prove the AI industry has got it wrong,” March 2026. https://thenextweb.com/news/yann-lecun-ami-labs-world-models-billion

  21. Corti, “Corti introduces GIM: Benchmark-leading method for understanding AI model behavior,” 2025. https://www.corti.ai/stories/gim-a-new-standard-for-mechanistic-interpretability

  22. PhysOrg, “Kolmogorov-Arnold networks bridge AI and scientific discovery by increasing interpretability,” December 2025. https://phys.org/news/2025-12-kolmogorov-arnold-networks-bridge-ai.html

  23. Sombrainc, “An Ultimate Guide to AI Regulations and Governance in 2026,” 2026. https://sombrainc.com/blog/ai-regulations-2026-eu-ai-act

  24. Zaruko, “The Einstein Test: Why AGI Is Not Around the Corner,” 2026. https://zaruko.com/insights/the-einstein-test


Tim Green, UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 