from Micropoemas

Unrestrained, it dynamites itself, covers things up, pollutes, jams sticks in its own wheels (and dreams of touching the sky).

 

from Mitchell Report

I have been watching the Artemis II mission off and on. I saw these pictures on the NASA website, and here are a few that I really like. They definitely got me thinking.

I have always been fascinated by space and the Heavens. I would like to go to space, but not the way we do it today. If I went, I would want it to be on a Star Trek-style shuttle or ship. Our spacecraft, much like our planes, are little more than thin tin cans.

Looking at these pictures really affected me. The Moon is very dead and very unwelcoming, and space is the same way. Then, seeing our planet Earth from that vantage point shows the miracle God made for us and the love Jesus purchased for us. Why would you want to go anywhere else?

[Image: Earth seen from space, the Eastern Hemisphere with Australia prominent against the black of space]
Hello, World – NASA astronaut and Artemis II Commander Reid Wiseman took this picture of Earth from the Orion spacecraft's window on April 2, 2026, after completing the translunar injection burn.

[Image: the full Moon, bright against a completely black sky]
The Nearside of the Moon (April 4, 2026) – A view of the nearside of the Moon, the side we always see from Earth. Some of the far side is visible, as well, on the left edge, just beyond the black patch that is Orientale basin, a nearly 600-mile-wide crater that straddles the Moon's ne…

[Image: the cratered lunar surface in the foreground, with Earth above the horizon]
A Setting Earth (April 6, 2026) – The lunar surface fills the frame in sharp detail, as seen during the Artemis II lunar flyby, while a distant Earth sets in the background. This image was captured at 6:41 p.m. EDT, on April 6, 2026, just three minutes before the Orion spacecraft and…

[Image: a crescent Earth beyond the lunar horizon]
Earthset – Earthset captured through the Orion spacecraft window at 6:41 p.m. EDT, April 6, 2026, during the Artemis II crew's flyby of the Moon.
Source: NASA — April 2026

I don't know how people can look at these incredible images and not think there is a grand designer. I am staying right where we should be. Think about it: in the oceans or in space, you will always need a suit that could puncture, rupture, or run out of life-saving air or water. But God made our Earth its own spacesuit, one that replenishes the air and water we need.

These pictures are just beautiful. Space, the Moon, and Mars are places to visit for a day or two, but not places to live. It would be very isolating, even with other people. Look at that multicolored marble. It is home, and it is just beautiful.

#opinion #currentevents #inspiration

 

from Vino-Films

I watched as they walked together down a busy Brooklyn avenue.

They didn’t look like a couple. Just a respectful proximity.

What stayed with me was the grasp.

It was kind. Loving.

Something I hadn’t seen in a while.

A slightly hunched elderly woman held the elbow of a much taller man, her frail, age-spotted hands tipped with chipped pink fingernails.

He paced his stride to match hers.

She focused on her steps, carrying a quiet grace.

He didn’t look around.

Not to see who was watching, but to make sure she was okay.

I walked into a franchised burrito shop right after.

The feeling didn’t follow me in.

There was no line.

One customer already eating.

I didn’t feel welcomed.

Her expression said enough before she spoke.

She offered no guidance as I ordered.

“Well, it’s all written there.”

Flat. Unmoved.

A couple walked in behind me.

I suddenly felt exposed. Out of place.

“Don’t you prompt customers?” I asked.

She smiled.

It didn’t match the moment.

She finished the order. No mention of utensils. No effort.

I paid. Left. Hungry and irritated.

The couple behind me got the welcome I didn’t.

I called another location.

That led to the district manager.

She knew exactly who I was talking about.

Refund. Gift card.

But that wasn’t the point.

Maybe no one had slowed their stride for the employee in a long time.

All Social: https://beacons.ai/vinofilms

#brooklyn #ny #kindness #anger #vinofilms #vinofilmsarchives

 

from SmarterArticles

In November 2025, Yann LeCun walked into Mark Zuckerberg's office and told his boss he was leaving. After twelve years building Meta's AI research operation into one of the most respected in the world, the Turing Award winner had decided that the entire industry was heading in the wrong direction. Four months later, his new venture, Advanced Machine Intelligence Labs, announced the largest seed round in European startup history: $1.03 billion to build AI systems that do not merely predict the next word in a sentence, but understand how physical reality actually works.

The money is staggering. The ambition is larger. And the question it raises is one that should unsettle anyone paying attention: if we succeed in building machines that can model the physical world with superhuman fidelity, will we have any idea what those machines actually know?

Welcome to the age of world models, where the gap between what AI understands and what we understand about AI threatens to become the defining tension of the next decade.

A Turing Winner's Trillion-Dollar Heresy

LeCun has never been shy about his contrarian streak. Even whilst serving as Meta's chief AI scientist, he publicly and repeatedly argued that the industry's obsession with large language models was fundamentally misguided. “Scaling them up will not allow us to reach AGI,” he has said, a position that put him at odds with the prevailing orthodoxy at OpenAI, Google, and, increasingly, within his own employer. His departure, first confirmed in a December 2025 LinkedIn post, was not merely a career move. It was a declaration of intellectual war.

AMI Labs, headquartered in Paris with additional offices in New York, Montreal, and Singapore, is built around a deceptively simple thesis: real intelligence does not begin in language. It begins in the world. The company's technical foundation is LeCun's Joint Embedding Predictive Architecture, or JEPA, a framework he first proposed in a 2022 position paper titled “A Path Towards Autonomous Intelligence.” Where large language models like ChatGPT, Claude, and Gemini learn by predicting the next token in a sequence of text, JEPA learns by predicting abstract representations of sensory data. It does not try to reconstruct every pixel or predict every word. Instead, it learns to capture the structural, meaningful patterns that govern how environments behave and change over time.

The distinction matters enormously. LeCun has used the example of video prediction to illustrate the point: trying to forecast every pixel of a future video frame is computationally ruinous, because the world is full of chaotic, unpredictable details like flickering leaves, shifting shadows, and textured surfaces. A generative model wastes enormous capacity modelling this noise. JEPA sidesteps the problem entirely by operating in an abstract embedding space, focusing on the low-entropy, structural aspects of a scene rather than its surface-level chaos.

The $1.03 billion seed round, which values AMI at $3.5 billion pre-money, drew an extraordinary roster of backers. The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Additional investors include NVIDIA, Temasek, Samsung, Toyota Ventures, and Bpifrance, alongside individuals such as Jeff Bezos, Mark Cuban, and Eric Schmidt. LeCun initially sought approximately 500 million euros, according to a leaked pitch deck reported by Sifted. Demand far exceeded that figure.

Day-to-day operations are led by Alexandre LeBrun, the French entrepreneur who previously founded and ran Nabla, a medical AI startup. The leadership roster also includes Saining Xie, formerly of Google DeepMind, as chief science officer; Pascale Fung as chief research and innovation officer; Michael Rabbat as VP of world models; and Laurent Solly, Meta's former VP for Europe, as chief operating officer. LeCun himself serves as executive chairman whilst maintaining his professorship at New York University.

LeBrun has been candid about the timeline. “AMI Labs is a very ambitious project, because it starts with fundamental research,” he has said. “It's not your typical applied AI startup that can release a product in three months.” Within three to five years, LeCun has stated, the goal is to produce “fairly universal intelligent systems” capable of deployment across virtually any domain requiring machine intelligence. The initial commercial targets include healthcare, robotics, wearables, and industrial automation.

What World Models Actually Are (and Why They Change Everything)

To grasp why a billion dollars is flowing into world models, you need to understand what they are and why the current generation of AI systems falls short. A world model, in its simplest formulation, is an AI system designed to understand and predict how the physical world works. Gravity, motion, cause and effect, spatial relationships, object permanence: these are the kinds of knowledge that a world model attempts to internalise, not through explicit programming, but through learning from vast quantities of sensory data.

This is not an entirely new idea. The concept of internal models of reality has deep roots in cognitive science, where researchers have long argued that human intelligence depends on our brain's ability to simulate possible futures before we act. When you reach for a glass of water, you do not consciously calculate trajectories and grip forces. Your brain runs a rapid internal simulation, predicting what will happen and adjusting on the fly. World models attempt to give machines a similar capability.

Google DeepMind CEO Demis Hassabis, the 2024 Nobel laureate in Chemistry, has articulated the problem with current approaches in characteristically vivid terms. At the India AI Impact Summit in February 2026, he described today's AI systems as possessing “jagged intelligence,” explaining: “Today's systems can get gold medals in the International Maths Olympiad, really hard problems, but sometimes can still make mistakes on elementary maths if you pose the question in a certain way. A true general intelligence system shouldn't have that kind of jaggedness.” Large language models, Hassabis has argued, are ultimately sophisticated probability predictors. They do not genuinely understand the physical laws of the real world.

Fei-Fei Li, the Stanford professor often described as the “godmother of AI” for her foundational work on ImageNet, has put it even more bluntly. LLMs, she has said, are like “wordsmiths in the dark,” possessing elaborate linguistic ability but lacking spatial intelligence and physical experience. Her own company, World Labs, released its Marble world model in November 2025, capable of generating entire 3D worlds from a text prompt, image, video, or rough layout. World Labs is now reportedly in discussions at a $5 billion valuation after raising $230 million in funding.

The broader landscape is moving rapidly. Google DeepMind launched Genie 3, the first real-time interactive world model capable of generating navigable 3D environments at 24 frames per second, maintaining strict object permanence and consistent physics without a separate memory module. NVIDIA's Cosmos platform, announced at CES 2025 and trained on 9,000 trillion tokens drawn from 20 million hours of real-world data, has surpassed 2 million downloads. Waymo has built its autonomous vehicle world model on top of Genie 3, using it to train self-driving cars in simulated environments. Reports indicate that OpenAI triggered a “code red” response to Genie 3's capabilities, accelerating efforts to add spatial understanding to GPT-5.

Over $1.3 billion in funding flowed into world model startups in early 2026 alone. This is not a niche research interest. It is rapidly becoming the central front in the race towards more capable AI.

The Architecture of Understanding

AMI Labs' approach differs from its competitors in important ways. Where World Labs focuses on generating photorealistic 3D environments and DeepMind's Genie 3 emphasises interactive simulation, JEPA is fundamentally about learning representations rather than generating outputs.

The architecture works through an elegant mechanism. JEPA takes a pair of related inputs, such as consecutive video frames or adjacent image patches, and encodes each into an abstract representation using separate encoder networks. A predictor module then attempts to forecast the representation of the “target” input from the representation of the “context” input. Crucially, this prediction happens entirely in abstract embedding space, never at the level of raw pixels or tokens.

This creates what amounts to a learned physics engine. The system develops an internal model of how things relate to one another and how they change over time, without being burdened by the task of reconstructing surface-level details. An optional latent variable, often denoted as z, allows the model to account for inherent uncertainty, representing different hypothetical scenarios for aspects of the target that the context alone cannot determine.
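The loop can be sketched in a few lines. The toy below (NumPy, with single-layer encoders standing in for the deep networks a real JEPA would use; every name, dimension, and weight here is illustrative, not AMI's code) shows the essential shape of the idea: encode context and target separately, predict the target embedding from the context embedding plus a latent z, and score the prediction in embedding space rather than pixel space.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Map a raw observation to an abstract embedding (one linear layer + tanh here)."""
    return np.tanh(x @ W)

def predictor(s_context, z, V):
    """Predict the target embedding from the context embedding and a latent z."""
    return np.tanh(np.concatenate([s_context, z]) @ V)

# Toy dimensions: 64-dim "observations", 16-dim embeddings, 4-dim latent.
d_obs, d_emb, d_z = 64, 16, 4
W_ctx = rng.normal(scale=0.1, size=(d_obs, d_emb))    # context encoder weights
W_tgt = rng.normal(scale=0.1, size=(d_obs, d_emb))    # target encoder weights
V = rng.normal(scale=0.1, size=(d_emb + d_z, d_emb))  # predictor weights

# Two "consecutive frames": the target is a slightly perturbed context.
x_context = rng.normal(size=d_obs)
x_target = x_context + 0.05 * rng.normal(size=d_obs)

s_context = encoder(x_context, W_ctx)
s_target = encoder(x_target, W_tgt)
z = np.zeros(d_z)  # latent for aspects the context alone cannot determine

s_predicted = predictor(s_context, z, V)

# The training signal is the distance between predicted and actual target
# embeddings, never a pixel-level reconstruction error.
loss = np.mean((s_predicted - s_target) ** 2)
```

Notice what is absent: no decoder back to pixels exists anywhere in the loop, which is precisely how the architecture avoids spending capacity on flickering leaves and shifting shadows.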

Several variants already exist. I-JEPA learns by predicting representations of image regions from other regions, developing abstract understanding of visual scenes without explicit labels. V-JEPA extends this to video, predicting missing or masked parts of video sequences in representation space, pre-trained entirely with unlabelled data. VL-JEPA adds vision-language capability, predicting continuous embeddings of target texts rather than generating tokens autoregressively, achieving stronger performance with 50 per cent fewer trainable parameters.

The promise is tantalising. An AI system built on JEPA principles could, in theory, develop the kind of intuitive physical understanding that enables a child to predict that pushing a table will move the book sitting on it. It could reason about cause and effect, plan actions in the physical world, and adapt to novel situations without the brittleness that characterises current systems.

But there is a catch. And it is a significant one.

The Understanding Gap Widens

Here is the paradox at the heart of the world models revolution: the better these systems become at understanding physical reality, the harder they become for us to understand. We are constructing machines designed to build rich internal representations of how the world works, and we have strikingly little ability to inspect, interpret, or verify what those representations actually contain.

This is not a new problem, but world models threaten to make it dramatically worse. The interpretability challenges that plague current large language models are already formidable. Mechanistic interpretability, the effort to reverse-engineer neural networks into human-understandable components, has been recognised by MIT Technology Review as a “breakthrough technology for 2026.” Yet the field remains at what researchers describe as a critical inflection point, with genuine progress coexisting alongside fundamental barriers.

The core difficulty is what researchers call superposition. Because a neural network needs to represent more features than it has dimensions available to represent them, it compresses information in ways that produce polysemantic neurons: individual units that contribute to multiple, semantically distinct features. Understanding what a network “knows” requires disentangling this compressed representation, and the dominant tool for doing so, sparse autoencoders, faces serious unsolved problems. Reconstruction error remains stubbornly high, with 10 to 40 per cent performance degradation. Features split and absorb in unpredictable ways. And the results depend heavily on the specific dataset used.
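The mechanics of a sparse autoencoder are simple even though the unsolved problems are not. The sketch below (NumPy, with untrained random weights purely to show the structure; a real SAE is trained to minimise reconstruction error plus an L1 sparsity penalty on the feature activations) shows the shape of the method: project a compressed activation vector into a much wider dictionary, keep only non-negative activations, and decode back.

```python
import numpy as np

rng = np.random.default_rng(1)

d_model, d_dict = 32, 256  # dictionary far wider than the activation space

# Ground truth for the toy: 256 sparse "features" superposed into 32 dimensions.
true_features = rng.normal(size=(d_dict, d_model))

W_enc = rng.normal(scale=0.1, size=(d_model, d_dict))
W_dec = rng.normal(scale=0.1, size=(d_dict, d_model))

def sparse_autoencoder(x):
    """One pass: a wide ReLU encoding (ideally sparse), then a linear decode."""
    f = np.maximum(0.0, x @ W_enc)  # feature activations
    x_hat = f @ W_dec               # reconstruction of the original activation
    return f, x_hat

# An activation vector in which only 3 of the 256 underlying features are active.
active = rng.choice(d_dict, size=3, replace=False)
x = true_features[active].sum(axis=0)

f, x_hat = sparse_autoencoder(x)
reconstruction_error = np.mean((x - x_hat) ** 2)
l1_penalty = np.abs(f).sum()  # training adds this term to encourage sparsity
```

The hope is that, after training, each dictionary direction corresponds to one human-interpretable feature; the 10 to 40 per cent degradation cited above is what remains of `reconstruction_error` even when that hope is partially realised.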

Anthropic, the AI safety company, has made mechanistic interpretability a central focus, extracting interpretable features from its Claude 3 Sonnet model using sparse autoencoders and publishing results showing features related to deception, sycophancy, bias, and dangerous content. Their attribution graphs, released in March 2025, can successfully trace computational paths for roughly 25 per cent of prompts. For the remaining 75 per cent, the computational pathways remain opaque.

A 2025 paper published at the International Conference on Learning Representations proved that many circuit-finding queries in neural networks are NP-hard, fixed-parameter intractable, and inapproximable under standard computational assumptions. In plain language: for many of the questions we most urgently need to answer about what neural networks are doing, there may be no efficient algorithm that can provide the answer.

Now consider what happens when you move from language models to world models. JEPA operates in abstract embedding spaces that are, by design, removed from human-interpretable inputs and outputs. A language model at least traffics in words, which we can read. A world model's internal representations are abstract mathematical objects encoding relationships between physical phenomena. The interpretability challenge is not merely scaled up. It is qualitatively different.

The field is split on how to respond. Anthropic has set the ambitious goal of being able to “reliably detect most AI model problems by 2027.” Google DeepMind, meanwhile, has pivoted away from sparse autoencoders towards what it calls “pragmatic interpretability,” an acknowledgement that full mechanistic understanding of frontier models may be neither achievable nor necessary. Corti, a Danish AI company, has developed GIM (Gradient Interaction Modifications), a gradient-based method that has topped the Hugging Face Mechanistic Interpretability Benchmark, offering improved accuracy for identifying which components in a model are responsible for specific behaviours. But even these advances represent incremental progress against an exponentially growing challenge.

When Physics Engines Dream

The practical implications of AI systems that can simulate physical reality extend far beyond academic curiosity. Consider the domains AMI Labs is targeting: healthcare, robotics, wearables, and industrial automation. In each of these fields, the consequences of AI misunderstanding the physical world range from costly to catastrophic.

AMI Labs has already established a partnership with Nabla, the healthtech company LeBrun previously founded, providing a direct conduit to the healthcare sector. In medicine, the hallucinations that plague large language models are not merely embarrassing; they can be lethal. A world model that genuinely understands human physiology, drug interactions, and disease progression could revolutionise clinical decision-making. But the opacity of that understanding creates a novel kind of risk: a system that is right for reasons nobody can articulate, or wrong for reasons nobody can detect.

In robotics, world models promise to solve one of the field's most persistent bottlenecks. Training robots in the physical world is slow, expensive, and dangerous. World models enable training in simulation, where a robot can experience millions of scenarios in hours rather than years. NVIDIA's Cosmos platform already allows autonomous vehicle and robotics developers to synthesise rare, dangerous edge-case conditions that would be prohibitively risky to create in reality. But the fidelity of the simulation depends entirely on the accuracy of the world model, and verifying that accuracy requires understanding what the model has learned, which brings us back to the interpretability gap.

The autonomous vehicle industry illustrates the stakes with particular clarity. Waymo's decision to build its world model on Google DeepMind's Genie 3 represents a bet that AI-generated simulations can adequately capture the chaotic complexity of real-world driving. The potential benefits are enormous: safer vehicles, faster development cycles, dramatically reduced testing costs. The potential risks are equally significant. If the world model contains subtle errors in its understanding of physics (the way light refracts in rain, the friction coefficient of wet roads, the behaviour of pedestrians at unmarked crossings) those errors will be systematically baked into every vehicle trained on the simulation.

Governing What We Cannot See

The regulatory landscape is struggling to keep pace with these developments. The European Union's AI Act, the world's most comprehensive legal framework for artificial intelligence, entered into force in August 2024 and will be fully applicable by August 2026. Its risk-based classification system imposes graduated obligations based on potential harm, with penalties reaching up to 35 million euros or 7 per cent of global annual turnover for the most serious violations.

But the AI Act was designed primarily with current AI systems in mind. Its requirements for high-risk systems, including documented risk management, robust data governance, detailed technical documentation, automatic logging, human oversight, and safeguards for accuracy and robustness, assume a level of inspectability that world models may not provide. How do you document the risk management of a system whose internal representations of physical reality are abstract mathematical objects that resist human interpretation? How do you ensure “human oversight” of a physics simulation running in an embedding space that no human can directly perceive?

The European Council, on 13 March 2026, agreed a position to streamline rules on artificial intelligence, whilst the Commission's Digital Omnibus package, submitted in November 2025, proposed adjusting the timeline for high-risk system obligations. But these adjustments are largely procedural. The fundamental question of how to regulate AI systems whose internal workings are opaque to their creators remains unaddressed.

At the broader international level, the AI Impact Summit 2026 in New Delhi produced a Leaders' Declaration recognising that “AI's promise is best realised only when its benefits are shared by humanity.” The International Institute for Management Development's AI Safety Clock, which began at 29 minutes to midnight in September 2024, now stands at 18 minutes to midnight as of March 2026, reflecting growing expert concern about the pace of AI development relative to safety measures.

In the United States, the NIST AI Risk Management Framework and ISO/IEC 42001 provide voluntary guidelines, but nothing approaching the binding force of the EU's approach. China's own regulatory framework focuses on algorithmic transparency and content generation, but similarly lacks specific provisions for world models. The result is a patchwork of rules designed for yesterday's AI, applied to tomorrow's.

Voices From Both Sides of the Divide

The debate over world models and their implications has produced sharp divisions amongst the people who understand these systems best.

LeCun himself has been consistently dismissive of existential risk concerns. He has called discussion of AI-driven existential catastrophe “premature,” “preposterous,” and “complete B.S.,” arguing that superintelligent machines will have no inherent desire for self-preservation and that AI can be made safe through continuous, iterative refinement. His position is that the path to safety runs through open science and open source, not through restriction and secrecy. Staying true to this philosophy, AMI Labs has committed to publishing its research and releasing substantial code as open source. “We will also make a lot of code open source,” LeBrun has confirmed.

Geoffrey Hinton, who shared the 2018 Turing Award with LeCun and Yoshua Bengio for their contributions to deep learning, occupies the opposite pole. The researcher often described as the “Godfather of AI” has warned that advanced AI will become “much smarter than us” and render controls ineffective. At the Ai4 conference in 2025, Hinton proposed a “mother AI” concept to safeguard against potential AI takeover scenarios. Their public disagreements have become one of the defining intellectual conflicts in the field.

The broader expert community is similarly divided. Roman Yampolskiy, a computer scientist at the University of Louisville known for his work on AI safety, estimates a 99 per cent chance of an AI-caused existential catastrophe. LeCun places that probability at effectively zero. A survey of AI experts published in early 2025 found that many researchers, while highly skilled in machine learning, have limited exposure to core AI safety concepts, and that those least familiar with safety research are also the least concerned about catastrophic risk.

AGI timeline estimates vary wildly. Elon Musk has predicted AGI by 2026. Dario Amodei, CEO of Anthropic, has suggested 2026 or 2027. NVIDIA CEO Jensen Huang places the date at 2029. LeCun himself has argued it will take several more decades for machines to exceed human intelligence. Gary Marcus, the cognitive scientist and persistent AI sceptic, has suggested the timeline could be 10 or even 100 years.

What is notable about the world models debate is that it cuts across these existing fault lines. You do not need to believe in imminent superintelligence to be concerned about the understanding gap. A world model does not need to be superintelligent to be dangerous if it is deployed in high-stakes domains whilst remaining fundamentally opaque. The risk is not necessarily that AI becomes too smart. It is that AI becomes smart enough to matter in ways we cannot verify.

Reading the Black Box, Through a Glass Darkly

The technical community has not been idle in the face of these challenges. New architectures and methods are emerging that offer at least partial responses to the interpretability crisis.

Kolmogorov-Arnold Networks, or KANs, represent a fundamentally different neural network architecture that decomposes higher-dimensional functions into one-dimensional functions, increasing interpretability and allowing scientists to identify important features, reveal modular structures, and discover symbolic formulae in scientific data. However, their interpretability diminishes as network size increases, presenting a familiar scalability challenge: the very systems we most need to understand are the ones that resist understanding most stubbornly.
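The interpretability claim rests on the Kolmogorov-Arnold idea that a multivariate function can be expressed through sums of univariate functions, so each learned edge can be inspected on its own. The toy below (NumPy; it uses polynomial edge functions rather than the splines of actual KAN implementations, and the target function is chosen to be exactly representable) shows why this helps: after fitting, the symbolic form of the function can be read directly off the coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)

def edge_fn(x, c):
    """A learnable univariate function on one edge: c0 + c1*x + c2*x^2 + c3*x^3."""
    return c[0] + c[1] * x + c[2] * x**2 + c[3] * x**3

def basis(x):
    """Polynomial basis expansion for one input variable."""
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)

# Target function already in sum-of-univariate form: f(x1, x2) = x1^2 + x2.
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 2 + X[:, 1]

# Fit one edge function per input variable by least squares.
A = np.concatenate([basis(X[:, 0]), basis(X[:, 1])], axis=1)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
c1, c2 = coeffs[:4], coeffs[4:]

# Because every edge is a 1-D function, the fitted model can be read off
# symbolically: c1 recovers the x1^2 term and c2 the linear x2 term.
y_hat = edge_fn(X[:, 0], c1) + edge_fn(X[:, 1], c2)
fit_error = np.mean((y - y_hat) ** 2)
```

In a deep KAN the edges feed into further layers of edges, and this direct symbolic readout becomes progressively harder, which is the scalability problem the paragraph above describes.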

The collaborative paper published in January 2025 by 29 researchers across 18 organisations established the field's consensus open problems for mechanistic interpretability. Core concepts like “feature” still lack rigorous mathematical definitions. Computational complexity results prove that many interpretability queries are intractable. And practical methods continue to underperform simple baselines on safety-relevant tasks.

There is also the question of whether full interpretability is even the right goal. Some researchers argue for a more pragmatic approach: rather than trying to understand everything a model knows, develop reliable methods for detecting when a model is likely to fail. This is the philosophy behind DeepMind's pivot to pragmatic interpretability and behind Hassabis's proposed “Einstein test” for AGI, which asks whether an AI system trained on all human knowledge up to 1911 could independently discover general relativity. If it cannot, Hassabis argues, it remains “a very sophisticated pattern matcher” regardless of its other capabilities.

LeCun, characteristically, sees the problem differently. He has argued that the architecture itself is the solution: by designing systems that learn structured, abstract representations rather than opaque statistical correlations, world models could ultimately be more interpretable than language models, not less. JEPA's operation in abstract embedding space is, in his view, a feature rather than a bug, because those embeddings encode the meaningful structural relationships that humans also rely on to understand the world, even if the format is different.

This is an optimistic reading. Whether it proves correct will depend on research that has not yet been conducted, using methods that have not yet been invented, applied to systems that have not yet been built. In the meantime, the money is flowing, the labs are hiring, and the world models are being trained.

Europe's Unlikely Gambit

There is a geopolitical dimension to this story that deserves attention. LeCun has stated that there “is certainly a huge demand from the industry and governments for a credible frontier AI company that is neither Chinese nor American.” AMI Labs, with its Paris headquarters and European seed record, is positioning itself to fill that void.

The timing is deliberate. The EU's AI Continent Action Plan, published in April 2025, aims to make Europe a global leader in AI whilst safeguarding democratic values. France's state investment bank Bpifrance is amongst AMI's backers. The company's open research commitment aligns with European regulatory philosophy, which emphasises transparency and accountability in ways that closed American labs like OpenAI and Anthropic have been criticised for resisting.

But Europe's track record in turning fundamental research into commercially dominant technology is, to put it diplomatically, mixed. AMI Labs' $1.03 billion seed round is enormous, but it pales beside the tens of billions flowing into American and Chinese AI labs. LeBrun has acknowledged the challenge, noting that AMI will prioritise quality over quantity in building its team across its four global locations. The question is whether a commitment to open science and European values can coexist with the scale of resources needed to compete at the frontier.

The largest seed round ever, raised by the American firm Thinking Machines Lab in June 2025 at $2 billion, provides a sobering comparison. The world models race is global, and capital alone will not determine the winner. But capital certainly helps.

Sleepwalking With Eyes Open

So, are we sleepwalking into a future where AI understands the world better than we do, without us understanding the AI? The honest answer is: we might be, but not in the way the question implies.

The framing of “sleepwalking” suggests unawareness, but the striking thing about the current moment is how many people are aware of the problem and how few solutions are available. The researchers building world models know that interpretability is an unsolved challenge. The regulators drafting AI governance frameworks know that their rules were designed for a different generation of technology. The investors writing billion-dollar cheques know that the commercial applications are years away and the fundamental research questions remain open.

The danger is not ignorance. It is a collective decision to proceed despite uncertainty, driven by competitive pressure, scientific ambition, and the genuine potential of these systems to solve real problems. When LeCun talks about world models revolutionising healthcare by eliminating the hallucinations that make LLMs dangerous in clinical settings, he is not wrong about the potential. When Hassabis describes the need for AI that can reason about physics rather than merely predicting word probabilities, he is identifying a real limitation of current systems. When Fei-Fei Li argues for spatial intelligence as the next frontier, she is pointing towards capabilities that could transform robotics, manufacturing, and scientific discovery.

But potential is not proof. And the understanding gap, the asymmetry between AI's growing capacity to model reality and our limited capacity to model the AI, is real and widening. Every billion dollars invested in making world models more capable should, in principle, be matched by investment in making them more transparent. The evidence suggests that ratio is nowhere close to balanced.

The world models era is not something that is coming. It is here. AMI Labs' billion-dollar bet, backed by some of the most sophisticated investors and researchers on the planet, is one data point amongst many. The question is not whether machines will learn to simulate physical reality. It is whether we will develop the tools to understand what they have learned before the consequences of not understanding become irreversible.

LeCun has said that within three to five years, AMI aims to produce “fairly universal intelligent systems.” The AI Safety Clock stands at 18 minutes to midnight. And the gap between what AI can model and what humans can comprehend about those models grows wider with every training run.

We are not sleepwalking. We are walking with our eyes open, into a future whose shape we can see but whose details remain, for now and perhaps permanently, beyond our ability to fully perceive.

References

  1. TechCrunch, “Yann LeCun's AMI Labs raises $1.03B to build world models,” 9 March 2026. https://techcrunch.com/2026/03/09/yann-lecuns-ami-labs-raises-1-03-billion-to-build-world-models/

  2. TechCrunch, “Who's behind AMI Labs, Yann LeCun's 'world model' startup,” 23 January 2026. https://techcrunch.com/2026/01/23/whos-behind-ami-labs-yann-lecuns-world-model-startup/

  3. MIT Technology Review, “Yann LeCun's new venture is a contrarian bet against large language models,” 22 January 2026. https://www.technologyreview.com/2026/01/22/1131661/yann-lecuns-new-venture-ami-labs/

  4. Sifted, “Yann LeCun's AMI Labs raises $1bn in Europe's biggest seed round,” March 2026. https://sifted.eu/articles/yann-lecun-ami-labs-meta-funding-round-nvidia

  5. Crunchbase News, “Turing Winner LeCun's New 'World Model' AI Lab Raises $1B In Europe's Largest Seed Round Ever,” March 2026. https://news.crunchbase.com/venture/world-model-ai-lab-ami-raises-europes-largest-seed-round/

  6. TechCrunch, “Yann LeCun confirms his new 'world model' startup, reportedly seeks $5B+ valuation,” 19 December 2025. https://techcrunch.com/2025/12/19/yann-lecun-confirms-his-new-world-model-startup-reportedly-seeks-5b-valuation/

  7. Meta AI Blog, “V-JEPA: The next step toward advanced machine intelligence,” 2024. https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/

  8. Meta AI Blog, “I-JEPA: The first AI model based on Yann LeCun's vision for more human-like AI,” 2023. https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/

  9. Introl, “World Models Race 2026: How LeCun, DeepMind, and others compete,” 2026. https://introl.com/blog/world-models-race-agi-2026

  10. News9live, “India AI Impact Summit 2026: DeepMind CEO Demis Hassabis says current AI still 'Jagged' and learning,” February 2026. https://www.news9live.com/technology/artificial-intelligence/india-ai-summit-2026-deepmind-hassabis-ai-jagged-learning-2932470

  11. Storyboard18, “Demis Hassabis says AGI not here yet, calls current AI 'jagged intelligence,'” 2026. https://www.storyboard18.com/brand-makers/google-deepmind-ceo-says-agi-not-here-yet-calls-current-ai-jagged-intelligence-90028.htm

  12. European Commission, “AI Act: Shaping Europe's digital future,” 2024. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  13. European Council, “Council agrees position to streamline rules on Artificial Intelligence,” 13 March 2026. https://www.consilium.europa.eu/en/press/press-releases/2026/03/13/council-agrees-position-to-streamline-rules-on-artificial-intelligence/

  14. TIME, “Meta's AI Chief Yann LeCun on AGI, Open-Source, and AI Risk,” 2024. https://time.com/6694432/yann-lecun-meta-ai-interview/

  15. WebProNews, “Yann LeCun and Geoffrey Hinton Clash on AI Safety in 2025,” 2025. https://www.webpronews.com/yann-lecun-and-geoffrey-hinton-clash-on-ai-safety-in-2025/

  16. arXiv, “Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts,” February 2025. https://arxiv.org/html/2502.14870v1

  17. Transformer Circuits, “Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet,” 2024. https://transformer-circuits.pub/2024/scaling-monosemanticity/

  18. Springer Nature, “Recent Emerging Techniques in Explainable Artificial Intelligence,” 2025. https://link.springer.com/article/10.1007/s11063-025-11732-2

  19. Futurum Group, “Yann LeCun's AMI Raises $1BN Seed Round – Is the World Model Era Finally Here?” March 2026. https://futurumgroup.com/insights/yann-lecuns-ami-raises-1bn-seed-round-is-the-world-model-era-finally-here/

  20. The Next Web, “Yann LeCun just raised $1bn to prove the AI industry has got it wrong,” March 2026. https://thenextweb.com/news/yann-lecun-ami-labs-world-models-billion

  21. Corti, “Corti introduces GIM: Benchmark-leading method for understanding AI model behavior,” 2025. https://www.corti.ai/stories/gim-a-new-standard-for-mechanistic-interpretability

  22. PhysOrg, “Kolmogorov-Arnold networks bridge AI and scientific discovery by increasing interpretability,” December 2025. https://phys.org/news/2025-12-kolmogorov-arnold-networks-bridge-ai.html

  23. Sombrainc, “An Ultimate Guide to AI Regulations and Governance in 2026,” 2026. https://sombrainc.com/blog/ai-regulations-2026-eu-ai-act

  24. Zaruko, “The Einstein Test: Why AGI Is Not Around the Corner,” 2026. https://zaruko.com/insights/the-einstein-test


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from Roscoe's Story

In Summary: I'm tuned in to 105.3 The Fan – Dallas, for the pregame show then the call of tonight's game between my Texas Rangers and the Seattle Mariners. By the time the game ends I'll have finished the night's prayers and will be ready to retire for the night.

Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day, from early morning, as soon as I roll out of bed, until my head hits the pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics:
* bw = 227.74 lbs.
* bp = 154/90 (65)

Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet:
* 06:10 – crispy oatmeal cookies
* 08:45 – 1 ham & cheese sandwich
* 10:00 – baked fish and vegetables
* 13:50 – clam soup & saltine crackers
* 16:00 – 1 fresh apple

Activities, Chores, etc.:
* 04:00 – listen to local news talk radio
* 05:00 – bank accounts activity monitored
* 06:00 – read, write, pray, follow news reports from various sources, surf the socials, nap
* 11:00 – listening to the Markley, van Camp and Robbins Show
* 12:00 to 13:15 – watch old game shows with Sylvia
* 13:30 – read, pray, follow news reports from various sources
* 15:30 – listen to The Jack Riccardi Show
* 17:00 – tuned in to 105.3 The Fan – Dallas well ahead of tonight's Rangers / Mariners game

Chess: * 14:20 – moved in all pending CC games, winning one

 

from Café histoire

If there's one comeback I hadn't anticipated, it's that of my iPad Air M2.

In fact, the unexpected arrival of the MacBook Neo prompted some new thinking. At first, I assumed it would eventually mean the definitive shelving of my iPad Air.

Diptych: MacBook Neo – MacBook Air

The MacBook Neo pulls off a double play: offering a more affordable alternative to the MacBook Air when the time comes to replace it, and definitively relegating the iPad, particularly the iPad Air, to a niche. As for the latter, it is more expensive than the MacBook Neo whether you choose the 256GB or the 512GB version.

In the end, though, the MacBook Neo's dimensions are very close to those of the MacBook Air and further from those of an 11” iPad Air. Oddly, it was this small difference in size from the MacBook Air, and the significant gap with the iPad Air, that struck me first. The MacBook Neo thus remains far from my old 12” MacBook with its legendary portability; that machine is actually closer to my 11” iPad Air.

On the other hand, the price gap is wider between the MacBook Neo and a MacBook Air than between a MacBook Neo and an iPad Air M4. That widens the value-for-money gap on one side while not being a deciding factor on the other. All the more so since a 256GB iPad Air gives you enough storage, plus the equivalent of a Touch ID that is only available on the 512GB version of the MacBook Neo. On price, then, I think the fair comparison is the 256GB iPad Air against the 512GB MacBook Neo.

So there is little difference in footprint between the MacBook Neo and the 13” MacBook Air. In use there is one small difference: 13” vs 13.6”. Nothing prohibitive, or else you already plug your MacBook Air into an external display at home or at work, and you would simply do the same with the MacBook Neo.

For 80% or more of your daily tasks, the MacBook Neo offers unbeatable value for money compared with a MacBook Air, and you still keep a prestige product. Influencers will no doubt hammer that point home and convince you, if they haven't already. Rightly so, given the price difference. And the build quality really is top-notch.

Personally, a quick test in a shop won me over on the typing feel and keyboard layout of the MacBook Neo. Clearly, if I had to replace my MacBook Air M2, I would opt for the MacBook Neo, perhaps even the base 256GB version (paired with one of my external SSDs).

Diptych: MacBook Neo – iPad Air

On the other hand, I realise the MacBook Neo would have a harder time replacing my iPad Air. First, my iPad Air's performance is equivalent to, and even slightly above, the MacBook Neo's, thanks to its multicore chip. Its smaller dimensions (and by a fair margin) add another distinguishing factor.

For mobile use, especially on a motorbike, I want the most compact device possible. The clear size difference between my iPad Air M2 and the MacBook Neo makes my choice easy. Since I also have a Magic Keyboard compatible across iPad Air versions, I can carry on with that keyboard for a while, and it will pay for itself the day I have to move to a newer, more powerful iPad Air. I would even say that a current iPad Air M4 would bring more substantial improvements to my day-to-day use than a MacBook Neo, for a comparable price.

More broadly, for years Apple and the influencers worked hard to sell us the iPad as a possible (even desirable) MacBook replacement for ordinary users, when their uses and features differ widely. The MacBook Neo buries that fantasy for good. Time will tell whether this ultimately comes at the expense of the iPad family, and of which models in particular.

In my usage, the iPad Air comes before the MacBook Neo

I've come to the conclusion, for my part, that the iPad's strength is focusing your attention on one application (note-taking) or one process (editing your photos). You take your time rather than hopping from app to app. And that task can be done almost anywhere, in successive sessions and settings.

My rediscovery of the iPad also owes a lot to getting to grips (after much dithering and work with other solutions) with the Obsidian app (macOS, iPadOS, Windows, Linux, Android).

I am distinctly more productive, and faster, writing my blog posts on my iPad Air than on my MacBook Air. Importing and sorting my images is quick and smooth with Apple Photos. Then comes simple, basic image editing in Photomator. I draft my blog posts quickly in Jetpack and easily insert the images that illustrate or anchor my articles. It's all the simpler because images are added right inside the post. Tonight I wrote and scheduled several articles for my blog in record time.

And I can listen to music while I write.

The interesting new features of iPadOS 26

Coming back to my iPad, and after much hesitation, I upgraded to iPadOS 26. I am discovering its potential bit by bit. I had previously missed a good share of its interesting features.

Curiously, here too, even though this new OS brings the iPad and Mac worlds closer together in a way, I mainly see it as supporting differentiated uses of iPad and MacBook.

Another way to put it: this new OS, while moving closer to the Mac's, primarily showcases the tablet experience, even if a few inconsistencies remain, as noted in this video:

Incidentally, where the iPad is concerned, I wonder whether we should talk about multi-windowing rather than multitasking.

Be that as it may, I find myself using my iPad Air in a renewed way. I'm so delighted that I've just ordered version 2 of the Magic Keyboard, which should make it even better to use.

Tags : #AuCafé #MacBook #iPad #Neo #Air

 

from Dear Anxious Teacher

Don't rely on other people to discipline your students. Once that door closes, they won't be in the room with you. Your behavior management plan and system should do the “talking.” Relying on security and principals to remove students (unless they are disrupting the class, fighting, or being extremely disrespectful) will simply undermine your authority. You'll be viewed as less capable. You need a classroom management plan: a set of rules and a general list of consequences posted somewhere in your classroom. ISS and detention don't really work today; sometimes they feel like a reward for a student. An out-of-school suspension is like a mini-vacation for many students.

I create a participation grade category that credits students for being focused and respectful during the lesson. If a student is sleeping, changing their seat, or being disrespectful (off task), I take points off their weekly grade. Students have come up to me asking about their participation grade. I'll add comments like “slept on 3/1” or “didn't follow instructions.” Instead of tossing students out of your class, try figuring out other ways to handle them that meet both of your needs. Pick and choose your battles today. For me, disrespect is never to be tolerated. The annoying behaviors, like a kid sleeping, a little side chatter, students not working, heads down, or cheating on assignments, are dealt with in the class. Try these steps to help with behaviors.

1. Redirect with proximity (teach closer to the student)

2. Observe the student to see the “why” of their behavior. A couple of quick glances. Behavior can be caused by the following ideas: attention seeking, wanting power, escape/avoidance, student boredom, challenging work, easy work, hunger/thirst needs, out of school or family problems, disabilities, or sensory issues. We sometimes automatically assume the behavior is attention seeking, but you may be surprised when some students tell you exactly what they need. 

3. Conference with them quickly (lower your stature, talk in whispers): “Hey, what's up? Are you okay? Can you chill out a little bit? What's wrong? Why are you acting like this?”

4. If the above doesn't work or resolve the problem, issue a warning. With warnings, you need to follow through.

5. Depending on the behavior, if it can be managed in the class, the student should not earn participation points for being disruptive. 

6. If the behavior is out of control, the student will need to be removed. You can start by simply asking the student, nicely, to step out of your class. Don't be mean or a jerk about it: “I need you to grab your stuff and head to the office.” You might need to have the child escorted for safety purposes. The psychology of it is to say it respectfully and professionally.

Note: If your class culture is healthy and warm, you usually won't have to do this at all. I hardly ever kick anyone out of my class unless the behavior is really escalating.

7. Stay calm at all times. This is probably the hardest. Remember, they are children or teenagers. Give them respect because they aren't fully developed human beings yet. Students also want you to “lose your head” and maybe catch you off guard. Control yourself as much as you can. Take a few deep breaths. Don't take anything personally.

8. Document if the student is removed and follow your classroom and building procedures. 

Note: Don't make a habit of kicking students out. Building rapport and connection is the key to getting rid of the “back and forth” disrespect and awkward tension between teachers and students. Good relationships solve so many of the above issues. When I have a student removed from a class, the principals know the child must have done something pretty bad in my class. Students, deep down, still respect you when you discipline with kindness. Think about it: it's hard being a jerk to a nice person. So stay calm and kind when issuing consequences. Keep students in your class and figure out what they need from you. It might just be something that surprises you. It's exhausting! I know!

 

from ThruxBets

The blog is still waiting for its first winner, with Mission Command only managing third at Pontefract to make it 5 places from 7 each-way selections.

And it's up the A1 from Pontefract to Catterick for some action on Wednesday.


3.23 Catterick Taking a chance here with Mick Appleby's WAY TO DUBAI. The 7yo doesn't have the most attractive of profiles, with just 1 win from 38 starts, but he should strip fitter than plenty of his competitors today, who look like they will need the run. Today also represents his first foray into Class 5 company on the flat, some 21lbs below his highest OR. The return to 7f should suit him better, and he can hopefully be involved at the business end.

WAY TO DUBAI // 0.5pt E/W @ 12/1 4 places (Bet365)


5.05 Catterick With 6/1 available at Bet365, I'm siding each way with Adrian Keatley's FRANCISCOS PIECE in the penultimate race of the day. This is a significant drop in class for the 4yo, who finished 2nd in the Redcar 2yo Trophy in 2024 and was then pitched into very decent handicaps without much (any!) success. I'm hoping these shallower waters against inferior opposition will let us see him at his new level, and hopefully he can add to both his trainer's and jockey's excellent records at the track.

FRANCISCOS PIECE // 0.5pt E/W @ 6/1 (Bet365)


 
