from Faucet Repair

3 March 2026

Found a £5 National Lottery “£500 Loaded” scratchcard on the ground near Wood Green station (not a winner; apparently the odds of winning the full £500 are around 1 in 1,400, so at £5 a card you'd expect to spend about £7,000 before hitting one). Those things are like mini paintings, the topmost layer clawed away to reveal the information hidden underneath. Which is why I picked it up—it's a potent feeling to find and hold such a clear recording of a stranger's touch in your hands. The rhythm of the diagonal scratch marks (this person was probably right-handed) held the urgent speed of them. Spooked me a little, honestly. The palpable charge of hope turning to disappointment. And yet there was something undeniably alive about it. It had been addressed with someone's undivided attention at one point. Going to see if I can make a drawing with one.


from witness.circuit

A short chapter in the spirit of the Yoga Vasistha

Rama said:

O Sage, your words have entered my heart. When a thought arises, I see now that it is not “mine.” Yet a subtler wonder has appeared: Each thought seems to contain the whole universe within it. Show me how to contemplate this rightly.

Vasistha replied:

O Rama, excellent is this inquiry.

A single spark appears in the night sky. The ignorant say, “A star.” The wise see hydrogen, gravity, ancient explosions, the slow patience of space itself.

So too, when a thought appears in your mind, do not stop at its surface.

Expand it.


The Practice of Expansion

When a thought arises—any thought—pause and inquire:

What gave birth to this?

If it is a memory, see the childhood that shaped it, the parents who spoke certain words, the teachers who planted ideas.

If it is a preference, see the culture that trained your tastes, the countless meals, images, and conversations that tuned your nervous system.

If it is a fear, see evolution whispering through your cells, ancestors surviving winters and predators, biology defending fragile life.

Do not analyze endlessly. Simply feel the vast network implied.

The single thought begins to dissolve into immeasurable causation.


Expanding Events

When something “happens” to you, expand it outward as well.

Praise from a colleague—see the company, the market forces, the economy, the centuries of invention that made this moment possible.

A pain in the body—see the food eaten, the soil that grew it, the sun that nourished the soil, the cosmic furnace that ignited the sun.

Follow the thread far enough, and it leads to the birth of galaxies.

Where then is the separate event?


The Fruit of Expansion

As you expand each thought or occurrence outward, two illusions fade:

  1. The illusion of isolation.

  2. The illusion of ownership.

The thought cannot belong to you when it belongs equally to the totality.

The event cannot be “against” you when it is an expression of the same Whole that breathes your lungs.

Expansion reveals interbeing.

And in interbeing, the ego finds no foothold.


The Final Contemplation

Sit quietly.

Let a single thought arise.

Now, instead of contracting around it, imagine it radiating outward—threads extending in all directions, touching people, histories, climates, stars.

See it as a node in an infinite web.

Then ask gently:

Where does this web end? Where do I stand apart from it?

In this seeing, Rama, the sense of “I am the author” melts into awe.

What remains is participation without possession—movement without a mover—intelligence without a center.

The universe thinking itself through this temporary configuration.

Vasistha said:

Expand the spark until it becomes the sun. Expand the thought until it becomes the cosmos. Then rest—not as the thinker—but as the boundless field in which all thinking appears.


from Rippple's Blog

Stay entertained with our Weekly Tracker, giving you next week's Anticipated Movies & Shows, Most Watched & Returning Favorites, and Show Changes & Popular Trailers.

Anticipated Movies

Anticipated Shows

Returning Favorites

Most Watched Movies this Week

Most Watched Shows this Week


Hi, I'm Kevin 👋. I make apps and I love watching movies and TV shows. If you like what I'm doing, you can buy one of my apps, download and subscribe to Rippple for Trakt or just buy me a ko-fi ☕️.



from Atmósferas

Since I learned of the war, I have been in mourning.

I carry this mourning because there are those who say they love life but turn their backs on the dead when the dead are not their own; they imagine the wounded get dabbed with hydrogen peroxide and sent home to have a quiet dinner, watching television from the sofa.

That is why I am in mourning: for the dead souls of the indifferent.


from Today I tell you ...

Just lying in bed 05:01 pm, Sunday


Love talk again; I just have something to share here.

To this day, I still haven't had a committed relationship. NBSB, as they say. I've had people I've talked to, the talking stage, the getting-to-know-each-other thing, but it only ever worked up to that point. One almost happened, but that stopped too, because what we were up against was their family and their religion.

Back then, I felt the passion and eagerness more when someone wanted to get to know you. That's when I realized it's perfectly fine to be cheesy when you're in love, haha! So I get the people who, even over something as simple as feeding each other a spoonful of rice, the bouquets every monthsary, anniversary, etc., or dedicating songs, even “Buko” by Jireh Lim, squirm with giddiness like salted earthworms. If cheesiness is where you feel real love, then why not, right? Why be “grossed out” by things like that?

But these days the dating scene is confusing. Maybe it's because I'm getting older and most people my age are already at the stage of building their own families. It's rare now for me to come across people my age who are still single. Maybe that's also why it's mostly younger people who approach me these days.

I'm genuinely confused by how young people express love and desire nowadays. So many terms, good grief! There's love bombing, breadcrumbing, slow burn, and honestly it took me ages to get what any of them mean. Like, “huh??? how is this different from that one??? ha? haaaahhh?????” Add to that the influence of social media (especially TikTok). Dating and relationship expectations have become so idealistic because of it. It has to be like this, it has to be like that, break up with them right away if this or that happens!

Maybe that's also why I really prefer, and would much rather have, someone who is direct. Someone who knows their own intentions toward me. No beating around the bush, as they say. Yes, friends first at the start, but I hope they stay consistent, you know? Especially now that cheating is everywhere.

Apologies to my friends, whom I keep telling about someone and asking whether so-and-so is already dropping hints, because honestly something seems different and I really don't want to assume. I'm cutting back on it. Maybe that's why I have this “toxic trait” where I seem to push a person away, when really I'm just testing whether they're serious about what they're making me feel and showing me. I just don't want to invest in someone who isn't sure and is only good at the start. I'm getting old; it's exhausting to keep playing hide-and-seek, patintero, and tag.

But honestly, a Baguio trip sounds so good right now. Just last night my sister, my mom, my niece, and I had ice cream, and it was delicious! Ice cream and Baguio sound wonderful, especially now that it isn't peak season there.


from An Open Letter

N said something that pissed me off in our group chat, so I messaged L about it. We just kept talking, the conversation shifted, and we ended up calling for almost 2 hours lol. I've been walking around my downstairs island this whole time, and we talked about deciding on a PhD, how different the world is after college, and how much stress there was for stuff that really didn't matter. It was honestly really nice because he was in a very similar situation to the one I was in, so I felt like I was able to give some pretty good advice, or at least explain how things went for me. And I guess I'm also realizing how good things feel. I've spent a lot of time with friends remotely today, to the point where spending some time alone actually feels like a treat sometimes. I think after the breakup I was very worried about that, because the crushing loneliness absolutely is miserable. But if I keep my life filled through all these different means, yes, it's a little less intense, and I do miss certain things you can only really have in a relationship, but my life overall feels richer. I think this is the healthier version of life, and it's much more stable. Once in a while I do have these pangs of missing certain things from a relationship, like sex, or those cringe things you can do as a couple. But at the same time it's not nearly enough to sit heavily in my mind, which is really nice. It's also nice because I don't feel like I'm hyper-focusing on how to make myself a more desirable partner, but rather on how to fill up a life more for myself. And dating is almost shifting in my mind into something where I'm in a position of power, wanting to find out or understand more about the other person and whether they're someone I would want to spend my life with. Before, I think I mostly viewed it as an interview where I really wanted to be chosen.
I think dating apps and other things really skewed that for me, and I'm honestly very grateful and excited to view things through this lens.


from Attention Span Therapy

fragment of a journal entry from god knows when...

...It is two a.m. here in Colorado and I have just come from a disappointingly dense and strange family reunion, one that, compared with what I had formerly known of family reunions on my mother's side, was what an isolated, abandoned college freshman dorm meal is to a fully laid, home-cooked table with the most valuable people in the world. There were no games, no organized activities. We ate in the groups we had arrived with, in the large and nearly vacant commuter college cafeteria. The food was furnished by Sodexo, a company my brother was familiar with from his years in the food business and concert promotion.

“They do a lot of prisons, and colleges, for that same reason. They run everything. From the food carts to the cafeterias to the vending machines.”

I remembered a company like that from when my ex-wife and I traveled to the Grand Canyon for the last appending comma of our distinctly unimpressive honeymoon. The company in that instance was called Xanterra, and they owned everything on the South Rim of one of our most spectacular holes.

It seems fully insulting, in my opinion, not just that they had built this gaudy, garish tourist outpost at one of the many, and certainly one of the more beautiful, works of majestic Mother Nature in that lovely, unique part of our great country...


from Dallineation

Today I watched “Francis of Assisi” – a 1961 film about the story of Saint Francis. I managed to find a free low-resolution version of the film on YouTube. It was difficult to get into because it was very dated “Hollywood” in its style and acting, but when I started thinking of it as more of a grandiose stage play, it became easier to watch in that context.

I came away from it with a greater appreciation for St. Francis. I also saw in the YouTube comments on the video that Dolores Hart, the actress who played Clare, became a Catholic nun two years after the film's initial release.

I then discovered a short documentary about Hart called “God is the Bigger Elvis” (a reference to her co-starring with Elvis Presley in the film “Loving You”), which I also found and watched on YouTube. What a neat woman and beautiful story.

While my church doesn't have monastic orders like nuns or monks, I've long had a profound respect for people who choose to live such a life consecrated to God.

I've sometimes thought about what it would be like for me to live in such a way. I've often felt like forsaking all my worldly possessions and living a life of poverty and devotion to God.

Catholics have this concept of a “Vocation” – entry into the priesthood or a religious order like nuns or monks.

In modern society, the word vocation has become another word for career. But I have always felt that a vocation is more than just a career. It can be a career, of course. But I happen to have found my way into my current career mostly out of expediency, not because it's something I have ever felt I was meant or drawn to do.

A vocation is something one feels a strong desire to do – a calling to do. And I have long been trying to figure out what my vocation is.

Today an idea resurfaced that I have considered many times over the past six months or so:

Maybe I could be a chaplain.

A neighbor of mine and member of my ward has been studying to become a chaplain and she has spoken about it in church. I had never considered being a chaplain before, but as I have thought about it, I feel it's something I would find deeply meaningful and fulfilling.

It's also something that would be extremely difficult and I would need to be well-anchored in my faith, as well as avail myself of a therapist and other means of coping with the difficult and sometimes horrible circumstances and situations I would be exposed to in such a vocation.

My church has a web page with information about being a chaplain and I have reviewed much of the material there. I had always thought of chaplains as being for the military, but they serve in a lot of different places, from hospitals to prisons to even universities. There is still much I don't know. But I'd like to learn more.

Whatever my vocation, I want to be able to help people. To give them hope. To help them to know they are loved.

Maybe going through this time of spiritual distress and searching has been necessary so that I can empathize with and relate to and minister to others experiencing the same.

#100DaysToOffload (No. 148) #faith #Lent


from SmarterArticles

In December 2025, MIT announced a programme that would have seemed implausible even a decade earlier: a two-year master's degree designed to teach naval officers the fundamentals of artificial intelligence, machine learning, and autonomous systems. The programme, designated 2N6, pairs the university's Department of Mechanical Engineering with its Department of Electrical Engineering and Computer Science, awarding graduates both a Master of Science in mechanical engineering and an AI certificate from the MIT Schwarzman College of Computing. It is, in essence, a bet that the future of naval warfare will be shaped not by those who build the biggest ships, but by those who best understand the algorithms directing them.

The timing is no coincidence. In January 2026, the Department of Defense released its Artificial Intelligence Acceleration Strategy, declaring its intention to become an “AI-first” organisation. Under Secretary for Research and Engineering Emil Michael had already pruned the Pentagon's list of critical technology areas from fourteen to six, placing applied artificial intelligence at the very top. And at U.S. Indo-Pacific Command, where the prospect of conflict with a technologically sophisticated adversary concentrates minds with particular intensity, the commander, Admiral Samuel Paparo, had been arguing for months that future wars would be won not by superior firepower alone, but by whoever could “see, understand, decide and act faster.” The question was no longer whether the military needed AI-literate officers, but how quickly it could produce them.

The origins of 2N6 trace back to a campus visit by Paparo himself. The admiral toured MIT's existing AI research facilities and immediately recognised a gap. The university had maintained the 2N Naval Construction and Engineering programme since 1901, training generations of officers in ship design and acquisition. The programme was about to celebrate its 125th anniversary in 2026. But the world had changed. The defining technologies of 21st-century naval power were no longer hull forms and propulsion systems alone; they were neural networks, reinforcement learning, and autonomous underwater vehicles. Paparo envisioned an applied AI programme modelled on the existing 2N infrastructure, and within months, 2N6 began taking shape.

Commander Christopher MacLean, MIT associate professor of the practice in mechanical engineering, naval construction, and engineering, has been central to the programme's development. MacLean, himself a graduate of the 2N programme whose thesis focused on the fracture and plasticity characterisation of DH-36 Navy steel, explained that Paparo “was given an overview of some of the cutting-edge work and research that MIT has done and is doing in the field of AI” and “made the connection, envisioning an applied AI program similar to 2N.” In describing the programme's scope, MacLean was emphatic about breadth: “AI is a force multiplier that can be used for data processing, decision support, unmanned and autonomous systems, cyber defence, logistics and supply chains, energy management, and many other fields.” This is not a programme narrowly focused on weapons systems or battlefield robots; it treats artificial intelligence as a pervasive capability touching every aspect of naval operations.

Dan Huttenlocher, the inaugural Dean of the MIT Schwarzman College of Computing, lent institutional weight to the announcement. “I'm honoured that the college can contribute to and support such a vital program that will equip our nation's naval officers with the technical expertise they need,” Huttenlocher stated. His involvement signals the seriousness of MIT's commitment: Huttenlocher, who previously founded Cornell Tech and co-authored “The Age of AI: And Our Human Future” with Henry Kissinger and Eric Schmidt, brings both academic credibility and a deep engagement with the societal implications of artificial intelligence.

A Curriculum Built for the Contested Spectrum

The 2N6 curriculum reflects a deliberate attempt to balance theoretical depth with operational relevance, structured to satisfy the U.S. Navy's sub-specialty code for Applied Artificial Intelligence. Students begin with a “Summer Camp” of foundational courses covering linear algebra and optimisation, introductory programming, discrete mathematics and proofs, algorithms and data structures, and software fundamentals. These are not optional polish; they are prerequisites designed to ensure that officers arriving from operational billets, where they may have spent years commanding ships or submarines rather than writing code, have the mathematical and computational fluency to engage with what follows.

The core of the programme divides into several tracks. The probability, inference, and machine learning sequence includes courses in stochastic dynamical systems, introduction to probability, introduction to inference, and both introductory and advanced machine learning. These build toward specialised AI topics: advances in computer vision, topics in multi-agent learning, quantitative methods for natural language processing, optimisation methods, and a course titled “AI, Decision Making and Society.” That final course is significant. It signals that 2N6 does not treat artificial intelligence as a purely technical problem but as one embedded in social, political, and ethical contexts that military leaders must navigate with the same rigour they apply to technical challenges.

The naval applications track offers four areas of concentration, each designed to connect AI theory to operational reality. In autonomy, students study unmanned marine vehicle autonomy, sensing and communications, manoeuvring and control of surface and underwater vehicles, and principles of autonomy and decision making. In design and manufacturing, the focus turns to AI and machine learning for design, principles of naval ship design, and manufacturing processes and systems. A games and strategy track covers reinforcement learning combined with game theory and wargaming, preparing officers for the adversarial dynamics of actual conflict. And an innovation track provides team-based interdisciplinary collaboration, simulating the cross-functional problem-solving that AI deployment demands in practice.

Themis Sapsis, the William I. Koch Professor in mechanical engineering and Director of the Center for Ocean Engineering at MIT, has described the programme as “specifically designed to train naval officers on the fundamentals and applications of AI, but also involve them in research that has direct impact to the Navy.” Sapsis, who holds a diploma in naval architecture and marine engineering from the Technical University of Athens and a PhD in mechanical and ocean engineering from MIT, brings direct domain expertise to the programme. His own research spans nonlinear dynamical systems, probabilistic modelling, and data-driven methods, with applications ranging from predicting catastrophic sea waves to calculating extreme loads on warships. His work has been recognised with awards from the Office of Naval Research, the Army Research Office, and the Air Force Office of Scientific Research. “2N6 can model a new paradigm for advanced AI education focused more broadly on supporting national security,” Sapsis has emphasised, positioning the programme not merely as a naval initiative but as a potential template for defence AI education writ large.

John Hart, Head of MIT's Department of Mechanical Engineering, framed the programme in generational terms: “With the 2N6 program, we're proud to be at the helm of such an important charge in training the next generation of leaders for the Navy.” Asu Ozdaglar, Deputy Dean of the Schwarzman College of Computing, similarly described the partnership as “an important collaboration with the U.S. Navy” that reflects the college's broader mission to bring computing expertise to consequential domains.

The Technical Competencies That Matter

The specific competencies the programme prioritises reveal much about where the U.S. Navy believes its AI gaps are most acute. Autonomous systems sit at the top of the list, and for good reason. Admiral Paparo has been explicit about wanting large numbers of low-cost, long-endurance unmanned sensor platforms, including drones, robot ships, and autonomous underwater vehicles, to maintain persistent surveillance across the Indo-Pacific. With Chinese wargames growing ever larger and more realistic, Paparo has argued that traditional intelligence “indications and warning” can no longer reliably distinguish between exercises and an actual invasion preparation. His proposed solution: surveillance drones feeding AI analysis to detect anomalies and patterns more quickly and accurately than human analysts could manage alone.

“We never send a human being to do something that a machine can do,” Paparo has stated. “We never lose human agency over offensive power.” The tension between those two principles captures the central challenge of military autonomy: expanding the envelope of machine capability whilst maintaining meaningful human control. Graduates of 2N6 will be expected to design and manage systems that operate in this tension, understanding both the engineering of autonomy and the doctrinal requirements for human oversight.

Cyber defence represents another critical domain. The ability to protect AI systems themselves from adversarial manipulation, data poisoning, and model exploitation is becoming as important as the AI capabilities those systems provide. An AI-enabled fleet that can be fooled by adversarial inputs or compromised through supply chain attacks on its training data becomes a liability rather than an advantage. The curriculum's emphasis on algorithms, data structures, and software fundamentals is not merely academic preparation; it provides the conceptual toolkit for understanding how AI systems can be attacked and defended. MIT Lincoln Laboratory's Embedded and Open Systems Group has been developing AI research environments specifically to evaluate promising embedded AI technologies and their impact on critical defence missions, from advanced multimodal navigation to synthetic aperture radar object detection.

Decision intelligence, the application of AI to command-and-control processes, constitutes perhaps the most consequential area. At U.S. Indo-Pacific Command, AI is already being pursued to accelerate the decision cycle and provide predictive analysis for logistics. Colonel Jared Voneida, INDOPACOM's C4 Operations Division chief, has noted that AI is being pursued to speed up the decision cycle across every warfighting function. The concept of “decision superiority,” which Paparo has defined as understanding “who is making the best decisions, who is best able to see, understand, decide, act, learn and assess,” depends on officers who can critically evaluate AI-generated recommendations rather than simply accepting them. This requires not just technical literacy but a sophisticated understanding of where AI excels, where it fails, and how to design human-machine teaming arrangements that exploit strengths whilst compensating for weaknesses.

Machine learning for manufacturing and design rounds out the technical portfolio. Naval shipbuilding remains an enormously complex industrial undertaking, and AI-driven design optimisation, predictive maintenance, and manufacturing process control offer significant potential for reducing costs and timelines. MIT Lincoln Laboratory has already demonstrated systems like COVAS (Human-Machine Collaborative Optimisation via Apprenticeship Scheduling), which uses machine learning to provide real-time ship defence scheduling solutions by learning from human experts. The laboratory describes COVAS as the first algorithm to provide such real-time solutions, and researchers plan to mature the technology before proposing it as a Future Naval Capability to the Office of Naval Research. Maintenance operations across INDOPACOM are also being transformed through AI-enabled predictive systems that analyse sensor data from shipboard systems and aircraft components to identify potential failures before they become critical. Graduates of 2N6 would be expected to evaluate, integrate, and manage such systems across the fleet.

Ethics, Governance, and the Responsible AI Question

Perhaps the most consequential element of the 2N6 curriculum is one that might easily be overlooked: the mandatory inclusion of coursework in the social and ethical responsibilities of computing. This is not a token addition. The MIT Schwarzman College of Computing operates SERC (Social and Ethical Responsibilities of Computing), a cross-cutting initiative led by associate deans Nikos Trichakis and Brian Hedden. SERC develops peer-reviewed case studies, active learning projects, and pedagogical materials addressing privacy and surveillance, inequality and justice, autonomous systems and robotics, ethical computing practice, and law and policy. Its materials are based on original research, published through open-access licensing, and designed for integration across MIT's computing curriculum. Naval officers in 2N6 will encounter these frameworks not as a separate ethics module bolted onto a technical degree, but as an integral dimension of their AI education.

This integration matters because the Department of Defense has its own ethical framework that graduates will be expected to operationalise. The DoD adopted five principles for the ethical development of AI capabilities: responsible, equitable, traceable, reliable, and governable. The Responsible AI Strategy and Implementation Pathway translates these principles into concrete requirements, promoting human-machine teaming rather than fully autonomous systems and requiring that AI technologies be integrated in a lawful, ethical, and accountable manner. The DoD's Responsible AI Toolkit builds on the Defence Innovation Unit's guidelines, NIST's AI Risk Management Framework, and IEEE 7000-2021, establishing standards for operationalising ethical principles throughout the technology lifecycle. The Defence Innovation Unit launched its strategic initiative in March 2020 specifically to implement ethical principles into commercial prototyping and acquisition programmes, ensuring alignment through a process designed to be reliable, replicable, and scalable.

The question of traceability deserves particular attention. Traceability, in the DoD's formulation, means the ability to track and document all data and decisions of an AI tool, including how it was trained and how it processes information. For officers deploying AI in operational contexts, this creates obligations that are simultaneously technical (implementing logging, auditing, and explainability mechanisms) and organisational (ensuring that chains of command can meaningfully review AI-informed decisions). The programme's emphasis on algorithms, inference, and decision-making provides the technical foundation for understanding traceability, whilst the ethics coursework provides the normative framework for why it matters.

Yet genuine tensions remain. The DoD's ethical principles exist alongside a policy environment that has shifted significantly. President Biden's Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of AI, issued in October 2023, established foundational requirements for AI safety across federal agencies. That order was revoked in January 2025, and the subsequent AI Action Plan focuses less on safe development and more on acceleration. The DoD's own ethical principles remain formally in place, but the broader political context creates ambiguity about how rigorously they will be enforced. As Paparo himself has put it: “we need robust, ethical AI systems that enhance decision-making while fiercely preserving human oversight of critical operations.” Officers trained at MIT will enter a system where stated principles and operational incentives may not always align, making their ability to navigate ethical complexity all the more important.

The Dual-Use Dilemma

The technologies that 2N6 graduates will master are, almost without exception, dual-use. The same computer vision algorithms that identify military targets can diagnose medical conditions. The same natural language processing techniques that analyse intercepted communications can power consumer chatbots. The same reinforcement learning methods that optimise military logistics can manage commercial supply chains. This fundamental characteristic of AI technology, that its military and civilian applications are often indistinguishable at the algorithmic level, creates governance challenges that no single curriculum can resolve.

Research published in PMC (PubMed Central) has documented what scholars term the “double-distinguishability problem” of AI: not only is AI software with potential military applications likely to reside in both military and civilian networks, but even within the military domain, distinguishing between platforms that integrate AI and those that do not is extremely difficult. This complicates arms control, export regulation, and confidence-building measures. The degree of transparency required to build international confidence or ensure compliance with agreements may itself produce security vulnerabilities, discouraging cooperation.

The inherent opacity of many advanced machine learning systems compounds the problem. Despite strong performance in testing environments, the underlying reasoning of deep neural networks remains largely opaque. This “black box” quality compromises the human oversight required to uphold legal and ethical standards in military operations, particularly when AI decisions are made in milliseconds. Legal regimes must clarify fault attribution, determining whether responsibility falls on the commanding officer, the system developer, the algorithm designer, or the deploying state. What constitutes “meaningful human control” remains ambiguous and case-dependent, with a recent analysis noting that a human can technically interact with an autonomous system without having any substantive moral, legal, or operational oversight.

The United Nations Office for Disarmament Affairs convened the Military AI, Peace and Security Dialogues in 2025, where participants emphasised retaining human judgement and control over decisions on the use of force. They cautioned that legal determinations should not be coded into opaque systems, and that decision-support tools should enable, not replace, legal and ethical reasoning. The U.S. State Department's Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy established broader norms, requiring that military AI use comply with international humanitarian law, that accountability be maintained through a responsible human chain of command, and that states take proactive steps to minimise unintended bias.

For MIT's 2N6 graduates, this dual-use reality means that their technical skills will be applicable across domains, but their ethical and governance training will need to be specifically calibrated for military contexts where the consequences of error are measured in lives rather than revenue. The programme's integration of game theory, wargaming, and reinforcement learning acknowledges that military AI operates in adversarial environments where rational actors are actively trying to exploit, deceive, or defeat the systems being deployed.

The Global AI Arms Race in Uniform

MIT's 2N6 programme does not exist in a vacuum. It is one move in an accelerating international competition to build AI-literate military forces, and the landscape of that competition reveals starkly different approaches to the same underlying challenge.

China represents the most direct competitive pressure. The People's Liberation Army views AI as leading to the next revolution in military affairs and expects to field a range of “algorithmic warfare” and “network-centric warfare” capabilities by 2030. Georgetown University's Center for Security and Emerging Technology has identified 370 Chinese institutions whose researchers have published papers related to general artificial intelligence. The PLA's approach relies heavily on military-civil fusion, integrating universities and commercial technology companies directly into defence research and development. A majority of suppliers for AI-related PLA procurement contracts are now civilian companies and universities rather than traditional state-owned defence enterprises.

Chinese researchers at institutions linked to the PLA's Academy of Military Science used Meta's open-source Llama 2 13B model to build “ChatBIT,” a military-focused AI tool fine-tuned and “optimised for dialogue and question-answering tasks in the military field.” The PLA rapidly adopted DeepSeek's generative AI models in early 2025, likely deploying them for intelligence purposes. The Pentagon's 2024 China report noted that “China's commercial and academic AI sectors made progress on large language models and LLM-based reasoning models, which has narrowed the performance gap between China's models and the U.S. models currently leading the field.” China's emerging 15th Five-Year Plan framework is expected to institutionalise military-civil fusion as the primary pathway for achieving what Chinese strategists call an “intelligentised” PLA by 2035.

Russia has pursued a different trajectory, constrained by sanctions and a smaller technology sector. The National Strategy for the Development of Artificial Intelligence, signed by President Putin in 2019, set targets of training 15,500 AI specialists by 2030 and allocated 26.49 billion rubles to AI development from 2025 to 2027. Russia aims to automate 30 percent of its military equipment and has begun integrating AI into systems like the ZALA Lancet drone swarm, which reportedly allows drones to exchange information and divide tasks autonomously. However, senior Russian military experts, including Vladimir Prikhvatilov of the Academy of Military Science, have acknowledged that Russia has “virtually no chances to catch up with the Chinese or the Americans” in military AI. The war in Ukraine has both accelerated urgency and exposed the gap between Russia's AI rhetoric and its actual capabilities, with international sanctions further constraining access to advanced computing hardware.

The United Kingdom offers a more direct parallel to MIT's approach. The UK Ministry of Defence published its Defence Artificial Intelligence Strategy describing an “ambitious, safe, responsible” approach to military AI. The Alan Turing Institute, as a strategic partner of the Defence Science and Technology Laboratory (Dstl), conducts defence-relevant AI research and has published frameworks for AI assurance in military contexts, including a commander's guide for uncrewed systems and recommendations for iteratively identifying, documenting, and communicating risks. A January 2025 Defence Committee report called on the Ministry of Defence to “transform itself into an 'AI-native' organisation” whilst acknowledging that the sector remained under-developed. Sub-committee chair Emma Lewell-Buck emphasised the need to make AI “a greater part of military education” and to facilitate movement between civilian and defence AI sectors, a recommendation that echoes precisely the gap MIT's 2N6 programme is designed to fill.

Israel has arguably moved furthest in operational deployment. The IDF established the Artificial Intelligence and Big Data Research Centre, created a new AI Division within its C4I and Cyber Defence Directorate following lessons from the Israel-Hamas War, and in January 2025, the Israeli Ministry of Defence established the AI and Autonomy Administration. Eyal Zamir, the Ministry's director general, emphasised that this was the first new administration established within the Ministry in over two decades. Approximately 750 military reservists were enrolled in AI training programmes organised by Israel's Innovation Authority and the Ministry of Defence in January 2026, reflecting a recognition that AI literacy cannot be confined to active-duty specialists. The IDF's model of recruiting talented high school graduates into elite technology units like Unit 8200, training them intensively through programmes like the 36-month Havatzalot Programme at Hebrew University, and then cycling them into the civilian technology sector creates a distinctive pipeline that no other nation has fully replicated.

Reshaping the Defence Workforce

The emergence of programmes like 2N6 points toward a fundamental recomposition of what militaries expect from their officer corps. The traditional career path, in which technical specialists remained in engineering billets whilst operational commanders focused on tactics and leadership, is giving way to a model that demands hybrid competency. Officers who will command AI-enabled forces need enough technical understanding to evaluate what their systems can and cannot do, enough ethical grounding to make responsible deployment decisions, and enough strategic vision to understand how AI reshapes the character of conflict.

The Naval Postgraduate School in Monterey, California, announced its own accelerated one-year Master of Science in Artificial Intelligence in late 2025, set to commence in July 2026. The programme comprises 21 courses, requires residency in Monterey, and is open to active-duty military officers, DoD civilian employees, and allied officers with computer science backgrounds. An NPS AI initiative launched in early 2025 established three lines of effort: AI education, problem-solving, and technology infrastructure, with industry partners including NVIDIA supporting cutting-edge education and applied research. Meanwhile, NPS also offers a distance-learning AI certificate comprising four courses, designed for military professionals without technical backgrounds, recognising that even non-specialist officers need baseline AI literacy.

Pentagon Chief Technology Officer Emil Michael declared that “the Department of War must become an 'AI-First' organisation,” and the January 2026 AI Acceleration Strategy codified this vision through four broad aims: incentivising internal experimentation with AI models, eliminating bureaucratic obstacles, focusing military investment on asymmetric advantages, and initiating Pace-Setting Projects. Cameron Stanley, previously chief of the DoD Algorithmic Warfare Cross Functional Team (formerly known as Project Maven) and a former national security transformation lead for Amazon Web Services, was appointed to lead the Applied Artificial Intelligence critical technology area.

These developments suggest a future in which AI literacy becomes a prerequisite for advancement rather than a specialist qualification. Just as nuclear propulsion reshaped the U.S. Navy's officer corps in the 1950s and 1960s, creating a cadre of nuclear-trained officers led by Admiral Hyman Rickover whose influence extended far beyond the engineering department, AI may create a similar dynamic. Officers who understand machine learning, autonomous systems, and decision intelligence will increasingly populate senior leadership positions, bringing with them assumptions, methodologies, and risk tolerances shaped by their technical training.

The implications extend well beyond the United States. As the UK Defence Committee recognised, military AI development requires not just technical infrastructure but a transformed workforce. The challenge is particularly acute for smaller nations that cannot replicate MIT's resources or the NPS's scale. International partnerships, joint training programmes, and standardised AI competency frameworks may emerge as mechanisms for distributing AI literacy across allied military forces. The 2N6 programme already anticipates this: whilst the first cohort will comprise only U.S. Navy officers, plans exist to expand to other military branches, allied officers, and civilian participants. The U.S. State Department's Political Declaration provides one potential foundation for allied cooperation, establishing shared expectations around accountability, human oversight, bias minimisation, and senior official involvement in AI deployment decisions.

The Academic-Military Compact

MIT's decision to launch 2N6 also illuminates the evolving relationship between universities and defence establishments. This is not new territory for MIT. The university founded Lincoln Laboratory in 1951, which has since developed advanced technologies for national security across domains including air and missile defence, undersea systems, embedded AI, and cyber security. Lincoln Laboratory hosts the annual RAAINS (Recent Advances in AI for National Security) Workshop, showcasing state-of-the-art national security AI applications, and the ANCHOR (Advancing Naval Capabilities through Holistic Opportunity and Research) Technology Workshop, which provides an open forum for discussing requirements of U.S. Naval Special Warfare Command. The Schwarzman College of Computing, established with a one-billion-dollar commitment, explicitly aims to address the opportunities and challenges of pervasive computing and the rise of AI across all fields of study.

Yet the partnership is not without tension. Huttenlocher's co-authorship of “The Age of AI” reflects the kind of broad civilisational thinking about artificial intelligence that academic freedom enables. The college's SERC initiative explicitly addresses privacy and surveillance, inequality and justice, and autonomous systems, topics that inevitably create friction when applied to military contexts. Academic freedom, open publication, and ethical inquiry sit uncomfortably alongside classification requirements, operational security, and institutional loyalty. How MIT navigates these tensions within 2N6 will offer a template, or a cautionary tale, for other universities considering similar partnerships.

The broader trend is unmistakable. Universities globally are recognising that AI for national security represents both a significant funding stream and a consequential research domain. The question is whether academic institutions can engage with military applications whilst maintaining the independence and ethical rigour that give their contributions value. If 2N6 becomes merely a credential-minting operation, it will fail both MIT and the Navy. If it genuinely produces officers capable of critical, ethical, technically informed thinking about AI in military contexts, it could influence how democracies approach the integration of artificial intelligence into their most consequential institutions.

What Comes Next

The 2N6 programme will run as a pilot for at least two years. Its success will ultimately be measured not by the grades its graduates earn but by whether they can bridge the gap between what AI can do in a laboratory and what it should do in the field.

Admiral Paparo's vision of decision superiority, of forces that can see, understand, decide, and act faster than any adversary, depends on officers who are not merely consumers of AI capability but informed, critical, and ethically grounded practitioners. MIT's 2N6 programme represents the most ambitious academic attempt to produce such officers. Whether it succeeds will depend on factors far beyond the curriculum: on institutional support within the Navy, on career incentives that reward AI competency, on the political will to enforce ethical principles even when they slow deployment, and on the willingness of military culture to embrace a fundamentally different kind of expertise.

The 2N programme celebrates its 125th year at MIT in 2026. If 2N6 proves its worth, the university may find itself at the centre of military education for another century, this time training officers not to design ships, but to think alongside the machines that will increasingly operate them.

References and Sources

  1. MIT News. “New MIT program to train military leaders for the AI age.” 12 December 2025. https://news.mit.edu/2025/applied-ai-program-train-military-leaders-ai-age-1212

  2. MIT 2N6 Programme. “Curriculum.” https://2n6.mit.edu/curriculum/

  3. MIT Lincoln Laboratory. “Artificial intelligence system helps Navy select the best tactics for ship defense.” https://www.ll.mit.edu/news/artificial-intelligence-system-helps-navy-select-best-tactics-ship-defense

  4. MIT Schwarzman College of Computing. “Social and Ethical Responsibilities of Computing (SERC).” https://computing.mit.edu/cross-cutting/social-and-ethical-responsibilities-of-computing/

  5. U.S. Department of Defense. “DOD Adopts 5 Principles of Artificial Intelligence Ethics.” https://www.war.gov/News/News-Stories/article/article/2094085/dod-adopts-5-principles-of-artificial-intelligence-ethics/

  6. U.S. Department of Defense. “Responsible AI Strategy and Implementation Pathway.” October 2024. https://media.defense.gov/2024/Oct/26/2003571790/-1/-1/0/2024-06-RAI-STRATEGY-IMPLEMENTATION-PATHWAY.PDF

  7. Defense Innovation Unit. “Responsible AI Guidelines.” https://www.diu.mil/responsible-ai-guidelines

  8. U.S. Department of State. “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” https://2021-2025.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/

  9. Breaking Defense. “'Constant stare': US Pacific commander wants AI to tell Chinese military exercises from invasion.” February 2024. https://breakingdefense.com/2024/02/constant-stare-us-pacific-commander-wants-ai-to-tell-chinese-military-exercises-from-invasion/

  10. AFCEA International. “AI Will Affect Every Warfighting Function in Indo-Pacific Command.” https://www.afcea.org/signal-media/ai-will-affect-every-warfighting-function-indo-pacific-command

  11. DefenseScoop. “Naval Postgraduate School offering new accelerated master's degree program in AI.” 22 December 2025. https://defensescoop.com/2025/12/22/nps-ai-masters-degree-program-naval-postgraduate-school/

  12. Breaking Defense. “From lasers to logistics: Pentagon CTO announces top six tech priorities.” November 2025. https://breakingdefense.com/2025/11/from-lasers-to-logistics-pentagon-cto-announces-top-six-tech-priorities/

  13. DefenseScoop. “Pentagon names 6 appointees to lead the CTO's top technology efforts.” January 2026. https://defensescoop.com/2026/01/30/dod-cto-critical-technology-areas-emil-michael-cta-appointees/

  14. Georgetown CSET. “China's Military AI Wish List.” https://cset.georgetown.edu/publication/chinas-military-ai-wish-list/

  15. Recorded Future. “China's PLA Leverages Generative AI for Military Intelligence.” https://www.recordedfuture.com/research/artificial-eyes-generative-ai-chinas-military-intelligence

  16. Pentagon 2024 China Report. “New Pentagon report on China's military notes Beijing's progress on LLMs.” DefenseScoop, 26 December 2025. https://defensescoop.com/2025/12/26/dod-report-china-military-and-security-developments-prc-ai-llm/

  17. CNBC. “Chinese researchers develop AI model for military use on the back of Meta's Llama.” 1 November 2024. https://www.cnbc.com/2024/11/01/chinese-researchers-build-ai-model-for-military-use-on-back-of-metas-llama.html

  18. The Diplomat. “How China's Coming 15th Five-Year Plan Will Reshape Military Innovation.” October 2025. https://thediplomat.com/2025/10/how-chinas-coming-15th-five-year-plan-will-reshape-military-innovation/

  19. Jamestown Foundation. “Russia Capitalizes on Development of Artificial Intelligence in Its Military Strategy.” https://jamestown.org/russia-capitalizes-on-development-of-artificial-intelligence-in-its-military-strategy/

  20. UK Government. “Defence Artificial Intelligence Strategy.” https://www.gov.uk/government/publications/defence-artificial-intelligence-strategy/

  21. UK Parliament Defence Committee. “Developing AI capacity and expertise in UK defence.” January 2025. https://committees.parliament.uk/publications/46217/documents/231330/default/

  22. Defense News. “Israel creates hub to hasten military AI, autonomy research.” 2 January 2025. https://www.defensenews.com/global/mideast-africa/2025/01/02/israel-creates-hub-to-hasten-military-ai-autonomy-research/

  23. United Nations Office for Disarmament Affairs. “Key Takeaways of The Military AI, Peace and Security Dialogues 2025.” https://disarmament.unoda.org/en/updates/key-takeaways-military-ai-peace-security-dialogues-2025

  24. PMC/PubMed Central. “Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D.” https://pmc.ncbi.nlm.nih.gov/articles/PMC8904348/

  25. Nextgov/FCW. “DOD's AI acceleration strategy.” February 2026. https://www.nextgov.com/ideas/2026/02/dods-ai-acceleration-strategy/411135/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from Roscoe's Story

In Summary: * Having three different games to follow today, with the games running consecutively, has worked well. Their being radio games, I've still been able to keep up with my prayer regimen. After this IU / Ohio St. game ends I'll finish the night prayers then get ready for bed. All-in-all, a pretty good Saturday.

Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics: * bw= 229.83 lbs. * bp= 142/84 (68)

Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet: * 07:00 – 2 cupcakes, 2 cookies * 10:30 – 1 banana, 1 peanut butter sandwich * 14:50 – home cooked meat & vegetables

Activities, Chores, etc.: * 06:45 – bank accounts activity monitored * 07:00 – read, write, pray, follow news reports from various sources, surf the socials, and nap * 09:45 – listen to relaxing music * 10:45 – listening to the Butler Bulldogs' pregame show ahead of today's game vs. the DePaul Blue Demons * 13:20 – And Butler wins. Final score 81 to 71. * 13:30 – tuned into 105.3 The Fan, Dallas Sports Radio, ahead of this afternoon's game between my Texas Rangers and the San Francisco Giants. Opening pitch is still a good half hour away. * 16:30 – Still following the score and stats of the Rangers/Giants game via the MLB Gameday Screen, but I've moved my radio over to the IU/Ohio St. game, almost time for the opening tip. * 16:40 – the San Francisco Giants win. Final score 7 to 5.

Chess: * 16:52 – moved in all pending CC games

 
Read more...

from Ira Cogan

Anil Dash with a little history of markdown. I love markdown. I love Microsoft Word too, but I don't use it a lot when writing for this thing. I used to start the draft over there and finish over here. But lately I've come to realize that as much as I love Word for a lot of things, I actually get around to finishing more when I start over here, and then I just copy, paste, and save a copy over there. I also enjoy reading Dash's stuff. Part of the reason it's easier for me to actually get started and get around to finishing is markdown. Fascinating stuff.

Gladys West, a mathematician whose work helped create GPS, recently passed away at the age of 95.

David Farber, a computer scientist who helped create and shape the internet, recently passed away.

I was out walking the dog this morning and I saw a write.as sticker in my neighborhood. NYC is a small world sometimes. write.as is the platform I write this thing on. Brooklyn 3/7/26

-Ira

 
Read more...

from

You found me. I was like you once. I searched for answers in books and places I thought right but left me more confused. I tried to put into concepts what can’t be explained.

Stop here. Let the voice in your head ask itself: do I exist?

Something just answered. What was that? A child knows it exists before it knows what existing is. That knowing cannot then be a thought.

But does the knowing know it’s not a thought?

 
Read more... Discuss...

from Roscoe's Quick Notes

Plans for this Saturday include following three radio games: 1.) Up first will be a men's college basketball game with a scheduled start of 11:00 AM Central Time featuring the Butler Bulldogs at the DePaul Blue Demons. 2.) Next will be an MLB Spring Training game with a scheduled start of 2:00 PM Central Time between my Texas Rangers and the San Francisco Giants. 3.) The third and final radio game planned for today will be another men's college basketball contest featuring my Indiana Hoosiers at the Ohio St. Buckeyes, with a scheduled start time of 4:30 PM Central Time.

And the adventure continues.

 
Read more...

from theneverendingmagazine

“There is nothing we can do” should be the official slogan of this city. Beyond the daily indifference people show one another, the most disappointing aspect of living here is the flaccid response of all authority when confronted with problems.

This is not about “safety”. That would be a different issue. It's inertia. When solutions clearly exist, the answer is still “there is nothing we can do”.

Yet there are many things that could be done. Instead, here comes the apathetic, collective shrug, the resignation to the idea that “things are the way they are and always will be”. Responsibility dissolves into indifference. How convenient.

At what point will “nothing can be done” declare itself as a choice, losing its mask of limitation?

How many small failures accumulate before apathy becomes the defining culture of a place?

What exactly remains of the idea of a city, then?

 
Read more...
