Want to join in? Respond to our weekly writing prompts, open to everyone.
fragment of a journal entry from god knows when...
...It is Two AM here in Colorado and I have just come from a disappointingly dense and strange family reunion, one that, compared with what I have formerly known of family reunions on my mother's side, was what an isolated, abandoned college freshman's dorm meal is to a fully laid home-cooked table with the most valuable people in the world. There were no games, no organized activities. We ate in the groups we had arrived with, in the large and nearly vacant commuter college cafeteria. The food was furnished by Sodexo, whom my brother knew from his years in the food business and concert promotion.
“They do a lot of prisons, and colleges, for that same reason. They run everything. From the food carts to the cafeterias to the vending machines.”
I remembered a company like that from when my ex-wife and I traveled to the Grand Canyon for the last appending comma of our distinctly unimpressive honeymoon. The company in that instance was called Xanterra, and they owned everything on the South Rim of one of our most spectacular holes.
It seems fully insulting, in my opinion, not just that they had built this gaudy, garish tourist outpost at one of the many, and certainly one of the more beautiful, works of majestic mother nature in that lovely, unique part of our great country...
from Dallineation
Today I watched “Francis of Assisi” – a 1961 film about the story of Saint Francis. I managed to find a free low-resolution version of the film on YouTube. It was difficult to get into because it was very dated “Hollywood” in its style and acting, but when I started thinking of it as more of a grandiose stage play, it became easier to watch.
I came away from it with a greater appreciation for St. Francis. But I also saw in the YouTube comments on the video that Dolores Hart, the actress who played Clare, became a Catholic nun two years after the film's initial release.
I also discovered a short documentary about Hart called “God is the Bigger Elvis” (a reference to her co-starring with Elvis Presley in the film “Loving You”) and I also found and watched it on YouTube. What a neat woman and beautiful story.
While my church doesn't have monastic orders like nuns or monks, I've long had a profound respect for people who choose to live such a life consecrated to God.
I've sometimes thought about what it would be like for me to live in such a way. I've often felt like forsaking all my worldly possessions and living a life of poverty and devotion to God.
Catholics have this concept of a “Vocation” – entry into the priesthood or a religious order of nuns or monks.
In modern society, the word vocation has become another word for career. But I have always felt that a vocation is more than just a career. It can be a career, of course. But I happen to have found my way into my current career mostly out of expediency, not because it's something I have ever felt I was meant or drawn to do.
A vocation is something one feels a strong desire to do – a calling to do. And I have long been trying to figure out what my vocation is.
Today an idea resurfaced that I have considered many times over the past six months or so:
Maybe I could be a chaplain.
A neighbor of mine and member of my ward has been studying to become a chaplain and she has spoken about it in church. I had never considered being a chaplain before, but as I have thought about it, I feel it's something I would find deeply meaningful and fulfilling.
It's also something that would be extremely difficult and I would need to be well-anchored in my faith, as well as avail myself of a therapist and other means of coping with the difficult and sometimes horrible circumstances and situations I would be exposed to in such a vocation.
My church has a web page with information about being a chaplain and I have reviewed much of the material there. I have always thought of chaplains being for the military, but they are in a lot of different places, from hospitals, to prisons, even universities. There is still much I don't know. But I'd like to learn more.
Whatever my vocation, I want to be able to help people. To give them hope. To help them to know they are loved.
Maybe going through this time of spiritual distress and searching has been necessary so that I can empathize with, relate to, and minister to others experiencing the same.
#100DaysToOffload (No. 148) #faith #Lent
from SmarterArticles

In December 2025, MIT announced a programme that would have seemed implausible even a decade earlier: a two-year master's degree designed to teach naval officers the fundamentals of artificial intelligence, machine learning, and autonomous systems. The programme, designated 2N6, pairs the university's Department of Mechanical Engineering with its Department of Electrical Engineering and Computer Science, awarding graduates both a Master of Science in mechanical engineering and an AI certificate from the MIT Schwarzman College of Computing. It is, in essence, a bet that the future of naval warfare will be shaped not by those who build the biggest ships, but by those who best understand the algorithms directing them.
The timing is no coincidence. In January 2026, the Department of Defense released its Artificial Intelligence Acceleration Strategy, declaring its intention to become an “AI-first” organisation. Under Secretary for Research and Engineering Emil Michael had already pruned the Pentagon's list of critical technology areas from fourteen to six, placing applied artificial intelligence at the very top. And at U.S. Indo-Pacific Command, where the prospect of conflict with a technologically sophisticated adversary concentrates minds with particular intensity, the commander, Admiral Samuel Paparo, had been arguing for months that future wars would be won not by superior firepower alone, but by whoever could “see, understand, decide and act faster.” The question was no longer whether the military needed AI-literate officers, but how quickly it could produce them.
The origins of 2N6 trace back to a campus visit by Paparo himself. The admiral toured MIT's existing AI research facilities and immediately recognised a gap. The university had maintained the 2N Naval Construction and Engineering programme since 1901, training generations of officers in ship design and acquisition. The programme was about to celebrate its 125th anniversary in 2026. But the world had changed. The defining technologies of 21st-century naval power were no longer hull forms and propulsion systems alone; they were neural networks, reinforcement learning, and autonomous underwater vehicles. Paparo envisioned an applied AI programme modelled on the existing 2N infrastructure, and within months, 2N6 began taking shape.
Commander Christopher MacLean, MIT associate professor of the practice in mechanical engineering, naval construction, and engineering, has been central to the programme's development. MacLean, himself a graduate of the 2N programme whose thesis focused on the fracture and plasticity characterisation of DH-36 Navy steel, explained that Paparo “was given an overview of some of the cutting-edge work and research that MIT has done and is doing in the field of AI” and “made the connection, envisioning an applied AI program similar to 2N.” In describing the programme's scope, MacLean was emphatic about breadth: “AI is a force multiplier that can be used for data processing, decision support, unmanned and autonomous systems, cyber defence, logistics and supply chains, energy management, and many other fields.” This is not a programme narrowly focused on weapons systems or battlefield robots; it treats artificial intelligence as a pervasive capability touching every aspect of naval operations.
Dan Huttenlocher, the inaugural Dean of the MIT Schwarzman College of Computing, lent institutional weight to the announcement. “I'm honoured that the college can contribute to and support such a vital program that will equip our nation's naval officers with the technical expertise they need,” Huttenlocher stated. His involvement signals the seriousness of MIT's commitment: Huttenlocher, who previously founded Cornell Tech and co-authored “The Age of AI: And Our Human Future” with Henry Kissinger and Eric Schmidt, brings both academic credibility and a deep engagement with the societal implications of artificial intelligence.
The 2N6 curriculum reflects a deliberate attempt to balance theoretical depth with operational relevance, structured to satisfy the U.S. Navy's sub-specialty code for Applied Artificial Intelligence. Students begin with a “Summer Camp” of foundational courses covering linear algebra and optimisation, introductory programming, discrete mathematics and proofs, algorithms and data structures, and software fundamentals. These are not optional polish; they are prerequisites designed to ensure that officers arriving from operational billets, where they may have spent years commanding ships or submarines rather than writing code, have the mathematical and computational fluency to engage with what follows.
The core of the programme divides into several tracks. The probability, inference, and machine learning sequence includes courses in stochastic dynamical systems, introduction to probability, introduction to inference, and both introductory and advanced machine learning. These build toward specialised AI topics: advances in computer vision, topics in multi-agent learning, quantitative methods for natural language processing, optimisation methods, and a course titled “AI, Decision Making and Society.” That final course is significant. It signals that 2N6 does not treat artificial intelligence as a purely technical problem but as one embedded in social, political, and ethical contexts that military leaders must navigate with the same rigour they apply to technical challenges.
The naval applications track offers four areas of concentration, each designed to connect AI theory to operational reality. In autonomy, students study unmanned marine vehicle autonomy, sensing and communications, manoeuvring and control of surface and underwater vehicles, and principles of autonomy and decision making. In design and manufacturing, the focus turns to AI and machine learning for design, principles of naval ship design, and manufacturing processes and systems. A games and strategy track covers reinforcement learning combined with game theory and wargaming, preparing officers for the adversarial dynamics of actual conflict. And an innovation track provides team-based interdisciplinary collaboration, simulating the cross-functional problem-solving that AI deployment demands in practice.
Themis Sapsis, the William I. Koch Professor in mechanical engineering and Director of the Center for Ocean Engineering at MIT, has described the programme as “specifically designed to train naval officers on the fundamentals and applications of AI, but also involve them in research that has direct impact to the Navy.” Sapsis, who holds a diploma in naval architecture and marine engineering from the Technical University of Athens and a PhD in mechanical and ocean engineering from MIT, brings direct domain expertise to the programme. His own research spans nonlinear dynamical systems, probabilistic modelling, and data-driven methods, with applications ranging from predicting catastrophic sea waves to calculating extreme loads on warships. His work has been recognised with awards from the Office of Naval Research, the Army Research Office, and the Air Force Office of Scientific Research. “2N6 can model a new paradigm for advanced AI education focused more broadly on supporting national security,” Sapsis has emphasised, positioning the programme not merely as a naval initiative but as a potential template for defence AI education writ large.
John Hart, Head of MIT's Department of Mechanical Engineering, framed the programme in generational terms: “With the 2N6 program, we're proud to be at the helm of such an important charge in training the next generation of leaders for the Navy.” Asu Ozdaglar, Deputy Dean of the Schwarzman College of Computing, similarly described the partnership as “an important collaboration with the U.S. Navy” that reflects the college's broader mission to bring computing expertise to consequential domains.
The specific competencies the programme prioritises reveal much about where the U.S. Navy believes its AI gaps are most acute. Autonomous systems sit at the top of the list, and for good reason. Admiral Paparo has been explicit about wanting large numbers of low-cost, long-endurance unmanned sensor platforms, including drones, robot ships, and autonomous underwater vehicles, to maintain persistent surveillance across the Indo-Pacific. With Chinese wargames growing ever larger and more realistic, Paparo has argued that traditional intelligence “indications and warning” can no longer reliably distinguish between exercises and preparations for an actual invasion. His proposed solution: surveillance drones feeding AI analysis to detect anomalies and patterns more quickly and accurately than human analysts could manage alone.
“We never send a human being to do something that a machine can do,” Paparo has stated. “We never lose human agency over offensive power.” The tension between those two principles captures the central challenge of military autonomy: expanding the envelope of machine capability whilst maintaining meaningful human control. Graduates of 2N6 will be expected to design and manage systems that operate in this tension, understanding both the engineering of autonomy and the doctrinal requirements for human oversight.
Cyber defence represents another critical domain. The ability to protect AI systems themselves from adversarial manipulation, data poisoning, and model exploitation is becoming as important as the AI capabilities those systems provide. An AI-enabled fleet that can be fooled by adversarial inputs or compromised through supply chain attacks on its training data becomes a liability rather than an advantage. The curriculum's emphasis on algorithms, data structures, and software fundamentals is not merely academic preparation; it provides the conceptual toolkit for understanding how AI systems can be attacked and defended. MIT Lincoln Laboratory's Embedded and Open Systems Group has been developing AI research environments specifically to evaluate promising embedded AI technologies and their impact on critical defence missions, from advanced multimodal navigation to synthetic aperture radar object detection.
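To make the data-poisoning threat concrete, here is a deliberately tiny sketch: a 1-D nearest-centroid classifier whose decision boundary an attacker shifts by injecting a handful of mislabelled training points. The classifier and data are hypothetical toy examples, not any system described above.

```python
# Illustrative sketch only: how a few mislabelled training points
# ("data poisoning") can move a simple classifier's decision boundary.

def centroid_boundary(samples):
    """Train a 1-D nearest-centroid classifier; return its decision boundary."""
    class_0 = [x for x, label in samples if label == 0]
    class_1 = [x for x, label in samples if label == 1]
    centroid_0 = sum(class_0) / len(class_0)
    centroid_1 = sum(class_1) / len(class_1)
    return (centroid_0 + centroid_1) / 2  # midpoint between the two centroids

# Clean training data: class 0 clusters near 1.0, class 1 near 5.0.
clean = [(0.8, 0), (1.0, 0), (1.2, 0), (4.8, 1), (5.0, 1), (5.2, 1)]

# Poisoned copy: an attacker adds two extreme points with flipped labels,
# dragging the class-0 centroid toward class 1.
poisoned = clean + [(5.4, 0), (5.6, 0)]

print(centroid_boundary(clean))     # boundary at 3.0
print(centroid_boundary(poisoned))  # boundary shifted to 3.9
```

After poisoning, a genuine class-1 contact at 3.5 would be misclassified as class 0, which is exactly why the curriculum treats training-data integrity as a defence problem, not just an engineering detail.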
Decision intelligence, the application of AI to command-and-control processes, constitutes perhaps the most consequential area. At U.S. Indo-Pacific Command, AI is already being pursued to accelerate the decision cycle and provide predictive analysis for logistics. Colonel Jared Voneida, INDOPACOM's C4 Operations Division chief, has noted that AI is being pursued to speed up the decision cycle across every warfighting function. The concept of “decision superiority,” which Paparo has defined as understanding “who is making the best decisions, who is best able to see, understand, decide, act, learn and assess,” depends on officers who can critically evaluate AI-generated recommendations rather than simply accepting them. This requires not just technical literacy but a sophisticated understanding of where AI excels, where it fails, and how to design human-machine teaming arrangements that exploit strengths whilst compensating for weaknesses.
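One simple human-machine teaming pattern of the kind described above can be sketched as a confidence gate: high-confidence AI recommendations are acted on automatically, while ambiguous ones are escalated to a person. The function name, threshold, and scenarios are illustrative assumptions, not DoD doctrine.

```python
# Hypothetical sketch of a human-in-the-loop confidence gate.
# Threshold and routing labels are assumptions for illustration only.

CONFIDENCE_THRESHOLD = 0.90  # assumed policy: below this, a human decides

def route_recommendation(recommendation, confidence):
    """Return who acts on an AI recommendation, preserving human oversight."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("machine", recommendation)   # routine, high-confidence case
    return ("human_review", recommendation)  # ambiguity escalates to a person

print(route_recommendation("reroute supply convoy", 0.97))
print(route_recommendation("reclassify surface contact", 0.62))
```

The design question 2N6 graduates would face is not the gate itself but where to set it: too low and human oversight becomes nominal; too high and the decision-cycle advantage AI promises evaporates.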
Machine learning for manufacturing and design rounds out the technical portfolio. Naval shipbuilding remains an enormously complex industrial undertaking, and AI-driven design optimisation, predictive maintenance, and manufacturing process control offer significant potential for reducing costs and timelines. MIT Lincoln Laboratory has already demonstrated systems like COVAS (Human-Machine Collaborative Optimisation via Apprenticeship Scheduling), which uses machine learning to provide real-time ship defence scheduling solutions by learning from human experts. COVAS is the first and only algorithm to provide such real-time solutions, and researchers plan to mature the technology before proposing it as a Future Naval Capability to the Office of Naval Research. Maintenance operations across INDOPACOM are also being transformed through AI-enabled predictive systems that analyse sensor data from shipboard systems and aircraft components to identify potential failures before they become critical. Graduates of 2N6 would be expected to evaluate, integrate, and manage such systems across the fleet.
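The predictive-maintenance idea mentioned above can be illustrated with a minimal anomaly detector: flag a sensor reading that deviates sharply from its recent baseline. The rolling z-score rule and 3-sigma cutoff are common textbook choices, not the Navy's actual method, and the vibration data is simulated.

```python
# Toy sketch of predictive maintenance: flag a shipboard sensor reading
# as anomalous when it sits far outside the recent baseline.
import statistics

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices whose reading lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Simulated bearing-vibration levels: steady, then a spike before failure.
vibration = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 4.2, 1.0]
print(flag_anomalies(vibration))  # the spike at index 8 is flagged
```

Production systems replace the z-score with learned models over many sensor channels, but the operational payoff is the same: a flagged component is serviced before it fails at sea.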
Perhaps the most consequential element of the 2N6 curriculum is one that might easily be overlooked: the mandatory inclusion of coursework in the social and ethical responsibilities of computing. This is not a token addition. The MIT Schwarzman College of Computing operates SERC (Social and Ethical Responsibilities of Computing), a cross-cutting initiative led by associate deans Nikos Trichakis and Brian Hedden. SERC develops peer-reviewed case studies, active learning projects, and pedagogical materials addressing privacy and surveillance, inequality and justice, autonomous systems and robotics, ethical computing practice, and law and policy. Its materials are based on original research, published through open-access licensing, and designed for integration across MIT's computing curriculum. Naval officers in 2N6 will encounter these frameworks not as a separate ethics module bolted onto a technical degree, but as an integral dimension of their AI education.
This integration matters because the Department of Defense has its own ethical framework that graduates will be expected to operationalise. The DoD adopted five principles for the ethical development of AI capabilities: responsible, equitable, traceable, reliable, and governable. The Responsible AI Strategy and Implementation Pathway translates these principles into concrete requirements, promoting human-machine teaming rather than fully autonomous systems and requiring that AI technologies be integrated in a lawful, ethical, and accountable manner. The DoD's Responsible AI Toolkit builds on the Defence Innovation Unit's guidelines, NIST's AI Risk Management Framework, and IEEE 7000-2021, establishing standards for operationalising ethical principles throughout the technology lifecycle. The Defence Innovation Unit launched its strategic initiative in March 2020 specifically to implement ethical principles into commercial prototyping and acquisition programmes, ensuring alignment through a process designed to be reliable, replicable, and scalable.
The question of traceability deserves particular attention. Traceability, in the DoD's formulation, means the ability to track and document all data and decisions of an AI tool, including how it was trained and how it processes information. For officers deploying AI in operational contexts, this creates obligations that are simultaneously technical (implementing logging, auditing, and explainability mechanisms) and organisational (ensuring that chains of command can meaningfully review AI-informed decisions). The programme's emphasis on algorithms, inference, and decision-making provides the technical foundation for understanding traceability, whilst the ethics coursework provides the normative framework for why it matters.
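A minimal sketch of the technical half of that traceability obligation: log every AI-informed decision with enough context (model version, inputs, output, timestamp) for later review, plus a content hash so tampering is detectable. The field names, model name, and in-memory log are illustrative assumptions.

```python
# Hypothetical sketch of an auditable decision log for traceability.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice: append-only, tamper-evident storage

def record_decision(model_version, inputs, output):
    """Append an auditable record of one AI-informed decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    # A content hash lets a reviewer detect after-the-fact tampering.
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    AUDIT_LOG.append(entry)
    return entry

rec = record_decision(
    model_version="track-classifier-v2.3",  # hypothetical model name
    inputs={"sensor": "radar-07", "contact_speed_kts": 31},
    output="classify as fast surface contact",
)
print(rec["digest"][:12], rec["model_version"])
```

The organisational half, ensuring a chain of command actually reviews such records, is precisely what no amount of logging code can supply on its own.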
Yet genuine tensions remain. The DoD's ethical principles exist alongside a policy environment that has shifted significantly. President Biden's Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of AI, issued in October 2023, established foundational requirements for AI safety across federal agencies. That order was revoked in January 2025, and the subsequent AI Action Plan focuses less on safe development and more on acceleration. The DoD's own ethical principles remain formally in place, but the broader political context creates ambiguity about how rigorously they will be enforced. As Paparo himself has put it: “we need robust, ethical AI systems that enhance decision-making while fiercely preserving human oversight of critical operations.” Officers trained at MIT will enter a system where stated principles and operational incentives may not always align, making their ability to navigate ethical complexity all the more important.
The technologies that 2N6 graduates will master are, almost without exception, dual-use. The same computer vision algorithms that identify military targets can diagnose medical conditions. The same natural language processing techniques that analyse intercepted communications can power consumer chatbots. The same reinforcement learning methods that optimise military logistics can manage commercial supply chains. This fundamental characteristic of AI technology, that its military and civilian applications are often indistinguishable at the algorithmic level, creates governance challenges that no single curriculum can resolve.
Research published in PMC (PubMed Central) has documented what scholars term the “double-distinguishability problem” of AI: not only is AI software with potential military applications likely to reside in both military and civilian networks, but even within the military domain, distinguishing between platforms that integrate AI and those that do not is extremely difficult. This complicates arms control, export regulation, and confidence-building measures. The degree of transparency required to build international confidence or ensure compliance with agreements may itself produce security vulnerabilities, discouraging cooperation.
The inherent opacity of many advanced machine learning systems compounds the problem. Despite strong performance in testing environments, the underlying reasoning of deep neural networks remains largely opaque. This “black box” quality compromises the human oversight required to uphold legal and ethical standards in military operations, particularly when AI decisions are made in milliseconds. Legal regimes must clarify fault attribution, determining whether responsibility falls on the commanding officer, the system developer, the algorithm designer, or the deploying state. What constitutes “meaningful human control” remains ambiguous and case-dependent, with a recent analysis noting that a human can technically interact with an autonomous system without having any substantive moral, legal, or operational oversight.
The United Nations Office for Disarmament Affairs convened the Military AI, Peace and Security Dialogues in 2025, where participants emphasised retaining human judgement and control over decisions on the use of force. They cautioned that legal determinations should not be coded into opaque systems, and that decision-making support tools should enable, not replace, legality and ethical reasoning. The U.S. State Department's Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy established broader norms, requiring that military AI use comply with international humanitarian law, that accountability be maintained through a responsible human chain of command, and that states take proactive steps to minimise unintended bias.
For MIT's 2N6 graduates, this dual-use reality means that their technical skills will be applicable across domains, but their ethical and governance training will need to be specifically calibrated for military contexts where the consequences of error are measured in lives rather than revenue. The programme's integration of game theory, wargaming, and reinforcement learning acknowledges that military AI operates in adversarial environments where rational actors are actively trying to exploit, deceive, or defeat the systems being deployed.
MIT's 2N6 programme does not exist in a vacuum. It is one move in an accelerating international competition to build AI-literate military forces, and the landscape of that competition reveals starkly different approaches to the same underlying challenge.
China represents the most direct competitive pressure. The People's Liberation Army views AI as leading to the next revolution in military affairs and expects to field a range of “algorithmic warfare” and “network-centric warfare” capabilities by 2030. Georgetown University's Center for Security and Emerging Technology has identified 370 Chinese institutions whose researchers have published papers related to general artificial intelligence. The PLA's approach relies heavily on military-civil fusion, integrating universities and commercial technology companies directly into defence research and development. A majority of suppliers for AI-related PLA procurement contracts are now civilian companies and universities rather than traditional state-owned defence enterprises.
Chinese researchers at institutions linked to the PLA's Academy of Military Science used Meta's open-source Llama 2 13B model to build “ChatBIT,” a military-focused AI tool fine-tuned and “optimised for dialogue and question-answering tasks in the military field.” The PLA rapidly adopted DeepSeek's generative AI models in early 2025, likely deploying them for intelligence purposes. The Pentagon's 2024 China report noted that “China's commercial and academic AI sectors made progress on large language models and LLM-based reasoning models, which has narrowed the performance gap between China's models and the U.S. models currently leading the field.” China's emerging 15th Five-Year Plan framework is expected to institutionalise military-civil fusion as the primary pathway for achieving what Chinese strategists call an “intelligentised” PLA by 2035.
Russia has pursued a different trajectory, constrained by sanctions and a smaller technology sector. The National Strategy for the Development of Artificial Intelligence, signed by President Putin in 2019, set targets of training 15,500 AI specialists by 2030 and allocated 26.49 billion rubles to AI development from 2025 to 2027. Russia aims to automate 30 percent of its military equipment and has begun integrating AI into systems like the ZALA Lancet drone swarm, which reportedly allows drones to exchange information and divide tasks autonomously. However, senior Russian military experts, including Vladimir Prikhvatilov of the Academy of Military Science, have acknowledged that Russia has “virtually no chances to catch up with the Chinese or the Americans” in military AI. The war in Ukraine has both accelerated urgency and exposed the gap between Russia's AI rhetoric and its actual capabilities, with international sanctions further constraining access to advanced computing hardware.
The United Kingdom offers a more direct parallel to MIT's approach. The UK Ministry of Defence published its Defence Artificial Intelligence Strategy describing an “ambitious, safe, responsible” approach to military AI. The Alan Turing Institute, as a strategic partner of the Defence Science and Technology Laboratory (Dstl), conducts defence-relevant AI research and has published frameworks for AI assurance in military contexts, including a commander's guide for uncrewed systems and recommendations for iteratively identifying, documenting, and communicating risks. A January 2025 Defence Committee report called on the Ministry of Defence to “transform itself into an 'AI-native' organisation” whilst acknowledging that the sector remained under-developed. Sub-committee chair Emma Lewell-Buck emphasised the need to make AI “a greater part of military education” and to facilitate movement between civilian and defence AI sectors, a recommendation that echoes precisely the gap MIT's 2N6 programme is designed to fill.
Israel has arguably moved furthest in operational deployment. The IDF established the Artificial Intelligence and Big Data Research Centre, created a new AI Division within its C4I and Cyber Defence Directorate following lessons from the Israel-Hamas War, and in January 2025, the Israeli Ministry of Defence established the AI and Autonomy Administration. Eyal Zamir, the Ministry's director general, emphasised that this was the first new administration established within the Ministry in over two decades. Approximately 750 military reservists were enrolled in AI training programmes organised by Israel's Innovation Authority and the Ministry of Defence in January 2026, reflecting a recognition that AI literacy cannot be confined to active-duty specialists. The IDF's model of recruiting talented high school graduates into elite technology units like Unit 8200, training them intensively through programmes like the 36-month Havatzalot Programme at Hebrew University, and then cycling them into the civilian technology sector creates a distinctive pipeline that no other nation has fully replicated.
The emergence of programmes like 2N6 points toward a fundamental recomposition of what militaries expect from their officer corps. The traditional career path, in which technical specialists remained in engineering billets whilst operational commanders focused on tactics and leadership, is giving way to a model that demands hybrid competency. Officers who will command AI-enabled forces need enough technical understanding to evaluate what their systems can and cannot do, enough ethical grounding to make responsible deployment decisions, and enough strategic vision to understand how AI reshapes the character of conflict.
The Naval Postgraduate School in Monterey, California, announced its own accelerated one-year Master of Science in Artificial Intelligence in late 2025, set to commence in July 2026. The programme comprises 21 courses, requires residency in Monterey, and is open to active-duty military officers, DoD civilian employees, and allied officers with computer science backgrounds. An NPS AI initiative launched in early 2025 established three lines of effort: AI education, problem-solving, and technology infrastructure, with industry partners including NVIDIA supporting cutting-edge education and applied research. Meanwhile, NPS also offers a distance-learning AI certificate comprising four courses, designed for military professionals without technical backgrounds, recognising that even non-specialist officers need baseline AI literacy.
Emil Michael declared that “the Department of War must become an 'AI-First' organisation,” and the January 2026 AI Acceleration Strategy codified this vision through four broad aims: incentivising internal experimentation with AI models, eliminating bureaucratic obstacles, focusing military investment on asymmetric advantages, and initiating Pace-Setting Projects. Cameron Stanley, previously chief of the DoD Algorithmic Warfare Cross Functional Team (formerly known as Project Maven) and a former national security transformation lead for Amazon Web Services, was appointed to lead the Applied Artificial Intelligence critical technology area.
These developments suggest a future in which AI literacy becomes a prerequisite for advancement rather than a specialist qualification. Just as nuclear propulsion reshaped the U.S. Navy's officer corps in the 1950s and 1960s, creating a cadre of nuclear-trained officers led by Admiral Hyman Rickover whose influence extended far beyond the engineering department, AI may create a similar dynamic. Officers who understand machine learning, autonomous systems, and decision intelligence will increasingly populate senior leadership positions, bringing with them assumptions, methodologies, and risk tolerances shaped by their technical training.
The implications extend well beyond the United States. As the UK Defence Committee recognised, military AI development requires not just technical infrastructure but a transformed workforce. The challenge is particularly acute for smaller nations that cannot replicate MIT's resources or the NPS's scale. International partnerships, joint training programmes, and standardised AI competency frameworks may emerge as mechanisms for distributing AI literacy across allied military forces. The 2N6 programme already anticipates this: whilst the first cohort will comprise only U.S. Navy officers, plans exist to expand to other military branches, allied officers, and civilian participants. The U.S. State Department's Political Declaration provides one potential foundation for allied cooperation, establishing shared expectations around accountability, human oversight, bias minimisation, and senior official involvement in AI deployment decisions.
MIT's decision to launch 2N6 also illuminates the evolving relationship between universities and defence establishments. This is not new territory for MIT. The university founded Lincoln Laboratory in 1951, which has since developed advanced technologies for national security across domains including air and missile defence, undersea systems, embedded AI, and cyber security. Lincoln Laboratory hosts the annual RAAINS (Recent Advances in AI for National Security) Workshop, showcasing state-of-the-art national security AI applications, and the ANCHOR (Advancing Naval Capabilities through Holistic Opportunity and Research) Technology Workshop, which provides an open forum for discussing requirements of U.S. Naval Special Warfare Command. The Schwarzman College of Computing, established with a one-billion-dollar commitment, explicitly aims to address the opportunities and challenges of pervasive computing and the rise of AI across all fields of study.
Yet the partnership is not without tension. Huttenlocher's co-authorship of “The Age of AI” reflects the kind of broad civilisational thinking about artificial intelligence that academic freedom enables. The college's SERC initiative explicitly addresses privacy and surveillance, inequality and justice, and autonomous systems, topics that inevitably create friction when applied to military contexts. Academic freedom, open publication, and ethical inquiry sit uncomfortably alongside classification requirements, operational security, and institutional loyalty. How MIT navigates these tensions within 2N6 will offer a template, or a cautionary tale, for other universities considering similar partnerships.
The broader trend is unmistakable. Universities globally are recognising that AI for national security represents both a significant funding stream and a consequential research domain. The question is whether academic institutions can engage with military applications whilst maintaining the independence and ethical rigour that give their contributions value. If 2N6 becomes merely a credential-minting operation, it will fail both MIT and the Navy. If it genuinely produces officers capable of critical, ethical, technically informed thinking about AI in military contexts, it could influence how democracies approach the integration of artificial intelligence into their most consequential institutions.
The 2N6 programme will run as a pilot for at least two years. Its success will ultimately be measured not by the grades its graduates earn but by whether they can bridge the gap between what AI can do in a laboratory and what it should do in the field.
Admiral Paparo's vision of decision superiority, of forces that can see, understand, decide, and act faster than any adversary, depends on officers who are not merely consumers of AI capability but informed, critical, and ethically grounded practitioners. MIT's 2N6 programme represents the most ambitious academic attempt to produce such officers. Whether it succeeds will depend on factors far beyond the curriculum: on institutional support within the Navy, on career incentives that reward AI competency, on the political will to enforce ethical principles even when they slow deployment, and on the willingness of military culture to embrace a fundamentally different kind of expertise.
The 2N programme celebrates its 125th year at MIT in 2026. If 2N6 proves its worth, the university may find itself at the centre of military education for another century, this time training officers not to design ships, but to think alongside the machines that will increasingly operate them.
MIT News. “New MIT program to train military leaders for the AI age.” 12 December 2025. https://news.mit.edu/2025/applied-ai-program-train-military-leaders-ai-age-1212
MIT 2N6 Programme. “Curriculum.” https://2n6.mit.edu/curriculum/
MIT Lincoln Laboratory. “Artificial intelligence system helps Navy select the best tactics for ship defense.” https://www.ll.mit.edu/news/artificial-intelligence-system-helps-navy-select-best-tactics-ship-defense
MIT Schwarzman College of Computing. “Social and Ethical Responsibilities of Computing (SERC).” https://computing.mit.edu/cross-cutting/social-and-ethical-responsibilities-of-computing/
U.S. Department of Defense. “DOD Adopts 5 Principles of Artificial Intelligence Ethics.” https://www.war.gov/News/News-Stories/article/article/2094085/dod-adopts-5-principles-of-artificial-intelligence-ethics/
U.S. Department of Defense. “Responsible AI Strategy and Implementation Pathway.” October 2024. https://media.defense.gov/2024/Oct/26/2003571790/-1/-1/0/2024-06-RAI-STRATEGY-IMPLEMENTATION-PATHWAY.PDF
Defense Innovation Unit. “Responsible AI Guidelines.” https://www.diu.mil/responsible-ai-guidelines
U.S. Department of State. “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” https://2021-2025.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/
Breaking Defense. “'Constant stare': US Pacific commander wants AI to tell Chinese military exercises from invasion.” February 2024. https://breakingdefense.com/2024/02/constant-stare-us-pacific-commander-wants-ai-to-tell-chinese-military-exercises-from-invasion/
AFCEA International. “AI Will Affect Every Warfighting Function in Indo-Pacific Command.” https://www.afcea.org/signal-media/ai-will-affect-every-warfighting-function-indo-pacific-command
DefenseScoop. “Naval Postgraduate School offering new accelerated master's degree program in AI.” 22 December 2025. https://defensescoop.com/2025/12/22/nps-ai-masters-degree-program-naval-postgraduate-school/
Breaking Defense. “From lasers to logistics: Pentagon CTO announces top six tech priorities.” November 2025. https://breakingdefense.com/2025/11/from-lasers-to-logistics-pentagon-cto-announces-top-six-tech-priorities/
DefenseScoop. “Pentagon names 6 appointees to lead the CTO's top technology efforts.” January 2026. https://defensescoop.com/2026/01/30/dod-cto-critical-technology-areas-emil-michael-cta-appointees/
Georgetown CSET. “China's Military AI Wish List.” https://cset.georgetown.edu/publication/chinas-military-ai-wish-list/
Recorded Future. “China's PLA Leverages Generative AI for Military Intelligence.” https://www.recordedfuture.com/research/artificial-eyes-generative-ai-chinas-military-intelligence
Pentagon 2024 China Report. “New Pentagon report on China's military notes Beijing's progress on LLMs.” DefenseScoop, 26 December 2025. https://defensescoop.com/2025/12/26/dod-report-china-military-and-security-developments-prc-ai-llm/
CNBC. “Chinese researchers develop AI model for military use on the back of Meta's Llama.” 1 November 2024. https://www.cnbc.com/2024/11/01/chinese-researchers-build-ai-model-for-military-use-on-back-of-metas-llama.html
The Diplomat. “How China's Coming 15th Five-Year Plan Will Reshape Military Innovation.” October 2025. https://thediplomat.com/2025/10/how-chinas-coming-15th-five-year-plan-will-reshape-military-innovation/
Jamestown Foundation. “Russia Capitalizes on Development of Artificial Intelligence in Its Military Strategy.” https://jamestown.org/russia-capitalizes-on-development-of-artificial-intelligence-in-its-military-strategy/
UK Government. “Defence Artificial Intelligence Strategy.” https://www.gov.uk/government/publications/defence-artificial-intelligence-strategy/
UK Parliament Defence Committee. “Developing AI capacity and expertise in UK defence.” January 2025. https://committees.parliament.uk/publications/46217/documents/231330/default/
Defense News. “Israel creates hub to hasten military AI, autonomy research.” 2 January 2025. https://www.defensenews.com/global/mideast-africa/2025/01/02/israel-creates-hub-to-hasten-military-ai-autonomy-research/
United Nations Office for Disarmament Affairs. “Key Takeaways of The Military AI, Peace and Security Dialogues 2025.” https://disarmament.unoda.org/en/updates/key-takeaways-military-ai-peace-security-dialogues-2025
PMC/PubMed Central. “Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D.” https://pmc.ncbi.nlm.nih.gov/articles/PMC8904348/
Nextgov/FCW. “DOD's AI acceleration strategy.” February 2026. https://www.nextgov.com/ideas/2026/02/dods-ai-acceleration-strategy/411135/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary: * Having three different games to follow today, with the games running consecutively, has worked well. They being radio games, I've still been able to keep up with my prayer regimen. After this IU / Ohio St. game ends I'll finish the night prayers, then get ready for bed. All-in-all, a pretty good Saturday.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 229.83 lbs. * bp= 142/84 (68)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 07:00 – 2 cupcakes, 2 cookies * 10:30 – 1 banana, 1 peanut butter sandwich * 14:50 – home cooked meat & vegetables
Activities, Chores, etc.: * 06:45 – bank accounts activity monitored * 07:00 – read, write, pray, follow news reports from various sources, surf the socials, and nap * 09:45 – listen to relaxing music * 10:45 – listening to the Butler Bulldogs' pregame show ahead of today's game vs. the DePaul Blue Demons * 13:20 – And Butler wins. Final score 81 to 71. * 13:30 – tuned into 105.3 The Fan, Dallas Sports Radio, ahead of this afternoon's game between my Texas Rangers and the San Francisco Giants. Opening pitch is still a good half hour away. * 16:30 – Still following the score and stats of the Rangers/Giants game via the MLB Gameday screen, but I've moved my radio over to the IU/Ohio St. game, almost time for the opening tip. * 16:40 – the San Francisco Giants win. Final score 7 to 5.
Chess: * 16:52 – moved in all pending CC games
from
Ira Cogan
Anil Dash with a little history of markdown. I love markdown. I love Microsoft Word too, but I don't use it much when writing for this thing. I used to start the draft over there and finish over here. But lately I've come to realize that as much as I love Word for a lot of things, I actually get around to finishing more when I start over here, and then I just copy, paste, and save a copy over there. I also enjoy reading Dash's stuff. Part of the reason it's easier for me to actually get started and get around to finishing is markdown. Fascinating stuff.
Gladys West, a mathematician whose work helped create GPS, recently passed away at the age of 95.
David Farber, a computer scientist who helped create and shape the internet, recently passed away.

I was out walking the dog this morning and I saw a write.as sticker in my neighborhood. NYC is a small world sometimes. write.as is the platform I write things on. Brooklyn 3/7/26
-Ira
from
Reflections
Tomorrow may rain, so I'll follow the sun
—Paul McCartney
#Life #Quotes
from
Have A Good Day
We’re having a fantastic time at the New Colossus Festival in New York City. While it’s smaller than SXSW or The Great Escape, it has the same effect: feeling inspired and making connections.
from ‡
You found me. I was like you once. I searched for answers in books and places I thought right but left me more confused. I tried to put into concepts what can’t be explained.
Stop here. Let the voice in your head ask itself: do I exist?
Something just answered. What was that? A child knows it exists before it knows what existing is. That knowing cannot then be a thought.
But does the knowing know it’s not a thought?
from
Roscoe's Quick Notes
Plans for this Saturday include following three radio games: 1.) Up first will be a men's college basketball game with a scheduled start of 11:00 AM Central Time featuring the Butler Bulldogs at the DePaul Blue Demons. 2.) Next will be an MLB Spring Training game with a scheduled start of 2:00 PM Central Time between my Texas Rangers and the San Francisco Giants. 3.) The third and final radio game planned for today will be another men's college basketball contest featuring my Indiana Hoosiers at the Ohio St. Buckeyes, with a scheduled start time of 4:30 PM Central Time.
And the adventure continues.
from
Kroeber
Cal Newport proposes, in a bioethics context: imagine that someone begins to reflect on the dangers biotechnology could bring to the world. And that this person asks themselves, what if it were possible to clone dinosaur eggs? Starting from that premise, the person then spends twenty years developing a body of reflection on the dangers of a world with dinosaurs on the loose. They theorise in detail about how difficult it would be to control dinosaurs, about the size and characteristics of fences that could actually contain tyrannosaurs, about what kind of tranquilliser darts would be effective enough to put animals of such immense size to sleep.
Well, says Newport, this is what is being done with Artificial Intelligence. The doomers, like Eliezer Yudkowsky, who warn of the danger of a superintelligence emerging from current software, start from the assumption that it is possible, from today's language models and AI agents, to generate an autonomous superintelligence with goals and intentions. And then they spend their time imagining how hard it would be to contain a superintelligence on the loose, forgetting that everything they say rests on an assumption. First one must assume that it is possible to create a superintelligence. Everything else is imagination. And current computer science has no idea how to generate a superintelligence.
Cal Newport calls this the philosopher's fallacy. On his channel about ontology, Casey Hart refers to this fallacy as “Blueprint Bias”, which we might translate, rather clumsily, as a bias of initial design. Basically, the fallacy consists in treating an assumption as a fact, or in forgetting that everything one says rests on an assumption. Whether in the hypothetical case of the warning about dinosaurs on the loose, or in the case of an artificial superintelligence on the loose, everything happens inside a thought experiment. Such experiments are dear to science-fiction writers; “what if...?” premises are the basis of many stories. But science and political decisions need to rest on reality. All this doomer clamour, with its Matrix-style apocalypse scenarios, distracts us from the real ethical questions of our time concerning artificial intelligence: its use in autonomous weapons and mass surveillance, the blatant theft of personal data and of artists' work, the potential for an even greater concentration of power and the corruption of democracies, the ease with which fake news is now produced. None of this requires superintelligences, and none of this is inevitable.

“There is nothing we can do” should be the official slogan of this city. Beyond the daily indifference people show one another, the most disappointing aspect of living here is the flaccid response of all authority when confronted with problems.
This is not about “safety”. That would be a different issue. It's inertia. When solutions clearly exist, the answer is still “there is nothing we can do”.
Yet there are many things that could be done. Instead, here comes the apathetic, collective shrug, the resignation to the idea that “things are the way they are and always will be”. Responsibility dissolves into indifference. How convenient.
At what point will “nothing can be done” declare itself as a choice, losing its mask of limitation?
How many small failures accumulate before apathy becomes the defining culture of a place?
What exactly remains of the idea of a city, then?
from brendan halpin
Went to see the Baz Luhrmann-directed Elvis doc last night. It starts with a recap of Elvis’ career up to that point, notably omitting the ‘68 Comeback Special, presumably because that’s better than any of the Vegas footage that follows. Then we see some rehearsals, and then we get to the live shows.
The movie is GORGEOUS. Just an absolute super-saturated feast for the eyes. Luhrmann and Elvis seem to share views about subtlety, which is to say I’m not sure either was/is familiar with the concept, so subject and filmmaker are a great match. And it’s a bold move on Luhrmann’s part to try to redeem the most widely ridiculed and derided stage of Elvis’s career. And, for the most part, he succeeds.
We see the band being loose and having fun in rehearsals, and the joy Elvis got from performing is infectious to the band, the live audience, and the movie audience. And God knows we all need a little joy these days.
So far so good, though I have one quibble with the performer and one with the filmmaker.
Elvis loved performing and would often make jokes, often at the expense of the material, to entertain the audience, as when, in EPIC, he changes the “Are You Lonesome Tonight” lyrics to “do you gaze at your forehead and wish you had hair.” This makes him a fun performer to watch, but it means that he, and therefore the audience, are kept at an ironic distance from the songs. Which is a shame because he was a gifted singer who could wring something real even out of bad material. The performance of “Suspicious Minds” in this movie shows what he can do when he’s actually trying, and it’s spectacular.
Still, if you go into this movie as a non-fan trying to understand why Elvis mattered, it probably won’t help you. I encourage you to seek out the sit-down shows from the comeback special—they didn’t give Elvis a guitar strap, so he had to channel all his energy into the songs. It’s stunning.
As for Luhrmann, he’s kind of mistitled this movie. I don’t think there’s a single song that we get to see performed start to finish without interview voiceovers or cuts to rehearsal footage or other footage of Elvis working the crowd or fleeing the crowd or driving around Vegas, etc. So it’s not really Elvis Presley in Concert, because at a concert you get to hear the whole song.
Still, it’s been a rough week, and this movie made me happy for an hour and a half, which, in the year 2026, is about the highest recommendation I can give.
from 下川友
It was the seashore at night. Only the sound of the waves continued. I sat on the sand, turning the same thought over and over.
“When I was a kid I loved manga, I was absorbed in it. Now it doesn’t move me at all.”
Narumi-san, sitting beside me, spoke while watching the waves.
“It’s because your brain becomes able to predict. Adults know the story patterns too well.”
Narumi-san is my senpai from the biophysics lab, and now works there as an assistant.
I push back.
“There are plenty of unpredictable things in the world. But most unpredictable things simply don’t interest us. Unpredictability isn’t what makes something fun.”
Narumi-san was silent for a moment, then said,
“Then it’s because the dopamine response weakens. Because new experiences become rarer.”
“There are plenty of new experiences in the world, too. But even among things I’ve never done, most don’t interest me. New experiences don’t necessarily lead to being moved.”
“Then is it because it gets harder to identify with a protagonist off adventuring around the world?”
“If I could adventure around the world right now, I’d want to. But I don’t want to read manga. It has nothing to do with whether I can identify.”
“Because the way you use your imagination changes?”
“I don’t think my imagination has declined. But I don’t find it interesting.”
“Hmm.”
Then my senpai said,
“When you were a child, manga was your own problem. When you become an adult, it becomes someone else’s problem. How about that?”
“What do you mean?”
“Human interest is determined by novelty and by relevance to the self. As a child, manga was tightly bound to your own life. Wanting to be strong, to be recognised, to have friends. Those were real desires. But as an adult your desires change. Stability, money, status, family, time. The themes of shonen manga are no longer your problem.”
I think for a moment.
“There’s something to that. But manga had no direct connection to my childhood self either, so why was I interested as a child?”
“A child still has little social experience. Friendships, winning and losing, comrades, rules, courage. They’re still in the middle of understanding those things. Manga compresses them into an extremely legible form. Real friendships are complicated and ambiguous, but friendship in manga is clear. Real effort often goes unrewarded, but effort in manga is made visible as growth.”
“So manga is a model of reality?”
“Right. A child’s brain prefers a simplified world model. And another big factor is its function as a safe social simulation.”
“A simulation?”
“A human child can’t hunt. Can’t fight. Can’t run a society. But one day they’ll need to. So what evolved was a way to learn society safely: pretend play, and stories. Before experiencing dangerous reality, you practise it in your head.”
I watch the waves.
The silence went on for a while. Only the sound of the waves.
I said,
“Don’t children project themselves onto manga because children really do have possibilities? If so, the brain wouldn’t need a special mechanism for it, would it?”
My senpai answered slowly.
“You’ve got the order backwards.”
“Backwards?”
“It’s not that they project because they have possibilities. It’s that because the projection mechanism exists, they can learn their possibilities.”
I look at my senpai’s profile.
“Human children are born extremely unfinished. A deer stands within hours. A cat is independent within weeks. But a human takes more than a decade. The rules of society, relationships, cooperation, betrayal, strategy. They learn by imitating others.”
“So a story’s protagonist is an easy-to-learn behavioural model.”
“Right. Clear goals, strong emotions, actions, consequences. It’s all there. When a child’s brain sees a protagonist, it automatically computes ‘what would I do?’ It’s quite automatic, and active.”
I said,
“So stories show you candidates for your future self.”
“Exactly. Watching protagonists A, B, and C, the brain unconsciously tries out a brave self, a kind self, a cunning self. A simulation of personality patterns.”
“And why does that weaken in adults?”
“Because ability, character, and social position become fixed. The space of possibilities narrows. Seeing a protagonist, you judge: this is not me. The identification circuit weakens.”
I find myself wondering what it was I had wanted to ask in the first place.
“Why do children idolise characters who can use magic or fly? Those are clearly not their future selves, are they?”
My senpai laughs, at what seems to me a trivial point.
“Children feel constraint strongly. Small bodies, little strength, a narrow range of action, adults holding the decision-making power. They constantly experience what they cannot do. Flying, using magic, wielding enormous power: such abilities are symbols of breaking through constraint all at once.”
“Flight breaks the constraint of movement, magic the constraint of causality, super-strength the constraint of the body.”
“Right. Magic is a symbol of freedom. The human brain naturally imagines extensions of ability. Run fast, run faster, then fly. And children don’t yet fully understand ‘impossible’. The boundary between reality and fantasy is still soft.”
I say,
“But isn’t that extension of ability actually part of how humans function? Humans have gradually reshaped their bodies in response to their environment. Isn’t that because humans willed it strongly?”
My senpai paused a moment.
“To give you the conclusion first: there is, so far, no evidence that will itself directly altered genes and drove evolution. The basic understanding of the mechanism is that random changes occur in genes, individuals suited to the environment survive, and their genes spread.”
“So will has nothing to do with it?”
“No, indirectly it does. Will can change the environment. Humans started agriculture, herded livestock, built cities. That changed the environment and influenced the direction of evolution. Lactose tolerance, for example. Ancient humans couldn’t digest milk as adults. But once herding culture arose, survival rates diverged between those who could drink milk and those who couldn’t, and the gene for digesting milk spread.”
Will, behaviour, environment, evolution.
“Right. Culture makes evolution.”
The night sea was dark as far as it reached, but a thin line of moonlight lit the waves.
“Even so,” I say.
My senpai watches the waves without looking my way.
“Even so, I still don’t want to give up on evolution by will.”
The sound of the waves. My senpai was silent for a while, then said,
“Maybe I don’t want you to give up, either.”
The wind picked up a little. I stood on the sand, watching the sea the whole time.
Only the sound of the waves continued. It was a night like the wide sea I once saw in a manga, as a child.
from targetedjaidee
Gratitude List:
Prayer list:
How’s everyone doing? I’m doing pretty good. Feeling immense feelings of gratitude. I have come to the conclusion that I WANT to let go. I WANT to let God. His timing is so perfect; I believe I’ve mentioned this before.
I cannot believe how much grace He has for me. I am so glad.
Short & sweet today, the verse is as follows:
3 But the Lord is faithful, and he will strengthen you and protect you from the evil one. (2 Thessalonians 3:3 NIV11)
Love ya!
Jaide owwt*
He read that pulling on one's earlobes and tapping the tip of one's nose with the palm of the left hand fosters creativity. He reflected on the matter for a few seconds and it struck him as clownish nonsense.
He kept reading and on the very next line learned that it was an ancestral technique that a sage from India had carried to China. He touched his chin and thought it prudent to experiment, in case it was some yoga practice or ritual gesture and therefore carried a deeper meaning than appeared at first glance.
He kept reading and learned that other scholars considered the practice even older, mentioned in a Ptolemaic papyrus that adds standing on a single foot while performing it, preferably the right one, if one has it. Thinking this over, he said to himself: “Here it hits the mark. The right foot is the opposite of the left hand on the lower plane and therefore establishes balance. This teaching is profound.”
That night, before dinner, his eight-year-old daughter saw him doing the practice. She immediately began to imitate him, though after the tap on the nose the girl stuck out her tongue and began to sing with such sweetness that she seemed an angel.
After she ran off to her own affairs, he continued exercising himself in the practice, incorporating the sticking-out of the tongue, which he deemed obligatory. But he did not sing like the girl, and although he made Torrebruno's faces, his voice sounded like Cher's.
“This can't be good,” he told himself. And he locked the book away.
from An Open Letter
Every passing day gets easier, as I recognize more and more that I had a lot of good memories with her, but at the same time she is not the person I would want to spend my life with. I think I have learned both that I need to be a lot more picky in love, and also that if I am not happy being single or happy with my life, I’m so much more susceptible to a bad relationship. To be completely honest, this last relationship happened in large part because I had just moved, I was struggling to make friends, and I was struggling with that loneliness. But I absolutely have that dog in me. I can make friends, I can be an extrovert at will, I can organize events, I can gather people around me to do things, because I’ve put in that effort before and I get to reap the rewards from it. Keep in mind that things are not difficult, they are just unfamiliar.
It’s weird, it’s been 10 days since we broke up, but I feel a sense of peace. I don’t hate her at all, and there are still of course a couple of things that hurt, that I just need to get exposed to, and to give time. Something I’ve had to be conscious of is not framing certain things about her as bad, like the fact that she played Valorant or would do certain things at the gym. I noticed that sometimes I get the urge to pull away from them, and I tell myself, “Oh, in the future I wouldn’t want a partner like that.” But those things aren’t the issue; it’s more things like the lack of accountability, or the feeling of having to drag someone into adulthood. Those are valid things, but the rest aren’t that deep. Either way I’m very excited because tomorrow I’m going to make a complete song with S, and I’m excited to just spend time with him and make something stupid.