Want to join in? Respond to our weekly writing prompts, open to everyone.
Anonymous
There's a specific kind of panic you feel when a spreadsheet that runs your entire production freezes. Not crashes — freezes. You watch the cursor spin, knowing that somewhere in those 47 tabs and 15MB of formulas, your morning just died.

We make paint. Artist paint, in a small workshop in Kraków. Five people, 200+ products, and for years, one massive Excel file that held everything together — recipes, inventory, costs, orders. It worked. Until it didn't.

The breaking point came on a Tuesday. A customer needed to know which batch of Cadmium Yellow went into their order three months ago. Traceability — something any real manufacturer should have. We didn't. We had crossed fingers and a VLOOKUP that sometimes returned #REF.

That night I started building something. Not because I wanted to become a software developer — I wanted to stop being afraid of Tuesdays.

Eighteen months later, that “something” became a real product. We use it every day now. Other small makers started asking about it. Eventually we opened it up at krafte.app.

I still don't think of myself as a tech founder. I'm a guy who makes paint and got tired of spreadsheets.
from
💚
Our Father Who art in heaven Hallowed be Thy name Thy Kingdom come Thy will be done on Earth as it is in heaven Give us this day our daily Bread And forgive us our trespasses As we forgive those who trespass against us And lead us not into temptation But deliver us from evil
Amen
Jesus is Lord! Come Lord Jesus!
Come Lord Jesus! Christ is Lord!
from
💚
🌹
And to this day unpare
Speaking high to thus about
The statement of the wind in truth
Nary was wood in favour
To seek the fall become-
And it did hay
A passion for the year
Summering in constant
Making death a place apart
To hear the siren song
A temperate mouth and be;
To get along,
Nary is a scar
And custom swim
To minds bend and this
A favourite fact
That all who poe are witness
In filing this for just petition
A parcel leans ahend
This severance day
A year of nine and six
And flaming shoe-
Passions of sweet and size ten
The simple seed to Rome
And thus begin
That a rose is beautiful
And grower be.
from 下川友
I had died, and was waiting one step short of heaven. A celestial being told me, "Wait here for a moment," and the room I was shown to smelled dusty, as if it hadn't been cleaned in years. Apparently it really was just a place for temporary waiting.
Only the view outside the window was strange: despite the quiet air, everything was stained red. "Are there fires in heaven too?" I wondered, but it was just a kitchen, where middle-aged women were fearlessly cooking Chinese food over open flames.
While I was thinking, "When will I get to eat that fried rice?", someone called, "This way," and I passed through a waterfall.
A huge disc floated in the sky, tilted at an angle. Despite its size, I could sense it was being held up by a quiet energy.
Arriving at my new home in heaven, I changed a light bulb. Even though I was indoors, a pale pink wind was blowing.
When I finished changing the bulb, I found myself, at some point, aboard a houseboat. Apparently the act of replacing a light bulb serves to connect one place to another.
The sky above was night, while the scenery at my feet was early morning. I couldn't quite make out the expression of the boy running through the wheat field.
I watched a tornado sweep up the wheat and carry it off to a small island; all that remained there was a heavy piano, which an old woman was quietly playing.
A celestial being explained the rules and precautions to me, but even after death I still wasn't one for listening, and I cut in with the question, "Can I just stay home the whole time?" They left me with a reply I still don't understand, "As long as you keep moving forward, you'll surely be fine outside too," and walked away. Because there were so few walls, I could still see their figure, tiny, even once they had gone far off.
Then a purple-haired woman who looked like an idol from the 80s appeared and pulled at my arm, saying, "There's new linen." For a moment I heard it as rinen, "guiding principles" (the words sound alike in Japanese), but the room she led me to was piled high with bed sheets and towels.
"So the linen room is new," I thought, though I never knew the old one. When I asked, "Should I take things from here when I need them?", I was told, "These are really just stacked here; nobody takes any. They exist only to add a little 'weight' to the heavenly realm."
Then a train arrived, accompanied by four-on-the-floor techno. As I was thinking, "So heaven has four-on-the-floor too," the driver called out, "Get on, you can still go back." The moment I thought, "So I can still go back," a voice came from the direction opposite the train: "Your hot water is ready."
I wanted the hot water, so instead of boarding the train, I chose the hot water. Thinking that not boarding had sealed my "true death," I said goodbye to the station attendant, only to be told, "No, we'll wait until you've finished your hot water," and I realised I was still closer to the side of the living.
Tomorrow I will surely wake up in my futon and go to a coffee shop in Ogikubo.
Non-Christians seem to think that the Incarnation implies some particular merit or excellence in humanity. But of course it implies just the reverse: a particular demerit and depravity. No creature that deserved Redemption would need to be redeemed. They that are whole need not the physician. Christ died for men precisely because men are not worth dying for; to make them worth it.
— C.S. Lewis, The World's Last Night, chapter 6
#culture #quotes #theology
from An Open Letter
E helped me move a ton more stuff and I’m stressed but things are slowly settling down.
from
The happy place
Let’s visit the memory banks, and see if there’s something interesting stored within.
Let’s see…
Once in high school I'd gotten an award for basically being a warm person, and this other guy said to me I hadn't earned it, but I just told him he was jealous, and that shut him up, to my (then) surprise.
I guess we both were around thirteen years of age.
It was the ninth graders who had some sort of show in the assembly hall where they handed such prizes out.
They weren’t all benign.
A tradition I think.
Now I have seen this same person as an adult, working part-time at the gas station.
When I saw him, I felt nothing.
from
Talk to Fa
It wasn’t the right time until now. It’s crazy how such a simple act had to wait. I wanted to say just one phrase, but I hesitated for a long time. I couldn’t say it because it wasn’t in me. Many have said it to me. Part of me wondered if they really knew what it meant. If they actually had it in them. If they felt it. If they were it. I only want to say what I mean and mean what I say. I’ve said things I now regret. We can’t take back what we’ve said. Words are powerful like that.

I took off the month of January from this blog, but February is here and I am back at it! The storytelling of Samia has been on my mind recently, so for this month I have decided to recommend the album of hers which has stuck with me the most: Honey.
Honey is an album written mostly in the second person and in past tense. It has the feeling of an extended reminiscence, with equal parts horror, longing, and melancholia. « How much better can anything get than sitting on your porch remembering it? », Samia asks on “To Me It Was”. Like most questions posed by the album, this one goes unanswered.
From the very first track, “Kill Her Freak Out”, Samia makes clear that she is not an entirely rational narrator, and she makes no claims to moral authority. What you get, again and again, is nothing more or less than genuine emotion filtered through dozens of tiny scenes. You are left to grapple with the implications of her verses on your own. Most of the tracks feature a dangerous undercurrent of irony: You are made to question both her emotional response and your own standing to make such judgments. When she sings, for example, « To me, it was a good time », does this affirmation salvage the situation? Or is her acceptance part of the problem being portrayed?
Nowhere is this ambiguity more cutting than in “Breathing Song”, a song about sexual assault that the artist has described as “probably the least enjoyable song of all time”, an assessment that is itself simultaneously very true and very false. But it is the subtle, pernicious way the ambiguity seeps into songs like “Honey” that gives the album its true power.
Favourite track: “To Me It Was” is potent but listenable, and optimistic even as it evokes melancholy.
#AlbumOfTheWeek
from
SmarterArticles

In December 2025, something remarkable happened in the fractious world of artificial intelligence. Anthropic, OpenAI, Google, Microsoft, and a constellation of other technology giants announced they were joining forces under the Linux Foundation to create the Agentic AI Foundation. The initiative would consolidate three competing protocols into a neutral consortium: Anthropic's Model Context Protocol, Block's Goose agent framework, and OpenAI's AGENTS.md convention. After years of proprietary warfare, the industry appeared to be converging on shared infrastructure for the age of autonomous software agents.
The timing could not have been more significant. According to the Linux Foundation announcement, MCP server downloads had grown from roughly 100,000 in November 2024 to over 8 million by April 2025. The ecosystem now boasts over 5,800 MCP servers and 300 MCP clients, with major deployments at Block, Bloomberg, Amazon, and hundreds of Fortune 500 companies. RedMonk analysts described MCP's adoption curve as reminiscent of Docker's rapid market saturation, the fastest standard uptake the firm had ever observed.
Yet beneath this apparent unity lies a troubling question that few in the industry seem willing to confront directly. What happens when you standardise the plumbing before you fully understand what will flow through it? What if the orchestration patterns being cemented into protocol specifications today prove fundamentally misaligned with the reasoning capabilities that will emerge tomorrow?
The history of technology is littered with standards that seemed essential at the time but later constrained innovation in ways their creators never anticipated. The OSI networking model, Ada programming language, and countless other well-intentioned standardisation efforts demonstrate how premature consensus can lock entire ecosystems into architectural choices that later prove suboptimal. As one researcher noted in a University of Michigan analysis, standardisation increases technological efficiency but can also prolong existing technologies to an excessive degree by inhibiting investments in novel developments.
The stakes in the agentic AI standardisation race are considerably higher than previous technology transitions. We are not merely deciding how software components communicate. We are potentially determining the architectural assumptions that will govern how artificial intelligence decomposes problems, executes autonomous tasks, and integrates with human workflows for decades to come.
To understand why the industry is rushing toward standardisation, one must first appreciate the economic pressures that have made fragmented agentic infrastructure increasingly untenable. The current landscape resembles the early days of mobile computing, when every manufacturer implemented its own charging connector and data protocol. Developers building agentic applications face a bewildering array of frameworks, each with its own conventions for tool integration, memory management, and inter-agent communication.
The numbers tell a compelling story. Gartner reported a staggering 1,445% surge in multi-agent system inquiries from the first quarter of 2024 to the second quarter of 2025. Industry analysts project the agentic AI market will surge from 7.8 billion dollars today to over 52 billion dollars by 2030. Gartner further predicts that 40% of enterprise applications will embed AI agents by the end of 2026, up from less than 5% in 2025.
This explosive growth has created intense pressure for interoperability. When Google announced its Agent2Agent protocol in April 2025, it launched with support from more than 50 technology partners including Atlassian, Box, Cohere, Intuit, Langchain, MongoDB, PayPal, Salesforce, SAP, ServiceNow, and Workday. The protocol was designed to enable agents built by different vendors to discover each other, negotiate capabilities, and coordinate actions across enterprise environments.
The competitive dynamics are straightforward. If the Agentic AI Foundation's standards become dominant, companies that previously held APIs hostage will be pressured to interoperate. Google and Microsoft could find it increasingly necessary to support MCP and AGENTS.md generically, lest customers demand cross-platform agents. An open ecosystem effectively buys customers choice, handing a competitive advantage to the vendors that adhere to the shared standards.
Yet this race toward consensus obscures a fundamental tension. The Model Context Protocol was designed primarily to solve the problem of connecting AI systems to external tools and data sources. As Anthropic's original announcement explained, even the most sophisticated models are constrained by their isolation from data, trapped behind information silos and legacy systems. MCP provides a universal interface for reading files, executing functions, and handling contextual prompts. Think of it as USB-C for AI applications.
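To make the "universal interface" idea concrete, here is a minimal sketch of what MCP-style traffic looks like on the wire. MCP messages are framed as JSON-RPC 2.0; the method names `tools/list` and `tools/call` follow the published convention, but the tool (`get_inventory`) and its argument are invented for illustration and do not correspond to any real server.

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope, the framing MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# A client first discovers which tools a server exposes...
list_req = make_request(1, "tools/list")

# ...then invokes one by name with structured arguments.
call_req = make_request(2, "tools/call", {
    "name": "get_inventory",               # hypothetical tool
    "arguments": {"sku": "CAD-YELLOW-37"}  # hypothetical argument
})

print(json.dumps(call_req))
```

The point of the standard is that this envelope looks identical whether the server wraps a database, a filesystem, or a SaaS API; only the `name` and `arguments` vary.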
But USB-C was standardised after decades of experience with peripheral connectivity. The fundamental patterns for how humans interact with external devices were well understood. The same cannot be said for agentic AI. The field is evolving so rapidly that the orchestration patterns appropriate for today's language models may prove entirely inadequate for the reasoning systems emerging over the next several years.
The reasoning model revolution of 2024 and 2025 has fundamentally altered how software engineering tasks can be decomposed and executed. OpenAI's o3, Google's Gemini 3 with Deep Think mode, and DeepSeek's R1 represent a qualitative shift in capability that extends far beyond incremental improvements in benchmark scores.
The pace of advancement has been staggering. In November 2025, Google introduced Gemini 3, positioning it as its most capable system to date, deployed from day one across Search, the Gemini app, AI Studio, Vertex AI, and the Gemini CLI. Gemini 3 Pro scores 1501 Elo on LMArena, achieving top leaderboard position, alongside 91.9% on GPQA Diamond and 76.2% on SWE-bench Verified for real-world software engineering tasks. The Deep Think mode pushes scientific reasoning benchmarks into the low to mid nineties, placing Gemini 3 at the front of late 2025 capabilities. By December 2025, Google was processing over one trillion tokens per day through its API.
Consider the broader transformation in software development. OpenAI reports that GPT-5 scores 74.9% on SWE-bench Verified compared to 69.1% for o3. On Aider polyglot, an evaluation of code editing, GPT-5 achieves 88%, representing a one-third reduction in error rate compared to o3. DeepSeek's R1 demonstrated that reasoning abilities can be incentivised through pure reinforcement learning, obviating the need for human-labelled reasoning trajectories. The company's research shows that such training facilitates the emergent development of advanced reasoning patterns including self-verification, reflection, and dynamic strategy adaptation. DeepSeek is now preparing to launch a fully autonomous AI agent by late 2025, signalling a shift from chatbots to practical, real-world agentic AI.
These capabilities demand fundamentally different decomposition strategies than the tool-calling patterns embedded in current protocols. A reasoning model that can plan multi-step tasks, execute on them, and continue to reason about results to update its plans represents a different computational paradigm than a model that simply calls predefined functions in response to user prompts.
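The difference between the two paradigms can be sketched in a few lines. Both "models" below are plain Python stubs (no real LLM is involved), and the tool names are invented for illustration: the first agent maps one prompt to one predefined call, while the second maintains a plan it revises after each observation.

```python
def tool_calling_agent(prompt, tools):
    """Old pattern: map one prompt to one predefined function call."""
    name, args = "lookup", {"q": prompt}   # stand-in for the model's choice
    return tools[name](**args)

def reasoning_agent(goal, tools, max_steps=5):
    """New pattern: keep a plan; act, observe, and revise it in a loop."""
    plan = [("lookup", {"q": goal})]
    observations = []
    while plan and len(observations) < max_steps:
        name, args = plan.pop(0)
        result = tools[name](**args)
        observations.append((name, result))
        # a reasoning model would re-plan here from the observation; we
        # fake that by scheduling a verification step after each lookup
        if name == "lookup":
            plan.append(("verify", {"claim": result}))
    return observations

TOOLS = {
    "lookup": lambda q: f"facts about {q}",
    "verify": lambda claim: f"checked: {claim}",
}

print(tool_calling_agent("batch 42", TOOLS))
print(reasoning_agent("batch 42", TOOLS))
```

A protocol designed around the first shape (one request, one tool, one response) has no natural place for the mutable plan and accumulated observations that the second shape depends on, which is the architectural mismatch at issue.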
The 2025 DORA Report captures this transformation in stark terms. AI adoption is near-universal, with 90% of survey respondents reporting they use AI at work. More than 80% believe it has increased their productivity. Yet AI adoption continues to have a negative relationship with software delivery stability. The researchers estimate that between two people who share the same traits, environment, and processes, the person with higher AI adoption will report higher levels of individual effectiveness but also higher levels of software delivery instability.
This productivity-stability paradox suggests that current development practices are struggling to accommodate the new capabilities. The DORA team found that AI coding assistants dramatically boost individual output, with 21% more tasks completed and 98% more pull requests merged, but organisational delivery metrics remain flat. Speed without stability, as the researchers concluded, is accelerated chaos.
The danger of premature standardisation lies not in the protocols themselves but in the architectural assumptions they embed. When developers build applications around specific orchestration patterns, those patterns become load-bearing infrastructure that cannot easily be replaced.
Microsoft's October 2025 decision to merge AutoGen with Semantic Kernel into a unified Microsoft Agent Framework illustrates both the problem and the attempted solution. The company recognised that framework fragmentation was creating confusion among developers, with multiple competing options each requiring different approaches to agent construction. General availability is set for the first quarter of 2026, with production service level agreements, multi-language support, and deep Azure integration.
Yet this consolidation also demonstrates how quickly architectural choices become entrenched. As one analysis noted, current agent frameworks are fragmented and lack enterprise features like observability, compliance, and durability. The push toward standardisation aims to address these gaps, but in doing so it may cement assumptions about how agents should be structured that prove limiting when new capabilities emerge.
The historical parallel to the OSI versus Internet protocols debate is instructive. Several central actors within OSI and Internet standardisation suggested that OSI's failure stemmed from being installed-base-hostile. The OSI protocols were not closely enough related to the already installed base of communication systems. The installed base is irreversible in the sense that radical, abrupt change of the kind implicitly assumed by OSI developers is highly unlikely.
The same irreversibility threatens agentic AI. Once thousands of enterprise applications embed MCP clients and servers, once development teams organise their workflows around specific orchestration patterns, the switching costs become prohibitive. Even if superior approaches emerge, the installed base may prevent their adoption.
Four major protocols have already emerged to handle agent communication: Model Context Protocol, Agent Communication Protocol, Agent-to-Agent Protocol, and Agent Network Protocol. Google's A2A Protocol alone has backing from over 50 companies including Microsoft and Salesforce. Yet as of September 2025, A2A development has slowed significantly, and most of the AI agent ecosystem has consolidated around MCP. Google Cloud still supports A2A for some enterprise customers, but the company has started adding MCP compatibility to its AI services. This represents a tacit acknowledgment that the developer community has chosen.
The technical standardisation debate unfolds against the backdrop of a more immediate crisis in the software development workforce. The rapid adoption of AI coding assistants has fundamentally disrupted the traditional career ladder for software engineers, with consequences that may prove more damaging than any technical limitation.
According to data from the U.S. Bureau of Labor Statistics, overall programmer employment fell a dramatic 27.5% between 2023 and 2025. A Stanford Digital Economy Study found that by July 2025, employment for software developers aged 22-25 had declined nearly 20% from its peak in late 2022. Across major U.S. technology companies, graduate hiring has dropped more than 50% compared to pre-2020 levels. In the UK, junior developer openings are down by nearly one-third since 2022.
The economics driving this shift are brutally simple. As one senior software engineer quoted by CIO observed, companies are asking why they should hire a junior developer for 90,000 dollars when GitHub Copilot costs 10 dollars. Many of the tasks once assigned to junior developers, including generating boilerplate code, writing unit tests, and maintaining APIs, are now reliably managed by AI assistants.
Industry analyst Vernon Keenan describes a quiet erosion of entry-level positions that will lead to a decline in foundational roles, a loss of mentorship opportunities, and barriers to skill development. Anthropic CEO Dario Amodei has warned that entry-level jobs are squarely in the crosshairs of automation. Salesforce CEO Marc Benioff announced the company would stop hiring new software engineers in 2025, citing AI-driven productivity gains.
The 2025 Stack Overflow Developer Survey captures the resulting tension. While 84% of developers now use or plan to use AI tools, trust has declined sharply. Only 33% of developers trust the accuracy of AI tools, while 46% actively distrust it. A mere 3% report highly trusting the output. The biggest frustration, cited by 66% of developers, is dealing with AI solutions that are almost right but not quite.
This trust deficit reflects a deeper problem. Experienced developers understand the limitations of AI-generated code but have the expertise to verify and correct it. Junior developers lack this foundation. There is sentiment that AI has made junior developers less competent, with some losing foundational skills that make for successful entry-level employees. Without proper mentorship, junior developers risk over-relying on AI.
The long-term implications are stark. The biggest challenge will be training the next generation of software architects. With fewer junior developer jobs, there will not be a natural apprenticeship to more senior roles. We risk creating a generation of developers who can prompt AI systems but cannot understand or debug the code those systems produce.
As reasoning models assume greater responsibility for code generation and system design, the locus of architectural decision-making is shifting in ways that current organisational structures are poorly equipped to handle. Prompt engineering is evolving from a novelty skill into a core architectural discipline.
The way we communicate with AI has shifted from simple trial-and-error prompts to something much more strategic, what researchers describe as prompt design as a discipline. If 2024 was about understanding the grammar of prompts, 2025 is about learning to design blueprints. Just as software architects do not just write code but design systems, prompt architects do not just write clever sentences. They shape conversations into repeatable frameworks that unlock intelligence, creativity, and precision.
The adoption statistics reflect this shift. According to the 2025 AI-Enablement Benchmark Report, the design and architecture phase of the software development lifecycle has an AI adoption rate of 52%. Teams using AI tools for design and architecture have seen a 28% increase in design iteration speed.
Yet this concentration of architectural power in prompt design creates new risks. Context engineering, as one CIO analysis describes it, is an architectural shift in how AI systems are built. Early generative AI was stateless, handling isolated interactions where prompt engineering was sufficient. Autonomous agents are fundamentally different. They persist across multiple interactions, make sequential decisions, and operate with varying levels of human oversight.
This shift demands collaboration between data engineering, enterprise architecture, security, and those who understand processes and strategy. A strong data foundation, not just prompt design, determines how well an agent performs. Agents need engineering, not just prompts.
The danger lies in concentrating too much decision-making authority in the hands of those who understand prompt patterns but lack deep domain expertise. Software architecture is not about finding a single correct answer. It is about navigating competing constraints, making tradeoffs, and defending reasoning. AI models can help reason through tradeoffs, generate architectural decision records, or compare tools, but only if prompted by someone who understands the domain deeply enough to ask the right questions.
The governance implications are significant. According to IAPP research, 50% of AI governance professionals are typically assigned to ethics, compliance, privacy, or legal teams. Yet traditional AI governance practices may not suffice with agentic systems. Governing agentic systems requires addressing their autonomy and dynamic behaviour in ways that current organisational structures are not designed to handle.
The proliferation of reasoning models with different capabilities and cost profiles is creating a new form of fragmentation that threatens to balkanise development practices. Different teams within the same organisation may adopt different model families based on their specific requirements, leading to incompatible workflows and siloed expertise.
The ARC Prize Foundation's extensive testing of reasoning systems reached a striking conclusion: there is no clear winner. Different models excel at different tasks, and the optimal choice depends heavily on specific requirements around accuracy, cost, and latency. OpenAI's o3-medium and o3-high offer the highest accuracy while sacrificing cost and time. Google's Gemini 3 Flash, released in December 2025, delivers frontier-class performance at less than a quarter of the cost of Gemini 3 Pro, with pricing of 0.50 dollars per million input tokens compared to significantly higher rates for comparable models. DeepSeek offers an aggressive pricing structure with input costs as low as 0.07 dollars per million tokens.
For enterprises focused on return on investment, these tradeoffs matter enormously. The 2025 State of AI report notes that trade-offs remain, with long contexts raising latency and cost. Because different providers trust or cherry-pick different benchmarks, it has become more difficult to evaluate agents' performance. Choosing the right agent for a particular task remains a challenge.
This complexity is driving teams toward specialisation around particular model families. Some organisations standardise on OpenAI's ecosystem for its integration with popular development tools. Others prefer Google's offerings for their multimodal capabilities and long context windows of up to 1,048,576 tokens. Still others adopt DeepSeek's open models for cost control or air-gapped deployments.
The result is a fragmentation of development practices that cuts across traditional organisational boundaries. A team building customer-facing agents may use entirely different tools and patterns than a team building internal automation. Knowledge transfer becomes difficult. Best practices diverge. The organisational learning that should flow from widespread AI adoption becomes trapped in silos.
The 2025 DORA Report identifies platform engineering as a crucial foundation for unlocking AI value, with 90% of organisations having adopted at least one platform. There is a direct correlation between high-quality internal platforms and an organisation's ability to unlock the value of AI. Yet building such platforms requires making architectural choices that may lock organisations into specific model families and orchestration patterns.
The rapid adoption of AI coding assistants has created what may be the fastest accumulation of technical debt in the history of software development. Code that works today may prove impossible to maintain tomorrow, creating hidden liabilities that will compound over time.
Forrester predicts that by 2025, more than 50% of technology decision-makers will face moderate to severe technical debt, with that number expected to hit 75% by 2026. Technical debt costs over 2.41 trillion dollars annually in the United States alone. The State of Software Delivery 2025 report by Harness found that the majority of developers spend more time debugging AI-generated code and more time resolving security vulnerabilities than before AI adoption.
The mechanisms driving this debt accumulation are distinctive. According to one analysis, there are three main vectors that generate AI technical debt: model versioning chaos, code generation bloat, and organisation fragmentation. These vectors, coupled with the speed of AI code generation, interact to cause exponential growth.
Code churn, defined as code that is added and then quickly modified or deleted, is projected to hit nearly 7% by 2025. This represents a red flag for instability and rework. As API evangelist Kin Lane observed, he has not seen so much technical debt being created in such a short period during his 35-year career in technology.
The security implications are equally concerning. A report from Ox Security titled Army of Juniors: The AI Code Security Crisis found that AI-generated code is highly functional but systematically lacking in architectural judgment. The Google 2024 DORA report found a trade-off between gains and losses with AI, where a 25% increase in AI usage quickens code reviews and benefits documentation but results in a 7.2% decrease in delivery stability.
The widening gap between organisations with clean codebases and those burdened by legacy systems creates additional stratification. Generative AI dramatically widens the gap in velocity between low-debt coding and high-debt coding. Companies with relatively young, high-quality codebases benefit the most from generative AI tools, while companies with gnarly, legacy codebases struggle to adopt them. The penalty for having a high-debt codebase is now larger than ever.
Navigating the transition to reasoning-capable autonomous systems requires organisational and research structures that most institutions currently lack. The rapid pace of change demands new approaches to technology assessment, workforce development, and institutional coordination.
The World Economic Forum estimates that 40% of today's workers will need major skill updates by 2030, and in information technology that number is likely even higher. Yet the traditional mechanisms for workforce development are poorly suited to a technology that evolves faster than educational curricula can adapt.
Several research priorities emerge from this analysis. First, longitudinal studies tracking the career trajectories of software developers across the AI transition would provide crucial data for workforce planning. The Stanford Digital Economy Study demonstrates the value of such research, but more granular analysis is needed to understand which skills remain valuable, which become obsolete, and how career paths are being restructured.
Second, technical research into the interaction between standardisation and innovation in agentic systems could inform policy decisions about when and how to pursue consensus. The historical literature on standards competition provides useful frameworks, but the unique characteristics of AI systems, including their rapid capability growth and opaque decision-making, may require new analytical approaches.
Third, organisational research examining how different governance structures affect AI adoption outcomes could help enterprises design more effective oversight mechanisms. The DORA team's finding that AI amplifies existing organisational capabilities, making strong teams stronger and struggling teams worse, suggests that the organisational context matters as much as the technology itself.
Fourth, security research focused specifically on the interaction between AI code generation and vulnerability introduction could help establish appropriate safeguards. The current pattern of generating functional but architecturally flawed code suggests fundamental limitations in how models understand system-level concerns.
Finally, educational research into how programming pedagogy should adapt to AI assistance could prevent the worst outcomes of skill atrophy. If junior developers are to learn effectively in an environment where AI handles routine tasks, new teaching approaches will be needed that focus on the higher-order skills that remain uniquely human.
The confluence of standardisation pressures, reasoning model capabilities, workforce disruption, and technical debt accumulation creates a landscape that demands new approaches to software development practice. Organisations that thrive will be those that build resilience into their development processes rather than optimising purely for speed.
Several principles emerge from this analysis. First, maintain architectural optionality. Avoid deep dependencies on specific orchestration patterns that may prove limiting as capabilities evolve. Design systems with clear abstraction boundaries that allow components to be replaced as better approaches emerge.
Second, invest in human capability alongside AI tooling. The organisations that will navigate this transition successfully are those that continue developing deep technical expertise in their workforce, not those that assume AI will substitute for human understanding.
Third, measure what matters. The DORA framework's addition of rework rate as a fifth core metric reflects the recognition that traditional velocity measures miss crucial dimensions of software quality. Organisations should develop measurement systems that capture the long-term health of their codebases and development practices.
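DORA does not prescribe a single formula for rework rate, so the sketch below uses one plausible operationalisation, the share of changed lines that modify code written within a recent window, over an invented change-log format:

```python
from datetime import date


def rework_rate(changes, window_days=21):
    """Share of changed lines that rework code first written within
    `window_days` -- one plausible reading of the metric, not DORA's
    official definition."""
    reworked = 0
    total = 0
    for change in changes:
        total += change["lines_changed"]
        age = (change["changed_on"] - change["originally_written_on"]).days
        if 0 < age <= window_days:
            reworked += change["lines_changed"]
    return reworked / total if total else 0.0


# Toy change log: each record links a modification to the date the
# affected code was originally written.
log = [
    {"lines_changed": 40, "changed_on": date(2025, 6, 20),
     "originally_written_on": date(2025, 6, 10)},   # churned after 10 days
    {"lines_changed": 60, "changed_on": date(2025, 6, 20),
     "originally_written_on": date(2024, 1, 5)},    # old code, not rework
]
print(rework_rate(log))  # prints 0.4
```

Whatever the exact definition, the point is that such a measure tracks how much recent work is being undone, a dimension that raw velocity metrics never capture.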
Fourth, build bridges across model families. Rather than standardising on a single AI ecosystem, develop the institutional capability to work effectively across multiple model families. This requires investment in training, tooling, and organisational learning that most enterprises have not yet made.
Fifth, participate in standards development. The architectural choices being made in protocol specifications today will shape the development landscape for years to come. Organisations with strong opinions about how agentic systems should work have an opportunity to influence those specifications before they become locked in.
The transition to reasoning-capable autonomous systems represents both an enormous opportunity and a significant risk. The opportunity lies in the productivity gains that well-deployed AI can provide. The risk lies in the second-order effects that poorly managed deployment can create. The difference between these outcomes will be determined not by the capabilities of the AI systems themselves but by the organisational wisdom with which they are deployed.
The agentic AI standardisation race presents a familiar tension in new form. The industry needs common infrastructure to enable interoperability and reduce fragmentation. Yet premature consensus risks locking in architectural assumptions that may prove fundamentally limiting.
The Model Context Protocol's rapid adoption demonstrates both the hunger for standardisation and the danger of premature lock-in. MCP achieved in one year what many standards take a decade to accomplish: genuine industry-wide adoption and governance transition to a neutral foundation. Yet the protocol was designed for a particular model of AI capability, one where agents primarily call tools and retrieve context. The reasoning models now emerging may demand entirely different decomposition strategies.
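To see what that tool-calling model looks like on the wire: MCP messages are JSON-RPC 2.0, and a client invokes a server-side tool with a `tools/call` request. A minimal sketch follows (the tool name and arguments are hypothetical, and this is an illustration of the message shape rather than a working client):

```python
import json


def mcp_tool_call(request_id, tool_name, arguments):
    """Build an MCP-style tools/call request. Field names follow the
    published JSON-RPC 2.0 framing of MCP, but treat this as an
    illustrative sketch, not a conformant implementation."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })


msg = mcp_tool_call(1, "search_docs", {"query": "rework rate"})
```

The simplicity of this shape is part of why adoption was so fast, and also why it encodes a particular assumption: that the unit of agent work is a discrete, named tool invocation.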
Meta's notable absence from the Agentic AI Foundation hints at alternative futures. Almost every major agentic player, from Google to AWS to Microsoft, has joined, but Meta has not signed on, and published reports indicate it will not join soon. The company is reportedly shifting toward a proprietary strategy centred on a new revenue-generating model. Whether this represents a mistake or a prescient bet on different architectural approaches remains to be seen.
The historical pattern suggests that the standards which endure are those designed with sufficient flexibility to accommodate unforeseen developments. The Internet protocols succeeded where OSI failed in part because they were more tolerant of variation and evolution. The question for agentic AI is whether current standardisation efforts embed similar flexibility or whether they will constrain the systems of tomorrow to the architectural assumptions of today.
For developers, enterprises, and policymakers navigating this landscape, the imperative is to engage critically with standardisation rather than accepting it passively. The architectural choices being made now will shape the capabilities and limitations of agentic systems for years to come. Those who understand both the opportunities and the risks of premature consensus will be better positioned to influence the outcome.
The reasoning revolution is just beginning. The protocols and patterns that emerge from this moment will determine whether artificial intelligence amplifies human capability or merely accelerates the accumulation of technical debt and workforce disruption. The standards race matters, but the wisdom with which we run it matters more.
Linux Foundation (2025). “Linux Foundation Announces the Formation of the Agentic AI Foundation.” https://www.linuxfoundation.org/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation
Anthropic (2024). “Introducing the Model Context Protocol.” https://www.anthropic.com/news/model-context-protocol
Anthropic (2025). “Donating the Model Context Protocol and Establishing the Agentic AI Foundation.” https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation
Pento (2025). “A Year of MCP: From Internal Experiment to Industry Standard.” https://www.pento.ai/blog/a-year-of-mcp-2025-review
University of Michigan (n.d.). “Why Standardization Efforts Fail.” https://quod.lib.umich.edu/cgi/t/text/idx/j/jep/3336451.0014.103/--why-standardization-efforts-fail
InfoQ (n.d.). “Standards are Great, but Standardisation is a Really Bad Idea.” https://www.infoq.com/presentations/downey-standards-great-standardization-bad/
Google DORA (2025). “State of AI-assisted Software Development 2025.” https://dora.dev/research/2025/dora-report/
OpenAI (2025). “Introducing GPT-5 for Developers.” https://openai.com/index/introducing-gpt-5-for-developers/
Google (2025). “Gemini 3: News and Announcements.” https://blog.google/products/gemini/gemini-3-collection/
Google (2025). “Introducing Gemini 3 Flash: Benchmarks, Global Availability.” https://blog.google/products/gemini/gemini-3-flash/
DeepSeek (2025). “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning.” https://arxiv.org/html/2501.12948v1
Google Developers Blog (2025). “Announcing the Agent2Agent Protocol (A2A).” https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/
Stack Overflow (2025). “2025 Stack Overflow Developer Survey.” https://survey.stackoverflow.co/2025/
CIO (2025). “Demand for Junior Developers Softens as AI Takes Over.” https://www.cio.com/article/4062024/demand-for-junior-developers-softens-as-ai-takes-over.html
Stack Overflow Blog (2025). “AI vs Gen Z: How AI Has Changed the Career Pathway for Junior Developers.” https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/
IEEE Spectrum (2025). “AI Shifts Expectations for Entry Level Jobs.” https://spectrum.ieee.org/ai-effect-entry-level-jobs
Understanding AI (2025). “New Evidence Strongly Suggests AI Is Killing Jobs for Young Programmers.” https://www.understandingai.org/p/new-evidence-strongly-suggest-ai
CIO (2025). “Context Engineering: Improving AI by Moving Beyond the Prompt.” https://www.cio.com/article/4080592/context-engineering-improving-ai-by-moving-beyond-the-prompt.html
IAPP (2025). “AI Governance Profession Report 2025.” https://iapp.org/resources/article/ai-governance-profession-report
Machine Learning Mastery (2025). “7 Agentic AI Trends to Watch in 2026.” https://machinelearningmastery.com/7-agentic-ai-trends-to-watch-in-2026/
ARC Prize (2025). “We Tested Every Major AI Reasoning System. There Is No Clear Winner.” https://arcprize.org/blog/which-ai-reasoning-model-is-best
InfoQ (2025). “AI-Generated Code Creates New Wave of Technical Debt, Report Finds.” https://www.infoq.com/news/2025/11/ai-code-technical-debt/
LeadDev (2025). “How AI Generated Code Compounds Technical Debt.” https://leaddev.com/technical-direction/how-ai-generated-code-accelerates-technical-debt
IT Revolution (2025). “AI's Mirror Effect: How the 2025 DORA Report Reveals Your Organization's True Capabilities.” https://itrevolution.com/articles/ais-mirror-effect-how-the-2025-dora-report-reveals-your-organizations-true-capabilities/
RedMonk (2025). “DORA 2025: Measuring Software Delivery After AI.” https://redmonk.com/rstephens/2025/12/18/dora2025/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary: * A quiet day again, and a pretty day that once again let me keep the heavy front door open for several hours so fresh air could gently breeze its way into the house through the screen door. Patiently waiting now for the Spurs / Mavs game to come on the radio. Until then, I'm listening to what they're calling “Easy Listening” or “Adult Standards” music playing over KAHL.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Health Metrics: * bw= 225.53 lbs. * bp= 145/88 (66)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 06:20 – toast and butter * 07:00 – 2 HEB Bakery cookies * 10:30 – fried chicken, baked potato * 12:10 – 1 bean and cheese taco * 14:45 – 1 fresh apple, 1 cookie * 16:35 – a BIG bowl of lugaw
Activities, Chores, etc.: * 05:15 – listen to local news talk radio * 06:15 – bank accounts activity monitored * 06:30 – read, pray, follow news reports from various sources, surf the socials * 10:15 – watching DefCon ZerQ on Rumble, rebroadcast from Tuesday night * 12:30 – listen to the Dan Bongino Show Podcast * 14:30 – listen to local sports talk radio * 15:15 – read, write, pray, follow news reports from various sources, surf the socials
Chess: * 12:50 – moved in all pending CC games
from
barelycompiles
Rerun a command every 10 seconds:
watch -n 10 command
Monitor a directory listing, highlighting changes between refreshes:
watch -d ls -l
Monitor disk usage for physical drives:
watch -n 10 'df -h | grep "^/dev"'
df = disk free, -h = human readable, ^ is the regex anchor for start of line
Monitor top 10 node processes:
watch -n 5 'ps aux | grep node | head -10'
from folgepaula
sometimes I drift off on my couch, and when I wake in the middle of the night, I can look up at the night sky through the two small windows above me while I am caught up inside with my dog sleeping at my feet, and in those still, quiet moments, it feels like the most beautiful thing in the world. my back gets warm and I am aware of the universe I am, as right now I know who I am, although it just cannot be defined and I will certainly forget again. but moments like these will come to me once more as I will just happen to be born to the very middle of the night and I will surrender to knowing I never knew no one quite like me and I still haven’t yet and perhaps I will never know, but being on me, this thing that looks through my eyes to the night sky right now, it just humbles this very random experience of self, this experience of feeling there’s so much love precisely here, precisely everywhere, as I already am, and everyone already is.
/oct25
from Douglas Vandergraph
There is something unsettling about Luke 3, and that discomfort is precisely the point. This is not a chapter that eases you gently into reflection or offers tidy encouragement. It does not begin with warmth or reassurance. It begins with power structures, with names and titles, with emperors and governors and high priests. Luke deliberately grounds us in a world of authority, reputation, and control before he does something unexpected. He tells us that the word of God does not come to any of them. It does not come to Rome. It does not come to the palace. It does not come to the temple leadership. Instead, it comes to a man standing in the wilderness, dressed strangely, living simply, and saying uncomfortable things.
That alone should stop us in our tracks. We are conditioned to look for God’s voice in places of polish and legitimacy. We expect Him to speak through institutions, credentials, and systems that feel stable. Luke quietly dismantles that expectation. He names the rulers of the world to show us exactly where God is not speaking, and then he redirects our attention to where God is. The wilderness is not an accident. It is not a backdrop. It is the message. God chooses the margins when the center becomes too crowded with itself.
John the Baptist does not arrive with flattery. He does not try to attract people by telling them how special they are. He does not soften his message to gain followers. He speaks repentance, not as a religious buzzword, but as a radical reorientation of life. Repentance here is not about guilt for guilt’s sake. It is about alignment. It is about turning away from a life built around self-preservation and control and turning toward a life that is actually ready to receive God. John’s message is not harsh because he is cruel. It is sharp because the moment is urgent.
What Luke 3 confronts us with is the idea that preparation matters. God does not force His way into a heart that refuses to be made ready. The imagery John uses is not random. Valleys filled. Mountains brought low. Crooked paths made straight. Rough places made smooth. This is not poetic fluff. This is interior renovation. John is describing what happens when pride is leveled, despair is lifted, dishonesty is confronted, and resistance is softened. The kingdom of God does not arrive where everything is already perfect. It arrives where people are willing to let go of the lies that have been propping up their identity.
What makes John so disruptive is that he refuses to let people hide behind labels. When religious leaders approach him, he does not congratulate them for their heritage or their status. He does not tell them they are safe because of who they descend from. He dismantles that illusion immediately. Being connected to the right lineage does not exempt anyone from transformation. Faith is not inherited automatically. Obedience is not genetic. Repentance cannot be outsourced to tradition. John forces the issue back onto the individual. What are you actually doing with your life right now?
This is where Luke 3 becomes deeply uncomfortable for modern readers. We live in an age that prizes affirmation over transformation. We want to be told we are fine as we are, even when our lives are producing bitterness, fear, and fragmentation. John refuses to participate in that illusion. He speaks to crowds, not just elites, and he makes it clear that repentance is not theoretical. It shows up in how you treat people, how you use your resources, and how you wield whatever power you have.
When the crowds ask what they should do, John does not tell them to withdraw from society or perform grand religious gestures. He tells them to share. To stop exploiting. To be honest. To be content. Repentance, in Luke 3, looks painfully ordinary. It touches economics, employment, and daily behavior. It is not mystical escapism. It is lived integrity. That alone exposes how often we spiritualize faith to avoid letting it confront our habits.
There is also something deeply humbling about the way John positions himself. Despite his influence, despite the crowds, despite the speculation that he might be the Messiah, he refuses to claim that role. He points away from himself consistently. His entire identity is anchored in preparation, not fulfillment. He knows who he is and who he is not. That clarity is rare, and it is costly. John understands that his role is to decrease so that another may increase. He is not building a personal brand. He is clearing space.
This posture is especially striking in a culture obsessed with visibility and recognition. John’s power comes precisely from his refusal to center himself. He does not need to be the main event. He is content to be the voice that fades once the Word arrives. That kind of humility does not come from insecurity. It comes from confidence rooted in obedience. John knows his assignment, and he does not try to outgrow it for the sake of ego.
Then Luke does something remarkable. He introduces Jesus not with fanfare, but with submission. Jesus comes to be baptized, not because He needs repentance, but because He is willing to step into solidarity with those who do. This is one of the most profound moments in the Gospel. The sinless one enters the waters meant for confession. He does not stand above humanity. He stands with it. The heavens opening is not a reward for performance. It is an affirmation of relationship.
The voice from heaven does not announce Jesus as a conqueror or a political threat. It declares Him as a beloved Son. Before Jesus preaches, heals, or performs a single miracle in Luke’s Gospel, His identity is anchored in love. That order matters more than we often realize. Ministry flows from belonging, not the other way around. Luke 3 quietly dismantles the lie that worth is earned through productivity. Jesus is affirmed before He acts, not because of what He will do, but because of who He is.
This moment also reveals something about how God works in transition seasons. Luke 3 is a chapter of thresholds. John is preparing to fade. Jesus is preparing to emerge. Old expectations are being dismantled. New realities are forming. The Spirit moves in this in-between space, not with chaos, but with clarity. The baptism marks the shift from preparation to presence. The kingdom is no longer being announced as near. It is stepping into history in flesh and breath.
There is a quiet warning embedded here as well. Herod’s response to John reminds us that truth often threatens those invested in control. John does not fall because his message is false. He falls because it is inconvenient. Power rarely silences voices that flatter it. It silences voices that expose it. Luke does not linger on John’s imprisonment here, but he includes it deliberately. The cost of faithfulness is not hidden. Obedience does not guarantee safety. It guarantees alignment.
Luke 3 forces us to ask where we expect God to speak and whether we are willing to listen if He chooses the wilderness again. It challenges our assumptions about authority, spirituality, and readiness. It refuses to let us remain spectators. This chapter is not content to inform. It confronts. It calls. It unsettles. And that is precisely why it matters so much right now.
We live in a world saturated with noise, yet starving for truth. We have more platforms than ever, yet less clarity about what actually prepares the heart for God. Luke 3 does not offer shortcuts. It offers a path, and that path runs straight through repentance, humility, and surrender. It does not bypass discomfort. It uses it. The wilderness is not something to escape. It is often where God clears away what no longer serves life so that something new can begin.
What Luke shows us is that God is not waiting for perfect conditions. He is waiting for open hearts. He is not impressed by titles. He is attentive to obedience. He does not need grand structures to move. He needs people willing to be made ready. Luke 3 is not about a moment long past. It is about a pattern that repeats whenever God is about to do something new. And the question it leaves us with is not whether God is speaking, but whether we are prepared to hear Him.
Luke 3 does not end with fireworks. It ends with a quiet but seismic shift in how we understand authority, identity, and readiness. By the time the chapter closes, the reader has been moved from the noise of empires to the stillness of water, from the shouting of crowds to the affirmation of heaven. That movement is intentional. Luke is training us, slowly and carefully, to recognize where real transformation begins.
One of the easiest mistakes to make when reading Luke 3 is to reduce it to a historical setup chapter. We tell ourselves this is just the preface to the “real” ministry of Jesus. But Luke does not treat it that way, and neither should we. This chapter is not background noise. It is foundational. It establishes the spiritual conditions required for anything that follows to make sense. Without Luke 3, the teachings of Jesus risk being misunderstood as moral instruction rather than kingdom disruption.
John the Baptist is not merely announcing a coming figure. He is announcing a coming reality. His call to repentance is not about personal improvement; it is about readiness for a radically different way of being human. The kingdom Jesus brings will not fit inside hearts that cling to status, resentment, or self-justification. Luke 3 insists that something has to give before something new can take root.
What is striking is how ordinary John’s instructions are. When people ask what repentance looks like, he does not prescribe religious rituals. He talks about generosity, honesty, restraint, and fairness. These are not dramatic gestures. They are daily decisions. Luke is making a point here that should unsettle us. Spiritual readiness is not proven by what we claim to believe, but by how we live when belief costs us something.
This matters deeply in an age where faith is often reduced to identity signaling. Luke 3 does not allow repentance to be abstract. It presses it into the soil of real life. If you have two coats, share. If you collect taxes, stop cheating. If you carry authority, stop abusing it. None of this is mystical. All of it is costly. John is describing a life that no longer revolves around maximizing advantage. He is describing a life oriented toward justice, humility, and trust in God rather than control.
The crowds are not rejected. Soldiers are not told to abandon their posts. Tax collectors are not excluded outright. What is demanded is transformation from the inside out. Luke is showing us that the kingdom does not require withdrawal from the world, but it does require a refusal to participate in its corruption. That distinction is crucial. Too often, faith has been framed as escape. Luke 3 frames it as confrontation.
John’s insistence that lineage does not save is especially relevant. He dismantles the false security of inherited faith with brutal clarity. Being born into the right family, culture, or tradition does not substitute for a life aligned with God. Luke places this warning early because it will echo throughout Jesus’ ministry. The kingdom will consistently surprise those who assume they are insiders and welcome those who never expected an invitation.
There is also something profoundly honest about John’s self-awareness. He knows he is not the Messiah, and he does not resent that fact. He understands that his role is preparatory, not permanent. That kind of clarity is rare because it requires freedom from comparison. John does not measure his worth by how long he holds attention. He measures it by faithfulness to his assignment.
In a culture obsessed with being seen, John’s willingness to fade is radical. He does not cling to relevance. He does not attempt to transition his influence into something bigger for himself. He points forward and steps aside. Luke presents this not as weakness, but as strength. True authority, in the biblical sense, is not about self-preservation. It is about service to something larger than oneself.
Then comes the baptism of Jesus, a moment so familiar that its strangeness can be overlooked. Jesus does not arrive demanding recognition. He arrives submitting to a rite meant for repentance. Luke wants us to feel the weight of that choice. The Son of God enters the same waters as everyone else. He does not exempt Himself from human vulnerability. He steps fully into it.
This is not a performance. It is a revelation of God’s character. The heavens opening is not spectacle for the crowd. It is confirmation of intimacy. The Spirit descending is not a reward for effort, but an affirmation of identity. The voice from heaven does not announce a mission plan. It declares love. Before Jesus heals a single body or speaks a single parable in Luke’s Gospel, He is named as beloved.
That sequence matters more than we often realize. We live in a world that constantly reverses it. We are told we will be worthy once we prove ourselves. Luke 3 declares the opposite. Belonging precedes doing. Identity precedes mission. Love precedes obedience. Jesus does not act to earn the Father’s approval. He acts from it.
Luke places the genealogy immediately after this moment for a reason. Having just affirmed Jesus’ divine sonship, Luke traces His human lineage all the way back to Adam. This is not filler. It is theology. Luke is showing us that Jesus stands fully within human history while simultaneously redefining it. He is not an outsider intervening from a distance. He is humanity restored from the inside.
The inclusion of Adam at the end of the genealogy is deliberate. Luke is connecting Jesus to the whole human story, not just Israel’s. This is a universal claim. What begins in Luke 3 is not a tribal reform movement. It is the inauguration of a renewed humanity. Jesus is not merely correcting religious error. He is healing what has been fractured since the beginning.
John’s imprisonment, briefly mentioned, casts a shadow over the chapter that cannot be ignored. Luke does not romanticize obedience. Faithfulness does not shield John from consequence. This is important because it prevents us from turning Luke 3 into a prosperity narrative. Obedience does not guarantee comfort. It guarantees truth. John’s voice is silenced not because it lacked power, but because it challenged power too directly.
Luke is preparing us here for a Messiah who will not conform to expectations and a kingdom that will not align with political convenience. The cost of truth is real, and Luke does not hide it. What he also does not hide is the worth of that cost. John’s imprisonment does not negate his mission. It confirms it. The wilderness voice has done its work. The way has been prepared.
Luke 3 leaves us standing at the edge of something new. The preparation is complete. The kingdom is about to be spoken, enacted, embodied. But before any of that unfolds, Luke insists we sit with the question this chapter presses into our lives. Are we ready? Not intellectually. Not emotionally. Spiritually. Are we willing to let go of what props up our false sense of security? Are we willing to level the inner terrain that resists God’s movement?
This chapter is uncomfortable because it does not allow spectatorship. It demands participation. The wilderness still speaks, and it still speaks loudly. It calls us away from performance and toward repentance. Away from inherited assumptions and toward lived obedience. Away from self-centered faith and toward transformation that touches every part of life.
Luke 3 is not an introduction meant to be skimmed. It is a threshold meant to be crossed. And crossing it requires humility, honesty, and a willingness to be made ready for a God who refuses to stay safely contained.
Your friend, Douglas Vandergraph
Watch Douglas Vandergraph’s inspiring faith-based videos on YouTube
Support the ministry by buying Douglas a coffee
#Luke3 #FaithAndRepentance #PreparingTheWay #BiblicalReflection #ChristianWriting #GospelOfLuke #SpiritualRenewal
from
Reflections
I heard this phrase recently, in a conversation where one person was trying to get through to another person who was being uncooperative. I think it's a great line, and I'm going to try to remember it for the future.
“I can explain it for you, but I can't understand it for you.”
The problem is, that's pretty curt. I don't think most people would be able to really hear that, and I think we have a responsibility to make sure our words are heard. If we know our words won't be heard, what's the point of speaking at all? Is it to feel better about ourselves? It shouldn't be, in my opinion. We have enough of that already.
For that reason, I might try something kinder first when talking with an ornery person. In the past, I've used the following, and people seem to take it well.
“I'm sorry that's not the answer you want, but that's my answer.”
Substitute “request,” “advice,” or any other word for “answer” as needed.
#Life #Maxims #Quotes
from
Roscoe's Quick Notes
Making it easy on myself, and reducing the stress of dealing with the buffering that comes with trying to listen to Internet streaming radio, tonight I'll listen to my basketball broadcast OTA (over the air) from a local radio station. My trusty old Bose Radio will bring me WOAI's call of tonight's San Antonio Spurs vs Dallas Mavericks NBA game.
And the adventure continues.