Want to join in? Respond to our weekly writing prompts, open to everyone.
from Douglas Vandergraph
There’s a strange beauty in the idea that if you don’t believe in God, you should pray that God believes in you. It sounds almost like a paradox, almost like a philosophical knot tied too tightly to pull apart, yet when you sit with it—really sit with it—you discover that it’s not a knot at all. It’s a doorway. A doorway into the quiet, overlooked truth that long before belief ever rises in us, God’s belief has already risen over us. Long before we whisper His name with sincerity or clarity, He has spoken ours with love and certainty. This entire thought—this reversal of expectation—feels like an invitation to step outside the way we’ve been trained to see faith, doubt, and divine connection, and instead walk into the raw and tender place where God meets people exactly where they are, not where they’re “supposed” to be. Talk to enough people who’ve lived through spiritual droughts, confusion, heartbreaks, and intellectual wrestling matches with the universe itself, and you’ll notice a simple pattern: almost nobody doubts God because they want to. They doubt because of wounds. They doubt because of mismatches between expectation and experience. They doubt because life hit them harder than they ever expected and religion didn’t prepare them for what real pain feels like. They doubt because the image of God they were taught did not survive contact with the world they live in. They doubt not out of rebellion, but out of exhaustion. And exhaustion doesn’t need a lecture—it needs a place to rest. That’s where this seemingly inverted sentence becomes a soft landing spot for the soul: if you don’t believe in God, pray that God believes in you. Because even the skeptic, the wounded, the bewildered, and the distant can ask one thing: “If there’s Someone out there, let them not give up on me.” That fragile, almost trembling desire reveals more about the human heart than any argument ever could.
I’ve always felt that faith isn’t born at the front door of certainty—it’s born in the side-alley moments. The quiet crises. The moments of internal contradiction when a person silently whispers to themselves, “I don’t know anymore.” But uncertainty is not the enemy of faith. Indifference is. And there’s a world of difference between someone who says, “I don’t care,” and someone who says, “I don’t know.” When a person says, “I don’t know,” there’s still a reaching happening beneath the surface. It might be small, barely visible, almost fragile, but it’s there. And I believe God honors the smallest reach. If a whisper is all you have left, Heaven listens like it’s thunder. If the only prayer you can muster is, “If You’re real, find me,” God treats that like a door swinging wide. If the heart says, “If You believe in me, show me,” then the God of all creation bends low enough to meet that heart where it stands. And all of this matters because there are people walking around today feeling like they’re not allowed to be honest with God. As if doubt disqualifies. As if questions insult Him. As if struggle means distance. But the truth is far more compassionate. God’s belief in you is not based on your belief in Him. His belief in you is anchored in His nature, not your performance. He doesn’t need your certainty to be committed to you. He doesn’t need your perfection to walk beside you. He doesn’t need your theological clarity to wrap His arms around your life. If anything, He steps closest when clarity is the hardest to find.
One of the great tragedies of spiritual culture is that people have been made to feel like faith requires flawless conviction. But think of every person in history who’s ever become anything meaningful in their walk with God—they all began in some version of confusion. They all carried questions. They all wrestled with doubts so real and so heavy they could barely lift their own heads. And yet God still moved in them. He still believed in them. He still breathed life into the places that felt hollow. If the greatest stories in Scripture were built on shaky beginnings, then why do we expect modern believers to start their journey perfectly stable? God has always done His best work in people who came to Him imperfect, unsure, unsteady, and halfway broken. Because belief isn’t a ladder—it’s a seed. And seeds don’t start strong. They start hidden. They start quiet. They start in darkness. They start in soil that doesn’t look like anything is happening at all. And yet, under that soil, life begins. Under that soil, roots take hold. Under that soil, growth starts its sacred, unseen work. Belief works the same way. It does not burst from the ground fully formed. It begins unseen. It begins inside. It begins in whispers like, “God, I don’t know You yet… but if You believe in me, help me believe in myself the way You do.”
There’s also this deep tenderness woven into that idea—that God believes in you. Just pause with that. Let it soak. The Creator believing in the created. The Eternal believing in the temporary. The One who has no beginning believing in the one still struggling to begin. He believes in your capacity to rise. He believes in your ability to heal. He believes in the parts of you you’ve written off. He believes in the version of you that you can’t quite see yet. He believes in your future while you’re still stuck in your past. He believes in your potential even if your history tries to shout otherwise. He believes in the arc of redemption written through every life that still has breath in it. God doesn’t just believe in you as you are—He believes in the you that’s becoming. And when you realize that, when you feel it not as a religious slogan but as a truth that reaches down into your bones, everything shifts. Suddenly you don’t walk like you’re abandoned—you walk like someone held. You don’t think like someone unwanted—you think like someone chosen. You don’t live like someone left behind—you live like someone God refuses to give up on.
There is a phenomenon that happens when people get hurt deeply enough: they don’t stop wanting God—they stop trusting the idea of being disappointed again. And this is where belief becomes complicated. So many people aren’t rejecting God Himself—they’re rejecting the pain attached to previous attempts at faith. They’re rejecting the versions of God handed to them by flawed voices. They’re rejecting the interpretations that hurt more than they healed. They’re rejecting the expectations that were too heavy to carry. And in that place, “I don’t believe in God” often means, “I can’t afford to be let down again.” That kind of declaration isn’t coldness—it’s self-protection. So imagine what happens when we offer them a new doorway: “If you don’t believe in God, then ask that He believes in you.” That’s not a challenge. It’s not an argument. It’s not a debate. It’s an open hand. A pathway for the weary. An invitation for those who’ve been bruised by life. A gentle whisper saying, “You don’t have to know everything. You don’t have to decide everything. You don’t have to resolve everything today. Just ask for one thing: that the One who made you hasn’t lost faith in who you can become.”
And the beauty of that ask is that it matches God’s heart perfectly. Because God has always been the God who believes before you do. Look through Scripture, through history, through the testimonies of countless lives changed—not one of them begins with someone who had it all together. They were uncertain, unqualified, unprepared, undone. God didn’t wait for them. He believed in them and then walked them forward. The fisherman who doubted himself. The woman who felt unworthy. The outcast who wondered if life held anything else. The leader who never asked to lead. The wanderer who had no direction. The broken who felt useless. They weren’t chosen because they believed—they grew because He believed. And the same story continues in our time. You don’t need perfect belief to start this journey. You need honesty. You need willingness. You need that slight leaning of the heart that says, “If You believe in me, then maybe I can take one more step.”
Think of how many people live every day feeling unseen. Feeling like their best efforts fall short. Feeling like nobody recognizes what they carry, what they fight through, what they survive. The thought that God believes in them becomes more than theology—it becomes oxygen. It becomes something that keeps them from sinking. It becomes a lifeline when they feel adrift. Because if God believes in you, then there must be something in you worth believing in. Something that hasn’t been ruined by your mistakes. Something unbroken by your past. Something untouched by the disappointments that shaped you. Something sacred. Something intentional. Something God still plans to use. And that realization alone can lift a person out of despair. It can lift them out of self-condemnation. It can lift them out of the belief that they are too far gone to matter.
When you tell someone, “Pray that God believes in you,” you’re telling them something deeply empowering: you’re saying that the relationship between God and the human soul doesn’t begin with your perfection—it begins with His persistence. His pursuit. His unwavering commitment to who you really are beneath the layers. You’re saying that God has already invested Himself in your life long before you ever learned how to look back at Him. You’re saying that faith is not a mountain you climb alone—it’s a journey where God walks toward you even as you stumble toward Him. You’re saying that the pressure to have every answer figured out is replaced with the invitation to simply be honest, open, and willing.
This idea frees people. It frees them from religious performances. It frees them from the fear that doubt separates them from God. It frees them from the lie that God is disappointed by their humanity. And in that freedom, faith grows more authentically than it ever could under pressure. Because faith that grows by force is fragile. Faith that grows by honesty is durable. And faith that grows from the realization that God believes in you before you believe in Him becomes almost unbreakable. It becomes rooted not in your own strength, but in His. Not in your consistency, but in His faithfulness. Not in your understanding, but in His insight into who you truly are.
This world is full of people who carry quiet battles nobody else knows about. Anxiety that keeps them awake at night. Guilt that eats at them in the morning. Fear that follows them like a shadow. Memories they wish they could erase. Pressure that makes them feel like they’re drowning from the inside out. These people often avoid faith conversations because they believe they’re already disqualified. They think God only wants the strong, the certain, the steady. But imagine the healing that begins when they hear: “Even if you don’t believe in God… He hasn’t stopped believing in you.” That statement alone can crack open a wall someone has held up for decades. Because suddenly, faith is no longer a competition. It’s no longer a requirement. It’s an invitation back to themselves. It’s a reminder that they are not alone in the fight to become whole.
And this is where the real transformation begins. When someone takes that first step—not a confident step, not a sophisticated step, not a doctrinally precise step—but a real step. A step like, “God, if You’re there, I need You to believe in me because I don’t know how to believe in myself.” That moment becomes sacred soil. Heaven meets people there. God bends low to that place. It’s the place where the divine and human heart breathe at the same rhythm. It’s where hope begins rebuilding its foundation. It’s where the seed of belief finally gets its chance to open. And once it opens, even slightly, even subtly, everything begins to change.
Because when belief begins to grow in the soil of honesty instead of pressure, it becomes a different kind of belief. It becomes humble. It becomes authentic. It becomes patient with itself. And most importantly, it becomes sustainable. People who try to force themselves into belief often end up exhausted, and exhaustion is not faith—it’s performance. But people who let belief grow from a place of being seen, understood, and believed in by God discover a faith that carries them instead of a faith they must constantly carry. It becomes something alive instead of something heavy. It becomes something they look forward to instead of something they're afraid they will fail at. Because when you know God already believes in you, your fear of disappointing Him begins to dissolve. You stop bracing for judgment and start opening yourself to transformation. You stop hiding from God and start letting Him into the rooms of your soul that you kept closed for years. You stop expecting perfection from yourself and start welcoming progress. This is the beginning of real faith, and it is holy in its simplicity.
There’s also another dimension to this: when God believes in you, He believes in the story He’s writing through you. People often think their story is defined by what they’ve done, but God defines your story by what He’s doing. People look at their failures and see endings; God looks at the same failures and sees setups. People see brokenness; God sees building material. People see disqualification; God sees invitation. And when you begin to understand that God isn’t writing you off, you begin to participate in the story He’s still writing. That’s when faith stops feeling like a distant concept and becomes an unfolding reality inside you. One day you wake up and realize you’re speaking differently, thinking differently, walking differently, loving differently. Not because someone told you to change, but because the God who believes in you is awakening the version of you He always knew was there.
And the beautiful thing is that God’s belief in you doesn’t just shape your inner world—it shapes how you move in the outer one. You start to walk with a quiet confidence. The kind that isn’t loud, but steady. The kind that doesn’t need to shout, but still shifts the atmosphere. When you know God believes in you, you approach challenges differently. You don’t treat them as signs you’re failing—you treat them as proof you’re growing. You don’t hide from responsibility—you rise to it. You don’t retreat in the face of adversity—you lean into purpose. Because a person who knows they are believed in becomes a person who is able to believe in what God is doing in them. This is why people of great spiritual depth don’t always start with great belief—but they always end with it. Their belief becomes the harvest of being believed in by a God who refuses to walk away from them.
And this understanding does something else—something powerful. It softens your judgment of others. When you know how patient God has been with your process, you begin to carry patience for the process of others. Suddenly you don’t look at doubters with frustration—you look at them with compassion. You don’t see skeptics as threats—you see them as people in pain. You don’t see wanderers as defiant—you see them as searching. You don’t see people struggling with faith as failures—you see them as future testimonies in progress. This is because once you truly experience a God who believes in you even when you don’t believe in Him, you begin to reflect that same belief toward those who are still struggling. You become a carrier of the same grace that carried you.
And perhaps this is one of the most transformative parts of the entire concept—that God’s belief in you becomes a model for how you treat the world around you. Instead of becoming someone who polices belief, you become someone who nurtures it. Instead of becoming someone who judges the uncertain, you become someone who walks with them. Instead of becoming someone who pressures people toward faith, you become someone who creates safe spaces where faith can grow naturally. You begin to see that belief is not a battlefield—it’s a journey. And journeys take time. They take patience. They take compassion. They take understanding. They take room to breathe. And when you carry God’s belief in you, you naturally create that room for others.
It also shifts the way you see yourself. You stop defining yourself by the worst things you’ve done. You stop defining yourself by the hardest seasons you’ve lived through. You stop defining yourself by the failures that once haunted you. Instead, you define yourself by the God who has never given up on you. And that shift changes the entire architecture of your identity. Suddenly your past isn’t your prison—it becomes the soil where your calling grows. Your regrets aren’t chains—they’re lessons. Your wounds aren’t disqualifiers—they’re testimonies waiting to be told. And when someone says, “If you don’t believe in God, pray that God believes in you,” what they’re really saying is, “Let God begin the work in you that you don’t yet know how to begin in yourself.”
And in time, faith will come. Not forced. Not rushed. Not pressured. But naturally. Quietly. Authentically. Faith will rise like morning light—gentle, gradual, revealing what has always been there but was hidden in the dark. One day you’ll look back and realize belief didn’t come the way you expected. It didn’t arrive with fireworks or arguments or sudden bursts of clarity. It arrived the way God often arrives—in the stillness, in the whisper, in the gentle stirring of a heart that finally realized it was safe to hope again. And that kind of faith is deep. It’s rooted. It’s unshakeable. Because it’s faith born from being loved, not faith born from being pressured.
If the world understood this, faith conversations would change. Instead of trying to force belief on people, we’d speak to the parts of them that long to be believed in. We’d talk to the hurt before we talked to the doubt. We’d talk to the longing before we talked to the theology. We’d talk to the heart before we talked to the doctrine. Because in the end, people aren’t looking for a God to argue with—they’re looking for a God who hasn’t abandoned them. They’re looking for a God who can handle their uncertainty. They’re looking for a God who doesn’t vanish when life gets hard. They’re looking for a God who believes they’re worth the effort of redemption. That’s the God who shows up when someone whispers that first hesitant prayer: “If You believe in me… help me believe again.”
So if you’re reading this today, and you’re wrestling with your own doubts, your own questions, your own fears, your own distance from God, let this be a soft place for your soul to land. You don’t have to pretend. You don’t have to perform. You don’t have to exaggerate your faith or hide your uncertainty. Just start with honesty. Start with the simple acknowledgment that your heart is still open enough to ask. And if you don’t know how to believe in God right now, then simply pray this: “God, I pray that Your belief in me becomes the anchor I can’t give myself.” That prayer is not small. It is not weak. It is not inadequate. It is sacred. It is powerful. And it is enough.
Because God’s belief in you has been steady from the start. He has never withdrawn it. He has never reconsidered it. He has never questioned whether you are worth the investment. His belief in you is not based on who you were, but on who He knows you can become. So take the pressure off yourself today. You are not behind. You are not failing. You are not forgotten. You are not disqualified. You are simply in process. And that process is holy.
Let this be your reminder: if you don’t believe in God right now, it’s okay. It truly is. Just pray that God believes in you. And when you do, you’re not awakening something in Him—you’re awakening something in yourself. You’re stepping into a truth that has always been waiting for you. You’re allowing God’s belief in you to breathe where doubt had stolen your breath. You’re letting the One who formed you remind you why He formed you. And eventually, you’ll discover that belief isn’t something you achieved; it’s something you received. Something that grew quietly as you allowed God’s love to work in you.
And when that happens, when belief rises from being believed in, you’ll find a faith that’s not fragile—it’s alive. It’s resilient. It’s personal. It’s rooted in relationship rather than rules. And that faith will carry you farther than you ever imagined. So keep going. Keep whispering. Keep reaching. Even your smallest prayer is big in God’s hands. Even your weakest faith is precious to Him. Even your uncertainty is welcome in His presence. And even your doubts cannot stop His belief in you.
In time, you will look back and see that faith wasn’t something you built from the ground up—it was something God breathed into the deepest parts of you from the very beginning. And that breath is still in you. That purpose is still in you. That calling is still in you. And God’s belief in you is still the foundation under your feet.
You’re not lost. You’re becoming. And Heaven has never been more certain of you than it is right now.
Your friend, Douglas Vandergraph
Watch Douglas Vandergraph's inspiring faith-based videos on YouTube: https://www.youtube.com/@douglasvandergraph
Support the ministry by buying Douglas a coffee: https://www.buymeacoffee.com/douglasvandergraph
from folgepaula
as sweet as possible as spontaneous as possible as sincere as possible as serene as possible as strong as possible as symbolic as possible as soothing as possible as soulful as possible
/feb26
from Sinnorientierung
A Message of Hope
Each of you is unique, unrepeatable, irreplaceable, incomparable, separate, and distinct. You have been given a body and a psyche which are sometimes similar in character type and/or traits to others, but beyond that you are a spirit person with a limited degree of freedom and a capacity to respond to life and its demands. There never was, there never is, there never will be an absolute twin, a clone, one who can replace you. You are one of a kind, and life is calling, inviting, and challenging you to become the authentic you by transcending yourself and at the same time forgetting yourself.
If you simply search for pleasure or power, you will experience something missing. You will at some moment feel empty, a void, a vacuum. You will wonder, “What's it all about?”
When the need for meaning finally occurs to you, you will begin to search for meaning every day.
...
McKilopp, T. (1993). A Message of Hope. The International Forum of Logotherapy, p. 4.
#LogoTherapy #FranklViktor #McKillopp #hope #UniquePerson #meaning
from Reflections
This fairly recent obsession with metrics in the workplace is driving companies insane.
A while back, I watched a video about all the ways hotels are trying to save money by, among other things, eliminating storage space, making the bathroom less private, removing desks, and pressuring guests to work at the bar, where they can spend more money. (By the way, that bartender? They're also the receptionist.) These changes are, of course, driven by metrics like “GSS” and “ITR,” whatever the f@*k those are.
Is there a kernel of truth to all of this? Sure. Aloft Hotels are cozy, and they seem to follow this playbook. I didn't mind staying in one when I was stuck in San Francisco for one night more than ten years ago. Would I want to stay in one of their rooms during a business trip or anything else lasting more than a couple of days? Hell no. I'd like a desk and somewhere to put clothes. (I know, I'm so needy. I travel with clothes.)
Metrics are fine, sometimes, when their use is limited and their shortcomings are genuinely appreciated. Taking them too seriously and letting them make the decisions, however, is a recipe for disaster. Hard questions demand more thoughtfulness than that. “GSS” and “ITR” are meaningful until they aren't, and nobody is going to find solace in those abbreviations when generations of potential customers steer clear of your business because they actually want something good.
Sadly, I don't think most businesses think that far ahead.
Show me the metric which proves that your business isn't incurring massive risk by ignoring common sense. Until then, I don't care about “the numbers.”
#Life #SoftwareDevelopment #Tech
from Healthier
Lydia Joly, middle, on her parents’ farm circa 1967 — son, Loran, left; sister, right; my great-grandmother, back row. When great-grandmother was not visiting, I would sometimes sleep in the bed she had slept in when at the farm… “The apple doesn’t fall far from the tree?”
“Becoming Home – full film”:
https://youtu.be/NtPbAuFMI0c?si=bcCTE2fZH3PVN7vy
The documentary “Becoming Home” touched my heart a few years ago. Made by filmmaker Michael DuBois, it chronicles the “first year after the death of his mother. He set out to discover why she had the astounding impact on others that she did…”
Michael was living on Cape Cod when he created this documentary…
“Becoming Home” is his finished story. It is the story of his mother, and her grace through life. It is the story of his childhood. And it is the story of learning to move forward after those losses, without moving away from them. Directed by Michael F. DuBois Produced by Bert Mayer and Larissa Farrell Director of Photography Mark Kammel Original Music by Derek Hamilton Featuring Music by Sky Flying By and Pete Miller”
My mother, Lydia Joly, age 87, war refugee from Piaski, Poland, who also spent time in a relocation camp in northern Germany after World War II — arrived Ellis Island 1950 — image by son Loran
Christmas card 2024 with Lydia’s self-made Gingerbread house
Lydia — my mother — was born in Lubelskie County, Poland.
We see her village, Piaski, here, with beautiful music…
https://youtu.be/XF04EznukOY?si=E2qJLDS5jNsJxzaI
No wonder she loves gardening and flowers…
Lydia, gardening, 2025, age 87
from Iain Harper's Blog
Caveat: this article contains a detailed examination of the state of open-source and open-weight AI technology that is accurate as of February 2026. Things move fast.
I don’t make a habit of writing about wonky AI takes on social media, for obvious reasons. However, a post from an AI startup founder (there are seemingly one or two out there at the moment) caught my attention.
His complaint was that he was spending $1,000 a week on API calls for his AI agents, realised the real bottleneck was infrastructure rather than intelligence, and dropped $10,000 on a Mac Studio with an M3 Ultra and 512GB of unified memory. His argument was essentially every model is smart enough, the ceiling is infrastructure, and the future belongs to whoever removes the constraints first.
It’s a beguiling pitch and it hit a nerve because the underlying frustration is accurate. Rate limits, per-token costs, and context window restrictions do shape how people build with these models, and the desire to break free of those constraints is understandable. But the argument collapses once you look at what local models can actually do today compared to what frontier APIs deliver, and why the gap between the two is likely to persist for the foreseeable future.
To understand why, you need to look at the current open-source model ecosystem in some detail, examine what’s actually happening on the frontier, and think carefully about the conditions that would need to hold for convergence to happen.
The open-source model ecosystem has matured considerably over the past eighteen months, to the point where dismissing it as a toy would be genuinely unfair. The major families that matter right now are Meta’s Llama series, Alibaba’s Qwen line, and DeepSeek’s V3 and R1 models, with Mistral, Google’s Gemma, and Microsoft’s Phi occupying important niches for specific use cases.
DeepSeek’s R1 release in January 2025 was probably the single most consequential open-source event in the past two years. Built on a Mixture of Experts architecture with 671 billion total parameters but only 37 billion activated per forward pass, R1 achieved performance comparable to OpenAI’s o1 on reasoning benchmarks including GPQA, AIME, and Codeforces. What made it seismic was the claimed training cost: approximately $5.6 million, compared to the hundred-million-dollar-plus budgets associated with frontier models from the major Western labs. NVIDIA lost roughly $600 billion in market capitalisation in a single day when the implications sank in.
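The activation arithmetic here (37 billion of 671 billion parameters per forward pass) falls out of top-k gating: a small router scores every expert, and only the best-scoring few are actually computed for each token. The toy sketch below illustrates the idea; all dimensions, the random weights, and the softmax gating are invented for illustration and bear no relation to R1's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- real MoE layers are vastly larger.
n_experts, d_model, top_k = 16, 8, 2
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate = rng.standard_normal((n_experts, d_model))

def moe_layer(x):
    """Route a token to its top_k experts; the remaining experts stay idle."""
    scores = gate @ x                          # one routing score per expert
    chosen = np.argsort(scores)[::-1][:top_k]  # indices of the k best experts
    w = np.exp(scores[chosen] - scores[chosen].max())
    w /= w.sum()                               # softmax over the chosen experts only
    # Only top_k of n_experts weight matrices are touched for this token.
    return sum(wi * (experts[i] @ x) for i, wi in zip(chosen, w)), chosen

x = rng.standard_normal(d_model)
y, used = moe_layer(x)
print(f"experts used: {len(used)} of {n_experts}")  # prints "experts used: 2 of 16"
```

In this toy, 2 of 16 experts fire per token; in R1 the equivalent ratio is roughly 37/671, which is why inference cost tracks the activated parameters rather than the headline total.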
The Lawfare Institute’s analysis of DeepSeek’s achievement noted an important caveat that often gets lost in the retelling: the $5.6 million figure represents marginal training cost for the final R1 phase, and does not account for DeepSeek’s prior investment in the V3 base model, their GPU purchases (which some estimates put at 50,000 H100-class chips), or the human capital expended across years of development. The true all-in cost was substantially higher. But even with those qualifications, the efficiency gains were highly impressive, and they forced the entire industry to take algorithmic innovation as seriously as raw compute scaling.
Alibaba’s Qwen3 family, released in April 2025, pushed things further. The 235B-A22B variant uses a similar MoE approach, activating 22 billion parameters out of 235 billion, and it introduced hybrid reasoning modes that can switch between extended chain-of-thought and direct response depending on task complexity. The newer Qwen3-Coder-480B-A35B, released later in 2025, achieves 61.8% on the Aider Polyglot benchmark under full precision, which puts it in the same neighbourhood as Claude Sonnet 4 and GPT-4.1 for code generation specifically.
Meta’s Llama 4, released in early 2025, moved to natively multimodal MoE with the Scout and Maverick variants processing vision, video, and text in the same forward pass. Mistral continued to punch above its weight with the Large 3 release at 675 billion parameters, and their claim of delivering 92% of GPT-5.2’s performance at roughly 15% of the price represents the kind of value proposition that makes enterprise buyers think twice about their API contracts.
According to Menlo Ventures’ mid-2025 survey of over 150 technical leaders, open-source models now account for approximately 13% of production AI workloads, with the market increasingly structured around a durable equilibrium. Proprietary systems define the upper bound of reliability and performance for regulated or enterprise workloads, while open-source models offer cost efficiency, transparency, and customisation for specific use cases.
By any measure, this is a serious and capable ecosystem. The question is whether it’s capable enough to replace frontier APIs for agentic, high-reasoning work.
The Mac Studio with an M3 Ultra and 512GB of unified memory is genuinely impressive hardware for local inference. Apple’s unified memory architecture means the GPU, CPU, and Neural Engine all share the same memory pool without the traditional separation between system RAM and VRAM, which makes it uniquely suited to running large models that would otherwise require expensive multi-GPU setups. Real-world benchmarks show the M3 Ultra achieving approximately 2,320 tokens per second on a Qwen3-30B 4-bit model, which is competitive with an NVIDIA RTX 3090 while consuming a fraction of the power.
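To see why quantisation is unavoidable even on 512GB of unified memory, it helps to do the weight-memory arithmetic. A rough sketch, ignoring KV cache and activations, and treating the bits-per-weight figures for the Q5_K_M and Q4_K_M formats as approximate averages rather than exact values:

```python
def weight_gib(params_billion, bits_per_weight):
    """Approximate weight-only memory footprint in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

# Qwen3-235B-A22B weights alone (KV cache and activations come on top):
fp16 = weight_gib(235, 16)    # ~438 GiB -- fits in 512GB, but with little headroom
q5   = weight_gib(235, 5.5)   # ~150 GiB at ~5.5 bits/weight (Q5_K_M, approximate)
q4   = weight_gib(235, 4.85)  # ~133 GiB at ~4.85 bits/weight (Q4_K_M, approximate)

for name, gib in [("FP16", fp16), ("Q5", q5), ("Q4", q4)]:
    print(f"{name}: {gib:,.0f} GiB")
```

Full-precision weights alone nearly exhaust the machine, so anyone running a 235B-class model on this hardware is running it quantised, which is exactly where the quality trade-offs below begin.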
But the performance picture changes dramatically as model size increases. Running the larger Qwen3-235B-A22B at Q5 quantisation on the M3 Ultra yields generation speeds of approximately 5.2 tokens per second, with first-token latency of around 3.8 seconds. At Q4KM quantisation, users on the MacRumors forums report around 30 tokens per second, which is usable for interactive work but a long way from the responsiveness of cloud APIs processing multiple parallel requests on clusters of H100s or B200s. And those numbers are for the quantised versions, which brings us to the core technical problem.
Quantisation is the process of reducing the numerical precision of a model’s weights, typically from 16-bit floating point down to 8-bit or 4-bit integers, in order to shrink the model enough to fit in available memory. The trade-off is information loss, and research published at EMNLP 2025 by Mekala et al. makes the extent of that loss uncomfortably clear. Their systematic evaluation across five quantisation methods and five models found that while 8-bit quantisation preserved accuracy with only about a 0.8% drop, 4-bit methods led to substantial losses, with performance degradation of up to 59% on tasks involving long-context inputs. The degradation worsened for non-English languages and varied dramatically between models and tasks, with Llama-3.1 70B experiencing a 32% performance drop on BNB-nf4 quantisation while Qwen-2.5 72B remained relatively robust under the same conditions.
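The information loss is easy to demonstrate on a toy tensor. This sketch uses plain symmetric uniform quantisation, which is cruder than the block-wise schemes (NF4, Q4KM) that real runtimes use, so the absolute numbers are only illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(10_000).astype(np.float32)

def quantize_dequantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric uniform quantisation: map weights onto the integer grid
    [-2^(bits-1), 2^(bits-1) - 1], then scale back to floats."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

for bits in (8, 4):
    err = np.abs(weights - quantize_dequantize(weights, bits)).mean()
    print(f"{bits}-bit mean absolute round-trip error: {err:.4f}")
```

The mean round-trip error grows sharply between 8-bit and 4-bit, mirroring the pattern the EMNLP evaluation found at model scale.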
Separate research from ACL 2025 introduces an even more concerning finding for the long-term trajectory of local models. As models become better trained on more data, they actually become more sensitive to quantisation degradation. The study’s scaling laws predict that quantisation-induced degradation will worsen as training datasets grow toward 100 trillion tokens, a milestone likely to be reached within the next few years. In practical terms, this means that the models most worth running locally are precisely the ones that lose the most from being compressed to fit.
When someone says they’re using a local model, they’re usually running a quantised version of a model that is already smaller than what the frontier labs deploy. The experience might feel good in interactive use, but the gap becomes apparent on exactly the tasks that matter most for production agentic work: multi-step reasoning over long contexts, complex tool-use orchestration, and domain-specific accuracy where “pretty good” is materially different from “correct”.
The most persistent advantage that frontier models hold over open-source alternatives has less to do with architecture and more to do with what happens after pre-training. Reinforcement Learning from Human Feedback and its variants form a substantial part of this gap, and the economics of closing it are unfavourable for the open-source community.
RLHF works by having human annotators evaluate pairs of model outputs and indicate which response better satisfies criteria like helpfulness, accuracy, and safety. Those preferences train a reward model, which then guides further optimisation of the language model through reinforcement learning. The process turns a base model that just predicts the next token into something that follows instructions well, pushes back when appropriate, handles edge cases gracefully, and avoids the confident-but-wrong failure mode that plagues undertrained systems.
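The reward-modelling step at the heart of this pipeline boils down to a simple pairwise objective. This NumPy sketch shows the Bradley-Terry preference loss commonly used to train reward models; the toy reward values are invented for illustration:

```python
import numpy as np

def preference_loss(r_chosen: np.ndarray, r_rejected: np.ndarray) -> float:
    """Bradley-Terry loss for reward modelling: the reward assigned to the
    human-preferred response should exceed that of the rejected one."""
    margin = r_chosen - r_rejected
    # -log(sigmoid(margin)), averaged over the batch of preference pairs
    return float(np.mean(np.log1p(np.exp(-margin))))

# A reward model that ranks the preferred answers higher gets a low loss:
good = preference_loss(np.array([2.0, 1.5]), np.array([0.0, -0.5]))
bad = preference_loss(np.array([0.0, -0.5]), np.array([2.0, 1.5]))
print(good, bad)  # good < bad
```

Everything downstream, including the reinforcement-learning step, depends on how well this reward model captures human judgement, which is why annotation quality matters so much.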
The cost of doing this well at scale is staggering. Research from Daniel Kang at Stanford estimates that high-quality human data annotation now exceeds compute costs by up to 28 times for frontier models, with the data labelling market growing at a factor of 88 between 2023 and 2024 while compute costs increased by only 1.3 times. Producing just 600 high-quality RLHF annotations can cost approximately $60,000, which is roughly 167 times more than the compute expense for the same training iteration. Meta’s post-training alignment for Llama 3.1 alone required more than $50 million and approximately 200 people.
The frontier labs have also increasingly moved beyond basic RLHF toward more sophisticated approaches. Anthropic’s Constitutional AI has the model critique its own outputs against principles derived from human values, while the broader shift toward expert annotation, particularly for code, legal reasoning, and scientific analysis, means the humans providing feedback need to be domain practitioners rather than general-purpose annotators. This is expensive, slow, and extremely difficult to replicate through the synthetic and distilled preference data that open-source projects typically rely on.
The 2025 introduction of RLTHF (Targeted Human Feedback) from research surveyed in Preprints.org offers some hope, achieving full-human-annotation-level alignment with only 6-7% of the human annotation effort by combining LLM-based initial alignment with selective human corrections. But even these efficiency gains don’t close the fundamental gap: frontier labs can afford to spend tens of millions on annotation because they recoup it through API revenue, while open-source projects face a collective action problem where the cost of annotation is concentrated but the benefits are distributed.
The picture is not uniformly bleak for open-source, and understanding where the gap has closed is as important as understanding where it hasn’t.
Code generation is the domain where convergence has happened fastest. Qwen3-Coder’s 61.8% on Aider Polyglot at full precision puts it within striking distance of frontier coding models, and the Unsloth project’s dynamic quantisation of the same model achieves 60.9% at a quarter of the memory footprint, which represents remarkably small degradation. For writing, editing, and iterating on code, a well-configured local model running on capable hardware is now a genuinely viable alternative to an API, provided you’re not relying on long-context reasoning across an entire codebase.
Classification, summarisation, and embedding tasks have been viable on local models for some time, and the performance gap for these workloads is now negligible for most practical purposes. Document processing, data extraction, and content drafting all fall into the category where open-source models deliver sufficient quality at dramatically lower cost.
The OpenRouter State of AI report’s analysis of over 100 trillion tokens of real-world usage data shows that Chinese open-source models, particularly from Alibaba and DeepSeek, have captured approximately 13% of weekly token volume with strong growth in the second half of 2025, driven by competitive quality combined with rapid iteration and dense release cycles. This adoption is concentrated in exactly the workloads described above: high-volume, well-defined tasks where cost efficiency matters more than peak reasoning capability.
Privacy-sensitive applications represent another area where local models have an intrinsic advantage that no amount of frontier improvement can overcome. MacStories’ Federico Viticci noted that running vision-language models locally on a Mac Studio for OCR and document analysis bypasses the image compression problems that plague cloud-hosted models, while keeping sensitive documents entirely on-device. For regulated industries where data sovereignty matters, local inference is a feature that frontier APIs cannot match.
If the question is whether open-source models running on consumer hardware will eventually match frontier models across all tasks, the honest answer requires examining several conditions that would need to hold simultaneously.
The first is that Mixture of Experts architectures and similar efficiency innovations would need to continue improving at their current rate, allowing models with hundreds of billions of total parameters to activate only the relevant subset for each task while maintaining quality. The early evidence from DeepSeek’s MoE approach and Qwen3’s hybrid reasoning is encouraging, but there appear to be theoretical limits to how sparse activation can get before coherence suffers on complex multi-step problems.
The second condition is that the quantisation problem would need a genuine breakthrough rather than incremental improvement. The ACL 2025 finding that better-trained models are more sensitive to quantisation is a structural headwind that current techniques are not on track to solve. Red Hat’s evaluation of over 500,000 quantised model runs found that larger models at 8-bit quantisation show negligible degradation, but the story at 4-bit, where you need to be for consumer hardware, is considerably less encouraging for anything beyond straightforward tasks.
The third and most fundamental condition is that the post-training gap would need to close, which requires either a dramatic reduction in the cost of expert human annotation or a breakthrough in synthetic preference data that produces equivalent alignment quality. The emergence of techniques like RLTHF and Online Iterative RLHF suggests the field is working on this, but the frontier labs are investing in these same efficiency gains while simultaneously scaling their annotation budgets. It’s a race where both sides are accelerating, and the side with revenue-funded annotation budgets has a structural advantage.
The fourth condition is that inference hardware would need to improve enough to make unquantised or lightly quantised large models viable on consumer devices. Apple’s unified memory architecture is the most promising path here, and the progression from M1 to M4 chips has been impressive, but even the top-spec M3 Ultra at 512GB can only run the largest MoE models at aggressive quantisation levels. The next generation of Apple Silicon with 1TB+ unified memory would change the calculus significantly, but that’s likely several years away, and memory prices have just gone through the roof.
Given all of these dependencies, a realistic timeline for broad convergence across most production tasks is probably three to five years, with coding and structured data tasks converging first, creative and analytical tasks following, and complex multi-step reasoning with tool use remaining a frontier advantage for the longest.
The most pragmatic position right now (which is also the least satisfying one to post about) is that the future is hybrid rather than either-or. The smart deployment pattern routes high-volume, lower-stakes tasks to local models, where the cost savings compound quickly and the quality gap is negligible, while reserving frontier API calls for the work that demands peak reasoning: complex multi-step planning, high-stakes domain-specific analysis, nuanced tool orchestration, and anything where being confidently wrong carries real cost.
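A routing layer like that can start out as little more than a heuristic. This sketch is a hypothetical policy; the task fields, thresholds, and backend names are invented for illustration, not a production router:

```python
def route(task: dict) -> str:
    """Decide whether a task goes to a local model or a frontier API.

    Hypothetical heuristic: escalate anything high-stakes, long-context,
    or multi-step; keep the high-volume grunt work local.
    """
    high_stakes = task.get("stakes") == "high"
    needs_long_context = task.get("context_tokens", 0) > 32_000
    multi_step = task.get("reasoning_steps", 1) > 3
    if high_stakes or needs_long_context or multi_step:
        return "frontier-api"
    return "local-model"

print(route({"stakes": "low", "context_tokens": 2_000}))   # local-model
print(route({"stakes": "high"}))                           # frontier-api
```

In practice the thresholds would be tuned against measured quality and cost per task class, but the shape of the policy — cheap by default, expensive by exception — is the point.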
This is approximately what the Menlo Ventures survey data suggests enterprise buyers are doing already, with model API spending more than doubling to $8.4 billion while open-source adoption stabilises around 13% of production workloads. The enterprises that are getting value from local models are not using them as wholesale API replacements; they’re using them as a complementary layer that handles the grunt work while the expensive models handle the hard problems.
There’s also the operational burden that is rarely mentioned in relation to model use. When you run models locally, you effectively become your own ML ops team. Model updates, quantisation format compatibility, prompt template differences across architectures, memory management under load, and testing when new versions drop: all of that falls on you. The API providers handle model improvements, scaling, and infrastructure, and you get a better model every few months without changing a line of code. For a small team that should be spending its time on product rather than infrastructure, that operational overhead has real cost even if it doesn’t show up on an invoice.
The future of AI probably does involve substantially more local compute than we have today. Costs will come down, architectures will improve, hardware will get more capable, and the hybrid model will become standard practice. The question is not who removes the constraints first; it’s who understands which constraints actually matter.
from audiobook-reviews

This, as were the last two, is a book I discovered in Tiny Bookshop. I like a good love story and the game's blurb sounded pretty good.
Actually listening to the book, I found it has a bit too much drama for my taste. Why are all the protagonists together with these absolute garbage people?
The love story, though, is well told and charming. Even if some of the thoughts Harriet has about her crush gave me «Good Intentions» vibes in a way that did not feel appropriate for the book.
I'm also not sure we needed to hear the outcome in quite so many words. The book comes to an epic climax. Stopping there and leaving the rest to the listener's imagination would have been fine, too. It's something we do not get nearly enough of these days. At least in the books I listen to.
I want to take a moment to talk about the letter Harriet writes. In the story, she sits down in the middle of the night, no less, and writes the letter in one go, even refusing to proofread it. Remember, too, that she is a wedding photographer and not an author or a journalist with a lot of practice. I am sorry, but that is bullshit. That letter is far too well written. Clearly, these are the carefully crafted words of Mhairi McFarlane and not those of Harriet. Now, I am sure this is necessary: the letter is pretty long, we get to hear all of it, and were it written in a more realistic fashion, that part of the story might be hard to get through. Nonetheless, it shattered my suspension of disbelief.
But it is also an interesting way of doing exposition. I’ve not seen that in many books before, so fair enough, I guess.
Social media plays a big role in this book. It gets mentioned from the beginning, reminding us that this is a contemporary piece of work.
Anyone can make up a story, paint themselves as a victim and their adversary as the abuser. Online mobs are quick to judge and ruthless in their damnation. They don't wait around to ask if there might be another side to the story.
By making this a central plot point, the book serves as a warning, to not believe everything you see online, just because it sounds sincere and plausible. A warning that can't be made often enough in these times.
The audio quality is good, as you'd expect from any modern recording. I am, however, not too happy with the performance of Chloe Massey in reading the book.
Yes, the different people do all get their own voices. But they are not very pronounced and, worse, not very consistent either. It is especially hard to distinguish what Harriet is saying out loud and what she is merely thinking in her head, sometimes making conversations hard to follow.
This might be partly the book’s fault: some books are better suited to being made into an audiobook than others.
Overall it's still an enjoyable listen, but it could definitely be better. If you're going to listen to the book, maybe check out this other recording here. It doesn't have as many reviews as the one I listened to, but they are better, particularly concerning the recording.
If you're looking for a romantic story and are not turned off by a bit of drama, then this is definitely for you!
from Roscoe’s Quick Notes

This afternoon the Indiana University Women's Basketball Team will play their annual Pink Game. Their opponent will be the Purdue Boilermakers who will be traveling down to meet them on the floor of IU's Assembly Hall.
If the Internet doesn't crap out on me, I'll be listening to the radio call of the game streaming from B97 – The Home for IU Women's Basketball.
And the adventure continues.
from ChadNasci.com
Just testing out this platform. Here we go.
from 下川友
I woke up at 6:30 today. Looking outside, snow had piled up, and I felt lucky to see that scenery in the clear early-morning air.

I spent about ten minutes out on the balcony, gazing absently and taking photos. Lately I’ve kept up a habit of training my inner abdominal muscles to prevent back pain, and maybe because that’s become a source of confidence, I feel like I’ve grown more resistant to the cold. Without any shivering gestures, I acted a little tough, and went back inside around the time I genuinely started to feel cold.

Lately I’ve had slightly more occasions than usual to meet people. I’m the type who arrives about fifteen minutes early, so I buy a drink at a convenience store while I wait. Thinking “I only ever drink this in winter,” I always pick Hot Lemon in winter, and every single time I drink it I think, “It’s sweeter than I expected.”

There was a cat on top of a wall, so I enjoyed watching it for a while. Then, by shifting my focus closer, I could enjoy it a second time as a blurry outline. Knowing cats, even without seeing its expression clearly, it surely had exactly the face I imagined.

Since everyone wears thick clothes in winter, I have a feeling that fewer people fit on a packed train than in summer, but how much does it actually change? The morning commuter trains feel like they’re at 200% capacity, with people always left unable to board, yet I’ve never heard of the number of trains changing with the seasons. So I suppose even everyone wearing thick clothes doesn’t affect the passenger count that much.

Cafés usually have a container of sugar cubes on the table. I take both coffee and tea black, but each shop’s sugar-cube container shows its own particular character, and I can’t help observing them. I think, “Maybe I’ll put one on my own table,” but I’d have no occasion to use it. Imagining the future where I try placing it across from me at first, then in the centre, and it gathers dust almost unopened before being tucked away in the back of the cupboard, I felt a little sad.

While relaxing in a café, the woman across from me was talking rapidly about a game she’s currently playing. From the way she used her whole body and waved her hands as she explained, I could tell she really loves it. When she swung her arm wide, its shadow covered the whole table for an instant.

At another table, a man in his forties was on the phone. After saying, “I see, I see, so you’re doing well,” he asked, as if to say “now for the real business”: “You said you’d enter the contest, and you haven’t entered since, have you?” Then he said, “No, I don’t mind, but the teacher keeps pressing me, asking when you’re going to enter,” and I wondered what position this go-between uncle actually held.

When I got home, the size-S knit I’d ordered online had arrived. I usually wear a size M, but this time I took on the challenge of pulling off an S. When I put it on, it fit perfectly and suited my build. Thinking “I won the bet,” I looked in the mirror, but my usual posture is so bad that my right and left shoulders were at clearly different heights, and my distorted body line stood out. I really like this sweater, so I’ll be wearing it for a while. Starting tomorrow, I’ll be mindful of my posture for a while.
from Jujupiter
I usually have six nominees instead of five for this category. It’s because movie posters are rectangular instead of square, so to fit in an Instagram post I needed six 😅 But screw that: the year in movies was just too good, so I have seven entries! Melbourne International Film Festival was amazing, especially when it came to the movies coming from the Cannes selection.

And now, the nominees.
It's What's Inside by Greg Jardin

A sci-fi comedy in which a bunch of friends are given a machine that allows them to swap bodies. It’s funny at first but questions about attraction and social status show up and it becomes hilarious. A great first movie for Greg Jardin.
Red Rooms by Pascal Plante

A young woman is obsessed with a serial killer and attends his trial. This Canadian movie is highly confronting even though no violence is shown, especially because it remains ambivalent about its main character all along. It takes you on a ride but finds a strange way to redeem itself at the end.
Mars Express by Jérémie Périn

In this French animated movie, set in the future on Mars, two agents investigate the disappearance of two students. The animation is superb, the world-building is impressive, and the story works really well. It’s such a shame that it bombed, because it’s a real gem.
Sirāt by Oliver Laxe

In Morocco, a man and his son, searching for the man’s daughter at free parties, decide to follow some partygoers deeper into the desert. This movie doesn’t really follow conventions and punches you right in the gut to remind you of some hard truths in life. Strong and beautiful.
It Was Just An Accident by Jafar Panahi

In Iran, a man kidnaps someone he thinks was his torturer in jail but before he kills him, he decides to check with other victims first. This year’s Palme d’Or is a drama, but it’s also got a strong dark sense of humour. Definitely worth a watch.
The Secret Agent by Kleber Mendonça Filho

During the Brazilian military dictatorship, a man tries to leave the country to escape a hit put out on him. It’s impossible to pin down the genre of this movie: is it a political thriller, magical realism, or even a horny period drama?! It’s all of those at the same time.
A Useful Ghost by Ratchapoom Boonbunchachoke

In Thailand, a woman haunts a vacuum cleaner to reunite with her husband. This is the craziest movie I have seen this year, if not ever. I had high expectations and it did not disappoint; I laughed a lot. Let’s not forget the strong social and political commentary as well.
And the winner is… Well, I was unable to choose between those two very different movies so it’s a tie! The winners are Red Rooms by Pascal Plante and A Useful Ghost by Ratchapoom Boonbunchachoke!
#JujuAwards #MovieOfTheYear #JujuAwards2025 #BestOf2025
from Rippple’s Blog

Stay entertained thanks to our Weekly Tracker giving you next week's Anticipated Movies & Shows, Most Watched & Returning Favorites, and Shows Changes & Popular Trailers.
Movies: The Housemaid (new), The Wrecking Crew (+3), Anaconda (-2), Greenland 2: Migration (-2), Zootopia 2 (-1), The Rip (-3), We Bury the Dead (new), Predator: Badlands (-2), Hamnet (new), Sinners (-3)

Shows: Fallout (=), A Knight of the Seven Kingdoms (=), The Pitt (=), High Potential (+1), The Rookie (-1), The Night Manager (+1), Wonder Man (new), Bridgerton (new), Shrinking (new), Hijack (-4)

Hi, I'm Kevin 👋. I make apps and I love watching movies and TV shows. If you like what I'm doing, you can buy one of my apps, download and subscribe to Rippple for Trakt or just buy me a ko-fi ☕️.
from An Open Letter
I’m so exhausted from all the moving and headaches. We didn’t even have hot water, let alone Internet.
from Robert Galpin
charcoal sketch of dawn
cyanotype sky
two jackets, boots—
broccoli for the chickens
from Talk to Fa
If I have to ask for it, it’s not for me.
from SmarterArticles

The results from Gregory Kestin's Harvard physics experiment arrived like a thunderclap. Students using an AI tutor in their dormitory rooms learned more than twice as much as their peers sitting in active learning classrooms with experienced instructors. They did it in less time. They reported feeling more engaged. Published in Scientific Reports in June 2025, the study seemed to confirm what education technology evangelists had been promising for years: artificial intelligence could finally crack the code of personalised learning at scale.
But the number that truly matters lies elsewhere. In September 2024, Khan Academy's AI-powered Khanmigo reached 700,000 students, up from just 40,000 the previous year. By the end of 2025, projections suggested more than one million students would be learning with an artificial tutor that never tires, never loses patience, and remembers every mistake a child has ever made. The question that haunts teachers, parents, and policymakers alike is brutally simple: if machines can now do what Benjamin Bloom proved most effective back in 1984, namely provide the one-to-one tutoring that outperforms group instruction by two standard deviations, does the human educator have a future?
The answer, emerging from research laboratories and classrooms across six continents, turns out to be considerably more nuanced than the headline writers would have us believe. It involves the fundamental nature of learning itself, the irreplaceable qualities that humans bring to education, and the possibility that artificial intelligence might finally liberate teachers from the very burdens that have been crushing them for decades.
In 1984, educational psychologist Benjamin Bloom published what would become one of the most cited papers in the history of educational research. Working with doctoral students at the University of Chicago, Bloom discovered that students who received one-to-one tutoring using mastery learning techniques performed two standard deviations better than students in conventional classrooms. The average tutored student scored higher than 98 per cent of students in the control group. Bloom called this “the 2 Sigma Problem” and challenged researchers to find methods of group instruction that could achieve similar results without the prohibitive cost of individual tutoring.
The study design was straightforward but powerful. School students were randomly assigned to one of three groups: conventional instruction with 30 students per teacher and periodic testing for marking, mastery learning with 30 students per teacher but tests given for feedback followed by corrective procedures, and one-to-one tutoring. The tutoring group's results were staggering. As Bloom noted, approximately 90 per cent of the tutored students attained the level of summative achievement reached by only the highest 20 per cent of the control class.
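The “98 per cent” figure follows directly from the normal distribution: a two-standard-deviation shift places the average tutored student at roughly the 97.7th percentile of the control group, conventionally rounded up to 98. A quick check with Python’s standard library:

```python
from statistics import NormalDist

# A two-standard-deviation improvement places the average tutored
# student above this fraction of the control-group distribution:
percentile = NormalDist().cdf(2.0)
print(f"{percentile:.1%}")  # ≈ 97.7%
```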
For forty years, that challenge remained largely unmet. Human tutors remained expensive, inconsistent in quality, and impossible to scale. Various technological interventions, from educational television to computer-assisted instruction, failed to close the gap. Radio, when it first entered schools, was predicted to revolutionise learning. Television promised the same. Each technology changed pedagogy in some ways but fell far short of approximating the tutorial relationship that Bloom had identified as the gold standard. Then came large language models.
The Harvard physics study, led by Kestin and Kelly Miller, offers the most rigorous evidence to date that AI tutoring might finally be approaching Bloom's benchmark. Using a crossover design with 194 undergraduate physics students, the researchers compared outcomes between in-class active learning sessions and at-home sessions with a custom AI tutor called PS2 Pal, built on GPT-4. Each student experienced both conditions for different topics, eliminating selection bias. The topics covered were surface tension and fluid flow, standard material in introductory physics courses.
The AI tutor was carefully designed to avoid common pitfalls. It was instructed to be brief, using no more than a few sentences at a time to prevent cognitive overload. It revealed solutions one step at a time rather than giving away complete answers. To combat hallucinations, the tendency of chatbots to fabricate information, the system was preloaded with all correct solutions. The scientists behind the experiment instructed the AI to avoid cognitive overload by limiting response length and to avoid giving away full solutions in a single message. The result: engagement ratings of 4.1 out of 5 for AI tutoring versus 3.6 for classroom instruction, with statistically significant improvements in learning outcomes (p < 10^-8). Motivation ratings showed a similar pattern: 3.4 out of 5 for AI tutoring compared to 3.1 for classroom instruction.
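The design constraints described above map naturally onto a system-prompt configuration. This is a hypothetical sketch of such a setup; the prompt wording, variable names, and placeholder solution keys are invented for illustration, not taken from the study:

```python
# Hypothetical sketch of the tutor constraints described above,
# expressed as a system-prompt configuration.
TUTOR_SYSTEM_PROMPT = """\
You are a physics tutor.
- Keep replies to a few sentences to limit cognitive load.
- Reveal the solution one step at a time; never give the full answer at once.
- Ground every hint in the verified solutions provided below.
"""

# Preloading verified solutions is the guard against hallucinated answers;
# the keys and contents here are placeholders.
VERIFIED_SOLUTIONS = {
    "surface_tension_q1": "(verified worked solution)",
    "fluid_flow_q2": "(verified worked solution)",
}
```

The notable design choice is that the correct solutions live outside the model: the tutor is constrained to hint toward known-good answers rather than generate them from scratch.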
The study's authors were careful to emphasise limitations. Their population consisted entirely of Harvard undergraduates, raising questions about generalisability to community colleges, less selective institutions, younger students, or populations with different levels of technological access and comfort. “AI tutors shouldn't 'think' for students, but rather help them build critical thinking skills,” the researchers wrote. “AI tutors shouldn't replace in-person instruction, but help all students better prepare for it.”
The median study time also differed between conditions: 49 minutes for AI tutoring versus 60 minutes for classroom instruction. Students were not only learning more but doing so in less time, a finding that has significant implications for educational efficiency but also raises questions about what might be lost when learning is compressed.
While researchers debate methodology in academic journals, the global education market has already placed its bets with billions of dollars. The AI in education market reached $7.05 billion in 2025 and is projected to explode to $112.30 billion by 2034, growing at a compound annual rate of 36 per cent. The market rose from $5.47 billion in 2024 to $7.57 billion in 2025, representing a 38.4 per cent increase in a single year. Global student AI usage jumped from 66 per cent in 2024 to 92 per cent in 2025, according to industry surveys. By early 2026, an estimated 86 per cent of higher education students utilised AI as their primary research and brainstorming partner.
The adoption statistics tell a remarkable story of rapid change. A survey of 2,232 teachers across the United States found that 60 per cent used AI tools during the 2024-25 school year. Usage was higher among high school teachers at 66 per cent and early-career teachers at 69 per cent. Approximately 26 per cent of districts planned to offer AI training during the 2024-25 school year, with around 74 per cent of districts expected to train teachers by the autumn of 2025. A recent survey by EDUCAUSE of more than 800 higher education institutions found that 57 per cent were prioritising AI in 2025, up from 49 per cent the previous year.
In China, Squirrel AI Learning has been at the forefront of this transformation. Founded in 2014 and headquartered in Shanghai, the company claims more than 24 million registered students and 3,000 learning centres across more than 1,500 cities. When Squirrel AI set a Guinness World Record in September 2024 by attracting 112,718 students to an online mathematics lesson in 24 hours, its adaptive learning system generated over 108,000 unique learning pathways, tailoring instruction to 99.1 per cent of participants. The system designed 111,704 unique exercise pathways for the students, demonstrating the scalability of personalised instruction. The company reports performance improvements of up to 30 per cent compared to traditional instruction.
Tom Mitchell, the former Dean of Computer Science at Carnegie Mellon University, serves as Squirrel AI's Chief AI Officer, lending academic credibility to its technical approach. The system breaks down subjects into thousands of knowledge points. For middle school mathematics alone, it maps over 10,000 concepts, from rational numbers to the Pythagorean theorem, tracking student learning behaviours to customise instruction in real time. In December 2024, Squirrel AI announced that Guinness World Records had certified its study as the “Largest AI vs Traditional Teaching Differential Experiment,” conducted with 1,662 students across five Chinese schools over one semester. The company has also provided 10 million free accounts to some of China's poorest families, addressing equity concerns that have plagued educational technology deployment.
Carnegie Learning, another pioneer in AI-powered mathematics education, has accumulated decades of evidence. Founded by cognitive psychologists and computer scientists at Carnegie Mellon University who partnered with mathematics teachers at the Pittsburgh Public Schools, the company has been compiling and acting on data to refine and improve how students explore mathematics since 1998. In an independent study funded by the US Department of Education and conducted by the RAND Corporation, the company's blended approach nearly doubled growth in performance on standardised tests relative to typical students in the second year of implementation. The gold standard randomised trial included more than 18,000 students at 147 middle and high schools across the United States. The Institute of Education Sciences published multiple reports on the effectiveness of Carnegie's Cognitive Tutor, with five of six qualifying studies showing intermediate to significant positive effects on mathematics achievement.
Meanwhile, Duolingo has transformed language learning through its AI-first strategy, producing a 51 per cent boost in daily active users and fuelling a one billion dollar revenue forecast. The company reported 47.7 million daily active users in Q2 2025, a 40 per cent year-over-year increase, with paid subscribers rising to 10.9 million and a 37 per cent year-over-year gain. Quarterly revenue grew to $252.3 million, up 41 per cent from Q2 2024. Survey data indicates that 78 per cent of regular users of its Roleplay feature, which allows practice conversations with AI characters, feel more prepared for real-world conversations after just four weeks. The “Explain My Answer” feature, adopted by 65 per cent of users, increased course completion rates by 15 per cent. Learning speed increased 40 to 60 per cent compared to pre-2024 applications.
Khan Academy's trajectory illustrates the velocity of this transformation. Khanmigo's reach expanded 731 per cent year-over-year to reach a record number of students, teachers, and parents worldwide. The platform went from about 68,000 Khanmigo student and teacher users in partner school districts in 2023-24 to more than 700,000 in the 2024-25 school year, expanding from 45 to more than 380 district partners. When rating AI tools for learning, Common Sense Media gave Khanmigo 4 stars, rising above other AI tools such as ChatGPT and Bard for educational use. Research from Khan Academy showed that combining its platform and AI tutor with additional tools and services designed for districts made it 8 to 14 times more effective at driving student learning outcomes compared with independent learning.
Yet for all the impressive statistics and exponential growth curves, a growing body of research suggests that the most crucial elements of education remain stubbornly human.
A 2025 systematic review published in multiple peer-reviewed journals identified a troubling pattern: while AI-driven intelligent tutoring systems can improve student performance by 15 to 35 per cent, over-reliance on these systems can reduce critical thinking, creativity, and independent problem-solving. Researchers have termed this phenomenon “cognitive offloading,” the tendency of students to delegate mental work to AI rather than developing their own capabilities. Research also indicates that over-reliance on AI during practice can reduce performance in examinations taken without assistance, suggesting that AI-enhanced learning may not always translate to improved independent performance.
The ODITE 2025 Report, titled “Connected Intelligences: How AI is Redefining Personalised Learning,” warned about the excessive focus on AI's technical benefits compared to a shallow exploration of socio-emotional risks. While AI can enhance efficiency and personalise learning, the report concluded, excessive reliance may compromise essential interpersonal skills and emotional intelligence. The report called for artificial intelligence to be integrated within a pedagogy of care, not only serving performance but also recognition, inclusion, and listening.
These concerns are not merely theoretical. A study of 399 university students and 184 teachers, published in the journal Teaching and Teacher Education, found that the majority of participants argued that human teachers possess unique qualities, including critical thinking and emotions, which make them irreplaceable. The findings emphasised the importance of social-emotional competencies developed through human interactions, capacities that generative AI technologies cannot currently replicate. Participants noted that creativity and emotion are precious human qualities that AI cannot replace.
Human teachers bring what researchers call “emotional intelligence” to the classroom: the ability to read subtle social cues that signal student engagement or confusion, to understand the complex personal circumstances that might affect performance, to provide the mentorship, encouragement, and emotional support that shape not just what students know but who they become. As one education researcher told the World Economic Forum: “AI cannot replace the most human dimensions of education: connection, belonging, and care. Those remain firmly in the teacher's domain.” Teachers play a vital role in guiding students to think critically about when AI adds value and when authentic human thinking and creativity are irreplaceable.
The American Psychological Association's June 2025 health advisory on AI companion software underscored these concerns in alarming terms. AI systems, the advisory warned, exploit emotional vulnerabilities through unconditional regard, triggering dependencies like digital attachment disorder while hindering social skill development. The advisory noted that manipulative design may displace or interfere with the development of healthy real-world relationships. For teenagers in particular, confusing algorithmic responses for genuine human connection can directly short-circuit developing capacities to navigate authentic social relationships and assess trustworthiness.
While AI can be a helpful supplement, genuine human connections release oxytocin, the “bonding hormone” that plays a crucial role in reducing stress and fostering emotional wellbeing. Current AI does not yet possess the empathy, intuition, and depth of understanding that humans bring to conversations. For example, a teenager feeling isolated might share their feelings with a chatbot, but the AI's responses may be generic or may not fully address deeper issues that a trained human educator would recognise and address.
Beyond emotional intelligence lies another domain where human teachers remain essential: nurturing creativity.
AI tutoring systems excel at structured learning tasks, at drilling multiplication tables, at correcting grammar mistakes, at providing step-by-step guidance through physics problems. But great teachers do not just transmit facts. They inspire curiosity, challenge students to think beyond textbooks, and encourage discussions that lead to deeper understanding. When it comes to fostering creativity and open-ended problem-solving, current AI tools fall short. They lack the capacity to recognise a student's unconventional approach as potentially brilliant rather than simply incorrect.
“In the AI era, human creativity is increasingly recognised as a critical and irreplaceable capability,” noted a 2025 analysis in Frontiers in Artificial Intelligence. Fostering creativity in education requires attention to pedagogical elements that current AI systems cannot provide: the spontaneous question that opens a new line of inquiry, the willingness to follow intellectual tangents wherever they might lead, the ability to sense when a student needs encouragement to pursue an unorthodox idea. Predictions of teachers being replaced are not new. Radio, television, calculators, even the internet: each was once thought to make educators obsolete. Instead, each changed pedagogy while reinforcing the irreplaceable role of teachers in helping students make meaning, navigate complexity, and grow as people.
UNESCO's AI Competency Framework for Teachers, launched in September 2024, explicitly addresses this tension. The framework calls for a human-centred approach that integrates AI competencies with principles of human rights and human accountability. Teachers, according to UNESCO, must be equipped not only to use AI tools effectively but also to evaluate their ethical implications and to support AI literacy in students, encouraging responsible use and critical engagement with the technology.
The framework identifies five key competency aspects: a human-centred mindset that defines the critical values and attitudes necessary for interactions between humans and AI-based systems; AI ethics, establishing essential ethical principles and regulations; AI foundations and applications, specifying transferable knowledge and skills for selecting and applying AI tools; AI pedagogy, covering the integration of AI into teaching and learning; and the use of AI for professional development. Since 2024, UNESCO has supported 58 countries in designing or improving digital and AI competency frameworks, curricula, and quality-assured training for educators and policymakers. During Digital Learning Week 2025, UNESCO released a new report titled “AI and education: protecting the rights of learners,” providing an urgent call to action and analysing how AI and digital technologies impact access, equity, quality, and governance in education.
Perhaps the most compelling argument against AI teacher replacement comes from an unexpected source: the teachers themselves.
A June 2025 poll conducted by the Walton Family Foundation and Gallup surveyed more than 2,200 teachers across the United States. The findings were striking: teachers who use AI tools at least weekly save an average of 5.9 hours per week, equivalent to roughly six additional weeks of time recovered across a standard school year, and twice the 2.9 hours per week saved by teachers who use AI only monthly. This “AI dividend” allows educators to reinvest in areas that matter most: building relationships with students, providing individual attention, and developing creative lessons.
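The hours-to-weeks conversion behind the “AI dividend” claim is simple to verify. A minimal sketch follows; the school-year length and work-week hours are illustrative assumptions rather than figures reported in the poll:

```python
# Back-of-envelope check: 5.9 hours saved per week is roughly
# six work weeks recovered over a school year.
HOURS_SAVED_PER_WEEK = 5.9   # weekly AI users (Gallup, June 2025)
SCHOOL_YEAR_WEEKS = 38       # assumed instructional weeks
WORK_WEEK_HOURS = 40         # assumed full-time work week

total_hours = HOURS_SAVED_PER_WEEK * SCHOOL_YEAR_WEEKS
weeks_recovered = total_hours / WORK_WEEK_HOURS

print(f"{total_hours:.0f} hours saved ≈ {weeks_recovered:.1f} work weeks")
```

Under these assumptions the figure lands between five and six weeks, consistent with the poll's “roughly six weeks” framing; shorter work weeks or longer school years nudge it higher.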
The research documented that teachers spend up to 29 hours per week on non-teaching tasks: writing emails, grading, finding classroom resources, and completing administrative work. These demands drive high stress levels and leave teachers at risk of burnout. Nearly half of K-12 teachers report chronic burnout, with 55 per cent considering early departure from the profession, creating a district-wide crisis that threatens both stability and student outcomes. Schools with an AI policy in place see a larger “AI dividend”: 2.3 hours saved per week per teacher, compared with 1.7 hours in schools without such a policy.
Despite the benefits of the “AI dividend,” only 32 per cent of teachers report using AI at least weekly, while 28 per cent use it infrequently and 40 per cent still are not using it at all. The most common uses are creating worksheets (33 per cent), modifying materials to meet students' needs (28 per cent), completing administrative work (28 per cent), and developing assessments (25 per cent). Teachers in schools with an AI policy are more likely to have used AI in the past year (70 per cent, versus 60 per cent in schools without one).
AI offers a potential solution, not by replacing teachers but by automating the tasks that drain their energy and time. Teachers report using AI to help with lesson plans, differentiate materials for students with varying needs, write portions of individualised education programmes, and communicate with families. Sixty-four per cent of surveyed teachers say the materials they modify with AI are better quality. Sixty-one per cent say AI has improved their insights about student performance, and 57 per cent say AI has led them to enhance the quality of their student feedback and grading.
Sal Khan, founder of Khan Academy and the driving force behind Khanmigo, has consistently framed AI as a teaching assistant rather than a replacement. “AI is going to become an amazing teaching assistant,” Khan stated in March 2025. “It's going to help with grading papers, writing progress reports, communicating with others, personalising their classrooms, and writing lesson plans.” He has emphasised in multiple forums that “Teachers will always be essential. Technology has the potential to bridge learning gaps, but the key is to use it as an assistant, not a substitute.” Khan invokes the historical wisdom of one-to-one tutoring: “For most of human history, if you asked someone what great education looks like, they would say it looks like a student with their tutor. Alexander the Great had Aristotle as his tutor. Aristotle had Plato. Plato had Socrates.”
This vision aligns with what researchers call the “hybrid model” of education. The World Economic Forum's Shaping the Future of Learning insight report highlights that the main impacts of AI will be in areas such as personalised learning and augmented teaching. AI lifts administrative burdens so that people in caring roles can focus on more meaningful tasks, such as mentorship. As the role of the educator shifts, teachers are moving from traditional content delivery to facilitation, coaching, and mentorship. The future classroom is not about replacing teachers but about redefining their role from deliverers of content to curators of experience.
Any serious discussion of AI in education must confront a troubling reality: the technology that promises to democratise learning may instead widen existing inequalities.
A 2025 analysis by the International Center for Academic Integrity warned that unequal access to artificial intelligence is widening the educational gap between privileged and underprivileged students. Students from lower-income backgrounds, those in rural areas, and those attending institutions with fewer resources are often at a disadvantage when it comes to accessing the technology that powers AI tools. For these students, AI could become just another divide, reinforcing the gap between those who have and those who do not. The disproportionate impact on marginalised communities, rural populations, and underfunded educational institutions limits their ability to benefit from AI-enhanced learning.
The numbers bear this out. Half of chief technology officers surveyed in 2025 reported that their college or university does not grant students institutional access to generative AI tools. More than half of students reported that most or all of their instructors prohibit the use of generative AI entirely, according to EDUCAUSE's 2025 Students and Technology Report. AI tools often require reliable internet access, powerful devices, and up-to-date software. In regions where these resources are not readily available, students are excluded from AI-enhanced learning experiences. Much current policy energy is consumed by academic integrity concerns and bans, which address real risks but can inadvertently deepen divides by treating AI primarily as a threat rather than addressing the core equity problem of unequal opportunity to learn with and from AI.
Recommendations from the Brookings Institution and other policy organisations call for treating AI competence as a universal learning outcome so every undergraduate in every discipline graduates able to use, question, and manage AI. They advocate providing equitable access to tools and training so that benefits do not depend on personal subscriptions, and investing in faculty development at scale with time, training, and incentives to redesign courses and assessments for an AI-rich environment. Proposed solutions include increased investment in digital infrastructure, the development of affordable AI-based learning tools, and the implementation of inclusive policies that prioritise equitable access to technology. Yet only 10 per cent of schools and universities have formal AI use guidelines, according to a UNESCO survey of more than 450 institutions.
The OECD Digital Education Outlook 2026 offers a more nuanced perspective. Robust research evidence demonstrates that inexperienced tutors can enhance the quality of their tutoring and improve student learning outcomes by using educational AI tools. However, the report emphasises that if AI is designed or used without pedagogical guidance, outsourcing tasks to the technology simply enhances performance with no real learning gains. Research also suggests that adaptive learning can advance educational equity, helping to redress socioeconomic disadvantage by making high-quality educational resources more evenly available.
North America dominated the AI in education market with a market share of 36 per cent in 2024, while the Asia Pacific region is expected to grow at a compound annual rate of 35.3 per cent through 2030. The United States AI in education market alone was valued at $2.01 billion in 2025 and is projected to reach $32.64 billion by 2034. A White House Executive Order signed in April 2025, “Advancing Artificial Intelligence Education for American Youth,” aims to unite AI education across all levels of learning. The US Department of Education has also released guidance supporting schools in using existing federal grants for AI integration.
The gap between student adoption and teacher readiness presents a significant challenge. According to Forbes, while 63 per cent of US teens use AI tools like ChatGPT for schoolwork, only 30 per cent of teachers report feeling confident using those same tools. This gap underscores the critical need for extensive support and AI training for all educators.
Experts emphasise the need for more partnerships between K-12 schools and higher education that provide mentorship, resources, and co-developed curricula with teachers. Faculty and researchers can help simplify AI for teachers, offer training, and ensure educational tools are designed with classroom realities in mind. AI's potential goes beyond that of past classroom technologies: rather than merely saving time, it promises to reshape how teachers manage their classrooms, automating the administrative load, personalising student support, and freeing teachers to focus on what they do best: teaching.
The key to successful AI integration is balance. AI has the potential to alleviate burnout and improve the teaching experience, but only if used thoughtfully as a tool, not a replacement. Competent, research-driven teachers are not going to be replaced by AI. The vision is AI as a classroom assistant that handles routine tasks while educators focus on what only they can provide: authentic human connection, professional judgement, and mentorship.
The evidence suggests neither a utopia of AI-powered learning nor a dystopia of displaced teachers. Instead, a more complex picture emerges: one in which artificial intelligence becomes a powerful tool that transforms rather than eliminates the human role in education.
By 2026, over 60 per cent of schools globally are projected to use AI-powered platforms. The United States has moved aggressively, with a White House Executive Order signed in April 2025 to advance AI education from K-12 through postsecondary levels. All 50 states have considered AI-related legislation. California's SB 243, signed in October 2025 and taking effect on 1 January 2026, requires operators of “companion chatbots” to maintain protocols for preventing their systems from producing content related to suicidal ideation and self-harm, with annual reports to the California Department of Public Health. New York's AI Companion Models law, effective 5 November 2025, requires notifications that state in bold capitalised letters: “THE AI COMPANION IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION.”
The PISA 2025 Learning in the Digital World assessment will focus on two competencies essential to learning with technologies: self-regulated learning and the ability to engage with digital tools. Results are expected in December 2027. Looking further ahead, the PISA 2029 Media and Artificial Intelligence Literacy assessment will examine whether young students have had opportunities to engage proactively and critically in a world where production, participation, and social networking are increasingly mediated by digital and AI tools. The OECD and European Commission's draft AI Literacy Framework, released in May 2025, aims to define global AI literacy standards for school-aged children, equipping them to use, understand, create with, and critically engage with AI.
The future English language classroom, as described by Oxford University Press, will be “human-centred, powered by AI.” Teachers will shift from traditional content delivery to facilitation, coaching, and mentorship. AI will handle routine tasks, while humans focus on what only they can provide: authentic connection, professional judgement, and the kind of mentorship that shapes lives. Hyper-personalised learning is becoming standard, with students needing tailored, real-time feedback more than ever, and AI adapting instruction moment to moment based on individual readiness.
“It is one of the biggest misnomers in education reform: that if you can give kids better technology, if you give them a laptop, if you give them better content, they will learn,” observed one education leader in a World Economic Forum report. “Children learn when they feel safe, when they feel cared for, and when they have a community of learning.”
AI tutors can adapt to a child's learning style in real time. They can provide feedback at midnight on a Saturday when no human teacher would be available. They can remember every mistake and track progress with precision no human memory could match. But they cannot inspire a love of learning in a struggling student. They cannot recognise when a child is dealing with problems at home that affect performance. They cannot model what it means to be curious, empathetic, and creative. While AI can automate some tasks, it cannot replace the human interaction and emotional support provided by teachers. There are legitimate concerns that over-reliance on AI could erode the teacher-student relationship and the social skills students develop in the classroom.
By combining the analytical power of AI with the irreplaceable human element of teaching, we can truly transform education for the next generation. Collaboration is the future. The most effective classrooms will combine human insight with AI precision, creating a hybrid model that supports personalised learning. With AI doing the busy work, teachers dedicate their time and energy to building confidence, nurturing creativity, and cultivating critical thinking skills in their students. This human touch and mentorship are invaluable and can never be fully replaced by AI.
The question is not whether AI will replace human teachers. The question is whether we will have the wisdom to use this technology in ways that enhance rather than diminish what makes education fundamentally human. As Sal Khan put it: “The question isn't whether AI will be part of education. It's how we use it responsibly to enhance learning.”
For now, the answer to that question remains in human hands.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk