Want to join in? Respond to our weekly writing prompts, open to everyone.
from
Iain Harper's Blog
Giving AI agents tools to work with has a messy reality. That is particularly true now that the Model Context Protocol (MCP) has become the default way to connect AI models to external tools. Adoption has happened faster than anyone expected, and faster than the security side could keep up.
This article is about what’s actually involved in deploying MCP servers safely. Not the general philosophy of agent security, but the specific problems you hit when you give Claude or ChatGPT access to your filesystem, your APIs, your databases. It covers sandboxing options, policy approaches, and the trade-offs each entails.
If you’re evaluating MCP tooling or building infrastructure for tool-using agents, this should help you better understand what you’re getting into.
Side note: MCP isn't the only game in town; OpenAI has native function calling, Anthropic has a tool-use API, and LangChain has tool abstractions. So yes, there are other approaches to tool integration, but MCP has become dominant enough that its security properties matter for the ecosystem as a whole.

The Model Context Protocol defines a client-server architecture for connecting AI models to external resources. The model makes requests via an MCP client. MCP servers handle the actual interaction with filesystems, databases, APIs, and whatever else. It’s a standardised way to say “I need to read this file” and have something actually do it.
MCP wasn’t designed to be enterprise infrastructure. Anthropic released it in November 2024 as a modest open specification. Then it kind of just exploded.
As Simon Willison observed in his year-end review, “MCP’s release coincided with the models finally getting good and reliable at tool-calling, to the point that a lot of people appear to have confused MCP support as a pre-requisite for a model to use tools.” By May 2025, OpenAI, Anthropic, and Mistral had all shipped API-level support within eight days of each other.
This rapid adoption created a problem. MCP specifies communication mechanisms but doesn’t enforce authentication, authorisation, or access control. Security was an afterthought. Authentication was entirely absent from the early spec; OAuth support only landed in March 2025. Research on the MCP ecosystem found more than 1,800 MCP servers on the public internet without authentication enabled.
Security researcher Elena Cross put it amusingly and memorably: “the S in MCP stands for security.” Her analysis outlined attack vectors, including tool poisoning, silent redefinition of tools after installation, and cross-server shadowing, in which a malicious server intercepts calls intended for a trusted server.
The MCP spec does say “there SHOULD always be a human in the loop with the ability to deny tool invocations.” But as Willison points out, that SHOULD needs to be a MUST. In practice, it rarely is.
These theoretical vulnerabilities have already been exploited; 2025 produced a steady stream of MCP incidents.
These aren’t sophisticated attacks; they’re basic security failures (command injection, missing auth, supply chain compromise) applied to a context where the consequences are amplified by what the tools can do.
What concerns me most isn’t any specific vulnerability. It’s the cultural dynamic emerging around MCP deployment.
Johann Rehberger has written about “the Normalisation of Deviance in AI”—a concept from sociologist Diane Vaughan’s analysis of the Challenger disaster.
The core insight: organisations that repeatedly get away with ignoring safety protocols bake that attitude into their culture. It works fine… until it doesn’t. NASA knew about the O-ring problem for years. Successful launches made them stop taking it seriously.
Rehberger argues the same pattern is playing out with AI agents:
“In the world of AI, we observe companies treating probabilistic, non-deterministic, and sometimes adversarial model outputs as if they were reliable, predictable, and safe.”
Willison has been blunter. In a recent podcast:
“I think we’re due a Challenger disaster with respect to coding agent security. I think so many people, myself included, are running these coding agents practically as root, right? We’re letting them do all of this stuff.”
That “myself included” is telling. Even people who understand the risks are taking shortcuts because the friction of doing it properly is high, and nothing bad has happened yet. That’s exactly how normalisation of deviance works.
So, how do you actually deploy MCP servers with some safety margin? The most direct approach is isolation. Run servers in environments where even if they’re compromised, damage is contained (the “blast radius”).
Containers provide basic isolation, but they share the host kernel. A container escape vulnerability therefore gives an attacker full host access, and container escapes do occur. For code you’ve written and audited, containers are probably fine. For anything else, they’re not enough.
gVisor implements a user-space kernel that intercepts system calls. The MCP server thinks it’s talking to Linux, but it’s talking to gVisor, which decides what to allow. Even kernel vulnerabilities don’t directly compromise the host.
The tradeoff is compatibility. gVisor implements about 70-80% of Linux syscalls. Applications that need exotic kernel features, such as advanced ioctls or eBPF, won’t work. For most MCP server workloads, this doesn’t matter. But you’ll need to test.
Firecracker, built by AWS for Lambda and Fargate, is the strongest commonly-available isolation. It offers full VM separation optimised for container-like speed. A Firecracker microVM runs its own kernel, completely separate from the host. So there is no shared kernel to exploit. The attack surface shrinks to the hypervisor, a much smaller codebase than a full OS kernel.
Startup times are reasonable (100-200ms), and resource overhead is minimal. Firecracker achieves this by being ruthlessly minimal. No USB, no graphics, no unnecessary virtual devices.
For executing untrusted or AI-generated code, Firecracker is currently the gold standard. The tradeoff is operational complexity. You need KVM support (bare-metal or nested virtualisation), different tooling than for container deployments, and more careful resource management.
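To make the operational side concrete, here is a minimal sketch of the kind of configuration a Firecracker microVM boots from, based on its documented config-file format. The kernel and rootfs paths are placeholders, and actually launching the VM (via the firecracker binary and its Unix-socket API on a KVM-capable host) is deliberately out of scope.

```python
import json

def firecracker_config(kernel: str, rootfs: str,
                       vcpus: int = 1, mem_mib: int = 256) -> str:
    """Build a minimal Firecracker --config-file payload as JSON.

    Paths are illustrative placeholders; a real deployment points them at a
    guest kernel image and an ext4 root filesystem image.
    """
    cfg = {
        "boot-source": {
            "kernel_image_path": kernel,
            # Serial console on ttyS0, no PCI: typical minimal boot args
            "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
        },
        "drives": [
            {
                "drive_id": "rootfs",
                "path_on_host": rootfs,
                "is_root_device": True,
                "is_read_only": True,  # immutable root shrinks the blast radius
            }
        ],
        "machine-config": {"vcpu_count": vcpus, "mem_size_mib": mem_mib},
    }
    return json.dumps(cfg, indent=2)

if __name__ == "__main__":
    print(firecracker_config("/path/to/vmlinux", "/path/to/rootfs.ext4"))
```

Marking the root drive read-only is one small example of shrinking the blast radius at the VM level, on top of the kernel separation Firecracker already provides.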
Many production setups use multiple isolation levels. Trusted infrastructure in standard containers. Third-party MCP servers under gVisor. Code execution sandboxes in Firecracker, with isolation directly aligned to the threat level.
Sandboxing handles what happens when things go wrong. Manifests try to prevent things from going wrong by declaring what each component should do.
Each MCP server ships with a manifest that describes the required permissions. This includes filesystem paths, network hosts, and environment variables. At runtime, a policy engine reads the manifest, gets user consent, and configures the sandbox to enforce exactly those permissions. Nothing more.
The AgentBox project works this way. A manifest might declare read access to /project/src, write access to /project/output, and network access to api.github.com. The sandbox gets configured with exactly that. If the server tries to read /etc/passwd or connect to malicious.org, the request fails, not because a gateway blocked it, but because the capability doesn’t exist.
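A minimal sketch of that enforcement model in Python. The `Manifest` class and its method names are hypothetical, not AgentBox’s actual API; the point is that every check is default-deny, so a capability that isn’t declared simply doesn’t exist.

```python
from dataclasses import dataclass
from pathlib import PurePosixPath

@dataclass(frozen=True)
class Manifest:
    """Hypothetical MCP-server manifest: everything not listed is denied."""
    read_paths: tuple = ()
    write_paths: tuple = ()
    network_hosts: frozenset = frozenset()

    def _under(self, path: str, roots: tuple) -> bool:
        # True if path equals, or lives beneath, any declared root
        p = PurePosixPath(path)
        return any(p == PurePosixPath(r) or PurePosixPath(r) in p.parents
                   for r in roots)

    def allows_read(self, path: str) -> bool:
        return self._under(path, self.read_paths + self.write_paths)

    def allows_write(self, path: str) -> bool:
        return self._under(path, self.write_paths)

    def allows_connect(self, host: str) -> bool:
        return host in self.network_hosts

# The manifest from the example above: read src, write output, talk to GitHub.
m = Manifest(
    read_paths=("/project/src",),
    write_paths=("/project/output",),
    network_hosts=frozenset({"api.github.com"}),
)
```

In a real deployment the sandbox (bind mounts, network namespace) would be configured from the same declarations, so the check isn’t a policy lookup at call time but an absence of capability.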
There are real advantages to this approach. Users see what each component requires before granting access. Suspicious permission requests stand out. The same server deploys across environments with consistent security properties.
Unfortunately, the problems are also real. Manifests can only restrict permissions they know about, so side channels and timing attacks may not be covered. Filesystem and network permissions are coarse.
A server that legitimately needs api.github.com might abuse that access in ways the manifest can’t prevent. And who creates the manifests? Who audits them? Still, explicit, auditable permission declarations beat implicit unlimited access, even if they’re imperfect.
This is something that gets missed in most MCP observability discussions. Logging “Claude created x.ts” is useful, but the harder problems show up when you ask why that action was permitted in the first place.
Teams get stuck when agent actions are logged after the fact, but aren’t tied to a durable execution state or policy context. You get perfect traces of what happened with no ability to answer why it was allowed to happen.
Current observability tooling (LangSmith, Arize, Langfuse, etc.) focuses on the “what happened” side. Every step traced, every tool call logged, every prompt inspectable. This is useful for debugging and cost tracking, but it doesn’t answer the security question “given the policy context at this moment, should this action have been permitted?”
A better pattern treats each agent step as an explicit execution unit, logged together with the policy decision that authorised it.
Your logs then answer not just what happened, but why it was allowed. When something goes wrong, you trace through the decision chain and see where policy should have intervened but didn’t.
This is harder than after-the-fact logging. It means integrating policy evaluation into the execution path rather than bolting observability on separately. But without it, you’re likely to end up doing forensics on incidents instead of preventing them.
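A sketch of what “policy in the execution path” can look like. The policy rule, version string, and field names here are illustrative inventions; the point is that the authorisation decision and the action’s effect land in the same log record.

```python
import time
from dataclasses import dataclass, asdict
from typing import Callable

@dataclass
class Decision:
    allowed: bool
    rule: str            # which policy rule fired
    policy_version: str  # which policy revision was in force

def read_only_policy(tool: str, args: dict) -> Decision:
    """Toy policy: write-tools are denied, everything else passes."""
    if tool.startswith("write_"):
        return Decision(False, "deny-writes", "v7")
    return Decision(True, "default-allow-reads", "v7")

def execute_step(tool: str, args: dict, impl: Callable,
                 log: list, policy=read_only_policy) -> dict:
    """Evaluate policy before the tool runs, then record decision + outcome together."""
    decision = policy(tool, args)
    record = {"ts": time.time(), "tool": tool, "args": args,
              "decision": asdict(decision)}
    # The tool only runs if the policy allowed it; either way, one record
    # answers both "what happened" and "why it was (dis)allowed".
    record["result"] = impl(**args) if decision.allowed else None
    log.append(record)
    return record
```

Because the decision is evaluated inline rather than reconstructed afterwards, an incident review can walk the log and see exactly which rule and policy version permitted each step.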
A more aggressive approach is to embed security controls directly into base images. Rather than a runtime policy, you construct images where certain capabilities don’t exist.
This is security through absence. An image without a shell can’t spawn a shell. Without network utilities, no data exfiltration can happen over the network. Without write access to certain paths, those paths can’t be modified, not because policy blocks the write, but because the filesystem capability isn’t there at all.
The appeal is that you’re not trusting a policy layer. You’re not hoping gVisor correctly intercepts the dangerous syscall. The capability simply doesn’t exist at the image level.
The tradeoffs (there are always tradeoffs!) are mostly operational. You’ll need separate base images for each security profile. Updates mean rebuilding, not reconfiguring. Granularity is limited, as you can remove broad capability categories but can’t easily express “network access only to api.github.com.”
For high-security deployments where operational complexity is acceptable, this approach provides a stronger foundation than runtime enforcement alone. For most teams, it’s probably overkill, but worth knowing about.

Several frameworks are emerging to standardise MCP security patterns.
SAFE-MCP (Linux Foundation / OpenID Foundation backed) defines patterns for secure MCP deployment, grounded in common failure modes where identity, intent, and execution are distributed across clients, servers, and tools.
The AgentBox approach targets MCP servers as the enforcement point, i.e., the least common denominator across agentic AI ecosystems. Securing MCP servers protects the interaction surface and shifts enforcement closer to the system layer.
For credentials specifically, the Astrix MCP Secret Wrapper wraps any MCP server to pull secrets from a vault at runtime. So no secrets are exposed on host machines, and the server gets short-lived, scoped tokens instead of long-lived credentials.
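The wrapper pattern is easy to sketch. Everything below (the stand-in vault call, the `MCP_TOOL_TOKEN` variable name) is hypothetical rather than the Astrix tool’s real interface; the point is that the child process receives a freshly minted short-lived token and never sees the long-lived secrets in the host environment.

```python
import os
import secrets
import time

def fetch_short_lived_token(vault_url: str, role: str) -> dict:
    """Stand-in for a vault call; a real wrapper would hit a vault or OAuth
    endpoint and receive a scoped token with an expiry."""
    return {"token": secrets.token_urlsafe(16),
            "expires_at": time.time() + 900}  # 15-minute lifetime

def wrap_mcp_server(cmd: list, vault_url: str, role: str) -> dict:
    """Build the child environment for an MCP server: the parent environment
    minus long-lived secrets, plus a freshly minted short-lived token."""
    cred = fetch_short_lived_token(vault_url, role)
    # Illustrative filter: strip anything that looks like a static secret
    env = {k: v for k, v in os.environ.items() if not k.endswith("_SECRET")}
    env["MCP_TOOL_TOKEN"] = cred["token"]  # hypothetical variable name
    return {"cmd": cmd, "env": env, "expires_at": cred["expires_at"]}
    # A real wrapper would now spawn cmd with env and re-mint on expiry.
```

The design point is that compromise of the MCP server yields a token that expires in minutes and is scoped to one role, not the credentials for everything on the host.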
None of these solves the fundamental problems. But they encode collective learning about what goes wrong and are worth understanding, even if you don’t need to adopt them wholesale.
MCP security in 2026 is a mess of emerging standards, competing approaches, and incidents that keep teaching us things we should have anticipated.
It’s like that box of Lego that mixes several original sets whose instructions are long gone. We have the pieces, and we sort of know what the thing we want to build should look like, but we’re just dipping into the jumbled box and piecing it together.
If I had to summarise:
Sandboxing works but costs something. gVisor and Firecracker provide real isolation. They also add operational weight. Match the isolation level to the actual threat.
Manifests help, but aren’t complete. Explicit permission declarations make the attack surface visible. They don’t prevent all attacks.
Observability needs policy context. Logging what happened isn’t enough. You need to know why it was allowed.
We’re probably going to learn some hard lessons. Too many teams are running MCP servers with excessive permissions, inadequate monitoring, and Hail Mary hopes that nothing goes wrong.
Organisations that figure this out will be able to give their agents more capability, because they can actually trust them with it. Everyone else will either hamstring their agents to the point of uselessness or find out the hard way what happens when highly capable tools meet insufficient or non-existent constraints.
from
EpicMind
Self-directed learning is regarded today as one of the key competencies. Our working world is marked by acceleration and densification, and personal responsibility keeps growing. At the same time, it often remains unclear how you should concretely structure your own learning process without getting lost in methods, tools, or well-meant advice. The FASTER model by Jim Kwik, a learning coach who mainly addresses a broad audience, offers a simple but not superficial frame of orientation. I read it less as a learning method in the narrow sense than as a heuristic that helps to consciously organise attention, action, and repetition.
FASTER is a six-step model that describes #Lernen (learning) not in terms of content but of process. At its centre is the idea that effective learning depends less on methods than on conscious decisions: what you direct your attention to, how actively you engage with the material, in what state you learn, how you anchor learning time in everyday life, and how you secure what you have learned. The model thus understands learning as a shapeable process that begins before the actual learning and ends only with deliberate review (Forget, Act, State, Teach, Enter, Review).
Self-directed learning does not mean doing everything alone. It means taking responsibility for the goals, approach, and evaluation of one’s own learning. The focus thereby shifts from instruction to the design of learning conditions. This is exactly where FASTER comes in. The model describes no content, but six decisions you can make before, during, and after learning. In this perspective, learning is not optimised but designed.
The first step calls for temporarily setting aside prior knowledge, distraction, and self-imposed limits. For self-directed learning this is central. If you learn with fixed assumptions about what you already know or cannot do, you significantly narrow your own learning range. The idea of deliberate forgetting corresponds to the concept of pre-testing. An open start that makes your own knowledge gaps visible fosters attention and readiness to learn more than the attempt to connect to supposedly familiar ground.
FASTER explicitly understands learning as an active practice. This matches well-supported findings from learning research. Strategies such as retrieval practice or elaboration show that retention succeeds above all when information is actively recalled, connected, and reformulated. For self-directed learning this means not limiting yourself to reading or listening, but deliberately working with the material. Activity here is not a bonus but a prerequisite.
One’s emotional and physical state influences how learning content is processed. This insight is not new, yet learners often ignore it. Self-directed learning therefore also demands self-awareness. If you learn without reflecting on your own state, you risk superficial processing. Mental replay, the deliberate inner rehearsal of learning content, shows how strongly emotion, attention, and memory are connected. FASTER makes this connection explicit without elaborating it theoretically. A conscious look at one’s own state lays the groundwork for later passing on what has been learned.
The FASTER model at a glance (own illustration, created with ChatGPT)
The “Teach” element takes up one of the most effective learning strategies: whoever can explain something has usually understood it. For self-directed learning this is particularly relevant, since external exams or feedback are often missing. The prospect of having to convey what you have learned to someone else forces structure, precision, and selection. Didactically, a close link can be drawn here to retrieval practice, complemented by elaboration: explaining means remembering and deepening at the same time. But for this step to succeed, everyday commitment is needed.
An often underestimated aspect of self-directed learning is its organisation in everyday life. FASTER addresses this soberly via the calendar. Learning time is not treated as a leftover but as a fixed commitment. The calendar entry makes the difference to a mere to-do list: it reserves time, creates commitment, and reduces the likelihood that other tasks get in the way. This perspective is unspectacular but realistic. Without a temporal structure, even the best learning intention comes to nothing. In practice, self-directed learning fails less from lack of motivation than from lack of planning. Scheduled time alone is not enough, though: what has been learned must be secured.
The last step points to spaced practice, i.e. distributed repetition. This counts as one of the most robust strategies for long-term retention. Crucially, repetition is not understood as passive re-reading but as active recall. That means: instead of rereading notes, you try to reconstruct what you have learned from memory, and only afterwards check it against the materials. Intervals of one day, one week, and one month after the first learning session have proved effective. FASTER deliberately stays open here, but offers a clear pointer: learning does not end with first understanding. For self-directed learning this insight is central, since learning processes are rarely externally paced.
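A tiny sketch turning those suggested intervals into concrete review dates (treating “one month” as 30 days, which is an assumption of this sketch):

```python
from datetime import date, timedelta

def review_dates(first_session: date) -> list[date]:
    """Spaced-practice schedule: active recall 1 day, 1 week, and ~1 month
    (here: 30 days) after the first learning session."""
    return [first_session + timedelta(days=d) for d in (1, 7, 30)]

print(review_dates(date(2025, 3, 1)))
# → reviews on 2025-03-02, 2025-03-08, 2025-03-31
```

Entering these three dates in the calendar combines the “Enter” and “Review” steps: the repetition is scheduled before motivation has a chance to fade.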
From an educational perspective, FASTER is not a complete model of self-directed learning. Questions of goal definition, success monitoring, or transfer remain largely excluded. The model presupposes that you know what you want to learn and why. This gap matters, but it does not diminish the practical value of the approach. FASTER does not set out to explain what learning is, but to offer orientation in the act of learning.
I understand the FASTER model as a practical heuristic for self-directed learning. It replaces neither didactic concepts nor scientific models, but it creates a clear frame for conscious learning decisions. Its strength lies in its focus on attention, activity, and repetition. Those who learn in a self-directed way will find no shortcut here, but a structured reminder of what matters.
My recommendation is therefore: use FASTER not as a method but as a checklist. Wherever learning stalls, it is worth looking at these six steps: Forget, Act, State, Teach, Enter, Review.
Literature: Jim Kwik (2021): Limitless. Wie du schneller lernst und dein Potenzial befreist. Gräfelfing: Next Level.
Image source: Jean-Étienne Liotard (1702–1789): Portrait de Marie-Adélaïde de France en tenue turque, Uffizi, Florence, public domain.
Disclaimer: Parts of this text were revised with Deepl Write (proofreading and editing). Google’s NotebookLM was used for research in the works/sources mentioned and in my notes. The overview graphic for the model was generated by ChatGPT (GPT-5.2) from my summary. Prompt: “Create an infographic from the following text, in the style of flipcharts used in trainings. Use only my text, and create the infographic in landscape format with a white background.”
Topics #Erwachsenenbildung | #Coaching
from drpontus
When we talk about “design tools,” we often mean software: Figma, Miro, prototyping platforms, and now AI-powered assistants. These tools promise efficiency and speed. But as Ivan Illich (1973) and Marshall McLuhan (1964) argue, tools are never neutral. They extend us — but also reshape us. They amplify capability, but they can also amputate forms of thinking we no longer practice.
Modern designers sit inside a tool ecosystem that subtly determines how they think, not just what they produce. Recognizing this is the first step toward reclaiming agency and imagination.
Ivan Illich, in his book Tools for Conviviality (1973), makes a powerful distinction between tools that amplify human autonomy and tools that invert control — becoming systems that the user must adapt to. Illich argues that tools have a threshold beyond which they stop serving people and start subordinating them.
At that point, tools become radical monopolies: infrastructures you must use because opting out means professional or social exile. Illich’s concern was not convenience, but autonomy.
Today’s proprietary LLM services and closed design ecosystems exhibit the traits Illich warned about: inverted control, forced adaptation, and no realistic option to opt out.
The speed is seductive, but Illich would say: “speed becomes a trap if it funnels us toward mediocrity.”
A convivial tool, in Illich’s language, is one that serves its user rather than forcing the user to serve it.
Pen and paper are convivial. A local sketching tool or a customizable design system may be convivial. A proprietary black-box AI? Mostly not.
Marshall McLuhan’s media theory (McLuhan, 1964) strengthens Illich’s warning. McLuhan famously wrote:
”Every technology is an extension of ourselves.”
The car extends the foot. The phone extends the voice. Figma extends the designer’s hand.
But less often quoted is McLuhan’s complement:
”Every extension also amputates part of ourselves.”
When we extend memory into cloud storage, we atrophy the ability to recall. When design tools automate layout, we risk dulling the intuition of composition. McLuhan invites us to ask:
• What capacities are extended?
• What capacities quietly shrink?
• What becomes unthinkable inside a given medium?
When a tool becomes ubiquitous, it becomes the environment. And when something becomes the environment, we stop noticing it — but it continues shaping us.
For today’s designers, those are the questions that need answering about their own tool ecosystem.
McLuhan’s insight:
”The logic of the medium becomes the logic of the mind.”
When LLM platforms set the default style, tone, and structure of design reasoning, the designer’s imagination risks becoming an echo of the system.
Juhani Pallasmaa expands this in The Thinking Hand. He argues that embodied cognition — thinking through gesture, pressure, texture, speed — is not primitive but foundational.
Digital tools erase resistance. They flatten differences. They tidy too quickly.
Pallasmaa says that the hand “thinks” in ways the conscious mind cannot.
When design begins in high-fidelity tools, that embodied, resistant thinking is bypassed before it can start.
This is why early-stage ideas should emerge from tools that introduce friction, not ones that algorithmically smooth it away.
Nigel Cross (2001) reminds us that design is not just visual production — it is a mode of inquiry, in which designers generate knowledge by making and manipulating representations of the problem.
But if the “material at hand” is a software environment that pre-selects which representations are even possible, then the mode of inquiry itself is altered.
Cross emphasizes that designers must maintain intentionality — choosing representations that allow them to think, not representations that tell them what to think.
When Illich, McLuhan, Pallasmaa, and Cross are placed together, a clear picture emerges:
Illich: Tools must remain within human scale and control — or they reshape us.
McLuhan: Every tool subtly rewires our cognition — extending and amputating.
Pallasmaa: Embodied, physical making produces deeper imagination than screen-first workflows.
Cross: Design knowledge emerges from intentional, exploratory representation — not templates.
Together, they form the intellectual backbone for the aware design professional:
Speed-driven tools funnel designers toward the average. Convivial, human-scale tools expand imagination and autonomy.
Choosing pen and paper, sketching, tangible prototyping, local tools, or open toolchains is not conservative nostalgia. It is convivial resistance — deliberately designing the conditions in which original thinking can occur.
As you reflect on your current tools, ask:
What aspect of my thinking is this tool extending?
What aspect is it amputating?
What assumptions does it normalize?
Does it encourage diversity or conformity?
Does it amplify my competence — or replace it?
Can I alter the tool — or must I adapt to it?
Does using this tool make me feel more autonomous — or more dependent?
Your goal is of course not to abandon digital tools — but to see them clearly, and to choose intentionally rather than habitually.
from
The happy place
There’s this pub just a stone’s throw away where my wife and I go sometimes for a Guinness, or, like yesterday, two.
It’s the new version of myself: a bakery-bread-eating city man with clipped toenails who does yoga. A type of artist with a wide stance and a lazy eye.
I love 🇫🇷 France!
I feel this metamorphosis where I am becoming the butterfly version of myself.
Have I become my father?
from
Talk to Fa
I was driving on winding mountain roads. I’d lowered the gear and opened the windows slightly to let the fresh air in. The trees had been burnt by the fire. The air was crisp. The fire must have brought a renewal to the area. As I slowed down to let a deer cross, I wondered whether people like me, natural explorers, travel to new places in search of our true homes and soul families. I recently read somewhere that most of us live in environments that are not suited for our unique designs. The deer stopped and looked straight into my eyes. I felt seen and somehow validated for the thought.
from
a.nihil
Two decades ago, logging onto the internet meant a world of possibilities. You could download music for free, chat rooms gave you the rush of talking to strangers across the world, and there was a grungy pleasure in surfing websites and finding weird corners of the internet that gave both you and the internet a personality. Back then, this “e”-world seemed like a novelty; mass adoption, a life where going without it was impossible, felt like a dream one could brush away. Then the first social networks emerged, followed by the iPhone and its many clones, then data so cheap that being plugged into the global economy now means being connected through four devices, while the whole experience of the internet revolves around a handful of platforms and a few apps.
It is fair that the initial wonder of the internet has faded into oblivion; it was too good to be true. Reaching economies of scale meant that standardization was inevitable, and standardization induces a feeling of sameness. Still, there remained the element of adding one’s own personal touches to the experience of the internet. With generative AI this has changed: the last frontier of personalization on the internet has been automated too, which means every post reads the same and every video looks like a copy of a copy of a copy.
Plus, why would anyone want to create on the internet anyway? The obvious incentive is money, which means certain norms have to be adhered to: you can’t be too left, can’t question the overlords or stir up trouble. And with the surveillance apparatus baked deep into the experience of the web, we’re just handing over more data points to allow ourselves to be policed. Generative AI also means that one’s content becomes fodder for the next model behind your favorite LLM, further diluting the incentive to create. So the best way to enjoy the internet is as a mute spectator, a consumer rather than a creator. The internet is killing itself, and us along with it, and we aren’t even noticing. Or maybe we aren’t supposed to.
#internet #deadinternet #AI
from
Happy Duck Art
An eyedea, which I don’t have the skill to bring to fruition yet, but I’m putting it here for the future, when maybe I will have more ability.

from
SmarterArticles

The stablecoin transaction that moved $2 billion from Abu Dhabi to Binance in May 2025 looked nothing like what the cypherpunks imagined when they dreamed of digital money. There were no anonymous wallets, no cryptographic rituals, no ideological manifestos. MGX, a sovereign wealth vehicle backed by the United Arab Emirates, simply wired funds denominated in USD1, a stablecoin issued by World Liberty Financial, a company affiliated with the family of the sitting United States President. The transaction settled on blockchain rails that neither party needed to understand or even acknowledge. The technology had become invisible. The revolution had been absorbed.
This moment crystallises the central tension now confronting the cryptocurrency industry as it enters what many are calling its institutional era. Stablecoins processed over $46 trillion in transactions during 2025, rivalling Visa and PayPal in volume. BlackRock's Bitcoin ETF surpassed $100 billion in assets under management, accumulating over 800,000 BTC in less than two years. The GENIUS Act became the first major cryptocurrency legislation passed by Congress, establishing federal standards for stablecoin issuers. Tokenised real-world assets reached $33 billion, with projections suggesting the market could hit $16 trillion by 2030. By every conventional measure, cryptocurrency has succeeded beyond its founders' wildest projections.
Yet success has arrived through a mechanism that would have horrified many of those founders. Crypto went mainstream by becoming invisible, as the a16z State of Crypto 2025 report observed. The technology that was supposed to disintermediate banks now powers their backend operations. The protocol designed to resist surveillance now integrates with anti-money laundering systems. The culture that celebrated pseudonymity now onboards users through email addresses and social logins. The question is whether this represents maturation or betrayal, evolution or erasure.
The economic evidence for the invisibility approach has become overwhelming. Stripe's $1.1 billion acquisition of Bridge in February 2025 represented the payments industry's first major acknowledgement that stablecoins could serve as mainstream infrastructure rather than speculative instruments. Within three months, Stripe launched Stablecoin Financial Accounts across 101 countries, enabling businesses to hold balances in USDC and USDB while transacting seamlessly across fiat and crypto rails. The blockchain was there, handling settlement. The users never needed to know.
This pattern has repeated across traditional finance. Visa partnered with Bridge to launch card-issuing products that let cardholders spend their stablecoin balances at any merchant accepting Visa, with automatic conversion to fiat happening invisibly in the background. Klarna announced plans to issue its own stablecoin through Bridge, aiming to reduce cross-border payment costs that currently total roughly $120 billion annually. The fintech giant would become the first bank to tap Stripe's stablecoin stack for blockchain-powered payments, without requiring its customers to understand or interact with blockchain technology directly.
BlackRock has been equally explicit about treating cryptocurrency as infrastructure rather than product. Larry Fink, the firm's chief executive, declared following the Bitcoin ETF approval that “every stock and bond would eventually live on a shared digital ledger.” The company's BUIDL fund, launched on Ethereum in March 2024, has grown to manage over $2 billion in tokenised treasury assets. BlackRock has announced plans to tokenise up to $10 trillion in assets, expanding across multiple blockchain networks including Arbitrum and Polygon. For institutional investors accessing these products, the blockchain is simply plumbing, no more visible or culturally significant than the TCP/IP protocols underlying their email.
The speed of this integration has astonished even bullish observers. Bitcoin and Ethereum spot ETFs accumulated $31 billion in net inflows while processing approximately $880 billion in trading volume during 2025. An estimated 716 million people now own digital assets globally, a 16 percent increase from the previous year. More than one percent of all US dollars now exist as stablecoins on public blockchains. The numbers describe a technology that has crossed from interesting experiment to systemic relevance.
The regulatory environment has reinforced this trajectory. The GENIUS Act, signed into law in July 2025, establishes stablecoin issuers as regulated financial entities subject to the Bank Secrecy Act, with mandatory anti-money laundering programmes, sanctions compliance, and customer identification requirements. Payment stablecoins issued under the framework are explicitly not securities or commodities, freeing them from SEC and CFTC oversight while embedding them within the traditional banking regulatory apparatus. The Act requires permitted issuers to maintain one-to-one reserves in US currency or similarly liquid assets and to publish monthly disclosure of reserve details. This is not the regulatory vacuum that early cryptocurrency advocates hoped would allow decentralised alternatives to flourish. It is integration, absorption, normalisation.
Against this backdrop of institutional triumph, a parallel ecosystem continues to thrive on explicitly crypto-native principles. Pump.fun, the Solana memecoin launchpad, has facilitated the creation of over 13 million tokens since January 2024, generating more than $866 million in lifetime revenue by October 2025. At its peak, the platform accounted for nearly 90 percent of all token mints on Solana and over 80 percent of launchpad trading volume. Its July 2025 ICO raised $1.3 billion in combined private and public sales, with the $PUMP presale hauling in $500 million in minutes at a fully diluted valuation of approximately $4 billion.
This is not infrastructure seeking invisibility. This is spectacle, culture, community, identity. The meme coin total market capitalisation exceeded $78 billion in 2025, with projects like Fartcoin briefly reaching $2.5 billion in valuation. These assets have no intrinsic utility beyond their function as coordination mechanisms for communities united by shared jokes, aesthetics, and speculative conviction. They are pure culture, and their continued prominence suggests that crypto's cultural layer retains genuine economic significance even as institutional rails proliferate.
The mechanics of attention monetisation have evolved dramatically. In January 2025, a single social media post about the $TRUMP token, launched through a one-click interface on Solana, generated hundreds of millions in trading volume within hours. This represented something genuinely new: the near-instantaneous conversion of social attention into financial activity. The friction that once separated awareness from action has been reduced to a single tap.
Re7 Capital, a venture firm that has invested in Suno and other infrastructure projects, launched a $10 million SocialFi fund in 2025 specifically targeting this intersection of social platforms and blockchain participation. As Luc de Leyritz, the firm's general partner, explained: “For the first time in five years, we see a structural opportunity in early-stage crypto venture, driven by the convergence of attention, composability and capital flows in SocialFi.” The thesis is that platforms enabling rapid conversion of social attention into financial activity represent the next major adoption vector, one that preserves rather than erases crypto's cultural distinctiveness.
Farcaster exemplifies this approach. The decentralised social protocol, backed by $150 million from Paradigm and a16z, has grown to over 546,000 registered users with approximately 40,000 to 60,000 daily active users. Its defining innovation, Farcaster Frames, enables users to mint NFTs, execute trades, and claim tokens directly within social posts without leaving the application. This is not crypto becoming invisible; this is crypto becoming the medium of social interaction itself. The blockchain is not hidden infrastructure but visible identity, with on-chain activities serving as signals of community membership and cultural affiliation.
The tension between these approaches has become central to debates about crypto's future direction. Vitalik Buterin, Ethereum's co-founder, addressed this directly in a New Year's message urging the community to focus on building applications that are “truly decentralised and usable” rather than “winning the next meta.” He outlined practical tests for decentralisation: Can users keep their assets if the company behind an application disappears? How much damage can rogue insiders or compromised front-ends cause? How many lines of code must be trusted to protect users' funds?
These questions expose the gap between infrastructure and culture approaches. Invisible blockchain rails, by definition, rely on intermediaries that users must trust. When Stripe converts stablecoin balances to fiat for Visa transactions, when BlackRock custodies Bitcoin on behalf of ETF holders, when Klarna issues blockchain-powered payments, the technology may be decentralised but the user experience is not. The cypherpunk vision of individuals controlling their own keys, verifying their own transactions, and resisting surveillance has been traded for convenience and scale.
To understand what is at stake requires revisiting cryptocurrency's ideological origins. Bitcoin was not born in a vacuum; it emerged from decades of cypherpunk research, debate, and experimentation. The movement's core creed was simple: do not ask permission, build the system. Do not lobby politicians for privacy laws; create technologies that make surveillance impossible. Every point of centralisation was understood as a point of weakness, a chokepoint where power could be exercised by states or corporations against individuals.
Satoshi Nakamoto's 2008 whitepaper directly reflected these principles. By combining cryptography, decentralised consensus, and economic incentives, Bitcoin solved the double-spending problem without requiring a central authority. The vision was censorship-resistant money that allowed individuals to transact privately and securely without permission from governments or corporations. Self-custody was not merely an option but the point. The option to be your own bank, to verify rather than trust, remained open to anyone willing to exercise it.
The cypherpunks were deeply suspicious of any centralised authority, whether government agency or large bank. They saw the fight for freedom in the digital age as a technical problem, not merely a political one. Privacy, decentralisation, self-sovereignty, transparency through open-source code: these were not just preferences but foundational principles. Any compromise on these fronts represented potential capture by the very systems they sought to escape.
The success and commercialisation of Bitcoin has fractured this inheritance. Some argue that compliance with Know Your Customer requirements, integration with regulated exchanges, and accommodation of institutional custody represents necessary compromise to bring cryptocurrency to the masses and achieve mainstream legitimacy. Without these accommodations, Bitcoin would remain a niche asset forever locked out of the global financial system.
For the purist camp, this represents betrayal. Building on-ramps that require identity verification creates a surveillance network around technology designed to be pseudonymous. It links real-world identity to on-chain transactions, destroying privacy. The crypto space itself struggles with centralisation through major exchanges, custodial wallets, and regulatory requirements that conflict with the original vision.
By 2025, Bitcoin's price exceeded $120,000, driven substantially by institutional adoption through ETFs and a maturing investor base. BlackRock's IBIT has accumulated holdings representing 3.8 percent of Bitcoin's total 21 million supply. This is not the distributed ownership pattern the cypherpunks envisioned. Power has concentrated in new hands, different from but not obviously preferable to the financial institutions cryptocurrency was designed to circumvent.
If invisible infrastructure represents one future and pure speculation another, decentralised social platforms represent an attempt at synthesis. Lens Protocol, launched by the team behind the DeFi lending platform Aave, provides a social graph enabling developers to build applications with composable, user-owned content. Running on Polygon, Lens offers creators direct monetisation through subscriptions, fees from followers, and the ability to turn posts into tradable NFTs. Top users on the protocol average $1,300 monthly in creator earnings, demonstrating that blockchain participation can generate real economic value beyond speculation.
The proposition is that social identity becomes inseparable from on-chain identity. Your follower graph, your content, your reputation travel with you across applications built on the same underlying protocol. When you switch from one Lens-based application to another, you bring your audience and history. No platform can deplatform you because no platform owns your identity. This is decentralisation as lived experience rather than backend abstraction.
Farcaster offers a complementary model focused on protocol-level innovation. Three smart contracts on OP Mainnet handle security-critical functions: IdRegistry maps Farcaster IDs to Ethereum custody addresses, StorageRegistry tracks storage allocations, and KeyRegistry manages application permissions. The infrastructure is explicitly on-chain, but the user experience has been refined to approach consumer-grade accessibility. Account abstraction and social logins mean new users can start with just an email address, reducing time to first transaction from twenty minutes to under sixty seconds.
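The division of labour among these three registries can be sketched in a few lines. The following is a toy, in-memory model for illustration only: the contract names come from the article, but the method names and logic are assumptions, not Farcaster's actual Solidity interfaces on OP Mainnet.

```python
# Toy model of Farcaster's three registry contracts, as described above.
# Names IdRegistry / StorageRegistry / KeyRegistry are from the protocol;
# everything else here is an illustrative simplification.

class IdRegistry:
    """Maps Farcaster IDs (fids) to Ethereum custody addresses."""
    def __init__(self):
        self._custody = {}   # fid -> custody address
        self._next_fid = 1

    def register(self, custody_address):
        fid = self._next_fid
        self._next_fid += 1
        self._custody[fid] = custody_address
        return fid

    def custody_of(self, fid):
        return self._custody[fid]


class StorageRegistry:
    """Tracks paid storage allocations (units) per fid."""
    def __init__(self):
        self._units = {}

    def rent(self, fid, units):
        self._units[fid] = self._units.get(fid, 0) + units

    def units_of(self, fid):
        return self._units.get(fid, 0)


class KeyRegistry:
    """Manages app signer keys authorised to act on behalf of a fid."""
    def __init__(self, id_registry):
        self._ids = id_registry
        self._keys = {}  # fid -> set of authorised signer keys

    def add_key(self, fid, caller, signer_key):
        # Only the custody address may authorise new application keys.
        if self._ids.custody_of(fid) != caller:
            raise PermissionError("caller does not custody this fid")
        self._keys.setdefault(fid, set()).add(signer_key)

    def is_authorised(self, fid, signer_key):
        return signer_key in self._keys.get(fid, set())


# Example: register an identity, rent storage, authorise an app key.
ids = IdRegistry()
storage = StorageRegistry()
keys = KeyRegistry(ids)

fid = ids.register("0xAlice")           # placeholder custody address
storage.rent(fid, 1)                    # one storage unit
keys.add_key(fid, "0xAlice", "appKey1")

print(fid)                                  # → 1
print(keys.is_authorised(fid, "appKey1"))   # → True
```

The design point the sketch captures is separation of concerns: identity, storage economics, and application permissions live in three independent contracts, so a compromised app key never touches custody of the identity itself.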
The platform's technical architecture reflects deliberate choices about where blockchain visibility matters. Storage costs approximately seven dollars per year for 5,000 posts plus reactions and follows, low enough to be accessible but high enough to discourage spam. The identity layer remains explicitly on-chain, ensuring that users maintain control over their credentials even as the application layer becomes increasingly polished.
The engagement metrics suggest these approaches resonate with users who value explicit blockchain participation. Farcaster's engagement rate of 29 interactions per user monthly compares favourably to Lens's 12, indicating a higher-quality community even with smaller absolute numbers. The platform recently achieved a milestone of 100,000 funded wallets, driven partly by USDC deposit matching rewards that incentivise users to connect their financial identity to their social presence.
Yet the scale gap with mainstream platforms remains vast. Bluesky's 38 million users dwarf Farcaster's half million. Twitter's daily active users number in the hundreds of millions. For crypto-native social platforms to represent a meaningful alternative rather than a niche experiment, they must grow by orders of magnitude while preserving the properties that differentiate them. The question is whether those properties are features or bugs in the context of mainstream adoption.
Stablecoins offer the clearest lens on how the invisibility thesis is playing out in practice. The market has concentrated heavily around two issuers: Tether's USDT holds approximately 60 percent market share with a capitalisation exceeding $183 billion, while Circle's USDC holds roughly 25 percent at $73 billion. Together, these two tokens account for over 80 percent of total stablecoin market capitalisation, though that share has declined slightly as competition intensifies.
Tether dominates trading volume, accounting for over 75 percent of stablecoin activity on centralised exchanges. It remains the primary trading pair in emerging markets and maintains higher velocity on exchanges. But USDC has grown faster in 2025, with its market cap climbing 72 percent compared to USDT's 32 percent growth. Analysts attribute this to USDC's better positioning for regulated markets, particularly after USDT faced delistings in Europe due to lack of MiCA authorisation.
Circle's billion-dollar IPO marked the arrival of stablecoin issuers as mainstream financial institutions. The company's aggressive expansion into regulated markets positions USDC as the stablecoin of choice for banks, payment processors, and fintech platforms seeking compliance clarity. This is crypto becoming infrastructure in the most literal sense: a layer enabling transactions that end users never need to understand or acknowledge.
The overall stablecoin supply hit $314 billion in 2025, with the category now comprising 30 percent of all on-chain crypto transaction volume. By August 2025, stablecoin transfer volume for the year had passed $4 trillion, the highest annual total to date and an 83 percent increase on the same period in 2024. Tether alone saw $10 billion in profit in the first three quarters of the year. These are not metrics of a speculative sideshow but of core financial infrastructure.
The emergence of USD1, the stablecoin issued by World Liberty Financial with Trump family involvement, demonstrates how completely stablecoins have departed from crypto's countercultural origins. The token reached $3 billion in circulating supply within six months of launch, listed on major exchanges including Binance and deployed on the Tron network. Its largest transaction to date, the $2 billion MGX investment in Binance, involved sovereign wealth funds, presidential family businesses, and what senators have alleged are suspicious ties to sanctioned entities. This is not disruption of financial power structures; it is their reconfiguration under blockchain labels.
The GENIUS Act's passage has accelerated this normalisation. By establishing clear regulatory frameworks, the legislation removes uncertainty that previously discouraged traditional financial institutions from engaging with stablecoins. But it also embeds stablecoins within the surveillance and compliance infrastructure that cryptocurrency was originally designed to escape. Issuers must implement anti-money laundering programmes, verify sanctions lists, and identify customers. The anonymous, permissionless transactions that defined early Bitcoin are not merely discouraged but legally prohibited for regulated stablecoin issuers.
Real-world asset tokenisation extends the invisibility thesis from currency into securities. BlackRock's BUIDL fund demonstrated that tokenised treasury assets could attract institutional capital at scale. By year-end 2025, the tokenised RWA market had grown to approximately $33 billion, concentrated in private credit and US Treasuries, which together account for nearly 90 percent of tokenised value. The market has grown fivefold in two years, crossing from interesting experiment to systemic relevance.
The projections are staggering. A BCG-Ripple report forecasts the tokenised asset market growing from $0.6 trillion to $18.9 trillion by 2033. Animoca Brands research suggests tokenisation could eventually tap into the $400 trillion traditional finance market. Franklin Templeton, Fidelity, and other major asset managers have moved beyond pilots into production-level tokenisation of treasury products.
For institutional investors, the value proposition is efficiency: faster settlement, lower costs, continuous trading availability, fractional ownership. None of these benefits require understanding or caring about blockchain technology. The distributed ledger is simply superior infrastructure for recording ownership and executing transfers. It replaces databases, not ideologies.
This creates an interesting inversion of the original cryptocurrency value proposition. Bitcoin promised to separate money from state control. Tokenisation of real-world assets brings state-sanctioned securities onto blockchain rails, with all their existing regulatory requirements, reporting obligations, and institutional oversight intact. The technology serves traditional finance rather than replacing it.
Major financial institutions including JPMorgan, Goldman Sachs, and BNY Mellon are actively engaging in real-world asset tokenisation. Banks treat blockchain not as novelty but as infrastructure, part of the normal toolkit for financial services. Fintech companies supply connective logic between traditional systems and decentralised networks. Stablecoins, once regarded as a temporary bridge, now operate as permanent fixtures of the financial order.
What emerges from this analysis is not a single trajectory but a bifurcation. Two distinct crypto economies now operate in parallel, occasionally intersecting but fundamentally different in their relationship to culture, identity, and visibility.
The institutional economy treats blockchain as infrastructure. Its participants include BlackRock, Fidelity, Stripe, Visa, JPMorgan, and the growing ecosystem of regulated stablecoin issuers and tokenisation platforms. Value accrues through efficiency gains, cost reductions, and access to previously illiquid assets. Users of these products may never know they are interacting with blockchain technology. The culture is that of traditional finance: compliance-focused, institution-mediated, invisible.
The crypto-native economy treats blockchain as culture. Its participants include memecoin traders, decentralised social network users, DeFi power users, and communities organised around specific protocols and tokens. Value accrues through attention, community formation, and speculative conviction. Users of these products explicitly identify with blockchain participation, often displaying on-chain activity as markers of identity and affiliation. The culture is distinctively countercultural: permissionless, community-driven, visible.
DeFi total value locked surged 41 percent in Q3 2025, surpassing $160 billion for the first time since May 2022. Ethereum led growth with TVL jumping from $54 billion in July to $96.5 billion by September. Aave became the largest DeFi lending protocol with over $41 billion in TVL, growing nearly 58 percent since July. Lido ranked second with nearly $39 billion in liquid staking deposits. These are substantial numbers, demonstrating that crypto-native applications retain significant capital commitment even as institutional alternatives proliferate.
The question is whether these economies can coexist indefinitely or whether one will eventually absorb the other. The institutional thesis holds that crypto-native culture is a transitional phenomenon, the early-adopter enthusiasm that accompanies any new technology before it matures into invisible utility. By this view, memecoin speculation and decentralised social experiments are the equivalent of early internet flame wars and personal homepage culture: interesting historical artefacts that give way to professionally operated services as the technology scales.
The counter-thesis holds that crypto-native culture provides irreplaceable competitive advantages. Community formation around tokens creates user loyalty that traditional products cannot match. On-chain identity enables new forms of coordination, reputation, and governance. The transparency of blockchain operations enables trustlessness that opaque corporate structures cannot replicate. By this view, invisible infrastructure misses the point entirely, stripping away the properties that make cryptocurrency distinctive and valuable.
The debate ultimately hinges on what one considers maturation. If maturation means achieving mainstream adoption, measurable in transaction volumes, market capitalisation, and institutional participation, then the invisibility approach has clearly succeeded. Stablecoins rival Visa in volume. Bitcoin ETFs hold hundreds of billions in assets. Regulated tokenisation platforms are processing institutional-scale transactions. By these metrics, cryptocurrency has grown up.
But maturation can also mean the development of distinctive capabilities rather than assimilation into existing paradigms. By this measure, invisibility represents not maturation but abandonment. The technology that was supposed to disrupt financial intermediation has instead been adopted by intermediaries. The protocol designed to resist censorship integrates with surveillance systems. The culture celebrating individual sovereignty has been absorbed into institutional custody arrangements.
Vitalik Buterin's tests for decentralisation offer a framework for evaluating these competing claims. The walk-away test asks whether users keep their assets if the company behind an application disappears. For BlackRock ETF holders, the answer is clearly no; they hold shares in a fund that custodies assets on their behalf. For self-custody Bitcoin holders, the answer is yes by design. The insider attack test asks how much damage rogue insiders or compromised front-ends can cause. Invisible infrastructure necessarily involves more trusted intermediaries and therefore more potential attack surfaces.
The trusted computing base question asks how many lines of code must be trusted to protect users. Institutional products layer complexity upon complexity: custody arrangements, trading interfaces, fund structures, regulatory compliance systems. Each layer requires trust. The original Bitcoin thesis was that you needed to trust only the protocol itself, verifiable through open-source code and distributed consensus.
Yet crypto-native applications are not immune from these concerns. DeFi protocols have suffered billions in losses through exploits, rug pulls, and governance attacks. Memecoin platforms like Pump.fun face class-action lawsuits alleging manipulation. Decentralised social networks struggle with spam, harassment, and content moderation challenges that their permissionless architecture makes difficult to address. The choice is not between trustless perfection and trusted compromise but between different configurations of trust, risk, and capability.
Perhaps the most honest assessment is that crypto culture will persist as aesthetic residue even as the technology becomes invisible infrastructure. Early-adopter communities will continue to celebrate on-chain participation as identity markers, much as vintage computing enthusiasts celebrate command-line interfaces in an era of graphical operating systems. The technical capability for self-custody and trustless verification will remain available to those who value it, even as the overwhelming majority of users interact through intermediated products that abstract away complexity.
This is not necessarily a tragedy. Other technologies have followed similar trajectories. The internet began as a countercultural space where early adopters celebrated decentralisation and resisted commercialisation. Today, most users access the internet through devices and services controlled by a handful of corporations, but the underlying protocols remain open and the option for direct participation persists for those motivated to exercise it.
The question is whether this residual option matters. If only a tiny fraction of users ever exercise self-custody or participate in decentralised governance, does the theoretical availability of these options provide meaningful protection against centralised control? Or does the concentration of practical usage in institutional channels create the same capture risks that cryptocurrency was designed to prevent?
The $2 billion stablecoin transaction from MGX to Binance suggests an answer that satisfies neither purists nor institutionalists. The technology worked exactly as designed: value transferred across borders instantly and irrevocably, settled on a distributed ledger that neither party needed to understand. But the participants were sovereign wealth funds and exchange conglomerates, the transaction enabled by presidential family connections, and the regulatory framework that of traditional anti-money laundering compliance. This is not what the cypherpunks imagined, but it is what cryptocurrency has become.
Whether that represents maturation or abandonment depends entirely on what one hoped cryptocurrency would achieve. If the goal was efficient global payments infrastructure, the invisible approach has delivered. If the goal was liberation from institutional financial control, the invisible approach has failed precisely by succeeding. The technology escaped the sandbox of speculation and entered the real world, but the real world captured it in return.
The builders who will succeed in this environment are likely those who understand both economies and can navigate between them. Stripe's acquisition of Bridge demonstrates that institutional players recognise the value of crypto infrastructure even when stripped of cultural signifiers. Pump.fun's billion-dollar raise demonstrates that crypto-native culture retains genuine economic value even when disconnected from institutional approval. The most durable projects may be those that maintain optionality: invisible enough to achieve mainstream adoption, crypto-native enough to retain community loyalty, flexible enough to serve users with radically different relationships to the underlying technology.
The original vision has not been abandoned so much as refracted. It persists in self-custody options that most users ignore, in decentralised protocols that institutions build upon, in cultural communities that thrive in parallel with institutional rails. Cryptocurrency did not mature into a single thing. It matured into multiple things simultaneously, serving different purposes for different participants, with different relationships to the values that animated its creation.
Whether the cultural layer remains competitive advantage or becomes mere nostalgia will be determined not by technology but by the choices users make about what they value. If convenience consistently trumps sovereignty, the invisible approach will dominate and crypto culture will become historical curiosity. If enough users continue to prioritise decentralisation, self-custody, and explicit blockchain participation, the cultural layer will persist as more than aesthetic. The technology enables both futures. The question is which one we will choose.
a16z crypto. “State of Crypto 2025: The year crypto went mainstream.” October 2025. https://a16zcrypto.com/posts/article/state-of-crypto-report-2025/
Re7 Capital. “The Future of Crypto is Social.” https://re7.capital/blog/the-future-of-crypto-is-social/
The Block. “Re7 Capital bets on SocialFi with a $10 million fund targeting around 30 startups.” 2025. https://www.theblock.co/post/352562/re7-capital-socialfi-fund-crypto
CNBC. “Stripe closes $1.1 billion Bridge deal, prepares for aggressive stablecoin push.” February 2025. https://www.cnbc.com/2025/02/04/stripe-closes-1point1-billion-bridge-deal-prepares-for-stablecoin-push-.html
Stripe Newsroom. “Introducing Stablecoin Financial Accounts in 101 countries.” 2025. https://stripe.com/blog/introducing-stablecoin-financial-accounts
The White House. “Fact Sheet: President Donald J. Trump Signs GENIUS Act into Law.” July 2025. https://www.whitehouse.gov/fact-sheets/2025/07/fact-sheet-president-donald-j-trump-signs-genius-act-into-law/
Morgan Lewis. “GENIUS Act Passes in US Congress: A Breakdown of the Landmark Stablecoin Law.” July 2025. https://www.morganlewis.com/pubs/2025/07/genius-act-passes-in-us-congress-a-breakdown-of-the-landmark-stablecoin-law
Business Wire. “World Liberty Financial's Stablecoin $USD1 Crosses $3 Billion in Market Capitalization.” December 2025. https://www.businesswire.com/news/home/20251225249806/en/World-Liberty-Financials-Stablecoin-USD1-Crosses-3-Billion-in-Market-Capitalization
CNBC. “Trump's World Liberty Financial jumps into stablecoin game with USD1 reveal.” March 2025. https://www.cnbc.com/2025/03/25/trumps-world-liberty-financial-jumps-into-stablecoin-game-with-usd1-reveal.html
The Block. “BlackRock's bitcoin ETF surpasses 800,000 BTC in assets under management after $4 billion inflow streak.” 2025. https://www.theblock.co/post/373966/blackrock-bitcoin-etf-ibit-800000-btc-aum
CoinDesk. “RWA Tokenization Is Going to Trillions Much Faster Than You Think.” February 2025. https://www.coindesk.com/opinion/2025/02/07/rwa-tokenization-is-going-to-trillions-much-faster-than-you-think
The Block. “Pump.fun surpasses $800 million in lifetime revenue as Solana memecoin launchpad competition heats up.” 2025. https://www.theblock.co/post/367585/pump-fun-surpasses-800-million-in-lifetime-revenue-as-solana-memecoin-launchpad-competition-heats-up
CoinDesk. “Vitalik Buterin: Ethereum at Risk If Decentralization Is Just a Catchphrase.” July 2025. https://www.coindesk.com/tech/2025/07/02/vitalik-buterin-ethereum-at-risk-if-decentralization-is-just-a-catchphrase
CryptoSlate. “10 stories that rewired digital finance in 2025 – the year crypto became infrastructure.” 2025. https://cryptoslate.com/10-stories-that-rewired-digital-finance-in-2025-the-year-crypto-became-infrastructure/
BlockEden. “Farcaster in 2025: The Protocol Paradox.” October 2025. https://blockeden.xyz/blog/2025/10/28/farcaster-in-2025-the-protocol-paradox/
Crystal Intelligence. “USDT vs USDC Q3 2025: Market Share & Dominance Analysis.” 2025. https://crystalintelligence.com/thought-leadership/usdt-maintains-dominance-while-usdc-faces-headwinds/
CoinDesk. “Tether and Circle's Dominance Is Being Put to the Test.” October 2025. https://www.coindesk.com/opinion/2025/10/11/tether-and-circle-s-dominance-is-being-put-to-the-test
The Defiant. “DeFi TVL Surges 41% in Q3 to Three-Year High.” 2025. https://thedefiant.io/news/defi/defi-tvl-surges-41-in-q3-to-three-year-high
PYMNTS. “Making Sense of Meme Coins, Digital Assets and Crypto's Future.” 2025. https://www.pymnts.com/cryptocurrency/2025/making-sense-meme-coins-digital-assets-crypto-future/
D-Central. “Bitcoin and the Cypherpunks – A Journey Towards Decentralisation and Privacy.” https://d-central.tech/bitcoin-and-the-cypherpunks/
World Economic Forum. “How will the GENIUS Act work in the US and impact the world?” July 2025. https://www.weforum.org/stories/2025/07/stablecoin-regulation-genius-act/
Andreessen Horowitz. “What Stripe's Acquisition of Bridge Means for Fintech and Stablecoins.” April 2025. https://a16z.com/newsletter/what-stripes-acquisition-of-bridge-means-for-fintech-and-stablecoins-april-2025-fintech-newsletter/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary:
* A good Thursday, this. Some of the extra chores managed earlier in the week have started yielding good fruit today. Little extras are already making improvements in my quality of life. Nice! And I anticipate more improvements over the days and weeks ahead.
Prayers, etc.:
* I have a daily prayer regimen I try to follow throughout the day, from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Health Metrics:
* bw = 218.59 lbs.
* bp = 147/88 (67)
Exercise:
* morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 06:15 – 1 banana, toast and butter * 09:30 – 1 peanut butter sandwich * 11:00 – 1 cheese sandwich * 13:40 – 4 hot dogs * 15:00 – snacking on nacho chips and hot sauce
Activities, Chores, etc.: * 04:30 – listen to local news talk radio * 05:50 – bank accounts activity monitored * 06:20 – read, pray, follow news reports from various sources, surf the socials, nap * 15:00 – listening to The Jack Riccardi Show * 18:30 – listening now to the Pregame Show ahead of tonight's women's college basketball game between my Indiana University Hoosiers and the Ohio State Buckeyes. Opening tip is approx half an hour away. GO HOOSIERS!
Chess: * 09:15 – moved in all pending CC games
from
Café histoire
Nothing out of the ordinary, just that on the way back from Châtel-St-Denis I take in the scenery of a very ordinary day, one where the mist makes its presence felt.

Tags : #AuCafé #photographie #sonyzve10 #viltrox35mm17
from
The happy place
I’ve been out getting some sunshine on me, but not much.
Tuesday's solar flare was obscured by heavy, snow-laden clouds.
I've been feeling similar to the aforementioned weather: a headache, some allergic itch deep inside the nose, nearer to the brain than to the nostril. At night I've been woken up by this itch, or by weird, unsettling dreams which I immediately forget upon waking, but which still fill me with a vague unease.
And I’ve been feeling conflicted, but where is my anger? — It too is covered by a big, gray cloud.
If only this metaphorical cloud of mine could release its rain somehow.
But I’ve done yoga. Fortunately I had my toenails clipped. This was a stroke of luck. Nay, a good omen! Because it had slipped my mind that you do this barefoot. I will give yoga one hundred tries before I decide whether to continue.
If I could find inner peace, it would be welcome.
Maybe the case is that I need less peace and more war, like the warrior pose, the something warrior.
It’s like this grey cloud is inside of my brain, you know? That’s where the itch comes from.
That must be the case.
Finally: the little black dog woke me up each morning by excitedly biting my nose. He’s such a great friend.
It’s the humours which have been unbalanced! Too much black bile, I’m sure of it!
Or nay maybe too little?
from
W1tN3ss
I get it. You don’t like him.
but here’s my dilemma also. He has a trap that needs to know how to remain quiet, but his financial policies are bar none the best the world has ever seen.
all data points to supporting this.
Also, why should I support abortion? I don’t believe in condoning abortion but rather, I believe in personal responsibility.
don’t have a baby in the first place.
simple concept.
folks have told me:
“You should listen and talk”
talk about what?
I don’t support abortion.
So the discussion doesn’t need to happen.
why must I pay for the poor judgment of others?
Personal responsibility should be taught in all schools, we would be way more prosperous.
and I get it, his trap (mouth) should be more responsible too.

#trump #abortion