from Unvarnished diary of a lill Japanese mouse

JOURNAL, 6 March 2026

The rain murmurs its sad song
The rooftops drip all their tears, drop by drop
Under the duvet it smells good
The sweet scent of girls fresh from the bath
It is late, tomorrow I must rise early
I am going to switch off the phone
We will hold hands and dive together into the black water of the night

 

from Holmliafolk

A woman stands along a road. A dog rises up toward her hand for a treat. It is early spring.

I have had two other dogs, a Saint Bernard and a Jack Russell. But Billy is the first one I have used as a tracking dog.

In theory, any dog can be trained to become a tracking dog. All three of my dogs, too. But in practice, Billy was the only one it suited. He loves using his nose and is good at it.

Billy and I are an approved search team for Nitrogruppa. We train a couple of times a week with Nitrogruppa and occasionally with Norsk Redningshund Organisasjon. And now and then we are called in to search for dogs that have gone missing. I then contact the owner to get a “clean” scent of the dog, such as a blanket or a collar that only that dog has been near, and Billy tracks that scent and rules out all others.

This winter we found a puppy that had been on the run for a day in unfamiliar terrain. I find it very rewarding to be able to help. And it seems Billy does too.

 

from Theory of Meaning

Humanity

The point is that you learn to see the humanity in the human being and that you don’t forget to see it. Until the last moment, right down to his last breath, and even in an insane person, and even in a person who is a hardened criminal, humanity remains.

Frankl, V. E. (2024). Embracing Hope: On Freedom, Responsibility & the Meaning of Life (English ed., Kindle). p. 6.

 

from wystswolf

how do you escape yourself? Beer. Lots of feckin’ beer.

I am here.
And I left
And went there.

I hated it too.
So I packed my bag
And sent my arse third

And it was worse than the first.
Same after same
Wherever I went

Was the specter
Of me
The shadow

Violet violence
In a heart made for
Love but doomed

To
Be
Alone

 

from Crónicas del oso pardo

He began to notice that when he raised a finger to emphasize some point in conversation, that something went awry. To put it plainly: if he said, raising his finger, “I am a person of integrity,” he would veer straight to the opposite pole and tell some lie, and although at the moment it seemed fine to him, his conscience would reproach him: “Aren't you ashamed of lying?” it would say. And although he tried to pay it no attention, he would later flagellate himself in nightmares.

One day he was told a Chinese tale in which the master cut off a disciple's finger. Although he did not understand it well, that night, recalling the business of the finger, something resonated inside him.

So it was that on Saturday he went to a second-hand bookshop where, for little money, he came away with an excellent book of Chinese tales. He checked the index and, looking page after page, found no story about the finger.

Then he went back to his friend, asked him to repeat the story, and on hearing it, pointing at him with his finger, he said:

“Yes, yes, I already knew that one.”

And the friend cut off one of his ears.

“Ow, ow, that's not how the tale goes,” he reproached his friend.

And the other, folding the knife shut, said:

“Now you will never forget it.”

 

from EpicMind

Thorvald Erichsen: Jorde skriver hjem

“That's not writing, that's typing.” With this barbed remark, Truman Capote is said to have once commented on the prose of his colleague Jack Kerouac. The remark was meant polemically, yet it hits a nerve that is still sensitive today: does the tool we write with also change the way we think? My answer is yes. And we systematically underestimate this influence.

When a finger presses a key, little of neural interest happens. Every key produces the same movement: down, then back. The brain quickly switches to autopilot. Handwriting works differently: every letter must be actively formed, the hand moves in changing directions, eye and motor system work closely together. EEG measurements of twelve-year-olds and adults show that this activates brain regions associated with learning, memory formation, and sensory integration, and markedly more strongly than typing does [1]. Writing by hand is not an obsolete detour. It is a cognitively dense activity.

This density has consequences. Someone taking notes in a lecture can capture on a keyboard almost word for word what is said, while processing hardly any of it. Someone writing by hand must select, condense, and rephrase. The pen forces slowness, and slowness forces thinking. Studies show that handwritten notes lead to better conceptual understanding than typed ones, although, or precisely because, they are shorter [2]. The same holds for children learning to read and write: those who actively write letters develop the brain structures later needed for reading faster and more stably than those who merely tap them [3]. The hand teaches the eye to see.

The hand teaches the eye to see

Now one might object: we have heard all this before. When the typewriter entered offices and newsrooms, the philosopher Martin Heidegger complained that it severed the immediate connection between hand and thinking. The machine won anyway, and literature survived. In fact, it gave rise to new forms of expression, such as the typographic experiments of the avant-garde. New tools do not simply displace older ones; they shift what is possible with them. But this finding is no acquittal for the keyboard. It is a warning: whoever assumes the tool is neutral is mistaken.

Handwriting, moreover, is more than a cognitive instrument. It is individual. Two people can formulate the same sentence, but their handwriting will make it appear differently, will betray tempo, pressure, and mood. Letters, diaries, and handwritten manuscripts convey not only content but a bodily trace of their author. Digital text is typographically uniform. For many purposes that is an advantage. Yet something is lost along the way: the visibility of the thinker behind the thought.

This does not mean condemning the keyboard. It is indispensable for producing, editing, and distributing texts. Anyone composing an article, a document, or an email today rightly thinks with their fingers on the keys. But not all writing is the same writing. The keyboard optimizes for speed and volume. The hand optimizes for depth and processing. Whoever conflates the two understands neither properly.

Back to Capote. What makes his verdict on Kerouac interesting is not just the punchline; it is the speaker. Capote himself typed. He worked for years at the typewriter, later at the computer. And still he wrote. His objection was not to the tool as such but to the attitude behind it: writing without a will to form, without selection, and without slowing down. The “keyboard clatter” he accused Kerouac of was not a technical judgment. It was an aesthetic one, and a cognitive one.

Handwriting, in this sense, is no sentimental reminiscence of school fountain pens and ink stains. It is a practice of thinking that the digital age has made not obsolete but more urgent. Those who write, think. And those who write by hand think, the findings suggest, often more clearly and more deeply, but also more slowly. That slowness, however, is not a flaw but a method.

Capote was wrong about Kerouac. But the question his gibe raises remains valid: are we writing, or merely typing?




Sources
[1] E. O. Askvik, F. R. van der Weel, and A. L. H. van der Meer, “The importance of cursive handwriting over typewriting for learning in the classroom: A high-density EEG study of 12-year-old children and young adults,” Frontiers in Psychology, vol. 11, art. 1810, 2020, doi: 10.3389/fpsyg.2020.01810.

[2] P. A. Mueller and D. M. Oppenheimer, “The pen is mightier than the keyboard: Advantages of longhand over laptop note taking,” Psychological Science, vol. 25, no. 6, pp. 1159–1168, 2014, doi: 10.1177/0956797614524581.

[3] K. H. James and I. Gauthier, “Letter processing automatically recruits a sensory-motor brain network,” Neuropsychologia, vol. 44, no. 14, pp. 2937–2949, 2006, doi: 10.1016/j.neuropsychologia.2006.06.028.

Image source: Thorvald Erichsen (1868–1939): Jorde skriver hjem. Vestre Gausdal. Kunstmuseum Lillehammer, public domain.

Disclaimer: Parts of this text were revised with DeepL Write (proofreading and copy-editing). Google's NotebookLM was used for research in the works/sources mentioned and in my notes.

Topic #Erwachsenenbildung | #Selbstbetrachtungen

 

from Kroeber

#002297 – 12 September 2025

“I don't get all choked up about yellow ribbons and American flags. I consider them symbols and I leave them for the symbol-minded”.

George Carlin

 

from Insomnia, Annotated

Written by: Epikurus
February 28, 2026, 03:28 AM

Introduction.

What is Truth? At first glance, the question seems simple. Yet the deeper we go, the more complicated it becomes. We live in a world saturated with information—news alerts, social media takes, political narratives, religious interpretations—and yet clarity often feels elusive. To think clearly, we must first distinguish between facts, beliefs, and opinions, and then ask how these relate to truth itself.
This distinction becomes especially important when we move into complex domains such as religion, politics, and media, where measurable realities intersect with unseen meanings and value judgments.

Defining Truth.

Truth can be defined as how things actually are, whether we like it or not, and whether we perceive it clearly or not. Truth is not determined by consensus or comfort. It simply is.
This aligns with what philosophers call the “correspondence theory of truth”—the idea that a statement is true if it corresponds to reality (Stanford Encyclopedia of Philosophy, 2021).
Literature echoes this idea. In The Sign of Four, written by Arthur Conan Doyle, Sherlock Holmes declares:
“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
This statement captures the practical side of truth-seeking: eliminate what cannot be, and what remains—however uncomfortable—must correspond to reality.

Fact, Belief, and Opinion: Clear Distinctions.

A fact is something true independent of what anyone thinks about it. Facts can be checked, tested, or measured. For example:
  • “There is a book called Daniel in the Bible.”
  • “In Daniel 10, there is a story describing an angel delayed for 21 days by the ‘prince of Persia.’”
  • “The Earth orbits the sun.”
These claims are verifiable. One can consult a Bible, examine astronomical data, or analyze textual records. Facts exist whether or not we agree with them.
A belief, by contrast, is something a person accepts as true. It may correspond to a fact, or it may not. Beliefs live in the mind. Examples include:
  • “I believe my spouse loves me.”
  • “I believe this politician is honest.”
  • “I believe spiritual beings influence human history.”
Beliefs are not always directly measurable. They are often grounded in trust, reasoning, experience, or interpretation.
An opinion is a value-laden belief. It involves judgments about what is good or bad, better or worse, wise or foolish. For instance:
  • “This war is unjust.”
  • “This interpretation of spiritual warfare is healthier.”
  • “Framing politics as spiritual warfare is harmful.”
Opinions add evaluation. They move from what is to what ought to be.

The Problem of Access.

The difficulty is that humans never encounter “raw” truth directly. We receive information through:
  • Limited and biased senses
  • Other humans (who are also limited and biased)
  • Our own experiences, fears, and expectations
Thus, in practice, our task is not to grasp absolute truth in its fullness, but to ask:
Given what I can see, test, and cross-check, what is most likely true right now?
This epistemic humility is essential, particularly in domains that extend beyond direct measurement.

Spiritual Truth and the Limits of Measurement.

Consider the Book of Daniel, specifically chapter 10.
We can anchor ourselves in facts:
  • The text exists.
  • It contains a narrative of spiritual conflict.
  • Religious traditions interpret it as spiritual warfare.
Beyond that, we enter belief:
  • That these events occurred in unseen reality.
  • That spiritual beings influence human affairs.
  • That prayer interacts with those realities.
These cannot be tested in a laboratory. They are accepted or rejected through theological reasoning, trust in scripture, and personal experience.
And lastly, opinion:
  • “This view of spiritual warfare is comforting.”
  • “This interpretation makes God seem more loving.”
  • “This framework is psychologically healthier than doom-centered preaching.”
These statements express value judgments, not measurable claims.
Truth in this spiritual domain would mean: What is actually happening in unseen reality? Yet by definition, such matters cannot be directly measured. Thus, responsible engagement requires anchoring in verifiable facts, recognizing one’s beliefs, and distinguishing evaluative opinions layered on top.
Clarity comes from distinguishing layers, not collapsing them.

Politics, Demons, and Interpretive Lenses.

Take the belief: “Demons influence politics.”
Throughout history, people have interpreted political events through spiritual frameworks. The belief that demonic forces influence politics is not new; it appears across centuries of religious thought.
Fact:
  • Historical records show recurring patterns: propaganda, dehumanization, corruption, cycles of war. However, political outcomes demonstrably involve money, incentives, institutions, and psychology.
Belief:
  • Unseen spiritual forces may exploit human weaknesses to shape events. These claims are accepted on theological, experiential, or interpretive grounds, but they are not testable like a ballot count.
  • That prayer resists such influence is likewise a belief, grounded in faith and anecdotal patterns rather than laboratory proof.
Opinion:
  • Viewing politics as spiritual warfare is either motivating or distracting. Such framing is either healthy or harmful.
Here, the key is not to collapse layers. Human-level explanations—greed, fear, trauma, ambition—account for much political behavior. A spiritual explanation may function as a metaphysical interpretation layered on top. It becomes dangerous when it replaces accountability or empirical reasoning.
A healthy approach maintains both levels:
  • Use facts for civic decisions (voting, policy evaluation, community action).
  • Hold spiritual beliefs as interpretive frameworks for meaning and prayer.

Practical Stress-Testing of Beliefs.

If one holds the belief that demons influence politics, a responsible approach includes testing its practical implications.
  • Ask what observable effects the belief predicts. If demons influence politics, one might expect coordinated deception, sudden moral collapses in leaders, or patterns that repeat despite rational explanation. Those are testable as patterns even if not as direct proof of spirits.
  • Compare explanations. Does a spiritual explanation add something that greed, fear, trauma, and power dynamics do not already explain? If yes, it might be doing real explanatory work; if no, it might just be a metaphor.
  • Keep both levels. Treat spiritual claims as interpretive frameworks for prayer and meaning, while relying on human-level facts for civic decisions.
A healthy version of this belief does not erase human responsibility. It says:
  • Demons exploit human sin, trauma, and greed.
  • Humans still choose.
  • Systems still have accountability.
  • You still have agency over where you give your attention, money, and rage.

Modern Patterns and Two-Layer Reading.

Consider recurring patterns in modern history:
  • Mass dehumanization waves through propaganda.
  • Leaders who shift from normal governance to paranoia and cruelty.
  • Repeating cycles across nations: corruption, scapegoating, war drums, collapse.
On a purely human level, psychology, power, and money explain much of this.
On a spiritual level, some interpret these as evil exploiting human weakness— “something riding those weaknesses and steering them.”
This creates a two-layer reading:
  • Layer 1 (Facts): money flows, laws, propaganda, incentives.
  • Layer 2 (Belief): spiritual forces egging those weaknesses on.
The danger lies in collapsing Layer 1 into Layer 2 and abandoning responsibility.
Modern news often blends reporting with interpretation. Consider outlets such as:
  • The Guardian
  • NPR
  • The New York Times
  • The Wall Street Journal
  • Washington Examiner
  • The Spectator
  • Reuters
  • Associated Press
A practical method for clarity:
  1. Assume everyone is selling something. Information often aims to move emotions or loyalties.

  2. Separate the layers:

    • FACTS: dates, numbers, quotes, concrete actions.
    • INTERPRETATION/BELIEF: “This proves X is evil.”
    • OPINION/EMOTION: fear, disgust, outrage, tribal loyalty.
  3. Triangulate. Compare sources with different leanings. Trust overlapping, boring facts. Treat the rest as interpretation.

  4. Add time and distance. First takes are often the noisiest. Slower, sourced reporting tends to clarify.

What survives across ideological lines is more likely factual. What differs is often interpretation or opinion.

Daily Choices Under a Spiritual & Practical Lens.

If one believes politics has a spiritual dimension, daily practice should not look like obsession. It should look like grounded discipline:
  • Be suspicious of content demanding instant outrage or dehumanization.
  • Ask, “Who profits if I’m terrified or furious right now?”
  • Guard your inputs. Limit doom-scrolling. Prefer long-form analysis over hot-take reels.
  • Aim small and local. Treat co-workers, patients, and family members with integrity.
  • Pray and act—not pray instead of act.
  • Refuse to outsource responsibility to demons.
Belief should not produce paralysis. It should produce steadiness. If truth corresponds to reality, then emotional frenzy is often an obstacle to perceiving it clearly.
A mature stance holds:
  • Humans remain morally responsible.
  • Systems require accountability.
  • Individuals retain agency in how they allocate attention, time, and action.
It’s reasonable to hold that demons could influence politics as a belief that helps make sense of recurring moral rot, but one should separate that from the hard facts about how power works and use both lenses – spiritual for meaning and practical for action. Thus, even if one believes spiritual forces operate in the world, daily life still centers on concrete actions: loving one’s family, practicing integrity at work, engaging locally, and contributing constructively. Belief in spiritual warfare does not require surrendering one’s cortisol to every headline.

Conclusion.

Truth is not whatever feels persuasive, comforting, or viral. It is how things actually are. Facts describe measurable reality. Beliefs interpret that reality. Opinions evaluate it.
Confusion arises when we blur these categories—treating beliefs as facts, or opinions as truths. Clarity emerges when we consciously separate them.
In religious and political life alike, wisdom lies not in abandoning belief, nor in pretending certainty where none exists, but in disciplined humility: anchoring in what can be verified, acknowledging interpretive layers, and refusing to let emotional manipulation replace careful reasoning.
Truth may remain partially hidden, and we may never grasp truth in its fullness. But by carefully distinguishing fact, belief, and opinion, we can move toward it with humility—and live sanely in the meantime.
Manipulation thrives on speed and emotion, not careful thought.

Sources.

Conan Doyle, A. (1890). The Sign of Four. London: Spencer Blackett.
Stanford Encyclopedia of Philosophy. (2021). “Truth.” Retrieved from https://plato.stanford.edu/entries/truth/
The Holy Bible, Book of Daniel, Chapter 10.
 

from Dallineation

You might have noticed I've been thinking a lot about music lately. Taking a break from Twitch has allowed me to sort of “musically reset” as I have tried to focus on listening to music that elevates the soul and draws me closer to God. It is reawakening a long-dormant part of myself that once loved to play musical instruments and create my own music.

Early in my life, I wanted to pursue a career in music education. I played clarinet and tenor saxophone in concert bands, ensembles, jazz bands, jazz combos, etc. throughout my entire secondary and post-secondary education and I wanted to do nothing else. But life took me down a different path.

This morning I remembered a project I started about 22 years ago. I was serving as an executive secretary in the bishopric of the Mesa College Third Ward (a student ward at Mesa Community College in Mesa, AZ). In a bishopric meeting, Bishop Burton mentioned that one of his favorite hymns was “Come unto Him” (#114 in the green hymn book) but that he wished someone would set the lyrics by Theodore E. Curtis to different music. I never forgot that comment and thought I might take a stab at setting this beautiful text to a composition of my own.

(1) I wander through the still of night, When solitude is ev’rywhere— Alone, beneath the starry light, And yet I know that God is there. I kneel upon the grass and pray; An answer comes without a voice. It takes my burden all away And makes my aching heart rejoice.

(2) When I am filled with strong desire And ask a boon of him, I see No miracle of living fire, But what I ask flows into me. And when the tempest rages high I feel no arm around me thrust, But ev’ry storm goes rolling by When I repose in him my trust.

(3) It matters not what may befall, What threat’ning hand hangs over me; He is my rampart through it all, My refuge from mine enemy. Come unto him all ye depressed, Ye erring souls whose eyes are dim, Ye weary ones who long for rest. Come unto him! Come unto him!

Over the years I have composed new music for this hymn on the piano, but I have never made a serious effort to arrange it or write it down.

This morning, out of the blue, this project came to my mind once again and I decided it was time to get these ideas out of my head and onto print. And also GarageBand (Apple's free music creation software). I even started thinking of some new musical ideas I could work into it.

Rather than compose it as a straightforward hymn, I want to make it a choral arrangement. This has all come about because of my recent discovery of VOCES8, the angelic-sounding vocal octet from the UK. I can't get enough of their music these days. And I've been imagining writing my own music for a group like them.

I'm not saying my version of “Come Unto Him” will ever be sung by any group, let alone the likes of VOCES8.

But I have been reminded of how integral music has been to my life and to my understanding of and faith in God. For me, music is one of the most beautiful of all of God's creations, and I believe He delights in good and sacred music. I can't imagine a Heaven without music.

I'm excited to see how this composition turns out. I'll be sure to share it with you when I finish it.

#100DaysToOffload (No. 146) #faith #Lent #music

 

from targetedjaidee

Creating new normals.

What does that mean to you?

To me it means that I will be breaking toxic cycles within my family & creating healthy new “norms” for generations to come.

I found a daily inventory book I had written in, back in 2022. And wow. What was written in it, to be reading it now? It all makes sense, and how God's timing is perfect, and on time. It really blows my mind. I was writing about how I felt about my family; I had written about having a good experience with my mother and how grateful I was.

In reading these entries, I noticed that my relationship with my parents has always been tumultuous. I want healthy relationships with my children. I want a healthy family dynamic with my family, not my family. That is dead and gone, unfortunately. However, I have come to the conclusion that I am to let go & let God. This whole time I have been wanting such justice & order, but God has worked everything out for my own good. He is healing me in ways I never could've imagined. I am beyond grateful.

My experiences as TI have brought me to my knees numerous times, but I am grateful for the greater connection to God. So to my gangstalkers, the people in on this? Thank you. You cannot ruin what God has ordained.

Jaide owwt*

 

from 下川友

0

About a year ago, the city where I live came to be governed by a “fairy folk” whose appearance closely resembles that of humans. They are called the Rusai. Although they carry themselves as beings above humans, they never make people obey through violence or coercion.

If anything, they hardly ever show themselves. Even so, the city's small rules and everyday manners, before anyone noticed, quietly changed shape to suit the fairies' sensibilities.

Life itself has not changed much. It is only our senses that seem to go about daily life wrapped in a color or texture we never originally had: a faint unease, like a thin film, as if a sheet of pale yellow construction paper had been gently pasted onto the brain.

1

As a child, I often put cans up on a high shelf and knocked them down on purpose with my elbow. When the dull metallic sound rang against the floor, someone's gaze would always turn my way. That alone left me oddly satisfied. Looking back, I suppose I simply wanted the attention of adults.

Each time I recall that memory, it troubles me a little. Because the me of today can take almost no interest in other people's children. When I see someone's child running around in a park, for some reason sweat beads inside my nose. It feels as though a small alarm goes off deep inside my body. And yet, I think, I was once a child who craved attention just like that.

When I think about such things, I sometimes suddenly remember what experiences I am made of.

For instance, I once got to board a jet on nothing but a verbal promise. Someone said, “It's fine,” and I said, “Well then.” That alone actually got me aboard. It was a strange success to have had. Ever since, I have come to think that people are surprisingly willing to be satisfied by the mere shape of a situation. If I had been a person of an earlier age, then when someone issued a command from the top of an observation deck, whatever its content, I would surely have believed it a proper command.

One day, a police officer was saying to someone:

“A department store is something to be enjoyed from the outside.”

I overheard that exchange in passing and thought what a strange fellow was guarding the city.

The air of this city is somehow off.

On my way home that day, a young man in a suit cut across my path.

“I can stain a suit correctly.”

He was saying such a thing to no one in particular. His was the voice of someone in his first year of working life, trying out words he had just learned somewhere.

While walking along, pushed by the flow of people, I noticed that at some point the meeting place had changed. What are we all gathering for, anyway?

It should originally have been the square on the near side, yet the crowd was drifting naturally toward the far bank. Bewildered, I headed there at a brisk walk like everyone else.

When I crossed the bridge, I saw the water. The water was so clean it was almost frightening. Too transparent, it thinned my sense of actually standing there. I did not feel alive.

As I walked on, a voice drifted out from inside an antique shop.

“A witch grows stronger year by year without doing anything at all.”

Someone in the back of the shop was saying so. I did not stop, but those words alone stayed in my ears.

When I hear the word “witch,” I remember a certain person.

A friend once brought along someone dressed like a spirit medium, in layers of black cloth that made a faint sound with every step. The person smelled of plums.

When I smell plums, for some reason I start craving ochazuke. Remembering that, I set off home and tried to take a shortcut through what looked like the park of a housing complex. At the entrance stood a man who looked like a woodcutter.

“You can't pass through here right now.”

So I was told. I asked, “And what exactly are you to this place?” but the man did not answer.

He simply went back to his work of leveling the soil. Having no choice but to stop there, I looked at the surrounding trees and remembered an old habit. After blowing up a balloon, I often rub the smell of the rubber left on my hands against a tree. Why I do this, I myself do not really know. When I press my hand to the bark, the smell settles a little.

Carrying that sensation with me, I returned to my own house. Lately, when my dog sneezes, the vibration seems to resonate in my bones. That never used to happen. I am losing a little confidence in the strength of my body.

When I come home and can see the balcony from far off, I feel relieved. Just seeing the curtains sway in the wind lifts my spirits a little. Because if the house is there, I feel that things are still all right.

Half in jest, I live thinking of myself as a grandmother. So when I get home, I always have my daughter wash me gently with warm water. I rather look forward to it.

 

from SmarterArticles

When Brad Smith, Microsoft's vice chair and president, stood before cameras in December 2025 to announce his company's largest ever commitment to Canada, he did not simply unveil an infrastructure deal. He outlined a blueprint. The C$19 billion investment, spanning 2023 to 2027, with more than C$7.5 billion (approximately US$5.4 billion) earmarked for the next two years alone, was wrapped in the language of sovereignty, trust, and governance. Smith called it “the most robust digital sovereignty plan that we have announced anywhere,” building on commitments Microsoft had previously made to the European Union. But behind the soaring rhetoric lies a more complicated question, one that regulators, civil society groups, and rival governments are only beginning to wrestle with: can a single corporation's infrastructure investments actually create replicable models for responsible AI governance across jurisdictions with wildly divergent regulatory expectations?

The answer matters. It matters because the sovereign cloud market is projected to grow from US$154.69 billion in 2025 to US$823.91 billion by 2032, according to Fortune Business Insights, with Europe expected to hold the highest market share. It matters because the EU AI Act is rolling out in phases that will reshape compliance requirements for every organisation deploying AI in Europe. And it matters because Canada itself has failed to pass comprehensive AI legislation, leaving a regulatory vacuum that corporate commitments are rushing to fill. Microsoft expects to spend US$80 billion on AI-enabled data centres in its fiscal year 2025 alone, according to a January 2025 blog post by Smith, with more than half of that spending directed at US facilities. The Canadian investment, while substantial, is one piece of a global infrastructure play that spans Portugal (US$10 billion), the United Arab Emirates (US$15 billion), and dozens of other markets.

The Anatomy of a Sovereign AI Play

To understand what Microsoft is attempting in Canada, you need to see the investment as more than data centres and fibre optic cables. The C$7.5 billion will expand Microsoft's Azure Canada Central (Toronto) and Canada East (Quebec City) data centre regions, with new capacity expected to come online in the second half of 2026. These facilities will be designed for energy efficiency, renewable power, and water-saving cooling systems, features that are increasingly non-negotiable given the enormous power demands of AI workloads. Nvidia's GB200 NVL72 systems, widely used in AI data centres, are estimated to consume up to 120 kilowatts per rack, demanding liquid cooling and advanced infrastructure management.

Microsoft currently employs more than 5,300 people across 11 Canadian cities, operates a significant R&D hub in Vancouver with over 2,700 engineers, and supports an ecosystem of 17,000 partner companies that generate between C$33 billion and C$41 billion annually, supporting approximately 426,000 jobs. The company estimates that AI tools could generate up to C$40 billion in annual productivity gains for Canadian organisations. A 2025 Microsoft SMB Report found that 71% of Canadian small and medium businesses are actively using AI or generative AI, with 90% adoption among digital-native firms.

But the infrastructure spend is only one layer of a five-point digital sovereignty plan that Smith articulated as a deliberate governance architecture. The five pillars cover cybersecurity defence, data residency, privacy protection, support for Canadian AI developers, and continuity of cloud services. Each pillar addresses a distinct governance concern, and together they represent Microsoft's attempt to demonstrate that a hyperscaler can operate within national boundaries while maintaining global interoperability. On the fifth pillar, Microsoft made a distinctive pledge: to pursue legal and diplomatic remedies against any order that would suspend cloud services to Canadian customers, a commitment that goes beyond standard service-level agreements.

The cybersecurity pillar centres on a new Threat Intelligence Hub in Ottawa, staffed by Microsoft subject matter experts in threat intelligence, threat protection research, and applied AI security research. The hub will collaborate with the Royal Canadian Mounted Police (RCMP), the Canadian Centre for Cyber Security (part of the Communications Security Establishment), and other government agencies to monitor nation-state actors, ransomware groups, and AI-powered attacks. Microsoft claims access to 100 trillion daily threat signals globally, a figure that underscores the sheer scale of its intelligence apparatus. The company disclosed that its investigators had recently uncovered Chinese and North Korean operatives using fake identities for tech sector infiltration in Canada, lending urgency to the hub's establishment. Microsoft's own assessment found that in 2025, more than half of cyberattacks against Canada with known motives were financially motivated, with 80% involving data exfiltration efforts, and almost 20% targeting the healthcare and education sectors.

On data residency, Microsoft made three commitments: processing Copilot interactions within Canadian borders by 2026; expanding Azure Local so that organisations can run Azure capabilities in their own private cloud and on-premises environments; and launching the Sovereign AI Landing Zone (SAIL), an open-source framework hosted on GitHub that provides a secure foundation for deploying AI solutions within Canadian borders while maintaining privacy and compliance standards. Canada is one of 15 countries to which Microsoft is extending in-country data processing for Microsoft 365 Copilot interactions; the initiative began rolling out to Australia, the United Kingdom, India, and Japan by the end of 2025, with 11 additional countries, including Canada, scheduled for 2026.

The privacy pillar introduces confidential computing capabilities within Canadian data centre regions, keeping data encrypted and isolated even during processing. Azure Key Vault will be available to Canadian customers, supporting external key management and allowing encryption keys to remain under customer control. Microsoft has also made a contractual commitment to challenge any government demand for Canadian government or commercial customer data where it has a legal basis to do so.

When Sovereignty Meets the Sovereign Landing Zone

The technical architecture underpinning Microsoft's sovereignty claims is the Sovereign Landing Zone (SLZ), a variant of the Azure Landing Zone (ALZ) that layers additional controls for data residency, encryption, and operational oversight. In June 2025, Microsoft CEO Satya Nadella announced a broad range of sovereign cloud solutions, and the SLZ has since moved from concept to implementation. The SLZ on Terraform achieved general availability, with a Bicep implementation currently in development building on the new Bicep Azure Verified Modules for Platform Landing Zones.

The SLZ is not a separate cloud. It builds on ALZ principles but applies tighter, enforceable controls aligned with sovereign operating models. The architecture includes management-group hierarchies tailored for workload classification (Public, Confidential Online, and Confidential Corp), additional policies for data residency, and encryption at rest, in transit, and in use through confidential computing. The key design principle is enforcement over guidance: guardrails are applied at the platform level using management groups, Azure Policy, identity controls, and standardised subscription layouts. Application teams can move quickly, but only within approved boundaries. In addition to Azure's built-in policies, the SLZ provides a Sovereignty Baseline Policy initiative alongside country-specific and regulation-specific policy sets, with the set of built-in policy definitions continuing to expand.
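The "enforcement over guidance" principle can be illustrated with a toy sketch. This is plain Python with invented workload names, region lists, and rules, not Microsoft's actual policy engine or the SLZ policy initiative; it only shows the pattern of checking every deployment request against platform-level guardrails before anything is provisioned.

```python
from dataclasses import dataclass

# Hypothetical guardrails loosely modelled on the SLZ ideas described above:
# a data-residency boundary and the three workload classifications.
ALLOWED_LOCATIONS = {"canadacentral", "canadaeast"}
CLASSIFICATIONS = {"public", "confidential-online", "confidential-corp"}

@dataclass
class DeploymentRequest:
    workload: str
    classification: str
    location: str
    encryption_at_rest: bool

def evaluate(request: DeploymentRequest) -> list[str]:
    """Return policy violations for a request; an empty list means it passes."""
    violations = []
    if request.classification not in CLASSIFICATIONS:
        violations.append(f"unknown classification: {request.classification}")
    if request.location not in ALLOWED_LOCATIONS:
        violations.append(f"location outside sovereign boundary: {request.location}")
    if request.classification.startswith("confidential") and not request.encryption_at_rest:
        violations.append("confidential workloads require encryption at rest")
    return violations

ok = DeploymentRequest("billing-api", "confidential-corp", "canadaeast", True)
bad = DeploymentRequest("analytics", "confidential-online", "westeurope", False)
print(evaluate(ok))   # []
print(evaluate(bad))  # two violations: residency and encryption
```

The design point the sketch captures is that application teams never see a "recommendation": a non-compliant request simply cannot proceed, which is what distinguishes enforcement from guidance.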

For regulators, this architecture raises a fundamental question: does platform-level enforcement constitute genuine governance, or is it merely compliance theatre orchestrated by the very entity being regulated? The distinction matters enormously. When Microsoft embeds sovereignty controls into its infrastructure layer, it effectively sets the rules of the game. Customers can customise deployments in accordance with established regulatory frameworks. But the underlying infrastructure remains Microsoft's, subject to its design decisions, its threat models, and its commercial priorities.

This tension is not hypothetical. Under the US CLOUD Act and the Foreign Intelligence Surveillance Act (FISA), data hosted on servers owned by US companies can be subject to US law enforcement requests, regardless of where those servers are physically located. The Canadian government itself characterised FISA as a “primary risk to data sovereignty” in a 2020 white paper. Microsoft's contractual commitment to challenge such demands is welcome, but it remains a voluntary corporate pledge, not a structural guarantee. Smith told CTV in December 2025 that “no country can defend its digital sovereignty if it cannot defend its digital borders,” adding that Microsoft defends Canada's digital border “every day.” That framing reveals a core paradox: digital sovereignty premised on the goodwill of a foreign corporation is sovereignty of a peculiar, contingent sort.

The EU AI Act and the Compliance Calendar

Any discussion of replicable governance models must contend with the EU AI Act, the world's most comprehensive AI regulation, which is being implemented in phases that will reshape the compliance landscape through 2027 and beyond.

The Act entered into force on 1 August 2024, but its requirements activate at different milestones. As of 2 February 2025, AI systems posing “unacceptable risks” became strictly prohibited, including manipulative AI, predictive policing, social scoring, and real-time biometric identification in public spaces. Organisations were also required to ensure adequate AI literacy among employees involved in AI deployment.

On 2 August 2025, rules for general-purpose AI (GPAI) models took effect, requiring providers to maintain technical documentation, publish public summaries of training content using the European Commission's template, and comply with EU copyright rules. Member States were required to designate national competent authorities and adopt national laws on penalties. EU-level governance structures, including the AI Board, Scientific Panel, and Advisory Forum, had to be established.

The majority of the Act's provisions become fully applicable on 2 August 2026, including requirements for high-risk AI systems in healthcare, finance, employment, and critical infrastructure. Transparency rules under Article 50 will apply, and each Member State must have established at least one AI regulatory sandbox. Full application, including rules for high-risk AI embedded in regulated products, arrives on 2 August 2027, with a final deadline of 31 December 2030 for AI systems that are components of large-scale IT systems.
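The phased calendar lends itself to a simple lookup. The sketch below encodes the milestones as data and returns the obligations in force on a given date; the labels are paraphrases of the summary above, not legal text.

```python
from datetime import date

# EU AI Act milestones as summarised in the surrounding text (paraphrased).
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI; AI literacy duties"),
    (date(2025, 8, 2), "GPAI model rules; national authorities and penalties"),
    (date(2026, 8, 2), "High-risk system requirements; Article 50 transparency"),
    (date(2027, 8, 2), "High-risk AI embedded in regulated products"),
    (date(2030, 12, 31), "Components of large-scale IT systems"),
]

def obligations_in_force(on: date) -> list[str]:
    """Return every obligation whose milestone date has passed by `on`."""
    return [label for when, label in MILESTONES if when <= on]

# As of March 2026, three milestones have passed.
for item in obligations_in_force(date(2026, 3, 1)):
    print(item)
```

A compliance team's real calendar is, of course, far more granular than six entries, but the shape of the problem (date-keyed obligations queried against "today") is exactly this.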

Finland has already moved ahead of the pack, activating national supervision laws on 1 January 2026 and becoming the first EU Member State with fully operational AI Act enforcement powers at the national level. On 2 February 2026, the European Commission conducted its first mandatory review of Article 5 prohibitions, potentially expanding the list of banned AI applications based on evidence of emerging risks. Meanwhile, in November 2025, the European Commission proposed the “Digital Omnibus,” a plan to simplify the EU's sweeping digital regulations, which could delay when certain high-risk obligations take effect; however, this proposal must still pass through the EU legislative process.

For Microsoft, the EU AI Act creates both obligation and opportunity. The company has stated that its early investment in responsible AI positions it well to meet regulatory demands and to help customers do the same. Microsoft has already established a European board of directors, composed of European nationals, exclusively overseeing all data centre operations in compliance with European law. But the Act's requirements for explainability, auditability, and fairness documentation go far beyond what any single company's voluntary commitments have historically delivered.

Canada's Regulatory Vacuum and the Corporate Governance Paradox

While the EU is implementing the world's most detailed AI regulatory framework, Canada finds itself in a strikingly different position. The Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27 in June 2022, was designed to establish a comprehensive regulatory framework for AI. It would have introduced measures to regulate AI systems, prohibited harmful practices, created a new AI and Data Commissioner, and imposed penalties of up to C$25 million or 5% of global revenue for non-compliance.

AIDA never became law. The bill died on the order paper in January 2025 after extensive parliamentary scrutiny revealed concerns about its scope, the delegation of regulatory powers, and the adequacy of public consultations. Critics noted that key provisions were vague, including the lack of a clear definition for “high-impact system,” with the Act stating that the definition might evolve in the future. The Act was also criticised as having been developed behind closed doors with a select group of industry representatives, without broader stakeholder engagement.

The current federal government has indicated it will seek to regulate AI through privacy legislation, policy, and investment rather than overarching AI-specific legislation. In October 2025, the government held a public engagement “sprint” in connection with a new AI Strategy Task Force to support a renewed national AI strategy, expected to rely on policy mechanisms rather than comprehensive legislative reform. Canada's Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, stated that “Canada is scaling homegrown companies while also working with international partners to build the advanced infrastructure our innovators require.”

This creates what might be called the corporate governance paradox: in the absence of binding regulation, corporations like Microsoft step into the gap with voluntary commitments, infrastructure investments, and self-imposed governance frameworks. Microsoft's five-point sovereignty plan, its Sovereign Landing Zone architecture, and its Threat Intelligence Hub all function as de facto governance mechanisms. But they are governance mechanisms designed, implemented, and enforced by the governed entity itself.

The paradox deepens when you consider that Canada has launched the Canadian Artificial Intelligence Safety Institute (CAISI) as part of a broader C$2.4 billion investment in AI initiatives announced in the 2024 federal budget, alongside a C$2 billion Sovereign AI Compute Strategy encompassing the AI Compute Challenge (up to C$700 million), the Sovereign Compute Infrastructure Programme (up to C$705 million), and the AI Compute Access Fund (up to C$300 million). The country also has sector-specific regulatory efforts: the Office of the Superintendent of Financial Institutions (OSFI) has released Draft Guideline E-23 on Model Risk Management for financial institutions, Ontario's Working for Workers Four Act (effective 2026) will impose requirements on employers using AI in hiring, and Canadian law societies in Alberta, British Columbia, and Ontario have issued guidance for lawyers using generative AI. But none of these measures constitute the kind of comprehensive, cross-sector AI governance framework that the EU AI Act represents.

Responsible AI Tooling and the Measurement Problem

Microsoft's responsible AI framework rests on six stated principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The company has operationalised these through its Responsible AI Standard, which covers six domains and establishes 14 goals intended to reduce AI risks and their associated harms. But principles are not outcomes. The critical question is whether Microsoft's tooling can produce measurable governance results that satisfy regulators, customers, and civil society stakeholders.

The company's primary instrument is the Responsible AI Dashboard, which integrates several components for assessing and improving model performance. Error Analysis identifies cohorts of data with higher error rates, including when systems underperform for specific demographic groups or infrequently observed input conditions. Fairness Assessment, powered by the open-source Fairlearn library, identifies which groups may be disproportionately negatively impacted by an AI system and in what ways. Model Interpretability, powered by InterpretML, generates human-understandable descriptions of model predictions at both global and local levels; for example, it can explain what features affect the overall behaviour of a loan allocation model, or why a specific customer's application was approved or rejected. The dashboard also includes counterfactual what-if components that help stakeholders explore how changes in inputs would alter outcomes.
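The kind of disparity the dashboard's fairness assessment surfaces can be reduced to a toy calculation: compare positive-outcome rates across groups and take the gap (the demographic parity difference). The data below is invented and the real Fairlearn library computes many more metrics; this only makes the measurement concrete.

```python
# Toy loan decisions: (group, approved). Invented data for illustration.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

def selection_rates(records):
    """Fraction of positive outcomes per group."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
parity_gap = max(rates.values()) - min(rates.values())
print(rates)        # {'A': 0.75, 'B': 0.25}
print(parity_gap)   # 0.5
```

A gap of 0.5 on a toy sample proves nothing by itself, which is precisely the measurement problem discussed below: a metric identifies a disparity, but deciding whether that disparity constitutes discrimination is a normative and legal judgement.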

For generative AI specifically, Microsoft Foundry allows developers to assess applications for quality and safety using both human review and AI-assisted metrics. Microsoft has also introduced Transparency Notes, documentation designed to help customers understand how AI technologies work and make informed deployment decisions. The company's 2025 Responsible AI Transparency Report detailed 67 red-teaming operations conducted across flagship models, including the Phi series and Copilot tools, stress-testing them for vulnerabilities to malicious prompts and misuse. Microsoft introduced an internal workflow tool that centralises responsible AI requirements and simplifies documentation for pre-deployment reviews; for high-impact or sensitive use cases involving biometric data or critical infrastructure, the company provides hands-on counselling to ensure heightened scrutiny and ethical alignment.

In September 2025, Nadella announced new AI commitments focusing on enhanced safety protocols, transparency in algorithms, and investments in bias mitigation tools. He warned at the World Economic Forum that AI would lose public support unless it demonstrated tangible value: “We will quickly lose even the social permission to take something like energy, which is a scarce resource, and use it to generate these tokens, if these tokens are not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness.”

Microsoft has also aligned its Cloud Adoption Framework AI governance guidance with the NIST AI Risk Management Framework (AI RMF), which organises recommendations into four core functions: Govern, Map, Measure, and Manage. Azure Policy and Microsoft Purview are offered as tools to enforce policies automatically across AI deployments, with regular assessments of areas where automation can improve policy adherence. Counterfit, an open-source command-line tool, allows developers to simulate cyberattacks against AI systems, assessing vulnerabilities across cloud, on-premises, and edge environments.

Yet the measurement problem persists. Responsible AI dashboards and transparency notes are useful tools, but they are fundamentally self-assessment instruments. They tell you what Microsoft's own systems detect about Microsoft's own models. Civil society organisations have been explicit about what they consider insufficient. A survey by The Future Society of 44 civil society organisations found overwhelming consensus on the need for legally binding measures, with enforcement mechanisms receiving the highest support across all priorities. The top-ranked demand was establishing legally binding “red lines” prohibiting certain high-risk AI systems incompatible with human rights obligations, followed by mandating systematic, independent third-party audits of general-purpose AI systems covering bias, transparency, and accountability. A side event titled “Global AI Governance: Empowering Civil Society,” held during the Paris AI Action Summit in February 2025, reinforced these priorities.

The Frontier Governance Framework and Corporate Accountability

Microsoft's response to growing calls for accountability has been its Frontier Governance Framework, introduced in the 2025 Transparency Report. The framework emerged from voluntary safety commitments made in May 2024 alongside fifteen other AI organisations and now functions as an internal monitoring and risk assessment mechanism for advanced models before release. It represents Microsoft's attempt to self-regulate frontier AI development before governments can impose external constraints.

The framework's effectiveness depends entirely on its implementation rigour and the independence of its oversight. Microsoft's partnerships with civil society organisations, including its collaboration with the Stimson Center on the Global Perspectives Responsible AI Fellowship, suggest an awareness that corporate governance cannot operate in isolation. The fellowship brings together diverse stakeholders from civil society, academia, and the private sector for discussions on AI's societal impact. Brad Smith has emphasised that government, industry, academia, and civil society must work together to advance AI policy.

But awareness is not the same as accountability. The gap between corporate voluntary commitments and the binding regulatory frameworks that civil society demands remains wide. As one participant in The Future Society consultation articulated: “Public accountability demands that we develop meaningful measures of impact on important issues like standards of living and be transparent about how things are going.” Civil society organisations are calling for standardised methodologies for independent verification across jurisdictions, crisis response protocols with clear intervention thresholds, and transparent participation mechanisms that ensure equitable representation. Microsoft's investment of US$80 billion in AI data centres during fiscal year 2025 makes it one of the world's largest investors in AI infrastructure; that scale of spending creates commensurate obligations for governance transparency.

Divergent Frameworks and the Replicability Question

The global landscape of AI governance is characterised by fundamental divergences. The EU has adopted a regulation-first approach emphasising human rights, conformity assessments, and mandatory transparency. The United States has historically favoured innovation-first self-governance, though sector regulators including the Consumer Financial Protection Bureau, the Food and Drug Administration, and the Equal Employment Opportunity Commission are increasingly referencing NIST AI RMF principles in their expectations for safe deployment. China pursues state-led AI governance with centralised control over AI development. The BRICS group, now comprising eleven countries (Brazil, Russia, India, China, South Africa, Saudi Arabia, Egypt, the UAE, Ethiopia, Indonesia, and Iran), advocates for flexible governance structures that respect national sovereignty while maintaining international cooperation. McKinsey analysis suggests that sovereign AI could represent a market of US$600 billion by 2030, with up to 40% of AI workloads potentially moving to sovereign environments.

Only about 30 countries currently host in-country compute infrastructure capable of supporting advanced AI workloads. Many lack not only hardware but also local model development, applications, energy systems, and governance frameworks optimised for AI. This compute divide creates a structural dependency: nations without indigenous AI infrastructure must rely on hyperscalers like Microsoft, accepting their governance frameworks as a condition of access to AI capabilities. Seventy-one per cent of executives, investors, and government officials surveyed by McKinsey characterised sovereign AI as an “existential concern” or “strategic imperative” for their organisations.

Microsoft's Canadian investment can be seen as a template for this dynamic. The company offers sovereignty tools (SLZ, SAIL, Azure Local), cybersecurity collaboration (Threat Intelligence Hub), and local AI developer support (Cohere partnership). Cohere's advanced language models, including Command A, Embed 4, and Rerank, are being integrated into the Microsoft Foundry first-party model lineup, making Canadian-developed AI accessible on Azure. Microsoft and Cohere aim to co-develop industry-specific models for sectors like natural resources and manufacturing, where Canada has particular strengths. This partnership serves a dual purpose: it provides enterprise customers with an alternative to US-developed models, and it bolsters Canada's credentials as an AI innovation hub.

The question of replicability hinges on whether Microsoft's approach can be transplanted to jurisdictions with fundamentally different regulatory, political, and economic contexts. Consider the EU: Microsoft has already committed to end-to-end AI data processing within Europe as part of the EU Data Boundary, and Microsoft 365 Copilot now processes interactions in-country for 15 countries. The company's Sovereign Landing Zone provides EU-specific policy sets aligned with the AI Act's requirements. But the EU's regulatory expectations go well beyond data residency. The Act requires conformity assessments for high-risk systems, detailed technical documentation, human oversight mechanisms, and ongoing monitoring obligations. These requirements demand independent verification, not just self-reported compliance through corporate dashboards.

Building Governance That Outlasts the Press Release

The mechanisms that would transform corporate AI commitments into measurable governance outcomes fall into three categories: explainability, auditability, and fairness documentation. Each requires specific institutional arrangements that go beyond voluntary corporate action.

Explainability demands that AI systems provide meaningful explanations of their decisions to affected individuals. Microsoft's InterpretML and model interpretability tools offer technical capabilities for this, generating both global explanations (what features affect a model's overall behaviour) and local explanations (why a specific decision was made). But technical explainability is only useful if it is accessible to non-technical stakeholders, including regulators, affected communities, and individual users. The EU AI Act's transparency obligations under Article 50, applicable from August 2026, will require explanations that are comprehensible to the humans who interact with AI systems, not just the engineers who build them.
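The global/local distinction can be made concrete with a deliberately crude perturbation sketch: reset one feature at a time to a baseline value and record how the score moves. The scoring function below is wholly invented as a stand-in for a loan-allocation model; real interpretability tooling such as InterpretML is far more sophisticated than this.

```python
def score(applicant: dict) -> float:
    """Invented linear loan score: higher income, lower debt, longer tenure win."""
    return (0.4 * applicant["income"]
            + 0.4 * (1 - applicant["debt_ratio"])
            + 0.2 * applicant["years_employed"] / 10)

def local_explanation(applicant: dict, baseline: dict) -> dict:
    """Per-feature attribution: how much the score drops when that feature
    is reset to its baseline value (a leave-one-out perturbation)."""
    base_score = score(applicant)
    attributions = {}
    for feature in applicant:
        perturbed = dict(applicant, **{feature: baseline[feature]})
        attributions[feature] = round(base_score - score(perturbed), 3)
    return attributions

applicant = {"income": 0.9, "debt_ratio": 0.2, "years_employed": 8}
baseline = {"income": 0.5, "debt_ratio": 0.5, "years_employed": 5}
print(local_explanation(applicant, baseline))
# → {'income': 0.16, 'debt_ratio': 0.12, 'years_employed': 0.06}
```

The output is a local explanation ("income contributed most to this applicant's score"); averaging such attributions over many applicants would approximate a global one. Note that even this trivial example needs translation before it means anything to the applicant, which is the accessibility gap Article 50 targets.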

Auditability requires independent third-party access to AI systems, training data, and deployment processes. Microsoft's red-teaming operations and its alignment with the NIST AI RMF's Govern, Map, Measure, and Manage functions provide an internal audit framework. But the civil society consensus, as documented by The Future Society, is that self-auditing is insufficient. Measurable governance outcomes require external audit mechanisms with genuine investigative authority, standardised methodologies for independent verification across jurisdictions, and enforceable penalties for non-compliance. The EU AI Act's conformity assessment procedures for high-risk systems point in this direction, but their effectiveness will depend on the capacity and independence of national competent authorities.

Fairness documentation requires systematic evidence that AI systems do not discriminate against protected groups. Microsoft's Fairlearn library and the Responsible AI Dashboard's fairness assessment capabilities provide tools for detecting disparate impact. But fairness is not a purely technical concept. It involves normative judgements about which disparities are acceptable and which constitute discrimination, judgements that vary across cultures, legal systems, and political contexts. A fairness standard calibrated for Canadian employment law may be inadequate for EU anti-discrimination directives or for the complex intersectional discrimination patterns that civil society organisations have documented.

What Replicable Governance Actually Requires

Microsoft's Canadian investment demonstrates that a hyperscaler can build infrastructure, deploy sovereignty tools, and partner with local institutions to create governance capabilities. The skills component alone is substantial: Microsoft aims to help 250,000 Canadians earn AI credentials by 2026 through its Microsoft Elevate unit, having already engaged 5.7 million learners and supported 546,000 individuals in completing AI training across the country. Only 24% of Canadians have received AI-related training, compared to a 39% global average, according to Microsoft data.

But replicable governance requires something more: institutional arrangements that survive changes in corporate leadership, shifts in commercial strategy, and the inevitable tensions between profitability and public interest.

Nadella himself has acknowledged this tension. In November 2025, he published a widely circulated memo on “Shared Economic Gains,” warning the tech industry against value extraction and arguing that for the AI revolution to be sustainable, it must create more wealth for its users than for its creators. He has consistently argued that “technology development doesn't just happen; it happens because us humans make design choices. Those design choices need to be grounded in principles and ethics.”

The replicability question ultimately comes down to whether Microsoft's governance architecture can be separated from Microsoft itself. If the Sovereign AI Landing Zone is truly open-source, if the Threat Intelligence Hub's methodologies can be adopted by other nations' cybersecurity centres, if the responsible AI tooling can be validated by independent auditors, then Canada's experience could serve as a genuine template. If, however, these governance mechanisms remain dependent on Microsoft's infrastructure, subject to Microsoft's terms of service, and validated primarily by Microsoft's own assessments, then they represent corporate governance rather than public governance, and their replicability is limited to jurisdictions willing to accept that distinction.

The EU AI Act's phased implementation will provide the most rigorous test. By August 2026, when the majority of provisions become applicable, Microsoft and every other AI provider operating in Europe will face mandatory requirements for transparency, explainability, and accountability that no voluntary framework can substitute. The question is whether the governance muscles Microsoft is building in Canada, through its SLZ architecture, its Threat Intelligence Hub, and its responsible AI tooling, will prove strong enough to meet those requirements, or whether the gap between corporate self-governance and democratic accountability will prove too wide to bridge.

For Canada, for Europe, and for the approximately 30 nations currently capable of hosting advanced AI workloads, the answer will define the next decade of AI governance. Microsoft has laid down a US$5.4 billion wager that its version of sovereignty by design can become the global standard. Whether that wager pays off depends not on the size of the investment, but on whether the governance frameworks it produces can earn the trust of the regulators, civil society organisations, and citizens whose lives AI systems increasingly shape.


References and Sources

  1. Microsoft. “Microsoft Deepens Its Commitment to Canada with Landmark $19B AI Investment.” Microsoft On the Issues, 9 December 2025. https://blogs.microsoft.com/on-the-issues/2025/12/09/microsoft-deepens-its-commitment-to-canada-with-landmark-19b-ai-investment/

  2. Business Standard. “Microsoft to invest over $5.4 bn in Canada to expand AI infrastructure.” 9 December 2025. https://www.business-standard.com/technology/tech-news/microsoft-to-invest-over-5-4-bn-in-canada-to-expand-ai-infrastructure-125120901025_1.html

  3. Fortune Business Insights. “Sovereign Cloud Market Size, Share, Growth | Forecast [2034].” 2025. https://www.fortunebusinessinsights.com/sovereign-cloud-market-112386

  4. EU Artificial Intelligence Act. “Implementation Timeline.” 2025. https://artificialintelligenceact.eu/implementation-timeline/

  5. Microsoft Learn. “Sovereign Landing Zone (SLZ) Implementation Options.” 2025. https://learn.microsoft.com/en-us/industry/sovereign-cloud/sovereign-public-cloud/sovereign-landing-zone/implementation-options

  6. Microsoft Azure Blog. “Microsoft Strengthens Sovereign Cloud Capabilities with New Services.” November 2025. https://azure.microsoft.com/en-us/blog/microsoft-strengthens-sovereign-cloud-capabilities-with-new-services/

  7. Innovation, Science and Economic Development Canada. “Artificial Intelligence and Data Act.” https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act

  8. White & Case LLP. “AI Watch: Global Regulatory Tracker, Canada.” 2025. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-canada

  9. Microsoft. “Responsible AI Principles and Approach.” https://www.microsoft.com/en-us/ai/principles-and-approach

  10. Microsoft Learn. “What is Responsible AI, Azure Machine Learning.” https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai?view=azureml-api-2

  11. GitHub. “Microsoft Responsible AI Toolbox.” https://github.com/microsoft/responsible-ai-toolbox

  12. AI Magazine. “Inside Microsoft's 2025 Responsible AI Transparency Report.” 2025. https://aimagazine.com/articles/inside-microsofts-2025-responsible-ai-transparency-report

  13. The Future Society. “Ten AI Governance Priorities: Survey of 44 Civil Society Organizations.” 2025. https://thefuturesociety.org/cso-ai-governance-priorities/

  14. McKinsey & Company. “The Sovereign AI Agenda: Moving from Ambition to Reality.” 2025. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/tech-forward/the-sovereign-ai-agenda-moving-from-ambition-to-reality

  15. NIST. “AI Risk Management Framework.” https://www.nist.gov/itl/ai-risk-management-framework

  16. Microsoft Learn. “Govern AI, Cloud Adoption Framework.” https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/ai/govern

  17. ERP Today. “Microsoft's Canada Investment Puts Digital Sovereignty to Work.” December 2025. https://erp.today/microsofts-canada-investment-puts-digital-sovereignty-to-work/

Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from Reflections

My friend Rachel, who has much more experience with pets than I do, shared an excellent bit of wisdom after Kika died.

Death isn’t a failure of care.

On some level, it seems obvious enough. No one has ever beaten death. Was your loved one supposed to be the first? I don't think so.

An illustration of Kika on a desk with a blanket wrapped around her. Kika looks tired but grateful.
Image by ChatGPT, based on a photo of Kika in her final days

On another level, I find the statement difficult to accept. There's always more that could have been done. I could have timed her medication better. I could have brushed her more to calm her and to express my love. I could have slept by her side those final days, on the floor of my office where she hid at night. I don't know why the idea only occurred to me later. Maybe she would have lived longer if I had. (To be fair to myself, I was pretty confused and exhausted in those final days. Perhaps I couldn't think clearly with her health declining.)

Kika's death

Kika ultimately died of gastrointestinal disease. She may have had lymphoma, but we will never know for sure. Her primary veterinarian felt that certainty was unnecessary, because the treatment for lymphoma was the same as the treatment for other kinds of gastrointestinal disease. We did follow that treatment plan, but it wasn't enough.

I brought Kika in for an appointment the day before she died, at my mother's urging. My mom knew I would never regret going, but I would regret not going. That turned out to be excellent advice. If it hadn't been for that, I would still be second-guessing myself, wondering if Kika had died of acute pancreatitis or something else that I failed to see. The vet took some fluids and sent them to Cornell, but when Kika passed the next day, the practice cancelled the tests. On the day Kika passed, the attending veterinarian did say something about how Kika's fluid looked cancerous under the microscope, but I was so flustered I could hardly understand what she was talking about.

The GI medication may have worsened Kika's condition in the end, but I still think trying it was the right course of action. No one could have predicted how she would react to it, and if I hadn't tried it, I would be blaming my avoidance of the medication for her decline.

Even so, I should have been able to extend Kika's life somehow. Every choice matters. In that sense, wasn't her death a failure of care?

I don't know. There are many causes of death in humans and animals. To be clear, Kika didn't die of any of these, but mistakes happen: people die of medical malpractice, and accidents take lives too soon. Are those deaths predetermined? Are they unavoidable?

Maybe they are.

Kika died because she was a living thing, and living things die. Death is the price we pay for birth. Perhaps everything else is secondary—a few days here, a few days there, perhaps more. Immortality? No.

The Mortal Rule

I'm not a Buddhist, but I find Buddhism interesting and Buddhist and Buddhist-inspired meditation indescribably helpful. Buddhism can sometimes seem impenetrable, with myriad traditions, vast terminology, and scripture far more voluminous and sprawling than Westerners are accustomed to. I recently stumbled across the “Five Remembrances,” though, which are not at all difficult to understand. They offer a meaningful response to my difficulties, or perhaps a preventative for the feelings I've been struggling with. Practitioners are encouraged to memorize and reflect upon these facts, as interpreted by Lion's Roar and author Koun Franz:

I am of the nature to grow old. There is no way to escape growing old.

I am of the nature to have ill health. There is no way to escape having ill health.

I am of the nature to die. There is no way to escape death.

All that is dear to me and everyone I love are of the nature to change. There is no way to escape being separated from them.

My actions are my only true belongings. I cannot escape the consequences of my actions. My actions are the ground upon which I stand.

The names of others can be substituted in these reminders. Kika was of the nature to grow old. Kika was of the nature to have ill health. Kika was of the nature to die. There's nothing we could have done to change that. (The last remembrance seems like a non sequitur, but I assume its inclusion partly serves as a reminder of the centrality of karma in Buddhist thought.)

Philosophical stoicism offers similar advice:

With regard to whatever objects either delight the mind, or contribute to use, or are tenderly beloved, remind yourself of what nature they are, beginning with the merest trifles: if you have a favorite cup, that it is but a cup of which you are fond, – for thus, if it is broken, you can bear it; if you embrace your child, or your wife, that you embrace a mortal, – and thus, if either of them dies, you can bear it.

—The Enchiridion of Epictetus, as translated by T.W. Higginson

Dialectical behavior therapy encourages the broader practice of “coping ahead.” Others take solace in remembering the phrase “memento mori.” It's not the Golden Rule, I suppose, but there's enough ancient and modern support for the idea: remember that you and others are destined to die. It has helped me, and it may help you, too.

Watching without failing

In the end, Rachel may be correct. Certainly, insisting that your loved ones overcome death is insisting that you be disappointed. You will not succeed. Even so, that won't make you a failure.

#Life #Quotes

 
Read more...

from Chemin tournant

In a spare room or a rented studio, it becomes a window, captured by its square, opened then closed again on the illusion that sends our heads out of orbit. We think we see what we think we see, but subject to the forces of internal order, we find ourselves simply dumbstruck before the image, humanly deceived. We make a picture of it, an unfinished montage, one we would like freed from the world's enclosures, from property, rescued from explanation, a small thing of a few black-shaped letters, lost as soon as made, waiting to be found again.

The word œil (eye) appears 13 times in Ma vie au village.

#VoyageauLexique

In this second Voyage au Lexique, I continue to explore, while being careful not to exploit them, the words of Ma vie au village (in Journal de la brousse endormie) whose number of occurrences is significant.

 
Read more... Discuss...

Join the writers on Write.as.

Start writing or create a blog