from The Belringer

Democracy

Democracy is the foundation of our Great America. It is 'of the people, by the people, for the people.' At least, that is our tagline. (Motto) Why, then, must I choose between a two-party system? Why does it require me to be on the right or left?

Can I not be some of both? I have two arms; one is on the right, and the other is on the left. If I use only one while ignoring the other, my productivity is less than half of what it could be. Sometimes my right hand is more practical, and sometimes my left.

My best is when I can use both. I can say the same thing about our country’s leaders. They achieve less than half of their obtainable potential when using only one side.

Why can’t we require the legislators to use both sides? Or, unbeknownst to us, did they pass a law requiring the “other” side always to be wrong?

 
Read more...

from The Belringer

Where Is My Church?

A Documentary Narrative

Introduction: Searching for the Spirit

The story of the Bel family, and especially Rich, begins with a question: Why do people, even with earnest spiritual goals, so often lose their way?

In the early 1990s, the pain inflicted by the traditional church led the Bel family to step away from organized religion. Rich, the central figure, recalls this as a pivotal moment—a time marked by both reward and pain.

Disillusioned by churches that seemed more concerned with appearances than substance, Rich chose to follow God and the Holy Spirit, leaving behind the institutional church. For a period, he had no congregation, missing some aspects of church life but refusing to 'play church.'

The Encounter: A New Kind of Church

During this season, a friend persistently invited Rich to a small group in town. His first visit was jarring: loud music, exuberant worship, and practices he had once condemned. Initially, Rich wanted nothing to do with it. But the preacher’s message resonated deeply, echoing truths Rich had long held but never heard from a pulpit. Despite his discomfort, Rich returned and gradually embraced the Spirit-led spontaneity of the group. Over time, what he once despised became the highlight of his week. The people were genuine, praying boldly and serving joyfully. Rich’s beliefs were reshaped as he became part of a community that lived out Kingdom principles.

Growth and Transformation

Midweek gatherings took place in homes, modeled after early Church cell groups. These meetings were marked by worship, open sharing, prayer, and service. The church grew organically, transforming lives and fostering miracles. Rich found himself living the Kingdom church he had only read about in Scripture. It was a season of spiritual prosperity, but with growth came new challenges.

The Shift: From Spirit to Structure

Eight or nine years into its existence, the church began to change. Growth brought structure, and spontaneity gave way to rules. Leadership formed a government, and expressions of the Spirit were increasingly regulated. Home groups became scripted, and programs replaced Spirit-led freedom.

The church expanded physically, building a new worship center and hiring new staff, often at the expense of long-time servants. Central doctrines became law, and participation in leadership required speaking in tongues—a practice that became a source of confusion and exclusion, especially among the youth.

The Disappearance: Losing the Spirit

As the church became more like a business, laws multiplied, and control replaced freedom. Rich felt the loss deeply. The church he had cherished disappeared, replaced by an institution governed by human authority.

The breaking point came during the dedication of the new building, when Rich felt a clear message from God: 'You don’t belong here; this is not for you.' He left, grieving the loss of the Spirit-led community he had loved.

Reflection: Lessons from the Journey

Rich’s experience echoes the words of Paul: 'Oh, foolish people…why are you so foolish? You started in the Spirit, but now try to reach goals through human plans.'

The story raises questions about the possibility of a church wholly led by the Spirit when it grows in numbers. Growth seems to demand rules, and success appears to require structure. Yet, the early church thrived with few guidelines, relying on the Spirit for discipline and direction, yet their church and message spread worldwide.

History reveals a pattern: human plans often replace God’s Spirit, and churches risk becoming cult-like assemblies centered on charismatic leaders and loyalty.

Conclusion: The Call to Be the Church

Rich’s journey ends not with a return to organized religion, but with a call to 'be the church.'

The documentary narrative invites readers to reflect on the tension between Spirit-led freedom and human control, and to consider what it truly means to live as the Body of Christ.

 
Read more...

from After Bedtime Notes

It was New Year’s Eve yesterday. It went as expected, which is oddly reassuring given our 3-year old was running around like crazy – like on any other day recently. A dinner-playdate, with just a few sips of whiskey and, arguably, far too much food went smoothly. Firework extravaganza at midnight, typical for the area, resembled active war zone for a few hours too many, guaranteeing total disaster of whatever was left of local ecosystem’s mental health.

Nothing new.

Morning was a bit unorthodox though. No hangover, not anymore. Smart people in their forties don’t have these. More like a general sense of overwhelm and overload. Yes, it’s a New Year. New opportunities (not really, nothing has changed in one day). New habits (ha, let’s see about this one in two weeks). New conclusions (that aren’t really new), including one that this time it is actually almost guaranteed to go downhill. The second part of life, cemented and anchored, here to stay. Then, new old memories, once forgotten, now rediscovered for whatever reason. New regrets. New commitments and new decisions to drop commitments.

The usual New Year’s mental chaos with a hint of melancholy.

And on top of that, a layer of zucchini cake, pasta salad, beans with tomatoes, and probably a dozen more lies along the lines of “I’ll just try this one.” All soaked in an exotic mixture of four vastly different brands of whiskey. Well, actually three brands of it and one serious sip of bourbon from some forgotten by gods republican hellhole.

And this, just this, was too much.

Today couldn’t simply start the same way as usual then. Breakfast, lunch – why, what for? Just because it’s the normal way? Because that’s what adults do, eat breakfast when it’s breakfast time? Well, yesterday was out of ordinary, to the point of me experiencing primal fears before stepping on the bathroom scale. As it turned out, rightly so, but it’s another thing.

It’s just not natural to do things the same way again, when the day before was so vastly different from the normal one (assuming it exists at all). When there’s feast, there’s fast – or, at the very least, should be. That’s exactly what I did – nothing at all.

No breakfast, just black coffee.

No lunch, just rooibos tea.

Light dinner after weirdly satisfying walk in freezing rain gracefully reinforced by stormy wind? Well, yes please. Light one, a leftover from yesterday, just a cup (or two, if we’re completely honest) of pasta salad full of hastily chopped vegetables.

No LinkedIn, finally with no remorses! I hate that thing anyway.

No serious writing, just this post here.

No TV. No radio. No news.

Also: not a thought about new opportunities, habits, conclusions, memories, regrets, and commitments. I might’ve jumped on the stationary bike for some deeply satisfying minutes, but that’s different. Addictions are not relevant.

Fast for the midship then – and fast for the bow. One gets narrower, the other less cluttered.

What a glorious day.

Happy New Year!

 
Read more...

from wystswolf

I could care less about sleeping in anything else, but never socks. Never.

For the record, Wolf hates sleeping in socks.

Given the choice, I prefer sleeping as I was created: nothing between me and my dreams but a downy cover. If it’s warm, you can keep the blanket.

But my feet? They must always be naked.

I am so weird about my feet.

My third Friday in Europe turned out to be way more of a party than I anticipated. A day that started late, grew organically, and delivered some of the highest highs, along with a bit of a sour note at the climax. Busy days and late nights mean slow starts in the morning. The day saw me rousing between 8 and 9 a.m. CET. As the apartment is on the first floor and settled between rows of tall buildings, it is never clear to me early in the morning if the sun is shining or not.

Stepping out into the frosty morning in my flannel pajamas, I glimpse blue sky and deduce that today will be a brilliant day to be out running around. So I do the next logical thing:

I sit inside and write for four hours.

When the wife finally stirred, we planned a visit to the Sofía Reina to see art—specifically Picasso’s Guernica. There is some debate as to the best way to transfer: bus, train, or Uber. Bus is the most convenient for short hops, and so we shower, dress, and dash out the door.

Naturally, we can’t just get on the bus. First, we have to find an orange. In the neighborhoods of Madrid, there are fruit stands on every block. Sometimes two. Oranges are in season right now, so you look for the orbs with leaves attached. It’s an indication that they are the freshest. Twenty cents for a plump, luscious bite of citrus.

Then, of course, we needed a café, where I stumbled through my very limited Spanish to order a coffee and empanada. I failed to correctly distinguish between meat (carne) and chicken (pollo). But it was very good in spite of the mis-order.

Strolling and window-shopping is a delight on a brisk, sunny Friday morning, and so we leisurely gawk at stunning evening gowns, fancy luggage, and sundries of all kinds.

I’ve just eaten, but I find the smell of fried chicken irresistible. Stopping at an open window, I ask for “un pollo, por favor.” It takes a moment, as the cook fries it only when you ask, and it is deliciously hot and fresh—a plump, juicy breast so hot it steams in the morning cold. Thank you, missus chicken. You were delicious.

I finish just in time for the number 39 to roll up and swipe my metro card twice. Beep, beep. A total of three euros for us both to ride across town to the Sofía Reina.

I have discovered I really enjoy riding the buses here. They are clean, well-lit, and cared for. People-watching is a lot of fun, though drawing while riding is kind of a challenge, as we’re rarely on the bus very long. And there’s plenty to see through the windows.

Hopping off at the Atocha stop, we cross a VERY busy intersection. This is close to the city center and the busiest spot I’ve yet walked. I think I could spend all day here watching the mortar going about their lives. The Museo Sofía Reina is in the middle of a 20-year upgrade/restoration. The exterior is in varying states of shrouded construction tarps and fancy louvered metal veneer. The veneer is interesting, but it is most certainly one of those design styles that will age poorly and forever date the upgrade.

After tickets and an audioguide, we start the sojourn into this MASSIVE institution. It might be the biggest museum I’ve ever been in. It is a repurposed government building, and so it isn’t ideal. The structure is a large rectangle whose middle is a courtyard/garden. From above it looks like a giant, squared-off letter “O.”

The galleries are all old office spaces on the outer wall. This is awkward because it creates a labyrinth: some galleries huge, some tiny, some dead ends. And the official map is pretty useless.

So getting lost becomes a ritual. We ask “¿Dónde estamos nosotros?” (Where are we?) of the museum attendants. They can almost always show us on the map where we are, but it isn’t super useful information since nothing else is clearly labeled.

But the art is worth it.

The first floor hosts traveling exhibits, and we are able to see work by little-known Spanish artists. Very intriguing shapes and colors, and carnival scenes that seem universal to every human.

One gallery has massive—I mean MASSIVE—monolithic steel slabs. The literature says they weigh thirty-eight tons.

The placard explains that they were lost for twenty years, stored in a warehouse that was sold and sold and sold until no one knew where the humongous slabs went.

My assessment: sold for weight.

So in 2002, the artist recreated the monoliths and made the Sofía Reina docents very happy—and no doubt lined the artist’s pockets handsomely. Good for you, artist. Grab that money for the rest of us. There is an exhibition by a very old artist who has spent her lifetime painting crisp works in gouache and acrylics. Her most striking pieces depict people—mostly women—with agricultural themes. I am inspired by her portraits and larger works that carry aquatic and agrarian motifs.

As with most moving art, I question why I don’t paint more—especially portraits. The people who mean the most to me should get painted. I am also in love with her nudes. I love painting nudes and believe everyone should experience the power of either being the artist or the subject. Both, if possible. I have yet to be the subject for someone, but I think I might be ready.

For a certainty, I long to paint some more than others. My first real thrill, though, comes from Picasso’s Woman in Blue. His figurative work always surprises me because his cubist work is so heavily promoted. But Woman in Blue is quite lovely and striking. She’s heavily gowned in a massive, rich dress and completely covered except for her face, which is painted as heavily powdered with red cheeks. Her expression is forlorn, eyes distant—somewhat sad.

The placard explains that she is a prostitute, and that Picasso loved portraying the marginalized people he found in life.

I am reminded of my own recent realization and fascination with what I termed the “mortar” of life—those people and places largely overlooked by society, yet absolutely part of the fabric. We love the lightbulb, but need the miles of wire to make what it is.

The hour is late, and as the museum opens its doors for free during the last two hours of the night, I worry about the crush of incoming art lovers. I decide I’ll have to return another day to experience all of Picasso’s galleries—but I must see Guernica.

I need only follow the din.

We can hear the crowd from several galleries away. Late in the day on a Friday, everyone wants to see the famous painting. I am most excited because of its history, as laid out in Russell Martin’s Picasso’s War—an excellent history of why the painting was made and its complex life in the public eye.

Pictures do not do a work like this justice. Nor do crowds. This work needs time and space.

Interestingly, the crowd has created a buffer at the front of the viewing.

The painting is twelve by twenty-five feet, and there is a cordon keeping viewers about ten feet back. Between the crowd and the cordon is a no-man’s land. I can only assume the crowd is being polite to one another—or perhaps they instinctively know they need distance to take it all in.

I decide that, in this moment, the more interesting aspect is the guards watching the painting and the crowd. So I turn my camera and my sketchbook on them, not Guernica itself.

In drawing, I begin to realize how important this is to the Spanish, and to humans in general. As I internalize how cruel humans can be, I am moved to tears—which I believe is exactly what Picasso intended. To affect the viewer.

Mission accomplished.

As the free hour triggers, the place becomes mobbed, and we decide it’s time to be somewhere else.

Dipping out of the museum, we drop into a McDonald’s for a snack and some warmth. Madrid has mastered the electronic order kiosk, which I loathe. I prefer human interaction. But I have to admit, as a non-Spanish speaker, the kiosk is much more efficient and less stressful. This is how the robots win.

Wandering the streets until after dark, we find that instead of worn down, we are energized by the nightlife. My wife spots a Hard Rock Hotel, and we investigate the possibility of a live show. None are forthcoming.

So we decide to call it a night. Seven-thirty, cold and dark, and we are thin from the day’s museum visit.

As we try to figure out which bus will get us home to the Latin Quarter, I recall seeing an ad on the ride out. The bus in front of us had “CABARET—see it live” emblazoned in Spanish.

A quick search turns up that the Kit Kat Klub is in fact performing the show in less than an hour. We are only a thirty-minute walk away, but my wife—though excited and eager to see it—has no interest in trekking through Madrid that night.

So Uber it is. Mistake.

We’ve been operating under the assumption that traffic always flows. This is our first real experience with central Madrid on a Friday night. We live west of here, out of the tourist zone, where traffic is usually fluid. But here it is a grind.

We sit and sit as our driver battles it out. The worst part is watching the map as we inch to within fifty meters of the theater entrance, only to be pulled into traffic in a tunnel beneath the old town—where we’ve been drinking, eating, and living.

I want to jump out and dash to the theater, but instead we sit for twenty more minutes while he escapes the tunnel and gets stuck in a roundabout. We finally abandon him and make the ten-minute walk to the venue.

We are in luck—minutes to showtime. I misunderstand the clerk and instead of buying seats up close, I buy them in the back. Better, because there is less crush of bodies; worse, because I have not brought my distance eyewear and the whole show, while beautiful, is slightly blurry.

And speaking of the show—wow.

I expected half-measures with lots of reliance on titillation and suggested nudity, but to the director’s credit, they told a compelling story. Well sung. Well acted. Yes, the performers were stunning in their mostly naked states, and I applauded the daily work required to maintain such peak human form.

But by the third act, I was in tears. Blinding tears.

We started with a bottle of wine, and by intermission it was long gone, as were the two mini bottles of whiskey my wife smuggled in. Feeling no pain, we decided a second bottle of wine would be ideal to finish the show.

We should have stopped at one.

Inebriation heightened my sense of the story’s development. By the third act, I was undone. Up until then, everyone is managing—hiding in music, wit, appetite, motion. Then the story closes its exits. Pleasure stops being refuge and starts looking like delay.

Love and history arrive at the same moment and ask to be taken seriously.

What broke through for me was the quiet grief of realizing that fantasy can be sincere and still be unsustainable, and that some reckonings can’t be danced around forever.

My muse once said she identifies with Sally, and I understand why. Sally survives by keeping the lights on, by choosing momentum, by believing in the moment she’s standing in. Hitching rides with stars. Watching it, I felt the pull of Cliff—not because I’m leaving or want to, but because I recognize the fear he carries: the dread that two people can love each other deeply and still not want the same future, or need the same kind of ground. The film touched that nerve—the uneasy knowledge that loving someone doesn’t always guarantee harmony, and that seeing clearly can feel like a threat even when it’s an act of care. All this, in Spanish. I didn’t realize I had internalized the story so completely.

It was the emotional tearing that drowned that second bottle of vino. When the performance ended, we stumbled into the night, red-eyed and full of yearning.

We should have gone straight home. But even close to midnight, Madrid was alive in a way we’d never seen. Plazas and avenues shot full of people. And so we swayed and danced in the streets like real Spaniards under the holiday lights.

It was magical.

Another stop at a pub added insult to our alcohol injury. By one a.m., we knew we were toast.

The glory of being completely smashed comes with hard consequences, and we both paid the price. My poor wife on a side street, revisiting the evening’s dinner and snacks. Me, once home, after she was safely in bed.

The old adage is true: beer then liquor, never sicker—or whatever idiom covers wine, then liquor, then wine, then beer, and the long night that follows.

At the very least, I made sure that before it all went quiet for the night, my feet were free and unencumbered for sleep. No amount of drink in the world can erase that need.

We’d had the experience of a lifetime in Madrid that day. It was among the highest highs of the adventure and the lowest lows.

I wouldn’t trade a thing.

Except maybe, save Sally from her sadness.



Drawing

 
Read more... Discuss...

from TechNewsLit Explores

Highmark Stadium, then Ralph Wilson Stadium, 14 Sept. 2014 (A. Kotok)

On Sunday afternoon, the Buffalo Bills play their last regular-season home game at Highmark Stadium in nearby Orchard Park, against division rival New York Jets. The Bills are in the National Football League playoffs this year, but in second-place in the AFC East division, so they will likely play their playoff games elsewhere, making Sunday’s game probably their last game at Highmark.

My photo of that stadium, taken during a 2014 season game, is probably my most-viewed shot ever. Here’s how it happened.

The stadium, built in 1972 started out as Rich Stadium with naming rights sold to a local dairy products company, but in 1998 became Ralph Wilson Stadium after the team’s owner, which lasted until 2016. Thus the stadium became known locally as The Ralph, and that nickname stuck as other naming rights came and went. Highmark is a health insurance company that bought the naming rights in 2021.

For several years my two brothers and I — all Buffalo natives — along with their kids and grandkids, went to a Buffalo Bills home game each season. Bills fans, called the Bills Mafia, have a fierce legendary loyalty, despite the team’s ups-and-downs, portrayed in indie films old and new. The Bills Mafia is even the subject of a Hallmark feature film, released this past holiday season.

In Sept. 2014, we got tickets to the Bills game against division rival Miami Dolphins at The Ralph. We discovered, however, that those mid-field seats were up in the nose-bleed section, near the last row. (By the way, the Bills won that game 29-10.)

So I decided to make lemonade out of those lemons. With my Canon point-and-shoot camera, I took three slightly overlapping photos of the field and crowd, then after the game stitched them together with Microsoft’s photo-editing software into a panorama image.

After the game, I posted the image on my Flickr page, and gave it a Creative Commons license, making it freely available with attribution and a link back to the original Flickr file. About a month later, the photo was imported into Wikipedia and Wikimedia Commons.

The image soon appeared on the Ralph Wilson Stadium, now Highmark Stadium page on Wikipedia, where it still resides. For some time, it also appeared on the Buffalo. N.Y. Wikipedia page. Plus, Bills defensive back Jordan Poyer used the photo for a while as the title image on his Twitter page.

The team is building a new stadium, also called Highmark and also outdoors, across the road from the current stadium. I will have to get nose-bleed tickets next season for another photo.

Copyright © Technology News and Literature. All rights reserved.

 
Read more...

from Unvarnished diary of a lill Japanese mouse

JOURNAL 1er janvier 2026

Presque 22h. Tout le monde s'est retiré. Restent deux filles un peu fatiguées. On sirote doucement un super sake (cadeau de papi) qui reste au chaud dans la marmite avant de se faire un onsen sous la neige. Un vrai luxe, des images de magazine. On est comme des reines en somme. Les clients nous ont remerciées aujourd'hui, ils ont eu hier une des plus belles fêtes quils aient connu dans l'auberge. Ils espèrent que ça pourra se reproduire. On a promis d'être là aussi longtemps que ça sera possible.

 
Lire la suite...

from Mitchell Report

A serene watercolor illustration of a cozy workspace by a window, featuring a laptop displaying code, an open book, a steaming mug, and a potted plant, all bathed in warm sunlight streaming through the window. The artwork combines realistic elements with abstract watercolor splashes.

I just wanted to put up a quick blog post and wish everyone a Happy New Year. Let's pray and hope that God blesses us all. I'm looking forward to this year on many fronts. I want to continue getting my heart under control with my Obstructive Hypertrophic Cardiomyopathy, do more self-hosting with AI, and learn more coding. This is also my 25th year of proper blogging (more posts coming on that later).

With all the computing power I'm putting together, I'm hoping to learn more about web development, self-hosting, and becoming less dependent on big tech. I'm one year closer to retirement, which I'm looking forward to. Instead of focusing on an employer's needs, I'll be able to focus 100 percent on what interests me and devote more time to my faith. Putting my love of God and technology together.

So Happy New Year to everyone, and let's see what 2026 brings for us all. Should be an interesting year, especially on the home front with the United States of America hitting 250 years.

#personal

 
Read more... Discuss...

from Taking Thoughts Captive

Almighty and Everlasting God, from Whom cometh down every good and perfect gift: We give Thee thanks for all Thy benefits, temporal and spiritual, bestowed upon us in the year past, and we beseech Thee of Thy goodness, grant us a favorable and joyful year, defend us from all dangers and adversities, and send upon us the fullness of Thy blessing; through Jesus Christ, Thy Son, our Lord. Who liveth and reigneth with Thee and the Holy Ghost, ever One God, world without end. Amen.

Common Service Book of the Lutheran Church, 1917

#prayers

 
Read more...

from An Open Letter

I just didn’t sleep until this late. I think I’ve honestly found my person, it’s like finding someone that just gets a lot of different parts of me and it feels like the more I reveal or let my guard down with, the more I’m accepted. It’s such a strange feeling for that. It’s not like we are the exact same, we definitely have our flaws and things that grate on eachother, but I wouldn’t want it in any other package.

 
Read more...

from Unvarnished diary of a lill Japanese mouse

JOURNAL 1er janvier 2026 #auberge

Hier super soirée : koto et flûte, un pensionnaire avait prévu, il joue en duo avec mamie tous les ans depuis des années. On a passé 4 heures à table, A et mamie à la cuisine, moi et papi au service, entre on prenait place à table aussi c'était extrêmement chaleureux. Le concert parfait, des vrais pros. À minuit super sake de fukushima ça n’existe plus, c’est encore plus précieux. Comme je faisais aussi le service je n'ai pas beaucoup bu, je suis assez contente de moi.

Ce matin déneigement de l'entrée, ça réchauffe et met en forme. La neige n'a pas cessé de tomber, on a plus de 70 cm dehors. Si ça continue on devra déneiger les toits à nouveau, à un mètre c’est critique.

 
Lire la suite...

from Dans les saules

Feuillets de décembre 2025

Quand tu as ouvert la porte entre nos deux mondes En moi quelque chose a respiré Je me suis sentie soulagée d’un poids resté trop longtemps invisible J’ignorais les dragons rouges aux larmes de givre coincés dans mon corps Alors qu’au dehors je ressentais le souffle du moineau apeuré Je n’ai jamais su déchiffrer les langages des humains Mais votre sang s’invitait en tambour dans mon coeur Je cherchais dans vos phrases une clef capable de décoder l’ineffable Aveugle dans ma propre grotte je devais juste soulever les paupières Sentir les ondoiements de mon souffle jusqu’aux racines et aux cîmes de mon être Faire la lumière dans cet espace qui m’abrite Et l’écouter fredonner cette mélodie qui est la mienne J’apprendrai à aimer cette musique qui a la forme du sang et des étoiles Qui est tissée aux bordures de la peau et des rêves Qui est unique et semblable à toute vie J’apprivoiserai ma propre langue Nous parlerons des dialectes distincts Mais leur musicalité nous rassemblera dans la lumière des feux de joie

  J’ai longtemps cherché quelqu’un Qui se serait assis à l’intérieur de moi et m’aurait emplie toute entière Il aurait allumé un feu dans cette pièce pleine de vide Et mis de la musique pour apaiser les silences il aurait construit des ponts pour me relier au dehors et tissé un cocon pour me protéger des intempéries il aurait effacé toutes les distances affreuses sur les visages impassibles il aurait dessiné des sourires il aurait troqué les gifles de violence et les embruns indifférents contre la douceur des étreintes et la joie irradiante dans les jours moches, avec la folie rouge, à travers les pleurs qui déforment tout : il m’aurait aimée partout et tout le temps jamais il n’aurait fermé la porte et encore moins abandonnée j’ai toujours cherché au dehors de moi comme une évidence sur ma condition volatile je me suis vue feu follet, opaline, volutes, je vivais sur une lune où la pesanteur n’a pas cours et je cherchais quelqu’un pour assurer, m’assurer, me rassurer quelqu’un pour faire contrepoids à ma légèreté si extravagante qu’elle en devenait odieuse dans cette pièce pleine de vide j’ai oublié trop longtemps qu’il y avait déjà quelqu’un toute petite, si petite qu’elle en était presque invisible, sa voix fluette devenue soupir, une petite fille pleurait dans une larme bleue, ses sanglots cachés dans un brouillard opaque, ce jour-là, un jour de décembre, pour la première fois je l’ai entendue sangloter doucement et je suis descendue dans la pièce qui était toujours aussi froide mais qui n’était plus vide, je suis descendue, j’ai allumé un feu et j’ai mis de la musique je me suis assise par terre, à l’intérieur de moi pour la première fois, juste à côté d’elle, et j’ai pris sa main dans la mienne

  Toute mon enfance s’est étirée dans une longue nuit d’hiver J’avais entre le monde et moi un bouclier d’argent Il ravalait mes larmes quand je pensais vous perdre, quand ma tête imaginait les horreurs qui pourraient un jour me séparer de vous et vous séparer de moi Longtemps j’ai tout vu par le filtre de cette distance J’imaginais des tourbillons de mélasse, des abysses indomptables, d’ineffables abîmes et des cyclones plus profonds que le plus profond des trous de la terre Longtemps j’ai cru que tout le monde portait en lui ce gouffre d’infranchissable, composé d’angoisses mutiques et de cris mutilés Plus tard seulement j’ai su que non J’ai compris avec stupeur que je m’étais trompée : chacun voit la vie avec son propre regard, teinté d’unique et de coquillages ambrés, fêlé ou déformé, les nuances sont trop nombreuses pour être décrites dans un poème et je me rends compte que je ne sais rien des yeux des autres et de leurs peaux qui respire des parfums inconnus Des milliards de fragrances ennuagent la terre où nous habitons et je ne suis consciente que de si peu d’entre elles Je nous pensais semblables mais nous étions uniques Quand nos regards divergeaient, je transformais ma pupille en lame d’acier, je ravalais un sanglot indompté qui venait se débattre dans ma gorge à m’en étouffer Je n’ai jamais accepté que nous puissions être différents, sans cesse je cherchais à vous rameuter à moi-même, pour m’unifier dans une étreinte désespérée J’ai toujours eu si peur de ce qui nous séparait J’ai toujours cru que je ne le supporterais pas, que le dragon coincé dans mon corps, ivre de panique, déchirerait ma peau, la folie rouge m’engloutirait, je finirais avalée, honteuse, dépossédée, sans sang, sans chair, sans amour Je ne sais pas pourquoi j’ai pensé cela si longtemps, pourquoi je me suis sentie en sursis, si vulnérable, prête à être attaquée n’importe quand, à l’affût, l’œil apeuré, acéré Il me semble que je ne croyais pas à ma propre existence Furtive et diaphane, elle était pour moi une erreur dans la marche du temps Un jour, on allait se rendre compte qu’on m’avait donné une vie et qu’elle ne m’était pas destinée, Je devais me faire toute petite, passer inaperçu, sans quoi on m’arrêterait et on me mettrait à la porte de ma propre existence Je me suis toujours apprêtée à mourir, et j’ai toujours craint de disparaître sans avoir pu incarner ma vie au moins une fois, une heure, une minute C’est votre regard seul, votre présence seule, votre approbation qui consentait à me rendre vivante Je pensais qu’il fallait mériter d’être en vie, et que ce mérite, vous seuls pouviez me le donner Seul mon cœur ouvrait d’autres possibles et dans l’instant se nourrissait de beauté, revêtait sur mes lèvres une douceur rosée Je l’ai laissé m’apprivoiser et j’ai nourri notre amitié Maintenant, chaque aube m’adoucit Je me familiarise avec ma propre existence et je lui reconnais son droit à être Quand je remercie la vie, ce n’est plus avec un sourire coupable mais avec un rire franc Comme si soudain j’avais pris racine et que je ne pouvais plus m’envoler au moindre souffle de vent Je sais que chacun passe sur cette terre avec ses folies emmurées, ses éclats de fée, ses lumières odorantes et ses abysses qui lui sont propres et ne se dévoileront peut-être jamais Je ne regrette rien et je ne m’apitoie pas J’ai une tendresse immense pour celle que j’ai été et pour tous les êtres sur cette planète qui tournent en rond, immobiles, dans la prison de leur tête, qui peinent à ouvrir la fenêtre Je les comprends tellement Parfois encore, je suffoque, ma boule dans la gorge revient, l’air s’absente, tout devient étriqué : moi, le temps, l’espace, l’amour Parfois encore j’ai mes murs qui sentent le moisi et mes vitres sont si pleines de crasses que je ne vois rien au dehors Ce grand nettoyage-là ne finit jamais Mais il en vaut la peine Pour toutes les grâces qui s’invitent dans nos vies quand on ne craint plus les courants d’air Pour tous les éclats de lumière qui ne viendront certes jamais nous apporter un sens sur un plateau d’argent Mais si une goutte de rosée traversée par l’aube a le droit d’exister sans raison, simplement d’être, pleinement, sans attente et sans tension, pourquoi pas nous ?

  En moi il y a un cheval fou Je ne pense pas qu’il soit fou Et je doute que ce soit vraiment un cheval pourtant parfois il est comme fou et se cabre comme un cheval il pourrait déchirer ma peau avec ses sabots et cependant il m’aime et veut me protéger en me protégeant me brise je tends la main vers lui et ses flancs sont couverts de sang il a si mal des blessures qui ne lui appartiennent pas je te dirai tout doux mon beau je te dirai des mots muets pour t’écouter, mon front sur le plateau de ton front je t’entends et aujourd’hui je suis responsable de ma vie je saurai me défendre s’il le faut tu peux ranger ta carapace de guerrier et tes fouets tu peux te reposer toi qui es sans cesse sur le qui vive à l’affût de la moindre bravade quand tu t’agiteras encore je te dirai tout doux mon beau je sais tous les risques que j’encours j’ai rangé le bouclier à lame d’argent je l’ai troqué contre l’orbe d’un lac ça n’a l’air de rien mais c’est très efficace dans le monde du dehors tu peux ruer et te cabrer mon cheval fou j’ai la peau endurcie des tanins du soleil et des nodosités des grands chênes un jour tu auras plus de paix et ta folie ne sera plus folie elle aura l’allure d’une danse étrange et fantasque si douce qu’on ne peut que la chérir pour toujours

  J’écris pour m’expliquer à moi-même Pour vous dire des choses que je ne savais pas mais que vous connaissiez peut-être Que chacun se pense être le miroir de l’autre et du monde Et que c’est faux Il y a une multitude invraisemblable de vérités Chacun porte sa capeline et son flambeau et traverse son chemin, rebrousse les forêts noires et espère découvrir la lumière derrière le repli d’une clairière J’aimerais avoir ce regard qui ne cherche pas à tout prix ce qui rassemble Et qui dans l’écart révèle une étreinte Nous sommes tous à la recherche d’un reste troublé de l’enfance Une zone à réparer Je construis ma cabane qui ne ressemble à aucune autre dans l'espoir de m’accoler avec grâce au reste du monde

  Je n’aime pas ces moments où la nuit redevient ennemie Je dois m’extraire de l’obscurité, marais des pensées Elles s’enroulent et s’emberlificotent à l’ombre de mon oreiller Je ne peux pas les empêcher d’exister et leur présence bruyante m’empêche de dormir Font planer une menace sans nom Ou trop terrifiante du moins pour être nommée Comme si le noir allait d’un instant à l’autre définitivement, irrémédiablement Tout engloutir C’est une luxuriance assoiffée, un foisonnement qui annonce un chaos terrible Je ne lutte plus Après les avoir senties m’assaillir pendant une heure je me lève Il ne sert à rien de se cacher Je suis là et elles aussi, je n’aime pas les sentir fourmiller autour de moi comme autour d’une carcasse à dépouiller Je me lève avec un essaim de corbeaux qui dépasse de ma tête Je me fais un café en chemin certains volatiles se sont déjà avoués vaincus Comme si le mouvement seul les décourageait Je me fais un café et me voilà dans la nuit et cette fois je suis seule avec un silence fatigué et le salon désert Dehors le jardin s’enroule dans une étole de brume et ses sillages cotonneux ajoutent encore de la respiration dans ma nuit attaquée Je respire j’écris De la vie j’aime les contrastes et la profondeur Je suis partie en exploration sous la surface du monde Là où d’autres voyages d’est en ouest ou de nord en sud Je ne fais que creuser pour m’engager vers le ciel Dans une verticalité vertigineuse et sublime D’une douceur et d’un amour que je sais absolus Je louvoie entre les abysses et les cîmes Quand d’autres voguent de New-York aux Carpates ou que sais-je encore Je n’aurai jamais de photographies à montrer aux amis, un soir d’hiver Seuls ces poèmes écrits sur un ordinateur Pour retracer l’ébauche d’un chemin Et partager avec vous maladroitement ces quelques pérégrinations intérieures

 
Lire la suite...

from betancourt

Disclaimer: el 08/11/2025 se casaron mis amigos M. y A. Me pidieron dar un discurso, el cual procedí a escribir. Luego el día de la boda, asaltado por una migraña horrible, olvidé de pronto todo lo que escribí aquí. Lo cuelgo aquí como un testimonio del amor que le tengo a ambos.

Buenas noches. Cuando M. y A. me pidieron que dijera unas palabras el día de hoy, me planteé dos retos: no usar chatgpt para redactarlo y no demorar demasiado al decirlo frente a ustedes. Veremos si puedo cumplir mi segundo reto.

Hoy en día vivimos en un mundo complicado, que nos propone retos que la gran mayoría preferiría no tener que superar. Y estos retos, que a veces nos parecen monstruos gigantes e inabarcables, a menudo consiguen robarnos de a poco la esperanza. Esperanza, no en un mundo mejor sino en uno que nos den ganas de vivir. Uno al que nos queramos dirigir y sobre el cual nos sintamos a salvo. Un mundo que a veces sentimos más lejos y a veces más cerca. Un mundo en el que estarán nuestros hijos o no.

Dado este panorama, cada día necesitamos aferrarnos más a aquellas cosas que nos dan esperanza: los amigos, las familias, las personas que nos acompañan, las metas, ustedes dos, los gatos. Y es por eso que estamos aquí hoy, celebrando su unión: porque queremos un mundo que nos haga ilusión habitar y nos hace ilusión habitar un mundo en el que ustedes dos están juntos. Porque les amamos como individuos, como pareja, como la esperanza de que a lo mejor el futuro sí será mejor.

Tengo mucho tiempo pensando que le hacemos bien al mundo a nuestro alrededor cuando nos hacemos el bien entre nosotros. El día de hoy, me gustaría decirles que no tengo duda de que se hacen bien y que gracias a ello nos hacen bien a todos nosotros. Estoy muy feliz de que empiecen esta nueva etapa el día de hoy (bueno, no exáctamente hoy porque se mudaron juntos hace como un mes) y espero que sepan que cuentan con mi apoyo en cualquier reto que venga en el futuro. Estoy seguro que también con el de todos los demás.

Gracias.

 
Leer más...

from Chemin tournant

Écriteur, je publie mes textes depuis 2008 sur Chemin tournant qui migre doucement à partir de ce jour de la côte Ouest (Wordpress) à la côte Est du Web, de SF la ville du brouillard à NYC qui ne dort jamais, sans quitter pourtant la grande forêt équatoriale de son origine, urbaine désormais, puisqu’après 25 ans passés dans la région de l’Est-Cameroun, je vis à Yaoundé.

Il ne s’agit pas d’une traversée bien périlleuse, mais d’un changement tout de même. J’ignore où il me conduira.

En attendant que le chemin tourne pleinement ici, vous pouvez découvrir mon labeur d’écriture à cette future ancienne adresse : chemin tournant.

Au tournant du chemin est mon infolettre mensuelle, gratuite et démodée.

 
Lire la suite... Discuss...

from Roscoe's Story

In Summary: * Another good day, with more quality family visiting than we've had for a long time. With the Miami / Ohio St. game currently in the 3rd qtr., I'm wondering if I'll be able to stay awake through the rest of the game. Eyes are getting heavy and the brain's starting to fog.

Prayers, etc.: My daily prayers

Health Metrics: * bw= 222.2 lbs. 100.80 kg * bp= 150/86 (67)

Exercise: * kegel pelvic floor exercise, half squats, calf raises, wall push-ups

Diet: * 06:30 – 1 banana, 1 peanut butter sandwich * 11:35 – plate of pancit * 15:00 – steak, home made vegetable soup, mashed potatoes, white rice, fresh fruit, cake

Activities, Chores, etc.: * 05:00 – listen to local news talk radio * 05:30 – read, pray, follow news reports from various sources, surf the socials, nap * 07:45 – bank accounts activity monitored * 11:23 – tuned into the ReliaQuest Bowl, Iowa Hawkeyes vs Vanderbilt Commodores, the game already in progress, Iowa leads 7 to 0 in the 1st qtr. * 11:30 to 18:00 – daughter-in-law and her fiance came over and spent the day visiting. He and I “sort of” watched Duke beat Arizona St. while doing “chores” to help the women as they fixed us a big meal. * 18:30 – listening now to NCAA Football, the Cotton Bowl Game, Miami Hurricanes vs Ohio St. Buckeyes

Chess: * 11:20 – moved in all pending CC games

 
Read more...

from hustin.art

The alchemist's fingers trembled as the orrery gears ground to a halt—wrong again. “Your calculations are off by 0.003 degrees, Brother Eliasz,” sneered the Archbishop's automaton, its voice like grinding cathedral pipes. Outside, the copper-plated spires of Neo-Lutetia hummed with forbidden electricity. I wiped quicksilver from my brow. “Bullshit. Your Ptolemaic models are obsolete.” The stained glass windows rattled as the celestial engines misfired. Somewhere below, the peasant riots began. The automaton's censer swung ominously. “Heresy has a price.” Damn right. I yanked the hidden lever. Let them choke on Kepler's truth for once.

#Scratch

 
더 읽어보기...

from SmarterArticles

When Stanford University's Provost charged the AI Advisory Committee in March 2024 to assess the role of artificial intelligence across the institution, the findings revealed a reality that most enterprise leaders already suspected but few wanted to admit: nobody really knows how to do this yet. The committee met seven times between March and June, poring over reports from Cornell, Michigan, Harvard, Yale, and Princeton, searching for a roadmap that didn't exist. What they found instead was a landscape of improvisation, anxiety, and increasingly urgent questions about who owns what, who's liable when things go wrong, and whether locking yourself into a single vendor's ecosystem is a feature or a catastrophic bug.

The promise is intoxicating. Large language models can answer customer queries, draft research proposals, analyse massive datasets, and generate code at speeds that make traditional software look glacial. But beneath the surface lies a tangle of governance nightmares that would make even the most seasoned IT director reach for something stronger than coffee. According to research from MIT, 95 per cent of enterprise generative AI implementations fail to meet expectations. That staggering failure rate isn't primarily a technology problem. It's an organisational one, stemming from a lack of clear business objectives, insufficient governance frameworks, and infrastructure not designed for the unique demands of inference workloads.

The Governance Puzzle

Let's start with the most basic question that organisations seem unable to answer consistently: who is accountable when an LLM generates misinformation, reveals confidential student data, or produces biased results that violate anti-discrimination laws?

This isn't theoretical. In 2025, researchers disclosed multiple vulnerabilities in Google's Gemini AI suite, collectively known as the “Gemini Trifecta,” capable of exposing sensitive user data and cloud assets. Around the same time, Perplexity's Comet AI browser was found vulnerable to indirect prompt injection, allowing attackers to steal private data such as emails and banking credentials through seemingly safe web pages.

The fundamental challenge is this: LLMs don't distinguish between legitimate instructions and malicious prompts. A carefully crafted input can trick a model into revealing sensitive data, executing unauthorised actions, or generating content that violates compliance policies. Studies show that as many as 10 per cent of generative AI prompts can include sensitive corporate data, yet most security teams lack visibility into who uses these models, what data they access, and whether their outputs comply with regulatory requirements.

Effective governance begins with establishing clear ownership structures. Organisations must define roles for model owners, data stewards, and risk managers, creating accountability frameworks that span the entire model lifecycle. The Institute of Internal Auditors' Three Lines Model provides a framework that some organisations have adapted for AI governance, with management serving as the first line of defence, internal audit as the second line, and the governing body as the third line, establishing the organisation's AI risk appetite and ethical boundaries.

But here's where theory meets practice in uncomfortable ways. One of the most common challenges in LLM governance is determining who is accountable for the outputs of a model that constantly evolves. Research underscores that operationalising accountability requires clear ownership, continuous monitoring, and mandatory human-in-the-loop oversight to bridge the gap between autonomous AI outputs and responsible human decision-making.

Effective generative AI governance requires establishing a RACI (Responsible, Accountable, Consulted, Informed) framework. This means identifying who is responsible for day-to-day model operations, who is ultimately accountable for outcomes, who must be consulted before major decisions, and who should be kept informed. Without this clarity, organisations risk creating accountability gaps where critical failures can occur without anyone taking ownership. The framework must also address the reality that LLMs deployed today may behave differently tomorrow, as models are updated, fine-tuned, or influenced by changing training data.

The Privacy Labyrinth

In late 2022, Samsung employees used ChatGPT to help with coding tasks, inputting proprietary source code. OpenAI's service was, at that time, using user prompts to further train their model. The result? Samsung's intellectual property potentially became part of the training data for a publicly available AI system.

This incident crystallised a fundamental tension in enterprise LLM deployment: the very thing that makes these systems useful (their ability to learn from context) is also what makes them dangerous. Fine-tuning embeds pieces of your data into the model's weights, which can introduce serious security and privacy risks. If those weights “memorise” sensitive content, the model might later reveal it to end users or attackers via its outputs.

The privacy risks fall into two main categories. First, input privacy breaches occur when data is exposed to third-party AI platforms during training. Second, output privacy issues arise when users can intentionally or inadvertently craft queries to extract private training data from the model itself. Research has revealed a mechanism in LLMs where if the model generates uncontrolled or incoherent responses, it increases the chance of revealing memorised text.

Different LLM providers handle data retention and training quite differently. Anthropic, for instance, does not use customer data for training unless there is explicit opt-in consent. Default retention is 30 days across most Claude products, but API logs shrink to seven days starting 15 September 2025. For organisations with stringent compliance requirements, Anthropic offers an optional Zero-Data-Retention addendum that ensures maximum data isolation. ChatGPT Enterprise and Business plans automatically do not use prompts or outputs for training, with no action required. However, the standard version of ChatGPT allows conversations to be reviewed by the OpenAI team and used for training future versions of the model. This distinction between enterprise and consumer tiers becomes critical when institutional data is at stake.

Universities face particular challenges because of regulatory frameworks like the Family Educational Rights and Privacy Act (FERPA) in the United States. FERPA requires schools to protect the privacy of personally identifiable information in education records. As generative artificial intelligence tools become more widespread, the risk of improper disclosure of sensitive data protected by FERPA increases.

At the University of Florida, faculty, staff, and students must exercise caution when providing inputs to AI models. Only publicly available data or data that has been authorised for use should be provided to the models. Using an unauthorised AI assistant during Zoom or Teams meetings to generate notes or transcriptions may involve sharing all content with the third-party vendor, which may use that data to train the model.

Instructors should consider FERPA guidelines before submitting student work to generative AI tools like chatbots (e.g., generating draft feedback on student work) or using tools like Zoom's AI Companion. Proper de-identification under FERPA requires removal of all personally identifiable information, as well as a reasonable determination made by the institution that a student's identity is not personally identifiable. Depending on the nature of the assignment, student work could potentially include identifiable information if they are describing personal experiences that would need to be removed.

The Vendor Lock-in Trap

Here's a scenario that keeps enterprise architects awake at night: you've invested eighteen months integrating OpenAI's GPT-4 into your customer service infrastructure. You've fine-tuned models, built custom prompts, trained your team, and embedded API calls throughout your codebase. Then OpenAI changes their pricing structure, deprecates the API version you're using, or introduces terms of service that conflict with your regulatory requirements. What do you do?

The answer, for most organisations, is exactly what the vendor wants you to do: nothing. Migration costs are prohibitive. A 2025 survey of 1,000 IT leaders found that 88.8 per cent believe no single cloud provider should control their entire stack, and 45 per cent say vendor lock-in has already hindered their ability to adopt better tools.

The scale of vendor lock-in extends beyond API dependencies. Gartner estimates that data egress fees consume 10 to 15 per cent of a typical cloud bill. Sixty-five per cent of enterprises planning generative AI projects say soaring egress costs are a primary driver of their multi-cloud strategy. These egress fees represent a hidden tax on migration, making it financially painful to move your data from one cloud provider to another. The vendors know this, which is why they often offer generous ingress pricing (getting your data in) whilst charging premium rates for egress (getting your data out).

So what's the escape hatch? The answer involves several complementary strategies. First, AI model gateways act as an abstraction layer between your applications and multiple model providers. Your code talks to the gateway's unified interface rather than to each vendor directly. The gateway then routes requests to the optimal underlying model (OpenAI, Anthropic, Gemini, a self-hosted LLaMA, etc.) without your application code needing vendor-specific changes.

Second, open protocols and standards are emerging. Anthropic's open-source Model Context Protocol and LangChain's Agent Protocol promise interoperability between LLM vendors. If an API changes, you don't need a complete rewrite, just a new connector.

Third, local and open-source LLMs are increasingly preferred. They're cheaper, more flexible, and allow full data control. Survey data shows strategies that are working: 60.5 per cent keep some workloads on-site for more control; 53.8 per cent use cloud-agnostic tools not tied to a single provider; 50.9 per cent negotiate contract terms for better portability.

A particularly interesting development is Perplexity's TransferEngine communication library, which addresses the challenge of running large models on AWS's Elastic Fabric Adapter by acting as a universal translator, abstracting away hardware-specific details. This means that the same code can now run efficiently on both NVIDIA's specialised hardware and AWS's more general-purpose infrastructure. This kind of abstraction layer represents the future of portable AI infrastructure.

The design principle for 2025 should be “hybrid-first, not hybrid-after.” Organisations should embed portability and data control from day one, rather than treating them as bolt-ons or manual migrations. A cloud exit strategy is a comprehensive plan that outlines how an organisation can migrate away from its current cloud provider with minimal disruption, cost, or data loss. Smart enterprises treat cloud exit strategies as essential insurance policies against future vendor dependency.

The Procurement Minefield

If you think negotiating a traditional SaaS contract is complicated, wait until you see what LLM vendors are putting in front of enterprise legal teams. LLM terms may appear like other software agreements, but certain terms deserve far more scrutiny. Widespread use of LLMs is still relatively new and fraught with unknown risks, so vendors are shifting the risks to customers. These products are still evolving and often unreliable, with nearly every contract containing an “AS-IS” disclaimer.

When assessing LLM vendors, enterprises should scrutinise availability, service-level agreements, version stability, and support. An LLM might perform well in standalone tests but degrade under production load, failing to meet latency SLAs or producing incomplete responses. The AI service description should be as specific as possible about what the service does. Choose data ownership and privacy provisions that align with your regulatory requirements and business needs.

Here's where things get particularly thorny: vendor indemnification for third-party intellectual property infringement claims has long been a staple of SaaS contracts, but it took years of public pressure and high-profile lawsuits for LLM pioneers like OpenAI to relent and agree to indemnify users. Only a handful of other LLM vendors have followed suit. The concern is legitimate. LLMs are trained on vast amounts of internet data, some of which may be copyrighted material. If your LLM generates output that infringes on someone's copyright, who bears the legal liability? In traditional software, the vendor typically indemnifies you. In AI contracts, vendors have tried to push this risk onto customers.

Enterprise buyers are raising their bar for AI vendors. Expect security questionnaires to add AI-specific sections that ask about purpose tags, retrieval redaction, cross-border routing, and lineage. Procurement rules increasingly demand algorithmic-impact assessments alongside security certifications for public accountability. Customers, particularly enterprise buyers, demand transparency about how companies use AI with their data. Clear governance policies, third-party certifications, and transparent AI practices become procurement requirements and competitive differentiators.

The Regulatory Tightening Noose

In 2025, the European Union's AI Act introduced a tiered, risk-based classification system, categorising AI systems as unacceptable, high, limited, or minimal risk. Providers of general-purpose AI now have transparency, copyright, and safety-related duties. The Act's extraterritorial reach means that organisations outside Europe must still comply if they're deploying AI systems that affect EU citizens.

In the United States, Executive Order 14179 guides how federal agencies oversee the use of AI in civil rights, national security, and public services. The White House AI Action Plan calls for creating an AI procurement toolbox managed by the General Services Administration that facilitates uniformity across the Federal enterprise. This system would allow any Federal agency to easily choose among multiple models in a manner compliant with relevant privacy, data governance, and transparency laws.

The Enterprise AI Governance and Compliance Market is expected to reach 9.5 billion US dollars by 2035, likely to surge at a compound annual growth rate of 15.8 per cent. Between 2020 and 2025, this market expanded from 0.4 billion to 2.2 billion US dollars, representing cumulative growth of 450 per cent. This explosive growth signals that governance is no longer a nice-to-have. It's a fundamental requirement for AI deployment.

ISO 42001 allows certification of an AI management system that integrates well with ISO 27001 and 27701. NIST's Generative AI profile gives a practical control catalogue and shared language for risk. Financial institutions face intense regulatory scrutiny, requiring model risk management applying OCC Bulletin 2011-12 framework to all AI/ML models with rigorous validation, independent review, and ongoing monitoring. The NIST AI Risk Management Framework offers structured, risk-based guidance for building and deploying trustworthy AI, widely adopted across industries for its practical, adaptable advice across four principles: govern, map, measure, and manage.

The European Question

For organisations operating in Europe or handling European citizens' data, the General Data Protection Regulation introduces requirements that fundamentally reshape how LLM deployments must be architected. The GDPR restricts how personal data can be transferred outside the EU. Any transfer of personal data to non-EU countries must meet adequacy, Standard Contractual Clauses, Binding Corporate Rules, or explicit consent requirements. Failing to meet these conditions can result in fines up to 20 million euros or 4 per cent of global annual revenue.

Data sovereignty is about legal jurisdiction: which government's laws apply. Data residency is about physical location: where your servers actually sit. A common scenario that creates problems: a company stores European customer data in AWS Frankfurt (data residency requirement met), but database administrators access it from the US headquarters. Under GDPR, that US access might trigger cross-border transfer requirements regardless of where the data physically lives.

Sovereign AI infrastructure refers to cloud environments that are physically and legally rooted in national or EU jurisdictions. All data including training, inference, metadata, and logs must remain physically and logically located in EU territories, ensuring compliance with data transfer laws and eliminating exposure to foreign surveillance mandates. Providers must be legally domiciled in the EU and not subject to extraterritorial laws like the U.S. CLOUD Act, which allows US-based firms to share data with American authorities, even when hosted abroad.

OpenAI announced data residency in Europe for ChatGPT Enterprise, ChatGPT Edu, and the API Platform, helping organisations operating in Europe meet local data sovereignty requirements. For European companies using LLMs, best practices include only engaging providers who are willing to sign a Data Processing Addendum and act as your processor. Verify where your data will be stored and processed, and what safeguards are in place. If a provider cannot clearly answer these questions or hesitates on compliance commitments, consider it a major warning sign.

Achieving compliance with data residency and sovereignty requirements requires more than geographic awareness. It demands structured policy, technical controls, and ongoing legal alignment. Hybrid cloud architectures enable global orchestration with localised data processing to meet residency requirements without sacrificing performance.

The Self-Hosting Dilemma

The economics of self-hosted versus cloud-based LLM deployment present a decision tree that looks deceptively simple on the surface but becomes fiendishly complex when you factor in hidden costs and the rate of technological change.

Here's the basic arithmetic: you need more than 8,000 conversations per day to see the cost of having a relatively small model hosted on your infrastructure surpass the managed solution by cloud providers. Self-hosted LLM deployments involve substantial upfront capital expenditures. High-end GPU configurations suitable for large model inference can cost 100,000 to 500,000 US dollars or more, depending on performance requirements.

To generate approximately one million tokens (about as much as an A80 GPU can produce in a day), it would cost 0.12 US dollars on DeepInfra via API, 0.71 US dollars on Azure AI Foundry via API, 43 US dollars on Lambda Labs, or 88 US dollars on Azure servers. In practice, even at 100 million tokens per day, API costs (roughly 21 US dollars per day) are so low that it's hard to justify the overhead of self-managed GPUs on cost alone.

But cost isn't the only consideration. Self-hosting offers more control over data privacy since the models operate on the company's own infrastructure. This setup reduces the risk of data breaches involving third-party vendors and allows implementing customised security protocols. Open-source LLMs work well for research institutions, universities, and businesses that handle high volumes of inference and need models tailored to specific requirements. By self-hosting open-source models, high-throughput organisations can avoid the growing per-token fees associated with proprietary APIs.

However, hosting open-source LLMs on your own infrastructure introduces variable costs that depend on factors like hardware setup, cloud provider rates, and operational requirements. Additional expenses include storage, bandwidth, and associated services. Open-source models rely on internal teams to handle updates, security patches, and performance tuning. These ongoing tasks contribute to the daily operational budget and influence long-term expenses.

For flexibility and cost-efficiency with low or irregular traffic, LLM-as-a-Service is often the best choice. LLMaaS platforms offer compelling advantages for organisations seeking rapid AI adoption, minimal operational complexity, and scalable cost structures. The subscription-based pricing models provide cost predictability and eliminate large upfront investments, making AI capabilities accessible to organisations of all sizes.

The Pedagogy Versus Security Tension

Universities face a unique challenge: they need to balance pedagogical openness with security and privacy requirements. The mission of higher education includes preparing students for a world where AI literacy is increasingly essential. Banning these tools outright would be pedagogically irresponsible. But allowing unrestricted access creates governance nightmares.

At Stanford, the MBA and MSx programmes allow instructors to not ban student use of AI tools for take-home coursework, including assignments and examinations. Instructors may choose whether to allow student use of AI tools for in-class work. PhD and undergraduate courses follow the Generative AI Policy Guidance from Stanford's Office of Community Standards. This tiered approach recognises that different educational contexts require different policies.

The 2025 EDUCAUSE AI Landscape Study revealed that fewer than 40 per cent of higher education institutions surveyed have AI acceptable use policies. Many institutions do not yet have a clear, actionable AI strategy, practical guidance, or defined governance structures to manage AI use responsibly. Key takeaways from the study include a rise in strategic prioritisation of AI, growing institutional governance and policies, heavy emphasis on faculty and staff training, widespread AI use for teaching and administrative tasks, and notable disparities in resource distribution between larger and smaller institutions.

Universities face particular challenges around academic integrity. Research shows that 89 per cent of students admit to using AI tools like ChatGPT for homework. Studies report that approximately 46.9 per cent of students use LLMs in their coursework, with 39 per cent admitting to using AI tools to answer examination or quiz questions.

Universities primarily use Turnitin, Copyleaks, and GPTZero for AI detection, spending 2,768 to 110,400 US dollars per year on these tools. Many top schools deactivated AI detectors in 2024 to 2025 due to approximately 4 per cent false positive rates. It can be very difficult to accurately detect AI-generated content, and detection tools claim to identify work as AI-generated but cannot provide evidence for that claim. Human experts who have experience with using LLMs for writing tasks can detect AI with 92 per cent accuracy, though linguists without such experience were not able to achieve the same level of accuracy.

Experts recommend the use of both human reasoning and automated detection. It is considered unfair to exclusively use AI detection to evaluate student work due to false positive rates. After receiving a positive prediction, next steps should include evaluating the student's writing process and comparing the flagged text to their previous work. Institutions must clearly and consistently articulate their policies on academic integrity, including explicit guidelines on appropriate and inappropriate use of AI tools, whilst fostering open dialogues about ethical considerations and the value of original academic work.

The Enterprise Knowledge Bridge

Whilst fine-tuning models with proprietary data introduces significant privacy risks, Retrieval-Augmented Generation has emerged as a safer and more cost-effective approach for injecting organisational knowledge into enterprise AI systems. According to Gartner, approximately 80 per cent of enterprises are utilising RAG methods, whilst about 20 per cent are employing fine-tuning techniques.

RAG operates through two core phases. First comes ingestion, where enterprise content is encoded into dense vector representations called embeddings and indexed so relevant items can be efficiently retrieved. This preprocessing step transforms documents, database records, and other unstructured content into a machine-readable format that enables semantic search. Second is retrieval and generation. For a user query, the system retrieves the most relevant snippets from the indexed knowledge base and augments the prompt sent to the LLM. The model then synthesises an answer that can include source attributions, making the response both more accurate and transparent.

By grounding responses in retrieved facts, RAG reduces the likelihood of hallucinations. When an LLM generates text based on retrieved documents rather than attempting to recall information from training, it has concrete reference material to work with. This doesn't eliminate hallucinations entirely (models can still misinterpret retrieved content) but it substantially improves reliability compared to purely generative approaches. RAG delivers substantial return on investment, with organisations reporting 30 to 60 per cent reduction in content errors, 40 to 70 per cent faster information retrieval, and 25 to 45 per cent improvement in employee productivity.

RAG Vector-Based AI leverages vector embeddings to retrieve semantically similar data from dense vector databases, such as Pinecone or Weaviate. The approach is based on vector search, a technique that converts text into numerical representations (vectors) and then finds documents that are most similar to a user's query. Research findings reveal that enterprise adoption is largely in the experimental phase: 63.6 per cent of implementations utilise GPT-based models, and 80.5 per cent rely on standard retrieval frameworks such as FAISS or Elasticsearch.

A strong data governance framework is foundational to ensuring the quality, integrity, and relevance of the knowledge that fuels RAG systems. Such a framework encompasses the processes, policies, and standards necessary to manage data assets effectively throughout their lifecycle. From data ingestion and storage to processing and retrieval, governance practices ensure that the data driving RAG solutions remain trustworthy and fit for purpose. Ensuring data privacy and security within a RAG-enhanced knowledge management system is critical. To make sure RAG only retrieves data from authorised sources, companies should implement strict role-based permissions, multi-factor authentication, and encryption protocols.

Azure Versus Google Versus AWS

When it comes to enterprise-grade LLM platforms, three dominant cloud providers have emerged. The AI landscape in 2025 is defined by Azure AI Foundry (Microsoft), AWS Bedrock (Amazon), and Google Vertex AI. Each brings a unique approach to generative AI, from model offerings to fine-tuning, MLOps, pricing, and performance.

Azure OpenAI distinguishes itself by offering direct access to robust models like OpenAI's GPT-4, DALL·E, and Whisper. Recent additions include support for xAI's Grok Mini and Anthropic Claude. For teams whose highest priority is access to OpenAI's flagship GPT models within an enterprise-grade Microsoft environment, Azure OpenAI remains best fit, especially when seamless integration with Microsoft 365, Cognitive Search, and Active Directory is needed.

Azure OpenAI is hosted within Microsoft's highly compliant infrastructure. Features include Azure role-based access control, Customer Lockbox (requiring customer approval before Microsoft accesses data), private networking to isolate model endpoints, and data-handling transparency where customer prompts and responses are not stored or used for training. Azure OpenAI supports HIPAA, GDPR, ISO 27001, SOC 1/2/3, FedRAMP High, HITRUST, and more. Azure offers more on-premises and hybrid cloud deployment options compared to Google, enabling organisations with strict data governance requirements to maintain greater control.

Google Cloud Vertex AI stands out with its strong commitment to open source. As the creators of TensorFlow, Google has a long history of contributing to the open-source AI community. Vertex AI offers an unmatched variety of over 130 generative AI models, advanced multimodal capabilities, and seamless integration with Google Cloud services.

Organisations focused on multi-modal generative AI, rapid low-code agent deployment, or deep integration with Google's data stack will find Vertex AI a compelling alternative. For enterprises with large datasets, Vertex AI's seamless connection with BigQuery enables powerful analytics and predictive modelling. Google Vertex AI is more cost-effective, providing a quick return on investment with its scalable models.

The most obvious difference is in Google Cloud's developer and API focus, whereas Azure is geared more towards building user-friendly cloud applications. Enterprise applications benefit from each platform's specialties: Azure OpenAI excels in Microsoft ecosystem integration, whilst Google Vertex AI excels in data analytics. For teams using AWS infrastructure, AWS Bedrock provides access to multiple foundation models from different providers, offering a middle ground between Azure's Microsoft-centric approach and Google's open-source philosophy.

Prompt Injection and Data Exfiltration

In AI security vulnerabilities reported to Microsoft, indirect prompt injection is one of the most widely-used techniques. It is also the top entry in the OWASP Top 10 for LLM Applications and Generative AI 2025. A prompt injection vulnerability occurs when user prompts alter the LLM's behaviour or output in unintended ways.

With a direct prompt injection, an attacker explicitly provides a cleverly crafted prompt that overrides or bypasses the model's intended safety and content guidelines. With an indirect prompt injection, the attack is embedded in external data sources that the LLM consumes and trusts. The rise of multimodal AI introduces unique prompt injection risks. Malicious actors could exploit interactions between modalities, such as hiding instructions in images that accompany benign text.

One of the most widely-reported impacts is the exfiltration of the user's data to the attacker. The prompt injection causes the LLM to first find and/or summarise specific pieces of the user's data and then to use a data exfiltration technique to send these back to the attacker. Several data exfiltration techniques have been demonstrated, including data exfiltration through HTML images, causing the LLM to output an HTML image tag where the source URL is the attacker's server.

Security controls should combine input/output policy enforcement, context isolation, instruction hardening, least-privilege tool use, data redaction, rate limiting, and moderation with supply-chain and provenance controls, egress filtering, monitoring/auditing, and evaluations/red-teaming.

Microsoft recommends preventative techniques like hardened system prompts and Spotlighting to isolate untrusted inputs, detection tools such as Microsoft Prompt Shields integrated with Defender for Cloud for enterprise-wide visibility, and impact mitigation through data governance, user consent workflows, and deterministic blocking of known data exfiltration methods.

Security leaders should inventory all LLM deployments (you can't protect what you don't know exists), discover shadow AI usage across your organisation, deploy real-time monitoring and establish behavioural baselines, integrate LLM security telemetry with existing SIEM platforms, establish governance frameworks mapping LLM usage to compliance requirements, and test continuously by red teaming models with adversarial prompts. Traditional IT security models don't fully capture the unique risks of AI systems. You need AI-specific threat models that account for prompt injection, model inversion attacks, training data extraction, and adversarial inputs designed to manipulate model behaviour.

Lessons from the Field

So what are organisations that are succeeding actually doing differently? The pattern that emerges from successful deployments is not particularly glamorous: it's governance all the way down.

Organisations that had AI governance programmes in place before the generative AI boom were generally able to better manage their adoption because they already had a committee up and running that had the mandate and the process in place to evaluate and adopt generative AI use cases. They already had policies addressing unique risks associated with AI applications, including privacy, data governance, model risk management, and cybersecurity.

Establishing ownership with a clear responsibility assignment framework prevents rollout failure and creates accountability across security, legal, and engineering teams. Success in enterprise AI governance requires commitment from the highest levels of leadership, cross-functional collaboration, and a culture that values both innovation and responsible deployment. Foster collaboration between IT, security, legal, and compliance teams to ensure a holistic approach to LLM security and governance.

Organisations that invest in robust governance frameworks today will be positioned to leverage AI's transformative potential whilst maintaining the trust of customers, regulators, and stakeholders. In an environment where 95 per cent of implementations fail to meet expectations, the competitive advantage goes not to those who move fastest, but to those who build sustainable, governable, and defensible AI capabilities.

The truth is that we're still in the early chapters of this story. The governance models, procurement frameworks, and security practices that will define enterprise AI in a decade haven't been invented yet. They're being improvised right now, in conference rooms and committee meetings at universities and companies around the world. The organisations that succeed will be those that recognise this moment for what it is: not a race to deploy the most powerful models, but a test of institutional capacity to govern unprecedented technological capability.

The question isn't whether your organisation will use large language models. It's whether you'll use them in ways that you can defend when regulators come knocking, that you can migrate away from when better alternatives emerge, and that your students or customers can trust with their data. That's a harder problem than fine-tuning a model or crafting the perfect prompt. But it's the one that actually matters.


References and Sources


Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

Join the writers on Write.as.

Start writing or create a blog