from Holmliafolk

a man in a hat, blue sky in the background

I'm a photographer. A wedding photographer. The most important thing to remember as a photographer is that you photograph people in motion. Photo sur le vif, as it's called. Pictures of people standing stock-still, looking into the camera with the light flat on their faces, are flat and boring. There's no life in them.

It's also important to be presentable. A nice jacket, a nice scarf, preferably a hat. It's about making a good, professional impression. As a photographer, you care about such things.

Also, you don't set the camera to auto. And preferably, you take the lens cap off.

Is this where you want me to stand?

 
Read more...

from Roscoe's Quick Notes

Big Ten Basketball

Purdue vs UCLA.

From the ongoing Big Ten Men's Basketball Tournament, I'll be following a Semi-Final Round game this afternoon, Purdue vs UCLA. Approximate start time for this is 2:30 PM Central Time, depending on how long the earlier game takes to finish.

And the adventure continues.


from The happy place

Slept like a baby. I dreamed that I dreamed that I was losing a front tooth; when I held it in a pinch grip, it came loose, so I pressed it back down into the bloody gum for it to grow back there. I pushed it down deep, even deeper than the other teeth. I thought I had made it stick, but when I let go of it, it came loose again. I regretted not brushing my teeth better, because that would have prevented this.

Then I realised, to my relief, that I’d just dreamed that; but the same tooth came loose again, this time in my outer dream.

Having woken up from all of these dreams, with all my teeth still in place, especially the one I’d been dreaming about, I felt a sense of thankfulness and decided to go out into this cold, cloudy, wet and dirty weather, which made me think of a soggy, sour dishcloth, and take myself out for a run. I saw branches from the trees lining the roads lying in puddles and on the roadside, blown off by yesterday’s winds. And my body felt slow, and every movement of my legs felt uncomfortable, as if they’d been used too much lately.

It still felt good

I finished my run and have two complete rows of teeth

And so why shouldn’t I feel happy?


from 下川友

On the way home from work, feeling tired, I went to see a concert by a band I used to listen to a lot as a student; I’d bought the ticket some time ago.

The band was marking its 30th anniversary. Back then I only listened to their albums and never went to a show, so I’m glad I finally made it. What’s more, the venue was NHK Hall, so I got to watch from a seat. It was also great to hear songs from those days in the encore.

The next day, I came down with a high fever. 39°C.

Not much of a runny nose or cough. Just a heaviness in my body, about five times what it usually is. Going by how my body felt, I had a vague sense this wasn’t something bacterial.

As a kid, I’d stay home from school at 37°C. But these days even suffering through a fever feels like a hassle, so I kept tinkering on my PC as usual.

Around noon, my body was strangely craving minerals. I had vegetable juice, cut pineapple, and yogurt.

While working at the PC, the area around my left chest started to cramp. My muscles cramp all the time, so I figured it was the usual thing.

But the pain kept growing stronger until I couldn’t work any more.

I paced around the room trying to take my mind off the pain, but gradually I couldn’t even stand, and at the last possible moment I dove into the futon.

Between the misery of the high fever and the muscle spasms, my body felt like it was about to shut down.

Meanwhile, my wife had called an ambulance.

By the time the paramedics arrived, the spasms had subsided, and since I could walk on my own after all, they gave me a quick once-over and then my wife and I walked to the hospital together.

The diagnosis: not influenza and not COVID after all. I was prescribed a fever reducer and sent home.

High fever or not, as if to declare myself invincible, I had the curry my wife made for dinner.


from 💚

Our Father Who art in heaven Hallowed be Thy name Thy Kingdom come Thy will be done on Earth as it is in heaven Give us this day our daily Bread And forgive us our trespasses As we forgive those who trespass against us And lead us not into temptation But deliver us from evil

Amen

Jesus is Lord! Come Lord Jesus!

Come Lord Jesus! Christ is Lord!


from Unvarnished diary of a lill Japanese mouse

JOURNAL, 14 March 2026. The Return of the American

I had just finished with a group of teenagers and young adults when Yôko came to tell me the American was back and wanted to see me. I received him in the dôjô. He started out in half English, half mangled and very hesitant Japanese, rudimentary let’s say. He told me he had thought things over, believed he had understood something, and wished to apologise. And just like that he performed a proper dogeza in front of the students, and there, I admit, he impressed me a little. Yôko’s eyes went as round as bowls. Total silence in the room. OK, I told him, you’ve passed the first step. Thank you, I acknowledge your efforts; this is what I was hoping for without quite believing it. You surprise me. Agreed: work on the language so we can communicate easily without my having to translate, keep progressing in the culture, and I will take you on as a student. He left with a smile this time, and a proper bow: arigatogozaimasu sensei. I’m thrilled with this development, and I hope you understand that.


from Golden Splendors

Marigold results from Tokyo, Japan at Korakuen Hall on Saturday, March 14, 2026 live on Wrestle Universe:

Thom Fain and Sonny Gutierrez were the English broadcast team. Fain said he was filling in for Stewart Fulton.

Yuuka Yamazaki defeated AI by submission with a single leg crab hold in 7:36. This was the in-ring debut of AI. The graphics in Japanese and English had her name as just AI but the ring announcer called her “Lady AI”.

Nagisa Tachibana won a 3-Way Match over Hummingbird and Shinno by pinning Hummingbird with a La Magistral cradle set up by a stunner and a springboard flying bodypress in 5:46.

Seri Yamaoka pinned Rea Seto with her clutch finisher in 10:25.

Angel Hayze and Maddy Morgan defeated Chika Goto and Nao Ishikawa when Morgan pinned Ishikawa after a Moonsault set up by a superkick from Hayze in 6:32. The announcers said this was the Japan debut for Hayze and Morgan who are from the UK. They said Morgan just turned 18 and she and Hayze will be here for “several more months” on this tour.

Utami Hayashishita pinned Syoko Koshino after the Torture Rack Bomb in 8:28. This was Match 2 of Koshino’s Seven Match Trial Series. So far she’s 0-2 after losing the first one to Miku Aono.

Megaton pinned Independent World Jr. Champion Kuroshio TOKYO Japan with a horizontal cradle to win the title in 9:24. The title originally comes from the defunct Frontier Martial-Arts Wrestling promotion of the early 1990s. Marigold owner Rossy Ogawa did the photo op with the belt and both wrestlers before the match, and you could see the FMW logo is still on the belt. It has mostly been defended recently in Just Tap Out. This was Japan’s first defense of it; he had won it from Akira Jumonji in JTO on 1/4/26. Megaton hadn’t won a match in Marigold in nearly a year but was still given the chance today despite coming in with 65 losses. One of the announcers had a funny line: “Megaton’s win/loss record looks like my credit score after a Vegas weekend.”

Marigold Twin Star Tag Team Champions Misa Matsui and CHIAKI defeated Miku Aono and Kouki Amarei when CHIAKI pinned Amarei after a diving leg drop in 17:51. Amarei was about to pin CHIAKI at one point after giving her the diving twisted splash finisher but Matsui pulled the referee out of the ring to stop the count. Aono is the current Marigold World Champion.

Marigold 3D Trios Champions Mai Sakurai, Natsumi Showzuki, and Erina Yamanaka defeated Mayu Iwatani, Victoria Yuzuki, and Komomo Minami when Showzuki pinned Minami after a diving meteora set up by off the top rope moves from Sakurai and Yamanaka in 13:13. Iwatani had the Marigold Superfly Title belt and the GHC Women’s Title belt with her. Marigold and Pro Wrestling NOAH have had a partnership for a couple of years now. Iwatani is currently the GHC (Global Honored Crown) Women’s Champion in NOAH as well as the Superfly Champion in Marigold. For the record, the 3D in the Marigold Trios Titles stands for “Dream, Diamond, Destiny”.

After the main event, Showzuki got on the mic and told Mayu Iwatani she wants to challenge her for both the Superfly Title and the GHC Women’s Title. Shinno, Yuuka Yamazaki, and Seri Yamaoka then came out to challenge for the 3D Trios Titles on March 29.


from Iain Harper's Blog

In the late 1990s and early 2000s, a wave of filmmakers made what seemed like an obvious choice. Film stock was expensive, temperamental, required careful storage, and would eventually decay. Digital was immediate, endlessly copyable, and felt like the future. Why keep shooting on a format invented in the 1880s when you could embrace the new millennium properly?

Two decades later, those cutting-edge digital productions are now far harder to restore to modern standards than films shot on celluloid fifty years earlier. A well-preserved 35mm negative from 1955 can yield a gorgeous 4K transfer. A digital feature from 2003, shot on what was then state-of-the-art equipment, might be stuck at standard definition forever.

Days of Future Past

When Danny Boyle shot 28 Days Later in 2002, he chose Canon XL-1 miniDV cameras. The decision was partly practical, since the lightweight cameras allowed guerrilla-style shooting on London’s deserted streets, and partly aesthetic. The harsh, blown-out digital look gave the film an immediacy that felt perfect for a story about civilisation’s collapse.

Cillian Murphy in 28 Days Later

The cameras recorded at 720×576 pixels, which is PAL standard definition. For context, a modern iPhone shoots 4K video at 3840×2160 pixels, with roughly 20 times more information in every frame.
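For the record, the raw pixel arithmetic behind that comparison can be checked in a couple of lines:

```python
# Pixels per frame: PAL standard definition vs 4K UHD.
pal_sd = 720 * 576      # 414,720 pixels
uhd_4k = 3840 * 2160    # 8,294,400 pixels
print(f"4K carries {uhd_4k / pal_sd:.0f}x the pixels of PAL SD")
# → 4K carries 20x the pixels of PAL SD
```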

At the time, this didn’t seem like a problem. Standard definition was the norm. DVDs looked fantastic compared to VHS. Nobody was thinking about what these films would look like in twenty years.

In contrast, when you shoot on 35mm film, the main standard for movie cameras, you’re not really capturing a fixed resolution. You’re exposing silver halide crystals to light, creating a physical record of the scene with an almost absurd amount of potential detail. The exact “resolution” depends on the film stock and how you scan it, but modern estimates put 35mm somewhere between 4K and 8K equivalent. Some argue even higher for large-format stock such as 65mm.

More importantly, that detail actually exists in the negative. It’s been sitting there since the day the film was shot, waiting for scanning technology to catch up. When we remaster Lawrence of Arabia or 2001: A Space Odyssey in 4K, we’re not inventing detail. We’re finally extracting what was always there.

Digital video from the early 2000s doesn’t work that way. What was captured is what exists. Those 720×576 pixels aren’t hiding secret information underneath. The cameras had a fixed resolution, and that resolution is now embarrassingly low by contemporary standards.

The Uncanny Valley of Upscaling

“But wait,” you might reasonably ask, “can’t we just use AI to upscale these films?”

We can. And increasingly, we do. Tools have become remarkably sophisticated at adding plausible detail to low-resolution footage. The results can be impressive, especially for content that wasn’t intended to look “cinematic” in the first place such as old TV shows, news footage and home videos.

The problem is that word. Plausible. AI upscaling doesn’t reveal hidden details. It hallucinates detail that looks like it could have been there. The algorithm examines a blocky, pixelated face and generates what a higher-resolution version of that face might look like based on patterns it learned from millions of other faces.

Sometimes this works brilliantly. Sometimes you get something that sits in a weird uncanny valley, technically sharper but somehow wrong in ways that are hard to articulate. Textures that feel synthetic, skin that looks waxy and fabric that doesn’t quite behave like fabric.

For films that were shot on early digital for aesthetic reasons, aggressive AI processing creates an additional problem. The lo-fi digital texture of 28 Days Later isn’t a flaw to be corrected, it’s part of what made the movie work. Clean it up too much and you lose something that can’t be put back.

This puts restoration teams in an impossible position. Do you present the film as it was intended to be seen, knowing modern audiences on 65-inch 4K screens will notice every compression artifact? Or do you “improve” it with AI, knowing you’re changing the director’s original vision at its core?

A Brief History of Bad Timing

The 2000s were uniquely cursed in this regard. It was the precise moment when digital filmmaking became viable enough that serious directors started using it, but before the technology had matured to resolutions that would remain acceptable long-term.

Consider the timeline.

Late 1990s — Digital video exists but is mostly confined to low-budget indie films and documentaries. The Dogme 95 movement embraces the format’s limitations as aesthetic virtues. Thomas Vinterberg shoots The Celebration on miniDV in 1998.

2000–2002 — Early digital starts appearing in mainstream productions. George Lucas shoots Attack of the Clones on Sony CineAlta cameras at 1080p, declaring it the future of cinema. Boyle shoots 28 Days Later on miniDV. The gates are opening.

2003–2006 — The wave crests. Michael Mann shoots Collateral and Miami Vice on Thomson Viper cameras. David Lynch makes Inland Empire on a Sony PD-150, declaring he’ll never shoot film again. Robert Rodriguez pushes digital filmmaking into family blockbusters with Spy Kids sequels and Sin City.

2007–2010 — The first truly high-resolution digital cinema cameras appear. The Red One launches in 2007, capable of shooting at 4K. The Arri Alexa follows in 2010. From this point forward, digital films generally capture enough resolution to survive future format changes (subject to future radical changes to screen technology).

That roughly seven-year window, let’s call it 2000 to 2007, is a generation of films that were technologically progressive for their time and are now technologically trapped.

Some of the most visually distinctive work of the era lives in this limbo. Inland Empire’s hallucinatory nightmare textures were inseparable from the crude DV format Lynch used. Dancer in the Dark’s raw emotional brutality came partly from being shot on 100 consumer camcorders simultaneously. Open Water’s horror worked because it felt like you were watching somebody’s holiday video turn into a snuff film.

George Lucas enters, stage right

Attack of the Clones (2002) was the first major studio production shot entirely on digital cameras. Lucas had been pushing for this transition for years, convinced that digital was not only the future but actively superior to film.

The Sony CineAlta cameras used for Episodes II and III captured at 1080p. By the standards of 2002, this was impressive, true high definition when most consumers were still watching standard-def broadcasts. By current standards, it’s a quarter of 4K resolution by pixel count and a sixteenth of 8K.
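The same arithmetic puts the prequels’ 1080p capture in context:

```python
# Pixels per frame at each resolution.
hd_1080p = 1920 * 1080   # 2,073,600
uhd_4k = 3840 * 2160     # 8,294,400
uhd_8k = 7680 * 4320     # 33,177,600
print(hd_1080p / uhd_4k)  # 0.25   (a quarter of 4K)
print(hd_1080p / uhd_8k)  # 0.0625 (a sixteenth of 8K)
```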

4K releases of the prequel trilogy exist, but they’re heavily upscaled rather than derived from native high-resolution sources. Watch them on a large modern display and you’ll notice a certain softness, a lack of the crystalline detail present in the original trilogy restorations (which were shot on film and could be properly scanned at 4K).

The irony here is that Lucas was so convinced of digital’s superiority that he also went back and “improved” the original trilogy with digital effects, effects that were rendered at resolutions that now look dated while the underlying film footage remains timeless.

Why Film Ages Better Than Files

A film negative is a physical object that can be re-examined with improving technology. Better scanners extract more detail. Better colour science improves the transfer. The negative hasn’t changed, but our ability to read it has.

A digital file is a fixed quantity. The numbers in the file are the numbers in the file. You can process them differently, upscale them algorithmically, but you can’t extract information that was never captured.

There’s also the question of format obsolescence. Film is remarkably stable as a storage medium. A properly stored negative from 1920 can still be projected or scanned today using the same principles as when it was created. The format hasn’t changed because the format is physical.

Digital formats change constantly. Codecs fall out of favour. Compression standards evolve and storage media become unreadable. A miniDV tape from 2003 requires increasingly rare hardware to play. A hard drive from the same era might be entirely dead. The theoretical advantages of digital (perfect copying, no degradation) only matter if you can actually access the data.

There are documented cases of studios discovering that digital masters from the early 2000s had become corrupted or were stored in formats nobody could easily read anymore. The Library of Congress has warned repeatedly about the challenges of digital preservation compared to traditional film archiving.

This doesn’t mean film is some perfect archival medium. It absolutely isn’t. Celluloid degrades. Colour stocks from the 1970s and 80s are notorious for fading toward magenta. Nitrate film from the silent era is literally flammable and chemically unstable. Acetate stock can develop “vinegar syndrome,” becoming brittle and unusable. Countless films have been lost because negatives were stored poorly, damaged in fires, or simply thrown away when studios decided they had no commercial value.

The point isn’t that film preservation is easy. It’s that when a film negative is properly preserved (stored at controlled temperature and humidity, protected from light and chemical contamination) the information embedded in those silver halide crystals remains accessible. The ceiling for recovery is remarkably high, even if reaching that ceiling requires considerable effort and expense.

What Happens Now?

Studios and distributors are increasingly turning to AI-powered restoration for early digital films, with mixed results.

The 4K release of something like Collateral is the best-case scenario. The film was shot at 1080p, but the imagery was carefully composed and the digital artifacts were minimal. AI upscaling can add convincing detail without changing the viewing experience at its core. It’s not quite the same as a native 4K source, but it’s acceptable.

At the other end of the spectrum, a film like Inland Empire probably shouldn’t be “restored” in any traditional sense. The blown-out highlights, crushed blacks, and compression artifacts aren’t problems to be solved. They’re part of the film’s visual language. Any version that removes them would be a different movie. Most early digital films fall somewhere between these extremes, requiring case-by-case decisions about how much intervention is appropriate.

A Note on What We’ve Lost

The films shot on early digital aren’t obscure curiosities. They include some of the most culturally important work of their era. 28 Days Later all but invented the modern zombie movie. Inland Empire is Lynch at his most experimental. Collateral is Mann’s masterpiece. The Star Wars prequels, whatever your feelings about them, were childhood-defining for a generation.

These films exist, and will continue to exist, in some form. But the question of how they’ll look to future audiences remains unresolved. Will AI upscaling become convincing enough that the resolution limitations become invisible? Will tastes shift so that early digital aesthetic becomes valued rather than apologised for? Will someone invent restoration techniques we can’t currently imagine?

In Praise of Uncertainty

Early digital films aren’t going to disappear. They’ll be preserved, restored with whatever tools are available, and watched by future audiences who will bring their own expectations and tolerances to the experience.

But there’s something worth recognising about the people who chose digital in the early 2000s, often because it seemed like the responsible, forward-thinking choice. They were wrong in ways they couldn’t have anticipated.

The filmmakers who stuck with “outdated” 35mm through this period, often facing pressure and mockery for their technological conservatism, turned out to be the ones preserving their work most reliably for the future.

Christopher Nolan’s stubborn insistence on shooting film, which seemed almost pathologically nostalgic at the time, now looks prescient. His films from this era scan beautifully at 4K and will continue to scale up as display technology improves. His digital-pioneering contemporaries are stuck trying to make 1080p footage look acceptable on increasingly massive screens.

There’s no triumphalism in pointing this out. Just a reminder that the future is harder to predict than it looks, and the technologies that feel inevitable sometimes turn out to be evolutionary dead ends.

The early digital era produced remarkable films that pushed the medium in directions film stock couldn’t go. Those films deserve to be seen and remembered. But the format that made them possible also trapped them in amber at resolutions that grow more limiting every year.


from Crónicas del oso pardo

By some mystery of nature, I am a person who does not think. I have no thoughts, at least in the traditional sense of the word.

Some people think I am deceiving myself, but it is not so. I lead a normal life; I can do this or that. But it is always spontaneous, even what I say as I go, and no one has found reason to believe I don’t know what I am doing or saying. It seems as if I had thought it through, though I know I have not.

There will be those who say that perhaps my emotions rule me, and that the good outcome of my activities is the work of fate or good luck. Not true: I have studied enough, but for some reason I do not live ruminating on knowledge; I simply apply it.

By some mysterious arrangement, that is how things are. Even in my personal life: when the moment came, I married; I enjoy every moment, and so I go on.

Have I tried to think? The answer is: yes. I have tried, but I do not know what it is. And, from what I see around me, it is not worth it. People are sick with thinking.


from Café histoire

Nouvelle Fondation. In December I bought a second-hand ThinkPad T480, then a T470s, both refurbished and running Linux Mint. This is the chronicle of that choice and of the move from the Apple world to the Linux world.

For three months now I have been getting familiar with my ThinkPad and the Linux Mint operating system. The acclimatisation is taking hold.

These past few days I picked up my 13.6-inch MacBook Air again, to update and tidy my data and to sync it, notably with my work cloud. It is clearly a fine work tool. Its processor is also more recent. Its main advantage over my ThinkPad is, without question, its battery life. Yet I was soon back on my ThinkPad. I appreciate its 14-inch screen and, above all, its incredible keyboard.

In terms of responsiveness and processor, I don’t notice a sufficiently marked difference in my daily tasks with the same applications, such as Firefox or LibreOffice. I even get fewer video stalls on YouTube with my ThinkPad. On the other hand, my MacBook Air is more responsive at boot. As for the trackpad, the MacBook Air’s is also better and more precise. For certain tasks I have to resort to the mouse with my ThinkPad (perhaps also because I haven’t mastered the buttons above the trackpad and the TrackPoint in the middle of the keyboard…).

I like the all-in-one character of my ThinkPad, with its SD card reader and its USB-C and USB-A ports. So even though it is bulkier than my MacBook Air, I worry less about carrying accessories with me. What’s more, with a 512 GB drive instead of the MacBook Air’s 256 GB, I don’t need to wonder whether or not to bring my external drive. I can also sync my work cloud directly.

Chances are it will take particular circumstances for the MacBook Air to be the obvious choice. Mainly when I need to carry the least bulky laptop, notably by motorbike. There may also be more specialised image-processing needs that could justify using it. And that is about it for now.

Apple’s latest announcements have nonetheless piqued my interest. And curiously it is the MacBook Neo that takes the prize. With its 13-inch screen, it is the one that comes closest to my old 12-inch MacBook. For iFixit, it also marks a return to a more easily repairable MacBook.

The battery can thus be replaced easily, and the USB-C ports and speakers are modular. Components are screwed in rather than glued. The RAM and processor remain soldered. It is not perfect, but it is progress. In the end it earns a repairability score of six out of ten from iFixit (compared with ten for the ThinkPad T480 and four for the MacBook Air M4). And the price is gentle, even for the 512 GB version.

As for my two ThinkPads, I have tended to favour the slightly thinner and lighter T470s. But nothing is really settled or clear. The T480 remains more powerful and has the advantage of letting you swap one of its batteries on the go. It is also more pleasant for long writing sessions.

I have also just realised that with Antidote Web I have a spell checker for Firefox. That is very good news. Another option is the LanguageTool extension (there is a free version and a paid one), but there is no reason to pay for it when I already have Antidote. I really should use Antidote more systematically when I write, and consider drafting my texts in my Firefox browser. I have undeniable room for improvement there.

So I came back quickly to my ThinkPad. And happily. With the satisfaction of being in a free universe. I have developed a kind of peace of mind from it, particularly concerning the ownership and circulation of my data. I keep control of it without having to wonder about it being harvested without my consent.

Tags: #AuCafé #Linux #ThinkPad #t480 #t470s #Apple #MacBookNeon #MacBook


from SmarterArticles

The app on your phone that you opened this morning, the one you use to check the weather or scan a receipt or convert a file format, may be one of the last of its kind. Not because it will stop working, but because the entire concept of downloading, installing, and maintaining software is hurtling toward obsolescence. In its place, something stranger and more fluid is taking shape: software that exists for minutes, hours, or days before vanishing without a trace, conjured from nothing by artificial intelligence and dissolved just as quickly once it has served its purpose.

Welcome to the age of the disposable app.

This is not a speculative fantasy plucked from a science fiction screenplay. It is a prediction grounded in converging trends across AI-assisted code generation, serverless cloud infrastructure, and a growing cultural exhaustion with the bloated, notification-heavy app ecosystems that have defined the smartphone era. By 2026, industry leaders and analysts anticipate that AI will routinely generate temporary, purpose-built software modules on demand, modules that close after serving their function and leave behind nothing but the data their users choose to keep. The implications for how we relate to technology, own our data, and understand what “software” even means are profound, disorienting, and largely uncharted.

Software That Forgets Itself

The idea of ephemeral software is not entirely new. Serverless computing, which emerged in the mid-2010s with platforms like AWS Lambda, already operates on a principle of transience: functions spin up in response to events, execute their logic, and shut down. The global serverless computing market, projected by Grand View Research to reach $52.13 billion by 2030 at a compound annual growth rate of 14.1 per cent, has normalised the concept of infrastructure that appears and vanishes on demand. What is new is the combination of large language models capable of generating entire applications from natural language prompts, serverless infrastructure that can host them without persistent servers, and a user base increasingly comfortable with the idea that code does not need to live forever.
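The “spin up, execute, shut down” shape of a serverless function can be sketched in a few lines. The event fields below are illustrative rather than any real service’s contract, though the `handler(event, context)` signature mirrors AWS Lambda’s Python convention:

```python
# A minimal serverless-style handler: all state lives inside one
# invocation, and nothing persists after the function returns.
def handler(event, context=None):
    name = event.get("name", "world")   # read the triggering event
    return {"statusCode": 200,          # respond...
            "body": f"Hello, {name}!"}  # ...and vanish

print(handler({"name": "ephemeral app"})["body"])  # → Hello, ephemeral app!
```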

Andrej Karpathy, co-founder of OpenAI and former head of AI at Tesla, captured this shift vividly in his 2025 year-in-review blog post. He described having “vibe coded entire ephemeral apps just to find a single bug because why not,” adding that code is “suddenly free, ephemeral, malleable, discardable after single use.” The term “vibe coding,” which Karpathy coined in February 2025, describes a mode of programming where developers “fully give in to the vibes, embrace exponentials, and forget that the code even exists.” What began as an amusing experiment for weekend projects has, within a year, evolved into what Karpathy now calls “agentic engineering,” a workflow where autonomous AI agents handle the vast majority of code production while humans orchestrate and verify. Writing on his personal blog about his experience vibe coding MenuGen, an end-to-end application built entirely by Cursor and Claude, Karpathy expressed excitement about a future where the barrier to building an app drops to roughly zero and “anyone could build and publish an app just as easily as they can make a TikTok.”

The numbers support the trajectory. According to Stack Overflow's 2025 Developer Survey, which gathered responses from over 49,000 developers across 177 countries, 84 per cent of respondents are using or planning to use AI tools in their development process, up from 76 per cent the previous year. Fully 51 per cent of professional developers use AI tools daily. Some 44 per cent of developers are now turning to AI tools to learn to code, up from 37 per cent the year before. Meanwhile, Gartner projects that by 2026, low-code development tools will account for 75 per cent of new application development, up from less than 25 per cent in 2020. The global low-code market itself is forecast to reach $44.5 billion by 2026, growing at a compound annual rate of 19 per cent. Eighty-four per cent of enterprises have already adopted low-code or no-code tools to reduce IT backlogs, and organisations adopting low-code report 50 to 70 per cent faster development cycles compared to traditional methods.

These are not incremental improvements. They represent a fundamental rewiring of how software comes into existence.

When Apps Become Verbs

Chris Royles, Field CTO for EMEA at Cloudera and a Fellow of the British Computer Society who holds a PhD in artificial intelligence from the University of Liverpool, is among those who have articulated this vision most directly. In a set of predictions published for 2026, Royles stated that “AI will start to radically change the way we think about apps, how they function and how they're built.” Today's applications, he noted, are declarative: millions of lines of code following fixed rules. AI is tearing up that rulebook. Users will soon request temporary modules generated by code and a prompt, and “once that function has served its purpose, it closes.” These disposable apps, Royles suggested, can be “built and rebuilt in seconds.”

His colleague Paul Mackay, RVP Cloud EMEA and APAC at Cloudera, offered a complementary warning. Many organisations, Mackay observed, “will begin shelving their 'Frankenstein' AI applications they built for specific business use cases, as costs spiral and governance concerns grow.” The implication is striking: not only will new software be born ephemeral, but existing permanent software may itself be retired and replaced by disposable alternatives as organisations recognise that maintaining complex, bespoke AI applications is becoming untenable.

The shift is already visible in practice. In January 2026, the global ecommerce platform Rokt held a company-wide hackathon (internally branded as “Rokt'athon”) in which more than 700 employees, many of them non-technical, used Replit's AI agent to build 135 fully functional internal applications in a single 24-hour period. Lawyers, marketers, and operations staff built tools for hiring workflows, analytics dashboards, training games, and SQL query repositories. As one Rokt executive put it, “We're empowering people who couldn't code with the ability to build software. And it's exciting, having lawyers come up to me and say, 'I've been building in Replit.'” None of these applications went through a traditional software development lifecycle. None were designed to last indefinitely. They were built to solve a problem, and once the problem was solved, many would be retired or rebuilt from scratch.

This pattern, where software becomes a verb rather than a noun, something you do rather than something you have, represents a break with decades of computing convention. Since the dawn of the personal computer, software has been a product: boxed, licensed, installed, updated, patched, and eventually deprecated through a lifecycle measured in years. The disposable app collapses that lifecycle into days, hours, or even minutes.

The Exhaustion Economy

The appeal of ephemeral software is not purely technological. It is also cultural, born from a mounting frustration with the current state of digital life.

The mobile app ecosystem has become, by most measures, unsustainable. According to AppsFlyer's 2025 uninstall report, more than one in every two apps installed is uninstalled within 30 days of download. Mobile apps lose 77 per cent of their daily active users within the first three days. By day 30, the average retention rate drops to approximately 6 per cent, meaning 94 per cent of users churn within a month. Dating apps exhibit an uninstall rate of roughly 65 per cent, and gaming apps are not far behind at 52 per cent. Performance remains the single most decisive factor: nearly 96 per cent of users consider performance a key element in deciding whether to keep or delete an app, and more than 40 per cent now drop applications that seek unnecessary access to their device or personal data.

Meanwhile, organisations are drowning in SaaS sprawl. The average enterprise now uses 112 SaaS applications, and the global SaaS market is projected to reach approximately $408 billion in 2025. There are over 42,000 SaaS companies worldwide. Reports indicate that 91 per cent of AI tools in organisations remain unmanaged, creating both productivity drag and security vulnerabilities. Subscription fatigue is measurable and growing: users are exhausted by overlapping features across dozens of apps, endless notifications, and the cognitive overhead of managing an ever-expanding digital toolset.

Disposable apps offer an alternative logic. Rather than downloading a permanent application to perform a task you might need once, you describe what you need, an AI generates it, you use it, and it disappears. No installation. No subscription. No notification settings to configure. No account to create and subsequently forget the password for. The software exists precisely as long as it is useful and not a moment longer.
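The whole generate-use-discard loop can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the AI generation step is stood in for by a hard-coded source string, and `run_disposable_app` is an invented helper that writes the generated code to a temporary directory, runs it once, and lets the directory, and the app with it, be destroyed on exit.

```python
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path

def run_disposable_app(generated_source: str) -> str:
    """Write generated code to a throwaway directory, run it once, return its output."""
    # The TemporaryDirectory is deleted when the block exits: nothing is
    # installed, and nothing persists once the function returns.
    with tempfile.TemporaryDirectory() as workdir:
        app = Path(workdir) / "app.py"
        app.write_text(generated_source)
        result = subprocess.run(
            [sys.executable, str(app)],
            capture_output=True, text=True, timeout=60,
        )
        return result.stdout

# Stand-in for code an AI model might return for a one-off request
# ("total my trip expenses at a 1.25 exchange rate").
source = textwrap.dedent("""\
    total = sum(x * 1.25 for x in [12.0, 7.5, 3.2])
    print(f"{total:.2f}")
""")

print(run_disposable_app(source))  # the app runs once, then no longer exists
```

The point of the sketch is the teardown: once the `with` block closes, there is no binary to update, no settings to migrate, and no icon left on anyone's home screen.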

This aligns with a broader cultural movement toward what designers and technologists have begun calling “minimalist utility,” the idea that technology should do one job exceptionally well, remove friction, and respect the user's time, attention, and data. After years of maximalist design that promised ever more features, integrations, and engagement surfaces, minimalist utility promises “enough”: the smallest set of capabilities that reliably solves a real problem. The shift is not anti-innovation. It is a demand for clarity, control, and measurable value, a recognition that the app economy's relentless expansion has produced diminishing returns for the people it was supposed to serve.

Where Does the Data Go?

The most unsettling question raised by disposable software is not about the software itself. It is about the data.

When an application exists for a few hours and then vanishes, what happens to the information it processed? If an AI generates a temporary expense tracker for a business trip, analyses a set of medical records for a quick consultation, or creates a one-off survey tool for customer feedback, where do those numbers, those records, those responses reside once the app closes? Who owns them? Who is responsible for their security? Who ensures they are not retained by the AI system that generated the app, or by the cloud infrastructure that hosted it?

These questions are not hypothetical. They strike at the heart of an already fragile regulatory landscape. The European Union's General Data Protection Regulation (GDPR), which has resulted in 2,245 fines totalling 5.65 billion euros since enforcement began in 2018, grants individuals the right to erasure, commonly known as the right to be forgotten. Under Article 17, individuals can request that organisations delete their personal data. The technical burden of tracking where personal data has been stored or processed is already significant for traditional software; for ephemeral applications that spin up and dissolve across distributed cloud infrastructure, it becomes an order of magnitude more complex.

The enforcement trajectory is unambiguous. In 2025 alone, European regulators issued fines amounting to 2.3 billion euros, a 38 per cent year-over-year increase. TikTok received a 530 million euro penalty for illegal data transfers to China. Meta paid 479 million euros for consent manipulation. The French data protection authority CNIL levied a 100 million euro fine against Google for making cookie rejection harder than acceptance, establishing a precedent around dark patterns in consent interfaces. The message is clear: regulators are not slowing down. And the EU AI Act, whose most significant compliance deadline falls on 2 August 2026, introduces additional obligations for high-risk AI systems, including requirements around data governance, transparency, human oversight, and record-keeping. Organisations that fail to comply face fines of up to 35 million euros or 7 per cent of global annual turnover.

The collision between ephemeral software and persistent data regulation creates a novel governance challenge. If an AI-generated app processes personal data during its brief existence, the controller (the organisation or individual who deployed the app) remains responsible for ensuring GDPR compliance, including responding to data subject access requests and deletion requests. But if the app itself no longer exists, and its architecture was generated dynamically by an AI model, reconstructing where data flowed, how it was processed, and whether copies were retained becomes extraordinarily difficult. As the European Data Protection Board (EDPB) clarified in its April 2025 report, large language models rarely achieve anonymisation standards, meaning that any data processed through AI-generated applications is likely to retain personal data characteristics that trigger regulatory obligations.
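One way to square this circle is to make the compliance trail outlive the app: every ephemeral application registers what it processed, about whom, and for what purpose, in a persistent registry before it dissolves. The sketch below is hypothetical, loosely modelled on the records of processing activities that GDPR Article 30 requires; `ProcessingRecord` and `ProcessingRegistry` are invented names, and the in-memory list stands in for a durable store.

```python
import time
import uuid
from dataclasses import asdict, dataclass, field

@dataclass
class ProcessingRecord:
    app_id: str
    purpose: str
    data_subjects: list
    created: float = field(default_factory=time.time)

class ProcessingRegistry:
    """Persistent record-keeper that outlives the apps it describes."""

    def __init__(self):
        self._records = []  # stand-in for a durable, queryable store

    def register(self, purpose: str, data_subjects) -> str:
        rec = ProcessingRecord(
            app_id=str(uuid.uuid4()),
            purpose=purpose,
            data_subjects=list(data_subjects),
        )
        self._records.append(rec)
        return rec.app_id

    def records_for(self, subject: str):
        """Answer a data-subject access request after the app is gone."""
        return [asdict(r) for r in self._records if subject in r.data_subjects]

registry = ProcessingRegistry()
registry.register("one-off expense analysis", ["alice@example.com"])
# The app that did the processing has dissolved; the record has not.
print(len(registry.records_for("alice@example.com")))  # prints 1
```

The registry, not the app, becomes the unit of accountability: access and erasure requests are answered from the persistent record even though the software that generated it no longer exists.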

Seventy-one per cent of organisations already cite cross-border data transfer compliance as their top regulatory challenge in 2025. Disposable apps, which may be generated in one jurisdiction, hosted in another, and accessed from a third, threaten to multiply this complexity exponentially.

The Governance Gap

The regulatory challenge extends beyond data protection. Disposable apps raise fundamental questions about software accountability and quality assurance that existing frameworks were never designed to address.

Traditional software development follows established patterns of testing, review, deployment, and maintenance. Code is written by identifiable developers, reviewed by peers, tested against defined criteria, deployed through controlled pipelines, and maintained through versioned updates. When something goes wrong, there is a trail: version numbers, commit histories, deployment logs, and responsible parties. This infrastructure of accountability has been built over decades and is baked into regulatory frameworks, industry standards, and professional practices.

Disposable AI-generated software dissolves this trail. If an AI generates a temporary tool that produces incorrect calculations, gives flawed medical guidance, or mishandles financial data, who bears responsibility? The user who described what they wanted? The AI model that generated the code? The platform that hosted the ephemeral application? The company that trained the model? The cloud provider whose serverless infrastructure executed the code? The liability chain for a piece of software that existed for ninety minutes and was generated by a prompt written in plain English is, to put it mildly, unclear.

Chris Royles, in his 2026 predictions for Cloudera, emphasised that “rigorous governance is required” for disposable apps, noting that “organisations need visibility into the reasoning processes used to create these modules to ensure errors are corrected safely.” His colleague Wim Stoop, Senior Director at Cloudera, predicted the emergence of “specialist AI agents dedicated to data governance” that would “continuously monitor, classify, and secure data wherever it resides, ensuring governance becomes an always-on function embedded into daily operations.” Stoop's vision implies a future where governance itself becomes autonomous and persistent, even as the software it oversees remains temporary and fleeting.

Yet the governance infrastructure for this new paradigm remains largely theoretical. The Stack Overflow 2025 Developer Survey found that developers show the most resistance to using AI for high-responsibility, systemic tasks: 76 per cent have no plans to use AI for deployment and monitoring, and 69 per cent resist using it for project planning. A “reputation for quality” and a “robust and complete API” rank far higher than “AI integration” when developers evaluate new technology. This caution among practitioners stands in tension with the speed at which disposable app generation is advancing. The technology is moving faster than the frameworks designed to govern it.

Trust in an Ephemeral World

The trust dynamics of disposable software are counterintuitive. On one hand, ephemeral apps could be more secure than permanent ones. A tool that exists for two hours presents a far smaller attack surface than one that sits on a device for years, accumulating vulnerabilities through outdated dependencies and unpatched security flaws. If the app is gone, there is nothing to hack. Disposable apps can also be designed with encryption, limited data collection, and proper teardown processes that destroy residual data upon closure.

On the other hand, the Stack Overflow survey reveals a troubling pattern: positive sentiment toward AI tools among developers has declined from over 70 per cent in 2023 and 2024 to just 60 per cent in 2025, even as adoption has increased. The biggest single frustration, cited by 66 per cent of developers, is dealing with “AI solutions that are almost right, but not quite,” which leads to the second biggest frustration: “Debugging AI-generated code is more time-consuming,” cited by 45 per cent. Experienced developers are the most sceptical, with the lowest “highly trust” rate (2.6 per cent) and the highest “highly distrust” rate (20 per cent). When asked about a future with advanced AI, 75 per cent of developers said the primary reason they would still ask a person for help is “when I don't trust AI's answers.”

If the people building these systems do not fully trust them, why should the people using the resulting applications? The question becomes more urgent when disposable apps move beyond internal tools and weekend projects into domains with real consequences: healthcare, finance, legal advice, education. A disposable app that helps a nurse calculate drug dosages, even for a single shift, carries stakes that demand the same rigour as permanent medical software. The ephemerality of the tool does not diminish the permanence of its potential consequences.

AI agents, which represent the next frontier of this trend, are not yet mainstream among developers. The Stack Overflow survey found that 52 per cent of developers either do not use agents or stick to simpler AI tools, and 38 per cent have no plans to adopt them. Among those who do use agents, the productivity benefits are clear: 69 per cent report improved workflow and 70 per cent report reduced time on specific tasks. But only 17 per cent believe agents have improved team collaboration. The picture that emerges is one of individual productivity gains that have not yet translated into systemic trust or organisational confidence.

Rethinking Ownership in a Post-Permanent World

The shift from permanent to ephemeral software does not merely change how we build technology. It changes how we think about ownership, identity, and the digital artefacts that define our lives.

For decades, the software on our devices has served as a form of digital identity. The apps on your phone, the programmes on your computer, the subscriptions you maintain: these are choices that reflect who you are, what you value, and how you organise your life. When software becomes ephemeral, conjured for a task and dissolved afterward, that relationship evaporates. You do not own the tool. You do not even really use the tool in the traditional sense. You describe a need, something appears, it does its job, and it is gone.

This has implications for data portability and interoperability. Current regulatory frameworks, including the GDPR's right to data portability and the EU's Digital Markets Act, assume that users have ongoing relationships with software platforms, relationships that generate data over time and create lock-in effects that regulation seeks to mitigate. Disposable apps short-circuit this model entirely. There is no lock-in because there is no permanence. But there is also no continuity: no history of preferences refined over months, no accumulated data that can be exported to a competitor, no institutional memory embedded in the tool.

The Consent Management Platform market, projected to grow from $802.85 million in 2025 to $3.59 billion by 2033, reflects the complexity of managing user consent in an era of proliferating data touchpoints. Disposable apps threaten to multiply those touchpoints dramatically. Each ephemeral application that processes personal data creates a new consent obligation, a new data processing record, and a new potential liability, all compressed into a timeframe that makes traditional compliance workflows unworkable. The 2026 regulatory landscape demands systematic consent management, including Global Privacy Control signal recognition, one-click reject mechanisms with equal prominence, and granular consent per purpose. Achieving this within a disposable app that may exist for less than an hour requires entirely new approaches to consent architecture.


India's Digital Personal Data Protection Act, which entered its enforcement-heavy phase following the release of operational rules in November 2025, and new US state privacy laws taking effect in 2026, including California's updated CCPA with its mandatory one-click data deletion mechanism (the Delete Act), add further layers of complexity. Three additional US state privacy laws take effect in 2026, joining the growing patchwork of jurisdictional requirements. Organisations deploying disposable apps will need to navigate this maze, much of which assumes precisely the kind of persistent, identifiable software relationships that ephemeral apps are designed to eliminate.

The Class Divide of Ephemeral Computing

There is a risk, largely unexamined, that disposable apps could deepen existing digital inequalities.

The ability to generate software on demand requires access to AI models, cloud infrastructure, and reliable internet connectivity. For knowledge workers at well-resourced organisations, disposable apps promise liberation from SaaS fatigue and IT backlogs. For individuals and communities without reliable connectivity or the digital literacy to articulate their needs to an AI, the shift may simply replace one form of exclusion with another.

Gartner's prediction that by 2026, developers outside of formal IT departments will account for at least 80 per cent of the user base for low-code development tools, up from 60 per cent in 2021, sounds like democratisation. And in many ways it is. Karpathy himself has noted that “regular people benefit a lot more from LLMs compared to professionals” and expressed excitement about seeing “the barrier to app drop to ~zero, where anyone could build and publish an app just as easily as they can make a TikTok.” Rokt's hackathon, where lawyers and marketers built functional software in hours, demonstrates the potential. Jason Wong, a Gartner analyst, has observed that “the high cost of tech talent and a growing hybrid or borderless workforce will contribute to low-code technology adoption,” suggesting that economic pressures are accelerating the shift.

But “anyone” still means anyone with access to the right tools, the right infrastructure, and the right prompts. The global serverless computing market is concentrated overwhelmingly in North America, Europe, and parts of East Asia. The countries where app uninstall rates are highest, Bangladesh at 65.56 per cent, Nepal at 65.27 per cent, Pakistan at 64.58 per cent, are also the countries least likely to benefit from the disposable app revolution, not because their populations lack ingenuity but because the infrastructure and economic conditions to participate fully are not yet in place. OpenAI's GPT models dominate the LLM landscape (82 per cent of developers in the Stack Overflow survey reported using them), and Anthropic's Claude Sonnet models are used more by professional developers (45 per cent) than by those learning to code (30 per cent). Access to the best AI code generation tools remains stratified by both geography and economic circumstance.

Building for Impermanence

What does it mean to design for a world where software is not built to last?

The answer is still forming, but several principles are emerging. First, data must be decoupled from applications more radically than ever before. If the app is temporary, the data layer cannot be. Users will need persistent, portable data stores that any ephemeral application can connect to, process, and disconnect from without taking the data with it. This is architecturally feasible; serverless databases like AWS DynamoDB, Google Cloud SQL, and Azure Cosmos DB already provide exactly this kind of persistence. But achieving it at scale requires a fundamental shift in how users and organisations think about data stewardship. The stateless nature of serverless functions, which by design do not maintain long-term memory between invocations, makes this decoupling both necessary and technically natural. Solutions including external storage services, event-driven state passing, and managed stateful services are already bridging the gap between ephemeral execution and persistent data needs.
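A minimal sketch of that decoupling, assuming the app is a stateless handler in the serverless style; the plain dictionary here is a stand-in for a managed persistent store such as DynamoDB or Cosmos DB, and the function names are illustrative:

```python
# The dictionary stands in for a managed persistent store; the handler
# itself keeps no state between invocations.
durable_store: dict = {}

def ephemeral_handler(event: dict, store: dict) -> dict:
    """One invocation of a disposable app: read, compute, write back, exit."""
    trips = store.setdefault("trip_expenses", [])
    trips.append(event["amount"])
    return {"total": sum(trips)}

# Two separate invocations, modelling two separately generated apps:
# they share no memory of each other, yet continuity survives in the
# data layer they both connect to.
first = ephemeral_handler({"amount": 120.0}, durable_store)
second = ephemeral_handler({"amount": 80.0}, durable_store)
print(first["total"], second["total"])  # prints 120.0 200.0
```

The two invocations share nothing but the store; continuity lives entirely in the data layer, which is exactly the property an ephemeral application needs.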

Second, governance must become embedded rather than applied. Cloudera's prediction of AI governance agents, always-on systems that monitor and classify data regardless of which application is accessing it, points toward a model where compliance does not depend on the longevity of any particular piece of software. As Stoop put it, governance will shift from “something people do to something they oversee,” with humans “shaping the process as it runs” rather than manually enforcing every rule. The EU AI Act's requirement for transparency in AI-generated interactions, which becomes enforceable under Article 50 in August 2026, will accelerate this need. Every AI-generated interaction must be disclosed, synthetic content must be labelled, and deepfakes must be identified.

Third, the economics of software will shift from subscriptions to consumption. If apps are generated on demand and discarded after use, the per-seat, per-month licensing model that has dominated SaaS for two decades becomes obsolete. In its place, we might see usage-based pricing for AI-generated software: pay for the compute to generate the app, the time it runs, and the data it processes. Forrester projects that generative AI spending will grow at an average annual rate of 36 per cent through 2030, capturing 55 per cent of the $227 billion AI software market. Much of that spending will likely flow through consumption-based models that align with the ephemeral nature of the software being produced.
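The arithmetic of such a consumption model is simple to sketch. The rates below are invented for illustration and do not reflect any vendor's actual pricing:

```python
def disposable_app_cost(gen_tokens: int, runtime_s: float, gb_processed: float,
                        token_rate: float = 2e-6,
                        compute_rate: float = 5e-5,
                        data_rate: float = 0.02) -> float:
    """Pay for generating the app, the time it runs, and the data it touches."""
    return (gen_tokens * token_rate      # cost to generate the code
            + runtime_s * compute_rate   # cost while the app exists
            + gb_processed * data_rate)  # cost of the data it processed

# A 90-minute app generated from 50,000 tokens that touches 2 GB of data:
cost = disposable_app_cost(50_000, 90 * 60, 2.0)
print(round(cost, 2))  # prints 0.41
```

Under this logic, a per-seat licence makes no sense: there is no seat, only a metered act of generation, execution, and data handling.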

Fourth, and perhaps most importantly, users will need new mental models for their relationship with technology. The permanent app trained us to think of software as a possession, something we chose, configured, and lived with. The disposable app asks us to think of software as a service in the most literal sense: a fleeting act performed on our behalf, no more permanent than a conversation. Whether that shift feels liberating or destabilising will depend largely on whether the infrastructure of data ownership, governance, and trust catches up with the pace of technical change.

After Permanence

We are not there yet. The 77 per cent of developers who say vibe coding is not part of their professional workflow, the 52 per cent who have not adopted AI agents, and the steadily declining trust in AI tools among experienced practitioners all suggest that the transition will be neither smooth nor complete. Permanent software will not vanish overnight. Mission-critical systems, regulated industries, and applications requiring years of accumulated context will continue to demand traditional development approaches for the foreseeable future.

But the direction of travel is unmistakable. The convergence of AI code generation, serverless infrastructure, and user exhaustion with permanent software is creating conditions for a genuinely new paradigm. Hanen Garcia, Chief Architect for Telecommunications at Red Hat, has argued that 2026 marks a “decisive pivot towards agentic AI, autonomous software entities capable of reasoning, planning, and executing complex workflows without constant human intervention.” If those entities can build software as easily as they can execute it, the distinction between the tool and the task it performs begins to dissolve entirely.

Karpathy's vision of a world where “the barrier to app drops to ~zero” is not a prediction about some distant future. It is a description of what is already happening in hackathons, internal tools, and weekend projects around the world. The question is not whether disposable apps will arrive. They are already here. The question is whether our institutions, our regulations, and our own habits of mind can adapt to a world where the software we rely on was born this morning and will be dead by tonight. The answer will determine not just the future of technology, but the future of the data, the decisions, and the human experiences that technology is built to serve.


References and Sources

  1. Karpathy, A. (2025). “2025 LLM Year in Review.” karpathy.bearblog.dev. Available at: https://karpathy.bearblog.dev/year-in-review-2025/

  2. Karpathy, A. (2025). “Vibe coding.” X (formerly Twitter), 2 February 2025. Available at: https://x.com/karpathy/status/1886192184808149383

  3. Karpathy, A. (2025). “Software in the era of AI.” Y Combinator Keynote. Discussed at: https://www.latent.space/p/s3

  4. Karpathy, A. (2025). “Vibe coding MenuGen.” karpathy.bearblog.dev. Available at: https://karpathy.bearblog.dev/vibe-coding-menugen/

  5. Stack Overflow (2025). “2025 Developer Survey.” Available at: https://survey.stackoverflow.co/2025/

  6. Stack Overflow (2025). “Developers remain willing but reluctant to use AI.” stackoverflow.blog, 29 December 2025. Available at: https://stackoverflow.blog/2025/12/29/developers-remain-willing-but-reluctant-to-use-ai-the-2025-developer-survey-results-are-here/

  7. Stack Overflow (2025). “AI Section, 2025 Developer Survey.” Available at: https://survey.stackoverflow.co/2025/ai

  8. Gartner. “Forecast Analysis: Low-Code Development Technologies, Worldwide.” Available at: https://www.gartner.com/en/documents/7146430

  9. Gartner (2024). “75 Percent of Enterprise Software Engineers Will Use AI Code Assistants by 2028.” Press release, 11 April 2024. Available at: https://www.gartner.com/en/newsroom/press-releases/2024-04-11-gartner-says-75-percent-of-enterprise-software-engineers-will-use-ai-code-assistants-by-2028

  10. Kissflow (2026). “Gartner Forecasts Low Code/No Code Platform Market for 2026.” Available at: https://kissflow.com/low-code/gartner-forecasts-on-low-code-development-market/

  11. Royles, C. (2025). Cloudera 2026 Predictions. Reported in IT Brief Asia: https://itbrief.asia/story/cloudera-forecasts-disposable-apps-ai-governance-shift

  12. Royles, C. (2025). Cloudera 2026 Predictions. Reported in Artificial Intelligence News: https://www.artificialintelligence-news.com/news/ai-in-2026-experimental-ai-concludes-autonomous-systems-rise/

  13. Replit (2026). “How Rokt built 135 internal applications in 24 hours.” Customer case study. Available at: https://replit.com/customers/rokt

  14. AppsFlyer (2025). “App uninstall report, 2025 edition.” Available at: https://www.appsflyer.com/resources/reports/app-uninstall-benchmarks-report/

  15. GetStream (2026). “2026 Guide to App Retention: Benchmarks, Stats, and More.” Available at: https://getstream.io/blog/app-retention-guide/

  16. European Commission. “AI Act: Shaping Europe's digital future.” Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  17. GDPR.eu. “Everything you need to know about the Right to be forgotten.” Available at: https://gdpr.eu/right-to-be-forgotten/

  18. GDPR-info.eu. “Art. 17 GDPR, Right to erasure.” Available at: https://gdpr-info.eu/art-17-gdpr/

  19. SecurePrivacy (2026). “EU AI Act 2026 Compliance Guide.” Available at: https://secureprivacy.ai/blog/eu-ai-act-2026-compliance

  20. Orrick (2025). “The EU AI Act: 6 Steps to Take Before 2 August 2026.” Available at: https://www.orrick.com/en/Insights/2025/11/The-EU-AI-Act-6-Steps-to-Take-Before-2-August-2026

  21. SecurePrivacy (2026). “Privacy Laws 2026: Global Updates and Compliance Guide.” Available at: https://secureprivacy.ai/blog/privacy-laws-2026

  22. Forrester (2025). “Spend on Generative AI Will Grow 36% Annually to 2030.” Available at: https://www.forrester.com/blogs/spend-on-generative-ai-will-grow-36-annually-to-2030/

  23. Forrester. “Global AI Software Forecast, 2023 to 2030.” Available at: https://www.forrester.com/report/global-ai-software-forecast-2023-to-2030/RES179806

  24. Grand View Research. Serverless Computing Market Report. Referenced at: https://americanchase.com/future-of-serverless-computing/

  25. Wolters Kluwer (2025). “Privacy in transition: What 2025 taught us and how to prepare for 2026.” Available at: https://www.wolterskluwer.com/en/expert-insights/privacy-in-transition-what-2025-taught-us-and-how-to-prepare-for-2026

  26. CodeConductor (2026). “Disposable AI Apps: AI Is Changing Software Development in 2026.” Available at: https://codeconductor.ai/blog/disposable-apps-ai-changing-software-development/

  27. Artificial Intelligence News (2025). “AI in 2026: Experimental AI concludes as autonomous systems rise.” Available at: https://www.artificialintelligence-news.com/news/ai-in-2026-experimental-ai-concludes-autonomous-systems-rise/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from Roscoe's Story

In Summary: A quiet Friday winds down. In the bottom of the sixth inning, my Rangers are leading the Rockies 9 to 2. Earlier today in the college basketball game I followed, Michigan beat Ohio St. 71 to 67. As Michigan is predicted by many to win the NCAA Championship, I'm proud of Ohio State's performance. After this baseball game ends there's nothing else I have scheduled other than finishing my night prayers and turning in early.

Prayers, etc.: I have a daily prayer regimen I try to follow throughout the day, from early morning, as soon as I roll out of bed, until my head hits the pillow at night. Details of that regimen are in my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics:

* bw = 230.49 lbs
* bp = 143/85 (64)

Exercise:

* morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet:

* 06:05 – 1 banana
* 08:30 – fried chicken
* 09:15 – lasagna
* 14:40 – more lasagna
* 16:00 – 1 fresh apple
* 18:05 – 1 peanut butter sandwich

Activities, Chores, etc.:

* 04:00 – listen to local news talk radio
* 05:00 – bank accounts activity monitored
* 05:10 – read, write, pray, follow news reports from various sources, surf the socials, and nap
* 10:30 – listening to the pregame show ahead of today's Ohio St. vs Michigan men's basketball game
* 13:25 – and Michigan wins, 71 to 67
* 15:00 – activated the MLB Gameday Screen, and the audio feed for the radio call of this afternoon's game between the Rangers and the Rockies
* 18:05 – and the Rangers win, 9 to 4

Chess:

* 15:30 – moved in all pending CC games

 
