from eivindtraedal

So it was Canada's prime minister, not Britain's, who ended up giving the “Love Actually speech” everyone has been waiting for. Mark Carney is saying out loud, on an open stage, what European heads of state have whispered in back rooms or left between the lines: the old world order led by the USA is dead, and we must build a new one.

The most important sentence for us in Norway is perhaps this one: “The middle powers must act together, because if we're not at the table, we're on the menu”. Norway is not a “middle power”, but we are the neighbour of one. Together with the EU and Canada, we make up 20% of the world economy. But right now Norway is extremely vulnerable. NATO has no real credibility, and we stand outside the EU. We are not sitting at the table, so we are on the menu, whether it comes to a trade war or a shooting war.

Some uncomfortable facts bear repeating: the USA has ALREADY singled out liberal democracies in Europe as a security threat, and declared it a goal to break up the EU and strengthen radical right-wing parties in Europe. They are now in the process of seizing territory from a NATO country. The ideologues around Trump envision a world of “spheres of interest” divided between China, Russia, and the USA. They envision Europe being on the menu, something Trump is demonstrating with his actions right now. He despises us, and so do his closest associates.

This is the biggest and most dramatic shift in the geopolitical situation since the Second World War, and it concerns us directly. If you are not willing to re-evaluate your positions on the EU in this situation, you are either ideologically blinded or very poorly informed about the world around us.

The EU opponents seem to have run out of arguments before the discussion has even begun. “EU debate is divisive.” Fine, but we have to have it anyway! “EU supporters want to scare us into the EU.” Yes, because the situation is genuinely frightening! The wolf is here. It is entirely right to cry wolf!

Many Norwegians have spent so much of their lives portraying Europe as Norway's greatest threat that it is hard to rewire their thinking. On yesterday's “Debatten”, Trygve Slagsvold Vedum argued in a way that suggests he would gladly have handed all of Norway over to American control if Trump had pointed at us. “No point in backing Denmark with deterrence, the USA will just shoot us anyway.” Then I suppose all that's left is to apply to become the 51st state, Trygve?

Fortunately we have another choice. We can be part of a democratic community of states that respect each other's freedom and independence, and that have the power and resources to deter enemies. Had Norway been a more rational country, the application would have been sent by 6 November 2024 at the latest. But the second-best day is today.

 

from Faucet Repair

29 December 2025

Seen while commuting to the studio today: a hollow rectangular yellow road divider on its side, sun on it from an angle that threw a slanted shadow across its inside. Tall silhouetted street lights repeating in the reflections of flat windows. A single skinny street light framed and floating in the clouds (partial reflection while looking out the bus window). A gray chicken wing eaten clean on the bus floor by my feet.

 

from Faucet Repair

27 December 2025

On the plane from Lisbon back to London, a bit of red-brown hair belonging to the woman in front of me curling around the back of her seat to my tray table, sun shining on it from the window. When I looked closely, I could see little rainbow dots pocking the glinting tops of each lock loop.

 

from Iain Harper's Blog

I recently wrote at length about the historical context and the moral and ethical reactions to synthetic content, particularly low-quality content colloquially known as “Slop”.

https://iain.so/ai-slop-psychology-history-and-the-problem-of-the-ersatz

Over the Christmas period, there was an interesting storm in a teacup (with two handles?) when tech blogger John Gruber published two posts accusing Apple CEO Tim Cook of sharing “AI slop” on Twitter/X.

The image in question, a whimsical illustration of milk and cookies promoting Apple TV+'s Pluribus, was created by established artist Keith Thomson.

Keith Thomson Pluribus image for Apple

Gruber's posts demonstrate how the legitimate concern about AI-generated content has metastasised into something far less useful: a witch-hunt mentality that now threatens the reputations of working artists.

The First Post: Accusation as Headline

On 27 December, Gruber published a post titled “Tim Cook Posts AI Slop in Christmas Message on Twitter/X, Ostensibly to Promote 'Pluribus'.”

https://daringfireball.net/linked/2025/12/27/slopibus

The title construction is interesting; this isn't “Does Tim Cook's Image Look Like AI?” or “Questions About Cook's Christmas Post.” The headline presents the accusation as an established fact. So we presume we’re going to see some, you know, actual evidence.

It turns out that Gruber's case rests on several subjective observations about the image:

The soft focus tree with a crisp edge. This is a standard technique in both photography and illustration. Selective focus with defined edges appears throughout Thomson's portfolio. It's not evidence of AI, it's evidence of artistic choice.

The milk carton labelled both “Whole Milk” and “Lowfat Milk.” Gruber found this damning. He later added an update acknowledging that the actual props from Pluribus have exactly the same labelling. But rather than reconsidering his thesis, he dismissed this as “a stupid mistake to copy”, when, in fact, the image simply reproduces the show's props accurately.

Furthermore, milk cartons are central to the plot of Pluribus. In Episode 5, “Got Milk,” protagonist Carol Sturka investigates mysterious milk cartons that the hive-mind Others consume. The conflicting labels aren't a mistake to copy; they're a deliberate, calculated reference to the show's core mystery.

The “Cow Fun Puzzle” maze. Gruber writes that he “can't recall ever seeing a puzzle of any kind on a milk carton” and suggests this conflates milk cartons with cereal boxes. This is simply a failure of memory or imagination. Mazes and puzzles have appeared on milk cartons for decades, particularly in the American market where Pluribus is set. For example, a 2002 Packaging World article documented a Crayola school milk program in which cartons were printed with “puzzles and brainteasers” on the side panel.

The general “weirdness” of the image. Subjective aesthetic judgments dressed up as forensic analysis aren't evidence. Thomson's established body of work frequently features surreal, off-kilter scenes that blend everyday objects with unexpected elements. His style has been compared to a modern, whimsical Edward Hopper.

The Scam Theory

Most troublingly, Gruber wrote: “Apple must have somehow fallen for a scam, because that Keith Thomson's published paintings are wonderful.”

Let's be clear about what's being alleged here: that a professional artist with an established portfolio and decades of work deliberately defrauded one of the world's largest companies by submitting AI-generated work as his own. This is an extraordinary accusation to make without solid evidence.

The Follow-Up: Doubling Down

Two days later, Gruber published “Slop Is Slop,” which was somehow even worse.

https://daringfireball.net/2025/12/slop_is_slop

The “Non-Denial Denial”

When journalists contacted Keith Thomson, he responded: “I'm unable to comment on specific client projects. In general, I always draw and paint by hand and sometimes incorporate standard digital tools.”

Gruber's interpretation? “That is a non-denial denial that he used generative AI to create the image.”

This reading is remarkable. An artist says he draws and paints by hand and sometimes uses standard digital tools. Gruber treats this as a confession of AI use because... it didn't explicitly exclude AI? By this standard, any artist who doesn't specifically deny using every conceivable tool in every interview is implicitly admitting to using them.

“Standard digital tools” in the illustration world typically refers to a wide range of software, such as Photoshop, Illustrator, and Procreate, that have been industry standards for decades. Interpreting this phrase as a coded admission of generative AI use requires a level of motivated reasoning that borders on the paranoid.

Perhaps those tools did, in fact, include generative AI, intentionally used to underscore the show's nuance and themes. That's clever and playful, not slop.

Rejecting the Obvious Explanation

M.G. Siegler, a former Google Ventures partner, suggested the image might be deliberately referencing Pluribus's AI themes.

The show is explicitly about a hive mind that functions eerily like a large language model. It can't create anything truly new; it can only recombine existing knowledge. Siegler wondered whether the promotional image might be playing with these very themes.

Multiple critics have also noted that Pluribus functions as an allegory for generative AI. James Poniewozik of the New York Times explicitly drew parallels between the show's premise and “the modern lure of AI, which promises to deliver progress and plenty for the low, low price of smooshing all human intelligence into one obsequious collective mind.”

Gruber's response was contemptuous: “I think MG didn't put enough y's in the wayyyy in 'I'm sure I'm reading wayyyy too much into that tweet'. There is no 3D chess being played here.”

But consider what Gruber is asking us to believe: that Apple, a company notoriously obsessive about brand presentation, accidentally published sloppy AI-generated artwork to promote their flagship new show, credited a specific artist by name, and doubled down when challenged, all without anyone noticing it was AI.

Against this, Siegler's theory that a promotional image for a show about AI themes might deliberately play with AI aesthetics seems almost boringly straightforward.

Misapplying Occam's Razor

Gruber invokes Occam's razor, arguing that “the simplest explanation is that it simply is AI-generated slop, and Keith Thomson suckered Apple into paying for it.”

This is a fundamental misuse of Occam's razor. The principle isn't “assume the most cynical interpretation.” It's “don't multiply explanatory entities unnecessarily.”

The simplest explanation for the image is:

  1. Apple commissioned promotional art from a professional artist
  2. The artist created an image referencing the show's plot
  3. The image was designed to look slightly “off” to match the show's themes about compromised reality
  4. The artist's style, which has always embraced surreal elements, was deliberately deployed
  5. Apple published it.

Gruber's “simple” explanation requires:

  1. Keith Thomson, an established artist with decades of work, decided to commit professional fraud
  2. He submitted AI-generated work as his own hand-made art
  3. Apple's entire marketing apparatus failed to notice
  4. When challenged, Apple doubled down and explicitly credited the work as human-made
  5. Thomson gave statements that carefully avoided denying AI use (implying conspiracy).

Which of these scenarios actually requires fewer assumptions?

Separately, Gruber has written thoughtfully about AI and art. In October 2025, he published a piece acknowledging that “generative AI tools not only can be, but already are, used to create genuine art.” He claims his objection isn't to AI itself but to “slop”: low-quality output passed off as craftsmanship.

Fair enough. But the Pluribus incident shows how easily this reasonable concern can morph into something uglier: a presumption of guilt, a refusal to consider alternative explanations, and a willingness to publicly accuse working artists of fraud based on aesthetic hunches.

The backlash against AI-generated imagery is understandable. Genuine slop exists, and people have every right to be concerned about it. But “slopback”, the reflexive accusation of AI use based on vibes and pattern-matching, helps no one.

John Gruber, a normally careful writer, let his suspicions outrun his evidence and publicly accused a working artist of fraud. At a minimum, he owes Keith Thomson an apology.

The benefit of the doubt, as PiunikaWeb noted, “is gone in 2025.” Perhaps we should work on getting it back.

 

from Build stuff; Break stuff; Have fun!

I was scrolling through my drafted posts and realized how much has changed in the last year.

There are posts I had prepared where I wanted to show how I migrated X to Y, for example. With AI in mind, this could now be done in minutes, and it no longer feels worth talking or writing about.

A lot of these posts can probably be deleted because they already feel outdated. It’s crazy how much has changed in such a short time, but it’s also exciting to see how all of this continues to evolve.


90 of #100DaysToOffload
#log #ai

 

from laska

I've finished three graphic novels that I'd really like to tell you about. Especially since I ended with the cutest one, which gave me the motivation.

If, like me, you prefer to know as little as possible before reading them, read only the first paragraph for each book :)

[For the first two books, this article touches on mental health, parental violence, hospitalisations, and self-inflicted harm.]

First, Qu'est-ce qui monte et qui descend, chroniques d'une borderline by KNL. More an illustrated book than a comic, it's an autobiographical account that is nowhere to be found on the website of its publisher, Hachette. The author's website is likewise nowhere to be found. I hope she is doing well, because she is borderline. That means she goes through very intense highs and lows, to the extreme. A tendency to think in black and white: a friend who doesn't reply to her messages within the hour must necessarily not like her anymore. All of these symptoms, KNL describes remarkably well.

She talks about the hospitalisations, including the lighter sides, like the jokes shared with the staff. But also the mistreatment by the institution in France, where as late as 2014 phones were still confiscated, pyjamas mandatory, one visit per day... Among the positives, she met good doctors and found a medication balance. A fragile balance, admittedly, but always with a great deal of hope. I'm rather surprised she never came across abusive caregivers; at least it isn't mentioned. She doesn't recount a single medical consultation, for that matter.

The luck she has, which many patients no longer have, is a mother and a partner who are unwavering supports. (Which is not to say she should be doing better than she is, as you sometimes hear.)

From the conversations between hospitalised patients emerge a few indiscretions, but above all support. (Pouring out your whole life story from the first minute: I observed that too, though with some horrors along the way that still haunt me.)

This book is very colourful and beautiful. It is also didactic (that is, meant to instruct; I'm never quite clear on that word, which is a little ironic). I recommend it, if you can find it at a library, since I don't know whether it's available in shops.


On to Chère Maman, les mères aussi peuvent être toxiques. By Sophie Adriansen and Melle Caroline (go read her autobiographical comics, they're absolutely brilliant); the title speaks for itself.

Alix, the heroine, is rather self-effacing in front of her mother, who puts her down constantly. She doesn't have the resources to put words to her mother's behaviour, and she makes excuses for her.

But by dint of hearing the famous “You only get one mother”, as if that meant you should put up with anything, she gets angry, very gradually. That's when I started breathing again, because at the beginning I wanted to scream.

Easy to say, right? When you arrive 35 years later and look at it from the outside, without having grown up with it. The other members of the family don't even notice the treatment the mother inflicts on her daughter. And all the more so because it's happening to that particular daughter.


Finally, I had to finish L'amourante before the rest of my pile, because someone else reserved it after me at the library. What a delightful non-choice! It's about a young woman, Louise, who stops ageing as long as someone loves her.

It's a bit like Interview with the Vampire, crossing the centuries with losses and commotion. But from a woman's point of view it isn't the same experience, who would have guessed. And it starts earlier in History. How beautiful it all is, those landscapes and journeys! Her love of art too, from writing fairy tales to puppetry.

But of course it raises the question of what it means to be human: how you live while deceiving people for eternity when you have a soul, or at least a few scruples.

I see myself in this rather cold, very jaded character, little interested in the passion that drives so many of her kind.

And yet it's a sweet treat, because even though I'm almost never in love, I love romances, and my heart cracked for Louise.

Happy reading!

 

from TECH

Picture the scene for a moment. A shady character corners you on a digital street corner and whispers: hey kid, want to see the guts of X's algorithm? It's right there, help yourself. That is precisely the impression given by Elon Musk's recent manoeuvre. The billionaire appears to be keeping, at least in part, a promise made a week ago: opening his social network's recommendation algorithm to the public. If the intention looks noble on paper, the reality resembles a vast communications operation far more than a genuine revolution in transparency.

Remember that Musk had already promised this opening back in 2022. At the time, we were treated to a mere snapshot of the code, which quickly became obsolete, far from the standard definition of a living, collaborative open source project. This new attempt, though presented as a step forward, suffers from the same chronic ills. X's boss has promised to update the repository every four weeks, but allow me to doubt that this commitment will be kept, given the company's track record.

The main problem lies in what is missing. Elon Musk had assured that he would publish all the code used to determine recommendations, including for advertising. Yet from where I sit, that promise is far from kept. The code governing ad display is conspicuously absent. More troubling still, the default sorting system for the “Following” feed, managed by the Grok AI since last November, also appears nowhere in the repository. We are therefore dealing with an incomplete puzzle whose most lucrative and most opaque pieces have been carefully removed from the box.

The website Gizmodo tried to get answers about these glaring omissions, but radio silence from X has become a worrying norm. Nevertheless, here we are with this new pile of code, and the first thing to know is that, in Elon Musk's own words, this algorithm sucks. It's a fascinating statement, especially when compared with that of Nikita Bier, head of product at X, who boasts of an increase in new users' engagement time. Whom to believe? Is the algorithm ineffective, or is it an addiction machine that works too well?

The truth is probably more cynical. The algorithm described in the technical documentation looks like an update of the TikTok method: a system designed to capture your attention at any cost. It doesn't seek to inform you or elevate the debate, but to stimulate your most primal impulses. It privileges pure engagement, desperately hunting for whatever will make you stop scrolling, even if that means flooding you with divisive content. It's a mechanism that flatters your id and completely ignores your superego.

Musk also calls his algorithm stupid, a direct response to complaints from certain conservative American users, such as Mark Kern, who believe the system penalises frequently blocked accounts. While that is technically plausible, it is hilarious to watch these critics omit that massively blocked accounts are often vectors of harassment. The algorithm would not be woke, then, just a basic filter against toxic behaviour, which seems to bother a specific fringe of the users “liberated” by Musk.

But the most critical point of this false transparency lies in the very nature of the system. X admits that everything now rests on an artificial intelligence architecture based on Grok. Analysis is no longer done through manual rules a human can understand, but through opaque machine learning that ingests your clicks, replies, and likes and spits back whatever it judges relevant. Open-sourcing the code of a neural black box is nonsense. Seeing the container's source code does not explain how the AI makes its decisions inside. It is transparency theatre, nothing more.

The context makes this feeling of sham worse. The platform has become a private company, fleeing public reporting obligations, and recently received fines from the European Union for its lack of transparency. What's more, the Grok tool is currently under fire for having generated non-consensual sexual images. In this climate of deregulation and chaos, throwing a few lines of code to the public looks like a clumsy diversion.

We are facing two irreconcilable concepts: the needs of a company that must hook users in order to sell advertising, and the human desire to be well informed and at peace. Making the algorithm open source will never solve that impossible equation as long as the ultimate goal remains maximising profit through attention. We will see whether outside developers manage to extract anything useful from this code, but the smart money says the whole operation serves only to mask the reality of a service that has become an unpredictable attention casino.

 

from An Open Letter

I’m at such a huge point in my life, but it’s also been such a nightmarishly stressful and shit day. Tomorrow will be better.

 

from Thoughts on Nanofactories

It is the future, and Nanofactories have brought the rest of the universe to our doorstep. Any material object can now be printed directly into our living rooms, provided we have the schema files.

This immediacy is incredibly efficient, but there is a sense we have lost some of the magic and mystery that we used to attach to things distant and rare. Historic romantic obsessions, such as Orientalism or Futurism, were powerful to people largely because the objects of desire were distant and near-unobtainable. That gap between the desire and the acquisition created a space for the imagination to fly and bloom.

That’s not to say we want to go back to such romantic fantasies, because, as we now clearly know: they were fantasies. Orientalism could be positive when it enchanted people to want to learn more about very real cultures to their East. But it was destructive when it was shared as accurate representation. At its worst, it portrayed people from the Middle-East and Asia as “other” enough that Western civilians would not empathize when Western governments invaded, pillaged, enslaved, and forced opium through those same Eastern nations.

Let us never return to that world.

Futurism was different, but still tricky. Through Humanist and adjacent movements, it inspired many, many people to contribute to the sciences, to engineering, and to the arts. We had taken the reins of history from unpredictable gods, and assured ourselves that we were on track to a better future. There was indeed widespread flourishing as a result of this movement, but there were also those who became so obsessed with and blinded by the fantasy of that future that they willingly sacrificed their present world for it.

Within a few decades of the birth of the Internet – that Futurist invention which connected people across Earth – the leading technology companies ignored consent and unleashed armies of “scraper” bots to ingest all of the information and creations of the world’s population to train their new Generative Artificial Intelligence models. These companies built vast data centers to power these models, sucking up enormous percentages of global energy, water, and computation hardware in the process.

Even at the time, many people were asking who would commit to such destructive, trust-breaking, and unsustainable development. These were not idiots – far from it. It was because the Futurist fantasy they held at their core included building god-like super-AI panaceas which would cure all of the world's issues. When you are bringing about God, any sacrifice is worth it. And yet again these Romantics of the Distant committed their lives to false utopian fantasies, and sacrificed the consent and rights of many real humans along the way.

So now that we can print anything, whenever we desire, does Romanticism still have a place in our world? Should it? I suppose those among you old enough might remember a time when many Nanofactory design schemas were hard to find online. That time brought a kind of romanticism. I remember spending three days searching for the schema for a Honeyball Orange tree, as I had heard they were the tastiest of oranges. For three days I looked, and convinced myself in the process that this was going to be life-changingly tasty. No other orange tree could match. I didn't find it at the time, but years later, during the Open-Schema boom, I finally came across it. I printed it, then picked and sliced open my first Honeyball Orange. And finally I tasted it. It was delicious. It really was. But I don't know whether I can say it was life-changing in the way I had charged myself to believe. I probably wouldn't recommend anyone else spend three days for this orange.

This is where I’m particularly intrigued by the recent “Hinted Design” movement emerging from the Avant-Garde side of the Printed Arts. One of their core philosophies centres around creating print schemas for objects which do not tell the whole story, but are designed to spark the imagination to fill out the empty space and the blanks. They often utilize moving lines, glassfreeze, and photon catchers to give a shimmering sense of a full object that could be.

I suspect this is the kind of Romanticism we should continue to pursue – where we go in knowing that it is invigorating imagination at work, but also knowing that these distant fantasies are made to inspire, not to convey reality itself.

 

from Roberto Deleón

I have lived alone since 2014. Twelve years of everyday silence change the way you relate to other people.

I learned to be in silence. To have myself as a companion. To inhabit time in the way I like best.

That's why, when someone approaches me with expectations, with intensity, or with a certain invasiveness, I generally don't react well. It isn't rejection: it's care. I protect the space that lets me breathe.

Over time I understood that it wasn't about isolating myself, but about putting things in order.

There was a time when I didn't know how to be alone. I needed to be with friends, with my girlfriend at the time, always in company. And I enjoyed it. But without planning it, without consciously deciding it, life carried me, patiently, to the place I inhabit today.

Today I want to write about how I have ordered what I call my social ecosystem.


What is my social ecosystem?

It is the set of spaces, rhythms, and people that surround me and that meet, more or less, these conditions:

  • they don't demand intensity
  • they don't collapse in silence
  • they allow conversation without an agenda
  • they don't depend on social networks

It isn't a renunciation of bonds; it's a way of keeping them from wearing out.

Over time I understood that this ecosystem works best when it has three clear layers and two well-tended rhythms.


The three layers

Layer 1: the core

One or two people. No more.

These are bonds where I can talk about anything: the important, the absurd, the profound, or simply play around and joke. They don't need to be actively maintained. Hours or days of silence aren't experienced as abandonment.

Sometimes it's a walk picked up again after weeks. Or a voice message that demands no immediate reply. Or a conversation that continues exactly where it left off.

These bonds aren't sought out: they're recognised when they appear. Curiously, they tend to be born in sideways conversations, without intention, without a prepared stage. I'd like to develop that idea later on.

They tend to be old friends, or people with whom time has already done its work.

Here the axis is neither conversation nor silence on their own, but trust. Quality time appears when it's needed, not when it's forced.


Layer 2: affinity

Here are the people I converse well with. There are shared interests, but the bond isn't yet core.

When we see each other – or arrange to – we talk about what we share: cooking, mathematics, physics, engineering, ideas. It isn't about who's right or who knows more. There is no competition. There is respect and mutual admiration: mine for what they know, theirs for what I know.

In this layer, conversation is the centre. And that, for me, is deeply satisfying.


Layer 3: the habitat (places, not people)

This layer isn't made of people, but of places.

They are my spaces. Places where I don't look for conversation; if it happens, it's a side effect. I don't have to introduce myself, I don't have to perform. I go alone and come back full.

Here live my cooking class, the gym, swimming, cycling when there's a good group, a few talks or workshops I attend regularly, or even that restaurant that already recognises me sitting alone.

This habitat sustains everything else.


The rhythm: where everything is tended

The bonds that matter sustain themselves. But they need rhythm.

This is where I have to find the tempo that suits me best across the three layers. Too much of something tires. Too little, and the bond doesn't develop.

Going from the outside in:

  • A habitat visited too often gets boring. Too much cycling, too much cooking, too much of the same thing loses its charm.
  • In the affinity layer, overusing contact doesn't let ideas mature or new experiences happen; it leaves no room for silent growth.
  • And if the core stops being calm and natural and becomes obligatory, performed, or forced, it too wears out.

Rhythm isn't imposed: it's listened for.


Language also tends the rhythm

That's why I take great care with how I talk about our get-togethers.

It doesn't come naturally to me to say: “let's see when we can meet up” “we should get together”

I prefer phrases like: “I enjoyed this conversation” “when we next coincide, we'll continue” “we'll keep talking another day”

They don't close doors. They let the encounter breathe.

Because if it was already worth it, it doesn't need to be tied down.

Maybe tending bonds isn't about seeing each other more, but about knowing when to leave space.


Over time I understood that my social ecosystem isn't about adding people, plans, or presence, but about tending the conditions: the space, the rhythm, and the quality with which something happens.

There are bonds that grow in conversation, others in silence, and others simply in knowing they are there.

Everything else is noise.

This is the map that works for me today.

If yours is drawn differently, or if any of this made you think, you can write to me. Good conversations don't always start with talking.

Send me your comment; I'll read it calmly → https://tally.so/r/2EEedb

 

from Chemin tournant

A wash of orange ground on the canvas, falling between green and the greyness of the trunks; the courtyards scenting the air for the office of a palm-nut red; and the air that lofts us in hangs the old human anguish on the bygone sky, the self never able to shield its dreams from what is coming. The wearing of the light reaches the edge of things.

Number of occurrences: 14

#VoyageauLexique

In this second Voyage au Lexique, I continue to explore, while taking care not to exploit them, the words of Ma vie au village (in Journal de la brousse endormie) whose number of occurrences is significant.

 

from G A N Z E E R . T O D A Y

Spoke too soon: Awoke at 3:00am for no apparent reason, so decided to burn the midnight oil and continue work on the Ganzeer.com update.

It's mostly organizational: Editing work categories and adding new ones, and shuffling projects around and fixing metadata, that sort of thing. Not fun, but should hopefully result in a web presence that makes better sense.

I did however come across a couple dusty external hard-drives that hold a treasure trove of olden works, including documentation of the first publication I ever put together: 8x8!

Won't get around to populating the website with most of that old stuff till I'm done with organizing what's already on there first. I reckon I'll probably crash by sunrise and have to make an attempt to recalibrate my body's circadian rhythms once again over the days to come.

#journal #work

 

from SmarterArticles

In February 2025, artificial intelligence researcher Andrej Karpathy, co-founder of OpenAI and former AI leader at Tesla, posted a provocative observation on social media. “There's a new kind of coding I call 'vibe coding',” he wrote, “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” By November of that year, Collins Dictionary had named “vibe coding” its Word of the Year, recognising how the term had come to encapsulate a fundamental shift in humanity's relationship with technology. As Alex Beecroft, managing director of Collins, explained: “The selection of 'vibe coding' as Collins' Word of the Year perfectly captures how language is evolving alongside technology.”

The concept is beguilingly simple. Rather than writing code line by line, users describe what they want in plain English, and large language models generate the software. Karpathy himself described the workflow with disarming candour: “I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like 'decrease the padding on the sidebar by half' because I'm too lazy to find it. I 'Accept All' always, I don't read the diffs anymore.” Or, as he put it more succinctly: “The hottest new programming language is English.”

For newsrooms, this represents both an extraordinary opportunity and a profound challenge. The Generative AI in the Newsroom project, a collaborative effort examining when and how to use generative AI in news production, has been tracking these developments closely. Their assessment suggests that 2026's most significant newsroom innovation will not emerge from development teams but from journalists who can now create their own tools. The democratisation of software development promises to unlock creativity and efficiency at unprecedented scale. But it also threatens to expose news organisations to security vulnerabilities, regulatory violations, and ethical failures that could undermine public trust in an industry already battling credibility challenges.

The stakes could hardly be higher. Journalism occupies a unique position in the information ecosystem, serving as a watchdog on power while simultaneously handling some of society's most sensitive information. From whistleblower communications to investigative documents, from source identities to personal data about vulnerable individuals, newsrooms are custodians of material that demands the highest standards of protection. When the barriers to building software tools collapse, the question becomes urgent: how do organisations ensure that the enthusiasm of newly empowered creators does not inadvertently compromise the very foundations of trustworthy journalism?

The Democratisation Revolution

Kerry Oslund, vice president of AI strategy at The E.W. Scripps Company, captured the zeitgeist at a recent industry panel when he declared: “This is the revenge of the English major.” His observation points to a fundamental inversion of traditional power structures in newsrooms. For decades, journalists with story ideas requiring custom tools had to queue for limited development resources, often watching their visions wither in backlogs or emerge months later in compromised form. Vibe coding tools like Lovable, Claude, Bubble AI, and Base44 have shattered that dependency.

The practical implications are already visible. At Scripps, the organisation has deployed over 300 AI “agents” handling complex tasks that once required significant human oversight. Oslund described “agent swarms” where multiple AI agents pass tasks to one another, compiling weekly reports, summarising deltas, and building executive dashboards without human intervention until the final review. The cost savings are tangible: “We eliminated all third-party voice actors and now use synthetic voice with our own talent,” Oslund revealed at a TV News Check panel.

During the same industry gathering, leaders from Gray Media, Reuters, and Stringr discussed similar developments. Gray Media is using AI to increase human efficiency in newsrooms, allowing staff to focus on higher-value journalism while automated systems handle routine tasks.

For community journalism, the potential is even more transformative. The Nieman Journalism Lab's predictions for 2026 emphasise how vibe coding tools have lowered the cost and technical expertise required to build prototypes, creating space for community journalists to experiment with new roles and collaborate with AI specialists. By translating their understanding of audience needs into tangible prototypes, journalists can instruct large language models on the appearance, features, and data sources they require for new tools.

One prominent data journalist, quoted in coverage of the vibe coding phenomenon, expressed the reaction of many practitioners: “Oh my God, this vibe coding thing is insane. If I had this during our early interactive news days, it would have been a godsend. Once you get the hang of it, it's like magic.”

But magic, as any journalist knows, demands scrutiny. As programmer Simon Willison clarified in his analysis: “If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book. That's using an LLM as a typing assistant.” The distinction matters enormously. True vibe coding, where users accept AI-generated code without fully comprehending its functionality, introduces risks that newsrooms must confront directly.

The Security Imperative and Shadow AI

The IBM 2025 Cost of Data Breach Report revealed statistics that should alarm every news organisation considering rapid AI tool adoption. Thirteen percent of organisations reported breaches of AI models or applications, and of those compromised, a staggering 97% reported lacking AI access controls. Perhaps most troubling: one in five organisations reported breaches due to shadow AI, the unsanctioned use of AI tools by employees outside approved governance frameworks.

The concept of shadow AI represents an evolution of the “shadow IT” problem that has plagued organisations for decades. As researchers documented in Strategic Change journal, the progression from shadow IT to shadow AI introduces new threat vectors. AI systems possess intrinsic security vulnerabilities, from the potential compromising of training data to the exploitation of AI models and networks. When employees use AI tools without organisational oversight, these vulnerabilities multiply.

For newsrooms, the stakes are uniquely high. Journalists routinely handle information that could endanger lives if exposed: confidential sources, whistleblower identities, leaked documents revealing government or corporate malfeasance. The 2014 Sony Pictures hack demonstrated how devastating breaches can be, with hackers releasing salaries of employees and Hollywood executives alongside sensitive email traffic. Data breaches in media organisations are particularly attractive to malicious actors because they often contain not just personal information but intelligence with political or financial value.

The Gartner research firm predicts that by 2027, more than 40% of AI-related data breaches will be caused by improper use of generative AI across borders. The swift adoption of generative AI technologies by end users has outpaced the development of data governance and security measures. According to the Cloud Security Alliance, only 57% of organisations have acceptable use policies for AI tools, and fewer still have implemented access controls for AI agents and models, activity logging and auditing, or identity governance for AI entities.

The media industry's particular vulnerability compounds these concerns. As authentication provider Auth0 documented in an analysis of major data breaches affecting media companies: “Data breaches have become commonplace, and the media industry is notorious for being a magnet for cyberthieves.” With billions of users consuming news online, the attack surface for criminals continues to expand. Media companies frequently rely on external vendors, making it difficult to track third-party security practices even when internal processes are robust.

Liability in the Age of AI-Generated Code

When software fails, who bears responsibility? This question becomes extraordinarily complex when the code was generated by an AI and deployed by someone with no formal engineering training. The legal landscape remains unsettled, but concerning patterns are emerging.

Traditional negligence and product liability principles still apply, but courts have yet to clarify how responsibility should be apportioned between AI tool developers and the organisations utilising these tools. Most AI providers prominently display warnings such as “AI can make mistakes and verify the output” while including warranty disclaimers that push due diligence burdens back onto the businesses integrating AI-generated code. The RAND Corporation's analysis of liability for AI system harms notes that “AI developers might also be held liable for malpractice should courts find there to be a recognised professional standard of care that a developer then violated.”

Copyright and intellectual property considerations add further complexity. In the United States, copyright protection hinges on human authorship. Both case law and the U.S. Copyright Office agree that copyright protection is available only for works created through human creativity. When code is produced solely by an AI without meaningful human authorship, it is not eligible for copyright protection.

Analysis by the Software Freedom Conservancy found that approximately 35% of AI-generated code samples contained licensing irregularities, potentially exposing organisations to significant legal liabilities. This “licence contamination” problem has already forced several high-profile product delays and at least two complete codebase rewrites at major corporations. In the United States, a lawsuit against GitHub Copilot (Doe v. GitHub, Inc.) argues that the tool suggests code without including necessary licence attributions. As of spring 2025, litigation continued.

For news organisations, the implications extend beyond licensing. In journalism, tools frequently interact with personal data protected under frameworks like the General Data Protection Regulation. Article 85 of the GDPR requires Member States to adopt exemptions balancing data protection with freedom of expression, but these exemptions are not blanket protections. The Austrian Constitutional Court declared the Austrian journalistic exemption unconstitutional, ruling that it was illegitimate to entirely exclude media data processing from data protection provisions. When Romanian journalists published videos and documents for an investigation, the data protection authority asked for information that could reveal sources, under threat of penalties reaching 20 million euros.

A tool built through vibe coding that inadvertently logs source communications or retains metadata could expose a news organisation to regulatory action and, more critically, endanger the individuals who trusted journalists with sensitive information.

Protecting Vulnerable Populations and Investigative Workflows

Investigative journalism depends on systems of trust that have been carefully constructed over decades. Sources risk their careers, freedom, and sometimes lives to expose wrongdoing. The Global Investigative Journalism Network's guidance emphasises that “most of the time, sources or whistleblowers do not understand the risks they might be taking. Journalists should help them understand this, so they are fully aware of how publication of the information they have given could impact them.”

Digital security has become integral to this protective framework. SecureDrop, an open-source platform for operating whistleblowing systems, has become standard in newsrooms committed to source protection. Encrypted messaging applications like Signal offer end-to-end protection. These tools emerged from years of security research and have been vetted by experts who understand both the technical vulnerabilities and the human factors that can compromise even robust systems.

When a journalist vibe codes a tool for an investigation, they may inadvertently undermine these protections without recognising the risk. As journalist James Risen of The Intercept observed: “We're being forced to act like spies, having to learn tradecraft and encryption and all the new ways to protect sources. So, there's going to be a time when you might make a mistake or do something that might not perfectly protect a source. This is really hard work.”

The Perugia Principles for Journalists, developed in partnership with 20 international journalists and experts, establish twelve principles for working with whistleblowers in the digital age. First among them: “First, protect your sources. Defend anonymity when it is requested. Provide safe ways for sources to make 'first contact' with you, where possible.” A vibe-coded tool, built without understanding of metadata, logging, or network traffic patterns, could create exactly the kind of traceable communication channel that puts sources at risk.

Research from the Center for News, Technology and Innovation documents how digital security threats have become more important than ever for global news media. Journalists and publishers have become high-profile targets for malware, spyware, and digital surveillance. These threats risk physical safety, privacy, and mental health while undermining whistleblower protection and source confidentiality.

The resource disparity across the industry compounds these challenges. News organisations in wealthier settings are generally better resourced and more able to adopt protective technologies. Smaller, independent, and freelance journalists often lack the means to defend against threats. Vibe coding might seem to level this playing field by enabling under-resourced journalists to build their own tools, but without security expertise, it may instead expose them to greater risk.

Governance Frameworks for Editorial and Technical Leadership

The challenge for news organisations is constructing governance frameworks that capture the benefits of democratised development while mitigating its risks. Research on AI guidelines and policies from 52 media organisations worldwide, analysed by journalism researchers and published through Journalist's Resource, offers insights into emerging best practices.

The findings emphasise the need for human oversight throughout AI-assisted processes. As peer-reviewed analysis notes: “The maintenance of a 'human-in-the-loop' principle, where human judgment, creativity, and editorial oversight remain central to the journalistic process, is vital.” The Guardian requires senior editor approval for significant AI-generated content. The CBC has committed not to use AI-powered identification tools for investigative journalism without proper permissions.

The NIST AI Risk Management Framework provides a structured approach applicable to newsroom contexts. It guides organisations through four repeatable actions: identifying how AI systems are used and where risks may appear (Map), evaluating risks using defined metrics (Measure), applying controls to mitigate risks (Manage), and establishing oversight structures to ensure accountability (Govern). The accompanying AI RMF Playbook offers practical guidance that organisations can adapt to their specific needs.

MIT Sloan researchers have proposed a “traffic light” framework for categorising AI use cases by risk level. Red-light use cases are prohibited entirely. Green-light use cases, such as chatbots for general customer service, present low risk and can proceed with minimal oversight. Yellow-light use cases, which comprise most AI applications, require enhanced review and human judgment at critical decision points.

For newsrooms, this framework might translate as follows:

Green-light applications might include internal productivity tools, calendar management systems, or draft headline generators where errors create inconvenience rather than harm.

Yellow-light applications would encompass data visualisations for publication, interactive features using public datasets, and transcription tools for interviews with non-sensitive subjects. These require review by someone with technical competence before deployment.

Red-light applications would include anything touching source communications, whistleblower data, investigative documents, or personal information about vulnerable individuals. These should require professional engineering oversight and security review regardless of how they were initially prototyped.

Building Decision Trees for Non-Technical Staff

Operationalising these distinctions requires clear decision frameworks that non-technical staff can apply independently. The Poynter Institute's guidance on newsroom AI ethics policies emphasises the need for organisations to create AI committees and designate senior staff to lead ongoing governance efforts. “This step is critical because the technology is going to evolve, the tools are going to multiply and the policy will not keep up unless it is routinely revised.”

A practical decision tree for vibe-coded projects might begin with a series of questions:

First, does this tool handle any data that is not already public? If so, escalate to technical review.

Second, could a malfunction in this tool result in publication of incorrect information, exposure of source identity, or violation of individual privacy? If yes, professional engineering oversight is required.

Third, will this tool be used by anyone other than its creator, or persist beyond a single use? Shared tools and long-term deployments require enhanced scrutiny.

Fourth, does this tool connect to external services, databases, or APIs? External connections introduce security considerations that require expert evaluation.

Fifth, would failure of this tool create legal liability, regulatory exposure, or reputational damage? Legal and compliance review should accompany technical review for such applications.
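Expressed as code, this escalation logic is simple enough to embed in an intake form or a shared script. The sketch below is a minimal illustration in Python, written under stated assumptions: the field names, and the mapping of the five questions onto the traffic-light tiers described earlier, are illustrative choices rather than an established newsroom standard.

```python
from dataclasses import dataclass

@dataclass
class ToolProposal:
    """Answers a journalist gives about a proposed vibe-coded tool (hypothetical fields)."""
    uses_nonpublic_data: bool    # Q1: handles data that is not already public?
    failure_causes_harm: bool    # Q2: could malfunction publish errors, expose a source, or breach privacy?
    shared_or_persistent: bool   # Q3: used by others, or persists beyond a single use?
    external_connections: bool   # Q4: connects to external services, databases, or APIs?
    legal_exposure: bool         # Q5: could failure create legal, regulatory, or reputational damage?

def review_tier(p: ToolProposal) -> str:
    """Map the five escalation questions onto illustrative traffic-light tiers."""
    # Sensitive data, harmful failure modes, or legal exposure all demand
    # professional engineering and security review (red-light territory).
    if p.uses_nonpublic_data or p.failure_causes_harm or p.legal_exposure:
        return "red: professional engineering and security review required"
    # Shared, persistent, or externally connected tools get enhanced scrutiny.
    if p.shared_or_persistent or p.external_connections:
        return "yellow: peer review by a designated technical reviewer"
    return "green: creator self-review against a checklist"

# Example: a single-use explorer of an already-public dataset, run once by its creator.
print(review_tier(ToolProposal(False, False, False, False, False)))  # -> green tier
```

The value of such a sketch is less the code than the discipline it encodes: every proposal answers the same five questions, and the answers, not the proposer's enthusiasm, determine the level of oversight.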

The Cloud Security Alliance's Capabilities-Based Risk Assessment framework offers additional granularity, suggesting that organisations apply proportional safeguards based on risk classification. Low-risk AI applications receive lightweight controls, medium-risk applications get enhanced monitoring, and high-risk applications require full-scale governance including regular audits.

Bridging the Skills Gap Without Sacrificing Speed

The tension at the heart of vibe coding governance is balancing accessibility against accountability. The speed and democratisation that make vibe coding attractive would be undermined by bureaucratic review processes that reimpose the old bottlenecks. Yet the alternative, allowing untrained staff to deploy tools handling sensitive information, creates unacceptable risks.

Several approaches can help navigate this tension.

Tiered review processes can match the intensity of oversight to the risk level of the application. Simple internal tools might require only a checklist review by the creator themselves. Published tools or those handling non-public data might need peer review by a designated “AI champion” with intermediate technical knowledge. Tools touching sensitive information would require full security review by qualified professionals.

Pre-approved templates and components can provide guardrails that reduce the scope for dangerous errors. News organisations can work with their development teams to create vetted building blocks: secure form handlers, properly configured database connections, privacy-compliant analytics modules. Journalists can be directed to incorporate these components rather than generating equivalent functionality from scratch.

Sandboxed development environments can allow experimentation without production risk. Vibe-coded prototypes can be tested and evaluated in isolated environments before any decision about broader deployment. This preserves the creative freedom that makes vibe coding valuable while creating a checkpoint before tools reach users or sensitive data.

Mandatory training programmes should ensure that all staff using vibe coding tools understand basic security concepts, data handling requirements, and the limitations of AI-generated code. This training need not make everyone a programmer, but it should cultivate healthy scepticism about what AI tools produce and awareness of the questions to ask before deployment.

The Emerging Regulatory Landscape

News organisations cannot develop governance frameworks in isolation from the broader regulatory environment. The European Union's AI Act, adopted in 2024, establishes requirements that will affect media organisations using AI tools. While journalism itself is not classified as high-risk under the Act, AI systems used in media that could manipulate public opinion or spread disinformation face stricter oversight. AI-generated content, including synthetic media, must be clearly labelled.

The Dynamic Coalition on the Sustainability of Journalism and News Media released its 2024-2025 Annual Report on AI and Journalism, calling for shared strategies to safeguard journalism's integrity in an AI-driven world. The report urges decision-makers to “move beyond reactive policy-making and invest in forward-looking frameworks that place human rights, media freedom, and digital inclusion at the centre of AI governance.”

In the United States, the regulatory landscape is more fragmented. More than 1,000 AI-related bills have been introduced across state legislatures in 2024-2025. California, Colorado, New York, and Illinois have adopted or proposed comprehensive AI and algorithmic accountability laws addressing transparency, bias mitigation, and sector-specific safeguards. News organisations operating across multiple jurisdictions must navigate a patchwork of requirements.

The Center for News, Technology and Innovation's review of 188 national and regional AI strategies found that regulatory attempts rarely directly address journalism and vary dramatically in their frameworks, enforcement capacity, and international coordination. This uncertainty places additional burden on news organisations to develop robust internal governance rather than relying on external regulatory guidance.

Cultural Transformation and Organisational Learning

Technical governance alone cannot address the challenges of democratised development. Organisations must cultivate cultures that balance innovation with responsibility.

IBM's research on shadow AI governance emphasises that employees should be “encouraged to disclose how they use AI, confident that transparency will be met with guidance, not punishment. Leadership, in turn, should celebrate responsible experimentation as part of organisational learning.” Punitive approaches to unsanctioned AI use tend to drive it underground, where it becomes invisible to governance processes.

News organisations have particular cultural advantages in addressing these challenges. Journalism is built on verification, scepticism, and accountability. The same instincts that lead journalists to question official sources and demand evidence should be directed at AI-generated outputs. Newsroom cultures that emphasise “trust but verify” can extend this principle to tools and code as readily as to sources and documents.

The Scripps approach, which Oslund described as starting with “guardrails and guidelines to prevent missteps,” offers a model. “It all starts with public trust,” Oslund emphasised, noting Scripps' commitment to accuracy and human oversight of AI outputs. Embedding AI governance within broader commitments to editorial integrity may prove more effective than treating it as a separate technical concern.

The Accountability Question

When something goes wrong with a vibe-coded tool, who is responsible? This question resists easy answers but demands organisational clarity.

The journalist who created the tool bears some responsibility, but their liability should be proportional to what they could reasonably have been expected to understand. An editor who approved deployment shares accountability, as does any technical reviewer who cleared the tool. The organisation itself, having enabled vibe coding without adequate governance, may bear ultimate responsibility.

Clear documentation of decision-making processes becomes essential. When a tool is deployed, records should capture: who created it, what review it received, who approved it, what data it handles, and what risk assessment was performed. This documentation serves both as a protection against liability and as a learning resource when problems occur.
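To make this concrete, a deployment record can be a small structured object written alongside each tool. The sketch below is illustrative only: the field names and risk tiers are hypothetical, chosen to mirror the items listed above rather than any established schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Hypothetical schema mirroring the record items described above.
# Field names and risk tiers are illustrative, not an industry standard.

@dataclass
class DeploymentRecord:
    tool_name: str
    created_by: str            # the journalist who built the tool
    reviewed_by: str           # the technical reviewer who cleared it
    approved_by: str           # the editor who signed off on deployment
    data_handled: list[str]    # categories of data the tool touches
    risk_tier: str             # e.g. "low", "restricted", "high"
    risk_assessment: str       # summary of, or link to, the assessment
    deployed_on: date = field(default_factory=date.today)

    def to_json(self) -> str:
        record = asdict(self)
        record["deployed_on"] = self.deployed_on.isoformat()
        return json.dumps(record, indent=2)

record = DeploymentRecord(
    tool_name="foi-response-tracker",
    created_by="a.reporter",
    reviewed_by="dev.team",
    approved_by="news.editor",
    data_handled=["public FOI correspondence"],
    risk_tier="restricted",
    risk_assessment="See risk-assessments/foi-tracker-2025-03.md",
)
print(record.to_json())
```

Even a record this simple answers the accountability question in advance: when something goes wrong, the chain of creation, review, and approval is already on file.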

As professional standards for AI governance in journalism emerge, organisations that ignore them may face enhanced liability exposure. The development of industry norms creates benchmarks against which organisational practices will be measured.

Recommendations for News Organisations

Based on the analysis above, several concrete recommendations emerge for news organisations navigating the vibe coding revolution.

Establish clear acceptable use policies for AI development tools, distinguishing between permitted, restricted, and prohibited use cases. Make these policies accessible and understandable to non-technical staff.

Create tiered review processes that match oversight intensity to risk level. Not every vibe-coded tool needs a security audit, but those handling sensitive data or reaching public audiences require appropriate scrutiny (a minimal sketch of one such tiering heuristic follows this list).

Designate AI governance leadership within the organisation, whether through an AI committee, a senior editor with oversight responsibility, or a dedicated role. This leadership should have authority to pause or prohibit deployments that present unacceptable risk.

Invest in training that builds basic security awareness and AI literacy across editorial staff. Training should emphasise the limitations of AI-generated code and the questions to ask before deployment.

Develop pre-approved components for common functionality, allowing vibe coders to build on vetted foundations rather than generating security-sensitive code from scratch.

Implement sandbox environments for development and testing, creating separation between experimentation and production systems handling real data.

Maintain documentation of all AI tool deployments, including creation, review, approval, and risk assessment records.

Conduct regular audits of deployed tools, recognising that AI-generated code may contain latent vulnerabilities that only become apparent over time.

Engage with regulatory developments at national and international levels, ensuring that internal governance anticipates rather than merely reacts to legal requirements.

Foster cultural change that treats AI governance as an extension of editorial integrity rather than a constraint on innovation.
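For the tiered review recommendation above, here is a minimal sketch of what a tiering heuristic might look like. The risk factors and tier names are hypothetical; any real scheme should be defined by the organisation's own governance leadership and grounded in its own risk assessment.

```python
# Hypothetical tiering heuristic for the tiered-review recommendation above.
# Factors and tier names are illustrative; a real scheme should be set by
# the organisation's AI governance leadership.

def review_tier(handles_sensitive_data: bool,
                public_facing: bool,
                touches_source_identities: bool) -> str:
    """Map simple risk factors to a review tier for a vibe-coded tool."""
    if touches_source_identities:
        return "full security audit + editorial sign-off"
    if handles_sensitive_data or public_facing:
        return "technical review + editor approval"
    return "self-certified sandbox use only"

# An internal prototype on synthetic data needs the lightest touch, while
# anything near source identities gets the heaviest scrutiny.
print(review_tier(False, False, False))  # -> self-certified sandbox use only
print(review_tier(True, False, False))   # -> technical review + editor approval
print(review_tier(True, True, True))     # -> full security audit + editorial sign-off
```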

Vibe coding represents neither utopia nor dystopia for newsrooms. It is a powerful capability that, like any technology, will be shaped by the choices organisations make about its use. The democratisation of software development can expand what journalism is capable of achieving, empowering practitioners to create tools tailored to their specific needs and audiences. But this empowerment carries responsibility.

The distinction between appropriate prototyping and situations requiring professional engineering oversight is not always obvious. Decision frameworks and governance structures can operationalise this distinction, but they require ongoing refinement as technology evolves and organisational learning accumulates. Liability, compliance, and ethical accountability gaps are real, particularly where published tools interface with sensitive data, vulnerable populations, or investigative workflows.

Editorial and technical leadership must work together to ensure that speed and accessibility gains do not inadvertently expose organisations to data breaches, regulatory violations, or reputational damage. The journalists building tools through vibe coding are not the enemy; they are practitioners seeking to serve their audiences and advance their craft. But good intentions are insufficient protection against technical vulnerabilities or regulatory requirements.

As the Generative AI in the Newsroom project observes, the goal is “collaboratively figuring out how and when (or when not) to use generative AI in news production.” That collaborative spirit, extending across editorial and technical domains, offers the best path forward. Newsrooms that get this balance right will harness vibe coding's transformative potential while maintaining the trust that makes journalism possible. Those that do not may find that the magic of democratised development comes with costs their organisations, their sources, and their audiences cannot afford.


References and Sources

  1. Karpathy, A. (2025). “Vibe Coding.” X (formerly Twitter). https://x.com/karpathy/status/1886192184808149383

  2. Collins Dictionary. (2025). “Word of the Year 2025: Vibe Coding.” https://www.collinsdictionary.com/us/woty

  3. CNN. (2025). “'Vibe coding' named Collins Dictionary's Word of the Year.” https://www.cnn.com/2025/11/06/tech/vibe-coding-collins-word-year-scli-intl

  4. Generative AI in the Newsroom. (2025). “Vibe Coding for Newsrooms.” https://generative-ai-newsroom.com/vibe-coding-for-newsrooms-6848b17dac99

  5. Nieman Journalism Lab. (2025). “Rise of the vibecoding journalists.” https://www.niemanlab.org/2025/12/rise-of-the-vibecoding-journalists/

  6. TV News Check. (2025). “Agent Swarms And Vibe Coding: Inside The New Operational Reality Of The Newsroom.” https://tvnewscheck.com/ai/article/agent-swarms-and-vibe-coding-inside-the-new-operational-reality-of-the-newsroom/

  7. The E.W. Scripps Company. (2024). “Scripps creates AI team to lead strategy, business development and operations across company.” https://scripps.com/press-releases/scripps-creates-ai-team-to-lead-strategy-business-development-and-operations-across-company/

  8. IBM Newsroom. (2025). “IBM Report: 13% Of Organizations Reported Breaches Of AI Models Or Applications.” https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications

  9. Gartner. (2025). “Gartner Predicts 40% of AI Data Breaches Will Arise from Cross-Border GenAI Misuse by 2027.” https://www.gartner.com/en/newsroom/press-releases/2025-02-17-gartner-predicts-forty-percent-of-ai-data-breaches-will-arise-from-cross-border-genai-misuse-by-2027

  10. Auth0. (2024). “11 of the Worst Data Breaches in Media.” https://auth0.com/blog/11-of-the-worst-data-breaches-in-media/

  11. Threatrix. (2025). “Software Liability in 2025: AI-Generated Code Compliance & Regulatory Risks.” https://threatrix.io/blog/threatrix/software-liability-in-2025-ai-generated-code-compliance-regulatory-risks/

  12. MBHB. (2025). “Navigating the Legal Landscape of AI-Generated Code: Ownership and Liability Challenges.” https://www.mbhb.com/intelligence/snippets/navigating-the-legal-landscape-of-ai-generated-code-ownership-and-liability-challenges/

  13. European Data Journalism Network. (2024). “Data protection in journalism: a practical handbook.” https://datavis.europeandatajournalism.eu/obct/data-protection-handbook/gdpr-applied-to-journalism.html

  14. Global Investigative Journalism Network. (2025). “Expert Advice to Keep Your Sources and Whistleblowers Safe.” https://gijn.org/stories/gijc25-tips-keep-sources-whistleblowers-safe/

  15. Journalist's Resource. (2024). “Researchers compare AI policies and guidelines at 52 news organizations.” https://journalistsresource.org/home/generative-ai-policies-newsrooms/

  16. SAGE Journals. (2024). “AI Ethics in Journalism (Studies): An Evolving Field Between Research and Practice.” https://journals.sagepub.com/doi/10.1177/27523543241288818

  17. Poynter Institute. (2024). “Your newsroom needs an AI ethics policy. Start here.” https://www.poynter.org/ethics-trust/2024/how-to-create-newsroom-artificial-intelligence-ethics-policy/

  18. Center for News, Technology and Innovation. (2024). “Journalism's New Frontier: An Analysis of Global AI Policy Proposals and Their Impacts on Journalism.” https://cnti.org/reports/journalisms-new-frontier-an-analysis-of-global-ai-policy-proposals-and-their-impacts-on-journalism/

  19. Media Rights Agenda. (2025). “DC-Journalism Launches 2024/2025 Annual Report on Artificial Intelligence, Journalism.” https://mediarightsagenda.org/dc-journalism-launches-2024-2025-annual-report-on-artificial-intelligence-journalism/

  20. NIST. (2024). “AI Risk Management Framework.” https://www.nist.gov/itl/ai-risk-management-framework

  21. Cloud Security Alliance. (2025). “Capabilities-Based AI Risk Assessment (CBRA) for AI Systems.” https://cloudsecurityalliance.org/artifacts/capabilities-based-risk-assessment-cbra-for-ai-systems

  22. Palo Alto Networks. (2025). “What Is Shadow AI? How It Happens and What to Do About It.” https://www.paloaltonetworks.com/cyberpedia/what-is-shadow-ai

  23. IBM. (2025). “What Is Shadow AI?” https://www.ibm.com/think/topics/shadow-ai

  24. Help Net Security. (2025). “Shadow AI risk: Navigating the growing threat of ungoverned AI adoption.” https://www.helpnetsecurity.com/2025/11/12/delinea-shadow-ai-governance/

  25. Wikipedia. (2025). “Vibe coding.” https://en.wikipedia.org/wiki/Vibe_coding

  26. Simon Willison. (2025). “Not all AI-assisted programming is vibe coding (but vibe coding rocks).” https://simonwillison.net/2025/Mar/19/vibe-coding/

  27. RAND Corporation. (2024). “Liability for Harms from AI Systems: The Application of U.S. Tort Law.” https://www.rand.org/pubs/research_reports/RRA3243-4.html

  28. Center for News, Technology and Innovation. (2024). “Journalists & Cyber Threats.” https://innovating.news/article/journalists-cyber-threats/

  29. USC Center for Health Journalism. (2025). “An early AI pioneer shares how the 'vibe coding' revolution could reshape data journalism.” https://centerforhealthjournalism.org/our-work/insights/early-ai-pioneer-shares-how-vibe-coding-revolution-could-reshape-data-journalism

  30. Wiley Online Library. (2024). “From Shadow IT to Shadow AI: Threats, Risks and Opportunities for Organizations.” Strategic Change. https://onlinelibrary.wiley.com/doi/10.1002/jsc.2682

  31. U.S. Copyright Office. (2024). “Copyright and Artificial Intelligence.” https://www.copyright.gov/ai/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795

Email: tim@smarterarticles.co.uk

 