from Larry's 100

Larry's Faves: 2025 Movies

See all #Larrys2025Faves

Bad year for movies. Or a bad year for me seeing movies. Award season rules push studios to cram releases at year’s end, and it's hard to keep up. I will catch up with films I missed in Jan-March.

My list, ranked, with microreviews:

1. One Battle After Another: Brilliant, has a tragic flaw
2. Sinners: Juke Joint Cinema
3. Pavements: Larry’s 100 Review
4. Weapons: Campy, creepy, freaky
5. Baltimorons: Larry’s 100 Review
6. The Ballad of Wallis Island: Tim Key is brilliant
7. KPop Demon Hunters: Trust the phenom
8. John Candy: I Like Me: Sadder than you’d think
9. Fantastic Four: Didn’t suck
10. Nonnas: More sweet than savory

Weapons

#movies #film #2025BestOf #FilmReviews #BestOfFilm #Larrys2025Faves #Larrys2025FavesFilm #Cinema #MovieReviews #100WordReview #Larrys100 #100DaysToOffload

 

from Kroeber

#002279 – 25 August 2025

The late-December afternoon is freezing, the sunset is beautiful, the waves excellent. The surfers in the water cannot hear the guy at the table in front of mine spinning conspiracy theories about the United States, Musk, Trump, the Nazis, and secret experiments on an island near New York. The woman with him is a silent interlocutor, laughing only now and then, encouraging the monologue.

 

from Larry's 100

Larry's Faves: 2025 Television

tv

Putting this list together, I was knocked out by how good television was in 2025. This list includes stone-cold classics like Andor, Black Mirror, and Hacks, while making room for the mind-bending genre work of Pluribus, The Eternaut, and Severance. Comedies came laughing back and were welcome against the darkness of both reality and fiction.

In 2025, television was balanced between shows that took me by surprise (The Studio) and high-quality comfort food (Abbott Elementary). We don’t know what awaits us, with more corporate consolidation and the storm clouds of AI, so I am savoring (most of) the shows below.

1. Pluribus (Apple TV): Larry's 100 Reviews
2. Andor (Disney+): Two-season masterpiece
3. The Rehearsal (HBO): Rube Goldberg of social experiments
4. The Eternaut (Netflix): Larry's 100 Review
5. The Studio (Apple TV): Fast-paced & meta AF
6. Shoresy (Hulu): I challenge you to find a show this sweet and filthy
7. Righteous Gemstones (HBO): Goodbye, sweet prince
8. Task (HBO): Existential dread with a water ice sidecar
9. The Lowdown (FX/Hulu): Pulp noir, Gen X style
10. Slow Horses (Apple TV): Larry's 100 Review
11. Severance (Apple TV): Goat room for the win
12. Abbott Elementary (ABC): Ava is one of my favorite characters on TV
13. Hacks (HBO): Loved the Larry Sanders callbacks
14. Mythic Quest (Apple TV): Poppy is one of my favorite characters on TV
15. Agatha All Along (Disney+): Witchy kitsch and great cast
16. Death By Lightning (Netflix): Who knew?
17. White Lotus (HBO): Ground Zero for the Year of Goggins
18. Gilded Age (HBO): No show has actors having fun like Gilded Age
19. Black Mirror (Netflix): Strong season for this institution
20. Fallout (Amazon): Cheating a bit, will straddle into 2026
21. Silo (Apple TV): Apple is the real SciFi channel
22. North of North (Netflix): A funny, smart, sexy show set near the Arctic
23. The Paper (Peacock): Better than I expected, strong ensemble
24. American Primeval (Netflix): Gritty westerns are always appreciated
25. The Bear (FX/Hulu): Overhyped mess, Carmy is annoying
26. Stranger Things (Netflix): Larry's 100 Review
27. Last of Us (HBO): The last episode was a horrible hour of TV
28. Murderbot (Apple TV): Stuff to like, but got tedious
29. The Four Seasons (Netflix): Catnip for almost old people
30. The Witcher (Netflix): I miss Henry Cavill
31. King & Conquer (Amazon): This was conceived in a lab just for me, but it sucked

#tv #television #2025BestOf #TVReviews #BestOfTV #Larrys2025FavesTV #Streaming #100WordReview #Larrys100 #100DaysToOffload #Larrys2025Faves

 

from Unvarnished diary of a lill Japanese mouse

JOURNAL 30 December 2025 #auberge

So, since we're not used to these things, we asked my brother to put his experts on the case of our dream of taking over the inn. He has always thought it was madness, but he played along as if it were for himself. So, there are several essential points. Of course we can buy the buildings while m and p are alive, but we can't get the restaurant licence, because it's a 1947 licence that was granted to family establishments that had already existed for x years before. It can be passed on to descendants, children, nephews and nieces, cousins and so on, as long as they are related, but it can't be sold, because it was granted: it's a special regime meant only for old, traditional establishments. So we would have to obtain a new licence. And here's the crappy part: if some heir shows up one day and says, I'll take the family licence, then goodbye neko and A, they will have priority. Japanese law is very protective of family and professional rights. For the same reason, if an heir in the direct line, so children, grandchildren, great-grandchildren and so on, decides to challenge the sale, they have a strong chance of winning; there are plenty of possible angles of attack: the amount, undue influence over very elderly people, the fact that one of the buyers is a foreigner, and so on. An 80% chance the court rules against us. To be safe, we would need the agreement of m and p's daughter, but also of the granddaughters, who would have to sign a waiver of the licence right and of any claim against the sale. On top of that, since Japan doesn't recognise marriage between women, any contract would have to be drawn up in both our names so that we have the same rights over the buildings and the new licence, and bang: A doesn't have resident status yet, so if her visa were cancelled (with the madwoman in power, that's not at all impossible), then goodbye princess. Even if M and P adopted us, an extreme solution, it would still pose a problem for A.: would she become Japanese at the same time? Unfortunately the question doesn't seem to be settled. All told, the chance of the project succeeding in the medium term is close to 0%, not to mention the financial losses, which I'll spare you the rundown of because it's not fascinating. To top it all off, there are local and regional laws which are what they are, but which don't go our way and which consider even me a foreigner, since I'm originally from Kanto. And then there's the most important unwritten law in Japan, the one that sits above all the others and says That Just Isn't Done. It makes you laugh, but that's my country. That's how, for example, thousands of jobs in big companies, I do mean thousands, are held by people who arrive in the morning and leave in the evening having done strictly nothing, or almost nothing: you can't fire them because "that just isn't done", and no economic consideration can win against that. Crazy, right? That's Japan.

So either we find the address of m and p's daughter and those of the two granddaughters, or we give up for good. They are alive, because they're still listed in the family register, but nobody knows what has become of them. The daughter is around 70, and the two granddaughters are a bit older than us, 38 and 40; that's all we know. Around 100,000 Japanese people go missing every year, so needless to say there's no point going to the police to find them.

So, pfffff, our dream is gone, carried off by the wind 🌬️ We went for a long walk in the snow, in silence, to digest it ❄️❄️❄️ Thank you for worrying about us; things will be better tomorrow. If the snowplough clears the road we'll have to take in 6 guests, and we'll have to smile like proper, well-trained Japanese women.

 

from Tropical Reason

Umberto Eco said that the Internet has given idiots the right to speak to a legion of idiots. But long before the Internet, we already had fireworks to prove that this legion was not only there: it was organized and capable of causing very real harm.

No algorithms are needed to amplify hate or stupidity. This legion of idiots terrorizes animals, starts fires and injures thousands of people, sometimes killing them, while overloading hospitals, police forces and fire services.

In places like Berlin, entire neighborhoods have turned into war zones where it is no longer safe to walk down the street. An idiot holding a firework enjoys the same rights as someone who just wants to take a peaceful walk and celebrate New Year’s Eve with their family.

That isn’t freedom. It's collective irresponsibility masquerading as tradition.

#Eco #Fireworks #Berlin #Germany #Idiocy #SocialMedia

 

from The happy place

The crayfish wire cages were empty. They lay scattered here and there, in surprisingly different places on the lawn, blown in all directions.

So then — like I wrote earlier — there weren’t any crabs in them, and no crayfish.

Crayfish are murk dwellers that spend the winter in a kind of stand-by mode on the lakebed, hidden in the dark depths of the mud. That, I think, is the main reason those cages were empty, being as they were: on land.

But in my imagination, there could’ve been something in there.

A tall pine tree has blown over at mother’s, uprooted by the giant forces of nature.

And the neighbours’ carport did blow away, denting their car before taking off.

And the roof of a nearby hotel. Like it was just some hat…

But now only icy winds are blowing over the frosted ground, which glitters in the sunlight.

But soon lots of snow will fall.

They say.

And it’s almost spring, too.

This weather, in a freakish coincidence, mirrors how I feel.

 

from hustin.art

#NSFW

This post is NSFW 19+ Adult content. Viewer discretion is advised.


https://soundcloud.com/hustin_art/sets/minori_hatsune/s-GajK6rIzmhy?si=42c9705c7eac4b09bc2244f3327647b5&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing

In Connection With This Post: Minori Hatsune https://hustin.art/minori-hatsune

Minori Hatsune, at the time of her debut, emerged as a performer with a distinctly Japanese look—rounded, cheerful facial features paired with large breasts, a classic combination. The AV industry in the late 2000s had not yet been saturated with celebrity-level visuals as it is now, and slightly awkward appearances were still common. ….



 

from Kool-Aid with Karan

2025 was the year I told myself I would become a movie guy. I wasn't a big movie watcher before this year and as a result I have significant gaps in movie history and knowledge. I haven't seen a lot of the classics and quintessential 21st century movies. So, I buckled down and started watching as many movies as I could.

As of writing this I've watched 108 movies this year. Most of them were movies I hadn't seen before, and they range from new releases to films from the '50s. There was no real rhyme or reason to which movies I chose to watch. Some came from recommendations from friends, family, and the internet (specifically the movie podcasts The Big Picture and The Rewatchables), and others came from randomly flipping through Tubi when I had some time to kill.

There are still so many movies I have to watch, ones that most people can't believe I still haven't seen, but I'll get through 100 more in 2026, and so on and so on.

But in this post I just want to talk about my favourite movies released in 2025.

2025 Movie Lists

My top 5 movies of 2025, in no particular order:

  • One Battle After Another
  • Weapons
  • Sinners
  • 28 Years Later
  • Marty Supreme

I enjoyed all these movies immensely. With every single movie on this list, I left the theatre thinking about it for days and weeks after, enthusiastically discussing them with friends and family and recommending them to anyone I met. I saw Sinners twice this year, once in IMAX and again in 70mm at the Revue Cinema. That movie took my breath away every time I watched it. One Battle After Another, Weapons, and 28 Years Later were the movies that stuck in my brain the most this year. I would remember scenes or lines from them, and that would lead me down a rabbit hole thinking about the grander themes of the movies or how beautifully they were shot. Weapons and Marty Supreme had me on the edge of my seat the entire time, with loads of laugh-out-loud moments immediately followed by tension, or, in the case of Weapons, a scare. I watched all these movies in theatres and I'm so glad I did. KEEP THEATRES ALIVE!

Other movies I thoroughly enjoyed:

  • Eddington
  • Black Bag
  • The Shrouds
  • Wake Up Dead Man
  • The Phoenician Scheme
  • F1
  • Superman
  • Companion
  • Materialists

Of the superhero movies released this year that I saw, Superman was my favourite. It was fun and funny, and the action set-pieces were great. Companion is a low-key movie that I feel more people need to watch. It's very well-written and a fun thriller with good acting mixed in. The Shrouds I watched in a theatre that had two other people in it and, as with a lot of Cronenberg movies, I left the movie trying to parse what I had just watched. I watched Eddington when I got home a little drunk and that was one of the most stressful experiences I've ever had watching a movie, but man was that movie incredible.

I could write more about each movie in particular but I think I'll just leave it at that. If you're reading this and you haven't seen some of the movies on this list, I definitely recommend renting them, or, if you get the chance, watching them in a theatre. I really enjoyed a lot of movies that came out this year, and I only saw a fraction of them. I'll probably have more favourites from this year that I'll catch up on in 2026.

Here's to 100 more movies in 2026!

 

from Roscoe's Story

In Summary:
* Ready to wrap up a good Monday, time spent putting things back in order after the wedding weekend of a best friend of the daughter-in-law. It is indeed good to be back home.

Prayers, etc.:
* My daily prayers

Health Metrics:
* bw= 218.48 lbs.
* bp= 142/88 (75)

Exercise:
* kegel pelvic floor exercise, half squats, calf raises, wall push-ups

Diet:
* 06:30 – 1 banana
* 08:00 – small apple pastries
* 11:30 – plate of sweet rice
* 16:50 – plate of pancit
* 19:00 – pork, pancit, steak burger patties with mushroom sauce, Mexican wedding cookies

Activities, Chores, etc.:
* 05:00 – listen to local news talk radio
* 06:30 – read, pray, follow news reports from various sources, surf the socials, nap
* 08:50 – bank accounts activity monitored
* 13:00 – listening to the JLab Birmingham Bowl: Georgia Southern Eagles vs Appalachian State Mountaineers, an NCAA college football game broadcast live by the Appalachian State Sports Network
* 14:20 – start my weekly laundry
* 16:10 – And the Georgia Southern Eagles have just won the JLab Birmingham Bowl, 29 to 10.
* 16:15 – Listening now to B97 – The Home for IU Women's Basketball ahead of tonight's early NCAA women's basketball game between the Minnesota Golden Gophers and my Indiana Hoosiers. Yes, of course I'll try to stay here for the radio call of the game.
* 18:45 – Minnesota gets the win over the Hoosiers, score is 71 to 48.
* 19:00 – dinner guests arrived, then ...
* 20:15 – ... left early because they had to catch an early flight tomorrow.
* 20:30 – time now to wrap up the night prayers, and shut things down in this joint.

Chess:
* 13:30 – moved in all pending CC games

 

from wystswolf

The energy of passion drips out in viscous dreams.

I feel urgent
A pressure to explore
The tro-tros are all full
Black boys wave me off
Saying, “sorry, abruni”

A big thick horn
Vibrated the ground,
“COOOOMMMMMAAAAHHH BOOOOOOAAAAAATAAAAHH!”
It says.

The black tro-tro doorman
Gives me a thumbs up
“Boats are better. For the motion.”

I walk through city forever
The stone path starts to wear
Thin from the pressure of
So many steps.

The ‘boats’ are massive faces bobbing in
The water, people sprawled
Languidly over cheeks
On foreheads, in the hollow
Where tears gather
Before spilling

I can't get through the door though.
It is big and red and has brass knobs
But turn as I might,
The monoliths don't budge.

A woman appears.
Deep, dark eyes.
High cheekbones flushed with rose.
Lips curved like a Stradivarius—
tuned for mischief.
Beautiful without asking permission.

“They are locked,”
She says
“You must be hungry.
You need soup.”

We go from restaurant to
Restaurant all glowing
Golden in the night like gems.
Each one we enter says the same thing:
'No Soup!'

The last door enters a living room.
Ultramodern and angled, everything in white.
Wherever I step, I leave mud stains.
I am extremely embarrassed,
But she just laughs and rolls piercing eyes.
She lays me down on the couch.

A bowl materializes and she says, 'Eat.'
And tips the bowl to my lips.
I sip it down, but the bowl never empties.
It pours and pours but I never get full.
She laces her fingers through my hair
And Massages my scalp, it's very very pleasant.
I feel incredibly relaxed and soothed.

The room turns from cold white stone
To soft yellow and pink woods.
Something pulses and vibrates slow and steady.
Suddenly a bell rings, it is the same bell
They used to ring when I was in high school.
“Oh, I'm sorry you have to leave.”
She says
And invites me to the door.
When I approach, it is a giant steak.
Pressing it, my hands sink into the surface
And then I fall through.

The loss of my balance wakes me and I find I'm grasping for someone who isn't there.

Drifting back to slumber after rousing, I discover myself in a city of lights and rivers. I am a giant climbing the Eiffel Tower. The police come and explain that it isn't permitted and that I must come down. Rather than acquiesce, I call for a ride and a giant purple bear comes running in to snatch me up. They yell stop! But my attention is already in the cotton candy clouds. No time for distraction.

The bear takes flight into the clouds and lowers me on a spider's strand into a thick pool. I fall in slowly, like the water is welcoming me with a hug. Under the surface everything is sensual and the liquid seeps into me until I become part of the world. I hear a voice calling me, but I cannot see the speaker.

The lights go down and I am lost in her forever.

I wake thick and warm, the hum of feeling still in my skin. The dream time was slow—devotional. The kind of passion that takes its time because there is no need to rush a good thing—It left me with the afterimage of being wanted without apology.

It was a hunger and restraint—not reckless—with intentional consideration. Want and ache that anticipated her discovery, her smile when she finds the evidence and asks, quietly, did you mean for me to see this?

Everything moved at the pace of breath. Of listening. Of hands learning through without asking. It wasn’t urgency that drove it, but confidence—the certainty that nothing needed to be rushed because nothing was in danger of disappearing.

 

from SmarterArticles

When Sarah Andersen, Kelly McKernan, and Karla Ortiz filed their copyright infringement lawsuit against Stability AI and Midjourney in January 2023, they raised a question that now defines one of the most contentious debates in technology: can AI image generation's creative potential be reconciled with artists' rights and market sustainability? More than two years later, that question remains largely unanswered, but the outlines of potential solutions are beginning to emerge through experimental licensing frameworks, technical standards, and a rapidly shifting platform landscape.

The scale of what's at stake is difficult to overstate. Stability AI's models were trained on LAION-5B, a dataset containing 5.85 billion images scraped from the internet. Most of those images were created by human artists who never consented to their work being used as training data, never received attribution, and certainly never saw compensation. At a U.S. Senate hearing, Karla Ortiz testified with stark clarity: “I have never been asked. I have never been credited. I have never been compensated one penny, and that's for the use of almost the entirety of my work, both personal and commercial, senator.”

This isn't merely a legal question about copyright infringement. It's a governance crisis that demands we design new institutional frameworks capable of balancing competing interests: the technological potential of generative AI, the economic livelihoods of millions of creative workers, and the sustainability of markets that depend on human creativity. Three distinct threads have emerged in response. First, experimental licensing and compensation models that attempt to establish consent-based frameworks for AI training. Second, technical standards for attribution and provenance that make the origins of digital content visible. Third, a dramatic migration of creator communities away from platforms that embraced AI without meaningful consent mechanisms.

The most direct approach to reconciling AI development with artists' rights is to establish licensing frameworks that require consent and provide compensation for the use of copyrighted works in training datasets.

Getty Images' partnership with Nvidia represents the most comprehensive attempt to build such a model. Rather than training on publicly scraped data, Getty developed its generative AI tool exclusively on its licensed creative library of approximately 200 million images. Contributors are compensated through a revenue-sharing model that pays them “for the life of the product”, not as a one-time fee, but as a percentage of revenue “into eternity”. On an annual recurring basis, the company shares revenues generated from the tool with contributors whose content was used to train the AI generator.

This Spotify-style compensation model addresses several concerns simultaneously. It establishes consent by only using content from photographers who have already agreed to licence their work to Getty. It provides ongoing compensation that scales with the commercial success of the AI tool. And it offers legal protection, with Getty providing up to £50,000 in legal coverage per image and uncapped indemnification as part of enterprise solutions.
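
To make the mechanics concrete, here is a minimal sketch, in Python, of how a recurring, pro-rata revenue share of this kind could be computed. The revenue figure, the contributor pool percentage, and the per-creator weights are hypothetical illustrations, not Getty's published formula, which has not been disclosed in this detail.

```python
def pro_rata_payouts(period_revenue: float,
                     contributor_share: float,
                     usage_weights: dict[str, float]) -> dict[str, float]:
    """Split one period's contributor pool across creators.

    period_revenue    -- revenue the AI tool earned this period
    contributor_share -- fraction of revenue set aside for contributors
                         (hypothetical; the real percentage is not public)
    usage_weights     -- per-creator weight, e.g. how many licensed images
                         each contributed to the training library
    """
    pool = period_revenue * contributor_share
    total_weight = sum(usage_weights.values())
    if total_weight == 0:
        return {}
    return {creator: pool * weight / total_weight
            for creator, weight in usage_weights.items()}

# Illustrative numbers only: a £1,000,000 quarter, a 20% contributor pool,
# three creators weighted by their share of the training library.
payouts = pro_rata_payouts(1_000_000, 0.20,
                           {"alice": 120_000, "bao": 45_000, "carla": 15_000})
print(payouts)  # paid each period, rather than as a one-off training fee
```

The point of the structure is the recurrence: the same calculation runs every period, so contributor income scales with the tool's commercial success instead of being settled once at training time.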

The limitations are equally clear. It only works within a closed ecosystem where Getty controls both the training data and the commercial distribution. Most artists don't licence their work through Getty, and the model provides no mechanism for compensating creators whose work appears in open datasets like LAION-5B.

A different approach has emerged in the music industry. In Sweden, STIM (the Swedish music rights society) launched what it describes as the world's first collective AI licence for music. The framework allows AI companies to train their systems on copyrighted music lawfully, with royalties flowing back to the original songwriters both through model training and through downstream consumption of AI outputs.

STIM's Acting CEO Lina Heyman described this as “establishing a scalable, democratic model for the industry”, one that “embraces disruption without undermining human creativity”. GEMA, a German performing rights collection society, has proposed a similar model that explicitly rejects one-off lump sum payments for training data, arguing that “such one-off payments may not sufficiently compensate authors given the potential revenues from AI-generated content”.

These collective licensing approaches draw on decades of experience from the music industry, where performance rights organisations have successfully managed complex licensing across millions of works. The advantage is scalability: rather than requiring individual negotiations between AI companies and millions of artists, a collective licensing organisation can offer blanket permissions covering large repertoires.

Yet collective licensing faces obstacles. Unlike music, where performance rights organisations have legal standing and well-established royalty collection mechanisms, visual arts have no equivalent infrastructure. And critically, these systems only work if AI companies choose to participate. Without legal requirements forcing licensing, companies can simply continue training on publicly scraped data.

The consent problem runs deeper than licensing alone. In 2017, Monica Boța-Moisin coined the phrase “the 3 Cs” in the context of protecting Indigenous People's cultural property: consent, credit, and compensation. This framework has more recently emerged as a rallying cry for creative workers responding to generative AI. But as researchers have noted, the 3 Cs “are not yet a concrete framework in the sense of an objectively implementable technical standard”. They represent aspirational principles rather than functioning governance mechanisms.

Regional Governance Divergence

The lack of global consensus has produced three distinct regional approaches to AI training data governance, each reflecting different assumptions about the balance between innovation and rights protection.

The United States has taken what researchers describe as a “market-driven” approach, where private companies through their practices and internal frameworks set de facto standards. No specific law regulates the use of copyrighted material for training AI models. Instead, the issue is being litigated in lawsuits that pit content creators against the creators of generative AI tools.

In August 2024, U.S. District Judge William Orrick of California issued a significant ruling in the Andersen v. Stability AI case. He found that the artists had reasonably argued that the companies violate their rights by illegally storing work and that Stable Diffusion may have been built “to a significant extent on copyrighted works” and was “created to facilitate that infringement by design”. The judge denied Stability AI and Midjourney's motion to dismiss the artists' copyright infringement claims, allowing the case to move towards discovery.

This ruling suggests that American courts may not accept blanket fair use claims for AI training, but the legal landscape remains unsettled. Yet without legislation, the governance framework will emerge piecemeal through court decisions, creating uncertainty for both AI companies and artists.

The European Union has taken a “rights-focused” approach, creating opt-out mechanisms for copyright owners to remove their works from text and data mining purposes. The EU AI Act explicitly declares text and data mining exceptions to be applicable to general-purpose AI models, but with critical limitations. If rights have been explicitly reserved through an appropriate opt-out mechanism (by machine-readable means for online content), developers of AI models must obtain authorisation from rights holders.

Under Article 53(1)(c) of the AI Act, providers must establish a copyright policy including state-of-the-art technologies to identify and comply with possible opt-out reservations. Additionally, providers must “draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model”.

However, the practical implementation has proven problematic. As legal scholars note, “you have to have some way to know that your image was or will be actually used in training”. The ECSA's secretary general told Euronews that “the work of our members should not be used without transparency, consent, and remuneration, and we see that the implementation of the AI Act does not give us” these protections.
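
The Act also does not prescribe what “machine-readable means” must look like. One candidate that has been discussed is the W3C TDM Reservation Protocol, which signals a reservation through an HTTP response header or an HTML meta tag. The Python sketch below, which assumes that convention and uses the requests library, shows how a crawler might check for such a signal before ingesting a page's images; treat the header and meta names as one possible convention rather than a settled, universally adopted standard.

```python
import re
import requests

def tdm_reserved(url: str) -> bool:
    """Return True if the page signals a text-and-data-mining reservation.

    Checks two conventions proposed by the TDM Reservation Protocol: a
    'tdm-reservation: 1' HTTP header and a <meta name="tdm-reservation">
    tag. The regex is a deliberate simplification; a production crawler
    would use a real HTML parser and handle attribute order properly.
    """
    resp = requests.get(url, timeout=10)
    if resp.headers.get("tdm-reservation", "").strip() == "1":
        return True
    meta = re.search(
        r'<meta[^>]+name=["\']tdm-reservation["\'][^>]+content=["\']1["\']',
        resp.text, flags=re.IGNORECASE)
    return meta is not None

if __name__ == "__main__":
    page = "https://example.com/portfolio"  # placeholder URL
    if tdm_reserved(page):
        print("rights reserved - do not ingest")
    else:
        print("no machine-readable reservation found")
```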

Japan has pursued perhaps the most permissive approach. Article 30-4 of Japan's revised Copyright Act, which came into effect on 1 January 2019, allows broad rights to ingest and use copyrighted works for any type of information analysis, including training AI models, even for commercial use. Collection of copyrighted works as AI training data is permitted without permission of the copyright holder, provided the use doesn't cause unreasonable harm.

The rationale reflects national priorities: AI is seen as a potential solution to a swiftly ageing population, and with no major local Japanese AI providers, the government implemented a flexible AI approach to quickly develop capabilities. However, this has generated increasing pushback from Japan-based content creators, particularly developers of manga and anime.

The United Kingdom is currently navigating between these approaches. On 17 December 2024, the UK Government announced its public consultation on “Copyright and Artificial Intelligence”, proposing an EU-style broad text and data mining exception for any purpose, including commercial, but only where the party has “lawful access” and the rightholder hasn't opted out. A petition signed by more than 37,500 people, including actors and celebrities, condemned the proposals as a “major and unfair threat” to creators' livelihoods.

What emerges from this regional divergence is not a unified governance framework but a fragmented landscape where “the world is splintering”, as one legal analysis put it. AI companies operating globally must navigate different rules in different jurisdictions, and artists have vastly different levels of protection depending on where they and the AI companies are located.

The C2PA and Content Credentials

Whilst licensing frameworks and legal regulations attempt to govern the input side of AI image generation (what goes into training datasets), technical standards are emerging to address the output side: making the origins and history of digital content visible and verifiable.

The Coalition for Content Provenance and Authenticity (C2PA) is a formal coalition dedicated to addressing the prevalence of misleading information online through the development of technical standards for certifying the source and history of media content. Formed through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic, collaborators include the Associated Press, BBC, The New York Times, Reuters, Leica, Nikon, Canon, and Qualcomm.

Content Credentials provide cryptographically secure metadata that captures content provenance from the moment it is created through all subsequent modifications. They function as “a nutrition label for digital content”, containing information about who produced a piece of content, when they produced it, and which tools and editing processes they used. When an action was performed by an AI or machine learning system, it is clearly identified as such.

OpenAI now includes C2PA metadata in images generated with ChatGPT and DALL-E 3. Google collaborated on version 2.1 of the technical standard, which is more secure against tampering attacks. Microsoft Azure OpenAI includes Content Credentials in all AI-generated images.

The security model is robust: faking Content Credentials would require breaking current cryptographic standards, an infeasible task with today's technology. However, metadata can be easily removed either accidentally or intentionally. To address this, C2PA supports durable credentials via soft bindings such as invisible watermarking that can help rediscover the associated Content Credential even if it's removed from the file.
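
As a rough illustration of what that binding means in practice, the Python sketch below models a stripped-down provenance record that names the generator, carries an AI-generation assertion, and stores a hash of the image bytes, then verifies that the asset still matches it. This is a toy for intuition only; real Content Credentials use signed claims, certificate chains, and the full C2PA manifest format rather than this structure.

```python
import hashlib
import json

def make_record(image_bytes: bytes, generator: str, ai_generated: bool) -> dict:
    """Build a toy provenance record (not the real C2PA manifest format)."""
    return {
        "claim_generator": generator,
        "assertions": [{"label": "ai_generated", "value": ai_generated}],
        # Hard binding: a cryptographic hash ties the record to these exact bytes.
        "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify_record(image_bytes: bytes, record: dict) -> bool:
    """Check that the asset has not been altered since the record was made."""
    return hashlib.sha256(image_bytes).hexdigest() == record["asset_sha256"]

image = b"\x89PNG...placeholder image bytes..."
record = make_record(image, generator="ExampleImageModel 1.0", ai_generated=True)
print(json.dumps(record, indent=2))
print(verify_record(image, record))            # True: bytes unchanged
print(verify_record(image + b"edit", record))  # False: binding broken
```

If the record is stripped from the file entirely, there is nothing left to verify, which is exactly the gap that soft bindings such as invisible watermarks are intended to close.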

Critically, the core C2PA specification does not support attribution of content to individuals or organisations, so that it can remain maximally privacy-preserving. However, creators can choose to attach attribution information directly to their assets.

For artists concerned about AI training, C2PA offers partial solutions. It can make AI-generated images identifiable, potentially reducing confusion about whether a work was created by a human artist or an AI system. It cannot, however, prevent AI companies from training on human-created images, nor does it provide any mechanism for consent or compensation. It's a transparency tool, not a rights management tool.

Glaze, Nightshade, and the Resistance

Frustrated by the lack of effective governance frameworks, some artists have turned to defensive technologies that attempt to protect their work at the technical level.

Glaze and Nightshade, developed by researchers at the University of Chicago, represent two complementary approaches. Glaze is a defensive tool that individual artists can use to protect themselves against style mimicry attacks. It works by making subtle changes to images invisible to the human eye but which cause AI models to misinterpret the artistic style.

Nightshade takes a more aggressive approach: it's a data poisoning tool that artists can use as a group to disrupt models that scrape their images without consent. By introducing carefully crafted perturbations into images, Nightshade causes AI models trained on those images to learn incorrect associations.
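
The published Glaze and Nightshade work involves carefully optimised, perceptually constrained perturbations against the feature extractors of real generative models. The PyTorch sketch below only illustrates the underlying idea, using a stand-in convolutional encoder and a simple signed-gradient loop that pushes an image's embedding away from its original position while keeping every pixel within a small budget. It is a conceptual toy under those assumptions, not either tool's actual algorithm.

```python
import torch
import torch.nn as nn

# Stand-in feature extractor; the real tools target generative models' encoders.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(128),
)

def perturb(image: torch.Tensor, steps: int = 20, eps: float = 0.03, lr: float = 0.005):
    """Nudge the image so its embedding drifts from the original,
    while keeping every pixel within +/- eps of its starting value."""
    with torch.no_grad():
        original_embedding = encoder(image)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        shifted = (image + delta).clamp(0, 1)
        # Negative distance: minimising this loss maximises embedding drift.
        loss = -torch.nn.functional.mse_loss(encoder(shifted), original_embedding)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()
            delta.clamp_(-eps, eps)        # perceptual budget on the change
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

artwork = torch.rand(1, 3, 64, 64)              # placeholder "artwork"
cloaked = perturb(artwork)
print(torch.max(torch.abs(cloaked - artwork)))  # change stays within eps
```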

The adoption statistics are striking. Glaze has been downloaded more than 8.5 million times since its launch in March 2023. Nightshade has been downloaded more than 2.5 million times since January 2024. Glaze has been integrated into Cara, a popular art platform, allowing artists to embed protection in their work when they upload images.

Shawn Shan, the lead developer, was named MIT Technology Review Innovator of the Year for 2024, reflecting the significance the artistic community places on tools that offer some degree of protection in the absence of effective legal frameworks.

Yet defensive technologies face inherent limitations. They require artists to proactively protect their work before posting it online, placing the burden of protection on individual creators rather than on AI companies. They're engaged in an arms race: as defensive techniques evolve, AI companies can develop countermeasures. And they do nothing to address the billions of images already scraped and incorporated into existing training datasets. Glaze and Nightshade are symptoms of a governance failure, tactical responses to a strategic problem that requires institutional solutions.

Spawning and Have I Been Trained

Between defensive technologies and legal frameworks sits another approach: opt-out infrastructure that attempts to create a consent layer for AI training.

Spawning AI created Have I Been Trained, a website that allows creators to opt out of the training dataset for art-generating AI models like Stable Diffusion. The website searches the LAION-5B training dataset, a library of 5.85 billion images used to feed Stable Diffusion and Google's Imagen.

Since launching opt-outs in December 2022, Spawning has helped thousands of individual artists and organisations remove 78 million artworks from AI training. By late April, that figure had exceeded 1 billion. Spawning partnered with ArtStation to ensure opt-out requests made on their site are honoured, and partnered with Shutterstock to opt out all images posted to their platforms by default.

Critically, Stability AI promised to respect opt-outs in Spawning's Do Not Train Registry for training of Stable Diffusion 3. This represents a voluntary commitment rather than a legal requirement, but it demonstrates that opt-out infrastructure can work when AI companies choose to participate.
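
What honouring such a registry could look like inside a dataset-building pipeline is sketched below in Python. The registry contents, the URL-hashing choice, and the function names are hypothetical stand-ins, not Spawning's actual API; the point is only that the consent check happens before an image ever reaches the training set.

```python
import hashlib
from urllib.parse import urlparse

# Hypothetical snapshot of a do-not-train registry, keyed by hashed image URL.
# A real pipeline would query a live opt-out service instead of a local set.
OPTED_OUT_HASHES = {
    hashlib.sha256(b"https://example.com/artist/painting-01.jpg").hexdigest(),
    hashlib.sha256(b"https://example.com/artist/painting-02.jpg").hexdigest(),
}

def is_opted_out(image_url: str) -> bool:
    """Return True if the image URL appears in the do-not-train registry."""
    digest = hashlib.sha256(image_url.encode("utf-8")).hexdigest()
    return digest in OPTED_OUT_HASHES

def build_training_list(candidate_urls: list[str]) -> list[str]:
    """Keep only images whose creators have not registered an opt-out."""
    kept = []
    for url in candidate_urls:
        if not urlparse(url).scheme.startswith("http"):
            continue                  # skip anything that is not a web asset
        if is_opted_out(url):
            continue                  # respect the registry before ingesting
        kept.append(url)
    return kept

candidates = [
    "https://example.com/artist/painting-01.jpg",  # opted out above
    "https://example.com/other/photo-77.jpg",
]
print(build_training_list(candidates))             # only the second URL survives
```

An opt-in default would invert the final check: an image would be kept only if its creator's permission were on record.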

However, the opt-out model faces fundamental problems: it places the burden on artists to discover their work is being used and to actively request removal. It works retrospectively rather than prospectively. And it only functions if AI companies voluntarily respect opt-out requests.

The infrastructure challenge is enormous. An artist must somehow discover that their work appears in a training dataset, navigate to the opt-out system, verify their ownership, submit the request, and hope that AI companies honour it. For the millions of artists whose work appears in LAION-5B, this represents an impossible administrative burden. The default should arguably be opt-in rather than opt-out: work should only be included in training datasets with explicit artist permission.

The Platform Migration Crisis

Whilst lawyers debate frameworks and technologists build tools, a more immediate crisis has been unfolding: artist communities are fracturing across platform boundaries in response to AI policies.

The most dramatic migration occurred in early June 2024, when Meta announced that starting 26 June 2024, photos, art, posts, and even post captions on Facebook and Instagram would be used to train Meta's AI chatbots. The company offered no opt-out mechanism for users in the United States. The reaction was immediate and severe.

Cara, an explicitly anti-AI art platform founded by Singaporean photographer Jingna Zhang, became the primary destination for the exodus. In around seven days, Cara went from having 40,000 users to 700,000, eventually reaching close to 800,000 users at its peak. In the first days of June 2024, the Cara app recorded approximately 314,000 downloads across the Apple App Store and Google Play Store, compared to 49,410 downloads in May 2024. The surge landed Cara in the Top 5 of Apple's US App Store.

Cara explicitly bans AI-generated images and uses detection technology from AI company Hive to identify and remove rule-breakers. Each uploaded image is tagged with a “NoAI” label to discourage scraping. The platform integrates Glaze, allowing artists to automatically protect their work when uploading. This combination of policy (banning AI art), technical protection (Glaze integration), and community values (explicitly supporting human artists) created a platform aligned with artist concerns in ways Instagram was not.

The infrastructure challenges were severe. Server costs jumped from £2,000 to £13,500 in a week. The platform is run entirely by volunteers who pay for the platform to keep running out of their own pockets. This highlights a critical tension in platform migration: the platforms most aligned with artist values often lack the resources and infrastructure of the corporate platforms artists are fleeing.

DeviantArt faced a similar exodus following its launch of DreamUp, an artificial intelligence image-generation tool based on Stable Diffusion, in November 2022. The release led to DeviantArt's inclusion in the copyright infringement lawsuit alongside Stability AI and Midjourney. Artist frustrations include “AI art everywhere, low activity unless you're amongst the lucky few with thousands of followers, and paid memberships required just to properly protect your work”.

ArtStation, owned by Epic Games, took a different approach. The platform allows users to tag their projects with “NoAI” if they would like their content to be prohibited from use in datasets utilised by generative AI programs. This tag is not applied by default; users must actively designate their projects. This opt-out approach has been more acceptable to many artists than platforms that offer no protection mechanisms at all, though it still places the burden on individual creators.

Traffic data from November 2024 shows DeviantArt.com had more total visits compared to ArtStation.com, with DeviantArt holding a global rank of #258 whilst ArtStation ranks #2,902. Most professional artists maintain accounts on multiple platforms, with the general recommendation being to focus on ArtStation for professional work whilst staying on DeviantArt for discussions and relationships.

This platform fragmentation reveals how AI policies are fundamentally reshaping the geography of creative communities. Rather than a unified ecosystem, artists now navigate a fractured landscape where different platforms offer different levels of protection, serve different community norms, and align with different values around AI. The migration isn't simply about features or user experience; it's about alignment on fundamental questions of consent, compensation, and the role of human creativity in an age of generative AI.

The broader creator economy shows similar tensions. In December 2024, more than 500 people in the entertainment industry signed a letter launching the Creators Coalition on AI, an organisation addressing AI concerns across creative fields. Signatories included Natalie Portman, Cate Blanchett, Ben Affleck, Guillermo del Toro, Aaron Sorkin, Ava DuVernay, and Taika Waititi, along with members of the Directors Guild of America, SAG-AFTRA, the Writers Guild of America, the Producers Guild of America, and IATSE. The coalition's work is guided by four core pillars: transparency, consent and compensation for content and data; job protection and transition plans; guardrails against misuse and deep fakes; and safeguarding humanity in the creative process.

This coalition represents an attempt to organise creator power across platforms and industries, recognising that individual artists have limited leverage whilst platform-level organisation can shift policy. The Make it Fair Campaign, launched by the UK's creative industries on 25 February, similarly calls on the UK government to support artists and enforce copyright laws through a responsible AI approach.

Can Creative Economies Survive?

The platform migration crisis connects directly to the broader question of market sustainability. If AI-generated images can be produced at near-zero marginal cost, what happens to the market for human-created art?

CISAC projections suggest that by 2028, generative AI outputs in music could approach £17 billion annually, a sizeable share of a global music market Goldman Sachs valued at £105 billion in 2024. With up to 24 per cent of music creators' revenues at risk of being diluted due to AI developments by 2028, the music industry faces a pivotal moment. Visual arts markets face similar pressures.

Creative workers around the world have spoken up about the harms of generative AI on their work, mentioning issues such as damage to their professional reputation, economic losses, plagiarism, copyright issues, and an overall decrease in creative jobs. The economic argument from AI proponents is that generative AI will expand the total market for visual content, creating opportunities even as it disrupts existing business models. The counter-argument from artists is that AI fundamentally devalues human creativity by flooding markets with low-cost alternatives, making it impossible for human artists to compete on price.

Getty Images has compensated hundreds of thousands of artists with “anticipated payments to millions more for the role their content IP has played in training generative technology”. This suggests one path towards market sustainability: embedding artist compensation directly into AI business models. But this only works if AI companies choose to adopt such models or are legally required to do so.

Market sustainability also depends on maintaining the quality and diversity of human-created art. If the most talented artists abandon creative careers because they can't compete economically with AI, the cultural ecosystem degrades. This creates a potential feedback loop: AI models trained predominantly on AI-generated content rather than human-created works may produce increasingly homogenised outputs, reducing the diversity and innovation that makes creative markets valuable.

Some suggest this concern is overblown, pointing to the continued market for artisanal goods in an age of mass manufacturing, or the survival of live music in an age of recorded sound. Human-created art, this argument goes, will retain value precisely because of its human origin, becoming a premium product in a market flooded with AI-generated content. But this presumes consumers can distinguish human from AI art (which C2PA aims to enable) and that enough consumers value that distinction enough to pay premium prices.

What Would Functional Governance Look Like?

More than two years into the generative AI crisis, no comprehensive governance framework has emerged that successfully reconciles AI's creative potential with artists' rights and market sustainability. What exists instead is a patchwork of partial solutions, experimental models, and fragmented regional approaches. But the outlines of what functional governance might look like are becoming clearer.

First, consent mechanisms must shift from opt-out to opt-in as the default. The burden should be on AI companies to obtain permission to use works in training data, not on artists to discover and prevent such use. This reverses the current presumption where anything accessible online is treated as fair game for AI training.

Second, compensation frameworks need to move beyond one-time payments towards revenue-sharing models that scale with the commercial success of AI tools. Getty Images' model demonstrates this is possible within a closed ecosystem. STIM's collective licensing framework shows how it might scale across an industry. But extending these models to cover the full scope of AI training requires either voluntary industry adoption or regulatory mandates that make licensing compulsory.

Third, transparency about training data must become a baseline requirement, not a voluntary disclosure. The EU AI Act's requirement that providers “draw up and make publicly available a sufficiently detailed summary about the content used for training” points in this direction. Artists cannot exercise rights they don't know they have, and markets cannot function when the inputs to AI systems are opaque.

Fourth, attribution and provenance standards like C2PA need widespread adoption to maintain the distinction between human-created and AI-generated content. This serves both consumer protection goals (knowing what you're looking at) and market sustainability goals (allowing human creators to differentiate their work). But adoption must extend beyond a few tech companies to become an industry-wide standard, ideally enforced through regulation.

Fifth, collective rights management infrastructure needs to be built for visual arts, analogous to performance rights organisations in music. Individual artists cannot negotiate effectively with AI companies, and the transaction costs of millions of individual licensing agreements are prohibitive. Collective licensing scales, but it requires institutional infrastructure that currently doesn't exist for most visual arts.

Sixth, platform governance needs to evolve beyond individual platform policies towards industry-wide standards. The current fragmentation, where artists must navigate different policies on different platforms, imposes enormous costs and drives community fracturing. Industry standards or regulatory frameworks that establish baseline protections across platforms would reduce this friction.

Finally, enforcement mechanisms are critical. Voluntary frameworks only work if AI companies choose to participate. The history of internet governance suggests that without enforcement, economic incentives will drive companies towards the least restrictive jurisdictions and practices. This argues for regulatory approaches with meaningful penalties for violations, combined with technical enforcement tools like C2PA that make violations detectable.

None of these elements alone is sufficient. Consent without compensation leaves artists with rights but no income. Compensation without transparency makes verification impossible. Transparency without collective management creates unmanageable transaction costs. But together, they sketch a governance framework that could reconcile competing interests: enabling AI development whilst protecting artist rights and maintaining market sustainability.

The evidence so far suggests that market forces alone will not produce adequate protections. AI companies have strong incentives to train on the largest possible datasets with minimal restrictions, whilst individual artists have limited leverage to enforce their rights. Platform migration shows that artists will vote with their feet when platforms ignore their concerns, but migration to smaller platforms with limited resources isn't a sustainable solution.

The regional divergence between the U.S., EU, and Japan reflects different political economies and different assumptions about the appropriate balance between innovation and rights protection. In a globalised technology market, this divergence creates regulatory arbitrage opportunities that undermine any single jurisdiction's governance attempts.

The litigation underway in the U.S., particularly the Andersen v. Stability AI case, may force legal clarity that voluntary frameworks have failed to provide. If courts find that training AI models on copyrighted works without permission constitutes infringement, licensing becomes legally necessary rather than optional. This could catalyse the development of collective licensing infrastructure and compensation frameworks. But if courts find that such use constitutes fair use, the legal foundation for artist rights collapses, leaving only voluntary industry commitments and platform-level policies.

The governance question posed at the beginning remains open: can AI image generation's creative potential be reconciled with artists' rights and market sustainability? The answer emerging from two years of crisis is provisional: yes, but only if we build institutional frameworks that don't currently exist, establish legal clarity that courts have not yet provided, and demonstrate political will that governments have been reluctant to show. The experimental models, technical standards, and platform migrations documented here are early moves in a governance game whose rules are still being written. What they reveal is that reconciliation is possible, but far from inevitable. The question is whether we'll build the frameworks necessary to achieve it before the damage to creative communities and markets becomes irreversible.



Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from Space Goblin Diaries

In the echoing passages of my dreadnought you will hear the robot before you see it—a metallic clattering sound that must seem to come from all around you...
But that sound cannot prepare you for the sight of the robot when it appears at your feet! A mass of razor-sharp blades and glowing red optical sensors faces you, behind which stretches a long, flexible body that clings to the wall with dozens of twitching legs. It regards you with a cold intelligence, its adaptive programming prepared to react to anything you might do.
That mass of chittering blades will be the last thing you see, Commander Vortex! The hunter-killer robot coils its body and prepares to lunge...

Happy midwinter celebrations, Earth creatures!

I took a break from working on the game for a couple of weeks over Christmas, but earlier in the month I did some work on a chapter in which the hero braves the air ducts of Vorak's space dreadnought and must face a hunter-killer robot and a whirling fan of death.

Looking back over the year, I'm pleased with the progress I've made. I've taken Foolish Earth Creatures from a vague idea to a roughly half-finished game, which a limited playtest has confirmed is on track to be fun. I still have a lot of work to do, but I'm pretty confident in the overall design of the game. All that remains now is to fill in the content.

My plan for the next few months is to finish writing a single path through the game and then work sideways from that to fill out the other paths. I'm hoping to finish the game by the end of 2026.

What will the new year have in store for Vorak, the Master Brain? Learn more in next month's developer diary!

#FoolishEarthCreatures #DevDiary

 

from Mitchell Report

Today was a peculiar yet fulfilling day. It started with a dense fog but cleared up nicely. I managed to capture the sun through my telescope for the first time, and I also spotted the moon in broad daylight, though the telescope had some trouble pinpointing it. I hope you find the images as delightful as I did. They were taken with my ZWO Seestar S30, except for the fog picture, which I captured with the Samsung Galaxy S24 Ultra.

A blurry photograph taken from inside a car at night, showing a view of a road with traffic lights and faint outlines of trees in foggy or misty conditions. The dashboard of the car is visible in the foreground.

Driving through a foggy night, the world blurs into a mysterious haze of lights and shadows.

A serene sky with a soft gradient from deep to light blue, featuring a pale moon visible in daylight and several small birds flying in the distance.

Birds dance around the daylit moon, painting a serene scene in the vast blue canvas above.

A close-up image of the sun showing a detailed and textured orange surface surrounded by darkness, highlighting sunspots and solar features.

Gazing into the cosmic abyss, the sun reveals its fiery solitude surrounded by the dark embrace of space.

#photos #landscape #nature

 

from An Open Letter

“Because I feel this way because I’ve asked for things several times and each time I got my hopes up, but it ends up falling short and that’s why I am walled up to protect myself. When you ask, it hurts because it feels like it’s both confirmation that me putting hope into asking the other times was stupid because you didn’t do/remember, but it’s also asking me to again reach my hand out to be bitten.“

 

from Zéro Janvier

Nuit de colère is the fifth novel in Francis Berthelot's novel cycle Le Rêve du Démiurge.

The story begins in 1978, in the Monts du Cantal. Kantor, a twelve-year-old boy, is the sole survivor of the collective suicide of the members of the Ordre du Fer Divin, who immolated themselves along with their guru, who was the surviving child's own father. That father, who called himself Fercaël, is someone we have met before, under the name Laurent Ferrier, the boy who tormented Olivier in the first novel of the cycle, L'ombre d'un soldat.

Taken in by his aunt Muriel, now a stage actress whom we crossed paths with again in Mélusath, Kantor lives through a difficult and lonely adolescence, all the more so because he has inherited a strange power from his father: the ability to read, and to influence, the thoughts of anyone whose gaze meets his.

At school, Kantor meets Octave, a classmate who offers him his friendship and who seems just as tormented as he is. Octave lives in the shadow of his father, one of the most celebrated philosophers of the day, and has a strange affinity with cold and ice.

Francis Berthelot delivers an absolutely sublime novel about power and violence, about heredity, and about depression. He does so in a style that blends poetry and harshness, with a remarkable talent for weaving motifs from the fantastic into an almost realistic narrative.

 
