from Attronarch's Athenaeum

Pilots' Almanac: Maritime & Piloting Rules is currently DriveThruRPG's Deal of the Day at 50% off. Although written for Hârn, I found it to be a great resource in my OD&D game, especially the sections on crews, ships, and maritime trade. It's an expensive book that rarely goes on sale.

Frog God Games is also running a 50% off sale. From what I can see, some of the products are even 75% off. It's a good sale for picking up some of the more expensive tomes and bundles, like Razor Coast, Northlands, and the Necromancer Games supplements.

#Sale #FGG #NG #Harn #OSR

 

from Notes I Won’t Reread

Today the microwave decided to take a vacation, the coffee was conspiring, and the cat looked at me as if I were a stranger.

I listened to the same song three times and I still don’t understand why it makes me feel… something I can’t name. The streets seemed quieter than usual, and everything moved without care.

And, well, I remembered someone. It doesn’t matter who. Just that, suddenly, everything I do seems too orderly, too correct, too boring compared to how things were before.

Thank you.

sincerely,

Ahmed

 

from Human in the Loop

The promise was straightforward: Google would democratise artificial intelligence, putting powerful creative tools directly into creators' hands. Google AI Studio emerged as the accessible gateway, a platform where anyone could experiment with generative models, prototype ideas, and produce content without needing a computer science degree. Meanwhile, YouTube stood as the world's largest video platform, owned by the same parent company, theoretically aligned in vision and execution. Two pillars of the same ecosystem, both bearing the Alphabet insignia.

Then came the terminations. Not once, but twice. A fully verified YouTube account, freshly created through proper channels, uploading a single eight-second test video generated entirely through Google's own AI Studio workflow. The content was harmless, the account legitimate, the process textbook. Within hours, the account vanished. Terminated for “bot-like behaviour.” The appeal was filed immediately, following YouTube's prescribed procedures. The response arrived swiftly: appeal denied. The decision was final.

So the creator started again. New account, same verification process, same innocuous test video from the same Google-sanctioned AI workflow. Termination arrived even faster this time. Another appeal, another rejection. The loop closed before it could meaningfully begin.

This is not a story about a creator violating terms of service. This is a story about a platform so fragmented that its own tools trigger its own punishment systems, about automation so aggressive it cannot distinguish between malicious bots and legitimate experimentation, and about the fundamental instability lurking beneath the surface of platforms billions of people depend upon daily.

The Ecosystem That Eats Itself

Google has spent considerable resources positioning itself as the vanguard of accessible AI. Google AI Studio, formerly known as MakerSuite, offers direct access to models like Gemini and PaLM, providing interfaces for prompt engineering, model testing, and content generation. The platform explicitly targets creators, developers, and experimenters. The documentation encourages exploration. The barrier to entry is deliberately low.

The interface itself is deceptively simple. Users can prototype with different models, adjust parameters like temperature and token limits, experiment with system instructions, and generate outputs ranging from simple text completions to complex multimodal content. Google markets this accessibility as democratisation, as opening AI capabilities that were once restricted to researchers with advanced degrees and access to massive compute clusters. The message is clear: experiment, create, learn.
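What that low barrier looks like in practice: an entire Studio experiment reduces to a few lines against the Gemini API. Below is a minimal sketch, assuming the @google/generative-ai Node SDK; the model name and parameter values are illustrative rather than a prescribed workflow.

```typescript
// A minimal sketch, assuming the @google/generative-ai Node SDK.
// The model id and parameter values are illustrative.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");

const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",   // assumed model id
  generationConfig: {
    temperature: 0.9,          // higher values give more varied output
    maxOutputTokens: 256,      // cap on the response length
  },
});

const result = await model.generateContent(
  "Write a script for an eight-second test video."
);
console.log(result.response.text());
```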

YouTube, meanwhile, processes over 500 hours of video uploads every minute. Managing this torrent requires automation at a scale humans cannot match. The platform openly acknowledges its hybrid approach: automated systems handle the initial filtering, flagging potential violations for human review in complex cases. YouTube addressed creator concerns in 2024 by describing this as a “team effort” between automation and human judgement.

The problem emerges in the gap between these two realities. Google AI Studio outputs content. YouTube's moderation systems evaluate content. When the latter cannot recognise the former as legitimate, the ecosystem becomes a snake consuming its own tail.

This is not theoretical. Throughout 2024 and into 2025, YouTube experienced multiple waves of mass terminations. In October 2024, YouTube apologised for falsely banning channels for spam, acknowledging that its automated systems incorrectly flagged legitimate accounts. Channels were reinstated, subscriptions restored, but the underlying fragility of the system remained exposed.

The November 2025 wave proved even more severe. YouTubers reported widespread channel terminations with no warning, no prior strikes, and explanations that referenced vague policy violations. Tech creator Enderman lost channels with hundreds of thousands of subscribers. Old Money Luxury woke to find a verified 230,000-subscriber channel completely deleted. True crime creator FinalVerdictYT's 40,000-subscriber channel vanished for alleged “circumvention” despite having no history of ban evasion. Animation creator Nani Josh lost a channel with over 650,000 subscribers without warning.

YouTube's own data from this period revealed the scale: 4.8 million channels removed, 9.5 million videos deleted. Hundreds of thousands of appeals flooded the system. The platform insisted there were “no bugs or known issues” and attributed terminations to “low effort” content. Creators challenged this explanation by documenting their appeals process and discovering something unsettling.

The Illusion of Human Review

YouTube's official position on appeals has been consistent: appeals are manually reviewed by human staff. The @TeamYouTube account stated on November 8, 2025, that “Appeals are manually reviewed so it can take time to get a response.” This assurance sits at the foundation of the entire appeals framework. When automation makes mistakes, human judgement corrects them. It is the safety net.

Except creators who analysed their communication metadata discovered the responses were coming from Sprinklr, an AI-powered automated customer service platform. Creators challenged the platform's claims of manual review, presenting evidence that their appeals received automated responses within minutes, not the days or weeks human review would require.

The gap between stated policy and operational reality is not merely procedural. It is existential. If appeals are automated, then the safety net does not exist. The system becomes a closed loop where automated decisions are reviewed by automated processes, with no human intervention to recognise context, nuance, or the simple fact that Google's own tools might be generating legitimate content.

For the creator whose verified account was terminated twice for uploading Google-generated content, this reality is stark. The appeals were filed correctly, the explanations were detailed, the evidence was clear. None of it mattered because no human being ever reviewed it. The automated system that made the initial termination decision rubber-stamped its own judgement through an automated appeals process designed to create the appearance of oversight without the substance.

The appeals interface itself reinforces the illusion. Creators are presented with a form requesting detailed explanations, limited to 1,000 characters. The interface implies human consideration, someone reading these explanations and making informed judgements. But when responses arrive within minutes, when the language is identical across thousands of appeals, when metadata reveals automated processing, the elaborate interface becomes theatre. It performs the appearance of due process without the substance.

YouTube's content moderation statistics reveal the scale of automation. The platform confirmed that automated systems are removing more videos than ever before. As of 2024, between 75% and 80% of all removed videos never receive a single view, suggesting automated removal before any human could potentially flag them. The system operates at machine speed, with machine judgement, and increasingly, machine appeals review.

The Technical Architecture of Distrust

Understanding how this breakdown occurs requires examining the technical infrastructure behind both content creation and content moderation. Google AI Studio operates as a web-based development environment where users interact with large language models through prompts. The platform supports text generation, image creation through integration with other Google services, and increasingly sophisticated multimodal outputs combining text, image, and video.

When a user generates content through AI Studio, the output bears no intrinsic marker identifying it as Google-sanctioned. There is no embedded metadata declaring “This content was created through official Google tools.” The video file that emerges is indistinguishable from one created through third-party tools, manual editing, or genuine bot-generated spam.

YouTube's moderation systems evaluate uploads through multiple signals: account behaviour patterns, content characteristics, upload frequency, metadata consistency, engagement patterns, and countless proprietary signals the platform does not publicly disclose. These systems were trained on vast datasets of bot behaviour, spam patterns, and policy violations. They learned to recognise coordinated inauthentic behaviour, mass-produced low-quality content, and automated upload patterns.

The machine learning models powering these moderation systems operate on pattern recognition. They do not understand intent. They cannot distinguish between a bot network uploading thousands of spam videos and a single creator experimenting with AI-generated content. Both exhibit similar statistical signatures: new accounts, minimal history, AI-generated content markers, short video durations, lack of established engagement patterns.

The problem is that legitimate experimental use of AI tools can mirror bot behaviour. A new account uploading AI-generated content exhibits similar signals to a bot network testing YouTube's defences. Short test videos resemble spam. Accounts without established history look like throwaway profiles. The automated systems, optimised for catching genuine threats, cannot distinguish intent.
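A toy score makes the overlap concrete. The sketch below is purely illustrative (the features, weights, and thresholds are invented, not anything YouTube discloses), but it shows how surface signals collapse a legitimate experimenter and a spam bot into a single signature.

```typescript
// Illustrative toy only, not YouTube's actual system: the features,
// weights, and thresholds here are invented. A naive surface-signal
// score cannot see intent, so a legitimate experimenter and a spam
// bot produce identical signatures.
interface AccountSignals {
  accountAgeDays: number;
  uploadCount: number;
  videoSeconds: number;
  subscriberCount: number;
}

function botLikeScore(s: AccountSignals): number {
  let score = 0;
  if (s.accountAgeDays < 7) score += 0.35;    // brand-new account
  if (s.uploadCount <= 1) score += 0.2;       // no upload history
  if (s.videoSeconds < 15) score += 0.25;     // short clip, spam-like
  if (s.subscriberCount === 0) score += 0.2;  // no audience yet
  return score;
}

// A creator testing Google's own AI workflow...
const experimenter: AccountSignals = { accountAgeDays: 1, uploadCount: 1, videoSeconds: 8, subscriberCount: 0 };
// ...and a throwaway spam account look the same from the outside.
const spamBot: AccountSignals = { accountAgeDays: 1, uploadCount: 1, videoSeconds: 8, subscriberCount: 0 };

console.log(botLikeScore(experimenter)); // 1.0, flagged
console.log(botLikeScore(spamBot));      // 1.0, indistinguishable
```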

This technical limitation is compounded by the training data these models learn from. The datasets consist overwhelmingly of actual policy violations: spam networks, bot accounts, coordinated manipulation campaigns. The models learn these patterns exceptionally well. But they rarely see examples of legitimate experimentation that happens to share surface characteristics with violations. The training distribution does not include “creator using Google's own tools to learn” because, until recently, this scenario was not common enough to appear in training data at meaningful scale.

This is compounded by YouTube's approach to AI-generated content. In 2024, YouTube revealed its AI content policies, requiring creators to “disclose when their realistic content is altered or synthetic” through YouTube Studio's disclosure tools. This requirement applies to content that “appears realistic but does not reflect actual events,” particularly around sensitive topics like elections, conflicts, public health crises, or public officials.

But disclosure requires access to YouTube Studio, which requires an account that has not been terminated. The catch-22 is brutal: you must disclose AI-generated content through the platform's tools, but if the platform terminates your account before you can access those tools, disclosure becomes impossible. The eight-second test video that triggered termination never had the opportunity to be disclosed as AI-generated because the account was destroyed before the creator could navigate to the disclosure settings.

Even if the creator had managed to add disclosure before upload, there is no evidence YouTube's automated moderation systems factor this into their decisions. The disclosure tools exist for audience transparency, not for communicating with moderation algorithms. A properly disclosed AI-generated video can still trigger termination if the account behaviour patterns match bot detection signatures.

The Broader Pattern of Platform Incoherence

This is not isolated to YouTube and Google AI Studio. It reflects a broader architectural problem across major platforms: the right hand genuinely does not know what the left hand is doing. These companies have grown so vast, their systems so complex, that internal coherence has become aspirational rather than operational.

Consider the timeline of events in 2024 and 2025. Google returned to using human moderators for YouTube after AI moderation errors, acknowledging that replacing humans entirely with AI “is rarely a good idea.” Yet simultaneously, YouTube CEO Neal Mohan announced that the platform is pushing ahead with expanded AI moderation tools, even as creators continue reporting wrongful bans tied to automated systems.

The contradiction is not subtle. The same organisation that admitted AI moderation produces too many errors has committed to deploying more of it. The same ecosystem encouraging creators to experiment with AI tools punishes them when they do.

Or consider YouTube's AI moderation system pulling Windows 11 workaround videos. Tech YouTuber Rich White had a how-to video on installing Windows 11 with a local account removed, with YouTube allegedly claiming the content could “lead to serious harm or even death.” The absurdity of the claim underscores the system's inability to understand context. An AI classifier flagged content based on pattern matching without comprehending the actual subject matter.

The failure runs in both directions. AI-generated NSFW images have slipped past YouTube's moderators by hiding manipulated visuals inside what automated systems read as harmless images. These composites are engineered to evade moderation tools, evidence that the systems designed to stop bad actors are being outpaced by them, with AI making detection significantly harder.

The asymmetry is striking: sophisticated bad actors using AI to evade detection succeed, while legitimate creators using official Google tools get terminated. The moderation systems are calibrated to catch the wrong threat level. Adversarial actors understand how the moderation systems work and engineer content to exploit their weaknesses. Legitimate creators follow official workflows and trigger false positives. The arms race between platform security and bad actors has created collateral damage among users who are not even aware they are in a battlefield.

The Human Cost of Automation at Scale

Behind every terminated account is disruption. For casual users, it might be minor annoyance. For professional creators, it is existential threat. Channels representing years of work, carefully built audiences, established revenue streams, and commercial partnerships can vanish overnight. The appeals process, even when it functions correctly, takes days or weeks. Most appeals are unsuccessful. According to YouTube's official statistics, “The majority of appealed decisions are upheld,” meaning creators who believe they were wrongly terminated rarely receive reinstatement.

The creator whose account was terminated twice never got past the starting line. There was no audience to lose because none had been built. There was no revenue to protect because none existed yet. But there was intent: the intent to learn, to experiment, to understand the tools Google itself promotes. That intent was met with immediate, automated rejection.

This has chilling effects beyond individual cases. When creators observe that experimentation carries risk of permanent account termination, they stop experimenting. When new creators see established channels with hundreds of thousands of subscribers vanish without explanation, they hesitate to invest time building on the platform. When the appeals process demonstrably operates through automation despite claims of human review, trust in the system's fairness evaporates.

The psychological impact is significant. Creators describe the experience as Kafkaesque: accused of violations they did not commit, unable to get specific explanations, denied meaningful recourse, and left with the sense that they are arguing with machines that cannot hear them. The verified creator who followed every rule, used official tools, and still faced termination twice experiences not just frustration but a fundamental questioning of whether the system can ever be navigated successfully.

A survey on trust in the creator economy found that more than half of consumers (52%), creators (55%), and marketers (48%) agreed that generative AI decreased consumer trust in creator content. The same survey found that similar majorities agree AI increased misinformation in the creator economy. When platforms cannot distinguish between legitimate AI-assisted creation and malicious automation, this erosion accelerates.

The response from many creators has been diversification: building presence across multiple platforms, developing owned channels like email lists and websites, and creating alternative revenue streams outside platform advertising revenue. This is rational risk management when platform stability cannot be assumed. But it represents a failure of the centralised platform model. If YouTube were genuinely stable and trustworthy, creators would not need elaborate backup plans.

The economic implications are substantial. Creators who might have invested their entire creative energy into YouTube now split attention across multiple platforms. This reduces the quality and consistency of content on any single platform, creates audience fragmentation, and increases the overhead required simply to maintain presence. The inefficiency is massive, but it is rational when the alternative is catastrophic loss.

The Philosophy of Automated Judgement

Beneath the technical failures and operational contradictions lies a philosophical problem: can automated systems make fair judgements about content when they cannot understand intent, context, or the ecosystem they serve?

YouTube's moderation challenges stem from attempting to solve a fundamentally human problem with non-human tools. Determining whether content violates policies requires understanding not just what the content contains but why it exists, who created it, and what purpose it serves. An eight-second test video from a creator learning Google's tools is categorically different from an eight-second spam video from a bot network, even if the surface characteristics appear similar.

Humans make this distinction intuitively. Automated systems struggle because intent is not encoded in pixels or metadata. It exists in the creator's mind, in the context of their broader activities, in the trajectory of their learning. These signals are invisible to pattern-matching algorithms.

The reliance on automation at YouTube's scale is understandable. Human moderation of 500 hours of video uploaded every minute is impossible. But the current approach assumes automation can carry judgements it is not equipped to make. When automation fails, human review should catch it. But if human review is itself automated, the system has no correction mechanism.

This creates what might be called “systemic illegibility”: situations where the system cannot read what it needs to read to make correct decisions. The creator using Google AI Studio is legible to Google's AI division but illegible to YouTube's moderation systems. The two parts of the same company cannot see each other.

The philosophical question extends beyond YouTube. As more critical decisions get delegated to automated systems, across platforms, governments, and institutions, the question of what these systems can legitimately judge becomes urgent. There is a category error in assuming that because a system can process vast amounts of data quickly, it can make nuanced judgements about human behaviour and intent. Speed and scale are not substitutes for understanding.

What This Means for Building on Google's Infrastructure

For developers, creators, and businesses considering building on Google's platforms, this fragmentation raises uncomfortable questions. If you cannot trust that content created through Google's own tools will be accepted by Google's own platforms, what can you trust?

The standard advice in the creator economy has been to “own your platform”: build your own website, maintain your own mailing list, control your own infrastructure. But this advice assumes platforms like YouTube are stable foundations for reaching audiences, even if they should not be sole revenue sources. When the foundation itself is unstable, the entire structure becomes precarious.

Consider the creator pipeline: develop skills with Google AI Studio, create content, upload to YouTube, build an audience, establish a business. This pipeline breaks at step three. The content created in step two triggers termination before step four can begin. The entire sequence is non-viable.

This is not about one creator's bad luck. It reflects structural instability in how these platforms operate. YouTube's October 2024 glitch resulted in erroneous removal of numerous channels and bans of several accounts, highlighting potential flaws in the automated moderation system. The system wrongly flagged accounts that had never posted content, catching inactive accounts, regular subscribers, and long-time creators indiscriminately. The automated system operated without adequate human review.

When “glitches” of this magnitude occur repeatedly, they stop being glitches and start being features. The system is working as designed, which means the design is flawed.

For technical creators, this instability is particularly troubling. The entire value proposition of experimenting with AI tools is to learn through iteration. You generate content, observe results, refine your approach, and gradually develop expertise. But if the first iteration triggers account termination, learning becomes impossible. The platform has made experimentation too dangerous to attempt.

The risk calculus becomes perverse. Established creators with existing audiences and revenue streams can afford to experiment because they have cushion against potential disruption. New creators who would benefit most from experimentation cannot afford the risk. The platform's instability creates barriers to entry that disproportionately affect exactly the people Google claims to be empowering with accessible AI tools.

The Regulatory and Competitive Dimension

This dysfunction occurs against a backdrop of increasing regulatory scrutiny of major platforms and growing competition in the AI space. The EU AI Act and US Executive Order are responding to concerns about AI-generated content with disclosure requirements and accountability frameworks. YouTube's policies requiring disclosure of AI-generated content align with this regulatory direction.

But regulation assumes platforms can implement policies coherently. When a platform requires disclosure of AI content but terminates accounts before creators can make those disclosures, the regulatory framework becomes meaningless. Compliance is impossible when the platform's own systems prevent it.

Meanwhile, alternative platforms are positioning themselves as more creator-friendly. Decentralised AI platforms are emerging as infrastructure for the $385 billion creator economy, with DAO-driven ecosystems allowing creators to vote on policies rather than having them imposed unilaterally. These platforms explicitly address the trust erosion creators experience with centralised platforms, where algorithmic bias, opaque data practices, unfair monetisation, and bot-driven engagement have deepened the divide between platforms and users.

Google's fragmented ecosystem inadvertently makes the case for these alternatives. When creators cannot trust that official Google tools will work with official Google platforms, they have incentive to seek platforms where tool and platform are genuinely integrated, or where governance is transparent enough that policy failures can be addressed.

YouTube's dominant market position has historically insulated it from competitive pressure. But as 76% of consumers report trusting AI influencers for product recommendations, and new platforms optimised for AI-native content emerge, YouTube's advantage is not guaranteed. Platform stability and creator trust become competitive differentiators.

The competitive landscape is shifting. TikTok has demonstrated that dominant platforms can lose ground rapidly when creators perceive better opportunities elsewhere. Instagram Reels and YouTube Shorts were defensive responses to this competitive pressure. But defensive features do not address fundamental platform stability issues. If creators conclude that YouTube's moderation systems are too unpredictable to build businesses on, no amount of feature parity with competitors will retain them.

The Possible Futures

There are several paths forward, each with different implications for creators, platforms, and the broader digital ecosystem.

Scenario One: Continued Fragmentation

The status quo persists. Google's various divisions continue operating with insufficient coordination. AI tools evolve independently of content moderation systems. Periodic waves of false terminations occur, the platform apologises, and nothing structurally changes. Creators adapt by assuming platform instability and planning accordingly. Trust continues eroding incrementally.

This scenario is remarkably plausible because it requires no one to make different decisions. Organisational inertia favours it. The consequences are distributed and gradual rather than acute and immediate, making them easy to ignore. Each individual termination is a small problem. The aggregate pattern is a crisis, but crises that accumulate slowly do not trigger the same institutional response as sudden disasters.

Scenario Two: Integration and Coherence

Google recognises the contradiction and implements systematic fixes. AI Studio outputs carry embedded metadata identifying them as Google-sanctioned. YouTube's moderation systems whitelist content from verified Google tools. Appeals processes receive genuine human review with meaningful oversight. Cross-team coordination ensures policies align across the ecosystem.

This scenario is technically feasible but organisationally challenging. It requires admitting current approaches have failed, allocating significant engineering resources to integration work that does not directly generate revenue, and imposing coordination overhead across divisions that currently operate autonomously. It is the right solution but requires the political will to implement it.

The technical implementation would not be trivial but is well within Google's capabilities. Embedding cryptographic signatures in AI Studio outputs, creating API bridges between moderation systems and content creation tools, implementing graduated trust systems for accounts using official tools, all of these are solvable engineering problems. The challenge is organisational alignment and priority allocation.
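As a rough illustration of how small the core mechanism is, here is a sketch of the signing half of that idea, using Node's built-in crypto module. It is a hypothetical design under this scenario's assumptions, not a description of anything Google ships.

```typescript
// Hypothetical provenance scheme sketched for Scenario Two, using
// Node's built-in crypto module. Not anything Google actually ships.
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Tool side: hash the rendered output and sign the digest.
const videoBytes = Buffer.from("...rendered video file contents...");
const digest = createHash("sha256").update(videoBytes).digest();
const signature = sign(null, digest, privateKey); // Ed25519 takes a null algorithm

// Platform side: verify provenance before moderation treats the
// upload as coming from an unknown source.
const fromTrustedTool = verify(null, digest, publicKey, signature);
console.log(fromTrustedTool); // true: provenance established
```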

Scenario Three: Regulatory Intervention

External pressure forces change. Regulators recognise that platforms cannot self-govern effectively and impose requirements for appeals transparency, moderation accuracy thresholds, and penalties for wrongful terminations. YouTube faces potential FTC Act violations regarding AI terminations, with fines up to $53,088 per violation. Compliance costs force platforms to improve systems.

This scenario trades platform autonomy for external accountability. It is slow, politically contingent, and risks creating rigid requirements that cannot adapt to rapidly evolving AI capabilities. But it may be necessary if platforms prove unable or unwilling to self-correct.

Regulatory intervention has precedent. The General Data Protection Regulation (GDPR) forced significant changes in how platforms handle user data. Similar regulations focused on algorithmic transparency and appeals fairness could mandate the changes platforms resist implementing voluntarily. The risk is that poorly designed regulations could ossify systems in ways that prevent beneficial innovation alongside harmful practices.

Scenario Four: Platform Migration

Creators abandon unstable platforms for alternatives offering better reliability. The creator economy fragments across multiple platforms, with YouTube losing its dominant position. Decentralised platforms, niche communities, and direct creator-to-audience relationships replace centralised platform dependency.

This scenario is already beginning. Creators increasingly maintain presence across YouTube, TikTok, Instagram, Patreon, Substack, and independent websites. As platform trust erodes, this diversification accelerates. YouTube remains significant but no longer monopolistic.

The migration would not be sudden or complete. YouTube's network effects, existing audiences, and infrastructure advantages provide substantial lock-in. But at the margins, new creators might choose to build elsewhere first, established creators might reduce investment in YouTube content, and audiences might follow creators to platforms offering better experiences. Death by a thousand cuts, not catastrophic collapse.

What Creators Can Do Now

While waiting for platforms to fix themselves is unsatisfying, creators facing this reality have immediate options.

Document Everything

Screenshot account creation processes, save copies of content before upload, document appeal submissions and responses, and preserve metadata. When systems fail and appeals are denied, documentation provides evidence for escalation or public accountability. In the current environment, the ability to demonstrate exactly what you did, when you did it, and how the platform responded is essential both for potential legal recourse and for public pressure campaigns.

Diversify Platforms

Do not build solely on YouTube. Establish presence on multiple platforms, maintain an email list, consider independent hosting, and develop direct relationships with audiences that do not depend on platform intermediation. This is not just about backup plans. It is about creating multiple paths to reach audiences so that no single platform's dysfunction can completely destroy your ability to communicate and create.

Understand the Rules

YouTube's disclosure requirements for AI content are specific. Review the policies, use the disclosure tools proactively, and document compliance. Even if moderation systems fail, having evidence of good-faith compliance strengthens appeals. The policies are available in YouTube's Creator Academy and Help Centre. Read them carefully, implement them consistently, and keep records proving you did so.

Join Creator Communities

When individual creators face termination, they are isolated and powerless. Creator communities can collectively document patterns, amplify issues, and pressure platforms for accountability. The November 2025 termination wave gained attention because multiple creators publicly shared their experiences simultaneously. Collective action creates visibility that individual complaints cannot achieve.

Consider Legal Options

When platforms make provably false claims about their processes or wrongfully terminate accounts, legal recourse may exist. This is expensive and slow, but class action lawsuits or regulatory complaints can force change when individual appeals cannot. Several law firms have begun specialising in creator rights and platform accountability. While litigation should not be the first resort, knowing it exists as an option can be valuable.

The Deeper Question

Beyond the immediate technical failures and policy contradictions, this situation raises a question about the digital infrastructure we have built: are platforms like YouTube, which billions depend upon daily for communication, education, entertainment, and commerce, actually stable enough for that dependence?

We tend to treat major platforms as permanent features of the digital landscape, as reliable as electricity or running water. But the repeated waves of mass terminations, the automation failures, the gap between stated policy and operational reality, and the inability of one part of Google's ecosystem to recognise another part's legitimate outputs suggest this confidence is misplaced.

The creator terminated twice for uploading Google-generated content is not an edge case. They represent the normal user trying to do exactly what Google's marketing encourages: experiment with AI tools, create content, and engage with the platform. If normal use triggers termination, the system is not working.

This matters beyond individual inconvenience. The creator economy represents hundreds of billions of dollars in economic activity and provides livelihoods for millions of people. Educational content on YouTube reaches billions of students. Cultural conversations happen on these platforms. When the infrastructure is this fragile, all of it is at risk.

The paradox is that Google possesses the technical capability to fix this. The company that built AlphaGo, developed transformer architectures that revolutionised natural language processing, and created the infrastructure serving billions of searches daily can certainly ensure its AI tools are recognised by its video platform. The failure is not technical capability but organisational priority.

The Trust Deficit

The creator whose verified account was terminated twice will likely not try a third time. The rational response to repeated automated rejection is to go elsewhere, to build on more stable foundations, to invest time and creativity where they might actually yield results.

This is how platform dominance erodes: not through dramatic competitive defeats but through thousands of individual creators making rational decisions to reduce their dependence. Each termination, each denied appeal, each gap between promise and reality drives more creators toward alternatives.

Google's AI Studio and YouTube should be natural complements, two parts of an integrated creative ecosystem. Instead, they are adversaries, with one producing what the other punishes. Until this contradiction is resolved, creators face an impossible choice: trust the platform and risk termination, or abandon the ecosystem entirely.

The evidence suggests the latter is becoming the rational choice. When the platform cannot distinguish between its own sanctioned tools and malicious bots, when appeals are automated despite claims of human review, when accounts are terminated twice for the same harmless content, trust becomes unsustainable.

The technology exists to fix this. The question is whether Google will prioritise coherence over the status quo, whether it will recognise that platform stability is not a luxury but a prerequisite for the creator economy it claims to support.

Until then, the paradox persists: Google's left hand creating tools for human creativity, Google's right hand terminating humans for using them. The ouroboros consuming itself, wondering why the creators are walking away.




Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk

 

from Roscoe's Story

In Summary:
* Two things that I'm particularly happy about today:
1.) ordered the wife's Christmas and birthday gifts and cards. (Her birthday is Dec. 31, so I usually order her stuff at the same time.)
2.) found an early men's basketball game to follow, so I'll be able to finish the game and still ease my way into a sensible senior bedtime.

Prayers, etc.:
* My daily prayers

Health Metrics:
* bw = 223.66 lbs.
* bp = 154/89 (63)

Exercise:
* kegel pelvic floor exercise, half squats, calf raises, wall push-ups

Diet:
* 06:00 – 1 banana, 1 blueberry muffin, toast & butter
* 10:00 – baked salmon w. mushroom sauce
* 16:30 – 1 more blueberry muffin

Activities, Chores, etc.:
* 04:45 – listen to local news talk radio
* 05:50 – bank accounts activity monitored
* 06:00 – read, pray, follow news reports from various sources, surf the socials
* 16:00 – listening to The Jack Riccardi Show
* 17:00 – listening to The Joe Pags Show
* 18:00 – tuned into the radio call of an NCAA men's basketball game, Toledo Rockets vs. Michigan St. Spartans, late in the 1st half.
* 19:30 – MSU wins 92 to 69. Time now to turn off the radio, put on some relaxing music, and quietly read before bed.

Chess:
* 11:15 – moved in all pending CC games

 

from The happy place

I’ve met a lot of interesting people during my travels

I’ve even been to England, I saw some tourists there, whereas I was there for business

Once in Germany I even drank beer from a giant glass shoe, maybe one litre, just like Cinderella.

I’ve been to America too, but I don’t recommend.

I didn’t know what “smog” was before. Still not sure.

Actually, I don’t like travelling unless it’s to Norway.

But on all of these places, shines the same moon

And in Canada once, I ate poutine

That was remarkable.

And there was a giant waterslide, which I saw in a mall.

It was winter there. In Canada (although in the mall it’s all the same)

It doesn’t matter

I have been a few times to Paris

The French are role models

There were poor beggars eating cucumbers from glass jars in the park outside the Eiffel Tower

In Italy there were bad memories of a broken family, my father got blisters on his feet.

Long ago.

I like Greece more than Italy

I felt like Theseus once when we were at Rhodos, but the minotaur was long gone.

It’s the same sky there

There are corpses in the Mediterranean Sea

In Finland they frequently drink beer for lunch, or I did anyway

 

from fromjunia

I found myself welling up with tears before my Buddha statue.

“How are you here? How is the Buddha-nature here? I’m not doubting that it is. I’m asking how? Because this is awful.”

As I’ve talked about before, I’ve been spending the last several months in very dark moods. I’m definitely better than I used to be, but it’s still been about four months since I last left the upper end of depression for longer than a single day. This has given me time to see what the dark moods have to teach me, because they certainly aren’t going anywhere with any haste. Why fight them when they can deepen my understanding of what it means to be human?

This has landed me in a kind of pessimistic liberal theism. Of sorts. Like many Westerners who hold multiple religious identities including Buddhism, I find it gets a little murky in places. Nevertheless, a picture has begun to form, drawing from four sources: Søren Kierkegaard, Alfred North Whitehead, Walter Benjamin, and Mahayana Buddhism (inflected by Zen and Arthur Schopenhauer).


Anxiety and Despair in Kierkegaard

Kierkegaard felt that existing as a human was a pretty rough deal. He was a very sad boy and felt overwhelming depression and anxiety his whole life. He even broke off his engagement because he felt that his fiancée didn’t deserve to deal with his moods (although maybe that was ultimately the correct call, as she was just 14 when they first met; a right answer with the wrong equation). But he spent his time engaging with these moods in a deep way, and came away with a pretty remarkable account of the role of anxiety and despair.

For Kierkegaard, anxiety is a response to the freedom that humans have. We can make meaningful choices that shape our lives. And we don’t have assurance that it’ll work out in the end. That’s scary. I sometimes present anxiety as the general knowledge that even if you do everything right, things can still turn out wrong, which I think Kierkegaard would empathize with. However you slice it, a critical part of Kierkegaard’s position is that anxiety isn’t pathological per se, but rather comes from a confrontation with our base reality as humans. It can be a sign of health, or of moving in a healthy direction.

Something similar happens with despair. Per Kierkegaard, most people are in a state of despair, even if they don’t realize it. That’s because being a human is impossible. We are stuck between who we are—our history, our social circumstances, our habits—and who we are becoming, and we are always becoming and often yearning to become something else. That’s not a stable arrangement. It’s so easy, natural even, to cling to our current state and despair that we are forced to change, or to embrace change and despair that we cannot change certain things about ourselves. According to Kierkegaard, all humans at some point are one or the other, perhaps even shifting between the two. But without an existential anchor to stabilize this process between being and becoming, we are stuck in despair. Kierkegaard thought this existential anchor was the Christian God. As someone who is not a Christian, at least not in any way that would be widely recognizable as such to Christians, I’m inclined to look elsewhere.


Being and Becoming in Whitehead

Whitehead had an interesting take on reality and God. He, like Kierkegaard, thought that we are both being and becoming. He thought all things were being and becoming, actually. That includes God.

Whitehead influenced a lot of liberal theologians with his process thought. He articulated a God that was compassionate—literally, suffering with others, experiencing all that happens directly—and drawing reality to a higher good. He saw a God that held a memory of the universe, grounding the past, experienced the present with all of creation, and non-coercively drew reality towards a more intense future, a “harmony of opposites” where conflicts are not resolved per se but do come to exist in a way that drives things towards aesthetic greatness.

This is an optimistic theology. Whitehead was inclined to think that things get better because the structure of the universe was tilted towards improvement, with God pulling it non-coercively towards an aesthetic greater good.

But if Kierkegaard is right, it’s unclear to me why God would not feel despair either. God cannot fix the past. Maybe God hopes to integrate a disastrous past into a greater harmony of opposites and in that way redeem it. But God can’t do that reliably. Not without the cooperation of the rest of the universe, which is shot through with freedom. There is no promise that the past will ever be redeemed, and it certainly seems that in the arc of human history there is much left to be redeemed, and more happening all the time. From the human angle, there are many things that are irredeemable, generating despair. If God experiences our despair about this as well, then it would seem that God too is unable to resolve the tension between the poles of being and becoming.


Benjamin’s Angel of History

Walter Benjamin wrote about this human perspective in a divine register. His story about the Angel of History has been one of my touchstones for the last decade, and I can only see Whitehead’s God in it.

There is a painting by Klee called Angelus Novus. An angel is depicted there who looks as though he were about to distance himself from something which he is staring at. His eyes are opened wide, his mouth stands open and his wings are outstretched. The Angel of History must look just so. His face is turned towards the past. Where we see the appearance of a chain of events, he sees one single catastrophe, which unceasingly piles rubble on top of rubble and hurls it before his feet. He would like to pause for a moment so fair, to awaken the dead and to piece together what has been smashed. But a storm is blowing from Paradise, it has caught itself up in his wings and is so strong that the Angel can no longer close them. The storm drives him irresistibly into the future, to which his back is turned, while the rubble-heap before him grows sky-high. That which we call progress, is this storm.

(Courtesy of marxists.org)

God is the repository of history, the eternal memory, and is presently experiencing the suffering of all of creation. And per Benjamin, we are experiencing suffering in a particularly salient way: we have perpetually experienced eternal defeat in the form of being forgotten. Whitehead might feel that God’s eternal memory alleviates this, but we do not experience it. God experiences our despair, and the despair itself taints God’s memory, and God wishes it would not, that it be redeemed into a harmony of opposites, but is forever limited by experiencing the facts of reality, which are that we are trapped.

To articulate what is past does not mean to recognize “how it really was.” It means to take control of a memory, as it flashes in a moment of danger. For historical materialism it is a question of holding fast to a picture of the past, just as if it had unexpectedly thrust itself, in a moment of danger, on the historical subject. The danger threatens the stock of tradition as much as its recipients. For both it is one and the same: handing itself over as the tool of the ruling classes. In every epoch, the attempt must be made to deliver tradition anew from the conformism which is on the point of overwhelming it. For the Messiah arrives not merely as the Redeemer; he also arrives as the vanquisher of the Anti-Christ. The only writer of history with the gift of setting alight the sparks of hope in the past, is the one who is convinced of this: that not even the dead will be safe from the enemy, if he is victorious. And this enemy has not ceased to be victorious.

The tradition of the oppressed teaches us that the “emergency situation” in which we live is the rule.

Indeed, Benjamin entirely rejects our ability to access an eternal and complete memory, and for good reason. That is simply not how we experience time. We experience time with salience, with some things more citable than others. We experience time emotionally and dripping with value. We can only imagine that God is the same way.

It is well-known that the Jews were forbidden to look into the future. The Torah and the prayers instructed them, by contrast, in remembrance. This disenchanted those who fell prey to the future, who sought advice from the soothsayers. For that reason the future did not, however, turn into a homogenous and empty time for the Jews. For in it every second was the narrow gate, through which the Messiah could enter.


The Okay Suffering of the Buddha

Buddhism has a weird public image. “The end of suffering,” it proclaims. Perhaps more clear and honest is the saying “pain is unavoidable but suffering is optional.” Even more honest: “You still suffer, but it’s okay now.” “No end to suffering,” as the Heart Sutra teaches. (I understand the context provides nuance; just stay with me here.)

That’s the perspective of Zen Buddhism, particularly of the tradition I’ve had the most engagement with, Ordinary Mind. It’s also aligned somewhat with the interpretation of reality that Arthur Schopenhauer walked away with. According to Schopenhauer, reality is fundamentally unsatisfying. The Buddha would probably agree, with all the caveats and nuances and paradoxes the Buddha always offers. But let’s stay with what we can learn here. Reality is fundamentally unsatisfying, but we can’t escape reality. It’s a pretty bleak situation.

What can we do? Schopenhauer said that we should simply withdraw and engage with reality as little as possible. I’m not sure that’s right. I’d break from Schopenhauer here and follow Ordinary Mind in saying that by coming to reality and letting it teach us, as I have with my dark moods, as Kierkegaard did, it becomes a little more okay. In therapy I’ve heard this referred to as clean pain and dirty pain. There’s the clean pain of reality, and the dirty pain we heap on it. We can at least reduce our suffering by wiping away the dirty pain and leaving ourselves with the clean pain, seeing reality as it is, without the delusions we tend to experience.


Hope, Regardless

This is an awful, tragic view of reality. It’s a tragic view of God, because it means that God is always suffering, and perhaps in perpetually intensifying ways, depending on whether you try to salvage the progression towards a harmony of opposites and on how you understand “aesthetic” here. It means that we can try to stabilize ourselves and end our despair by anchoring ourselves to God, but if we truly do that then we’d be introduced to the despair of others through God’s universal compassion. Mahayana teaches that we’re here to be compassionate to the despair of others and to alleviate it. Perhaps a pessimistic variant of Mahayana Buddhism would say that we can never fully escape suffering, but we can reduce it by caring about others.

Hope is usually understood as forward-looking. It says that in the future, things will be better, or that there’s something in the future to hold on to. I’m not a fan of the latter because it seems like denying parts of reality, and I’m not an optimist about the future, so I don’t like the former either.

But if reality isn’t doing any work for us—if the universe is fundamentally orthogonal to our happiness, if not hostile to it—that means that if we give a damn, we better roll up our sleeves and build it ourselves. It means that there is an imperative to reduce suffering. It means that we find hope not in the future, but right now, in the actions we do to make suffering a little less. It saves us from the idolatry of the future, as pessimist philosopher Emil Cioran says, and frees us to find hope in the reality in front of us, in compassion and care.


Dark Moods, Dark Theology

I had to pass through pretty hopeless times to find a seed of hope again. I might never have if I hadn’t let myself sit and engage with my dark moods. I tried to return to the optimism so popular in contemporary culture, and so prevalent in liberal theology, but I couldn’t experience it as anything other than a lie.

I found hope again. Not in the creative advance of Whitehead, or the existential anchor of Kierkegaard, or the belief in the fundamental goodness of people so common in Unitarian Universalism (one of my faith traditions). I found hope in pessimism. I found compassion in universal suffering. I found a way forward with my faith by understanding my faith as flexible enough to accommodate the suffering that humans experience. Instead of seeing my depression as purely pathological, I let myself understand it as a thing that happens to humans, and as I believe that all things that happen to humans are able to be analyzed under a religious lens, I found religion in depression.

I doubt I’m alone. Like I said, depression happens to people, including religious people. I hope that I can share my pessimistic faith with others and save them from the oppression of mandatory optimism. For now, I return to the compassion of the Buddha, and find it makes my suffering a little more okay.

 

from Build stuff; Break stuff; Have fun!

I'm falling a bit behind. Started another freelance project, so things are a bit slower now.

Most of the MVP is done, and I'm starting to polish some things. For Day 15, there was a small refactor of the Add/Edit forms to make them more robust and improve the UX.

One of the notable things here is the introduction of react-hook-form and zod, which make the most sense in combination with Supabase. In addition, I moved all form fields into shared components.
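For the curious, the pattern looks roughly like this; it's a minimal sketch with hypothetical field names, not the actual project code.

```typescript
// Minimal sketch of the react-hook-form + zod pattern. The schema
// and field names are hypothetical, not the actual project code.
import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import { z } from "zod";

const entrySchema = z.object({
  title: z.string().min(1, "Title is required"),
  amount: z.number().positive("Amount must be positive"),
});

type EntryFormValues = z.infer<typeof entrySchema>;

export function useEntryForm() {
  // zodResolver runs the schema on submit, so validation rules live
  // in one place and the form types are inferred from them.
  return useForm<EntryFormValues>({
    resolver: zodResolver(entrySchema),
    defaultValues: { title: "", amount: 0 },
  });
}
```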

All these changes give me a good feeling that this app can grow beyond the MVP. :)

👋


73 of #100DaysToOffload
#log #AdventOfProgress

 

from Tuesdays in Autumn

Early December must be one of the least propitious times of the year for reading. There always seems to be far too much else to do. Only this evening have I reached the end of a short novel started a few weeks ago: Audition by Katie Kitamura. A correspondent's recommendation had made me curious to read it.

It's a story that struck me as very satisfyingly ambiguous. The narration had a clean, polished surface, which nevertheless gave the impression of considerable depth yawning beneath. Just about every small expectation that came to mind about where it might go next was soon afterwards neatly confounded: seldom did I feel sure of where I stood. It's a book that says a good deal and suggests a great deal more within its relatively narrow span of 197 pages.


The week has largely been taken up with work and with Christmas shopping. I left my annual campaign of on-line festive consumerism inadvisably late this year, and catching up has felt onerous. There have been lengthy sessions switching between browser tabs in search of appropriate items. There has been a barrage of notifications about orders and deliveries. There have been parcels to collect; parcels brought to my door; parcels left outside my door; parcels left outside neighbours' doors. Items damaged in transit or bought in error or subject to buyer's remorse have had to be returned. There have been second thoughts and changes of mind, and items at first intended for one recipient now earmarked for another. Even then, there is at least one item I know I will regret giving. There has been a rapid depletion of funds; a faint nausea about the excess of it all. I have to draw a line under it all now even if some dissatisfaction remains. Only gift-wrapping, distribution and presentation are left to manage.


The last time I'd tried blue cheese before my belated acquisition this year of a taste for the stuff, it had been in the shape of some Stilton four or five Christmases ago. That experience had not been a happy one, and the recollection of it had pushed Stilton some way back in the queue of the cheeses I've wanted to try since my 'conversion'. Fortunately, having now seen the blue light, I have come back to it, and I'm finding the wedge I bought at Tesco on Saturday very much to my liking.


Wine of the week: a somewhat costly but very delicious Manzanilla Pasada, which is “delightfully aromatic with reminiscences of green apples and the characteristic hint of sea breeze.”

 

from Ernest Ortiz Writes Now

I used to like Medium articles. For anything related to writing, marketing, or business, Medium was my go-to. However, too many paywalled articles, and being forced to register just to read the free ones, turned me away from the site.

I do love Substack, and there are many informative and thoughtful creators I follow there. However, I don’t think I’m intelligent enough, nor do I have the time, to write such articles. Who knows, maybe in the future when I’m not so busy.

The main reason I chose Write.as for my primary blog is to focus solely on writing, without worrying about click stats, email marketing, or selling a product or service. All my posts are free and they are not monetized. So enjoy, take what you can learn, and spread the word. And I will try to do the same with yours.

#writing #medium #substack

 

from hello-kate

Something that has been keeping me busy in 2025 is an emerging audacious plan to buy a community building for our neighbourhood.

I live on the Tower Gardens Estate in Tottenham, north London – in the heart of one of the most deprived wards in one of the most deprived local authority areas in England. There is no easily accessible community space on our (large!) estate – and the one potential building – the old estate office, which was a Sure Start centre for a while – is now on Haringey’s disposals list.

A few of us have been working to cook up a plan to get this building and turn it into a community asset. The council are supportive of our plan but need us to raise the money to buy it from them. We think there’s huge potential, but we’re in the classic bind at the start of a project like this – we need money for commissioning our own valuation and condition surveys, and while we’ve done some super fun events (see pics!) to get ideas and opinions, there’s loads more co-design and broadening of our thinking we need to do.

We have such dreams for the building – a community garden, a library of things, a public living room, hireable community space, a music practice room, a community-led retrofit centre for the estate – but if we can’t raise money quickly (the end of the financial year?!) it will be put on the market and then who knows what will happen. Our current thinking is set out here.

We’ve launched a fundraiser to help cover initial costs – here – and we’re applying for as many pots of feasibility funding as we can. We’ve been knocked back from the AHF feasibility pot as the building isn’t listed (although it is an important building in an important conservation area!). I know this is a tight time of year and there are zillions of important places to put money. But I would love advice from the wise folk in my network about how we might get over this initial project start up funding hump.

We’ve got a great group and a decent plan, and I know that we can make something real happen here next year. Really happy to chat to anyone who might have leads on useful Haringey-based funding pots!!

 
Read more...

from Hunter Dansin

What I meant to say to you all those years ago.

One of the notes that inspired this poem

I met them in the margin of a used book,
next to difficult paragraphs
and subtle thoughts.

A penciled question mark
told me all I wanted to know ?
about their mind.

If I gave this book to a friend, I would have to tell them, “The marks are not mine.

“They are the marks of a mind, grappling, stretching, struggling. In a word: reading.

“Though I will say I admire them for persevering with a book, with which they seem to disagree.

“When was the last time you read a book, whose message grated on you, ! and made you want to shut it?

“In this day and age, we put such stock in the cover in our hands and what it says about us.

“Maybe that is why, they put in those marks instead of giving up.

“In that case, I can't fault them. We do what we must to keep reading, when we know it is good for us.”

I suppose those marks in the margin, on the whole, though distracting, made me read deeper into a book

Which I was wont to accept without protest or criticism. Thank you, friend,

For making my mind sharper.
If we ever meet,
I hope to return the favor.

#poetry


I hope you enjoyed this “sequel” to my original short poem In the Margin. I have been reading a book of Robert Frost poems, and have come to really enjoy his deceptively simple dialogue. This was my attempt to adapt the technique, and I hope you liked it.


Send me a kind word or a cup of coffee:

Buy Me a Coffee | Listen to My Music | Listen to My Podcast | Follow Me on Mastodon | Read With Me on Bookwyrm

 
Read more... Discuss...

from hello-kate

Reflecting on 2025 and what I’ve learned this year!

Some key highlights

Making a podcast, obviously! Getting Corporate Bodies out into the world has been a real joy. Getting to work with Mark again and to have some profoundly deep and interesting conversations with very wise and wonderful people has stretched my brain and ambitions in all sorts of ways. Check it out if you haven’t already, I am very proud of it.

The patient, mature and expansive work I’ve had the honour to be part of at Catalyst, as we move to close the CIC. There’s a mix of fierce clarity, recognising that the right path is one of careful closure, and excitement of seeing that we can have a bigger impact by closing proactively and redistributing our remaining funds. There’s obviously also some grief and sadness as Catalyst is one of the most values-led and expansive organisations I’ve been part of – but I know its ripples will continue to expand.

My freelance practice is really starting to take shape! I’ve loved helping organisations and teams conjure the future, building emergent strategies and healthy cultures together. The mix of 1:1 and team coaching, strategy consultancy and facilitation is really energising, and I’ve loved being able to bring together and use frameworks like Three Horizons, Deep Democracy, permaculture and sociocracy, with a nice dose of sci-fi thrown in. I will have some capacity in the new year so if you’d like to chat about how I can work with you, get in touch!

Things are going from strength to strength at Digital Commons, and the addition of the wonderful Sara and Carmen to the team has been really impactful. Building the tech infrastructure that social movements need is hard and slow work, but it’s beginning to emerge – check out the latest updates to LandExplorer.coop, and keep your eyes open for some of the outputs for our Data for Housing Justice collab with Shared Assets early next year…

And – last but not least – I’ve absolutely relished the power of being away from my kitchen table and sharing a lovely space with the wonderful Beth and Deepa – check out the view from my desk! Being in a place where things are being made every day really helps me stay grounded, and integrating a mini-commute into my days has helped hugely with my mental health. Strong recommend!!

![Deepa's work](https://i.snap.as/K75Hy3vv.jpg)
 
Read more...

from Shared Visions

Report by Milan Đorđević, Tijana Cvetković & Noa Treister

Building the cooperative did not begin with a strict program but with a series of conversations about how artists might reorganise their work and the relationships around it. So, this text turns to one specific attempt to rethink artistic exchange and the conditions under which art is produced and circulated in Serbia.

Our research study within the Association of Fine Artists of Serbia (ULUS, 2023) has shown that the visual arts field is marked by a high level of centralisation, dependence on a few major institutions, and the gradual erosion of public cultural infrastructure. As market logic expanded into areas once shaped by collective investment, cultural participation narrowed, particularly for the working class. In such a landscape, artistic value tends to be defined by visibility and demand rather than by social relevance. This is the background against which we began to explore whether practices of exchange outside the monetary frame might open different relationships between art and its surroundings.

In defining the scope and modes of operation of the co-op, we began to experiment with barter as a tool for rethinking exchange. The question we placed at the centre of this process was simple but fundamental: how can barter, as a form of non-monetary exchange, function both as a critique of existing art economies and of the way they define the role of art and artists in society, while also prefiguring alternatives?

Our first public experiment was conducted in Požega in the autumn of 2025, under the title Ponudi, razmeni, ponesi (Offer, Exchange, Take Away). Visitors were invited to offer something of their own in return for an artwork. It could be a haircut, a home-cooked meal, help with repairs, a professional service in a non-art-related field, or money if they wished. The format resembled an auction, but its rhythm and meaning were different. Each artwork was accompanied by a space for offers; after the exhibition closed, the artists reviewed the proposals and decided which to accept.

The exhibition took place in a space that had previously been a hair salon in the centre of Požega. It had been empty for months, and the owner was considering turning it into an art space. That circumstance gave us a kind of freedom that is rare when working within established institutions. There were no curatorial or administrative expectations, only the practical question of how to make the exchange visible and accessible. We organised the exhibition to be open a few afternoons and evenings over a period of three weeks. A person from the local community was engaged for a modest fee to keep the space open, welcome visitors, and explain how to make an offer and how the exchange would unfold. At the same time, we promoted the event through social media, direct letters sent to local entrepreneurs, and through personal networks. In small towns like Požega (≈12,300 inhabitants), we realised that word of mouth still functions as the most effective form of public communication – slower but more durable than any campaign. By the end of the three weeks, around fifty people had visited the space, and several of them made offers.

After the exhibition, we gathered for a workshop that opened one of the most persistent questions among artists: how to define the value of one’s own work. Most participants admitted they find it difficult to put a price on something that does not fit into standard market categories. One artist said she rarely sells her work as an object, and that her decisions depend on “who approaches her and whether they understand each other”. Others spoke about the challenge of balancing artistic integrity with livelihood. As one participant noted, “you can’t measure everything in hours, but you can’t ignore the time and materials either”.

For us, this conversation was central. A cooperative is not built around the idea of profit but around the need for sustainability. As we discussed, we don’t have to be profit-oriented, but we do have to cover our basic living costs. This simple statement cuts through much of the ambiguity that surrounds the notion of artistic value. It recognises that art, like any form of labour, depends on material conditions, but also that value is not fixed – it is negotiated in relation to others, to context, and to shared purpose. And barter became a way to make these relations visible: a two-day truck trip to Durrës in Albania; twenty professional hair colorings and haircuts with no time limit; a curatorial text for the next exhibition; documentation for building legalisation up to 200 square metres; a personal herbarium; a weekend stay with breakfast for up to eight people; and many more proposals, each carrying a different understanding of value and relation. None of them could be translated neatly into monetary terms, and that was precisely the point. The exchanges showed what people were ready to give and how they imagined their connection to art: as care, as time, as skill, as hospitality. Whether professional artistic work becomes a matter of survival arithmetic (as was mentioned during the workshop) or remains unrecognised as labour, the question is the same: how to live from what one creates. As one artist put it, few people see art as work at all, and that is precisely where the cooperative finds its role – to shift perception and rebuild the link between artistic value and the conditions of life that sustain it.

Even though some visitors offered money for the artworks, the non-monetary exchanges shaped the atmosphere of the event in a different way. Instead of fixed prices, artists provided approximate starting points for negotiation, which opened space to focus less on monetary value and more on the people who approached them. Buyers were no longer anonymous figures but individuals whose interests, skills or forms of care said something about why they wanted a particular work. Several artists accepted offers that were modest or unconventional, simply because they felt a sense of recognition in them. From a conventional entrepreneurial standpoint, accepting less than the assumed market value might be seen as diminishing one’s worth, but this concern did not play a central role here. The exchange was not framed as a market transaction to be optimised, but as a space in which value could be shaped through relation rather than price. That shift loosened the usual distance between artist and audience and made the encounter feel grounded in mutual attention instead of market logic.

The co-op should bring artistic labour back into the everyday economy of life and exchange, without romanticising precarity or denying the need for income. Its way of selling art should test how art might live when its value comes from relations rather than from market recognition. This intention became clearer when we proposed to repeat the experiment in one of the central art spaces in Serbia. The response from its curators exposed precisely the tension we wanted to address. They worried that the idea of exchanging artworks for homemade goods or services could “devalue” art and “encourage amateurism”. Their concern was not unique; it reflected a broader institutional anxiety about how artistic value is defined and protected in a system that already struggles to sustain its own workers.

What the experiment left us with was not a ready-made model but a clearer sense of the questions that need to be worked through: how to organise exchanges that recognise artistic labour without falling back on market metrics; how to involve communities without reproducing hierarchies; and how to build structures that make such practices sustainable rather than exceptional. For the next iteration of the exhibition, we turned to a public library in Bor, a mining town with a growing community of Chinese workers, as a place to continue the experiment. The interest shown by artists during the open call, their questions, suggestions, and willingness to engage even when they could not participate, confirmed that the need for such spaces is real. Rather than closing a cycle, the workshop and exhibition in Požega marked the beginning of a longer process; in the coming period, we plan to develop a series of these kinds of events that deepen this exploration of alternative economies of art.

 
Read more... Discuss...

from Unvarnished diary of a lill Japanese mouse

JOURNAL 16 December 2025 Introspection

Kotatsu season is back: my princess with her laptop, me with my elbows around a book (I hold my head up, elbows on the table). We lowered the lamp so we’d both be well lit; A wears glasses to work, and she doesn’t want me tiring my eyes. We’ve put on our lined hanten. That’s the setting.

Inside my head it’s less clear. I haven’t had a nightmare since I don’t know when, and no more hallucinations either. I feel much more stable, much calmer. I fall asleep without fear.

My shrinks tell me I’m not finished. I’m willing to believe it, since I still can’t manage to talk about it with my brother; and yet I think my interpretation is right, so what is still wrong? It’s true, I took those blows and that bullying as proof of interest, back when I believed I didn’t exist. I was even proud of it. Crazy, right? It’s true. I did more than put up with it, I loved it. I carried my marks around like medals; I was proud of knowing how to endure.

In Hokkaido they never got a cry out of me, maybe some moans I managed to stifle. It was like a challenge. When I was raped I couldn’t hold back the tears, but not a sound, I know it; they forced me to watch the vile videos, they called me a little slut, a little arrogant thing, a little aristo piece of shit. They pulled my hair. It drove them into a rage, and me, not a sound, and I didn’t lower my eyes. They went mad; I got slaps, beatings, they threw me to the floor… Anyway. So what now, what’s missing? What is buried so deep that I see nothing surfacing, not a single clue? My shrinks seem to have an idea, but maybe they’re bluffing; I’m alone in front of this question. Why don’t I dare talk to my brother about it? Why don’t I dare tell him that I loved his violent tyranny? So as not to lose my status as a heroic victim? As if I cared about that. I don’t understand. I don’t see where the point is. My darling can’t do anything more to help me, though she would want to so much. No one can do anything anymore. It’s between me and me, damn it.

 
Read more...
