Want to join in? Respond to our weekly writing prompts, open to everyone.
from
This Taunted Earth
For every eleven
A tall seeking series
Forego the sea
And this planet sky
To Roskilde and proper
Dare peace
And go by the Window
We have a word
With gold and great
By silver spore
A fittance for the lot
With highest purple
To seize our Earth
And Summer bright
To make stands blue
For reading measure
A space
For mew
from
Children of The USA
A bliss to the United States
In Holy reset of the universe
In clarity we know and make this stand
That God is our home
Being there in the way
To see the Woman, Liberty
A notice plan built
Spare courage in the nine
A Dove took our day
To be here and afar
Timeless noon and this night
There is courage in the morning
A special day
To fit our path
Redeeming spirit
Impressed forever
And a way with this Flower
Betrothed to each petal
We are forever
In nature's hand
from
doooong - blog
#Yesteryear #doooongMuse
JUN TOGAWA 戸川純


Jun Togawa (戸川純, Togawa Jun; born 31 March 1961) is a Japanese singer, musician and actress. She is one of the greatest influences on Japanese avant-garde music and media, and her career spans over 35 years. Her close friends over the years include Susumu Hirasawa. She was mainly active from 1981 to 1995.


After gaining attention as a guest singer for the New Wave band Halmens and her acting roles in Japanese dramas and commercials for the Washlet, she began her professional music career in the early 1980s as a singer.[1][2] She joined former Halmens member Kōji Ueno and artist/lyricist Keiichi Ohta to form the Shōwa era-themed band Guernica in 1981, whose first album was released under YEN Records in 1982.
In 1984, during a hiatus from Guernica, she released a live album, Ura Tamahime, with a backing band called Yapoos; the band included some former Halmens members and the album featured several covers of Halmens songs. The same year, she released her debut solo album, Tamahime-sama (also on YEN), containing themes of menstruation, womanhood, and romance with a recurring insect and pupa motif. The following year, she released the album Kyokuto Ian Shoka (Far Eastern Comfort Songs) with a backing band called the Jun Togawa Unit. Later that year she released the album Suki Suki Daisuki, a satirical take on aidoru (idol) music, this time under her own Alfa Records sublabel, HYS.


She then joined Yapoos and solidified the group as an official band, releasing their first album in 1987. She did two more albums with Guernica in 1988 and 1989, and continued singing with Yapoos, releasing albums mainly into the mid-90s, then one in 2003 and another in 2019. Generally, what has distinguished the Yapoos from her self-named bands is the greater degree of collaboration in the former.

Although she never achieved major pop success, she has endured as an influential and respected underground music figure, both solo and as the lead singer of Guernica and of her most commercial project, Yapoos, and she is particularly noted for her connection to eroguro culture.


Notable collaborators over the years include Haruomi Hosono, who sponsored Guernica's first album and produced and wrote music for some of her earlier works. Her late sister Kyoko Togawa was an actress who at times ventured into the music world and collaborated with her. Around 1990, Jun shared management with Susumu Hirasawa, resulting in quite a number of collaborations.
She has acted in the films Untamagiru and The Family Game.
In 1989, Susumu Hirasawa, who had placed his band P-MODEL on hiatus, joined the Yapoos as a support member, appearing in the "Bach Studio II" section of the TV program "Yume de Aietara", where he played in a session with Downtown and Ucchan Nanchan.

In 1991, Togawa appeared on the TV Tokyo program Jun Togawa x Susumu Hirasawa (MC: Kenzo Saeki), "Jun Togawa Revival Festival!"
In 1992, Susumu Hirasawa offered her "Beals (1992)" as a Yapoos song.
In 1995, "Showa Kyounen" was released to commemorate the 10th anniversary of Jun Togawa's performing career. Based on the concept of "covering nostalgic melodies of the Showa era," the album contained six songs arranged by Susumu Hirasawa. The songs include "Ribbon Knight," composed by Isao Tomita and arranged by Susumu Hirasawa.
Her 2004 album, Togawa Fiction, with the Jun Togawa Band, featured elements of progressive rock, electropop and other genres. In 2008, she released a career-spanning three-CD boxed set, Togawa Legend Self Select Best & Rare 1979-2008 which featured many of her most popular songs along with several scarcer tracks and hard to find collaborations.
She marked the 35th anniversary of her professional career in 2016 by releasing new collaboration albums with Vampillia and Hijokaidan, her first new recordings in twelve years.
https://www.youtube.com/watch?v=cWlugvcnuSA
https://www.youtube.com/watch?v=X90sB3L9osY
from Editorial Anticatástrofe
What a text conveys is not only a matter of communicating a message through the words themselves, but also through the medium used. The most obvious aspect is the visual, but the form, and even the process, says something more about the text.
Motherhood has been a life experience that has run through me entirely. It has inspired me, exhausted me and fulfilled me like nothing before. It has touched me bodily and emotionally, so giving shape to a publication about it is not at all easy. Although I studied a bit of editorial design in my arts degree, I like to dig around and research more sources of inspiration. My first approach to creating "Maternal y ser" was precisely that: calmly looking for publications about editorial work and design.

To design the "Maternal y ser" zine, it was important to choose materials and techniques that matched the overall message. The inner pages have a texture I find pleasant, somewhat stiff (I have since used the same paper in other publications because I liked it so much), and the cover is transparent. Besides being aesthetically pleasing, I liked that this transparency represented, in part, the sense of nakedness that comes with sharing something so personal. Being tracing paper, it is not exactly transparent the way plastic would be; it looks like a kind of thin skin. It is a cover, but you can see perfectly what it hides. As a detail, on the cover I include a scanned image from one of my notebooks. I love writing by hand, and I think something is sometimes lost in transcription. This is my way of "honouring" the handwritten.

To design the interior, I made a plain-text document with all the content and then visualised the pages mentally. In this case I didn't want to nest any of them, so it would be a collection of several "postcards" bound together. The main complication was that I was going to use some large images (illustrations I made myself), and for the quality to be optimal I first had to print the page as a PNG with the printer in photo mode, then place that same sheet back in the paper tray, positioned so that I could then print the actual document from the print-ready PDF. Of course, this means I can't simply send off a PDF and expect it to print: every time there was an image, I had to pull out a section, print the image, reload the paper tray, and print again.

On top of everything, days earlier I had been trying out "fake" screen printing. That is, making an image in two or three flat-colour parts and, instead of printing it all as a single layered image, printing one layer, placing the sheet back in the printer tray, printing the next, and so on. The result is almost identical to printing everything at once, but the difference in the ink layer is slightly noticeable. I decided to produce one page containing a highlight this way, printing the highlight first and feeding the printed sheet back into the tray to print the text. The result is very pretty, tedious as it is.

Finally, once everything was printed, there was the sewing. I considered machine sewing, but I thought that for something so personal it made sense to do it by hand (because that's how I am). I chose a stiff red thread (blood red) and began sewing the interior, stitching the first sheet to the second, the second to the third and so on, deliberately letting the loose ends hang uncut. They fell in a dramatic way that suggested something visceral; in my head it reminded me of my own labour. Sewing every copy was tedious, but I did it one evening at the table where we eat, in the company of my partner, who was studying alongside me, so it was an experience of calm within the chaos that mothering sometimes is.
An art publication I have adored for years, perhaps one of my favourite books, uses mirrored text that can only be read correctly against the light. That publication, "Como la casa mia" by Laura C. Vela and Xirou Xiao, created a pause in the reader through this, or at least that was my feeling, making the experience more intimate. I wanted to do something similar, so I used tracing paper (again) with mirrored text that completed the text on the previous page. To get it right I had to do several layout tests until it fit the way I wanted, but the result made me very happy. You have to be careful, because if the sheet is folded fresh out of the printer, the paper smudges.

Finally, the cover had a problem. If I simply folded it in half, it couldn't wrap around the contents, which were thick; and if I left the spine margin in the middle, the front cover ended up split. It occurred to me to cover the full front and let the back cover be the one that came up short, and I loved the result. It seemed to invite reading by letting a glimpse of the interior show through (even more than the transparency already did). I thought about leaving it unattached, but for cases like this I had bought, days earlier, a kind of binder of metal clips. Clipping the first page to the cover was enough, and it was done.
It is not a design meant to be produced quickly many times over; it is an artisanal design that requires care and patience, like making a drawing. That is why each volume, short as it is, took me around 20 minutes to complete (not counting the hours of digital layout and print tests, since those are only done once). I wasn't trying to make many, just a few, including one for the mothers' circle I attend. Unfortunately, they got the first one I made after the tests, and a typo slipped in that I have corrected in later copies. The result comes from wanting to pour in several ideas that accompany the text, without any pretension of making the craft efficient.
In conclusion, devoting patience and prioritising craft that carries the message can be an incredible experience for learning and experimenting, and the final result can be very satisfying. It doesn't stand out for quantity, but that doesn't matter.
from An Open Letter
It's kind of weird how catching up on sleep makes such a massive difference to my emotional well-being. Same with exercise. I feel much better today, and even though I understand that circumstances have changed since then, it's still really surprising how drastic that change is. This is a really weird thing to talk about, but today I thought a bit about why I want a relationship or love in the first place. Not because I don't feel it, but mostly because I'm not fully sure how to put into words why I want it. I think what it essentially boils down to for me is that it doesn't have to be something magical, where someone suddenly gives you a purpose in life or anything like that. I think it's more like an incredibly close friend you share a lot of proximity with, someone you can build trust and reliability with, and I don't really think there's anyone better than E in that sense. Sometimes I do get a little worried about minor things; at certain points I think she is much more comfortable being "weird", which isn't a bad thing, just something different from what I'm used to. I love her so much, and I'm honestly very surprised when I think about what the future might be like with her.
from
Bloc de notas
the updates it received didn't do it much good / believe me, if anything they harmed it, to the point that one day / after dinner it could no longer shut itself down
Pilots' Almanac: Maritime & Piloting Rules is currently DriveThruRPG's deal of the day at 50% off. Although written for Hârn, I found it to be a great resource in my OD&D game, especially the sections on crews, ships, and maritime trade. It's an expensive book that rarely goes on sale.
Frog God Games is also running a 50% off sale. From what I can see, some of the products are 75% off. It's a good sale to pick up some of the more expensive tomes and bundles like Razor Coast, Northlands, and the Necromancer Games supplements.
#Sale #FGG #NG #Harn #OSR
from
Human in the Loop

The promise was straightforward: Google would democratise artificial intelligence, putting powerful creative tools directly into creators' hands. Google AI Studio emerged as the accessible gateway, a platform where anyone could experiment with generative models, prototype ideas, and produce content without needing a computer science degree. Meanwhile, YouTube stood as the world's largest video platform, owned by the same parent company, theoretically aligned in vision and execution. Two pillars of the same ecosystem, both bearing the Alphabet insignia.
Then came the terminations. Not once, but twice. A fully verified YouTube account, freshly created through proper channels, uploading a single eight-second test video generated entirely through Google's own AI Studio workflow. The content was harmless, the account legitimate, the process textbook. Within hours, the account vanished. Terminated for "bot-like behaviour." The appeal was filed immediately, following YouTube's prescribed procedures. The response arrived swiftly: appeal denied. The decision was final.
So the creator started again. New account, same verification process, same innocuous test video from the same Google-sanctioned AI workflow. Termination arrived even faster this time. Another appeal, another rejection. The loop closed before it could meaningfully begin.
This is not a story about a creator violating terms of service. This is a story about a platform so fragmented that its own tools trigger its own punishment systems, about automation so aggressive it cannot distinguish between malicious bots and legitimate experimentation, and about the fundamental instability lurking beneath the surface of platforms billions of people depend upon daily.
Google has spent considerable resources positioning itself as the vanguard of accessible AI. Google AI Studio, formerly known as MakerSuite, offers direct access to models like Gemini and PaLM, providing interfaces for prompt engineering, model testing, and content generation. The platform explicitly targets creators, developers, and experimenters. The documentation encourages exploration. The barrier to entry is deliberately low.
The interface itself is deceptively simple. Users can prototype with different models, adjust parameters like temperature and token limits, experiment with system instructions, and generate outputs ranging from simple text completions to complex multimodal content. Google markets this accessibility as democratisation, as opening AI capabilities that were once restricted to researchers with advanced degrees and access to massive compute clusters. The message is clear: experiment, create, learn.
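The temperature parameter mentioned above is worth unpacking, since it illustrates how low the barrier to experimentation really is: it rescales a model's raw token scores (logits) before they become probabilities, so low values make output near-deterministic and high values make it more varied. A minimal sketch with invented logits (the numbers are illustrative assumptions, not output from any Google model):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw logits into probabilities, rescaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, temperature=0.2)  # near-greedy
warm = softmax_with_temperature(logits, temperature=2.0)  # flatter, more varied

print(round(cold[0], 3), round(warm[0], 3))  # → 0.993 0.502
```

Same model, same logits; a single slider changes the character of the output, which is exactly the kind of one-parameter experimentation the platform invites.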
YouTube, meanwhile, processes over 500 hours of video uploads every minute. Managing this torrent requires automation at a scale humans cannot match. The platform openly acknowledges its hybrid approach: automated systems handle the initial filtering, flagging potential violations for human review in complex cases. YouTube addressed creator concerns in 2024 by describing this as a "team effort" between automation and human judgement.
The problem emerges in the gap between these two realities. Google AI Studio outputs content. YouTube's moderation systems evaluate content. When the latter cannot recognise the former as legitimate, the ecosystem becomes a snake consuming its own tail.
This is not theoretical. Throughout 2024 and into 2025, YouTube experienced multiple waves of mass terminations. In October 2024, YouTube apologised for falsely banning channels for spam, acknowledging that its automated systems incorrectly flagged legitimate accounts. Channels were reinstated, subscriptions restored, but the underlying fragility of the system remained exposed.
The November 2025 wave proved even more severe. YouTubers reported widespread channel terminations with no warning, no prior strikes, and explanations that referenced vague policy violations. Tech creator Enderman lost channels with hundreds of thousands of subscribers. Old Money Luxury woke to find a verified 230,000-subscriber channel completely deleted. True crime creator FinalVerdictYT's 40,000-subscriber channel vanished for alleged "circumvention" despite having no history of ban evasion. Animation creator Nani Josh lost a channel with over 650,000 subscribers without warning.
YouTube's own data from this period revealed the scale: 4.8 million channels removed, 9.5 million videos deleted. Hundreds of thousands of appeals flooded the system. The platform insisted there were "no bugs or known issues" and attributed terminations to "low effort" content. Creators challenged this explanation by documenting their appeals process and discovering something unsettling.
YouTube's official position on appeals has been consistent: appeals are manually reviewed by human staff. The @TeamYouTube account stated on November 8, 2025, that "Appeals are manually reviewed so it can take time to get a response." This assurance sits at the foundation of the entire appeals framework. When automation makes mistakes, human judgement corrects them. It is the safety net.
Except creators who analysed their communication metadata discovered the responses were coming from Sprinklr, an AI-powered automated customer service platform. Creators challenged the platform's claims of manual review, presenting evidence that their appeals received automated responses within minutes, not the days or weeks human review would require.
The gap between stated policy and operational reality is not merely procedural. It is existential. If appeals are automated, then the safety net does not exist. The system becomes a closed loop where automated decisions are reviewed by automated processes, with no human intervention to recognise context, nuance, or the simple fact that Google's own tools might be generating legitimate content.
For the creator whose verified account was terminated twice for uploading Google-generated content, this reality is stark. The appeals were filed correctly, the explanations were detailed, the evidence was clear. None of it mattered because no human being ever reviewed it. The automated system that made the initial termination decision rubber-stamped its own judgement through an automated appeals process designed to create the appearance of oversight without the substance.
The appeals interface itself reinforces the illusion. Creators are presented with a form requesting detailed explanations, limited to 1,000 characters. The interface implies human consideration, someone reading these explanations and making informed judgements. But when responses arrive within minutes, when the language is identical across thousands of appeals, when metadata reveals automated processing, the elaborate interface becomes theatre. It performs the appearance of due process without the substance.
YouTube's content moderation statistics reveal the scale of automation. The platform confirmed that automated systems are removing more videos than ever before. As of 2024, between 75% and 80% of all removed videos never receive a single view, suggesting automated removal before any human could potentially flag them. The system operates at machine speed, with machine judgement, and increasingly, machine appeals review.
Understanding how this breakdown occurs requires examining the technical infrastructure behind both content creation and content moderation. Google AI Studio operates as a web-based development environment where users interact with large language models through prompts. The platform supports text generation, image creation through integration with other Google services, and increasingly sophisticated multimodal outputs combining text, image, and video.
When a user generates content through AI Studio, the output bears no intrinsic marker identifying it as Google-sanctioned. There is no embedded metadata declaring "This content was created through official Google tools." The video file that emerges is indistinguishable from one created through third-party tools, manual editing, or genuine bot-generated spam.
YouTube's moderation systems evaluate uploads through multiple signals: account behaviour patterns, content characteristics, upload frequency, metadata consistency, engagement patterns, and countless proprietary signals the platform does not publicly disclose. These systems were trained on vast datasets of bot behaviour, spam patterns, and policy violations. They learned to recognise coordinated inauthentic behaviour, mass-produced low-quality content, and automated upload patterns.
The machine learning models powering these moderation systems operate on pattern recognition. They do not understand intent. They cannot distinguish between a bot network uploading thousands of spam videos and a single creator experimenting with AI-generated content. Both exhibit similar statistical signatures: new accounts, minimal history, AI-generated content markers, short video durations, lack of established engagement patterns.
The problem is that legitimate experimental use of AI tools can mirror bot behaviour. A new account uploading AI-generated content exhibits similar signals to a bot network testing YouTube's defences. Short test videos resemble spam. Accounts without established history look like throwaway profiles. The automated systems, optimised for catching genuine threats, cannot distinguish intent.
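That inability to read intent can be demonstrated with a toy version of such a classifier. The features and weights below are invented for illustration, not YouTube's actual signals; the point is that a spam bot's probe and a legitimate creator's AI Studio test produce identical feature vectors, and therefore identical scores:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    account_age_days: int
    prior_uploads: int
    video_seconds: int
    ai_generated: bool

def bot_risk_score(u: Upload) -> int:
    """Toy surface-signal risk score (0-100); weights are invented for illustration."""
    score = 0
    if u.account_age_days < 7:
        score += 40  # brand-new account
    if u.prior_uploads == 0:
        score += 20  # no upload history
    if u.video_seconds < 15:
        score += 20  # very short clip, spam-like
    if u.ai_generated:
        score += 20  # synthetic-content marker
    return score

# A bot network's probe and a creator's first AI Studio experiment look the same:
spam_bot = Upload(account_age_days=1, prior_uploads=0, video_seconds=8, ai_generated=True)
new_creator = Upload(account_age_days=1, prior_uploads=0, video_seconds=8, ai_generated=True)

print(bot_risk_score(spam_bot), bot_risk_score(new_creator))  # → 100 100
```

Intent, the one feature that separates the two, never appears in the vector, so no threshold on this score can separate them either.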
This technical limitation is compounded by the training data these models learn from. The datasets consist overwhelmingly of actual policy violations: spam networks, bot accounts, coordinated manipulation campaigns. The models learn these patterns exceptionally well. But they rarely see examples of legitimate experimentation that happens to share surface characteristics with violations. The training distribution does not include "creator using Google's own tools to learn" because, until recently, this scenario was not common enough to appear in training data at meaningful scale.
This is compounded by YouTube's approach to AI-generated content. In 2024, YouTube revealed its AI content policies, requiring creators to "disclose when their realistic content is altered or synthetic" through YouTube Studio's disclosure tools. This requirement applies to content that "appears realistic but does not reflect actual events," particularly around sensitive topics like elections, conflicts, public health crises, or public officials.
But disclosure requires access to YouTube Studio, which requires an account that has not been terminated. The catch-22 is brutal: you must disclose AI-generated content through the platform's tools, but if the platform terminates your account before you can access those tools, disclosure becomes impossible. The eight-second test video that triggered termination never had the opportunity to be disclosed as AI-generated because the account was destroyed before the creator could navigate to the disclosure settings.
Even if the creator had managed to add disclosure before upload, there is no evidence YouTube's automated moderation systems factor this into their decisions. The disclosure tools exist for audience transparency, not for communicating with moderation algorithms. A properly disclosed AI-generated video can still trigger termination if the account behaviour patterns match bot detection signatures.
This is not isolated to YouTube and Google AI Studio. It reflects a broader architectural problem across major platforms: the right hand genuinely does not know what the left hand is doing. These companies have grown so vast, their systems so complex, that internal coherence has become aspirational rather than operational.
Consider the timeline of events in 2024 and 2025. Google returned to using human moderators for YouTube after AI moderation errors, acknowledging that replacing humans entirely with AI "is rarely a good idea." Yet simultaneously, YouTube CEO Neal Mohan announced that the platform is pushing ahead with expanded AI moderation tools, even as creators continue reporting wrongful bans tied to automated systems.
The contradiction is not subtle. The same organisation that acknowledged AI moderation produces too many errors is committed to deploying more of it. The same ecosystem encouraging creators to experiment with AI tools punishes them when they do.
Or consider YouTube's AI moderation system pulling Windows 11 workaround videos. Tech YouTuber Rich White had a how-to video on installing Windows 11 with a local account removed, with YouTube allegedly claiming the content could "lead to serious harm or even death." The absurdity of the claim underscores the system's inability to understand context. An AI classifier flagged content based on pattern matching without comprehending the actual subject matter.
This problem extends beyond YouTube. AI-generated NSFW images slipped past YouTube moderators by hiding manipulated visuals in what appear to be harmless images when viewed by automated systems. These AI-generated composites are designed to evade moderation tools, highlighting that systems designed to stop bad actors are being outpaced by them, with AI making detection significantly harder.
The asymmetry is striking: sophisticated bad actors using AI to evade detection succeed, while legitimate creators using official Google tools get terminated. The moderation systems are calibrated to catch the wrong threat level. Adversarial actors understand how the moderation systems work and engineer content to exploit their weaknesses. Legitimate creators follow official workflows and trigger false positives. The arms race between platform security and bad actors has created collateral damage among users who are not even aware they are in a battlefield.
Behind every terminated account is disruption. For casual users, it might be minor annoyance. For professional creators, it is existential threat. Channels representing years of work, carefully built audiences, established revenue streams, and commercial partnerships can vanish overnight. The appeals process, even when it functions correctly, takes days or weeks. Most appeals are unsuccessful. According to YouTube's official statistics, "The majority of appealed decisions are upheld," meaning creators who believe they were wrongly terminated rarely receive reinstatement.
The creator whose account was terminated twice never got past the starting line. There was no audience to lose because none had been built. There was no revenue to protect because none existed yet. But there was intent: the intent to learn, to experiment, to understand the tools Google itself promotes. That intent was met with immediate, automated rejection.
This has chilling effects beyond individual cases. When creators observe that experimentation carries risk of permanent account termination, they stop experimenting. When new creators see established channels with hundreds of thousands of subscribers vanish without explanation, they hesitate to invest time building on the platform. When the appeals process demonstrably operates through automation despite claims of human review, trust in the system's fairness evaporates.
The psychological impact is significant. Creators describe the experience as Kafkaesque: accused of violations they did not commit, unable to get specific explanations, denied meaningful recourse, and left with the sense that they are arguing with machines that cannot hear them. The verified creator who followed every rule, used official tools, and still faced termination twice experiences not just frustration but a fundamental questioning of whether the system can ever be navigated successfully.
A survey on trust in the creator economy found that more than half of consumers (52%), creators (55%), and marketers (48%) agreed that generative AI decreased consumer trust in creator content. The same survey found that similar majorities agree AI increased misinformation in the creator economy. When platforms cannot distinguish between legitimate AI-assisted creation and malicious automation, this erosion accelerates.
The response from many creators has been diversification: building presence across multiple platforms, developing owned channels like email lists and websites, and creating alternative revenue streams outside platform advertising revenue. This is rational risk management when platform stability cannot be assumed. But it represents a failure of the centralised platform model. If YouTube were genuinely stable and trustworthy, creators would not need elaborate backup plans.
The economic implications are substantial. Creators who might have invested their entire creative energy into YouTube now split attention across multiple platforms. This reduces the quality and consistency of content on any single platform, creates audience fragmentation, and increases the overhead required simply to maintain presence. The inefficiency is massive, but it is rational when the alternative is catastrophic loss.
Beneath the technical failures and operational contradictions lies a philosophical problem: can automated systems make fair judgements about content when they cannot understand intent, context, or the ecosystem they serve?
YouTube's moderation challenges stem from attempting to solve a fundamentally human problem with non-human tools. Determining whether content violates policies requires understanding not just what the content contains but why it exists, who created it, and what purpose it serves. An eight-second test video from a creator learning Google's tools is categorically different from an eight-second spam video from a bot network, even if the surface characteristics appear similar.
Humans make this distinction intuitively. Automated systems struggle because intent is not encoded in pixels or metadata. It exists in the creator's mind, in the context of their broader activities, in the trajectory of their learning. These signals are invisible to pattern-matching algorithms.
The reliance on automation at YouTube's scale is understandable. Human moderation of 500 hours of video uploaded every minute is impossible. But the current approach assumes automation can carry judgements it is not equipped to make. When automation fails, human review should catch it. But if human review is itself automated, the system has no correction mechanism.
This creates what might be called "systemic illegibility": situations where the system cannot read what it needs to read to make correct decisions. The creator using Google AI Studio is legible to Google's AI division but illegible to YouTube's moderation systems. The two parts of the same company cannot see each other.
The philosophical question extends beyond YouTube. As more critical decisions get delegated to automated systems, across platforms, governments, and institutions, the question of what these systems can legitimately judge becomes urgent. There is a category error in assuming that because a system can process vast amounts of data quickly, it can make nuanced judgements about human behaviour and intent. Speed and scale are not substitutes for understanding.
For developers, creators, and businesses considering building on Google's platforms, this fragmentation raises uncomfortable questions. If you cannot trust that content created through Google's own tools will be accepted by Google's own platforms, what can you trust?
The standard advice in the creator economy has been to âown your platformâ: build your own website, maintain your own mailing list, control your own infrastructure. But this advice assumes platforms like YouTube are stable foundations for reaching audiences, even if they should not be sole revenue sources. When the foundation itself is unstable, the entire structure becomes precarious.
Consider the creator pipeline: develop skills with Google AI Studio, create content, upload to YouTube, build an audience, establish a business. This pipeline breaks at step three. The content created in step two triggers termination before step four can begin. The entire sequence is non-viable.
This is not about one creator's bad luck. It reflects structural instability in how these platforms operate. YouTube's October 2024 glitch resulted in erroneous removal of numerous channels and bans of several accounts, highlighting potential flaws in the automated moderation system. The system wrongly flagged accounts that had never posted content, catching inactive accounts, regular subscribers, and long-time creators indiscriminately. The automated system operated without adequate human review.
When âglitchesâ of this magnitude occur repeatedly, they stop being glitches and start being features. The system is working as designed, which means the design is flawed.
For technical creators, this instability is particularly troubling. The entire value proposition of experimenting with AI tools is to learn through iteration. You generate content, observe results, refine your approach, and gradually develop expertise. But if the first iteration triggers account termination, learning becomes impossible. The platform has made experimentation too dangerous to attempt.
The risk calculus becomes perverse. Established creators with existing audiences and revenue streams can afford to experiment because they have cushion against potential disruption. New creators who would benefit most from experimentation cannot afford the risk. The platform's instability creates barriers to entry that disproportionately affect exactly the people Google claims to be empowering with accessible AI tools.
This dysfunction occurs against a backdrop of increasing regulatory scrutiny of major platforms and growing competition in the AI space. The EU AI Act and the US Executive Order on AI are responding to concerns about AI-generated content with disclosure requirements and accountability frameworks. YouTube's policies requiring disclosure of AI-generated content align with this regulatory direction.
But regulation assumes platforms can implement policies coherently. When a platform requires disclosure of AI content but terminates accounts before creators can make those disclosures, the regulatory framework becomes meaningless. Compliance is impossible when the platform's own systems prevent it.
Meanwhile, alternative platforms are positioning themselves as more creator-friendly. Decentralised AI platforms are emerging as infrastructure for the $385 billion creator economy, with DAO-driven ecosystems allowing creators to vote on policies rather than having them imposed unilaterally. These platforms explicitly address the trust erosion creators experience with centralised platforms, where algorithmic bias, opaque data practices, unfair monetisation, and bot-driven engagement have deepened the divide between platforms and users.
Google's fragmented ecosystem inadvertently makes the case for these alternatives. When creators cannot trust that official Google tools will work with official Google platforms, they have incentive to seek platforms where tool and platform are genuinely integrated, or where governance is transparent enough that policy failures can be addressed.
YouTube's dominant market position has historically insulated it from competitive pressure. But as 76% of consumers report trusting AI influencers for product recommendations, and new platforms optimised for AI-native content emerge, YouTube's advantage is not guaranteed. Platform stability and creator trust become competitive differentiators.
The competitive landscape is shifting. TikTok has demonstrated that dominant platforms can lose ground rapidly when creators perceive better opportunities elsewhere. Instagram Reels and YouTube Shorts were defensive responses to this competitive pressure. But defensive features do not address fundamental platform stability issues. If creators conclude that YouTube's moderation systems are too unpredictable to build businesses on, no amount of feature parity with competitors will retain them.
There are several paths forward, each with different implications for creators, platforms, and the broader digital ecosystem.
Scenario One: Continued Fragmentation
The status quo persists. Google's various divisions continue operating with insufficient coordination. AI tools evolve independently of content moderation systems. Periodic waves of false terminations occur, the platform apologises, and nothing structurally changes. Creators adapt by assuming platform instability and planning accordingly. Trust continues eroding incrementally.
This scenario is remarkably plausible because it requires no one to make different decisions. Organisational inertia favours it. The consequences are distributed and gradual rather than acute and immediate, making them easy to ignore. Each individual termination is a small problem. The aggregate pattern is a crisis, but crises that accumulate slowly do not trigger the same institutional response as sudden disasters.
Scenario Two: Integration and Coherence
Google recognises the contradiction and implements systematic fixes. AI Studio outputs carry embedded metadata identifying them as Google-sanctioned. YouTube's moderation systems whitelist content from verified Google tools. Appeals processes receive genuine human review with meaningful oversight. Cross-team coordination ensures policies align across the ecosystem.
This scenario is technically feasible but organisationally challenging. It requires admitting current approaches have failed, allocating significant engineering resources to integration work that does not directly generate revenue, and imposing coordination overhead across divisions that currently operate autonomously. It is the right solution but requires the political will to implement it.
The technical implementation would not be trivial but is well within Google's capabilities. Embedding cryptographic signatures in AI Studio outputs, creating API bridges between moderation systems and content creation tools, implementing graduated trust systems for accounts using official tools, all of these are solvable engineering problems. The challenge is organisational alignment and priority allocation.
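To make the idea concrete, one way to picture the integration Scenario Two describes: the creation tool attaches a signed provenance tag to each output's metadata, and the moderation pipeline verifies that tag before scoring the content. The following is a minimal illustrative sketch in Python, not any real Google or YouTube API; the key, tool name, and account ID are all hypothetical, and a production system would use asymmetric signatures and content credentials (along the lines of the C2PA standard) rather than a shared HMAC key.

```python
import hmac
import hashlib
import json

# Hypothetical shared secret between the creation tool and the
# moderation pipeline. In practice this would be an asymmetric key
# pair, with the tool holding the private key.
SIGNING_KEY = b"example-provenance-key"

def sign_provenance(metadata: dict) -> str:
    """Tool side: produce a provenance tag over canonicalised metadata."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_provenance(metadata: dict, tag: str) -> bool:
    """Moderation side: check the tag before treating content as unknown-origin."""
    expected = sign_provenance(metadata)
    return hmac.compare_digest(expected, tag)

meta = {"tool": "example-ai-studio", "account_id": "creator-123"}
tag = sign_provenance(meta)

assert verify_provenance(meta, tag)            # sanctioned-tool output recognised
assert not verify_provenance({**meta, "tool": "bot-farm"}, tag)  # forged origin rejected
```

The point of the sketch is the shape of the fix, not the cryptography: content whose metadata verifies against a sanctioned tool's key can be routed to a graduated-trust path instead of the generic spam classifier.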
Scenario Three: Regulatory Intervention
External pressure forces change. Regulators recognise that platforms cannot self-govern effectively and impose requirements for appeals transparency, moderation accuracy thresholds, and penalties for wrongful terminations. YouTube faces potential FTC Act violations regarding AI terminations, with fines up to $53,088 per violation. Compliance costs force platforms to improve systems.
This scenario trades platform autonomy for external accountability. It is slow, politically contingent, and risks creating rigid requirements that cannot adapt to rapidly evolving AI capabilities. But it may be necessary if platforms prove unable or unwilling to self-correct.
Regulatory intervention has precedent. The General Data Protection Regulation (GDPR) forced significant changes in how platforms handle user data. Similar regulations focused on algorithmic transparency and appeals fairness could mandate the changes platforms resist implementing voluntarily. The risk is that poorly designed regulations could ossify systems in ways that prevent beneficial innovation alongside harmful practices.
Scenario Four: Platform Migration
Creators abandon unstable platforms for alternatives offering better reliability. The creator economy fragments across multiple platforms, with YouTube losing its dominant position. Decentralised platforms, niche communities, and direct creator-to-audience relationships replace centralised platform dependency.
This scenario is already beginning. Creators increasingly maintain presence across YouTube, TikTok, Instagram, Patreon, Substack, and independent websites. As platform trust erodes, this diversification accelerates. YouTube remains significant but no longer monopolistic.
The migration would not be sudden or complete. YouTube's network effects, existing audiences, and infrastructure advantages provide substantial lock-in. But at the margins, new creators might choose to build elsewhere first, established creators might reduce investment in YouTube content, and audiences might follow creators to platforms offering better experiences. Death by a thousand cuts, not catastrophic collapse.
While waiting for platforms to fix themselves is unsatisfying, creators facing this reality have immediate options.
Document Everything
Screenshot account creation processes, save copies of content before upload, document appeal submissions and responses, and preserve metadata. When systems fail and appeals are denied, documentation provides evidence for escalation or public accountability. In the current environment, the ability to demonstrate exactly what you did, when you did it, and how the platform responded is essential both for potential legal recourse and for public pressure campaigns.
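Even a tiny script helps with this habit: hashing a file before upload gives you a fingerprint you can later cite as proof of exactly what you submitted and when. A minimal sketch, with placeholder file contents and note (in practice you would read the real video file and store the record somewhere the platform cannot touch):

```python
import hashlib
import json
from datetime import datetime, timezone

def evidence_record(content: bytes, note: str) -> dict:
    """Build a timestamped record of content you are about to upload."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "note": note,
    }

# Placeholder bytes standing in for the actual upload.
video = b"raw bytes of the file you are about to upload"
record = evidence_record(video, "first upload attempt, content made in AI Studio")
print(json.dumps(record, indent=2))
```

If an appeal is later denied, the hash plus the saved original file demonstrates that what you uploaded is the same harmless content you documented at the time.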
Diversify Platforms
Do not build solely on YouTube. Establish presence on multiple platforms, maintain an email list, consider independent hosting, and develop direct relationships with audiences that do not depend on platform intermediation. This is not just about backup plans. It is about creating multiple paths to reach audiences so that no single platform's dysfunction can completely destroy your ability to communicate and create.
Understand the Rules
YouTube's disclosure requirements for AI content are specific. Review the policies, use the disclosure tools proactively, and document compliance. Even if moderation systems fail, having evidence of good-faith compliance strengthens appeals. The policies are available in YouTube's Creator Academy and Help Centre. Read them carefully, implement them consistently, and keep records proving you did so.
Join Creator Communities
When individual creators face termination, they are isolated and powerless. Creator communities can collectively document patterns, amplify issues, and pressure platforms for accountability. The November 2025 termination wave gained attention because multiple creators publicly shared their experiences simultaneously. Collective action creates visibility that individual complaints cannot achieve.
Consider Legal Options
When platforms make provably false claims about their processes or wrongfully terminate accounts, legal recourse may exist. This is expensive and slow, but class action lawsuits or regulatory complaints can force change when individual appeals cannot. Several law firms have begun specialising in creator rights and platform accountability. While litigation should not be the first resort, knowing it exists as an option can be valuable.
Beyond the immediate technical failures and policy contradictions, this situation raises a question about the digital infrastructure we have built: are platforms like YouTube, which billions depend upon daily for communication, education, entertainment, and commerce, actually stable enough for that dependence?
We tend to treat major platforms as permanent features of the digital landscape, as reliable as electricity or running water. But the repeated waves of mass terminations, the automation failures, the gap between stated policy and operational reality, and the inability of one part of Google's ecosystem to recognise another part's legitimate outputs suggest this confidence is misplaced.
The creator terminated twice for uploading Google-generated content is not an edge case. They represent the normal user trying to do exactly what Google's marketing encourages: experiment with AI tools, create content, and engage with the platform. If normal use triggers termination, the system is not working.
This matters beyond individual inconvenience. The creator economy represents hundreds of billions of dollars in economic activity and provides livelihoods for millions of people. Educational content on YouTube reaches billions of students. Cultural conversations happen on these platforms. When the infrastructure is this fragile, all of it is at risk.
The paradox is that Google possesses the technical capability to fix this. The company that built AlphaGo, developed transformer architectures that revolutionised natural language processing, and created the infrastructure serving billions of searches daily can certainly ensure its AI tools are recognised by its video platform. The failure is not technical capability but organisational priority.
The creator whose verified account was terminated twice will likely not try a third time. The rational response to repeated automated rejection is to go elsewhere, to build on more stable foundations, to invest time and creativity where they might actually yield results.
This is how platform dominance erodes: not through dramatic competitive defeats but through thousands of individual creators making rational decisions to reduce their dependence. Each termination, each denied appeal, each gap between promise and reality drives more creators toward alternatives.
Google's AI Studio and YouTube should be natural complements, two parts of an integrated creative ecosystem. Instead, they are adversaries, with one producing what the other punishes. Until this contradiction is resolved, creators face an impossible choice: trust the platform and risk termination, or abandon the ecosystem entirely.
The evidence suggests the latter is becoming the rational choice. When the platform cannot distinguish between its own sanctioned tools and malicious bots, when appeals are automated despite claims of human review, when accounts are terminated twice for the same harmless content, trust becomes unsustainable.
The technology exists to fix this. The question is whether Google will prioritise coherence over the status quo, whether it will recognise that platform stability is not a luxury but a prerequisite for the creator economy it claims to support.
Until then, the paradox persists: Google's left hand creating tools for human creativity, Google's right hand terminating humans for using them. The ouroboros consuming itself, wondering why the creators are walking away.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary: * Two things that I'm particularly happy about today: 1.) ordered the wife's Christmas and birthday gifts and cards. (Her birthday is Dec. 31, so I usually order her stuff at the same time.) and 2.) found an early men's basketball game to follow, so I'll be able to finish the game and still ease my way into a sensible senior bedtime.
Prayers, etc.: * My daily prayers
Health Metrics: * bw= 223.66 lbs. * bp= 154/89 (63)
Exercise: * kegel pelvic floor exercise, half squats, calf raises, wall push-ups
Diet: * 06:00 – 1 banana, 1 blueberry muffin, toast & butter * 10:00 – baked salmon w. mushroom sauce * 16:30 – 1 more blueberry muffin
Activities, Chores, etc.: * 04:45 – listen to local news talk radio * 05:50 – bank accounts activity monitored * 06:00 – read, pray, follow news reports from various sources, surf the socials * 16:00 – listening to The Jack Riccardi Show * 17:00 – listening to The Joe Pags Show * 18:00 – tuned into the radio call of an NCAA men's basketball game, Toledo Rockets vs. Michigan St. Spartans, late in the 1st half. * 19:30 – MSU wins 92 to 69. Time now to turn off the radio, put on some relaxing music, and quietly read before bed.
Chess: * 11:15 â moved in all pending CC games
from
The happy place
I've met a lot of interesting people during my travels
I've even been to England, I saw some tourists there, whereas I was there for business
Once in Germany I even drank beer from a giant glass shoe, maybe one litre, just like Cinderella.
I've been to America too, but I don't recommend.
I didn't know what "smog" was before. Still not sure.
Actually, I don't like travelling unless it's to Norway.
But on all of these places, shines the same moon
And in Canada once, I ate poutine
That was remarkable.
And there was a giant waterslide, which I saw in a mall.
It was winter there. In Canada (although in the mall it's all the same)
It doesnât matter
I have been a few times to Paris
The French are role models
There were poor beggars eating cucumbers from glass jars in the park outside the Eiffel Tower
In Italy there were bad memories of a broken family, my father got blisters on his feet.
Long ago.
I like Greece more than Italy
I felt like Theseus once when we were at Rhodos, but the minotaur was long gone.
It's the same sky there
There are corpses in the Mediterranean Sea
In Finland they frequently drink beer for lunch, or I did anyway
from
fromjunia
I found myself welling up with tears before my Buddha statue.
"How are you here? How is the Buddha-nature here? I'm not doubting that it is. I'm asking how? Because this is awful."
As I've talked about before, I've been spending the last several months in very dark moods. I'm definitely better than I used to be, but it's still been about four months since I left the upper end of depression for longer than a single day. This has given me time to see what the dark moods have to teach me, because they certainly aren't going anywhere with any haste. Why fight it when it can deepen my understanding of what it means to be human?
This has landed me in a kind of pessimistic liberal theism. Of sorts. Like many Westerners with multiple religious identities including Buddhism, it gets a little murky in places. Nevertheless, a picture has begun to form, drawing from four sources: Søren Kierkegaard, Alfred Whitehead, Walter Benjamin, and Mahayana Buddhism (inflected by Zen and Arthur Schopenhauer).
Kierkegaard felt that existing as a human was a pretty rough deal. He was a very sad boy and felt overwhelming depression and anxiety his whole life. He even broke off his engagement because he felt that she didn't deserve to deal with his moods (although, maybe that was ultimately the correct call, as his fiancée was 14; a right answer with the wrong equation). But he spent his time engaging with these moods in a deep way, and came away with a pretty remarkable account of the role of anxiety and despair.
For Kierkegaard, anxiety is a response to the freedom that humans have. We can make meaningful choices that shape our lives. And we don't have assurance that it'll work out in the end. That's scary. I sometimes present anxiety as the general knowledge that even if you do everything right, things can still turn out wrong, which I think Kierkegaard would empathize with. However you slice it, a critical part of Kierkegaard's position is that anxiety isn't pathological per se, but rather comes from a confrontation with our base reality as humans. It can be a sign of health, or of moving in a healthy direction.
Something similar happens with despair. Per Kierkegaard, most people are in a state of despair, even if they don't realize it. That's because being a human is impossible. We are stuck between who we are (our history, our social circumstances, our habits) and who we are becoming, and we are always becoming and often yearning to become something else. That's not a stable arrangement. It's so easy, natural even, to cling to our current state and despair that we are forced to change, or to embrace change and despair that we cannot change certain things about ourselves. According to Kierkegaard, all humans at some point are one or the other, perhaps even shifting between the two. But without an existential anchor to stabilize this process between being and becoming, we are stuck in despair. Kierkegaard thought this existential anchor was the Christian God. As someone who is not a Christian, at least not in any way that would be widely recognizable as such to Christians, I'm inclined to look elsewhere.
Whitehead had an interesting take on reality and God. He, like Kierkegaard, thought that we are both being and becoming. He thought all things were being and becoming, actually. That includes God.
Whitehead influenced a lot of liberal theologians with his process thought. He articulated a God that was compassionate (literally, suffering with others, experiencing all that happens directly) and drawing reality to a higher good. He saw a God that held a memory of the universe, grounding the past, experienced the present with all of creation, and non-coercively drew reality towards a more intense future, a "harmony of opposites" where conflicts are not resolved per se but do come to exist in a way that drives things towards aesthetic greatness.
This is an optimistic theology. Whitehead was inclined to think that things get better because the structure of the universe was tilted towards improvement, with God pulling it non-coercively towards an aesthetic greater good.
But if Kierkegaard is right, it's unclear to me why God would not feel despair either. God cannot fix the past. Maybe God hopes to integrate a disastrous past into a greater harmony of opposites and in that way redeem it. But God can't do that reliably. Not without the cooperation of the rest of the universe, which is shot through with freedom. There is no promise that the past will ever be redeemed, and it certainly seems that in the arc of human history there is much left to be redeemed, and more happening all the time. From the human angle, there are many things that are irredeemable, generating despair. If God experiences our despair about this as well, then it would seem that God too is unable to resolve the tension between the poles of being and becoming.
Walter Benjamin wrote about this human perspective in a divine register. His story about the Angel of History has been one of my touchstones for the last decade, and I can only see Whitehead's God in it.
There is a painting by Klee called Angelus Novus. An angel is depicted there who looks as though he were about to distance himself from something which he is staring at. His eyes are opened wide, his mouth stands open and his wings are outstretched. The Angel of History must look just so. His face is turned towards the past. Where we see the appearance of a chain of events, he sees one single catastrophe, which unceasingly piles rubble on top of rubble and hurls it before his feet. He would like to pause for a moment so fair, to awaken the dead and to piece together what has been smashed. But a storm is blowing from Paradise, it has caught itself up in his wings and is so strong that the Angel can no longer close them. The storm drives him irresistibly into the future, to which his back is turned, while the rubble-heap before him grows sky-high. That which we call progress, is this storm.
(Courtesy of marxists.org)
God is the repository of history, the eternal memory, and presently experiencing the suffering of all of creation. And per Benjamin, we are experiencing suffering in a particularly salient way: We have perpetually experienced eternal defeat in the form of being forgotten. Whitehead might feel that God's eternal memory alleviates this, but we do not experience it. God experiences our despair, and the despair itself taints God's memory, and God wishes it would not, that it be redeemed into a harmony of opposites, but is forever limited by experiencing the facts of reality, which are that we are trapped.
To articulate what is past does not mean to recognize "how it really was." It means to take control of a memory, as it flashes in a moment of danger. For historical materialism it is a question of holding fast to a picture of the past, just as if it had unexpectedly thrust itself, in a moment of danger, on the historical subject. The danger threatens the stock of tradition as much as its recipients. For both it is one and the same: handing itself over as the tool of the ruling classes. In every epoch, the attempt must be made to deliver tradition anew from the conformism which is on the point of overwhelming it. For the Messiah arrives not merely as the Redeemer; he also arrives as the vanquisher of the Anti-Christ. The only writer of history with the gift of setting alight the sparks of hope in the past, is the one who is convinced of this: that not even the dead will be safe from the enemy, if he is victorious. And this enemy has not ceased to be victorious.
…
The tradition of the oppressed teaches us that the "emergency situation" in which we live is the rule.
Indeed, Benjamin entirely rejects our ability to access an eternal and complete memory, and for good reason. That is simply not how we experience time. We experience time with salience, with some things more citable than others. We experience time emotionally and dripping with value. We can only imagine that God is the same way.
It is well-known that the Jews were forbidden to look into the future. The Torah and the prayers instructed them, by contrast, in remembrance. This disenchanted those who fell prey to the future, who sought advice from the soothsayers. For that reason the future did not, however, turn into a homogenous and empty time for the Jews. For in it every second was the narrow gate, through which the Messiah could enter.
Buddhism has a weird public image. "The end of suffering," it proclaims. Perhaps more clear and honest is the saying "pain is unavoidable but suffering is optional." Even more honest: "You still suffer, but it's okay now." "No end to suffering," as the Heart Sutra teaches. (I understand the context provides nuance; just stay with me here.)
That's the perspective of Zen Buddhism, particularly of the tradition I've had the most engagement with, Ordinary Mind. It's also aligned somewhat with the interpretation of reality that Arthur Schopenhauer walked away with. According to Schopenhauer, reality is fundamentally unsatisfying. The Buddha would probably agree, with all the caveats and nuances and paradoxes the Buddha always offers. But let's stay with what we can learn here. Reality is fundamentally unsatisfying, but we can't escape reality. It's a pretty bleak situation.
What can we do? Schopenhauer said that we should simply withdraw and engage with reality as little as possible. I'm not sure that's right. I'd break from Schopenhauer here and follow Ordinary Mind in saying that by coming to reality and letting it teach us, as I have with my dark moods, as Kierkegaard did, it becomes a little more okay. In therapy I've heard this referred to as clean pain and dirty pain. There's the clean pain of reality, and the dirty pain we heap on it. We can at least reduce our suffering by wiping away the dirty pain and leaving ourselves with the clean pain, by seeing reality as it is, without the delusions we tend to experience.
This is an awful, tragic view of reality. It's a tragic view of God, because it means that God is always suffering, and perhaps in perpetually intensifying ways, depending on whether you try to save the progression of the harmony of opposites and how you understand "aesthetic" here. It means that we can try to stabilize ourselves and end our despair by anchoring ourselves to God, but if we truly do that then we'd be introduced to the despair of others through God's universal compassion. Mahayana teaches that we're here to be compassionate to the despair of others and to alleviate it. Perhaps a pessimistic variant of Mahayana Buddhism would say that we can never fully escape suffering, but we can reduce it by caring about others.
Hope is usually understood as forward-looking. It says that in the future, things will be better, or that there's something in the future to hold on to. I'm not a fan of the latter because it seems like denying parts of reality, and I'm not an optimist about the future, so I don't like the former either.
But if reality isn't doing any work for us, if the universe is fundamentally orthogonal to our happiness, if not hostile to it, then, if we give a damn, we had better roll up our sleeves and build it ourselves. It means that there is an imperative to reduce suffering. It means that we find hope not in the future, but right now, in the actions we take to make suffering a little less. It saves us from the idolatry of the future, as the pessimist philosopher Emil Cioran says, and frees us to find hope in the reality in front of us, in compassion and care.
I had to pass through pretty hopeless times to find a seed of hope again. I might never have if I hadn't let myself sit and engage with my dark moods. I tried to return to the optimism so popular in contemporary culture, and so prevalent in liberal theology, but I couldn't experience it as anything other than a lie.
I found hope again. Not in the creative advance of Whitehead, or the existential anchor of Kierkegaard, or the belief in the fundamental goodness of people so common in Unitarian Universalism (one of my faith traditions). I found hope in pessimism. I found compassion in universal suffering. I found a way forward with my faith by understanding my faith as flexible enough to accommodate the suffering that humans experience. Instead of seeing my depression as purely pathological, I let myself understand it as a thing that happens to humans, and as I believe that all things that happen to humans are able to be analyzed under a religious lens, I found religion in depression.
I doubt I'm alone. Like I said, depression happens to people, including religious people. I hope that I can share my pessimistic faith with others and save them from the oppression of mandatory optimism. For now, I return to the compassion of the Buddha, and find it makes my suffering a little more okay.
from
Build stuff; Break stuff; Have fun!
I'm falling a bit behind. Started another freelance project, so things are a bit slower now.
Most of the MVP is done, and I'm starting to polish some things. For Day 15, there was a small refactor of the Add/Edit forms to make them more robust and improve the UX.
One of the notable things here is the introduction of react-hook-form and zod, which make the most sense in combination with Supabase. In addition, I moved all form fields into shared components.
All the changes will give me a good feeling that this app can grow after the MVP. :)
73 of #100DaysToOffload
#log #AdventOfProgress
Thoughts?
from Tuesdays in Autumn
Early December must be one of the least propitious times of the year for reading. There always seems to be far too much else to do. Only this evening have I reached the end of a short novel started a few weeks ago: Audition by Katie Kitamura. A correspondent's recommendation had made me curious to read it.
It's a story that struck me as very satisfyingly ambiguous. The narration had a clean, polished surface, which nevertheless gave the impression of considerable depth yawning beneath. Just about every small expectation that came to mind about where it might go next was soon afterwards neatly confounded: seldom did I feel sure of where I stood. It's a book that says a good deal and suggests a great deal more within its relatively narrow span of 197 pages.
The week has largely been taken up with work and with Christmas shopping. I left my annual campaign of on-line festive consumerism inadvisably late this year, and catching up has felt onerous. There have been lengthy sessions switching between browser tabs in search of appropriate items. There has been a barrage of notifications about orders and deliveries. There have been parcels to collect; parcels brought to my door; parcels left outside my door; parcels left outside neighbours' doors. Items damaged in transit or bought in error or subject to buyer's remorse have had to be returned. There have been second thoughts and changes of mind, and items at first intended for one recipient now earmarked for another. Even then, there is at least one item I know I will regret giving. There has been a rapid depletion of funds; a faint nausea about the excess of it all. I have to draw a line under it all now even if some dissatisfaction remains. Only gift-wrapping, distribution and presentation are left to manage.
The last time I'd tried blue cheese before my belated acquisition this year of a taste for the stuff, it had been in the shape of some Stilton four or five Christmases ago. That experience had not been a happy one, and the recollection of it had pushed Stilton some way back in the queue of the cheeses I've wanted to try since my 'conversion'. Fortunately, having now seen the blue light, I have come back to it, and I'm finding the wedge I bought at Tesco on Saturday very much to my liking.
Wine of the week: a somewhat costly but very delicious Manzanilla Pasada, which is "delightfully aromatic with reminiscences of green apples and the characteristic hint of sea breeze."
from
🐦‍⬛ · doooong - blog
#doooongDaily
from Micro Dispatch 📡
This one's a good one! With good headphones, it really sounds like there's a downpour outside. Good for focusing at work too, which is what I'm using it for right now.
Link: Stormy Weather
#Status #FocusMusic
I used to like Medium articles. For anything related to writing, marketing, or business, Medium was my go-to. However, too many paywalled articles, and being forced to register just to read the free ones, turned me away from the site.
I do love Substack, and there are many informative and thoughtful creators I follow. However, I don't think I'm intelligent enough, nor do I have the time, to write such articles. Who knows, maybe in the future when I'm not so busy.
The main reason I chose Write.as for my primary blog is that it lets me focus solely on writing without worrying about click stats, email marketing, or selling a product or service. All my posts are free and they are not monetized. So enjoy, take what you can learn, and spread the word. And I will try to do the same with yours.
#writing #medium #substack