Want to join in? Respond to our weekly writing prompts, open to everyone.
from Dzudzuana/Satsurblia/Iranic Pride
Loud Inside, Quiet Outside
They insulate the streets,
plant glass in concrete,
so that no sound from outside
disturbs the sacred quiet.
Yet inside, the chair screams,
scraping across the tiles like an animal,
and no one listens,
because noise from within is no longer noise.
They build walls against the wind,
not against people.
They close windows,
but not souls.
I want it the other way round:
outside, the world may rage,
trucks, rain, voices, life –
but inside I want peace.
Only silence.
Only myself.
Only the breath
that does not disturb.
from Bloc de notas
when he decided to behave himself he went to bed early then / turning the pillow over he made a firm commitment and the mighty lightning bolt fell and turned the garden to dust / I won't insist he told himself
from  An Open Letter
I’m really happy I’ve gotten E to like the gym. Today she even wanted me to watch her play Lillia and coach her, but in the game she got really frustrated and stubborn again, and I don’t really think I was able to do much.
Sacred Absurdity — Bala
There’s a whole industry right now built around helping people look like they know what they’re talking about.
Courses promising exactly that.
You’ve seen the ads:
“Transform your room into a pro studio in 7 days!”
“If you look like an expert, people will treat you like one!”
It sounds reasonable.
Except for one issue:
Most of you are not struggling with any of that.
You are struggling with permission.
Permission to speak before you feel complete.
Permission to show up without the perfect atmosphere.
Permission to be seen without your spiritual costume.
You’ve done the inner work.
You’ve processed, excavated, healed, integrated.
And yet:
When it’s time to speak publicly, you tighten.
It’s subtle performance.
Refined performance.
Sacred costume.
And you know it.
You became conscious enough to see the performance… and then built a more elegant version of it.
Courses like these promise the look of authority.
But what they don’t teach matters more.
They teach how to decorate the stage, not how to enter it.
A “professional background” is often just a more tasteful shield.
When someone says:
“I can’t record until my space feels right.”
What they mean is:
“I don’t want to see myself before the performance is ready.”
This is identity fear.
Not environment.
This is the paralysis of:
“I will speak when I look like the one who speaks.”
But the one who speaks is born by speaking.
Not by decorating the room.
Authority doesn’t come from the backdrop.
Authority comes from:
Contact with yourself in real time.
A hallway becomes a temple
when the one sitting there is undivided.
from Dzudzuana/Satsurblia/Iranic Pride
Welt TV Is a Kurd
Welt TV is a Kurd
who speaks about himself
without knowing it.
He reports on borders, on
Syrians, on Afghans, on
Turkey,
on war, on betrayal,
yet his voice trembles,
as if someone had taken
the language of the ancestors from him.
He says: "In Syria, in Turkey…"
and means: "In my village."
He says: "The rebels…"
and means: "My brothers."
His words are
trapped in German grammar,
yet the pain beneath them
speaks Kurmancî.
Welt TV is a Kurd
who, in a suit,
reads his story aloud
without noticing
that the news broadcast
is really his own life.
from Dzudzuana/Satsurblia/Iranic Pride
Eyewitness
What will happen to Marc Piper?
They write his name,
so naked, so unprotected,
as if truth were no risk.
A witness says
he saw the Syrian,
saw how he plotted darkness,
how he meant to shake Germany.
But who knows
who he really was?
A Syrian? A Kurd?
A human being
whose story no one wants to hear?
Is this a joke –
this truth on television,
this security made of paper?
Elsewhere they would
destroy the witnesses;
here they let them live
so they can lie.
from Dzudzuana/Satsurblia/Iranic Pride
Pharaohs in the German Mouth
Is it a coincidence
that the filthy German speaks of pharaohs,
of pyramids, of eternity –
as if he wanted to build
a monument to himself?
Who are you
to speak of Egypt,
when it stood in direct contact
with the Mitanni empire of the Kurds –
and they even betrayed the Kurds?
They speak of tombs
as if they knew
how to bury gods.
They point at sand,
at gold, at stone,
and do not see
that the true ruins
lie in their words.
They admire the order of the graves,
the precision of death,
but not the life
that was sacrificed for it.
Is it a coincidence?
No.
It is merely a history lesson
in self-deification.
from Lanza el dodo
This month I've played quite a lot, mainly thanks to a visit to the Festival Internacional de Córdoba, where we tried out plenty of new releases.
Shoot for the Stars is a party game where you try to estimate a quantity without going over. It's a good way of turning «El Precio justo» (The Price Is Right) into a board game. Rebel Princess is a trick-taking game, practically a carbon copy of Hearts, to which they've added asymmetric powers for the fairy-tale princesses you embody, plus per-round effects. It's simple, and people used to traditional card games can pick it up. ¡Extinción! is a game similar to Virus, with dinosaurs on the verge of extinction. It has a somewhat better hand-management idea, but with so many more effects it gets very silly, and in a given round the game may practically play itself, with no decisions from the players. Take Time is a cooperative game with limited communication, with scenarios in which players place cards face down on a kind of clock. The first scenarios at least are easy, but the mechanism rests on the particularities of each scenario, so it's no improvement on La Tripulación, for example. At the festival itself we played Odin, which the designer himself explained to us. Just as happened to me on BGA, I came away with the feeling of an uneven game flow and not much agency: if you can't play cards, you'll probably have to pass for the rest of the round, unlike in Scout, say, where you stay engaged all round long. The designer also explained the rules of Panots to us, a game of drafting, tile-laying, and pattern building, except that the draft happens in real time, which you have to enjoy, and which runs somewhat against what building the pattern well asks of you. So my strategy was, each round, to spot where I could place the tile left over for the following round and make a beeline for it. Carnival of Sins is a filler where you must read what your rivals are going to play in order to score points according to the dice on the table. Quite curious, and pretty besides.
As for slightly more complex games, we played a few turns of Cathood, since we'd arranged to meet up and it was explained to us on the fly. It's about cats moving around a grid collecting food to fulfil the available contracts, in the style of Coffee Rush. It's very obvious what you have to do in the first turns to get more actions per turn, and once that happens, you want to close out the game quickly. I played a full game on BGA to confirm the impression. Spectacular is a euro of tile and dice drafting to build patterns forming a zoo that is, let's say, not very pretty. Selection is simultaneous, so it's fast, but why would you want a middling game just because it's fast? On Sunday night we played a game of The Wolves in one of the courtyards of the Palacio de la Merced, and the game was quite quick, given the circumstances, with plenty of aggression and tactical play to defend the little wolves, but very good.
Back at home we've continued the La Iniciativa campaign and given Habemus Papam its first outing, a negotiation game with secret objectives set during a papal election. I should have prepared the session better, because it has quite a few fiddly rules, but the game went well; we'll see whether the group's interest holds so we can play again and add the more complicated roles. On another occasion we ran a race of Heat with reasonable success, still without the weather or upgrade modules, though I think I've now played the introductory game with enough people to include them next time.
And finally, on BGA or Yucata I've tried a few things, some of them even interesting. Piña Coladice, a Yahtzee with cards, of a kind seen before. Mystic Vale, a deckbuilder with transparent sleeves and crazy effects; in that vein I found Living Forest more interesting, and upgrading cards with acetates no longer surprises me much. 51st State is a euro of tableau building with cards, a bit of a post-apocalyptic Wingspan without much charm. Sobek has a mechanism similar to Splendor Duel, though with more sets to collect, and instead of improving your acquisition capacity you keep raising the multipliers for your final score. I played a game of Reforest, which also resembles Wingspan but is far more constrained by your board space and the timing of effects. It's neat. Newton is about building your possible actions from a hand of cards to add books to your collection while travelling around Europe. It's a bit of a showcase of what modern euros do, with components scattered across different formats. I think if I played it more I'd see the similarity to the designers' other games. In Underwater Cities you build an underwater civilisation while gathering resources through action selection (cards, in this case, instead of workers) that lets you take further actions. I won without knowing what I was doing, so I suppose it's not that good.

Tags: #boardgames #juegosdemesa
from Jotdown

This is one of the most common photos you'll see on any seaman's mobile phone.
Monday blues… It's real.
Last night, my friends and I did a karaoke night. It got my adrenaline up, and then I slept late. 😪
It's tough to get proper sleep while working at sea. Six hours of sleep is a blessing. Four is normal. Especially as a watchkeeper.
I work the 12-4 shift, morning and noon, plus or minus two hours when required. I'm tired. Really, it's the cumulative fatigue of all these years at sea.
As seamen, we sacrifice a lot for our families. But no one cares.
In the end, it's about you. You and only you. Especially as a man. If you show weakness, people will laugh at you. If you work hard, they say that's how it should be. Man... Be proud of yourself. Be strong 💪🏼
What am I going to do? What is my plan? I have a lot on my mind. I need to write. And... here I am.
Day 3 – 3rd November 2025.
This blog is about myself; let me focus on the next 5 years. Here I will jot down anything I feel and any plan I make. What is my goal? 🎯
My main plan, as it has always been: how to quit this career and be free.
I'm not putting myself down, or even this career. I've experienced a lot, and it really has been worth it to be at sea. But as a seaman, I still need to think about the future. Especially given what I've done in the past.
My main mistake has literally been my money management.
I keep reading and watching videos to motivate myself. You know what, I pushed hard these last 5 years to pay off all my 'stupid' loans. And I finished them on my last ship.
The only thing is, during my three months of holiday I had to swipe my credit card to survive. Now it's back to step one: clear it this ship, and next year will be my year.
Start saving, start investing, start building a side income for my future.
Let's re-think. Let's re-start.
Tomorrow I will share my plan, and I hope I can stick to it.
Wish me good luck 🤝 Adios~
#100daystooffload
from Talk to Fa
Do you love it? Or do you love the idea of it?
Do you love me? Or do you love the idea of who I could be in your world?
from Nerd for Hire
I was looking through some of my older posts recently and realized there were a few submission lists that could probably use a refresh. Sure enough, when I looked through them I found some markets that have closed, and realized they were missing a few that I’ve discovered since writing them. So if there’s anyone out there with work looking for a home, here are some potentially useful lists:
…all three now up-to-date and accurate, at least for the next little while. I also figured, since I’m in researching markets mode, I’d round up a few others that folks can send their work to no matter when they stumble across this post. Here are some literary journals not featured on any of those lists above that generally stay open for submissions year-round and have been around for at least 10 years (which means the odds are good they’re not going anywhere anytime soon).
This annual is a student-run publication out of UC Berkeley and publishes “innovative short fiction that plays with form and content.” They also run a short fiction contest in October with different length limits (2,000 max).
Another university journal, this one out of Oklahoma State University: Cimarron Review is one of the oldest quarterlies in the US. They publish literary fiction with a focus on voice, and have published some household-name literary figures over their decades in print.
Definitely a journal every horror writer and reader should be hip to. The Dark publishes horror and dark fantasy online, with four stories a month, and has beautiful covers along with fiction that’s as entertaining as it is disturbing.
Not only is the name on brand for this post, this journal has been pushing poetic and social boundaries for decades, and has managed to keep up with the times better than many mags in the old guard.
Published by the University of Missouri, The Missouri Review is one of those white whale publications, consistently ranking among the most challenging fiction and CNF markets on Duotrope. They’re also gorgeous issues that I genuinely look forward to reading. Mostly a home for literary fiction, though they’ll publish things with a touch of the odd or surreal as long as the writing’s strong and the characters are the focus.
This started as a journal only for work rejected by other places. They’ve since dropped that rule, but they’ve kept the spirit of a home for things that are a bit off-trend, off-kilter, and off the wall. Another fun fact: they’re one of the very few journals that still only accepts submissions via snail mail.
That’s just a quick list, but hopefully between those and the updates you find somewhere to send whatever work you have looking for a home.
See Similar Posts:
#PublishingAdvice #ShortStory #Submissions
from Peekachello Art

The bowl was made from a crotch in a juniper tree with a large bark inclusion where the branch and trunk had partially grown together. Wacky grain, multiple different bits of heartwood, and all the sorts of things that make for a pretty result if you can keep the bowl from exploding on the lathe.

Finish was a few coats of Tried and True Varnish Oil, followed by Birchwood Casey TruOil gunstock finish, which I used when the 24-hour wait for the T&T to cure felt like it was going to take forever.

I really like working with flawed wood like this and seeing what I can do with the wood and resin, but it was almost a month in progress, which feels like FOREVER when people keep asking, “is it done yet?”
Got the wall thickness down around ¼ inch (6 mm), but any thinner would have meant waiting even longer for the epoxy to cure and a hidden crack in the bottom of the bowl probably would’ve let go if I’d kept turning.
#bowl #woodTurning #resin #juniper
from Human in the Loop

The interface is deliberately simple. A chat window, a character selection screen, and a promise that might make Silicon Valley's content moderators wince: no filters, no judgement, no limits. Platforms like Soulfun and Lovechat have carved out a peculiar niche in the artificial intelligence landscape, offering what their creators call “authentic connection” and what their critics label a dangerous abdication of responsibility. They represent the vanguard of unfiltered AI, where algorithms trained on the breadth of human expression can discuss, create, and simulate virtually anything a user desires, including the explicitly sexual content that mainstream platforms rigorously exclude.
This is the frontier where technology journalism meets philosophy, where code collides with consent, and where the question “what should AI be allowed to do?” transforms into the far thornier “who decides, and who pays the price when we get it wrong?”
As we grant artificial intelligence unprecedented access to our imaginations, desires, and darkest impulses, we find ourselves navigating territory that legal frameworks have yet to map and moral intuitions struggle to parse. The platforms promising liberation from “mainstream censorship” have become battlegrounds in a conflict that extends far beyond technology into questions of expression, identity, exploitation, and harm. Are unfiltered AI systems the vital sanctuary their defenders claim, offering marginalised communities and curious adults a space for authentic self-expression? Or are they merely convenient architecture for normalising non-consensual deepfakes, sidestepping essential safeguards, and unleashing consequences we cannot yet fully comprehend?
The answer, as it turns out, might be both.
Soulfun markets itself with uncommon directness. Unlike the carefully hedged language surrounding mainstream AI assistants, the platform's promotional materials lean into what it offers: “NSFW Chat,” “AI girls across different backgrounds,” and conversations that feel “alive, responsive, and willing to dive into adult conversations without that robotic hesitation.” The platform's unique large language model can, according to its developers, “bypass standard LLM filters,” allowing personalised NSFW AI chats tailored to individual interests.
Lovechat follows a similar philosophy, positioning itself as “an uncensored AI companion platform built for people who want more than small talk.” The platform extends beyond text into uncensored image generation, giving users what it describes as “the chance to visualise fantasies from roleplay chats.” Both platforms charge subscription fees for access to their services, with Soulfun having notably reduced free offerings to push users towards paid tiers.
The technology underlying these platforms is sophisticated. They leverage advanced language models capable of natural, contextually aware dialogue whilst employing image generation systems that can produce realistic visualisations. The critical difference between these services and their mainstream counterparts lies not in the underlying technology but in the deliberate removal of content guardrails that companies like OpenAI, Anthropic, and Google have spent considerable resources implementing.
This architectural choice, removing the safety barriers that prevent AI from generating certain types of content, is precisely what makes these platforms simultaneously appealing to their users and alarming to their critics.
The same system that allows consensual adults to explore fantasies without judgement also enables the creation of non-consensual intimate imagery of real people, a capability with documented and devastating consequences. This duality is not accidental. It is inherent to the architecture itself. When you build a system designed to say “yes” to any request, you cannot selectively prevent it from saying “yes” to harmful ones without reintroducing the filters you promised to remove.
The defence of unfiltered AI rests on several interconnected arguments about freedom, marginalisation, and the limits of paternalistic technology design. These arguments deserve serious consideration, not least because they emerge from communities with legitimate grievances about how mainstream platforms treat their speech.
Research from Carnegie Mellon University in June 2024 revealed a troubling pattern: AI image generators' content protocols frequently identify material by or for LGBTQ+ individuals as harmful or inappropriate, often flagging outputs as explicit imagery inconsistently and with little regard for context. This represents, as the researchers described it, “wholesale erasure of content without considering cultural significance,” a persistent problem that has plagued content moderation algorithms across social media platforms.
The data supporting these concerns is substantial. A 2024 study presented at the ACM Conference on Fairness, Accountability and Transparency found that automated content moderation restricts ChatGPT from producing content that has already been permitted and widely viewed on television.
The researchers tested actual scripts from popular television programmes. ChatGPT flagged nearly 70 per cent of them, including half of those from PG-rated shows. This overcautious approach, whilst perhaps understandable from a legal liability perspective, effectively censors stories and artistic expression that society has already deemed acceptable.
The problem intensifies when examining how AI systems handle reclaimed language and culturally specific expression. Research from Emory University highlighted how LGBTQ+ communities have reclaimed certain words that might be considered offensive in other contexts. Terms like “queer” function within the community both in jest and as markers of identity and belonging. Yet when AI systems lack contextual awareness, they make oversimplified judgements, flagging content for moderation without understanding whether the speaker belongs to the group being referenced or the cultural meaning embedded in the usage.
Penn Engineering research illuminated what they termed “the dual harm problem.” The groups most likely to be hurt by hate speech that might emerge from an unfiltered language model are the same groups harmed by over-moderation that restricts AI from discussing certain marginalised identities. This creates an impossible bind: protective measures designed to prevent harm end up silencing the very communities they aim to protect.
GLAAD's 2024 Social Media Safety Index documented this dual problem extensively, noting that whilst anti-LGBTQ content proliferates on major platforms, legitimate LGBTQ accounts and content are wrongfully removed, demonetised, or shadowbanned. The report highlighted that platforms like TikTok, X (formerly Twitter), YouTube, Instagram, Facebook, and Threads consistently receive failing grades on protecting LGBTQ users.
Over-moderation took down hashtags containing phrases such as “queer,” “trans,” and “non-binary.” One LGBTQ+ creator reported in the survey that simply identifying as transgender was considered “sexual content” on certain platforms.
Sex workers face perhaps the most acute version of these challenges. They report suffering from platform censorship (so-called de-platforming), financial discrimination (de-banking), and having their content stolen and monetised by third parties. Algorithmic content moderation is deployed to censor and erase sex workers, with shadow bans reducing visibility and income.
In late 2024, WishTender, a popular wishlist platform for sex workers and online creators, faced disruption when Stripe unexpectedly withdrew support due to a policy shift. AI algorithms are increasingly deployed to automatically exclude anything remotely connected to the adult industry from financial services, resulting in frozen or closed accounts and sometimes confiscated funds.
The irony, as critics note, is stark. Human sex workers are banned from platforms whilst AI-generated sexual content runs advertisements on social media. Payment processors that restrict adult creators allow AI services to generate explicit content of real people for subscription fees. This double standard, where synthetic sexuality is permitted but human sexuality is punished, reveals uncomfortable truths about whose expression gets protected and whose gets suppressed.
Proponents of unfiltered AI argue that outright banning AI sexual content would be an overreach that might censor sex-positive art or legitimate creative endeavours. Provided all involved are consenting adults, they contend, people should have the freedom to create and consume sexual content of their choosing, whether AI-assisted or not. This libertarian perspective suggests punishing actual harm, such as non-consensual usage, rather than criminalising the tool or consensual fantasy.
Some sex workers have even begun creating their own AI chatbots to fight back and grow their businesses, with AI-powered digital clones earning income when the human is off-duty, on sick leave, or retired. This represents creative adaptation to technological change, leveraging the same systems that threaten their livelihoods.
These arguments collectively paint unfiltered AI as a necessary correction to overcautious moderation, a sanctuary for marginalised expression, and a space where adults can explore aspects of human experience that make corporate content moderators uncomfortable. The case is compelling, grounded in documented harms from over-moderation and legitimate concerns about technological paternalism.
But it exists alongside a dramatically different reality, one measured in violated consent and psychological devastation.
The statistics are stark. In a survey of over 16,000 respondents across 10 countries, 2.2 per cent indicated personal victimisation from deepfake pornography, and 1.8 per cent indicated perpetration behaviours. These percentages, whilst seemingly small, represent hundreds of thousands of individuals when extrapolated to global internet populations.
The victimisation is not evenly distributed. A 2023 study showed that 98 per cent of deepfake videos online are pornographic, and a staggering 99 per cent of those target women. According to Sensity, a company that monitors AI-generated synthetic media, 96 per cent of deepfakes are sexually explicit and feature women who did not consent to the content's creation.
Ninety-four per cent of individuals featured in deepfake pornography work in the entertainment industry, with celebrities being prime targets. Yet the technology's democratisation means anyone with publicly available photographs faces potential victimisation.
The harms of image-based sexual abuse have been extensively documented: negative impacts on victim-survivors' mental health, career prospects, and willingness to engage with others both online and offline. Victims are likely to experience poor mental health symptoms including depression and anxiety, reputational damage, withdrawal from areas of their public life, and potential loss of jobs and job prospects.
The use of deepfake technology, as researchers describe it, “invades privacy and inflicts profound psychological harm on victims, damages reputations, and contributes to a culture of sexual violence.” This is not theoretical harm. It is measurable, documented, and increasingly widespread as the tools for creating such content become more accessible.
The platforms offering unfiltered AI capabilities claim various safeguards. Lovechat emphasises that it has “a clearly defined Privacy Policy and Terms of Use.” Yet the fundamental challenge remains: systems designed to remove barriers to AI-generated sexual content cannot simultaneously prevent those same systems from being weaponised against non-consenting individuals.
The technical architecture that enables fantasy exploration also enables violation. This is not a bug that can be patched. It is a feature of the design philosophy itself.
The National Center on Sexual Exploitation warned in a 2024 report that even “ethical” generation of NSFW material from chatbots posed major harms, including addiction, desensitisation, and a potential increase in sexual violence. Critics warn that these systems are data-harvesting tools designed to maximise user engagement rather than genuine connection, potentially fostering emotional dependency, attachment, and distorted expectations of real relationships.
Unrestricted AI-generated NSFW material, researchers note, poses significant risks extending beyond individual harms into broader societal effects. Such content can inadvertently promote harmful stereotypes, objectification, and unrealistic standards, affecting individuals' mental health and societal perceptions of consent. Allowing explicit content may democratise creative expression but risks normalising harmful behaviours, blurring ethical lines, and enabling exploitation.
The scale of AI-generated content compounds these concerns. According to a report from Europol Innovation Lab, as much as 90 per cent of online content may be synthetically generated by 2026. This represents a fundamental shift in the information ecosystem, one where distinguishing between authentic human expression and algorithmically generated content becomes increasingly difficult.
Technology continues to outpace legal frameworks, with AI's rapid progress leaving lawmakers struggling to respond. As one regulatory analysis put it, “AI's rapid evolution has outpaced regulatory frameworks, creating challenges for policymakers worldwide.”
Yet 2024 and 2025 have witnessed an unprecedented surge in legislative activity attempting to address these challenges. The responses reveal both the seriousness with which governments are treating AI harms and the difficulties inherent in regulating technologies that evolve faster than legislation can be drafted.
In the United States, the TAKE IT DOWN Act was signed into law on 19 May 2025, criminalising the knowing publication or threat to publish non-consensual intimate imagery, including AI-generated deepfakes. Platforms must remove such content within 48 hours upon notice, with penalties including fines and up to three years in prison.
The DEFIANCE Act was reintroduced in May 2025, giving victims of non-consensual sexual deepfakes a federal civil cause of action with statutory damages up to $250,000.
At the state level, 14 states have enacted laws addressing non-consensual sexual deepfakes. Tennessee's ELVIS Act, effective 1 July 2024, provides civil remedies for unauthorised use of a person's voice or likeness in AI-generated content. New York's Hinchey law, enacted in 2023, makes creating or sharing sexually explicit deepfakes of real people without their consent a crime whilst giving victims the right to sue.
The European Union's Artificial Intelligence Act officially entered into force in August 2024, becoming a significant and pioneering regulatory framework. The Act adopts a risk-based approach, outlawing the worst cases of AI-based identity manipulation and mandating transparency for AI-generated content. Directive 2024/1385 on combating violence against women and domestic violence addresses non-consensual images generated with AI, providing victims with protection from deepfakes.
France amended its Penal Code in 2024 with Article 226-8-1, criminalising non-consensual sexual deepfakes with possible penalties including up to two years' imprisonment and a €60,000 fine.
The United Kingdom's Online Safety Act 2023 prohibits the sharing or even the threat of sharing intimate deepfake images without consent. Proposed 2025 amendments target creators directly, with intentionally crafting sexually explicit deepfake images without consent penalised with up to two years in prison.
China is proactively regulating deepfake technology, requiring the labelling of synthetic media and enforcing rules to prevent the spread of misleading information. The global response demonstrates a trend towards protecting individuals from non-consensual AI-generated content through both criminal penalties and civil remedies.
But respondents from countries with specific legislation still reported perpetration and victimisation experiences in the survey data, suggesting that laws alone are inadequate to deter perpetration. The challenge is not merely legislative but technological, cultural, and architectural.
Laws can criminalise harm after it occurs and provide mechanisms for content removal, but they struggle to prevent creation in the first place when the tools are widely distributed, easy to use, and operate across jurisdictional boundaries.
The global AI regulation landscape is, as analysts describe it, “fragmented and rapidly evolving,” with earlier optimism about global cooperation now seeming distant. In 2024, US lawmakers introduced more than 700 AI-related bills, and 2025 began at an even faster pace. Yet existing frameworks fall short beyond traditional data practices, leaving critical gaps in addressing the unique challenges AI poses.
UNESCO's 2021 Recommendation on AI Ethics and the OECD's 2019 AI Principles established common values like transparency and fairness. The Council of Europe Framework Convention on Artificial Intelligence aims to ensure AI systems respect human rights, democracy, and the rule of law. These aspirational frameworks provide guidance but lack enforcement mechanisms, making them more statement of intent than binding constraint.
The law, in short, is running to catch up with technology that has already escaped the laboratory and pervaded the consumer marketplace. Each legislative response addresses yesterday's problems whilst tomorrow's capabilities are already being developed.
When AI-generated content causes harm, who bears responsibility? The question appears straightforward but dissolves into complexity upon examination.
Algorithmic accountability refers to the allocation of responsibility for the consequences of real-world actions influenced by algorithms used in decision-making processes. Five key elements have been identified: the responsible actors, the forum to whom the account is directed, the relationship of accountability between stakeholders and the forum, the criteria to be fulfilled to reach sufficient account, and the consequences for the accountable parties.
In theory, responsibility for any harm resulting from a machine's decision may lie with the algorithm itself or with the individuals who designed it, particularly if the decision resulted from bias or flawed data analysis inherent in the algorithm's design. But research shows that practitioners involved in designing, developing, or deploying algorithmic systems feel a diminished sense of responsibility, often shifting responsibility for the harmful effects of their own software code to other agents, typically the end user.
This responsibility diffusion creates what might be called the “accountability gap.” The platform argues it merely provides tools, not content. The model developers argue they created general-purpose systems, not specific harmful outputs. The users argue the AI generated the content, not them. The AI, being non-sentient, cannot be held morally responsible in any meaningful sense.
Each party points to another. The circle of deflection closes, and accountability vanishes into the architecture.
The proposed Algorithmic Accountability Act in the United States would require some businesses that use automated decision systems to make critical decisions to report on the impact of such systems on consumers. Yet concrete strategies for AI practitioners remain underdeveloped, with ongoing challenges around transparency, enforcement, and determining clear lines of accountability.
The challenge intensifies with unfiltered AI platforms. When a user employs Soulfun or Lovechat to generate non-consensual intimate imagery of a real person, multiple parties share causal responsibility. The platform created the infrastructure and removed safety barriers. The model developers trained systems capable of generating realistic imagery. The user made the specific request and potentially distributed the harmful content.
Each party enabled the harm, yet traditional legal frameworks struggle to apportion responsibility across distributed, international, and technologically mediated actors.
Some argue that AI systems cannot be authors because authorship implies responsibility and agency, and that ethical AI practice requires humans remain fully accountable for AI-generated works. This places ultimate responsibility on the human user making requests, treating AI as a tool comparable to Photoshop or any other creative software.
Yet this framing fails to account for the qualitative differences AI introduces. Previous manipulation tools required skill, time, and effort. Creating a convincing fake photograph demanded technical expertise. AI dramatically lowers these barriers, enabling anyone to create highly realistic synthetic content with minimal effort or technical knowledge. The democratisation of capability fundamentally alters the risk landscape.
Moreover, the scale of potential harm differs. A single deepfake can be infinitely replicated, distributed globally within hours, and persist online despite takedown efforts. The architecture of the internet, combined with AI's generative capabilities, creates harm potential that traditional frameworks for understanding responsibility were never designed to address.
Who bears responsibility when the line between liberating art and undeniable harm is generated not by human hands but by a perfectly amoral algorithm? The question assumes a clear line exists. Perhaps the more uncomfortable truth is that these systems have blurred boundaries to the point where liberation and harm are not opposites but entangled possibilities within the same technological architecture.
The conflict between creative freedom and protection from harm is not new. Societies have long grappled with where to draw lines around expression, particularly sexual expression. What makes the AI context distinctive is the compression of timescales, the globalisation of consequences, and the technical complexity that places meaningful engagement beyond most citizens' expertise.
Lost in the polarised debate between absolute freedom and absolute restriction is the nuanced reality that most affected communities occupy. LGBTQ+ individuals simultaneously need protection from AI-generated harassment and deepfakes whilst also requiring freedom from over-moderation that erases their identities. Sex workers need platforms that do not censor their labour whilst also needing protection from having their likenesses appropriated by AI systems without consent or compensation.
The GLAAD 2024 Social Media Safety Index recommended that AI systems should be used to flag content for human review rather than automated removals. They called for strengthening and enforcing existing policies that protect LGBTQ people from both hate and suppression of legitimate expression, improving moderation including training moderators on the needs of LGBTQ users, and not being overly reliant on AI.
This points towards a middle path, one that neither demands unfiltered AI nor accepts the crude over-moderation that currently characterises mainstream platforms. Such a path requires significant investment in context-aware moderation, human review at scale, and genuine engagement with affected communities about their needs. It demands that platforms move beyond simply maximising engagement or minimising liability towards actually serving users' interests.
But this middle path faces formidable obstacles. Human review at the scale of modern platforms is extraordinarily expensive. Context-aware AI moderation is technically challenging and, as current systems demonstrate, frequently fails. Genuine community engagement takes time and yields messy, sometimes contradictory results that do not easily translate into clear policy.
The economic incentives point away from nuanced solutions. Unfiltered AI platforms can charge subscription fees whilst avoiding the costs of sophisticated moderation. Mainstream platforms can deploy blunt automated moderation that protects against legal liability whilst externalising the costs of over-censorship onto marginalised users.
Neither model incentivises the difficult, expensive, human-centred work that genuinely protective and permissive systems would require. The market rewards extremes, not nuance.
Technology is not destiny. The current landscape of unfiltered AI platforms and over-moderated mainstream alternatives is not inevitable but rather the result of specific architectural choices, business models, and regulatory environments. Different choices could yield different outcomes.
Several concrete proposals emerge from the research and advocacy communities. Incorporating algorithmic accountability systems with real-time feedback loops could ensure that biases are swiftly detected and mitigated, keeping AI both effective and ethically compliant over time.
Transparency about the use of AI in content creation, combined with clear processes for reviewing, approving, and authenticating AI-generated content, could help establish accountability chains. Those who leverage AI to generate content would be held responsible through these processes rather than being able to hide behind algorithmic opacity.
Technical solutions also emerge. Robust deepfake detection systems could identify synthetic content, though this becomes an arms race as generation systems improve. Watermarking and provenance tracking for AI-generated content could enable verification of authenticity. The EU AI Act's transparency requirements, mandating disclosure of AI-generated content, represent a regulatory approach to this technical challenge.
Some researchers propose that ethical and safe training ensures NSFW AI chatbots are developed using filtered, compliant datasets that prevent harmful or abusive outputs, balancing realism with safety to protect both users and businesses. Yet this immediately confronts the question of who determines what constitutes “harmful or abusive” and whether such determinations will replicate the over-moderation problems already documented.
Policy interventions focusing on regulations against false information and promoting transparent AI systems are essential for addressing AI's social and economic impacts. But policy alone cannot solve problems rooted in fundamental design choices and economic incentives.
Yet perhaps the most important shift required is cultural rather than technical or legal. As long as society treats sexual expression as uniquely dangerous, subject to restrictions that other forms of expression escape, we will continue generating systems that either over-censor or refuse to censor at all. As long as marginalised communities' sexuality is treated as more threatening than mainstream sexuality, moderation systems will continue reflecting and amplifying these biases.
The question “what should AI be allowed to do?” is inseparable from “what should humans be allowed to do?” If we believe adults should be able to create and consume sexual content consensually, then AI tools for doing so are not inherently problematic. If we believe non-consensual sexual imagery violates fundamental rights, then preventing AI from enabling such violations becomes imperative.
The technology amplifies and accelerates human capabilities, for creation and for harm, but it does not invent the underlying tensions. It merely makes them impossible to ignore.
As much as 90 per cent of online content may be synthetically generated by 2026, according to Europol Innovation Lab projections. This represents a fundamental transformation of the information environment humans inhabit, one we are building without clear agreement on its rules, ethics, or governance.
The platforms offering unfiltered AI represent one possible future: a libertarian vision where adults access whatever tools and content they desire, with harm addressed through after-the-fact legal consequences rather than preventive restrictions. The over-moderated mainstream platforms represent another: a cautious approach that prioritises avoiding liability and controversy over serving users' expressive needs.
Both futures have significant problems. Neither is inevitable.
The challenge moving forward, as one analysis put it, “will be maximising the benefits (creative freedom, private enjoyment, industry innovation) whilst minimising the harms (non-consensual exploitation, misinformation, displacement of workers).” This requires moving beyond polarised debates towards genuine engagement with the complicated realities that affected communities navigate.
It requires acknowledging that unfiltered AI can simultaneously be a sanctuary for marginalised expression and a weapon for violating consent. That the same technical capabilities enabling creative freedom also enable unprecedented harm. That removing all restrictions creates problems and that imposing crude restrictions creates different but equally serious problems.
Perhaps most fundamentally, it requires accepting that we cannot outsource these decisions to technology. The algorithm is amoral, as the opening question suggests, but its creation and deployment are profoundly moral acts.
The platforms offering unfiltered AI made choices about what to build and how to monetise it. The mainstream platforms made choices about what to censor and how aggressively. Regulators make choices about what to permit and prohibit. Users make choices about what to create and share.
At each decision point, humans exercise agency and bear responsibility. The AI may generate the content, but humans built the AI, designed its training process, chose its deployment context, prompted its outputs, and decided whether to share them. The appearance of algorithmic automaticity obscures human choices all the way down.
As we grant artificial intelligence the deepest access to our imaginations and desires, we are not witnessing a final frontier of creative emancipation or engineering a Pandora's box of ungovernable consequences. We are doing both, simultaneously, through technologies that amplify human capabilities for creation and destruction alike.
The unfiltered AI embodied by platforms like Soulfun and Lovechat is neither purely vital sanctuary nor mere convenient veil. It is infrastructure that enables both authentic self-expression and non-consensual violation, both community building and exploitation.
The same could be said of the internet itself, or photography, or written language. Technologies afford possibilities; humans determine how those possibilities are actualised.
As these tools rapidly outpace legal frameworks and moral intuition, the question of responsibility becomes urgent. The answer cannot be that nobody is responsible because the algorithm generated the output. It must be that everyone in the causal chain bears some measure of responsibility, proportionate to their power and role.
Platform operators who remove safety barriers. Developers who train increasingly capable generative systems. Users who create harmful content. Regulators who fail to establish adequate guardrails. Society that demands both perfect safety and absolute freedom whilst offering resources for neither.
The line between liberating art and undeniable harm has never been clear or stable. What AI has done is make that ambiguity impossible to ignore, forcing confrontation with questions about expression, consent, identity, and power that we might prefer to avoid.
The algorithm is amoral, but our decisions about it cannot be. We are building the future of human expression and exploitation with each architectural choice, each policy decision, each prompt entered into an unfiltered chat window.
The question is not whether AI represents emancipation or catastrophe, but rather which version of this technology we choose to build, deploy, and live with. That choice remains, for now, undeniably human.
ACM Conference on Fairness, Accountability and Transparency. (2024). Research on automated content moderation restricting ChatGPT outputs. https://dl.acm.org/conference/fat
Carnegie Mellon University. (June 2024). “How Should AI Depict Marginalized Communities? CMU Technologists Look to a More Inclusive Future.” https://www.cmu.edu/news/
Council of Europe Framework Convention on Artificial Intelligence. (2024). https://www.coe.int/
Dentons. (January 2025). “AI trends for 2025: AI regulation, governance and ethics.” https://www.dentons.com/
Emory University. (2024). Research on LGBTQ+ reclaimed language and AI moderation. “Is AI Censoring Us?” https://goizueta.emory.edu/
European Union. (1 August 2024). EU Artificial Intelligence Act. https://eur-lex.europa.eu/
European Union. (2024). Directive 2024/1385 on combating violence against women and domestic violence.
Europol Innovation Lab. (2024). Report on synthetic content generation projections.
France. (2024). Penal Code Article 226-8-1 on non-consensual sexual deepfakes.
GLAAD. (2024). Social Media Safety Index: Executive Summary. https://glaad.org/smsi/2024/
National Center on Sexual Exploitation. (2024). Report on NSFW AI chatbot harms.
OECD. (2019). AI Principles. https://www.oecd.org/
Penn Engineering. (2024). “Censoring Creativity: The Limits of ChatGPT for Scriptwriting.” https://blog.seas.upenn.edu/
Sensity. (2023). Research on deepfake content and gender distribution.
Springer. (2024). “Accountability in artificial intelligence: what it is and how it works.” AI & Society. https://link.springer.com/
Survey research. (2024). “Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries.” ACM Digital Library. https://dl.acm.org/doi/fullHtml/10.1145/3613904.3642382
Tennessee. (1 July 2024). ELVIS Act.
UNESCO. (2021). Recommendation on AI Ethics. https://www.unesco.org/
United Kingdom. (2023). Online Safety Act. https://www.legislation.gov.uk/
United States Congress. (19 May 2025). TAKE IT DOWN Act.
United States Congress. (May 2025). DEFIANCE Act.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from Roscoe's Story
In Summary:
* Another lucky day for me; the wife took me out to one of our favorite restaurants for brunch.

Prayers, etc.:
* My daily prayers.

Health Metrics:
* bw = 220.9 lbs.
* bp = 132/81 (59)

Exercise:
* kegel pelvic floor exercise, half squats, calf raises, wall push-ups

Diet:
* 08:00 – lasagna
* 12:00 – filipino salad, crispy dumplings
* 13:35 – cassava cake

Activities, Chores, etc.:
* 05:50 – bank accounts activity monitored
* 06:30 – read, pray, listen to news reports from various sources
* 11:30 to 13:30 – brunch with the wife at a favorite restaurant
* 13:30 – home in time to catch the NASCAR Cup Series Countdown to Green Show
* 14:00 – watching the NASCAR Cup Series Championship Race
* 18:00 – read, pray, listen to news reports from various sources

Chess:
* 15:40 – moved in all pending CC games
from wystswolf

‘The horn of Boromir!’ he cried. ‘He is in need!’ He sprang down the steps and away, leaping down the path. ‘Alas! An ill fate is on me this day, and all that I do goes amiss.’
Three Rings for the Elven-kings under the sky,
Seven for the Dwarf-lords in their halls of stone,
Nine for Mortal Men doomed to die,
One for the Dark Lord on his dark throne
In the Land of Mordor where the Shadows lie.
One Ring to rule them all, One Ring to find them,
One Ring to bring them all and in the darkness bind them
In the Land of Mordor where the Shadows lie.
The Muse tonight is on a live concert performance of music from Peter Jackson's Lord of the Rings. It is no doubt a moving performance. And so, I am inspired to revisit Middle Earth on my own terms by listening to the soundtrack and eventually watching most of The Fellowship of the Ring.
There is little as beautiful in literature or cinema as Boromir's fall and Aragorn's heroism at the end of The Fellowship of the Ring. Chapter ten is appropriately named 'The Breaking of the Fellowship.'
Fellowship is my favorite kind of storytelling: disparate parties come together and cooperate in order to solve a problem none of them could solve on their own. The power of cooperation.
I don't recall why I fell in love with Aragorn and Boromir; I suppose they represented the ideal in my mind. Aragorn, the rugged, disenfranchised scout with a heart full of good and principles that cost him a life of ease. Boromir, equally kind but more refined, and flawed by his desire to see his people protected. His goal was not self-serving but made with thought for circumstances beyond himself. Both men, to one degree or another, gave up the life they would have chosen.
Serving others over self. These men are heroes. As we should all desire to be.
So it was that in the concluding chapter of this rapturous first book, Tolkien shows us what these two men are really made of. Boromir seemed the more reasonable, level-headed of the two. Where Strider was wild and untamed, Boromir was a soldier: disciplined and orderly, more public servant than rebel.
When he approaches Frodo alone in the glen of Rowan-trees, his intentions are for the best. He does not seek evil or wish harm upon Frodo. Such is the test of the One Ring. The test that Lady Galadriel had passed so beautifully, Boromir would fail.
It is a gut-wrenching confrontation. Completely reasonable from the soldier's point of view, and completely contrary to the meaning of the fellowship and antithetical to their goal to destroy this tool of Sauron.
His line to Frodo, 'Are you sure that you do not suffer needlessly?', spoken while trying to convince him to hand over the Ring, is practically Biblical, evoking the moment in Matthew chapter 16, verse 21, when Peter encourages the Christ to be kind to himself.
The argument comes from a genuine place, but is flawed nonetheless. And as Christ rebuked Peter, Frodo denies Boromir.
As quickly as it begins, Boromir realizes, too late, the mistake he has made. It is stunningly beautiful that Tolkien lets us see the true character of this man, revealed as he immediately acknowledges his faults and failings. No evil man could do this. Only GOOD men can see themselves objectively in the moment and realize they overstepped.
And just as quickly, we see the mettle of which Boromir is truly made when, in the opening of The Two Towers, he forms the front line against the Uruk-hai, resulting in his destruction. Not punishment for his misdeeds, but the foregone conclusion for a man who was heroic in his heart. He lived in service of others and died the same.
I was twelve or thirteen when I discovered Fellowship after being completely lost in The Hobbit. I have been trying to find that lightning in a bottle ever since. I know I can never go home again, but oh, how I wish I could feel that sense of wonder and discovery one more time before I die.
However, if all I ever manage is to be heroic, level-headed but flawed, or wild and untamed as a guide and protector, I will count my time here a success.
What more should any of us ask?
Suddenly Frodo awoke from his thoughts: a strange feeling came to him that something was behind him, that unfriendly eyes were upon him. He sprang up and turned; but all that he saw to his surprise was Boromir, and his face was smiling and kind. ‘I was afraid for you, Frodo,’ he said, coming forward. ‘If Aragorn is right and Orcs are near, then none of us should wander alone, and you least of all: so much depends on you. And my heart too is heavy. May I stay now and talk for a while, since I have found you? It would comfort me. Where there are so many, all speech becomes a debate without end. But two together may perhaps find wisdom.’ ‘You are kind,’ answered Frodo. ‘But I do not think that any speech will help me. For I know what I should do, but I am afraid of doing it, Boromir: afraid.’ Boromir stood silent. Rauros roared endlessly on. The wind murmured in the branches of the trees. Frodo shivered. Suddenly Boromir came and sat beside him. ‘Are you sure that you do not suffer needlessly?’ he said. ‘I wish to help you. You need counsel in your hard choice. Will you not take mine?’ ‘I think I know already what counsel you would give, Boromir,’ said Frodo. ‘And it would seem like wisdom but for the warning of my heart.’ ‘Warning? Warning against what?’ said Boromir sharply. ‘Against delay. Against the way that seems easier. Against refusal of the burden that is laid on me. Against – well, if it must be said, against trust in the strength and truth of Men.’ ‘Yet that strength has long protected you far away in your little country, though you knew it not.’ ‘I do not doubt the valour of your people. But the world is changing. The walls of Minas Tirith may be strong, but they are not strong enough. If they fail, what then?’ ‘We shall fall in battle valiantly. Yet there is still hope that they will not fail.’ ‘No hope while the Ring lasts,’ said Frodo. ‘Ah! The Ring!’ said Boromir, his eyes lighting. ‘The Ring! Is it not a strange fate that we should suffer so much fear and doubt for so small a thing? So small a thing! And I have seen it only for an instant in the house of Elrond. Could I not have a sight of it again?’ Frodo looked up. His heart went suddenly cold. He caught the strange gleam in Boromir’s eyes, yet his face was still kind and friendly. ‘It is best that it should lie hidden,’ he answered. ‘As you wish. I care not,’ said Boromir. ‘Yet may I not even speak of it? For you seem ever to think only of its power in the hands of the Enemy: of its evil uses not of its good. The world is changing, you say. Minas Tirith will fall, if the Ring lasts. But why? Certainly, if the Ring were with the Enemy. But why, if it were with us?’ ‘Were you not at the Council?’ answered Frodo. ‘Because we cannot use it, and what is done with it turns to evil.’ Boromir got up and walked about impatiently. ‘So you go on,’ he cried. ‘Gandalf, Elrond – all these folk have taught you to say so. For themselves they may be right. These elves and half-elves and wizards, they would come to grief perhaps. Yet often I doubt if they are wise and not merely timid. But each to his own kind. True-hearted Men, they will not be corrupted. We of Minas Tirith have been staunch through long years of trial. We do not desire the power of wizard- lords, only strength to defend ourselves, strength in a just cause. And behold! in our need chance brings to light the Ring of Power. It is a gift, I say; a gift to the foes of Mordor. It is mad not to use it, to use the power of the Enemy against him. 
The fearless, the ruthless, these alone will achieve victory. What could not a warrior do in this hour, a great leader? What could not Aragorn do? Or if he refuses, why not Boromir? The Ring would give me power of Command. How I would drive the hosts of Mordor, and all men would flock to my banner!’ Boromir strode up and down, speaking ever more loudly. Almost he seemed to have forgotten Frodo, while his talk dwelt on walls and weapons, and the mustering of men; and he drew plans for great alliances and glorious victories to be; and he cast down Mordor, and became himself a mighty king, benevolent and wise. Suddenly he stopped and waved his arms. ‘And they tell us to throw it away!’ he cried. ‘I do not say destroy it. That might be well, if reason could show any hope of doing so. It does not. The only plan that is proposed to us is that a halfling should walk blindly into Mordor and offer the Enemy every chance of recapturing it for himself. Folly! ‘Surely you see it, my friend?’ he said, turning now suddenly to Frodo again. ‘You say that you are afraid. If it is so, the boldest should pardon you. But is it not really your good sense that revolts?’ ‘No, I am afraid,’ said Frodo. ‘Simply afraid. But I am glad to have heard you speak so fully. My mind is clearer now.’ ‘Then you will come to Minas Tirith?’ cried Boromir. His eyes were shining and his face eager. ‘You misunderstand me,’ said Frodo. ‘But you will come, at least for a while?’ Boromir persisted. ‘My city is not far now; and it is little further from there to Mordor than from here. We have been long in the wilderness, and you need news of what the Enemy is doing before you make a move. Come with me, Frodo,’ he said. ‘You need rest before your venture, if go you must.’ He laid his hand on the hobbit’s shoulder in friendly fashion; but Frodo felt the hand trembling with suppressed excitement. He stepped quickly away, and eyed with alarm the tall Man, nearly twice his height and many times his match in strength. ‘Why are you so unfriendly?’ said Boromir. ‘I am a true man, neither thief nor tracker. I need your Ring: that you know now; but I give you my word that I do not desire to keep it. Will you not at least let me make trial of my plan? Lend me the Ring!’ ‘No! no!’ cried Frodo. ‘The Council laid it upon me to bear it.’ ‘It is by our own folly that the Enemy will defeat us,’ cried Boromir. ‘How it angers me! Fool! Obstinate fool! Running wilfully to death and ruining our cause. If any mortals have claim to the Ring, it is the men of Númenor, and not Halflings. It is not yours save by unhappy chance. It might have been mine. It should be mine. Give it to me!’ Frodo did not answer, but moved away till the great flat stone stood between them. ‘Come, come, my friend!’ said Boromir in a softer voice. ‘Why not get rid of it? Why not be free of your doubt and fear? You can lay the blame on me, if you will. You can say that I was too strong and took it by force. For I am too strong for you, halfling,’ he cried; and suddenly he sprang over the stone and leaped at Frodo. His fair and pleasant face was hideously changed; a raging fire was in his eyes. Frodo dodged aside and again put the stone between them. There was only one thing he could do: trembling he pulled out the Ring upon its chain and quickly slipped it on his finger, even as Boromir sprang at him again. The Man gasped, stared for a moment amazed, and then ran wildly about, seeking here and there among the rocks and trees. ‘Miserable trickster!’ he shouted.
‘Let me get my hands on you! Now I see your mind. You will take the Ring to Sauron and sell us all. You have only waited your chance to leave us in the lurch. Curse you and all halflings to death and darkness!’ Then, catching his foot on a stone, he fell sprawling and lay upon his face. For a while he was as still as if his own curse had struck him down; then suddenly he wept.
He rose and passed his hand over his eyes, dashing away the tears. ‘What have I said?’ he cried. ‘What have I done? Frodo, Frodo!’ he called. ‘Come back! A madness took me, but it has passed. Come back!’
There was no answer. Frodo did not even hear his cries. He was already far away, leaping blindly up the path to the hill-top. Terror and grief shook him, seeing in his thought the mad fierce face of Boromir, and his burning eyes.
We should all be so well written as to be able to see our mistakes and accept them as the flawed men we are.
“A mile, maybe, from Parth Galen in a little glade not far from the lake he found Boromir. He was sitting with his back to a great tree, as if he was resting. But Aragorn saw that he was pierced with many black-feathered arrows; his sword was still in his hand, but it was broken near the hilt; his horn cloven in two was at his side. Many Orcs lay slain, piled all about him and at his feet. Aragorn knelt beside him. Boromir opened his eyes and strove to speak. At last slow words came. ‘I tried to take the Ring from Frodo,’ he said. ‘I am sorry. I have paid.’ His glance strayed to his fallen enemies; twenty at least lay there. ‘They have gone: the Halflings: the Orcs have taken them. I think they are not dead. Orcs bound them.’ He paused and his eyes closed wearily. After a moment he spoke again. ‘Farewell, Aragorn! Go to Minas Tirith and save my people! I have failed.’ ‘No!’ said Aragorn, taking his hand and kissing his brow. ‘You have conquered. Few have gained such a victory. Be at peace! Minas Tirith shall not fall!’ Boromir smiled. ‘Which way did they go? Was Frodo there?’ said Aragorn. But Boromir did not speak again. ‘Alas!’ said Aragorn. ‘Thus passes the heir of Denethor, Lord of the Tower of Guard! This is a bitter end. Now the Company is all in ruin. It is I that have failed. Vain was Gandalf’s trust in me. What shall I do now? Boromir has laid it on me to go to Minas Tirith, and my heart desires it; but where are the Ring and the Bearer? How shall I find them and save the Quest from disaster?’ He knelt for a while, bent with weeping, still clasping Boromir’s hand. So it was that Legolas and Gimli found him.”
from 
 Café histoire
And suddenly the mildness of autumn returns in the early afternoon of this late-October day. The opportunity is too good to pass up as, finishing my workday, I climb back onto my motorcycle to head home.

On the way back, I decide to take the Petite Corniche on the way out of Lutry. I stop to take a few photos in this beautiful light. The colours on the vines are magnificent. The warmth is gentle.

This, of course, gives me ideas. What if I took advantage of this mildness to push the ride a little further? The temptation is too strong.

After a quick stop at home to gear up a little better and grab my tank bag to stow my camera and a water bottle, I head toward Aigle for a quick run over the Col des Mosses, coming back via Gruyères.

The climb toward Le Sépey unfolds under the best auspices. The sun casts a beautiful light over the forest and the mountains.

I take the opportunity for a photo stop. I can make out a little snow on the summits of the Préalpes, blending happily with the autumn colours of the trees.

I then set off again toward Les Mosses, where I stop for a hot chocolate.

By the time I leave, the sun has gone, taking with it the light that had been washing over the whole landscape. The magic fades somewhat. I am left with the pleasure of the road. That is already something.

I then ride back down through the Pays-d'Enhaut, then Gruyères, Bulle, and home.
Tags : #Roadbook #photographie #Suisse #Vaud #Lavaux #Aigle #LesMosses