Want to join in? Respond to our weekly writing prompts, open to everyone.
from
Roscoe's Story
In Summary:
* One of my favorite chores before a big mid-week holiday is switching off my alarms the night before the holiday. And I've just now done that. The wife has Thanksgiving Day off from work so there's no need for me to be up early, fixing her coffee, making sure she's out the door in a timely manner, etc.

Prayers, etc.:
* My daily prayers.

Health Metrics:
* bw = 220.57 lbs.
* bp = 141/82 (58)

Exercise:
* kegel pelvic floor exercise, half squats, calf raises, wall push-ups

Diet:
* 07:30 – 1 peanut butter and cheese sandwich
* 12:00 – snack on saltine crackers
* 13:00 – 1 bean and cheese taco
* 17:00 – 1 bean and cheese taco, 2 mini-cupcakes

Activities, Chores, etc.:
* 05:00 – listen to local news talk radio
* 06:15 – bank accounts activity monitored
* 06:30 – read, pray, listen to news reports from various sources, and nap
* 11:00 – listening to The Markley, van Camp and Robbins Show
* 15:10 – listening to The Jack Ricardi Show, guest hosted by Chris Krok this afternoon
* 17:00 – listening to The Joe Pags Show

Chess:
* 14:40 – moved in all pending CC games
from
Human in the Loop

When Nathalie Berdat joined the BBC two years ago as “employee number one” in the data governance function, she entered a role that barely existed in media organisations a decade prior. Today, as Head of Data and AI Governance, Berdat represents the vanguard of an emerging professional class: specialists tasked with navigating the treacherous intersection of artificial intelligence, creative integrity, and legal compliance. These aren't just compliance officers with new titles. They're architects of entirely new organisational frameworks designed to operationalise ethical AI use whilst preserving what makes creative work valuable in the first place.
The rise of generative AI has created an existential challenge for creative industries. How do you harness tools that can generate images, write scripts, and compose music whilst ensuring that human creativity remains central, copyrights are respected, and the output maintains authentic provenance? The answer, increasingly, involves hiring people whose entire professional existence revolves around these questions.
“AI governance is a responsibility that touches an organisation's vast group of stakeholders,” explains research from IBM on AI governance frameworks. “It is a collaboration between AI product teams, legal and compliance departments, and business and product owners.” This collaborative necessity has spawned roles that didn't exist five years ago: AI ethics officers, responsible AI leads, copyright liaisons, content authenticity managers, and digital provenance specialists. These positions sit at the confluence of technology, law, ethics, and creative practice, requiring a peculiar blend of competencies that traditional hiring pipelines weren't designed to produce.
The statistics tell a story of rapid transformation. Recruitment for Chief AI Officers has tripled in the past five years, according to industry research. By 2026, over 40% of Fortune 500 companies are expected to have a Chief AI Officer. In March 2024, the White House Office of Management and Budget mandated that all executive departments and agencies appoint a Chief AI Officer within 60 days.
Consider Getty Images, which employs over 1,700 individuals and represents the work of more than 600,000 journalists and creators worldwide. When the company launched its ethically-trained generative AI tool in 2023, CEO Craig Peters became one of the industry's most vocal advocates for copyright protection and responsible AI development. Getty's approach, which includes compensating contributors whose work was included in training datasets, established a template that many organisations are now attempting to replicate.
The Writers Guild of America strike in 2023 crystallised the stakes. Hollywood writers walked out, in part, to protect their livelihoods from generative AI. The resulting contract included specific provisions requiring writers to obtain consent before using generative AI, and allowing studios to “reject a use of GAI that could adversely affect the copyrightability or exploitation of the work.” These weren't abstract policy statements. They were operational requirements that needed enforcement mechanisms and people to run them.
Similarly, SAG-AFTRA established its “Four Pillars of Ethical AI” in 2024: transparency (a performer's right to know the intended use of their likeness), consent (the right to grant or deny permission), compensation (the right to fair compensation), and control (the right to set limits on how, when, where and for how long their likeness can be used). Each pillar translates into specific production pipeline requirements. Someone must verify that consent was obtained, track where digital replicas are used, ensure performers are compensated appropriately, and audit compliance.
The job descriptions emerging across creative industries reveal roles that are equal parts philosopher, technologist, and operational manager. According to comprehensive analyses of AI ethics officer positions, the core responsibilities break down into several categories.
Policy Development and Implementation: AI ethics officers develop governance frameworks, conduct AI audits, and implement compliance processes to mitigate risks related to algorithmic bias, privacy violations, and discriminatory outcomes. This involves translating abstract ethical principles into concrete operational guidelines that production teams can follow.
At the BBC, James Fletcher serves as Lead for Responsible Data and AI, working alongside Berdat to engage staff on artificial intelligence issues. Their work includes creating frameworks that balance innovation with responsibility. Laura Ellis, the BBC's head of technology forecasting, focuses on ensuring the organisation is positioned to leverage emerging technology appropriately. This tripartite structure reflects a mature approach to operationalising ethics across a large media organisation.
Technical Assessment and Oversight: AI ethics officers need substantial technical literacy. They must understand machine learning algorithms, data processing, and model interpretability. When Adobe's AI Ethics Review Board evaluates new features before market release, the review involves technical analysis, not just philosophical deliberation. The company implemented this comprehensive AI programme in 2019, requiring that all products undergo training, testing, and ethics review guided by principles of accountability, responsibility, and transparency.
Dana Rao, who served as Adobe's Executive Vice President, General Counsel and Chief Trust Officer until September 2024, oversaw the integration of ethical considerations across Adobe's AI initiatives, including the Firefly generative AI tool. The role required bridging legal expertise with technical understanding, illustrating how these positions demand polymath capabilities.
Stakeholder Education and Training: Perhaps the most time-consuming aspect involves educating team members about AI ethics guidelines and developing a culture that preserves ethical and human rights considerations. Career guidance materials emphasise that AI ethics roles require “a strong foundation in computer science, philosophy, or social sciences. Understanding ethical frameworks, data privacy laws, and AI technologies is crucial.”
Operational Integration: The most challenging aspect involves embedding ethical considerations into existing production pipelines without creating bottlenecks that stifle creativity. Research on responsible AI frameworks emphasises that “mitigating AI harms requires a fundamental re-architecture of the AI production pipeline through an augmented AI lifecycle consisting of five interconnected phases: co-framing, co-design, co-implementation, co-deployment, and co-maintenance.”
Whilst AI ethics officers handle broad responsibilities, copyright liaisons focus intensely on intellectual property considerations specific to AI-assisted creative work. The U.S. Copyright Office's guidance, developed after reviewing over 10,000 public comments, established that AI-generated outputs based on prompts alone don't merit copyright protection. Creators must add considerable manual input to AI-assisted work to claim ownership.
This creates immediate operational challenges. How much human input is “considerable”? What documentation proves human authorship? Who verifies compliance before publication? Copyright liaisons exist to answer these questions on a case-by-case basis.
Provenance Documentation: Ensuring that creators keep records of their contributions to AI-assisted works. The Content Authenticity Initiative (CAI), founded in November 2019 by Adobe, The New York Times and Twitter, developed standards for exactly this purpose. By February 2021, Adobe and Microsoft, along with Truepic, Arm, Intel and the BBC, founded the Coalition for Content Provenance and Authenticity (C2PA), which now includes over 3,700 members.
The C2PA standard captures and preserves details about origin, creation, and modifications in a verifiable way. Information such as the creator's name, tools used, editing history, and time and place of publication is cryptographically signed. Copyright liaisons in creative organisations must understand these technical standards and ensure their implementation across production workflows.
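To make the mechanism concrete, here is a minimal Python sketch of signing and verifying a provenance manifest. This is not the C2PA format itself (real C2PA manifests use JUMBF containers and X.509 certificate chains rather than a shared-secret HMAC); the key, field names, and sample values are invented for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real C2PA uses certificate-based signatures.
SIGNING_KEY = b"studio-demo-key"

def sign_manifest(creator, tools, edits, published):
    """Bundle provenance details and sign them so tampering is detectable."""
    manifest = {
        "creator": creator,
        "tools_used": tools,
        "edit_history": edits,
        "published": published,
    }
    # Canonical serialisation so the same manifest always yields the same bytes.
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest, signature

def verify_manifest(manifest, signature):
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

manifest, sig = sign_manifest(
    "A. Artist", ["Photoshop", "Firefly"], ["crop", "colour grade"],
    "2024-06-01T12:00:00Z",
)
assert verify_manifest(manifest, sig)
manifest["creator"] = "Someone Else"  # any modification breaks the signature
assert not verify_manifest(manifest, sig)
```

The point of the sketch is the property the standard guarantees: once the creator, tools, edit history, and publication details are signed together, any later alteration is detectable.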
Legal Assessment and Risk Mitigation: Getty Images' lawsuit against Stability AI, which proceeded through 2024, exemplifies the legal complexities at stake. The case involved claims of copyright infringement, database right infringement, trademark infringement and passing off. Grant Farhall, Chief Product Officer at Getty Images, and Lindsay Lane, Getty's trial lawyer, navigated these novel legal questions. Organisations need internal expertise to avoid similar litigation risks.
Rights Clearance and Licensing: AI-assisted production complicates traditional rights clearance exponentially. If an AI tool was trained on copyrighted material, does using its output require licensing? If a tool generates content similar to existing copyrighted work, what's the liability? The Hollywood studios' June 2024 lawsuit against AI companies reflected industry-wide anxiety. Major figures including Ron Howard, Cate Blanchett and Paul McCartney signed letters expressing alarm about AI models training on copyrighted works.
Research indicates significant variation in reporting structures, with important implications for how effectively these roles can operate.
Reporting to the General Counsel: In 71% of the World's Most Ethical Companies, ethics and compliance teams report to the General Counsel. This structure ensures that ethical considerations are integrated with legal compliance. Adobe's structure, with Dana Rao serving as both General Counsel and Chief Trust Officer, exemplified this approach. The downside is potential over-emphasis on legal risk mitigation at the expense of broader ethical considerations.
Reporting to the Chief AI Officer: As Chief AI Officer roles proliferate, many organisations structure AI ethics officers as direct reports to the CAIO. This creates clear lines of authority and ensures ethics considerations are integrated into AI strategy from the beginning. The advantage is proximity to technical decision-making; the risk is potential subordination of ethical concerns to business priorities.
Direct Reporting to the CEO: Some organisations position ethics leadership with direct CEO oversight. This structure, used by 23% of companies, emphasises the strategic importance of ethics and gives ethics officers significant organisational clout. The BBC's structure, with Berdat and Fletcher operating at senior levels with broad remits, suggests this model.
The Question of Centralisation: Research indicates that centralised AI governance provides better risk management and policy consistency. However, creative organisations face a particular tension. Centralised governance risks becoming a bottleneck that slows creative iteration. The emerging consensus involves centralised policy development with distributed implementation. A central AI ethics team establishes principles and standards, whilst embedded specialists within creative teams implement these standards in context-specific ways.
The true test of these roles involves daily operational reality. How do abstract ethical principles translate into production workflows that creative professionals can follow without excessive friction?
Intake and Assessment Protocols: Leading organisations implement AI portfolio management intake processes that identify and assess AI risks before projects commence. This involves initial use case selection frameworks and AI Risk Tiering assessments. For example, using AI to generate background textures for a video game presents different risks than using AI to generate character dialogue or player likenesses. Risk tiering enables proportionate oversight.
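A risk-tiering intake step can be imagined as a simple rubric applied to each proposed use case. The criteria and tier names below are hypothetical, not drawn from any published framework; they simply illustrate how the texture-versus-likeness distinction above translates into proportionate oversight.

```python
# Hypothetical intake rubric: map use-case attributes to a risk tier.
def risk_tier(use_case):
    if use_case.get("uses_likeness") or use_case.get("replaces_credited_role"):
        return "high"    # consent, compensation, and union review required
    if use_case.get("customer_facing"):
        return "medium"  # provenance labelling and rights clearance required
    return "low"         # standard documentation only

# Background textures vs. player likenesses land in different tiers.
assert risk_tier({"customer_facing": False}) == "low"
assert risk_tier({"customer_facing": True}) == "medium"
assert risk_tier({"uses_likeness": True}) == "high"
```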
Checkpoint Integration: Rather than ethics review happening at project completion, leading organisations integrate ethics checkpoints throughout development. A typical production pipeline might include checkpoints at project initiation (risk assessment, use case approval), development (training data audit, bias testing), pre-production (content authenticity setup, consent verification), production (ongoing monitoring), post-production (final compliance audit), and distribution (rights verification, authenticity certification).
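The checkpoint sequence above could be enforced as a stage gate: a project may not enter a stage until that stage's checks are signed off. The check names follow the stages listed; the `advance` helper is a hypothetical sketch, not any organisation's actual tooling.

```python
# Stage-gated ethics checkpoints; stage and check names follow the
# pipeline described above, the enforcement helper is illustrative.
CHECKPOINTS = {
    "initiation":      ["risk assessment", "use case approval"],
    "development":     ["training data audit", "bias testing"],
    "pre-production":  ["content authenticity setup", "consent verification"],
    "production":      ["ongoing monitoring"],
    "post-production": ["final compliance audit"],
    "distribution":    ["rights verification", "authenticity certification"],
}

def advance(project, stage, completed_checks):
    """Allow a project into a stage only when all its checks are complete."""
    missing = [c for c in CHECKPOINTS[stage] if c not in completed_checks]
    if missing:
        raise RuntimeError(f"{project}: cannot enter {stage}; missing {missing}")
    return stage

assert advance("demo", "initiation",
               ["risk assessment", "use case approval"]) == "initiation"
```

Gating each stage, rather than auditing once at the end, is what distinguishes this from the project-completion review the paragraph above argues against.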
SAG-AFTRA's framework provides concrete examples. Producers must provide performers with “notice ahead of time about scanning requirements with clear and conspicuous consent requirements” and “detailed information about how they will use the digital replica and get consent, including a 'reasonably specific description' of the intended use each time it will be used.”
Automated Tools and Manual Oversight: Adobe's PageProof Smart Check feature automatically reveals authenticity data, showing who created content, what AI tools were used, and how it's been modified. However, research consistently emphasises that “human oversight remains crucial to validate results and ensure accurate verification.” Automated tools flag potential issues; human experts make final determinations.
Documentation and Audit Trails: Every AI-assisted creative project requires comprehensive records: what tools were used, what training data those tools employed, what human contributions were made, what consent was obtained, what rights were cleared, and what the final provenance trail shows. The C2PA standard provides technical infrastructure, but as one analysis noted: “as of 2025, adoption is lacking, with very little internet content using C2PA.” The gap between technical capability and practical implementation reflects the operational challenges these roles must overcome.
Traditional educational pathways don't produce candidates with the full spectrum of required competencies. These roles require a combination of skills that academic programmes weren't designed to teach together.
Technical Foundations: AI ethics officers typically hold bachelor's degrees in computer science, data science, philosophy, ethics, or related fields. Technical proficiency is essential, but technical knowledge alone is insufficient. An AI ethics officer who understands neural networks but lacks philosophical grounding will struggle to translate technical capabilities into ethical constraints. Conversely, an ethicist who can't understand how algorithms function will propose impractical guidelines that technologists ignore.
Legal and Regulatory Expertise: The U.S. Copyright Office published its updated report in 2024 confirming that AI-generated content may be eligible for copyright protection if a human has made substantial creative contribution. However, as legal analysts noted, “the guidance is still vague, and whilst it affirms that selecting and arranging AI-generated material can qualify as authorship, the threshold of 'sufficient creativity' remains undefined.”
Working in legal ambiguity requires particular skills: comfort with uncertainty, ability to make judgement calls with incomplete information, understanding of how to manage risk when clear rules don't exist. The European Union's AI Act, passed in 2024, identifies AI as high-risk technology and emphasises transparency, safety, and fundamental rights. The U.S. Congressional AI Working Group introduced the “Transparent AI Training Data Act” in May 2024, requiring companies to disclose datasets used in training models.
Creative Industry Domain Knowledge: These roles require deep understanding of creative production workflows. An ethics officer who doesn't understand how animation pipelines work or what constraints animators face will design oversight mechanisms that creative teams circumvent or ignore. The integration of AI into post-production requires treating “the entire post-production pipeline as a single, interconnected system, not a series of siloed steps.”
Domain knowledge also includes understanding creative culture. Creative professionals value autonomy, iteration, and experimentation. Oversight mechanisms that feel like bureaucratic impediments will generate resistance. Effective ethics officers frame their work as enabling creativity within ethical bounds rather than restricting it.
Communication and Change Management: An AI ethics officer might need to explain transformer architectures to the legal team, copyright law to data scientists, and production pipeline requirements to executives who care primarily about budget and schedule. This requires translational fluency across multiple professional languages. Change management skills are equally critical, as implementing new AI governance frameworks means changing how people work.
Ethical Frameworks and Philosophical Grounding: Microsoft's framework for responsible AI articulates six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Applying these principles to specific cases requires philosophical sophistication. When is an AI-generated character design “fair” to human artists? How much transparency about AI use is necessary in entertainment media versus journalism? These questions require reasoned judgement informed by ethical frameworks.
Analysis of AI ethics officer and copyright liaison job descriptions across creative companies reveals both commonalities and variations reflecting different organisational priorities.
Entry to Mid-Level Positions typically emphasise bachelor's degrees in relevant fields, 2-5 years' experience, technical literacy with AI/ML systems, familiarity with regulations and ethical frameworks, and strong communication skills. Salaries typically range from £60,000 to £100,000. These positions focus on implementation: executing governance frameworks, conducting audits, providing guidance, and maintaining documentation.
Senior-Level Positions (AI Ethics Lead, Head of Responsible AI) emphasise advanced degrees, 7-10+ years of progressive experience, demonstrated thought leadership, experience building governance programmes from scratch, and strategic thinking capability. Salaries typically range from £100,000 to £200,000+. Senior roles focus on strategy: establishing governance frameworks, defining organisational policy, external representation, and building teams.
Specialist Copyright Liaison Positions emphasise law degrees or equivalent IP expertise, deep knowledge of copyright law, experience with rights clearance and licensing, familiarity with technical standards like C2PA, and understanding of creative production workflows. These positions bridge legal expertise with operational implementation.
Organisational Variations: Tech platforms (Adobe, Microsoft) emphasise technical AI expertise. Media companies (BBC, The New York Times) emphasise editorial judgement. Entertainment studios emphasise union negotiations experience. Stock content companies (Getty Images, Shutterstock) emphasise rights management and creator relations.
Whilst formal interview archives remain limited (the roles are too new), available commentary from practitioners reveals common challenges and emerging best practices.
The Cold Start Problem: Nathalie Berdat's description of joining the BBC as “employee number one” in data governance captures a common experience. Early hires often enter organisations without established frameworks or organisational understanding of what the role should accomplish. Successful early hires emphasise the importance of quick wins: identifying high-visibility, high-value interventions that demonstrate the role's value and build organisational credibility.
Balancing Principle and Pragmatism: A recurring theme involves tension between ethical ideals and operational reality. Effective ethics officers develop pragmatic frameworks that move organisations toward ethical ideals whilst acknowledging constraints. The WGA agreement provides an instructive example, permitting generative AI use under specific circumstances, with guardrails that protect writers whilst preserving studios' copyright interests.
The Importance of Cross-Functional Relationships: AI governance “touches an organisation's vast group of stakeholders.” Effective ethics officers invest heavily in building relationships across functions. These relationships provide early visibility into initiatives that may raise ethical issues, create channels for influence, and build reservoirs of goodwill. Adobe's structure, with the Ethical Innovation team collaborating closely with Trust and Safety, Legal, and International teams, exemplifies this approach.
Technical Credibility Matters: Ethics officers without technical credibility struggle to influence technical teams. Successful ethics officers invest in building technical literacy to engage meaningfully with data scientists and ML engineers. Conversely, technical experts transitioning into ethics roles must develop complementary skills: philosophical reasoning, stakeholder communication, and change management capabilities.
Documentation Is Thankless but Essential: Much of the work involves unglamorous documentation: creating records of decisions, establishing audit trails, maintaining compliance evidence. The C2PA framework's slow adoption despite technical maturity reflects this challenge. Technical infrastructure exists, but getting thousands of creators to actually implement provenance tracking requires persistent operational effort.
Several trends are reshaping these roles and spawning new specialisations.
Fragmentation and Specialisation: As AI governance matures, broad “AI ethics officer” roles are fragmenting into specialised positions. Emerging job titles include AI Content Creator (+134.5% growth), Data Quality Specialist, AI-Human Interface Designer, Digital Provenance Specialist, Algorithmic Bias Auditor, and AI Rights Manager. This specialisation enables deeper expertise but creates coordination challenges.
Integration into Core Business Functions: The trend is toward integration, with ethics expertise embedded within product teams, creative departments, and technical divisions. Research on AI competency frameworks emphasises that “companies are increasingly prioritising skills such as technological literacy; creative thinking; and knowledge of AI, big data and cybersecurity” across all roles.
Shift from Compliance to Strategy: Early-stage AI ethics roles focused heavily on risk mitigation. As organisations gain experience, these roles are expanding to include strategic opportunity identification. Craig Peters of Getty Images exemplifies this strategic orientation, positioning ethical AI development as business strategy rather than compliance burden.
Regulatory Response and Professionalisation: As AI governance roles proliferate, professional standards are emerging. UNESCO's AI Competency Frameworks represent early steps toward standardised training. The Scaled Agile Framework now offers an “Achieving Responsible AI” micro-credential. This professionalisation will likely accelerate as regulatory requirements crystallise.
Technology-Enabled Governance: Tools for detecting bias, verifying provenance, auditing training data, and monitoring compliance are becoming more sophisticated. However, research consistently emphasises that human judgement remains essential. The future involves humans and algorithms working together to achieve governance at scale.
The fundamental question underlying these roles is whether creative industries can harness AI's capabilities whilst preserving what makes creative work valuable. Creative integrity involves multiple interrelated concerns: authenticity (can audiences trust that creative work represents human expression?), attribution (do creators receive appropriate credit and compensation?), autonomy (do creative professionals retain meaningful control?), originality (does AI-assisted creation maintain originality?), and cultural value (does creative work continue to reflect human culture and experience?).
AI ethics officers and copyright liaisons exist to operationalise these concerns within production systems. They translate abstract values into concrete practices: obtaining consent, documenting provenance, auditing bias, clearing rights, and verifying human contribution. The success of these roles will determine whether creative industries navigate the AI transition whilst preserving creative integrity.
Research and early practice suggest several principles for structuring these roles effectively: senior-level positioning with clear executive support, cross-functional integration, appropriate resourcing, clear accountability, collaborative frameworks that balance central policy development with distributed implementation, and ongoing evolution treating governance frameworks as living systems.
Organisations face a shortage of candidates with the full spectrum of required competencies. Addressing this requires interdisciplinary hiring that values diverse backgrounds, structured development programmes, cross-functional rotations, external partnerships with academic institutions, and knowledge sharing across organisations through industry forums.
A persistent challenge involves measuring success. Traditional compliance metrics capture activity but not impact. More meaningful metrics might include rights clearance error rates, consent documentation completeness, time-to-resolution for ethics questions, creator satisfaction with AI governance processes, reduction in legal disputes, and successful integration of new AI tools without ethical incidents.
The emergence of AI ethics officers and copyright liaisons represents creative industries' attempt to build scaffolding around AI adoption: structures that enable its use whilst preventing collapse of the foundations that make creative work valuable.
The early experience reveals significant challenges. The competencies required are rare. Organisational structures are experimental. Technology evolves faster than governance frameworks. Legal clarity remains elusive. Yet the alternative is untenable. Ungoverned, rapid AI adoption risks legal catastrophe, revolt from the creative community, and erosion of creative integrity. The 2023 Hollywood strikes demonstrated that creative workers will not accept unbounded AI deployment.
The organisations succeeding at this transition share common characteristics. They hire ethics and copyright specialists early, position them with genuine authority, resource them appropriately, and integrate governance into production workflows. They build cross-functional collaboration, invest in competency development, and treat governance frameworks as living systems.
Perhaps most importantly, they frame AI governance not as constraint on creativity but as enabler of sustainable innovation. By establishing clear guidelines, obtaining proper consent, documenting provenance, and respecting rights, they create conditions where creative professionals can experiment with AI tools without fear of legal exposure or ethical compromise.
The roles emerging today will likely evolve significantly over coming years. Some will fragment into specialisations. Others will integrate into broader functions. But the fundamental need these roles address is permanent. As long as creative industries employ AI tools, they will require people whose professional expertise centres on ensuring that deployment respects human creativity, legal requirements, and ethical principles.
The 3,700 members of the Coalition for Content Provenance and Authenticity, the negotiated agreements between SAG-AFTRA and the studios, the AI governance frameworks at the BBC and Adobe: these represent early infrastructure. The people implementing these frameworks day by day, troubleshooting challenges, adapting to new technologies, and operationalising abstract principles into concrete practices are writing the playbook for responsible AI in creative industries.
Their success or failure will echo far beyond their organisations, shaping the future of creative work itself.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Space Goblin Diaries
Behold the face of Vorak!
This month I've finally got round to replacing my hand-drawn placeholder image with a set of placeholder images of Vorak with different facial expressions. After various attempts at drawing these or looking for appropriate stock art, I finally realised I could just...use emojis. So these are all created by using the Emojipedia emoji mashup page to combine the alien face emoji with various expressions.

To be clear, these are still placeholders, and I'll commission some real art for the final game—but now I can finally have Vorak's expression change based on what's happening.

Apart from mashing emojis together, I've spent most of the month rewriting the chapter where you crash land on the moon after destroying Vorak's command ship. This included fleshing out the character of General Zorg, the giant alien cyborg who leads Vorak's ground forces.
I also revised the chapter where the hero is captured and Vorak interrogates them (or rather, monologues at them). This one needed hardly any changes from the playtest version.

My plan now is to focus on writing chapters that form a single path through the whole game, rather than writing all the alternative early-game chapters before I move on to the mid-game chapters, etc. Once I've got a complete path through the game written I'll go back and write the chapters for alternative paths.

Will the new faces of Vorak inspire our hero to complete his project? Learn more in next month's thrilling developer diary!
#FoolishEarthCreatures #DevDiary
from
The happy place
Hello friends! I am alive and I have completed another day without incident.
I did fitness, which is a highlight. There was a gym class with mostly middle-aged women on the cross trainer, and then immediately I went to another class, a pretty high-intensity class — I am doing cardio, you see.
It's a little bit of a life hack that I've learned: by going to double classes there's minimal overhead and laundry.
That’s pretty clever. I am pretty clever.
But it was hard. Then again, hardships follow everyone everywhere, it seems. They certainly are no strangers to me.
Come at me, hardships, I will stand ready to karate chop!!
You'll never even be able to tell from my sweat-drenched face whether I am crying or not as I chop.
Nobody will know
from
Poésies en Folies
Now more open, I've seen a light come on; green! The blaze of my emotions is almost under control.
I have turned over many hourglasses since I came into this world, and the earth is still round. With all due respect to flat-earthers, I tell myself that, in the end, life could be fantastic.
From my ears to my brain, the path is beginning to clear; not easy to wash away the accumulated mud. At last I am more of a listener, but God, how it costs me. Not so simple to stay attentive to the beating of their hearts. I must make them forget my mistakes. Focus on them, their stories, joys and hurts.
Still tired, I try to be like tomatoes: concentrated. But more often I feel crushed instead. The exercise costs me, and yet I am discovering new flavours, to their great delight. Appetite comes with eating, so at last I taste everything.
My nurse underlined one point: I have gained serenity, and though I have put away my fists, they remain close at hand. I hope one day to misplace them.
In small steps we try to progress; like wild animals, we must be tamed. An inner sun must shine again; its intense heat should make me melt. More than fifteen kilos, enough to grease my fingers. More than fifteen kilos, I carry a sandbag on me. More than fifteen kilos too many, indeed.
Hair masks my mouth; it is time to shave it off. A first gesture toward restoring my self-image: confidence will return.
In its drawer, a dictaphone, one button: reset. How I'd love to have the same one for my head…
Plenty of accounting ledgers are less complex. My life's covers more than four decades... An endless mess! Nothing has ever been put away: loose sheets, paper planes. Unclosed binders, unsorted files. What to keep, what to throw out? In this jumble, I know I can count on them. I plunge into their eyes, and I am sure of it: that will help me.
I emptied the black ink of my ideas into the sink. I rinsed it well, ran plenty of water. To write a new future, a safer one. Inspired, I take out the coloured pencils; I am going to cover everything over, even their fears.
Two years ago, death was prowling around me. As on a battlefield, my soul seemed to rise away. Disappearing seemed the best thing to do. Since then, it's the urge to grow old late that I've made my business.
#santémentale #psychiatrie #thérapie #poésie
from
Roscoe's Quick Notes

I won this server-based correspondence chess club game playing Black a few minutes ago with a basic 2 Rooks Checkmate. Is this the first combination checkmate everyone learns? I remember learning it as a young boy from my father nearly seventy years ago, and thinking then it was the coolest thing! I still smile every time I use it. :)
The graphic near the top of this post shows the position of pieces on our board at game's end. Our full move record follows: 1. e4 a6 2. Nf3 Nc6 3. Nc3 e5 4. a3 Nf6 5. d3 h6 6. Nd5 Nxd5 7. exd5 Nd4 8. Nxd4 exd4 9. Qe2+ Qe7 10. Kd1 Qxe2+ 11. Bxe2 Bc5 12. b4 Bb6 13. Bh5 g6 14. Bg4 h5 15. Bf3 O-O 16. Re1 c5 17. bxc5 Bxc5 18. g3 b6 19. Re7 Bxe7 20. Bh6 Re8 21. a4 Kh7 22. Bf4 Bb7 23. Rb1 Bc5 24. Bg5 Re5 25. Bf6 Rf5 26. Be5 Rxe5 27. a5 b5 28. g4 Rae8 29. Kd2 Bxd5 30. Bd1 f5 31. gxh5 Re5e6 32. h4 Be7 33. hxg6+ Kxg6 34. h5+ Kh6 35. c4 Bg2 36. f4 Rg8 37. cxb5 Bd5 38. b6 Rb8 39. Ba4 Bc6 40. Bb3 d5 41. Rg1 Kxh5 42. Bd1+ Kh6 43. Rh1+ Kg7 44. Rh5 Rf8 45. Rh1 Rh8 46. Rg1+ Kf8 47. Bf3 Bb4+ 48. Kc2 Rh2+ 49. Kb3 Bxa5 50. Rg5 Rf6 51. Rg1 Bxb6 52. Ra1 a5 53. Rc1 Bd7 54. Bxd5 Rd6 55. Bc4 a4+ 56. Kb4 Rh4 57. Rf1 Bd8 58. Kc5 Rc6+ 59. Kd5 Bf6 60. Re1 Rxf4 61. Bb5 Rc7 62. Ba6 Ra7 63. Bc4 a3 64. Ba2 Rf2 65. Ra1 Ra5+ 66. Kd6 Be8 67. Ke6 Ra6+ 68. Kd5 Bf7+ 69. Kc5 Rc2+ 70. Kb5 Ra8 71. Bb1 Rb2+ 72. Kc6 Ra6+ 73. Kc7 Be5+ 74. Kc8 Be6+ 75. Kd8 Bf6+ 76. Kc7 Rb5 77. Ba2 f4 78. Bxe6 Rxe6 79. Rxa3 Be5+ 80. Kd7 Kf7 81. Kc8 Ke7 82. Ra7+ Kf6 83. Rh7 f3 84. Rh6+ Kf5 85. Rh5+ Kf4 86. Rh4+ Ke3 87. Rh3 Rd5 88. Rh1 Rc6+ 89. Kb7 Rc3 90. Re1+ Kxd3 91. Rd1+ Kc2 92. Rh1 Rb5+ 93. Ka6 Rb1 94. Rh5 Ra3# 0-1
And the adventure does continue.