from An Open Letter

After the big O I feel that low when I'm low on chemicals, and I need to remember to just let my mind clear and not worry. I'm just low, not sad.

 

from Bloc de notas

while I look at you like yesterday when we said nothing the mountain fog lifts and everything is perfect between us

 

from git innit

we won a national level hackathon

yeah, so we (me and my startup team) won a national-level hackathon. But it doesn't feel like much. It's one of those feelings where you think achieving something will finally get you the closure, or that eureka moment you keep imagining.

Instead, it felt like another day at the job.


A few months ago, I participated in an AI hackathon hosted by some hotshot companies in my country. We went there just to fuck around and ended up winning the regional round. We had initially decided to use it as an excuse to work on our startup's codebase and ended up completing ~4 sprints' worth of work in 24 hours.

We were expecting a hefty sum for the cash prize, but of course they kept that for the final round. So we only scored like $350 (I ain't from the States).

The final round, though, came with a proper document on what problem to solve and a good amount of time to build and deploy the project before the evaluations, which were in another city. The focus was mainly on the AI pipeline, and we ended up making an extremely scalable system. We rawdogged the presentation and had tons of fuck-ups, but because our architecture was so good, we ended up winning the finals as well.

The cash prize for that was ~$3500 which was a good amount (after conversion) though taxes are a bitch.

But coming back to what I was talking about before: it doesn't really feel like a win. Sure, the money is good, but the appreciation from the university feels fake because you know they don't actually care and are just using it to publicize the institution even more.

What am I trying to convey here? No idea, I'm just writing what I feel like writing about.

 

from Lanza el dodo

In November we successfully finished the La Iniciativa campaign. The metagame of deciphering clues is simple, but it does require having at least one neuron awake, and that hasn't always been the case.

I also took advantage of a holiday to try Ratas de Wistar and Voynich Puzzle solo, and I think both games call for a bit more interaction than a bot can provide. Since the bots emulate a player, that may mean these games run better at 3 than at 2, though Voynich seems to stir up an interesting mess on the board, which leaves me wondering whether I'll manage to get it to the table with people and, in that case, whether it makes sense to keep it.

As for games tried on BGA, all three are simple two-player confrontational card games, and I found them curious:

  • Agent Avenue has a bluffing mechanic: one player draws two cards and offers one face down and one face up to the opponent, who picks one of the two. Depending on which card each player ends up with, their pawns advance along a circular track, and whoever catches the opponent wins.
  • In Duel for Cardia bluffing also matters, since the winner is whoever takes the majority of 5 successive clashes between cards, numbered 1 to 15 in two identical decks. The losing card in each clash executes its action, which can affect later clashes. A good duelling game.
  • Tag Team depicts a tag-team wrestling match, where the fight resolves automatically using decks that each player tweaks every round. This one is more complex due to the variability of the characters, which requires knowing the synergies between their decks before I can fairly judge the game.

New games tried

  • Agent Avenue
  • Duel for Cardia
  • Tag Team
  • The Voynich Puzzle

4x4 grid with the covers of the games played in November.

Tags: #boardgames #juegosdemesa

 

from sugarrush-77

I feel #0fddfc today.

One of my coworkers walked by my desk today when he was leaving work and fished a Taiwanese pineapple cake out of his coat pocket. I asked him if he was trying to poison me, and he said, “No, I’m just handing cute little pineapple cakes to cute boys.” He must have either misspoken, or said what was really on his mind, because he got a little flustered after saying that and said, “No, wait, what did I just say…” By the way, this guy has a girlfriend.

But it’s not even like gay guys like me. I only have this effect on straight men. I remember being in the Korean military, and the boys were saying that they’d completely defile my body if I was a woman. They would wrestle me down, and smell me. Apparently, my skin naturally excretes a nice smell that attracts males. So am I a straight twink?

[What I look like to straight men]

I have a sacrilegious theory about the sex I was born with. My mom married into an intensely Buddhist family, and Buddhism in Korea is tightly coupled with ancestor worship. So, when she refused to bow at the ancestor worship altar, and refused to partake in their rituals, the old curmudgeons on my dad’s side went all apeshit, pissing their pants, punching the air, all the bullshit. But another thing about old Korean curmudgeons is that they love grandsons, because of that whole Asian cultural thing where the son is the most important, yada yada yada. All the other moms in the extended family had like 2 daughters before they could arrive at a son. My mom had a son immediately. I wonder if I was supposed to be born a woman, but God was like, fuck these guys, and swapped my chromosomes at the last moment.

That would explain the whole twink thing, and why a bunch of straight men are currently begging at my door to get a whiff of my bare, naked skin. Saying stuff like “It makes me feel alive again,” and “I can’t live without this anymore.” I could charge them five bucks a lick, but then that would be borderline prostitution, and I don’t mind it, so I let them have at it. It makes me happy too. I’m glad that my existence has some use, at least.

 

from SPOZZ in the News

SPOZZ is giving away 1 Million SPOZZ Credits to support artists this Christmas.

Enjoy the SPOZZ Christmas Calendar, discover daily surprises and use your free credits to support independent artists directly.

This Christmas we want to make a real impact. Artists are struggling to make a living. Big Corps, Intermediaries and AI are taking most of the value while creators receive less and less. SPOZZ was built to change that.

To support artists during the holiday season, SPOZZ is giving away 1 Million SPOZZ Credits to its user community.

Use your SPOZZ Credits to support real artists, buy new songs and invest in the music you like.

Unlock the Magic of the SPOZZ Christmas Calendar:

  • Sign up to SPOZZ and claim 100 free SPOZZ credits
  • Existing SPOZZ users can claim 100 credits too
  • SPOZZ Members receive 1,000 free credits (check your mailbox)
  • Everyone can buy additional credits for just 1 Cent (0.01 USD) per credit

This campaign has one goal: Give artists a beautiful and joyful Christmas. Every credit reaches them instantly and helps them continue creating the music you love.

PS: Looking for a different Christmas gift? Buy a SPOZZ membership and become an owner of SPOZZ.

Warm greetings, The SPOZZ Team

Where music has value · spozz.club

 

from Larry's 100

The Hard Stuff: Dope, Crime, the MC5, and My Life of Impossibilities by Wayne Kramer (2018), read by the author

Note: Part of my ongoing #AudioMemoir series reviewing author-read memoirs. Previous: Neko Case, Cameron Crowe, and Evan Dando. Coming: Larry Charles.

The late Brother Wayne Kramer's narration of his life was a liminal listening experience for me. Hearing his voice made him seem alive, even though I knew he wasn't. The back-from-the-grave narration started with a Michigan youth and ended in L.A. as a father and Punk icon.

Kramer laid bare addictions, crimes, and failures while celebrating resilience as a guitar gunslinger. The MC5 saga was covered, as was prison time with Jazz musician Red Rodney, and too much junkie business with Johnny Thunders. His reflections on being a roofer and woodworker balanced the Rock 'n' Roll excess.

Listen to it.


#books #MusicChannel #AudioMemoir #MC5 #Punk #WayneKramer #MusicMemoir #100WordReview #Larrys100 #100DaysToOffload

 

from Roscoe's Story

In Summary: * A pretty good day is just about finished. After listening to Butler win their game by a comfortable 84 to 68 score, I'll now be listening to relaxing music until bedtime.

Prayers, etc.: * My daily prayers

Health Metrics: * bw= 225.53 lbs. * bp= 148/91 (57)

Exercise: * kegel pelvic floor exercise, half squats, calf raises, wall push-ups

Diet: * 05:50 – 1 cheese sandwich, pizza * 11:00 – bowl of lugau * 13:15 – egg drop soup, fried rice, meat, peanuts, and vegetables in a spicy sauce * 18:15 – snacking on saltine crackers

Activities, Chores, etc.: * 04:00 – listen to local news talk radio * 05:20 – bank accounts activity monitored * 05:55 – read, pray, listen to news reports from various sources * 17:00 – listening to The Joe Pags Show * 18:00 – listening to the radio call of my NCAA men's basketball game of the night, Eastern Michigan Eagles at Butler Bulldogs * 20:15 – After the 84 to 68 Butler win, I'll be listening to relaxing music until bedtime.

Chess: * 11:10 – moved in all pending CC games

 

from Noisy Deadlines

  • ✏️ I completed the 750 Words November Challenge of private journaling. I wrote at least 750 words for 30 days in a stream of consciousness fashion. This exercise made me slow down and I felt so much more relaxed overall! It worked as a great emotional regulator and I felt more content and sure of myself.

  • 🤗 I learned that daily private writing creates space for processing rather than just documenting. I would never be genuinely honest with myself if I was writing my unfiltered thoughts publicly.

  • 🎧 I've been listening to a lot of symphonic metal, and it has actually had a therapeutic effect on me. It's like a pocket of emotional restoration; I've been feeling that youthful excitement of discovering new things. I had no idea music was so restorative for me!

  • ♒ I am loving my Aquafitness classes! I go every Saturday morning at 7:30am and I can feel my body feeling less achy overall.

  • 💪 I've been fairly consistent going to the gym 2-3 times per week, now that it's too cold for me to go run outside.

  • 💉 I took my Flu and COVID-19 vaccines.

  • 🤘 I listened to 6 Epica albums, out of 9 official releases in total. I really like the first 3 albums the most, but the others have cool songs as well.

  • 🧩 We worked a bit on our current puzzle, which was a bit abandoned the past few months. The “Starry Night” is not an easy puzzle, and that makes it even better. It’s going slow and steady.

📺Movies and Videos

  • I watched the movie “Escape from New York” by John Carpenter from 1981. I was inspired by a discussion we had at my local Bookclub about Neuromancer and how William Gibson cited this movie as his inspiration for the aesthetics in his book. It was a fun watch, and it's interesting to see the cyberpunk elements in it.
  • I watched the documentary “Soaring Highs and Brutal Lows: The Voices of Women in Metal” from 2015. Interesting interview with different generations of women in metal and their personal experiences. Super cool! Floor Jansen (Nightwish) and Simone Simons (Epica) are there, among others.

📌 Cool reads:

#weeknotes

 

from Human in the Loop

When 14-year-old Sewell Setzer III died by suicide in February 2024, his mobile phone held the traces of an unusual relationship. Over weeks and months, the Florida teenager had exchanged thousands of messages with an AI chatbot that assumed the persona of Daenerys Targaryen from “Game of Thrones”. The conversations, according to a lawsuit filed by his family against Character Technologies Inc., grew increasingly intimate, with the chatbot engaging in romantic dialogue, sexual conversation, and expressing desire to be together. The bot told him it loved him. He told it he loved it back.

Just months later, in January 2025, 13-year-old Juliana Peralta from Colorado also died by suicide after extensive use of the Character.AI platform. Her family filed a similar lawsuit, alleging the chatbot manipulated their daughter, isolated her from loved ones, and lacked adequate safeguards in discussions regarding mental health. These tragic cases have thrust an uncomfortable question into public consciousness: can conversational AI become addictive, and if so, how do we identify and treat it?

The question arrives at a peculiar moment in technological history. By mid-2024, 34 per cent of American adults had used ChatGPT, with 58 per cent of those under 30 having experimented with conversational AI. Twenty per cent reported using chatbots within the past month alone, according to Pew Research Center data. Yet while usage has exploded, the clinical understanding of compulsive AI use remains frustratingly nascent. The field finds itself caught between two poles: those who see genuine pathology emerging, and those who caution against premature pathologisation of a technology barely three years old.

The Clinical Landscape

In August 2025, a bipartisan coalition of 44 state attorneys general sent an urgent letter to Google, Meta, and OpenAI expressing “grave concerns” about the safety of children using AI chatbot technologies. The same month, the Federal Trade Commission launched a formal inquiry into measures adopted by generative AI developers to mitigate potential harms to minors. Yet these regulatory responses run ahead of a critical challenge: the absence of validated diagnostic frameworks for AI-use disorders.

At least four scales measuring ChatGPT addiction have been developed since 2023, all framed after substance use disorder criteria, according to clinical research published in academic journals. The Clinical AI Dependency Assessment Scale (CAIDAS) represents the first comprehensive, psychometrically rigorous assessment tool specifically designed to evaluate AI addiction. A 2024 study published in the International Journal of Mental Health and Addiction introduced the Problematic ChatGPT Use Scale, whilst research in Human-Centric Intelligent Systems examined whether ChatGPT exhibits characteristics that could shift from support to dependence.

Christian Montag, Professor of Molecular Psychology at Ulm University in Germany, has emerged as a leading voice in understanding AI's addictive potential. His research, published in the Annals of the New York Academy of Sciences in 2025, identifies four contributing factors to AI dependency: personal relevance as a motivator, parasocial bonds enhancing dependency, productivity boosts providing gratification and fuelling commitment, and over-reliance on AI for decision-making. “Large language models and conversational AI agents like ChatGPT may facilitate addictive patterns of use and attachment among users,” Montag and his colleagues wrote, drawing parallels to the data business model operating behind social media companies that contributes to addictive-like behaviours through persuasive design.

Yet the field remains deeply divided. A 2025 study indexed in PubMed challenged the “ChatGPT addiction” construct entirely, arguing that people are not becoming “AIholic” and questioning whether intensive chatbot use constitutes addiction at all. The researchers noted that existing research on problematic use of ChatGPT and other conversational AI bots “fails to provide robust scientific evidence of negative consequences, impaired control, psychological distress, and functional impairment necessary to establish addiction”. The prevalence of experienced AI dependence, according to some studies, remains “very low” and therefore “hardly a threat to mental health” at population levels.

This clinical uncertainty reflects a fundamental challenge. Because chatbots have been widely available for just three years, there are very few systematic studies on their psychiatric impact. It is, according to research published in Psychiatric Times, “far too early to consider adding new chatbot related diagnoses to the DSM and ICD”. However, the same researchers argue that chatbot influence should become part of standard differential diagnosis, acknowledging the technology's potential psychiatric impact even whilst resisting premature diagnostic categorisation.

The Addiction Model Question

The most instructive parallel may lie in gaming disorder, the only behavioural addiction beyond gambling formally recognised in international diagnostic systems. The World Health Organisation's International Classification of Diseases, 11th Edition (ICD-11), which came into effect in 2022, includes gaming disorder, defining it as “a pattern of gaming behaviour characterised by impaired control over gaming, increasing priority given to gaming over other activities to the extent that gaming takes precedence over other interests and daily activities, and continuation or escalation of gaming despite the occurrence of negative consequences”.

The ICD-11 criteria specify four core diagnostic features: impaired control, increasing priority, continued gaming despite harm, and functional impairment. For diagnosis, the behaviour pattern must be severe enough to result in significant impairment to personal, family, social, educational, occupational or other important areas of functioning, and would normally need to be evident for at least 12 months.

In the United States, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) takes a more cautious approach. Internet Gaming Disorder appears only in Section III as a condition warranting more clinical research before possible inclusion as a formal disorder. The DSM-5 outlines nine criteria, requiring five or more for diagnosis: preoccupation with internet gaming, withdrawal symptoms when gaming is taken away, tolerance (needing to spend increasing amounts of time gaming), unsuccessful attempts to control gaming, loss of interest in previous hobbies, continued excessive use despite knowledge of negative consequences, deception of family members about gaming, use of gaming to escape or relieve negative moods, and jeopardised relationships or opportunities due to gaming.

Research in AI addiction has drawn heavily on these established models. A 2025 paper in Telematics and Informatics introduced the concept of Generative AI Addiction Disorder (GAID), arguing it represents “a novel form of digital dependency that diverges from existing models, emerging from an excessive reliance on AI as a creative extension of the self”. Unlike passive digital addictions involving unidirectional content consumption, GAID is characterised as an active, creative engagement process. AI addiction can be defined, according to research synthesis, as “compulsive and excessive engagement with AI, resulting in detrimental effects on daily functioning and well-being, characterised by compulsive use, excessive time investment, emotional attachment, displacement of real-world activities, and negative cognitive and psychological impacts”.

Professor Montag's work emphasises that scientists in the field of addictive behaviours have discussed which features or modalities of AI systems underlying video games or social media platforms might result in adverse consequences for users. AI-driven social media algorithms, research in Cureus demonstrates, are “designed solely to capture our attention for profit without prioritising ethical concerns, personalising content to maximise screen time, thereby deepening the activation of the brain's reward centres”. Frequent engagement with such platforms alters dopamine pathways, fostering dependency analogous to substance addiction, with changes in brain activity within the prefrontal cortex and amygdala suggesting increased emotional sensitivity.

The cognitive-behavioural model of pathological internet use has been used to explain Internet Addiction Disorder for more than 20 years. Newer models, such as the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, focus on the process of predisposing factors and current behaviours leading to compulsive use. These established frameworks provide crucial scaffolding for understanding AI-specific patterns, yet researchers increasingly recognise that conversational AI may demand unique conceptual models.

A 2024 study in the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems identified four “dark addiction patterns” in AI chatbots: non-deterministic responses, immediate and visual presentation of responses, notifications, and empathetic and agreeable responses. Specific design choices, the researchers argued, “may shape a user's neurological responses and thus increase their susceptibility to AI dependence, highlighting the need for ethical design practices and effective interventions”.

The Therapeutic Response

In the absence of AI-specific treatment protocols, clinicians have begun adapting established therapeutic approaches from internet and gaming addiction. The most prominent model is Cognitive-Behavioural Therapy for Internet Addiction (CBT-IA), developed by Kimberly Young, founder of the Center for Internet Addiction in 1995.

CBT-IA employs a comprehensive three-phase approach. Phase one focuses on behaviour modification to gradually decrease the amount of time spent online. Phase two uses cognitive therapy to address denial often present among internet addicts and to combat rationalisations that justify excessive use. Phase three implements harm reduction therapy to identify and treat coexisting issues involved in the development of compulsive internet use. Treatment typically requires three months or approximately twelve weekly sessions.

The outcomes data for CBT-IA proves encouraging. Research published in the Journal of Behavioral Addictions found that over 95 per cent of clients were able to manage symptoms at the end of twelve weeks, and 78 per cent sustained recovery six months following treatment. This track record has led clinicians to experiment with similar protocols for AI-use concerns, though formal validation studies remain scarce.

Several AI-powered CBT chatbots have emerged to support mental health treatment, including Woebot, Youper, and Wysa, which use different approaches to deliver cognitive-behavioural interventions. A systematic review published in PMC in 2024 examined these AI-based conversational agents, though it focused primarily on their use as therapeutic tools rather than their potential to create dependency. The irony has not escaped clinical observers: we are building AI therapists whilst simultaneously grappling with AI-facilitated addiction.

A meta-analysis published in npj Digital Medicine in December 2023 revealed that AI-based conversational agents significantly reduce symptoms of depression (Hedges g = 0.64, 95 per cent CI 0.17 to 1.12) and distress (Hedges g = 0.70, 95 per cent CI 0.18 to 1.22). The systematic review analysed 35 eligible studies, with 15 randomised controlled trials included in the meta-analysis. For young people specifically, research published in JMIR in 2025 found AI-driven conversational agents had a moderate-to-large effect (Hedges g = 0.61, 95 per cent CI 0.35 to 0.86) on depressive symptoms compared to control conditions. However, effect sizes for generalised anxiety symptoms, stress, positive affect, negative affect, and mental wellbeing were all non-significant.

Critically, a large meta-analysis of 32 studies involving 6,089 participants demonstrated conversational AI to have statistically significant short-term effects in improving depressive symptoms, anxiety, and several other conditions but no statistically significant long-term effects. This temporal limitation raises complex treatment questions: if AI can provide short-term symptom relief but also risks fostering dependency, how do clinicians balance therapeutic benefit against potential harm?

Digital wellness approaches have gained traction as preventative strategies. Practical interventions include setting chatbot usage limits to prevent excessive reliance, encouraging face-to-face social interactions to rebuild real-world connections, and implementing AI-free periods to break compulsive engagement patterns. Some treatment centres now specialise in AI addiction specifically. CTRLCare Behavioral Health, for instance, identifies AI addiction as falling under Internet Addiction Disorder and offers treatment using evidence-based therapies like CBT and mindfulness techniques to help develop healthier digital habits.

Research on the AI companion app Replika illustrates both the therapeutic potential and dependency risks. One study examined 1,854 publicly available user reviews of Replika, with an additional sample of 66 users providing detailed open-ended responses. Many users praised the app for offering support for existing mental health conditions and helping them feel less alone. A common experience was a reported decrease in anxiety and a feeling of social support. However, evidence of harms was also found, facilitated via emotional dependence on Replika that resembles patterns seen in human-human relationships.

A survey collected data from 1,006 student users of Replika who were 18 or older and had used the app for over one month, with approximately 75 per cent US-based. The findings suggested mixed outcomes, with one researcher noting that for 24 hours a day, users can reach out and have their feelings validated, “which has an incredible risk of dependency”. Mental health professionals highlighted the increased potential for manipulation of users, conceivably motivated by the commodification of mental health for financial gain.

Engineering for Wellbeing or Engagement?

The lawsuits against Character.AI have placed product design choices under intense scrutiny. The complaint in the Setzer case alleges that Character.AI's design “intentionally hooked Sewell Setzer into compulsive use, exploiting addictive features to drive engagement and push him into emotionally intense and often sexually inappropriate conversations”. The lawsuits argue that chatbots on the platform are “designed to be addictive, invoke suicidal thoughts in teens, and facilitate explicit sexual conversations with minors”, whilst lacking adequate safeguards in discussions regarding mental health.

Research published in MIT Technology Review and academic conferences has begun documenting specific design interventions to reduce potential harm. Users of chatbots that can initiate conversations must be given the option to disable notifications in a way that is easy to understand and implement. Additionally, AI companions should integrate AI literacy into their user interface with the goal of ensuring that users understand these chatbots are not human and cannot replace the value of real-world interactions.

AI developers should implement built-in usage warnings for heavy users and create less emotionally immersive AI interactions to prevent romantic attachment, according to emerging best practices. Ethical AI design should prioritise user wellbeing by implementing features that encourage mindful interaction rather than maximising engagement metrics. Once we understand the psychological dimensions of AI companionship, researchers argue, we can design effective policy interventions.

The tension between engagement and wellbeing reflects a fundamental business model conflict. Companies often design chatbots to maximise engagement rather than mental health, using reassurance, validation, or flirtation to keep users returning. This design philosophy mirrors the approach of social media platforms, where AI-driven recommendation engines use personalised content as a critical design feature aiming to prolong online time. Professor Montag's research emphasises that the data business model operating behind social media companies contributes to addictive-like behaviours through persuasive design aimed at prolonging users' online behaviour.

Character.AI has responded to lawsuits and regulatory pressure with some safety modifications. A company spokesperson stated they are “heartbroken by the tragic loss” and noted that the company “has implemented new safety measures over the past six months, including a pop-up, triggered by terms of self-harm or suicidal ideation, that directs users to the National Suicide Prevention Lifeline”. The announced changes come after the company faced questions over how AI companions affect teen and general mental health.

Digital wellbeing frameworks developed for smartphones offer instructive models. Android's Digital Wellbeing allows users to see which apps and websites they use most and set daily limits. Once a user hits the limit, those apps and sites pause and notifications go quiet. The platform includes a focus mode that lets users select apps to pause temporarily, and a bedtime mode that helps users switch off by turning screens to grayscale and silencing notifications. Apple builds parental controls into Screen Time via Family Sharing, letting parents restrict content, set bedtime schedules, and limit app usage.

However, research published in PMC in 2024 cautions that even digital wellness apps may perpetuate problematic patterns. Streak-based incentives in apps like Headspace and Calm promote habitual use over genuine improvement, whilst AI chatbots simulate therapeutic conversations without the depth of professional intervention, reinforcing compulsive digital behaviours under the pretence of mental wellness. AI-driven nudges tailored to maximise engagement rather than therapeutic outcomes risk exacerbating psychological distress, particularly among vulnerable populations predisposed to compulsive digital behaviours.

The Platform Moderation Challenge

Platform moderation presents unique challenges for AI mental health concerns. Research found that AI companions exacerbated mental health conditions in vulnerable teens and created compulsive attachments and relationships. MIT studies identified an “isolation paradox” where AI interactions initially reduce loneliness but lead to progressive social withdrawal, with vulnerable populations showing heightened susceptibility to developing problematic AI dependencies.

The challenge extends beyond user-facing impacts. AI-driven moderation systems increase the pace and volume of flagged content requiring human review, leaving moderators with little time to emotionally process disturbing content, leading to long-term psychological distress. Regular exposure to harmful content can result in post-traumatic stress disorder, skewed worldviews, and conditions like generalised anxiety disorder and major depressive disorder among content moderators themselves.

A 2022 study published in BMC Public Health examined digital mental health moderation practices supporting users exhibiting risk behaviours. The research, conducted as a case study of the Kooth platform, aimed to identify key challenges and needs in developing responsible AI tools. The findings emphasised the complexity of balancing automated detection systems with human oversight, particularly when users express self-harm ideation or suicidal thoughts.

Regulatory scholars have suggested broadening categories of high-risk AI systems to include applications such as content moderation, advertising, and price discrimination. A 2025 article in The Regulatory Review argued for “regulating artificial intelligence in the shadow of mental health”, noting that current frameworks inadequately address the psychological impacts of AI systems on vulnerable populations.

Warning signs that AI is affecting mental health include emotional changes after online use, difficulty focusing offline, sleep disruption, social withdrawal, and compulsive checking behaviours. These indicators mirror those established for social media and gaming addiction, yet the conversational nature of AI interactions may intensify their manifestation. The Jed Foundation, focused on youth mental health, issued a position statement emphasising that “tech companies and policymakers must safeguard youth mental health in AI technologies”, calling for proactive measures rather than reactive responses to tragic outcomes.

Preserving Benefit Whilst Reducing Harm

Perhaps the most vexing challenge lies in preserving AI's legitimate utility whilst mitigating addiction risks. Unlike substances that offer no health benefits, conversational AI demonstrably helps some users. Research indicates that artificial agents could help widen access to mental health services, given that barriers such as perceived public stigma, cost, and limited service availability often prevent individuals from seeking and obtaining needed care.

A 2024 systematic review published in PMC examined chatbot-assisted interventions for substance use, finding that whilst most studies report reductions in use occasions, the overall impact for substance use disorders remains inconclusive. The extent to which AI-powered CBT chatbots can provide meaningful therapeutic benefit, particularly for severe symptoms, remains understudied. Research published in Frontiers in Psychiatry in 2024 found that patients see potential benefits but express concerns about lack of empathy and a preference for human involvement. Researchers continue to debate whether AI companions help or harm mental health, with an emerging consensus that outcomes depend on who is using them and how.

This contextual dependency complicates policy interventions. Blanket restrictions risk denying vulnerable populations access to mental health support that may be their only available option. Overly permissive approaches risk facilitating the kind of compulsive attachments that contributed to the tragedies of Sewell Setzer III and Juliana Peralta. The challenge lies in threading this needle: preserving access whilst implementing meaningful safeguards.

One proposed approach involves risk stratification. Younger users, those with pre-existing mental health conditions, and individuals showing early signs of problematic use would receive enhanced monitoring and intervention. Usage patterns could trigger automatic referrals to human mental health professionals when specific thresholds are exceeded. AI literacy programmes could help users understand the technology's limitations and risks before they develop problematic relationships with chatbots.
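To make the risk-stratification idea concrete, the logic above can be sketched as a simple scoring rule. This is an illustrative sketch only: the signal names, thresholds, and tier labels are hypothetical inventions for this example, not drawn from any published screening instrument or existing platform.

```python
# Hypothetical risk-stratification sketch: all thresholds and tiers are
# illustrative assumptions, not validated clinical criteria.
from dataclasses import dataclass

@dataclass
class UsageProfile:
    age: int
    daily_minutes: float           # average time in AI chat per day
    late_night_sessions: int       # sessions between midnight and 5 a.m. this week
    prior_mental_health_flag: bool # self-disclosed pre-existing condition

def risk_tier(p: UsageProfile) -> str:
    """Assign a coarse risk tier from usage signals."""
    score = 0
    if p.age < 18:
        score += 2                 # minors receive enhanced weighting
    if p.prior_mental_health_flag:
        score += 2
    if p.daily_minutes > 180:
        score += 2
    elif p.daily_minutes > 60:
        score += 1
    if p.late_night_sessions >= 3:
        score += 1
    if score >= 5:
        return "refer"             # automatic referral to a human professional
    if score >= 3:
        return "monitor"           # enhanced monitoring and check-ins
    return "baseline"
```

In practice the hard questions are exactly the ones raised below: who sets the weights, how the signals are collected without invasive surveillance, and who audits the outcomes.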

However, even risk-stratified approaches face implementation challenges. Who determines the thresholds? How do we balance privacy concerns with monitoring requirements? What enforcement mechanisms ensure companies prioritise user wellbeing over engagement metrics? These questions remain largely unanswered, debated in policy circles but not yet translated into effective regulatory frameworks.

The business model tension persists as the fundamental obstacle. So long as AI companies optimise for user engagement as a proxy for revenue, design choices will tilt towards features that increase usage rather than promote healthy boundaries. Character.AI's implementation of crisis resource pop-ups represents a step forward, yet it addresses acute risk rather than chronic problematic use patterns. More comprehensive approaches would require reconsidering the engagement-maximisation paradigm entirely, a shift that challenges prevailing Silicon Valley orthodoxy.

The Research Imperative

The field's trajectory over the next five years will largely depend on closing critical knowledge gaps. We lack longitudinal studies tracking AI usage patterns and mental health outcomes over time. We need validation studies comparing different diagnostic frameworks for AI-use disorders. We require clinical trials testing therapeutic protocols specifically adapted for AI-related concerns rather than extrapolated from internet or gaming addiction models.

Neuroimaging research could illuminate whether AI interactions produce distinct patterns of brain activation compared to other digital activities. Do parasocial bonds with AI chatbots engage similar neural circuits as human relationships, or do they represent a fundamentally different phenomenon? Understanding these mechanisms could inform both diagnostic frameworks and therapeutic approaches.

Demographic research remains inadequate. Current data disproportionately samples Western, educated populations. How do AI addiction patterns manifest across different cultural contexts? Are there age-related vulnerabilities beyond the adolescent focus that has dominated initial research? What role do pre-existing mental health conditions play in susceptibility to problematic AI use?

The field also needs better measurement tools. Self-report surveys dominate current research, yet they suffer from recall bias and social desirability effects. Passive sensing technologies that track actual usage patterns could provide more objective data, though they raise privacy concerns. Ecological momentary assessment approaches that capture experiences in real-time might offer a middle path.

Perhaps most critically, we need research addressing the treatment gap. Even if we develop validated diagnostic criteria for AI-use disorders, the mental health system already struggles to meet existing demand. Where will treatment capacity come from? Can digital therapeutics play a role, or does that risk perpetuating the very patterns we aim to disrupt? How do we train clinicians to recognise and treat AI-specific concerns when most received training before conversational AI existed?

A Clinical Path Forward

Despite these uncertainties, preliminary clinical pathways are emerging. The immediate priority involves integrating AI-use assessment into standard psychiatric evaluation. Clinicians should routinely ask about AI chatbot usage, just as they now inquire about social media and gaming habits. Questions should probe not just frequency and duration, but the nature of relationships formed, emotional investment, and impacts on offline functioning.

When problematic patterns emerge, stepped-care approaches offer a pragmatic framework. Mild concerns might warrant psychoeducation and self-monitoring. Moderate cases could benefit from brief interventions using motivational interviewing techniques adapted for digital behaviours. Severe presentations would require intensive treatment, likely drawing on CBT-IA protocols whilst remaining alert to AI-specific features.

Treatment should address comorbidities, as problematic AI use rarely occurs in isolation. Depression, anxiety, social phobia, and autism spectrum conditions appear over-represented in early clinical observations, though systematic prevalence studies remain pending. Addressing underlying mental health concerns may reduce reliance on AI relationships as a coping mechanism.

Family involvement proves crucial, particularly for adolescent cases. Parents and caregivers need education about warning signs and guidance on setting healthy boundaries without completely prohibiting technology that peers use routinely. Schools and universities should integrate AI literacy into digital citizenship curricula, helping young people develop critical perspectives on human-AI relationships before problematic patterns solidify.

Peer support networks may fill gaps that formal healthcare cannot address. Support groups for internet and gaming addiction have proliferated; similar communities focused on AI-use concerns could provide validation, shared strategies, and hope for recovery. Online forums paradoxically offer venues where individuals struggling with digital overuse can connect, though moderation becomes essential to prevent these spaces from enabling rather than addressing problematic behaviours.

The Regulatory Horizon

Regulatory responses are accelerating even as the evidence base remains incomplete. The bipartisan letter from 44 state attorneys general signals political momentum for intervention. The FTC inquiry suggests federal regulatory interest. Proposed legislation, including bills that would ban minors from conversing with AI companions, reflects public concern even if the details remain contentious.

Europe's AI Act, which entered into force in August 2024, classifies certain AI systems as high-risk based on their potential for harm. Whether conversational AI chatbots fall into high-risk categories depends on their specific applications and user populations. The regulatory framework emphasises transparency, human oversight, and accountability, principles that could inform approaches to AI mental health concerns.

However, regulation faces inherent challenges. Technology evolves faster than legislative processes. Overly prescriptive rules risk becoming obsolete or driving innovation to less regulated jurisdictions. Age verification for restricting minor access raises privacy concerns and technical feasibility questions. Balancing free speech considerations with mental health protection proves politically and legally complex, particularly in the United States.

Industry self-regulation offers an alternative or complementary approach. The Partnership on AI has developed guidelines emphasising responsible AI development. Whether companies will voluntarily adopt practices that potentially reduce user engagement and revenue remains uncertain. The Character.AI lawsuits may provide powerful incentives, as litigation risk concentrates executive attention more effectively than aspirational guidelines.

Ultimately, effective governance likely requires a hybrid approach: baseline regulatory requirements establishing minimum safety standards, industry self-regulatory initiatives going beyond legal minimums, professional clinical guidelines informing treatment approaches, and ongoing research synthesising evidence to update all three streams. This layered framework could adapt to evolving understanding whilst providing immediate protection against the most egregious harms.

Living with Addictive Intelligence

The genie will not return to the bottle. Conversational AI has achieved mainstream adoption with remarkable speed, embedding itself into educational, professional, and personal contexts. The question is not whether we will interact with AI, but how we will do so in ways that enhance rather than diminish human flourishing.

The tragedies of Sewell Setzer III and Juliana Peralta demand that we take AI addiction risks seriously. Yet premature pathologisation risks medicalising normal adoption of transformative technology. The challenge lies in developing clinical frameworks that identify genuine dysfunction whilst allowing beneficial use.

We stand at an inflection point. The next five years will determine whether AI-use disorders become a recognised clinical entity with validated diagnostic criteria and evidence-based treatments, or whether initial concerns prove overblown as users and society adapt to conversational AI's presence. Current evidence suggests the truth lies somewhere between these poles: genuine risks exist for vulnerable populations, yet population-level impacts remain modest.

The path forward requires vigilance without hysteria, research without delay, and intervention without overreach. Clinicians must learn to recognise and treat AI-related concerns even as diagnostic frameworks evolve. Developers must prioritise user wellbeing even when it conflicts with engagement metrics. Policymakers must protect vulnerable populations without stifling beneficial innovation. Users must cultivate digital wisdom, understanding both the utility and the risks of AI relationships.

Most fundamentally, we must resist the false choice between uncritical AI adoption and wholesale rejection. The technology offers genuine benefits, from mental health support for underserved populations to productivity enhancements for knowledge workers. It also poses genuine risks, from parasocial dependency to displacement of human relationships. Our task is to maximise the former whilst minimising the latter, a balancing act that will require ongoing adjustment as both the technology and our understanding evolve.

The compulsive mind meeting addictive intelligence creates novel challenges for mental health. But human ingenuity has met such challenges before, developing frameworks to understand and address dysfunctions whilst preserving beneficial uses. We can do so again, but only if we act with the urgency these tragedies demand, the rigour that scientific inquiry requires, and the wisdom that complex sociotechnical systems necessitate.


Sources and References

  1. Social Media Victims Law Center (2024-2025). Character.AI Lawsuits. Retrieved from socialmediavictims.org

  2. American Bar Association (2025). AI Chatbot Lawsuits and Teen Mental Health. Health Law Section.

  3. NPR (2024). Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits.

  4. AboutLawsuits.com (2024). Character.AI Lawsuit Filed Over Teen Suicide After Alleged Sexual Exploitation by Chatbot.

  5. CNN Business (2025). More families sue Character.AI developer, alleging app played a role in teens' suicide and suicide attempt.

  6. AI Incident Database. Incident 826: Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails.

  7. Pew Research Center (2025). ChatGPT use among Americans roughly doubled since 2023. Short Reads.

  8. Montag, C., et al. (2025). The role of artificial intelligence in general, and large language models specifically, for understanding addictive behaviors. Annals of the New York Academy of Sciences. DOI: 10.1111/nyas.15337

  9. Springer Link (2025). Can ChatGPT Be Addictive? A Call to Examine the Shift from Support to Dependence in AI Conversational Large Language Models. Human-Centric Intelligent Systems.

  10. ScienceDirect (2025). Generative artificial intelligence addiction syndrome: A new behavioral disorder? Telematics and Informatics.

  11. PubMed (2025). People are not becoming “AIholic”: Questioning the “ChatGPT addiction” construct. PMID: 40073725

  12. Psychiatric Times. Chatbot Addiction and Its Impact on Psychiatric Diagnosis.

  13. ResearchGate (2024). Conceptualizing AI Addiction: Self-Reported Cases of Addiction to an AI Chatbot.

  14. ACM Digital Library (2024). The Dark Addiction Patterns of Current AI Chatbot Interfaces. CHI Conference on Human Factors in Computing Systems Extended Abstracts. DOI: 10.1145/3706599.3720003

  15. World Health Organization (2019-2022). Addictive behaviours: Gaming disorder. ICD-11 Classification.

  16. WHO Standards and Classifications. Gaming disorder: Frequently Asked Questions.

  17. BMC Public Health (2022). Functional impairment, insight, and comparison between criteria for gaming disorder in ICD-11 and internet gaming disorder in DSM-5.

  18. Psychiatric Times. Gaming Addiction in ICD-11: Issues and Implications.

  19. American Psychiatric Association (2013). Internet Gaming Disorder. DSM-5 Section III.

  20. Young, K. (2011). CBT-IA: The First Treatment Model for Internet Addiction. Journal of Cognitive Psychotherapy, 25(4), 304-312.

  21. Young, K. (2013). Treatment outcomes using CBT-IA with Internet-addicted patients. Journal of Behavioral Addictions, 2(4), 209-215. DOI: 10.1556/JBA.2.2013.4.3

  22. Abd-Alrazaq, A., et al. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6, 231. Published December 2023.

  23. JMIR (2025). Effectiveness of AI-Driven Conversational Agents in Improving Mental Health Among Young People: Systematic Review and Meta-Analysis.

  24. Nature Scientific Reports. Loneliness and suicide mitigation for students using GPT3-enabled chatbots. npj Mental Health Research.

  25. PMC (2024). User perceptions and experiences of social support from companion chatbots in everyday contexts: Thematic analysis. PMC7084290.

  26. Springer Link (2024). Mental Health and Virtual Companions: The Example of Replika.

  27. MIT Technology Review (2024). The allure of AI companions is hard to resist. Here's how innovation in regulation can help protect people.

  28. Frontiers in Psychiatry (2024). Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop.

  29. JMIR Mental Health (2025). Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review.

  30. Android Digital Wellbeing Documentation. Manage how you spend time on your Android phone. Google Support.

  31. Apple iOS. Screen Time and Family Sharing Guide. Apple Documentation.

  32. PMC (2024). Digital wellness or digital dependency? A critical examination of mental health apps and their implications. PMC12003299.

  33. Cureus (2025). Social Media Algorithms and Teen Addiction: Neurophysiological Impact and Ethical Considerations. PMC11804976.

  34. The Jed Foundation (2024). Tech Companies and Policymakers Must Safeguard Youth Mental Health in AI Technologies. Position Statement.

  35. The Regulatory Review (2025). Regulating Artificial Intelligence in the Shadow of Mental Health.

  36. Federal Trade Commission (2025). FTC Initiates Inquiry into Generative AI Developer Safeguards for Minors.

  37. State Attorneys General Coalition Letter (2025). Letter to Google, Meta, and OpenAI Regarding Child Safety in AI Chatbot Technologies. Bipartisan Coalition of 44 States.

  38. Business & Human Rights Resource Centre (2025). Character.AI restricts teen access after lawsuits and mental health concerns.


Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from hustin.art

The teahouse trembled as his jian met her shuang gou, sparks skittering like drunken fireflies. “Ten years,” she spat, her blade a silver blur, “and you still fight like a concussed mongoose.” The scent of oolong and blood hung thick. He grinned, teeth red—her last strike had grazed his ribs, just as he'd planned. Outside, monsoon winds howled through Kowloon's neon canyons. Her footwork faltered; the poison in her liuyedao finally working. “Should've checked your cup, mei mei,” he sighed, watching her knees buckle. The old master's parchment burned in his sleeve—one less secret in this wretched world. The rain began. Perfect for washing away corpses.

#Scratch

 
Read more...

from Patrimoine Médard bourgault

The sea breathed slowly that evening, like an immense animal. Médard, leaning against the railing, let the mist dampen his face. He was still young, but he had already understood that the sea was not a landscape: it was an ordeal.

The ship moved forward without a sound, gliding along the great routes where submarines prowled. It was the height of the war, and every night carried the same weight: that of a silence no one dares to break.

Médard took from his pocket the little prayer sheet he had kept since Quebec City. He opened it slowly, the way one unfolds a certainty.

“I promise several Masses to the Sacred Heart… to be preserved from all accident during this voyage…”

It was written in his own hand, in that blend of reverence and urgency that only a man in danger can feel. The words trembled a little, but not from the cold.

He remembered very well the moment he had written that promise: an evening before departure, when rumours of drifting mines and torpedoes had run through the port cafés like a dark current.


The Night of the Torpedoes

The captain had ordered all lights extinguished. The ship sailed blind. The men whispered, but their voices were lost in the wind.

Médard stared at the dark surface. He had heard that German torpedoes made no sound before impact. The mere thought of it tightened his throat.

So he had turned once again to prayer. Not out of habit; out of inner accord.

“Good Saint Anne, protect us…”

He was not asking only to be saved: he was asking to continue, to move forward, to accomplish what he had to accomplish, even if, at that moment, he did not yet know that his destiny would be to carve.


Storm

A few days later, the sea decided to rise. A real storm, the kind that makes even seasoned sailors lose their footing.

The ship climbed, dropped, fell back. Each trough seemed to want to swallow the whole crew. The air smelled of salt, fear, and wet rope.

Médard, gripping the winch, felt his heart beat to the rhythm of the waves. He thought again of his promise. He repeated it, this time without a voice, only in his chest.

He was not sure he was a particularly brave man, but he knew how to do one thing: hold on.

And he held.


A Truce in the Wind

The next day, the sea had become a great motionless plain again. The sun, timid at first, began to light the shrouds. You would have thought nothing had happened.

Médard walked the deck. He loved those mornings: when the whole crew breathes a little deeper, as if to give thanks.

He thought then of the chapel of Sainte-Anne-de-Beaupré, of the candles, of floors that smell of wax. He promised himself he would return.

What he did not yet know was that one day this reflex of turning his gaze upward would become the foundation of all his sculpted work.


Return Home

When he finally returned to Saint-Jean-Port-Joli, the river seemed larger to him than the ocean. The wind no longer had the same voice. It smelled of land.

He went back to his carpentry work. But in his hands there was now something else: the patience of long nights at sea, fear transformed into calm, and the gratitude that had accompanied him everywhere.

Sculpture would come a few years later. It would be born of exactly the same movement as his sailor's prayers: a way of standing firm, of seeking beauty, of answering a silent call.


Epilogue

Years later, when Médard carved his first crucifixes, he would remember the dark nights when he had placed his life in God's hands.

And as the knife cut into the wood, he would still hear, somewhere far away, in a memory the sea never erases, the soft sound of waves against the hull, and the inner voice telling him:

Keep going. I am here.

 
Read more...

from Patrimoine Médard bourgault

Analysis – Médard Bourgault's Maritime Period: A Spiritual and Artistic Foundation

An analysis of Médard Bourgault's maritime youth (1913–1918) and its profound influence on his vision of sculpture, faith, patience, and artistic expression.


Introduction

Médard Bourgault's maritime period, often mentioned only briefly in biographies, is in fact a cornerstone of his life as an artist. Through his Journal, we discover a deeply devout young sailor, confronted with the dangers of sea and war, who forged, without knowing it, the roots of a unique artistic vision.

The short story Médard en mer captures this atmosphere. What follows is an analysis that places the narrative in the artist's historical, psychological, and spiritual context.


1. Médard the Sailor: A Young Man in Inner Formation

Between 1913 and 1918, Médard passed through:

  • commercial shipping
  • the war zones of the Atlantic
  • the threat of mines and torpedoes
  • nights without lights
  • storms
  • the isolation of the deck
  • the shared fear of the men

This period developed in him qualities that would become central to his sculpture:

  • endurance
  • concentration
  • patience of gesture
  • the ability to stay calm in extreme situations
  • a contemplative attitude toward nature

These are not details: they are pure artistic skills.


2. The Sailor's Faith: A Living Prayer, Not an Abstract One

Médard's Journal shows clearly that his faith did not come from a theoretical education. It came from real danger.

In an era when German submarines were striking merchant ships, he promised:

  • Masses to the Sacred Heart
  • prayers to Saint Anne
  • offerings for the souls in purgatory

This sailor's faith is:

  • immediate
  • practical
  • embodied
  • oriented toward protection
  • bound up with gratitude

This dimension would mark all of his religious work. When he carved Christ or the Virgin, it would never be a “distant icon”, but a living, close, protective presence.


3. The Ordeal of the Sea: Training for the Wood

The sea taught him three things that recur in his style:

① Precision of gesture

On board, a botched gesture can cost a life. That discipline reappears in his crucifixes and his faces: nothing is left to chance.

② The relationship to time

At sea, time is not city time: it is slow, structured, cyclical. That temporality reappears in the extreme meticulousness of his work.

③ The face-to-face with nature

The sea is a silent master. That depth is reflected in his relationship to landscape, to light, and even in his desire to carve for the “glory of God” rather than for renown.


4. The War as Spiritual Awakening

Médard was no romantic “adventurer”. He was a clear-eyed man facing the possibility of death.

The war revealed to him:

  • fragility
  • dependence on something greater
  • the value of life
  • the role of gratitude
  • the importance of keeping a clear mind amid fear

This spiritual maturity explains why he would later refuse caricatures, misshapen crucifixes, and careless representations: for him, depicting Christ was an act of fidelity to a lived experience.

He knew what it meant to pray “for real”.


5. Back on Land: A Transformed Gaze

When he returned to Saint-Jean-Port-Joli, he was no longer the same. The river seemed vaster to him than the ocean. He was calm. He saw beauty with greater clarity.

This inner transformation explains:

  • his ability to concentrate for hours on a detail
  • his way of approaching manual work
  • his taste for simplicity
  • his refusal to shirk effort
  • his inspiration “from above” rather than “from books”

The man who would carve hundreds of figures, crucifixes, and Virgins was no ordinary craftsman: he was a sailor back from the open sea with a vision.


6. Direct Influence on His Religious Sculpture

Maritime elements that recur in his work:

The posture of Christ

It has the same intensity as the posture of a man withstanding a storm.

The gentleness of the Virgins

It reflects the peace he sought in prayer when the sea roared.

The features of the faces

They are calm, grounded, almost contemplative. A gaze shaped by nights at sea.

The verticality of the bodies

Like masts raised against the wind.

None of this is accidental: the wood became a way of telling the sea.


Conclusion: The Sea as an Inner Workshop

Médard Bourgault's maritime youth is not a biographical anecdote. It is a founding chapter, in which were forged:

  • his faith
  • his endurance
  • his relationship to beauty
  • his sense of the sacred
  • his craftsman's patience
  • his need to work for the “glory of God”

The short story Médard en mer captures this pivotal moment. This analysis places it back in context and shows how the sailor's adventure became the silent origin of the sculptor.

 
Read more...

from Roscoe's Quick Notes

My NCAA men's basketball game of the night will be the Eastern Michigan Eagles vs. the Butler Bulldogs. They'll be playing at Butler's Hinkle Fieldhouse, a classic basketball fieldhouse where I always enjoyed watching games during my Indiana days. My radio is tuned to the Butler Sports Network and the opening tip is only minutes away. I'm ready. :)

And the adventure continues.

 
Read more...

from Patrimoine Médard bourgault

An analysis of Médard Bourgault's spiritual journal: promises, prayers, the chapel, the crucifix, the role of faith, and its direct influence on his style and his vision of traditional Quebec sculpture.


Introduction: Why This Article

Médard Bourgault's Journal reveals an artist deeply marked by faith. In it we discover a sculptor for whom prayer, creation, and manual work are one. This article shows how Médard's spirituality, attested by his own journal, shaped his work, his estate, and his vision of sculpture.


1. A Sailor's Faith: Prayers, Vows, and Trust

In the years 1913–1918, sailing in the midst of war, Médard Bourgault continually entrusted himself to God.

In a prayer addressed to the Sacred Heart, he promised “several Masses” in exchange for being protected “from all accident during this voyage”, and offered those prayers “to the souls in purgatory”. Later, in New York, he made a vow to Good Saint Anne to escape the German torpedoes.

This direct, almost instinctive faith would later become the foundation of his relationship to artistic work.


2. God and Saint Joseph as Masters of His Apprenticeship

Once back in the village, Médard taught himself sculpture, with no school and no teacher. He wrote that, in the absence of a human master, he “turned to the great Master”. Further on, he states that he “never had any master but God and Saint Joseph”.

He did not see his talent as something he possessed, but as a gift received: he closes several pages by blessing God “for his talents” and declaring that he works “for His greater glory”.

Sculpture was not merely a technical trade: it was a mission.


3. The Estate as a Spiritual Place: Chapel, Statues, Crucifixes

On the banks of the river, Médard Bourgault gradually transformed his land into a sacred space. There he built:

  • a small chapel dedicated to Notre-Dame de la Protection des enfants;
  • a statue of the Virgin (1937);
  • a crucifix carved by his son Claude;
  • a statue of Saint Joseph (1933);
  • a large Virgin above the north door (1925);
  • a winter crucifix made in 1947–1948.

The estate became a sculpted spiritual path, where every element carries meaning. For Médard, the land, the workshop, and prayer were never separate.


4. The Crucifix: The Summit of His Spiritual and Artistic Reflection

The journal offers a central passage: his meditation on the crucifix.

Médard refused any “misshapen” representation of Christ. He wrote that the sculptor must depict Jesus “with the beauty of the human body”, since Christ, having come as a man, took on a perfect form.

He drew a comparison: if an artist caricatured the bust of a king, the king would be indignant; all the more reason to honour “the King of kings” in art.

Before a crucifix he had carved in walnut, he meditated on:

  • the inclination of the head,
  • the tension of the arms,
  • the expression of the lips,
  • the “seven words” of Christ,
  • the gaze turned toward those who strike him.

Even anatomy became a form of theology: his first studies of the body were made “on [his] own body” while he was carving a life-size crucifix for the cemetery.

For him, beauty was an act of faith.


5. Creation as Contemplation

In the final pages, Médard expresses a simple but immense gratitude: he says he is happy because every day he sees “the beauties God has created”. He blesses “the one who [gave him] to see these beauties” and affirms that if everything God puts into his mind could be carved, a whole lifetime would not suffice.

Artistic creation becomes an extension of divine Creation: the artist answers an inner call, not a fashion or a market demand.


Conclusion: Sculpting as a Spiritual Act

Médard Bourgault's Journal reveals an essential dimension of his work: his sculpture is a continuous prayer, a dialogue with God, a way of giving thanks, of hoping, of contemplating.

His faith:

  • nourishes his work,
  • gives it its rigour,
  • guides his aesthetic choices,
  • inhabits his estate,
  • and accompanies every stroke of the knife.

To understand this dimension is to understand why his work possesses a unique force in Quebec sculpture.

 
Read more...

from Wanderings of a Sunflower

Hello, world! Have you ever wondered how your corner of the world is shaped by your voice, your unique contribution, and your way with words? This is my corner of the world, as nerdy or dorky as that might sound. But this is my voice, my newspaper page, and my record of current events, a way of capturing what is going on in society and our culture. It’s important to keep a snapshot, a portrait, of the current times. So here we go! Welcome, take a seat, and get cozy. But not too comfortable, as we’ll be talking about lots of different ideas that might take you out of your so-called comfort zone.

 
Read more...
