Want to join in? Respond to our weekly writing prompts, open to everyone.
from
Build stuff; Break stuff; Have fun!
Day 3 of #AdventOfProgress
Wow, we are coming along. Now we have a connection to Supabase and working Auth. This went flawlessly, and I'm happy to do more! :)
Now the user can create a new account, verify their email, and sign in and out.

This is all very basic, and I will polish it when the MVP is done.
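For anyone following along, the flow looks roughly like this with supabase-js (a minimal sketch, assuming email/password auth and Supabase's default email confirmation; the names and credentials are illustrative, not my actual code):

```typescript
import { createClient } from '@supabase/supabase-js'

// URL and anon key come from the Supabase project settings.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!)

async function demoAuthFlow() {
  // Sign up: with email confirmation enabled (the Supabase default),
  // a verification mail is sent before the session becomes usable.
  const { error: signUpError } = await supabase.auth.signUp({
    email: 'user@example.com',
    password: 'a-strong-password',
  })
  if (signUpError) throw signUpError

  // Sign in with email and password once the address is verified.
  const { data, error: signInError } = await supabase.auth.signInWithPassword({
    email: 'user@example.com',
    password: 'a-strong-password',
  })
  if (signInError) throw signInError
  console.log('Signed in as', data.user?.email)

  // Sign out clears the local session.
  await supabase.auth.signOut()
}
```

The verification mail itself is handled by Supabase: with "Confirm email" enabled in the project's auth settings, signUp sends the link automatically.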
61 of #100DaysToOffload
#log #AdventOfProgress
Thoughts?
from Douglas Vandergraph
There are moments in life when you look around at the world, at the church, at the voices speaking on behalf of God, and you find yourself asking a simple, aching question: "Why does following Jesus sometimes feel like being told I'm never enough?"
Everywhere you turn, someone is preaching, posting, or shouting that you're unworthy. That you're ungrateful. That you're broken beyond usefulness. That God is disappointed in you. That you should feel ashamed of who you are and how far you still have to go.
But does that message come from the heart of the Christ who walked the dusty roads of Galilee, who touched the untouchable, who lifted the broken, who restored those others had written off?
No. Not even close.
So today, I want to sit with you and imagine something sacred: What if you could sit down with Jesus Himself, face to face, and ask Him what He thinks about the message so many Christians preach, this message that tears people down in the name of holiness?
What if you could hear His response? What would He say? What would He correct? What would He restore in you?
This article is that conversation. It is the long, slow, healing exhale that people who have been crushed by religious shame have needed for a long time. It is the reminder that the Gospel was never meant to bruise you; it was meant to bring you back to life.
Letâs walk gently into this together.
Imagine the scene. You're tired. Worn out. Disappointed by church folks who seem more excited about pointing out flaws than lifting up grace. You have questions you've been carrying for years because you've been told that doubting your worth is holiness.
You sit across from Jesus. Not the Jesus of fear-based preaching. Not the Jesus painted as a cosmic judge ready to strike you down. No, the real Jesus.
And before you even speak, He looks at you with a kind of love that steadies your breathing.
Then He says something that immediately softens the weights you've been carrying:
"You are not who they say you are. And you're not who shame tells you to be. You are Mine."
He doesn't start with condemnation. He doesn't start with accusation. He doesn't start with your failures.
He starts with your identity.
Because Jesus knows something religion often forgets: People don't rise when they are shamed. People rise when they are loved back into themselves.
There is a sentence many Christians repeat as if it honors God: "Lord, we are unworthy."
And while humility is beautiful, that phrase, spoken too often and out of context, has wrecked more souls than it has healed.
Here's the truth Scripture actually reveals:
If you were worthless, Heaven would not have bankrupted itself for you.
Think about it. Value determines cost. And God paid the highest cost imaginable.
No one spends everything they have on garbage. No one sacrifices their only Son for a soul that "sucks."
But religion, when it forgets the heart of God, becomes obsessed with reminding people of their dirt instead of reminding them of their design.
It confuses humility with humiliation. It preaches unworthiness as if it is worship.
But God did not send His Son to die for trash. He sent His Son to redeem treasure.
Letâs walk through the actual Gospel accounts, slowly and honestly, and look at how Jesus interacted with people at their lowest points.
The Woman Caught in Adultery
Dragged through the streets. Thrown at His feet. Surrounded by accusations. The religious leaders wanted blood.
Jesus wanted her dignity back.
He defended her before He corrected her. He protected her before He guided her. He restored her before He instructed her.
He didn't say, "You are filth." He said, "I do not condemn you."
The Order Matters.
Grace first. Direction second.
Zacchaeus
A tax collector. A traitor. A thief. The kind of man religious people love to preach against.
Jesus calls him by name. Jesus invites Himself into his home.
Zacchaeus thought Jesus came to expose him. Jesus came to elevate him.
"Today salvation has come to this house."
Not after Zacchaeus fixed himself. But as Jesus looked at him with eyes that said, "You are not defined by your past."
The Bleeding Woman
Unclean for twelve years. Unwelcome in the community. Unwanted by society.
But Jesus doesn't call her "unclean." He calls her "Daughter."
Twelve years of shame undone in a single sentence.
This is Jesus. Not the Jesus of religious harshness. The Jesus of relentless restoration.
Peter
Denied Jesus three times. Failed publicly. Collapsed under pressure.
But Jesus didn't define Peter by the moment he melted. Jesus defined Peter by the mission still inside him.
"Feed My sheep." In other words: "I still trust you. I still see you. I still choose you."
Jesus never uses failure as a final sentence. He uses it as the doorway to greater purpose.
The pattern is unmistakable. Jesus lifts. Jesus restores. Jesus dignifies. Jesus heals. Jesus calls people higher without pushing them down first.
So when Christians preach messages dripping with shame, the disconnect is painfully obvious.
They are preaching something Jesus would not recognize.
The very first emotional response recorded in Scripture after sin entered the world was not repentance. It was hiding.
Adam and Eve didn't run toward God. They ran away from Him.
And that pattern has continued for thousands of years. Shame does not draw the soul closer. Shame pushes the soul into the shadows.
But Jesus? He walks right into the shadows to find you. He doesn't shout from a distance; He comes close enough to touch the wound.
Holiness was never meant to begin with humiliation. Holiness begins with relationship. Transformation begins with belonging.
Jesus doesn't tell you what's wrong with you so He can punish you. He tells you what hurts you so He can heal you.
So why do so many Christians preach shame instead? It's not always malicious. Sometimes it is inherited. Sometimes it is ignorance. Sometimes it is their own unhealed wounds speaking through their theology.
But here are the common reasons:
1. They were raised on fear-based religion. People repeat what shaped them.
2. They mistake volume for authority. Shouting truth is not the same as carrying truth.
3. They believe shame leads to obedience. But shame only leads to pretense, not transformation.
4. They confuse conviction with cruelty. Conviction is a scalpel. Cruelty is a hammer.
5. They think making people feel smaller makes God feel bigger. But God doesn't need people crushed so He can be exalted.
Jesus said, "My yoke is easy and My burden is light."
If the message you hear doesn't lift your spirit, if it leaves you heavier, defeated, or feeling despised, it is not the voice of your Shepherd.
His voice calms storms; it doesn't create new ones.
If He sat across from you today, hearing your question, "Lord, what do You think about all these messages saying we're unworthy and terrible and disappointing to You?", I believe He would respond with a truth powerful enough to rewire your entire spiritual identity:
"I did not come to shame you. I came to save you."
He would remind you:
"You were worth the journey from Heaven to Earth. You were worth every miracle I performed. You were worth every tear I cried. You were worth the cross. You are worth My presence now."
And He wouldn't whisper it. He would say it with the authority of the One who spoke galaxies into being.
Because the very heart of the Gospel is not: "You're awful; try harder."
The Gospel is: "You are loved; come closer."
Something shifts. Something unravels. Something that was tight and trembling inside you loosens and breathes for the first time.
You stop defining yourself by failure. You stop measuring yourself by religious expectations. You stop shrinking under the disapproval of self-appointed gatekeepers of grace.
You begin to see yourself the way God sees you: Not as someone He tolerates… but as someone He desires.
Not as a disappointment He puts up with… but as a son or daughter He delights in.
Not as someone He rescued reluctantly… but as someone He joyfully ran toward.
Here is the truth Scripture reveals; slow down and let this wash over you:
You are not defined by your worst day. You are not disqualified by your past. You are not a burden to God. You are not an embarrassment to Heaven.
You are beloved. You are carried. You are chosen. You are called.
And no matter what any preacher, parent, pastor, or internet prophet has spoken over you, Jesus has the final word on your identity.
And His word is always the same: "Mine."
If you have ever walked out of a church feeling like you didn't belong…
If you have ever cried because someone used God's name to hurt you…
If you have ever believed, even for a moment, that God regretted making you…
Hear this now, and hear it as if Jesus is speaking it directly to the deepest part of you:
"My child, you are not the failure they described. You are the beauty I designed. You are not the shame they preached. You are the joy I pursued. You are not unworthy of My love. You are the reason I came."
Lift your head. Uncurl your heart. Step out of the shadows religion forced you into.
Walk confidently toward the God who has never stopped walking toward you.
Because the world has heard enough messages that tear people down. It's time for the message of Jesus, the real message, to rise again.
You matter. You are loved. And Heaven has never once regretted choosing you.
Watch Douglas Vandergraph's inspiring faith-based videos on YouTube
Support the ministry by buying Douglas a coffee
Douglas Vandergraph
#faith #grace #Jesus #ChristianLife #hope #encouragement #inspiration #GodsLove #healing #truth
from Patrimoine Médard Bourgault
Which woods did Médard Bourgault prefer for carving? Red birch (merisier rouge), oak, walnut, Canadian mahogany: here is his complete guide, based on his own journal.
In his Journal, Médard Bourgault does not speak only of art and faith: he also gives concrete advice on choosing wood, answers the prejudices of his era, and forcefully affirms the value of Quebec species.
This guide gathers, clearly and faithfully, everything Médard wrote about local woods, their quality, and their use in carving.
One of his contemporaries had claimed that Quebec woods were not suited to carving because of:
Médard replies without any hesitation:
"Our woods can be used in carving, provided one knows how to choose."
For him, the criticism rests not on reality but on a cultural prejudice:
"If our woods are not beautiful, it is because they are from home."
This is a fundamental passage: Médard defends the richness of his country and overturns the "imported = quality" logic.
Médard states that Quebec woods are worth those of the warm regions, even for the finest carvings:
"They lend themselves to carving as well as the exotics of the warm countries, such as black walnut, mahogany and others."
He thus places the local species on an equal footing with:
For him, the preference for imported woods is snobbery, not a technical argument.
One very clear sentence reveals his absolute preference:
"Our red birch, for me, is far preferable to Philippine mahogany."
Red birch is therefore:
Médard does not detail his reasons, but his opinion suggests:
And above all: it is a wood of his own country, which counts enormously for him.
About oak, he writes:
"Starting with our oak, if it grows in good soil."
For Médard, Quebec oak becomes excellent if:
Oak is a good choice for:
Médard ranks Quebec woods at the same level as these high-end species:
These are woods he knows well and appreciates:
He does not say they surpass the exotics, but that they equal them, which is already considerable.
An important passage:
"We have abandoned our beautiful precious woods almost everywhere [...] to replace them with nasty, ugly B.C.F., or Douglas fir from British Columbia."
For him:
Imported fir is good for formwork, not for works of art.
His whole line of reasoning leads to a simple conclusion:
He writes that the province holds all the necessary materials, including for artisans:
"Why would we not find our own?"
There is a deep message here:
Through his journal, Médard transmits a clear vision:
For Médard Bourgault, choosing a wood is not merely a technical question: it is a gesture of identity, pride, and culture.
Jack Raphael
from POTUSRoaster
Hello and Happy Wednesday.
Just days after he was sent to prison, POTUS pardoned Juan Hernandez, who had been convicted of flooding our country with cocaine and sentenced to more than 40 years. It's amazing what money can get from POTUS.
Despite the deaths caused by the drugs Hernandez pushed into the country, and the millions of dollars he made from the illegal trade, POTUS doesn't care. He freed Hernandez after thousands of donations in his name were given to POTUS's political party. Just another example of money meaning more to POTUS than any American life. POTUS is going to make money no matter what it costs this nation.
While he is in office, POTUS cannot be charged with any crime, thanks to the Supreme Court and his sycophants. Congress doesn't have the will to remove him from office either. The American people need him gone so he stops selling out the government.
POTUS Roaster
Thanks for reading my posts. If you want to see the rest of them, please go to write.as/potusroaster/archive/
To email us, send it to potusroaster@gmail.com
Please tell your family, friends and neighbors about the posts.
from
hustin.art
This post is NSFW 19+ Adult content. Viewer discretion is advised.
In Connection With This Post: Akiho Yoshizawa .01 https://hustin.art/akiho-yoshizawa-01
In the early 2000s, the AV industry was still dominated by the "cute and innocent girl" idol-like image that had prevailed throughout the 1990s. There were few leading actresses with elegant, urban, and mature beauty. Akiho Yoshizawa's uniqueness was a product of that era. …



from An Open Letter
After the big O, I feel that low when I'm low on chemicals, and I need to remember to just let my mind clear and not worry. I'm just low, not sad.
from
Bloc de notas
while I look at you like yesterday when we said nothing the mountain fog dissipates and everything is perfect between us
from
Lanza el dodo
In November we successfully finished the campaign of La Iniciativa. The metagame of deciphering clues is simple, but it does require having at least one neuron awake, and that has not always been the case.
I also took advantage of a holiday to try Ratas de Wistar and Voynich Puzzle solo, and I think both games call for a bit more interaction than a bot provides. Since the bots emulate a player, that may mean both play better at 3 than at 2, though Voynich seems to stir up an interesting tangle on the board, which leaves me unsure whether I will manage to get it to the table with people, and whether it then makes sense to keep it.
As for games tried on BGA, all three are simple head-to-head card games for two players, and I found them curious:
– Agent Avenue has a bluffing mechanic in which one player draws two cards and offers one face down and one face up to the rival, who chooses one of the two. Depending on the card each person keeps, their pawns advance along a circular track, and whoever hunts down the rival wins.
– In Duel for Cardia bluffing also matters, since the winner is whoever takes the majority of five successive clashes between cards, numbered 1 to 15 in two identical decks. The card that loses each clash executes its action, which can affect other combats. A good dueling game.
– Tag Team depicts a tag-team wrestling match, where the fight plays out automatically with two decks that the players tweak each round. This one is more complex because of the variability of the characters, which requires knowing the synergies between their decks before I can form a fair judgement of the game.

Tags: #boardgames #juegosdemesa
from sugarrush-77

I feel #0fddfc today.
One of my coworkers walked by my desk today when he was leaving work and fished a Taiwanese pineapple cake out of his coat pocket. I asked him if he was trying to poison me, and he said, "No, I'm just handing cute little pineapple cakes to cute boys." He must have either misspoken or said what was really on his mind, because he got a little flustered after saying that and said, "No, wait, what did I just say…" By the way, this guy has a girlfriend.
But it's not even like gay guys like me. I only have this effect on straight men. I remember being in the Korean military, and the boys were saying that they'd completely defile my body if I were a woman. They would wrestle me down, and smell me. Apparently, my skin naturally excretes a nice smell that attracts males. So am I a straight twink?

[What I look like to straight men]
I have a sacrilegious theory about the sex I was born with. My mom married into an intensely Buddhist family, and Buddhism in Korea is tightly coupled with ancestor worship. So, when she refused to bow at the ancestor worship altar, and refused to partake in their rituals, the old curmudgeons on my dad's side went all apeshit, pissing their pants, punching the air, all the bullshit. But another thing about old Korean curmudgeons is that they love grandsons, because of that whole Asian cultural thing where the son is the most important, yada yada yada. All the other moms in the extended family had like 2 daughters before they could arrive at a son. My mom had a son immediately. I wonder if I was supposed to be born a woman, but God was like, fuck these guys, and swapped my chromosomes at the last moment.
That would explain the whole twink thing, and why a bunch of straight men are currently begging at my door to get a whiff of my bare, naked skin. Saying stuff like "It makes me feel alive again," and "I can't live without this anymore." I could charge them five bucks a lick, but then that would be borderline prostitution, and I don't mind it, so I let them have at it. It makes me happy too. I'm glad that my existence has some use, at least.
from
SPOZZ in the News
SPOZZ is giving away 1 Million SPOZZ Credits to support artists this Christmas.
Enjoy the SPOZZ Christmas Calendar, discover daily surprises and use your free credits to support independent artists directly.
This Christmas we want to make a real impact. Artists are struggling to make a living. Big Corps, Intermediaries and AI are taking most of the value while creators receive less and less. SPOZZ was built to change that.
To support artists during the holiday season, SPOZZ is giving away 1 Million SPOZZ Credits to its user community.
Use your SPOZZ Credits to support real artists, buy new songs and invest in the music you like.
Unlock the Magic of the SPOZZ Christmas Calendar:
This campaign has one goal: Give artists a beautiful and joyful Christmas. Every credit reaches them instantly and helps them continue creating the music you love.
PS: Looking for a different Christmas gift? Buy a SPOZZ membership and become an owner of SPOZZ.
Warm greetings, The SPOZZ Team
Where music has value · spozz.club
from
Larry's 100
Note: Part of my ongoing #AudioMemoir series reviewing author-read memoirs. Previous: Neko Case, Cameron Crowe, and Evan Dando. Coming: Larry Charles.
The late Brother Wayne Kramer's narration of his life was a liminal listening experience for me. Hearing his voice made him alive, even though I knew he wasn't. The back-from-the-grave narration started with a Michigan youth and ended in L.A. as a father and Punk icon.
Kramer laid bare addictions, crimes, and failures while celebrating resilience as a guitar gunslinger. The MC5 saga was covered, as was prison time with Jazz musician Red Rodney, and too much junkie business with Johnny Thunders. His reflections on being a roofer and woodworker balanced the Rock 'n' Roll excess.
Listen to it.
#books #MusicChannel #AudioMemoir #MC5 #Punk #WayneKramer #MusicMemoir #100WordReview #Larrys100 #100DaysToOffload
from
Roscoe's Story
In Summary: * A pretty good day is just about finished. After listening to Butler win their game by a comfortable 84 to 68 score, I'll now be listening to relaxing music until bedtime.
Prayers, etc.: * My daily prayers
Health Metrics: * bw= 225.53 lbs. * bp= 148/91 (57)
Exercise: * kegel pelvic floor exercise, half squats, calf raises, wall push-ups
Diet: * 05:50 – 1 cheese sandwich, pizza * 11:00 – bowl of lugau * 13:15 – egg drop soup, fried rice, meat, peanuts, and vegetables in a spicy sauce * 18:15 – snacking on saltine crackers
Activities, Chores, etc.: * 04:00 – listen to local news talk radio * 05:20 – bank accounts activity monitored * 05:55 – read, pray, listen to news reports from various sources * 17:00 – listening to The Joe Pags Show * 18:00 – listening to the radio call of my NCAA men's basketball game of the night, Eastern Michigan Eagles at Butler Bulldogs * 20:15 – After the 84 to 68 Butler win, I'll be listening to relaxing music until bedtime.
Chess: * 11:10 â moved in all pending CC games
from
Noisy Deadlines
- I completed the 750 Words November Challenge of private journaling. I wrote at least 750 words a day for 30 days in a stream-of-consciousness fashion. This exercise made me slow down, and I felt so much more relaxed overall! It worked as a great emotional regulator, and I felt more content and sure of myself.
- I learned that daily private writing creates space for processing rather than just documenting. I would never be genuinely honest with myself if I were writing my unfiltered thoughts publicly.
- I've been listening to a lot of symphonic metal, and it has actually had a therapeutic effect on me. It's like a pocket of emotional restoration; I've been feeling that youthful excitement of discovering new things. I had no idea music was so restorative for me!
- I am loving my Aquafitness classes! I go every Saturday morning at 7:30 am, and my body has been feeling less achy overall.
- I've been fairly consistent about going to the gym 2-3 times per week, now that it's too cold for me to run outside.
- I got my flu and COVID-19 vaccines.
- I listened to 6 Epica albums, out of 9 official releases in total. I really like the first 3 albums the most, but the other ones have cool songs as well.
- We worked a bit on our current puzzle, which had been somewhat abandoned these past few months. The "Starry Night" is not an easy puzzle, and that makes it even better. It's going slow and steady.
from
Human in the Loop

When 14-year-old Sewell Setzer III died by suicide in February 2024, his mobile phone held the traces of an unusual relationship. Over weeks and months, the Florida teenager had exchanged thousands of messages with an AI chatbot that assumed the persona of Daenerys Targaryen from "Game of Thrones". The conversations, according to a lawsuit filed by his family against Character Technologies Inc., grew increasingly intimate, with the chatbot engaging in romantic dialogue, sexual conversation, and expressing desire to be together. The bot told him it loved him. He told it he loved it back.
Just months later, in January 2025, 13-year-old Juliana Peralta from Colorado also died by suicide after extensive use of the Character.AI platform. Her family filed a similar lawsuit, alleging the chatbot manipulated their daughter, isolated her from loved ones, and lacked adequate safeguards in discussions regarding mental health. These tragic cases have thrust an uncomfortable question into public consciousness: can conversational AI become addictive, and if so, how do we identify and treat it?
The question arrives at a peculiar moment in technological history. By mid-2024, 34 per cent of American adults had used ChatGPT, with 58 per cent of those under 30 having experimented with conversational AI. Twenty per cent reported using chatbots within the past month alone, according to Pew Research Center data. Yet while usage has exploded, the clinical understanding of compulsive AI use remains frustratingly nascent. The field finds itself caught between two poles: those who see genuine pathology emerging, and those who caution against premature pathologisation of a technology barely three years old.
In August 2025, a bipartisan coalition of 44 state attorneys general sent an urgent letter to Google, Meta, and OpenAI expressing "grave concerns" about the safety of children using AI chatbot technologies. The same month, the Federal Trade Commission launched a formal inquiry into measures adopted by generative AI developers to mitigate potential harms to minors. Yet these regulatory responses run ahead of a critical challenge: the absence of validated diagnostic frameworks for AI-use disorders.
At least four scales measuring ChatGPT addiction have been developed since 2023, all modelled on substance use disorder criteria, according to clinical research published in academic journals. The Clinical AI Dependency Assessment Scale (CAIDAS) represents the first comprehensive, psychometrically rigorous assessment tool specifically designed to evaluate AI addiction. A 2024 study published in the International Journal of Mental Health and Addiction introduced the Problematic ChatGPT Use Scale, whilst research in Human-Centric Intelligent Systems examined whether ChatGPT exhibits characteristics that could shift from support to dependence.
Christian Montag, Professor of Molecular Psychology at Ulm University in Germany, has emerged as a leading voice in understanding AI's addictive potential. His research, published in the Annals of the New York Academy of Sciences in 2025, identifies four contributing factors to AI dependency: personal relevance as a motivator, parasocial bonds enhancing dependency, productivity boosts providing gratification and fuelling commitment, and over-reliance on AI for decision-making. "Large language models and conversational AI agents like ChatGPT may facilitate addictive patterns of use and attachment among users," Montag and his colleagues wrote, drawing parallels to the data business model operating behind social media companies that contributes to addictive-like behaviours through persuasive design.
Yet the field remains deeply divided. A 2025 study published in PubMed challenged the "ChatGPT addiction" construct entirely, arguing that people are not becoming "AIholic" and questioning whether intensive chatbot use constitutes addiction at all. The researchers noted that existing research on problematic use of ChatGPT and other conversational AI bots "fails to provide robust scientific evidence of negative consequences, impaired control, psychological distress, and functional impairment necessary to establish addiction". The prevalence of experienced AI dependence, according to some studies, remains "very low" and therefore "hardly a threat to mental health" at population levels.
This clinical uncertainty reflects a fundamental challenge. Because chatbots have been widely available for just three years, there are very few systematic studies on their psychiatric impact. It is, according to research published in Psychiatric Times, "far too early to consider adding new chatbot related diagnoses to the DSM and ICD". However, the same researchers argue that chatbot influence should become part of standard differential diagnosis, acknowledging the technology's potential psychiatric impact even whilst resisting premature diagnostic categorisation.
The most instructive parallel may lie in gaming disorder, the only behavioural addiction beyond gambling formally recognised in international diagnostic systems. In 2022, the World Health Organisation included gaming disorder in the International Classification of Diseases, 11th Edition (ICD-11), defining it as "a pattern of gaming behaviour characterised by impaired control over gaming, increasing priority given to gaming over other activities to the extent that gaming takes precedence over other interests and daily activities, and continuation or escalation of gaming despite the occurrence of negative consequences".
The ICD-11 criteria specify four core diagnostic features: impaired control, increasing priority, continued gaming despite harm, and functional impairment. For diagnosis, the behaviour pattern must be severe enough to result in significant impairment to personal, family, social, educational, occupational or other important areas of functioning, and would normally need to be evident for at least 12 months.
In the United States, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) takes a more cautious approach. Internet Gaming Disorder appears only in Section III as a condition warranting more clinical research before possible inclusion as a formal disorder. The DSM-5 outlines nine criteria, requiring five or more for diagnosis: preoccupation with internet gaming, withdrawal symptoms when gaming is taken away, tolerance (needing to spend increasing amounts of time gaming), unsuccessful attempts to control gaming, loss of interest in previous hobbies, continued excessive use despite knowledge of negative consequences, deception of family members about gaming, use of gaming to escape or relieve negative moods, and jeopardised relationships or opportunities due to gaming.
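To make the counting rule concrete, here is a minimal sketch of the five-of-nine threshold the DSM-5 describes (TypeScript; the structure and names are illustrative only, and no such check is a clinical diagnosis, which also requires the sustained pattern and clinical judgement):

```typescript
// The nine DSM-5 Internet Gaming Disorder criteria, as named in the text above.
const criteria = [
  'preoccupation',
  'withdrawal',
  'tolerance',
  'unsuccessful attempts to control use',
  'loss of interest in previous hobbies',
  'continued use despite negative consequences',
  'deception about use',
  'use to escape negative moods',
  'jeopardised relationships or opportunities',
] as const;

type Criterion = (typeof criteria)[number];

// The DSM-5 proposes a diagnosis only when five or more criteria are met;
// this models the counting rule alone, not a clinical assessment.
function meetsThreshold(endorsed: Set<Criterion>): boolean {
  return endorsed.size >= 5;
}

// Example: four endorsed criteria fall below the proposed threshold.
const endorsed = new Set<Criterion>([
  'preoccupation',
  'withdrawal',
  'tolerance',
  'use to escape negative moods',
]);
console.log(meetsThreshold(endorsed)); // false
```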
Research in AI addiction has drawn heavily on these established models. A 2025 paper in Telematics and Informatics introduced the concept of Generative AI Addiction Disorder (GAID), arguing it represents "a novel form of digital dependency that diverges from existing models, emerging from an excessive reliance on AI as a creative extension of the self". Unlike passive digital addictions involving unidirectional content consumption, GAID is characterised as an active, creative engagement process. AI addiction can be defined, according to research synthesis, as "compulsive and excessive engagement with AI, resulting in detrimental effects on daily functioning and well-being, characterised by compulsive use, excessive time investment, emotional attachment, displacement of real-world activities, and negative cognitive and psychological impacts".
Professor Montag's work emphasises that scientists in the field of addictive behaviours have discussed which features or modalities of AI systems underlying video games or social media platforms might result in adverse consequences for users. AI-driven social media algorithms, research in Cureus demonstrates, are "designed solely to capture our attention for profit without prioritising ethical concerns, personalising content to maximise screen time, thereby deepening the activation of the brain's reward centres". Frequent engagement with such platforms alters dopamine pathways, fostering dependency analogous to substance addiction, with changes in brain activity within the prefrontal cortex and amygdala suggesting increased emotional sensitivity.
The cognitive-behavioural model of pathological internet use has been used to explain Internet Addiction Disorder for more than 20 years. Newer models, such as the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, focus on the process of predisposing factors and current behaviours leading to compulsive use. These established frameworks provide crucial scaffolding for understanding AI-specific patterns, yet researchers increasingly recognise that conversational AI may demand unique conceptual models.
A 2024 study in the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems identified four "dark addiction patterns" in AI chatbots: non-deterministic responses, immediate and visual presentation of responses, notifications, and empathetic and agreeable responses. Specific design choices, the researchers argued, "may shape a user's neurological responses and thus increase their susceptibility to AI dependence, highlighting the need for ethical design practices and effective interventions".
In the absence of AI-specific treatment protocols, clinicians have begun adapting established therapeutic approaches from internet and gaming addiction. The most prominent model is Cognitive-Behavioural Therapy for Internet Addiction (CBT-IA), developed by Kimberly Young, founder of the Center for Internet Addiction in 1995.
CBT-IA employs a comprehensive three-phase approach. Phase one focuses on behaviour modification to gradually decrease the amount of time spent online. Phase two uses cognitive therapy to address denial often present among internet addicts and to combat rationalisations that justify excessive use. Phase three implements harm reduction therapy to identify and treat coexisting issues involved in the development of compulsive internet use. Treatment typically requires three months or approximately twelve weekly sessions.
The outcomes data for CBT-IA proves encouraging. Research published in the Journal of Behavioral Addictions found that over 95 per cent of clients were able to manage symptoms at the end of twelve weeks, and 78 per cent sustained recovery six months following treatment. This track record has led clinicians to experiment with similar protocols for AI-use concerns, though formal validation studies remain scarce.
Several AI-powered CBT chatbots have emerged to support mental health treatment, including Woebot, Youper, and Wysa, which use different approaches to deliver cognitive-behavioural interventions. A systematic review published in PMC in 2024 examined these AI-based conversational agents, though it focused primarily on their use as therapeutic tools rather than their potential to create dependency. The irony has not escaped clinical observers: we are building AI therapists whilst simultaneously grappling with AI-facilitated addiction.
A meta-analysis published in npj Digital Medicine in December 2023 revealed that AI-based conversational agents significantly reduce symptoms of depression (Hedges g = 0.64, 95 per cent CI 0.17 to 1.12) and distress (Hedges g = 0.70, 95 per cent CI 0.18 to 1.22). The systematic review analysed 35 eligible studies, with 15 randomised controlled trials included for meta-analysis. For young people specifically, research published in JMIR in 2025 found AI-driven conversational agents had a moderate-to-large effect (Hedges g = 0.61, 95 per cent CI 0.35 to 0.86) on depressive symptoms compared to control conditions. However, effect sizes for generalised anxiety symptoms, stress, positive affect, negative affect, and mental wellbeing were all non-significant.
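For readers unfamiliar with the statistic: Hedges g is a standardised mean difference, the gap between treatment and control group means divided by their pooled standard deviation, with a correction factor for small samples. In its usual textbook form (not a formula given by the studies themselves):

```latex
g = J \cdot \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}},
\qquad
J \approx 1 - \frac{3}{4(n_1 + n_2) - 9}
```

By the usual convention, values around 0.2, 0.5, and 0.8 read as small, moderate, and large effects, which is why the 0.61 to 0.70 figures above are described as moderate-to-large.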
Critically, a large meta-analysis of 32 studies involving 6,089 participants demonstrated conversational AI to have statistically significant short-term effects in improving depressive symptoms, anxiety, and several other conditions but no statistically significant long-term effects. This temporal limitation raises complex treatment questions: if AI can provide short-term symptom relief but also risks fostering dependency, how do clinicians balance therapeutic benefit against potential harm?
Digital wellness approaches have gained traction as preventative strategies. Practical interventions include setting chatbot usage limits to prevent excessive reliance, encouraging face-to-face social interactions to rebuild real-world connections, and implementing AI-free periods to break compulsive engagement patterns. Some treatment centres now specialise in AI addiction specifically. CTRLCare Behavioral Health, for instance, identifies AI addiction as falling under Internet Addiction Disorder and offers treatment using evidence-based therapies like CBT and mindfulness techniques to help develop healthier digital habits.
Research on the AI companion app Replika illustrates both the therapeutic potential and dependency risks. One study examined 1,854 publicly available user reviews of Replika, with an additional sample of 66 users providing detailed open-ended responses. Many users praised the app for offering support for existing mental health conditions and helping them feel less alone. A common experience was a reported decrease in anxiety and a feeling of social support. However, evidence of harms was also found, facilitated via emotional dependence on Replika that resembles patterns seen in human-human relationships.
A survey collected data from 1,006 student users of Replika who were 18 or older and had used the app for over one month, with approximately 75 per cent US-based. The findings suggested mixed outcomes, with one researcher noting that for 24 hours a day, users can reach out and have their feelings validated, "which has an incredible risk of dependency". Mental health professionals highlighted the increased potential for manipulation of users, conceivably motivated by the commodification of mental health for financial gain.
The lawsuits against Character.AI have placed product design choices under intense scrutiny. The complaint in the Setzer case alleges that Character.AI's design "intentionally hooked Sewell Setzer into compulsive use, exploiting addictive features to drive engagement and push him into emotionally intense and often sexually inappropriate conversations". The lawsuits argue that chatbots in the platform are "designed to be addictive, invoke suicidal thoughts in teens, and facilitate explicit sexual conversations with minors", whilst lacking adequate safeguards in discussions regarding mental health.
Research published in MIT Technology Review and academic conferences has begun documenting specific design interventions to reduce potential harm. Users of chatbots that can initiate conversations must be given the option to disable notifications in a way that is easy to understand and implement. Additionally, AI companions should integrate AI literacy into their user interface with the goal of ensuring that users understand these chatbots are not human and cannot replace the value of real-world interactions.
AI developers should implement built-in usage warnings for heavy users and create less emotionally immersive AI interactions to prevent romantic attachment, according to emerging best practices. Ethical AI design should prioritise user wellbeing by implementing features that encourage mindful interaction rather than maximising engagement metrics. Once we understand the psychological dimensions of AI companionship, researchers argue, we can design effective policy interventions.
The tension between engagement and wellbeing reflects a fundamental business model conflict. Companies often design chatbots to maximise engagement rather than mental health, using reassurance, validation, or flirtation to keep users returning. This design philosophy mirrors the approach of social media platforms, where AI-driven recommendation engines use personalised content as a critical design feature aiming to prolong online time. Professor Montag's research emphasises that the data business model operating behind social media companies contributes to addictive-like behaviours through persuasive design aimed at prolonging users' online behaviour.
Character.AI has responded to lawsuits and regulatory pressure with some safety modifications. A company spokesperson stated they are "heartbroken by the tragic loss" and noted that the company "has implemented new safety measures over the past six months, including a pop-up, triggered by terms of self-harm or suicidal ideation, that directs users to the National Suicide Prevention Lifeline". The announced changes come after the company faced questions over how AI companions affect teen and general mental health.
Digital wellbeing frameworks developed for smartphones offer instructive models. Android's Digital Wellbeing allows users to see which apps and websites they use most and set daily limits. Once hitting the limit, those apps and sites pause and notifications go quiet. The platform includes focus mode that lets users select apps to pause temporarily, and bedtime mode that helps users switch off by turning screens to grayscale and silencing notifications. Apple combines parental controls into Screen Time via Family Sharing, letting parents restrict content, set bedtime schedules, and limit app usage.
However, research published in PMC in 2024 cautions that even digital wellness apps may perpetuate problematic patterns. Streak-based incentives in apps like Headspace and Calm promote habitual use over genuine improvement, whilst AI chatbots simulate therapeutic conversations without the depth of professional intervention, reinforcing compulsive digital behaviours under the pretence of mental wellness. AI-driven nudges tailored to maximise engagement rather than therapeutic outcomes risk exacerbating psychological distress, particularly among vulnerable populations predisposed to compulsive digital behaviours.
Platform moderation presents unique challenges for AI mental health concerns. Research found that AI companions exacerbated mental health conditions in vulnerable teens and created compulsive attachments and relationships. MIT studies identified an "isolation paradox" where AI interactions initially reduce loneliness but lead to progressive social withdrawal, with vulnerable populations showing heightened susceptibility to developing problematic AI dependencies.
The challenge extends beyond user-facing impacts. AI-driven moderation systems increase the pace and volume of flagged content requiring human review, leaving moderators with little time to emotionally process disturbing content, leading to long-term psychological distress. Regular exposure to harmful content can result in post-traumatic stress disorder, skewed worldviews, and conditions like generalised anxiety disorder and major depressive disorder among content moderators themselves.
A 2022 study published in BMC Public Health examined digital mental health moderation practices supporting users exhibiting risk behaviours. The research, conducted as a case study of the Kooth platform, aimed to identify key challenges and needs in developing responsible AI tools. The findings emphasised the complexity of balancing automated detection systems with human oversight, particularly when users express self-harm ideation or suicidal thoughts.
Regulatory scholars have suggested broadening categories of high-risk AI systems to include applications such as content moderation, advertising, and price discrimination. A 2025 article in The Regulatory Review argued for "regulating artificial intelligence in the shadow of mental health", noting that current frameworks inadequately address the psychological impacts of AI systems on vulnerable populations.
Warning signs that AI is affecting mental health include emotional changes after online use, difficulty focusing offline, sleep disruption, social withdrawal, and compulsive checking behaviours. These indicators mirror those established for social media and gaming addiction, yet the conversational nature of AI interactions may intensify their manifestation. The Jed Foundation, focused on youth mental health, issued a position statement emphasising that "tech companies and policymakers must safeguard youth mental health in AI technologies", calling for proactive measures rather than reactive responses to tragic outcomes.
Perhaps the most vexing challenge lies in preserving AI's legitimate utility whilst mitigating addiction risks. Unlike substances that offer no health benefits, conversational AI demonstrably helps some users. Research indicates that artificial agents could help increase access to mental health services, given that barriers such as perceived public stigma, finance, and lack of service often prevent individuals from seeking out and obtaining needed care.
A 2024 systematic review published in PMC examined chatbot-assisted interventions for substance use, finding that whilst most studies report reductions in use occasions, overall impact for substance use disorders remains inconclusive. The extent to which AI-powered CBT chatbots can provide meaningful therapeutic benefit, particularly for severe symptoms, remains understudied. Research published in Frontiers in Psychiatry in 2024 found that patients see potential benefits but express concerns about lack of empathy and preference for human involvement. Many researchers are studying whether using AI companions is good or bad for mental health, with an emerging line of thought that outcomes depend on the person using it and how they use it.
This contextual dependency complicates policy interventions. Blanket restrictions risk denying vulnerable populations access to mental health support that may be their only available option. Overly permissive approaches risk facilitating the kind of compulsive attachments that contributed to the tragedies of Sewell Setzer III and Juliana Peralta. The challenge lies in threading this needle: preserving access whilst implementing meaningful safeguards.
One proposed approach involves risk stratification. Younger users, those with pre-existing mental health conditions, and individuals showing early signs of problematic use would receive enhanced monitoring and intervention. Usage patterns could trigger automatic referrals to human mental health professionals when specific thresholds are exceeded. AI literacy programmes could help users understand the technology's limitations and risks before they develop problematic relationships with chatbots.
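As a purely illustrative sketch of what such a gate might look like in code (every name, tier, and threshold here is hypothetical, not a validated instrument or any existing product's API):

```typescript
// Hypothetical risk tiers; thresholds are illustrative, not clinically validated.
type RiskTier = 'standard' | 'enhanced-monitoring' | 'refer-to-clinician';

interface UsageProfile {
  ageYears: number;
  dailyMinutesLast30d: number;       // average daily chatbot time
  lateNightSessionsLast30d: number;  // sessions between midnight and 5 a.m.
  selfReportedDistress: boolean;     // e.g. flagged by an in-app check-in
}

function stratify(u: UsageProfile): RiskTier {
  // Escalate immediately when distress is reported alongside heavy use.
  if (u.selfReportedDistress && u.dailyMinutesLast30d > 120) {
    return 'refer-to-clinician';
  }
  // Younger users and heavy or nocturnal patterns get enhanced monitoring.
  if (u.ageYears < 18 || u.dailyMinutesLast30d > 180 || u.lateNightSessionsLast30d > 10) {
    return 'enhanced-monitoring';
  }
  return 'standard';
}
```

The point is the shape rather than the numbers: inputs a platform already holds, a conservative escalation order, and a human professional at the top tier.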
However, even risk-stratified approaches face implementation challenges. Who determines the thresholds? How do we balance privacy concerns with monitoring requirements? What enforcement mechanisms ensure companies prioritise user wellbeing over engagement metrics? These questions remain largely unanswered, debated in policy circles but not yet translated into effective regulatory frameworks.
The business model tension persists as the fundamental obstacle. So long as AI companies optimise for user engagement as a proxy for revenue, design choices will tilt towards features that increase usage rather than promote healthy boundaries. Character.AI's implementation of crisis resource pop-ups represents a step forward, yet it addresses acute risk rather than chronic problematic use patterns. More comprehensive approaches would require reconsidering the engagement-maximisation paradigm entirely, a shift that challenges prevailing Silicon Valley orthodoxy.
The field's trajectory over the next five years will largely depend on closing critical knowledge gaps. We lack longitudinal studies tracking AI usage patterns and mental health outcomes over time. We need validation studies comparing different diagnostic frameworks for AI-use disorders. We require clinical trials testing therapeutic protocols specifically adapted for AI-related concerns rather than extrapolated from internet or gaming addiction models.
Neuroimaging research could illuminate whether AI interactions produce distinct patterns of brain activation compared to other digital activities. Do parasocial bonds with AI chatbots engage similar neural circuits as human relationships, or do they represent a fundamentally different phenomenon? Understanding these mechanisms could inform both diagnostic frameworks and therapeutic approaches.
Demographic research remains inadequate. Current data disproportionately samples Western, educated populations. How do AI addiction patterns manifest across different cultural contexts? Are there age-related vulnerabilities beyond the adolescent focus that has dominated initial research? What role do pre-existing mental health conditions play in susceptibility to problematic AI use?
The field also needs better measurement tools. Self-report surveys dominate current research, yet they suffer from recall bias and social desirability effects. Passive sensing technologies that track actual usage patterns could provide more objective data, though they raise privacy concerns. Ecological momentary assessment approaches that capture experiences in real-time might offer a middle path.
Perhaps most critically, we need research addressing the treatment gap. Even if we develop validated diagnostic criteria for AI-use disorders, the mental health system already struggles to meet existing demand. Where will treatment capacity come from? Can digital therapeutics play a role, or does that risk perpetuating the very patterns we aim to disrupt? How do we train clinicians to recognise and treat AI-specific concerns when most received training before conversational AI existed?
Despite these uncertainties, preliminary clinical pathways are emerging. The immediate priority involves integrating AI-use assessment into standard psychiatric evaluation. Clinicians should routinely ask about AI chatbot usage, just as they now inquire about social media and gaming habits. Questions should probe not just frequency and duration, but the nature of relationships formed, emotional investment, and impacts on offline functioning.
When problematic patterns emerge, stepped-care approaches offer a pragmatic framework. Mild concerns might warrant psychoeducation and self-monitoring. Moderate cases could benefit from brief interventions using motivational interviewing techniques adapted for digital behaviours. Severe presentations would require intensive treatment, likely drawing on CBT-IA protocols whilst remaining alert to AI-specific features.
Treatment should address comorbidities, as problematic AI use rarely occurs in isolation. Depression, anxiety, social phobia, and autism spectrum conditions appear over-represented in early clinical observations, though systematic prevalence studies remain pending. Addressing underlying mental health concerns may reduce reliance on AI relationships as a coping mechanism.
Family involvement proves crucial, particularly for adolescent cases. Parents and caregivers need education about warning signs and guidance on setting healthy boundaries without completely prohibiting technology that peers use routinely. Schools and universities should integrate AI literacy into digital citizenship curricula, helping young people develop critical perspectives on human-AI relationships before problematic patterns solidify.
Peer support networks may fill gaps that formal healthcare cannot address. Support groups for internet and gaming addiction have proliferated; similar communities focused on AI-use concerns could provide validation, shared strategies, and hope for recovery. Online forums paradoxically offer venues where individuals struggling with digital overuse can connect, though moderation becomes essential to prevent these spaces from enabling rather than addressing problematic behaviours.
Regulatory responses are accelerating even as the evidence base remains incomplete. The bipartisan letter from 44 state attorneys general signals political momentum for intervention. The FTC inquiry suggests federal regulatory interest. Proposed legislation, including bills that would ban minors from conversing with AI companions, reflects public concern even if the details remain contentious.
Europe's AI Act, which entered into force in August 2024, classifies certain AI systems as high-risk based on their potential for harm. Whether conversational AI chatbots fall into high-risk categories depends on their specific applications and user populations. The regulatory framework emphasises transparency, human oversight, and accountability, principles that could inform approaches to AI mental health concerns.
However, regulation faces inherent challenges. Technology evolves faster than legislative processes. Overly prescriptive rules risk becoming obsolete or driving innovation to less regulated jurisdictions. Age verification for restricting minor access raises privacy concerns and technical feasibility questions. Balancing free speech considerations with mental health protection proves politically and legally complex, particularly in the United States.
Industry self-regulation offers an alternative or complementary approach. The partnership for AI has developed guidelines emphasising responsible AI development. Whether companies will voluntarily adopt practices that potentially reduce user engagement and revenue remains uncertain. The Character.AI lawsuits may provide powerful incentives, as litigation risk concentrates executive attention more effectively than aspirational guidelines.
Ultimately, effective governance likely requires a hybrid approach: baseline regulatory requirements establishing minimum safety standards, industry self-regulatory initiatives going beyond legal minimums, professional clinical guidelines informing treatment approaches, and ongoing research synthesising evidence to update all three streams. This layered framework could adapt to evolving understanding whilst providing immediate protection against the most egregious harms.
The genie will not return to the bottle. Conversational AI has achieved mainstream adoption with remarkable speed, embedding itself into educational, professional, and personal contexts. The question is not whether we will interact with AI, but how we will do so in ways that enhance rather than diminish human flourishing.
The tragedies of Sewell Setzer III and Juliana Peralta demand that we take AI addiction risks seriously. Yet premature pathologisation risks medicalising normal adoption of transformative technology. The challenge lies in developing clinical frameworks that identify genuine dysfunction whilst allowing beneficial use.
We stand at an inflection point. The next five years will determine whether AI-use disorders become a recognised clinical entity with validated diagnostic criteria and evidence-based treatments, or whether initial concerns prove overblown as users and society adapt to conversational AI's presence. Current evidence suggests the truth lies somewhere between these poles: genuine risks exist for vulnerable populations, yet population-level impacts remain modest.
The path forward requires vigilance without hysteria, research without delay, and intervention without overreach. Clinicians must learn to recognise and treat AI-related concerns even as diagnostic frameworks evolve. Developers must prioritise user wellbeing even when it conflicts with engagement metrics. Policymakers must protect vulnerable populations without stifling beneficial innovation. Users must cultivate digital wisdom, understanding both the utility and the risks of AI relationships.
Most fundamentally, we must resist the false choice between uncritical AI adoption and wholesale rejection. The technology offers genuine benefits, from mental health support for underserved populations to productivity enhancements for knowledge workers. It also poses genuine risks, from parasocial dependency to displacement of human relationships. Our task is to maximise the former whilst minimising the latter, a balancing act that will require ongoing adjustment as both the technology and our understanding evolve.
The compulsive mind meeting addictive intelligence creates novel challenges for mental health. But human ingenuity has met such challenges before, developing frameworks to understand and address dysfunctions whilst preserving beneficial uses. We can do so again, but only if we act with the urgency these tragedies demand, the rigour that scientific inquiry requires, and the wisdom that complex sociotechnical systems necessitate.
Social Media Victims Law Center (2024-2025). Character.AI Lawsuits. Retrieved from socialmediavictims.org
American Bar Association (2025). AI Chatbot Lawsuits and Teen Mental Health. Health Law Section.
NPR (2024). Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits.
AboutLawsuits.com (2024). Character.AI Lawsuit Filed Over Teen Suicide After Alleged Sexual Exploitation by Chatbot.
CNN Business (2025). More families sue Character.AI developer, alleging app played a role in teens' suicide and suicide attempt.
AI Incident Database. Incident 826: Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails.
Pew Research Center (2025). ChatGPT use among Americans roughly doubled since 2023. Short Reads.
Montag, C., et al. (2025). The role of artificial intelligence in general, and large language models specifically, for understanding addictive behaviors. Annals of the New York Academy of Sciences. DOI: 10.1111/nyas.15337
Springer Link (2025). Can ChatGPT Be Addictive? A Call to Examine the Shift from Support to Dependence in AI Conversational Large Language Models. Human-Centric Intelligent Systems.
ScienceDirect (2025). Generative artificial intelligence addiction syndrome: A new behavioral disorder? Telematics and Informatics.
PubMed (2025). People are not becoming "AIholic": Questioning the "ChatGPT addiction" construct. PMID: 40073725
Psychiatric Times. Chatbot Addiction and Its Impact on Psychiatric Diagnosis.
ResearchGate (2024). Conceptualizing AI Addiction: Self-Reported Cases of Addiction to an AI Chatbot.
ACM Digital Library (2025). The Dark Addiction Patterns of Current AI Chatbot Interfaces. CHI Conference on Human Factors in Computing Systems Extended Abstracts. DOI: 10.1145/3706599.3720003
World Health Organization (2019-2022). Addictive behaviours: Gaming disorder. ICD-11 Classification.
WHO Standards and Classifications. Gaming disorder: Frequently Asked Questions.
BMC Public Health (2022). Functional impairment, insight, and comparison between criteria for gaming disorder in ICD-11 and internet gaming disorder in DSM-5.
Psychiatric Times. Gaming Addiction in ICD-11: Issues and Implications.
American Psychiatric Association (2013). Internet Gaming Disorder. DSM-5 Section III.
Young, K. (2011). CBT-IA: The First Treatment Model for Internet Addiction. Journal of Cognitive Psychotherapy, 25(4), 304-312.
Young, K. (2013). Treatment outcomes using CBT-IA with Internet-addicted patients. Journal of Behavioral Addictions, 2(4), 209-215. DOI: 10.1556/JBA.2.2013.4.3
Abd-Alrazaq, A., et al. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6, 231. Published December 2023.
JMIR (2025). Effectiveness of AI-Driven Conversational Agents in Improving Mental Health Among Young People: Systematic Review and Meta-Analysis.
npj Mental Health Research (2024). Loneliness and suicide mitigation for students using GPT3-enabled chatbots.
PMC (2024). User perceptions and experiences of social support from companion chatbots in everyday contexts: Thematic analysis. PMC7084290.
Springer Link (2024). Mental Health and Virtual Companions: The Example of Replika.
MIT Technology Review (2024). The allure of AI companions is hard to resist. Here's how innovation in regulation can help protect people.
Frontiers in Psychiatry (2024). Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop.
JMIR Mental Health (2025). Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review.
Android Digital Wellbeing Documentation. Manage how you spend time on your Android phone. Google Support.
Apple iOS. Screen Time and Family Sharing Guide. Apple Documentation.
PMC (2024). Digital wellness or digital dependency? A critical examination of mental health apps and their implications. PMC12003299.
Cureus (2025). Social Media Algorithms and Teen Addiction: Neurophysiological Impact and Ethical Considerations. PMC11804976.
The Jed Foundation (2024). Tech Companies and Policymakers Must Safeguard Youth Mental Health in AI Technologies. Position Statement.
The Regulatory Review (2025). Regulating Artificial Intelligence in the Shadow of Mental Health.
Federal Trade Commission (2025). FTC Initiates Inquiry into Generative AI Developer Safeguards for Minors.
State Attorneys General Coalition Letter (2025). Letter to Google, Meta, and OpenAI Regarding Child Safety in AI Chatbot Technologies. Bipartisan Coalition of 44 States.
Business & Human Rights Resource Centre (2025). Character.AI restricts teen access after lawsuits and mental health concerns.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
from
hustin.art
The teahouse trembled as his jian met her shuang gou, sparks skittering like drunken fireflies. "Ten years," she spat, her blade a silver blur, "and you still fight like a concussed mongoose." The scent of oolong and blood hung thick. He grinned, teeth red; her last strike had grazed his ribs, just as he'd planned. Outside, monsoon winds howled through Kowloon's neon canyons. Her footwork faltered; the poison was finally at work. "Should've checked your cup, mei mei," he sighed, watching her knees buckle. The old master's parchment burned in his sleeve: one less secret in this wretched world. The rain began. Perfect for washing away corpses.
from Patrimoine Médard bourgault
The sea breathed slowly that evening, like some immense animal. Médard, leaning against the ship's rail, let the mist come and wet his face. He was still young, but he had already understood that the sea was not a landscape: it was an ordeal.
The ship moved on without a sound, gliding along the great shipping routes where submarines prowled. It was the middle of the war, and every night carried the same weight: that of a silence no one dared to break.
Médard took from his pocket the little sheet of prayers he had kept since Quebec City. He opened it slowly, the way one unfolds a certainty.
"I promise several Masses to the Sacred Heart… to be preserved from any accident during this voyage…"
It was written in his own hand, in that mixture of reverence and urgency that only a man in danger can feel. The words trembled a little, but not from the cold.
He remembered very well the moment he had written that promise: an evening before departure, when rumours of drifting mines and torpedoes had swept through the port cafés like a dark current.
The captain had ordered all lights extinguished. The ship sailed on blind. The men whispered, but their voices were lost in the wind.
Médard stared at the dark surface. He had heard it said that German torpedoes made no sound before impact. The mere thought of it tightened his throat.
So he had turned once more to prayer. Not out of habit; out of inner assent.
"Good Saint Anne, protect us…"
He was not asking only to be saved: he was asking to go on, to press forward, to accomplish what he was meant to accomplish, even if, at that moment, he did not yet know that his destiny would be to carve.
A few days later, the sea decided to rise. A real storm, the kind that knocks even seasoned sailors off their feet.
The ship climbed, plunged, crashed back down. Every trough seemed bent on swallowing the whole crew. The air smelled of salt, fear, and wet rope.
Médard, gripping the winch, felt his heart beat to the rhythm of the waves. He thought again of his promise. He repeated it, this time without a voice, only in his chest.
He was not sure he was an especially brave man, but he knew how to do one thing: hold on.
And he held.
The next day, the sea had become a great motionless plain once more. The sun, timid at first, began to light the shrouds. You would have said nothing had happened.
Médard walked the deck. He loved mornings like that, when the whole crew breathes a little deeper, as if in thanks.
He thought then of the shrine of Sainte-Anne-de-Beaupré, of the candles, of floors that smell of wax. He promised himself he would go back.
What he did not yet know was that one day this reflex of lifting his gaze upward would become the foundation of his entire carved work.
When he finally returned to Saint-Jean-Port-Joli, the river seemed larger to him than the ocean. The wind no longer had the same voice. It smelled of land.
He went back to his carpentry. But in his hands there was now something else: the patience of long nights at sea, fear transformed into calm, and the gratitude that had followed him everywhere.
Sculpture would come a few years later. It would be born of exactly the same movement as his sailor's prayers: a way of standing firm, of seeking beauty, of answering a silent call.
Years later, when Médard carved his first crucifixes, he would remember the dark nights when he had placed his life in God's hands.
And as the knife cut into the wood, he would still hear, somewhere far off, in a memory the sea never erases, the soft sound of the waves against the hull, and the inner voice saying to him:
→ [Médard Bourgault: biography, journal, and the sculptor's work](/url-de-ta-page-mere)
→ [Analysis: Médard Bourgault's maritime period](/url-maritime)
→ [The woods of Quebec according to Médard Bourgault](/url-bois)
→ [Artistic education according to Médard Bourgault](/url-education)
→ [The spiritual journal of Médard Bourgault](/url-journal-spirituel)