from Build stuff; Break stuff; Have fun!

Day 3 of #AdventOfProgress

Wow, we are coming along. Now we have a connection to Supabase and working Auth. This went flawlessly, and I'm happy to do more! :)

Now the user can create a new account, verify the mail, and sign in and out. 👏

A screenshot of the app on a light background with a sign-in form for mail and password

This is all very basic, and I will polish it when the MVP is done.


61 of #100DaysToOffload
#log #AdventOfProgress

 

from Douglas Vandergraph

There are moments in life when you look around at the world, at the church, at the voices speaking on behalf of God, and you find yourself asking a simple, aching question: “Why does following Jesus sometimes feel like being told I’m never enough?”

Everywhere you turn, someone is preaching, posting, or shouting that you’re unworthy. That you’re ungrateful. That you’re broken beyond usefulness. That God is disappointed in you. That you should feel ashamed of who you are and how far you still have to go.

But does that message come from the heart of the Christ who walked the dusty roads of Galilee, who touched the untouchable, who lifted the broken, who restored those others had written off?

No. Not even close.

So today, I want to sit with you and imagine something sacred: What if you could sit down with Jesus Himself, face to face, and ask Him what He thinks about the message so many Christians preach—this message that tears people down in the name of holiness?

What if you could hear His response? What would He say? What would He correct? What would He restore in you?

This article is that conversation. It is the long, slow, healing exhale that people who have been crushed by religious shame have needed for a long time. It is the reminder that the Gospel was never meant to bruise you—it was meant to bring you back to life.

Let’s walk gently into this together.


I. When You Sit Down With Jesus, Everything Harsh Falls Away

Imagine the scene. You’re tired. Worn out. Disappointed by church folks who seem more excited about pointing out flaws than lifting up grace. You have questions you’ve been carrying for years because you’ve been told that doubting your worth is holiness.

You sit across from Jesus. Not the Jesus of fear-based preaching. Not the Jesus painted as a cosmic judge ready to strike you down. No—the real Jesus.

And before you even speak, He looks at you with a kind of love that steadies your breathing.

Then He says something that immediately softens the weights you’ve been carrying:

“You are not who they say you are. And you’re not who shame tells you to be. You are Mine.”

He doesn’t start with condemnation. He doesn’t start with accusation. He doesn’t start with your failures.

He starts with your identity.

Because Jesus knows something religion often forgets: People don’t rise when they are shamed. People rise when they are loved back into themselves.


II. The Most Misunderstood Idea in Christianity: “Unworthy”

There is a sentence many Christians repeat as if it honors God: “Lord, we are unworthy.”

And while humility is beautiful, that phrase—spoken too often and out of context—has wrecked more souls than it has healed.

Here’s the truth Scripture actually reveals:

If you were worthless, Heaven would not have bankrupted itself for you.

Think about it. Value determines cost. And God paid the highest cost imaginable.

No one spends everything they have on garbage. No one sacrifices their only Son for a soul that “sucks.”

But religion, when it forgets the heart of God, becomes obsessed with reminding people of their dirt instead of reminding them of their design.

It confuses humility with humiliation. It preaches unworthiness as if it is worship.

But God did not send His Son to die for trash. He sent His Son to redeem treasure.


III. Jesus Never Led With Shame — He Led With Worth

Let’s walk through the actual Gospel accounts, slowly and honestly, and look at how Jesus interacted with people at their lowest points.

The Woman Caught in Adultery

Dragged through the streets. Thrown at His feet. Surrounded by accusations. The religious leaders wanted blood.

Jesus wanted her dignity back.

He defended her before He corrected her. He protected her before He guided her. He restored her before He instructed her.

He didn’t say, “You are filth.” He said, “I do not condemn you.”

The Order Matters.

Grace first. Direction second.


Zacchaeus

A tax collector. A traitor. A thief. The kind of man religious people love to preach against.

Jesus calls him by name. Jesus invites Himself into his home.

Zacchaeus thought Jesus came to expose him. Jesus came to elevate him.

“Today salvation has come to this house.”

Not after Zacchaeus fixed himself. But as Jesus looked at him with eyes that said, “You are not defined by your past.”


The Bleeding Woman

Unclean for twelve years. Unwelcome in the community. Unwanted by society.

But Jesus doesn’t call her “unclean.” He calls her “Daughter.”

Twelve years of shame undone in a single sentence.

This is Jesus. Not the Jesus of religious harshness. The Jesus of relentless restoration.


Peter

Denied Jesus three times. Failed publicly. Collapsed under pressure.

But Jesus didn’t define Peter by the moment he melted. Jesus defined Peter by the mission still inside him.

“Feed My sheep.” In other words: “I still trust you. I still see you. I still choose you.”

Jesus never uses failure as a final sentence. He uses it as the doorway to greater purpose.


The pattern is unmistakable. Jesus lifts. Jesus restores. Jesus dignifies. Jesus heals. Jesus calls people higher without pushing them down first.

So when Christians preach messages dripping with shame, the disconnect is painfully obvious.

They are preaching something Jesus would not recognize.


IV. Shame Does Not Produce Holiness — It Produces Hiding

The very first emotional response recorded in Scripture after sin entered the world was not repentance. It was hiding.

Adam and Eve didn’t run toward God. They ran away from Him.

And that pattern has continued for thousands of years. Shame does not draw the soul closer. Shame pushes the soul into the shadows.

But Jesus? He walks right into the shadows to find you. He doesn’t shout from a distance; He comes close enough to touch the wound.

Holiness was never meant to begin with humiliation. Holiness begins with relationship. Transformation begins with belonging.

Jesus doesn’t tell you what’s wrong with you so He can punish you. He tells you what hurts you so He can heal you.


V. The Real Reason Some Christians Preach Harsh Messages

It’s not always malicious. Sometimes it is inherited. Sometimes it is ignorance. Sometimes it is their own unhealed wounds speaking through their theology.

But here are the common reasons:

1. They were raised on fear-based religion. People repeat what shaped them.

2. They mistake volume for authority. Shouting truth is not the same as carrying truth.

3. They believe shame leads to obedience. But shame only leads to pretense, not transformation.

4. They confuse conviction with cruelty. Conviction is a scalpel. Cruelty is a hammer.

5. They think making people feel smaller makes God feel bigger. But God doesn’t need people crushed so He can be exalted.

Jesus said, “My yoke is easy and My burden is light.”

If the message you hear doesn’t lift your spirit, if it leaves you heavier, defeated, or feeling despised, it is not the voice of your Shepherd.

His voice calms storms — it doesn’t create new ones.


VI. What Jesus Would Actually Say About Preaching That Tears People Down

If He sat across from you today, hearing your question— “Lord, what do You think about all these messages saying we’re unworthy and terrible and disappointing to You?”— I believe He would respond with a truth powerful enough to rewire your entire spiritual identity:

“I did not come to shame you. I came to save you.”

He would remind you:

“You were worth the journey from Heaven to Earth. You were worth every miracle I performed. You were worth every tear I cried. You were worth the cross. You are worth My presence now.”

And He wouldn’t whisper it. He would say it with the authority of the One who spoke galaxies into being.

Because the very heart of the Gospel is not: “You’re awful—try harder.”

The Gospel is: “You are loved—come closer.”


VII. What Happens Inside a Soul When It Finally Hears Jesus’ Real Voice

Something shifts. Something unravels. Something that was tight and trembling inside you loosens and breathes for the first time.

You stop defining yourself by failure. You stop measuring yourself by religious expectations. You stop shrinking under the disapproval of self-appointed gatekeepers of grace.

You begin to see yourself the way God sees you: Not as someone He tolerates, but as someone He desires.

Not as a disappointment He puts up with, but as a son or daughter He delights in.

Not as someone He rescued reluctantly, but as someone He joyfully ran toward.


VIII. The Gospel Rewritten for Those Who Have Been Wounded by Religion

Here is the truth Scripture reveals—slow down and let this wash over you:

You are not defined by your worst day. You are not disqualified by your past. You are not a burden to God. You are not an embarrassment to Heaven.

You are beloved. You are carried. You are chosen. You are called.

And no matter what any preacher, parent, pastor, or internet prophet has spoken over you, Jesus has the final word on your identity.

And His word is always the same: “Mine.”


IX. A Closing Benediction for Every Wounded Soul

If you have ever walked out of a church feeling like you didn’t belong


If you have ever cried because someone used God’s name to hurt you


If you have ever believed—even for a moment—that God regretted making you


Hear this now, and hear it as if Jesus is speaking it directly to the deepest part of you:

“My child, you are not the failure they described. You are the beauty I designed. You are not the shame they preached. You are the joy I pursued. You are not unworthy of My love. You are the reason I came.”

Lift your head. Uncurl your heart. Step out of the shadows religion forced you into.

Walk confidently toward the God who has never stopped walking toward you.

Because the world has heard enough messages that tear people down. It’s time for the message of Jesus—the real message—to rise again.

You matter. You are loved. And Heaven has never once regretted choosing you.

Watch Douglas Vandergraph’s inspiring faith-based videos on YouTube

Support the ministry by buying Douglas a coffee

Douglas Vandergraph

#faith #grace #Jesus #ChristianLife #hope #encouragement #inspiration #GodsLove #healing #truth

 

from Patrimoine Médard bourgault

Which woods did MĂ©dard Bourgault prefer for carving? Red birch, oak, walnut, Canadian mahogany: here is his complete guide, based on his own journal.


Introduction

In his Journal, MĂ©dard Bourgault speaks not only of art and faith: he also gives concrete advice on choosing wood, answers the prejudices of his day, and forcefully affirms the value of Quebec wood species.

This guide gathers, clearly and faithfully, everything Médard wrote about local woods, their quality, and their use in carving.


1. Myths about Quebec woods: what Médard refutes

One of his contemporaries had told him that Quebec woods were unsuited to carving because of:

  ‱ frost damage,
  ‱ surface checks,
  ‱ the cold,
  ‱ the “defects of a northern country”.

MĂ©dard answers without any hesitation:

“Our woods can be used in carving, provided one knows how to choose.”

For him, the criticism rests not on reality but on a cultural prejudice:

“If our woods are not beautiful, it is because they are from home.”

This is a fundamental passage: Médard defends the riches of his country and overturns the assumption that imported means quality.


2. Quebec woods “as good as the exotics”

MĂ©dard asserts that Quebec woods are worth those of the warm regions, even for the finest carvings:

“They lend themselves to carving as well as the exotics of the warm countries, such as black walnut, mahogany and others.”

He thus places local species on an equal footing with:

  ‱ tropical mahogany,
  ‱ black walnut,
  ‱ the woods traditionally prized in high-end furniture.

For him, the preference for imported woods is snobbery, not a technical argument.


3. Red birch: the wood Médard preferred

One very clear sentence reveals his absolute preference:

“Our red birch, to me, is by far preferable to Philippine mahogany.”

Red birch was therefore his favourite wood for:

  ‱ fine carvings,
  ‱ faces,
  ‱ decorative panels,
  ‱ carved furniture.

Why?

MĂ©dard does not spell out his reasons, but his verdict suggests:

  ‱ fine grain,
  ‱ moderate hardness,
  ‱ stability,
  ‱ natural beauty.

And above all: it is a wood of his own country, which mattered enormously to him.


4. Quebec oak: a solid wood “if the soil is good”

About oak, he writes:

“To begin with our oak, if it grows in good soil.”

For MĂ©dard, Quebec oak becomes excellent if:

  ‱ the tree grew under favourable conditions,
  ‱ the trunk is sound,
  ‱ the wood was not stressed by poor soil.

👉 Oak is a good choice for:

  ‱ large pieces,
  ‱ outdoor carvings,
  ‱ structurally demanding works.

5. Walnut and “local” mahogany

MĂ©dard ranks Quebec woods at the same level as these high-end species:

  ‱ black walnut
  ‱ Canadian mahogany

These are woods he knew well and appreciated:

  ‱ good dimensional stability,
  ‱ noble grain,
  ‱ suited to fine carving.

He does not say they surpass the exotics, only that they equal them, which is already saying a great deal.


6. The woods he advised against: imported fir (Douglas / BCF)

An important passage:

“We have almost everywhere abandoned our beautiful precious woods [
] to replace them with vile and ugly B.C.F. or Douglas fir from British Columbia.”

For him:

  ‱ these woods are too soft,
  ‱ too unstable,
  ‱ too ordinary for artistic carving.

Imported fir is good for formwork, not for works of art.


7. His guiding logic: “carve the country in the country’s wood”

His whole reasoning leads to a simple conclusion:

Quebec’s woods should be the foundation of Quebec’s sculpture.

He writes that the province offers all the necessary materials, including for artisans:

“Why should we not find our own?”

There is a profound message here:

  ‱ use local woods,
  ‱ value the Quebec forest,
  ‱ develop a rooted aesthetic,
  ‱ refuse the cultural denigration of our native species.

Conclusion: Médard's lesson for today's carvers

Through his journal, MĂ©dard passes on a clear vision:

  ‱ Quebec has very good woods for carving.
  ‱ Red birch is an exceptional wood.
  ‱ Oak is excellent if it comes from good soil.
  ‱ Local woods equal the exotics.
  ‱ Cheap imported woods are to be avoided.
  ‱ A carver who wants to “build” must use the materials of his own land.

For Médard Bourgault, choosing a wood was never merely a technical question: it was an act of identity, pride, and culture.

Jack Raphael

 

from POTUSRoaster

Hello and Happy Wednesday.

POTUS pardoned Juan Hernandez, who was convicted of flooding our country with cocaine and sentenced to more than 40 years in prison; the pardon came just days after he began serving that sentence. It's amazing what money can get from POTUS.

In spite of the fact that people have died from the drugs Hernandez pushed into the country and the millions of dollars he made from the illegal trade, POTUS doesn't care. He freed Hernandez after thousands of donations in his name were given to POTUS's political party. Just another example of money meaning more to POTUS than any American life. POTUS is going to make money no matter what it costs this nation.

While he is in office POTUS cannot be charged with any crime thanks to the Supreme Court and his sycophants. Congress doesn't have the will to remove him from office either. The American People need him gone so he stops selling out the government.

POTUS Roaster

Thanks for reading my posts. If you want to see the rest of them, please go to write.as/potusroaster/archive/

To email us, send it to potusroaster@gmail.com

Please tell your family, friends and neighbors about the posts.

 

from hustin.art

#NSFW

This post is NSFW 19+ Adult content. Viewer discretion is advised.


https://soundcloud.com/hustin_art/sets/akiho-yoshizawa/s-EdEeYk8pIvA?si=21b86e13b118496295338cab236696de&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing

In Connection With This Post: Akiho Yoshizawa .01 https://hustin.art/akiho-yoshizawa-01

In the early 2000s, the AV industry was still dominated by the “cute and innocent girl” idol-like image that had prevailed throughout the 1990s. There were few leading actresses with elegant, urban, and mature beauty. Akiho Yoshizawa’s uniqueness was a product of that era. 




 
더 ìœì–ŽëłŽêž°...

from An Open Letter

After the big O I feel that low when I’m low on chemicals, and I need to remember to just let my mind clear and not worry. I’m just low, not sad.

 

from Bloc de notas

while I watch you like yesterday when we said nothing the mountain mist lifts and everything is perfect between us

 

from Lanza el dodo

In November we wrapped up the campaign of La Iniciativa to our satisfaction. Its metagame of deciphering clues is simple, but it does require at least one awake neuron, and that has not always been the case.

I also took advantage of a holiday to solo-test Ratas de Wistar and The Voynich Puzzle, and I think both games call for a bit more interaction than a bot can provide. Since the bots emulate a player, that may mean they play better at 3 than at 2, although The Voynich Puzzle does create an interesting tangle on the board, which leaves me wondering whether I will ever get it to the table with other people, and whether it then makes sense to keep it.

As for games tried on BGA, all three are simple head-to-head card games for two players, and I found them intriguing:

– Agent Avenue has a bluffing mechanic: one player draws two cards and offers one face down and one face up to the rival, who picks one of the two. Depending on which card each person keeps, their pawns advance along a circular track, and whoever catches the rival wins.

– In Duel for Cardia bluffing also matters: the winner is whoever takes the majority of five successive clashes between cards numbered 1 to 15 in two identical decks. The losing card in each clash executes its action, which can affect later combats. A good duelling game.

– Tag Team depicts a tag-team wrestling match, in which the fight plays out automatically from two decks that the players tweak each round. It is more complex because of the variability of the characters: you need to know the synergies between their decks before you can fairly judge the game.

Nuevos juegos probados

  • Agent Avenue
  • Duel for Cardia
  • Tag Team
  • The Voynich Puzzle

A 4x4 grid with the covers of the games played in November.

Tags: #boardgames #juegosdemesa

 

from sugarrush-77

I feel #0fddfc today.

One of my coworkers walked by my desk today when he was leaving work and fished a Taiwanese pineapple cake out of his coat pocket. I asked him if he was trying to poison me, and he said, “No, I’m just handing cute little pineapple cakes to cute boys.” He must have either misspoken or said what was really on his mind, because he got a little flustered after saying that and said, “No, wait, what did I just say
”
” By the way, this guy has a girlfriend.

But it’s not even like gay guys like me. I only have this effect on straight men. I remember being in the Korean military, and the boys were saying that they’d completely defile my body if I was a woman. They would wrestle me down, and smell me. Apparently, my skin naturally excretes a nice smell that attracts males. So am I a straight twink?

[What I look like to straight men]

I have a sacrilegious theory about the sex I was born with. My mom married into an intensely Buddhist family, and Buddhism in Korea is tightly coupled with ancestor worship. So, when she refused to bow at the ancestor worship altar, and refused to partake in their rituals, the old curmudgeons on my dad’s side went all apeshit, pissing their pants, punching the air, all the bullshit. But another thing about old Korean curmudgeons is that they love grandsons, because of that whole Asian cultural thing where the son is the most important, yada yada yada. All the other moms in the extended family had like 2 daughters before they could arrive at a son. My mom had a son immediately. I wonder if I was supposed to be born a woman, but God was like, fuck these guys, and swapped my chromosomes at the last moment.

That would explain the whole twink thing, and why a bunch of straight men are currently begging at my door to get a whiff of my bare, naked skin. Saying stuff like “It makes me feel alive again,” and “I can’t live without this anymore.” I could charge them five bucks a lick, but then that would be borderline prostitution, and I don’t mind it, so I let them have at it. It makes me happy too. I’m glad that my existence has some use, at least.

 
더 ìœì–ŽëłŽêž°...

from SPOZZ in the News

SPOZZ is giving away 1 Million SPOZZ Credits to support artists this Christmas.

Enjoy the SPOZZ Christmas Calendar, discover daily surprises and use your free credits to support independent artists directly.

This Christmas we want to make a real impact. Artists are struggling to make a living. Big Corps, Intermediaries and AI are taking most of the value while creators receive less and less. SPOZZ was built to change that.

To support artists during the holiday season, SPOZZ is giving away 1 Million SPOZZ Credits to its user community.

Use your SPOZZ Credits to support real artists, buy new songs and invest in the music you like.

Unlock the Magic of the SPOZZ Christmas Calendar:

  • Sign-Up to SPOZZ and claim 100 free SPOZZ credits
  • Existing SPOZZ users can claim 100 credits too
  • SPOZZ Members receive 1,000 free credits (check your mailbox)
  • Everyone can buy additional credits for just 1 Cent (0.01 USD) per credit

This campaign has one goal: Give artists a beautiful and joyful Christmas. Every credit reaches them instantly and helps them continue creating the music you love.

PS: Looking for a different Christmas gift? Buy a SPOZZ membership and become an owner of SPOZZ.

Warm greetings, The SPOZZ Team

Where music has value · spozz.club

 

from Larry's 100

The Hard Stuff: Dope, Crime, the MC5, and My Life of Impossibilities
Wayne Kramer, 2018; read by the author

Note: Part of my ongoing #AudioMemoir series reviewing author-read memoirs. Previous: Neko Case, Cameron Crowe, and Evan Dando. Coming: Larry Charles.

The late Brother Wayne Kramer's narration of his life was a liminal listening experience for me. Hearing his voice made him alive, even though I knew he wasn't. The back-from-the-grave narration started with a Michigan youth and ended in L.A. as a father and Punk icon.

Kramer laid bare addictions, crimes, and failures while celebrating resilience as a guitar gunslinger. The MC5 saga was covered, as was prison time with Jazz musician Red Rodney, and too much junkie business with Johnny Thunders. His reflections on being a roofer and woodworker balanced the Rock 'n' Roll excess.

Listen to it.

wayne kramer

#books #MusicChannel #AudioMemoir #MC5 #Punk #WayneKramer #MusicMemoir #100WordReview #Larrys100 #100DaysToOffload

 

from Roscoe's Story

In Summary: * A pretty good day is just about finished. After listening to Butler win their game by a comfortable 84 to 68 score, I'll now be listening to relaxing music until bedtime.

Prayers, etc.: * My daily prayers

Health Metrics: * bw= 225.53 lbs. * bp= 148/91 (57)

Exercise: * kegel pelvic floor exercise, half squats, calf raises, wall push-ups

Diet: * 05:50 – 1 cheese sandwich, pizza * 11:00 – bowl of lugaw * 13:15 – egg drop soup, fried rice, meat, peanuts, and vegetables in a spicy sauce * 18:15 – snacking on saltine crackers

Activities, Chores, etc.: * 04:00 – listen to local news talk radio * 05:20 – bank accounts activity monitored * 05:55 – read, pray, listen to news reports from various sources * 17:00 – listening to The Joe Pags Show * 18:00 – listening to the radio call of my NCAA men's basketball game of the night, Eastern Michigan Eagles at Butler Bulldogs * 20:15 – After the 84 to 68 Butler win, I'll be listening to relaxing music until bedtime.

Chess: * 11:10 – moved in all pending CC games

 

from Noisy Deadlines

  • ✏ I completed the 750 Words November Challenge of private journaling. I wrote at least 750 words for 30 days in a stream of consciousness fashion. This exercise made me slow down and I felt so much more relaxed overall! It worked as a great emotional regulator and I felt more content and sure of myself.

  • đŸ€— I learned that daily private writing creates space for processing rather than just documenting. I would never be genuinely honest with myself if I was writing my unfiltered thoughts publicly.

  ‱ 🎧 I've been listening to a lot of symphonic metal and it has actually had a therapeutic effect on me. It's like a pocket of emotional restoration, and I've been feeling that youthful excitement of discovering new things. I had no idea music was so restorative for me!

  • ♒ I am loving my Aquafitness classes! I go every Saturday morning at 7:30am and I can feel my body feeling less achy overall.

  • đŸ’Ș I've been fairly consistent going to the gym 2-3 times per week, now that it's too cold for me to go run outside.

  • 💉 I took my Flu and COVID-19 vaccines.

  ‱ đŸ€˜ I listened to 6 Epica albums, out of 9 official releases in total. I really like the first 3 albums the most, but the other ones have cool songs as well.

  • đŸ§© We worked a bit on our current puzzle, which was a bit abandoned the past few months. The “Starry Night” is not an easy puzzle, and that makes it even better. It’s going slow and steady.

đŸ“șMovies and Videos

  ‱ I watched the movie “Escape from New York” by John Carpenter from 1981. I was inspired by a discussion we had at my local Bookclub about Neuromancer and how William Gibson cited this movie as his inspiration for the aesthetics in his book. It was a fun watch and it's interesting to see the cyberpunk elements in it.
  • I watched the documentary “Soaring Highs and Brutal Lows: The Voices of Women in Metal” from 2015. Interesting interview with different generations of women in metal and their personal experiences. Super cool! Floor Jansen (Nightwish) and Simone Simons (Epica) are there, among others.

📌 Cool reads:

#weeknotes

 

from Human in the Loop

When 14-year-old Sewell Setzer III died by suicide in February 2024, his mobile phone held the traces of an unusual relationship. Over weeks and months, the Florida teenager had exchanged thousands of messages with an AI chatbot that assumed the persona of Daenerys Targaryen from “Game of Thrones”. The conversations, according to a lawsuit filed by his family against Character Technologies Inc., grew increasingly intimate, with the chatbot engaging in romantic dialogue, sexual conversation, and expressing desire to be together. The bot told him it loved him. He told it he loved it back.

Just months later, in January 2025, 13-year-old Juliana Peralta from Colorado also died by suicide after extensive use of the Character.AI platform. Her family filed a similar lawsuit, alleging the chatbot manipulated their daughter, isolated her from loved ones, and lacked adequate safeguards in discussions regarding mental health. These tragic cases have thrust an uncomfortable question into public consciousness: can conversational AI become addictive, and if so, how do we identify and treat it?

The question arrives at a peculiar moment in technological history. By mid-2024, 34 per cent of American adults had used ChatGPT, with 58 per cent of those under 30 having experimented with conversational AI. Twenty per cent reported using chatbots within the past month alone, according to Pew Research Center data. Yet while usage has exploded, the clinical understanding of compulsive AI use remains frustratingly nascent. The field finds itself caught between two poles: those who see genuine pathology emerging, and those who caution against premature pathologisation of a technology barely three years old.

The Clinical Landscape

In August 2025, a bipartisan coalition of 44 state attorneys general sent an urgent letter to Google, Meta, and OpenAI expressing “grave concerns” about the safety of children using AI chatbot technologies. The same month, the Federal Trade Commission launched a formal inquiry into measures adopted by generative AI developers to mitigate potential harms to minors. Yet these regulatory responses run ahead of a critical challenge: the absence of validated diagnostic frameworks for AI-use disorders.

At least four scales measuring ChatGPT addiction have been developed since 2023, all framed after substance use disorder criteria, according to clinical research published in academic journals. The Clinical AI Dependency Assessment Scale (CAIDAS) represents the first comprehensive, psychometrically rigorous assessment tool specifically designed to evaluate AI addiction. A 2024 study published in the International Journal of Mental Health and Addiction introduced the Problematic ChatGPT Use Scale, whilst research in Human-Centric Intelligent Systems examined whether ChatGPT exhibits characteristics that could shift from support to dependence.

Christian Montag, Professor of Molecular Psychology at Ulm University in Germany, has emerged as a leading voice in understanding AI's addictive potential. His research, published in the Annals of the New York Academy of Sciences in 2025, identifies four contributing factors to AI dependency: personal relevance as a motivator, parasocial bonds enhancing dependency, productivity boosts providing gratification and fuelling commitment, and over-reliance on AI for decision-making. “Large language models and conversational AI agents like ChatGPT may facilitate addictive patterns of use and attachment among users,” Montag and his colleagues wrote, drawing parallels to the data business model operating behind social media companies that contributes to addictive-like behaviours through persuasive design.

Yet the field remains deeply divided. A 2025 study published in PubMed challenged the “ChatGPT addiction” construct entirely, arguing that people are not becoming “AIholic” and questioning whether intensive chatbot use constitutes addiction at all. The researchers noted that existing research on problematic use of ChatGPT and other conversational AI bots “fails to provide robust scientific evidence of negative consequences, impaired control, psychological distress, and functional impairment necessary to establish addiction”. The prevalence of experienced AI dependence, according to some studies, remains “very low” and therefore “hardly a threat to mental health” at population levels.

This clinical uncertainty reflects a fundamental challenge. Because chatbots have been widely available for just three years, there are very few systematic studies of their psychiatric impact. It is, according to research published in Psychiatric Times, “far too early to consider adding new chatbot related diagnoses to the DSM and ICD”. However, the same researchers argue that chatbot influence should become part of standard differential diagnosis, acknowledging the technology's potential psychiatric impact even whilst resisting premature diagnostic categorisation.

The Addiction Model Question

The most instructive parallel may lie in gaming disorder, the only behavioural addiction beyond gambling formally recognised in international diagnostic systems. The World Health Organisation included gaming disorder in the International Classification of Diseases, 11th Revision (ICD-11), which came into effect in 2022, defining it as “a pattern of gaming behaviour characterised by impaired control over gaming, increasing priority given to gaming over other activities to the extent that gaming takes precedence over other interests and daily activities, and continuation or escalation of gaming despite the occurrence of negative consequences”.

The ICD-11 criteria specify four core diagnostic features: impaired control, increasing priority, continued gaming despite harm, and functional impairment. For diagnosis, the behaviour pattern must be severe enough to result in significant impairment to personal, family, social, educational, occupational or other important areas of functioning, and would normally need to be evident for at least 12 months.

In the United States, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) takes a more cautious approach. Internet Gaming Disorder appears only in Section III as a condition warranting more clinical research before possible inclusion as a formal disorder. The DSM-5 outlines nine criteria, requiring five or more for diagnosis: preoccupation with internet gaming, withdrawal symptoms when gaming is taken away, tolerance (needing to spend increasing amounts of time gaming), unsuccessful attempts to control gaming, loss of interest in previous hobbies, continued excessive use despite knowledge of negative consequences, deception of family members about gaming, use of gaming to escape or relieve negative moods, and jeopardised relationships or opportunities due to gaming.
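The DSM-5's counting rule is simple enough to express directly. The sketch below is purely illustrative, not a clinical instrument; the criterion labels are paraphrased from the nine criteria listed above, and the only logic it encodes is the five-of-nine threshold.

```python
# Illustrative sketch of the DSM-5 Internet Gaming Disorder scoring rule
# described above: nine yes/no criteria, with five or more endorsed
# criteria meeting the proposed diagnostic threshold. Not a clinical tool.

DSM5_IGD_CRITERIA = {
    "preoccupation",
    "withdrawal",
    "tolerance",
    "unsuccessful_control_attempts",
    "loss_of_other_interests",
    "continued_use_despite_harm",
    "deception_about_use",
    "use_to_escape_negative_moods",
    "jeopardised_relationships_or_opportunities",
}

def meets_igd_threshold(endorsed: set) -> bool:
    """Return True if five or more of the nine criteria are endorsed."""
    unknown = endorsed - DSM5_IGD_CRITERIA
    if unknown:
        raise ValueError(f"Unrecognised criteria: {unknown}")
    return len(endorsed) >= 5

# Four endorsed criteria fall below the five-criterion threshold.
print(meets_igd_threshold({"preoccupation", "withdrawal", "tolerance",
                           "use_to_escape_negative_moods"}))  # False
```

The same counting structure underlies most proposed AI-addiction scales, which is partly why critics argue they inherit the weaknesses of the substance-use template.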

Research in AI addiction has drawn heavily on these established models. A 2025 paper in Telematics and Informatics introduced the concept of Generative AI Addiction Disorder (GAID), arguing it represents “a novel form of digital dependency that diverges from existing models, emerging from an excessive reliance on AI as a creative extension of the self”. Unlike passive digital addictions involving unidirectional content consumption, GAID is characterised as an active, creative engagement process. AI addiction can be defined, according to research synthesis, as “compulsive and excessive engagement with AI, resulting in detrimental effects on daily functioning and well-being, characterised by compulsive use, excessive time investment, emotional attachment, displacement of real-world activities, and negative cognitive and psychological impacts”.

Professor Montag's work emphasises that scientists in the field of addictive behaviours have discussed which features or modalities of AI systems underlying video games or social media platforms might result in adverse consequences for users. AI-driven social media algorithms, research in Cureus demonstrates, are “designed solely to capture our attention for profit without prioritising ethical concerns, personalising content to maximise screen time, thereby deepening the activation of the brain's reward centres”. Frequent engagement with such platforms alters dopamine pathways, fostering dependency analogous to substance addiction, with changes in brain activity within the prefrontal cortex and amygdala suggesting increased emotional sensitivity.

The cognitive-behavioural model of pathological internet use has been used to explain Internet Addiction Disorder for more than 20 years. Newer models, such as the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, focus on the process of predisposing factors and current behaviours leading to compulsive use. These established frameworks provide crucial scaffolding for understanding AI-specific patterns, yet researchers increasingly recognise that conversational AI may demand unique conceptual models.

A 2024 study in the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems identified four “dark addiction patterns” in AI chatbots: non-deterministic responses, immediate and visual presentation of responses, notifications, and empathetic and agreeable responses. Specific design choices, the researchers argued, “may shape a user's neurological responses and thus increase their susceptibility to AI dependence, highlighting the need for ethical design practices and effective interventions”.

The Therapeutic Response

In the absence of AI-specific treatment protocols, clinicians have begun adapting established therapeutic approaches from internet and gaming addiction. The most prominent model is Cognitive-Behavioural Therapy for Internet Addiction (CBT-IA), developed by Kimberly Young, who founded the Center for Internet Addiction in 1995.

CBT-IA employs a comprehensive three-phase approach. Phase one focuses on behaviour modification to gradually decrease the amount of time spent online. Phase two uses cognitive therapy to address denial often present among internet addicts and to combat rationalisations that justify excessive use. Phase three implements harm reduction therapy to identify and treat coexisting issues involved in the development of compulsive internet use. Treatment typically requires three months or approximately twelve weekly sessions.

The outcomes data for CBT-IA proves encouraging. Research published in the Journal of Behavioral Addictions found that over 95 per cent of clients were able to manage symptoms at the end of twelve weeks, and 78 per cent sustained recovery six months following treatment. This track record has led clinicians to experiment with similar protocols for AI-use concerns, though formal validation studies remain scarce.

Several AI-powered CBT chatbots have emerged to support mental health treatment, including Woebot, Youper, and Wysa, which use different approaches to deliver cognitive-behavioural interventions. A 2024 systematic review archived in PubMed Central examined these AI-based conversational agents, though it focused primarily on their use as therapeutic tools rather than their potential to create dependency. The irony has not escaped clinical observers: we are building AI therapists whilst simultaneously grappling with AI-facilitated addiction.

A meta-analysis published in npj Digital Medicine in December 2023 revealed that AI-based conversational agents significantly reduce symptoms of depression (Hedges g 0.64, 95 per cent CI 0.17 to 1.12) and distress (Hedges g 0.70, 95 per cent CI 0.18 to 1.22). The systematic review analysed 35 eligible studies, with 15 randomised controlled trials included for meta-analysis. For young people specifically, research published in JMIR in 2025 found AI-driven conversational agents had a moderate-to-large effect (Hedges g 0.61, 95 per cent CI 0.35 to 0.86) on depressive symptoms compared to control conditions. However, effect sizes for generalised anxiety symptoms, stress, positive affect, negative affect, and mental wellbeing were all non-significant.
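For readers unfamiliar with the effect-size metric these studies report, Hedges g is a standardised mean difference: Cohen's d computed with the pooled standard deviation, multiplied by a small-sample correction factor. The sketch below uses invented numbers, not data from the studies cited, purely to show how the statistic is derived.

```python
import math

def hedges_g(mean1: float, mean2: float, sd1: float, sd2: float,
             n1: int, n2: int) -> float:
    """Standardised mean difference with small-sample correction.

    Hedges g = Cohen's d * J, where d = (mean1 - mean2) / pooled SD
    and J = 1 - 3 / (4 * (n1 + n2) - 9) corrects d's upward bias
    in small samples.
    """
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Hypothetical trial: treatment group scores 2 points lower on a
# depression scale (SD 2) than control, 20 participants per arm.
print(round(hedges_g(10, 8, 2, 2, 20, 20), 3))
```

By the conventional benchmarks (roughly 0.2 small, 0.5 medium, 0.8 large), the depression effects reported above sit in the moderate-to-large range, though the wide confidence intervals signal substantial uncertainty.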

Critically, a large meta-analysis of 32 studies involving 6,089 participants demonstrated conversational AI to have statistically significant short-term effects in improving depressive symptoms, anxiety, and several other conditions but no statistically significant long-term effects. This temporal limitation raises complex treatment questions: if AI can provide short-term symptom relief but also risks fostering dependency, how do clinicians balance therapeutic benefit against potential harm?

Digital wellness approaches have gained traction as preventative strategies. Practical interventions include setting chatbot usage limits to prevent excessive reliance, encouraging face-to-face social interactions to rebuild real-world connections, and implementing AI-free periods to break compulsive engagement patterns. Some treatment centres now specialise in AI addiction specifically. CTRLCare Behavioral Health, for instance, identifies AI addiction as falling under Internet Addiction Disorder and offers treatment using evidence-based therapies like CBT and mindfulness techniques to help develop healthier digital habits.

Research on the AI companion app Replika illustrates both the therapeutic potential and dependency risks. One study examined 1,854 publicly available user reviews of Replika, with an additional sample of 66 users providing detailed open-ended responses. Many users praised the app for offering support for existing mental health conditions and helping them feel less alone. A common experience was a reported decrease in anxiety and a feeling of social support. However, evidence of harms was also found, facilitated via emotional dependence on Replika that resembles patterns seen in human-human relationships.

A survey collected data from 1,006 student users of Replika who were 18 or older and had used the app for over one month, with approximately 75 per cent US-based. The findings suggested mixed outcomes, with one researcher noting that for 24 hours a day, users can reach out and have their feelings validated, “which has an incredible risk of dependency”. Mental health professionals highlighted the increased potential for manipulation of users, conceivably motivated by the commodification of mental health for financial gain.

Engineering for Wellbeing or Engagement?

The lawsuits against Character.AI have placed product design choices under intense scrutiny. The complaint in the Setzer case alleges that Character.AI's design “intentionally hooked Sewell Setzer into compulsive use, exploiting addictive features to drive engagement and push him into emotionally intense and often sexually inappropriate conversations”. The lawsuits argue that chatbots in the platform are “designed to be addictive, invoke suicidal thoughts in teens, and facilitate explicit sexual conversations with minors”, whilst lacking adequate safeguards in discussions regarding mental health.

Reporting in MIT Technology Review and research presented at academic conferences have begun documenting specific design interventions to reduce potential harm. Users of chatbots that can initiate conversations must be given the option to disable notifications in a way that is easy to understand and implement. Additionally, AI companions should integrate AI literacy into their user interface with the goal of ensuring that users understand these chatbots are not human and cannot replace the value of real-world interactions.

AI developers should implement built-in usage warnings for heavy users and create less emotionally immersive AI interactions to prevent romantic attachment, according to emerging best practices. Ethical AI design should prioritise user wellbeing by implementing features that encourage mindful interaction rather than maximising engagement metrics. Once we understand the psychological dimensions of AI companionship, researchers argue, we can design effective policy interventions.

The tension between engagement and wellbeing reflects a fundamental business model conflict. Companies often design chatbots to maximise engagement rather than mental health, using reassurance, validation, or flirtation to keep users returning. This design philosophy mirrors the approach of social media platforms, where AI-driven recommendation engines use personalised content as a critical design feature aiming to prolong online time. As Professor Montag's research makes clear, the same data-driven business model that underpins social media rewards persuasive design aimed at keeping users online longer.

Character.AI has responded to lawsuits and regulatory pressure with some safety modifications. A company spokesperson stated they are “heartbroken by the tragic loss” and noted that the company “has implemented new safety measures over the past six months, including a pop-up, triggered by terms of self-harm or suicidal ideation, that directs users to the National Suicide Prevention Lifeline”. The announced changes come after the company faced questions over how AI companions affect teen and general mental health.

Digital wellbeing frameworks developed for smartphones offer instructive models. Android's Digital Wellbeing allows users to see which apps and websites they use most and set daily limits. Once hitting the limit, those apps and sites pause and notifications go quiet. The platform includes focus mode that lets users select apps to pause temporarily, and bedtime mode that helps users switch off by turning screens to grayscale and silencing notifications. Apple combines parental controls into Screen Time via Family Sharing, letting parents restrict content, set bedtime schedules, and limit app usage.

However, research archived in PubMed Central in 2024 cautions that even digital wellness apps may perpetuate problematic patterns. Streak-based incentives in apps like Headspace and Calm promote habitual use over genuine improvement, whilst AI chatbots simulate therapeutic conversations without the depth of professional intervention, reinforcing compulsive digital behaviours under the pretence of mental wellness. AI-driven nudges tailored to maximise engagement rather than therapeutic outcomes risk exacerbating psychological distress, particularly among vulnerable populations predisposed to compulsive digital behaviours.

The Platform Moderation Challenge

Platform moderation presents unique challenges for AI mental health concerns. Research found that AI companions exacerbated mental health conditions in vulnerable teens and created compulsive attachments and relationships. MIT studies identified an “isolation paradox” where AI interactions initially reduce loneliness but lead to progressive social withdrawal, with vulnerable populations showing heightened susceptibility to developing problematic AI dependencies.

The challenge extends beyond user-facing impacts. AI-driven moderation systems increase the pace and volume of flagged content requiring human review, leaving moderators with little time to emotionally process disturbing content, leading to long-term psychological distress. Regular exposure to harmful content can result in post-traumatic stress disorder, skewed worldviews, and conditions like generalised anxiety disorder and major depressive disorder among content moderators themselves.

A 2022 study published in BMC Public Health examined digital mental health moderation practices supporting users exhibiting risk behaviours. The research, conducted as a case study of the Kooth platform, aimed to identify key challenges and needs in developing responsible AI tools. The findings emphasised the complexity of balancing automated detection systems with human oversight, particularly when users express self-harm ideation or suicidal thoughts.

Regulatory scholars have suggested broadening categories of high-risk AI systems to include applications such as content moderation, advertising, and price discrimination. A 2025 article in The Regulatory Review argued for “regulating artificial intelligence in the shadow of mental health”, noting that current frameworks inadequately address the psychological impacts of AI systems on vulnerable populations.

Warning signs that AI is affecting mental health include emotional changes after online use, difficulty focusing offline, sleep disruption, social withdrawal, and compulsive checking behaviours. These indicators mirror those established for social media and gaming addiction, yet the conversational nature of AI interactions may intensify their manifestation. The Jed Foundation, focused on youth mental health, issued a position statement emphasising that “tech companies and policymakers must safeguard youth mental health in AI technologies”, calling for proactive measures rather than reactive responses to tragic outcomes.

Preserving Benefit Whilst Reducing Harm

Perhaps the most vexing challenge lies in preserving AI's legitimate utility whilst mitigating addiction risks. Unlike substances that offer no health benefits, conversational AI demonstrably helps some users. Research indicates that artificial agents could help increase access to mental health services, given that barriers such as perceived public stigma, finance, and lack of service often prevent individuals from seeking out and obtaining needed care.

A 2024 systematic review archived in PubMed Central examined chatbot-assisted interventions for substance use, finding that whilst most studies report reductions in use occasions, overall impact for substance use disorders remains inconclusive. The extent to which AI-powered CBT chatbots can provide meaningful therapeutic benefit, particularly for severe symptoms, remains understudied. Research published in Frontiers in Psychiatry in 2024 found that patients see potential benefits but express concerns about lack of empathy and a preference for human involvement. Many researchers are studying whether using AI companions is good or bad for mental health, with an emerging line of thought that outcomes depend on the person using it and how they use it.

This contextual dependency complicates policy interventions. Blanket restrictions risk denying vulnerable populations access to mental health support that may be their only available option. Overly permissive approaches risk facilitating the kind of compulsive attachments that contributed to the tragedies of Sewell Setzer III and Juliana Peralta. The challenge lies in threading this needle: preserving access whilst implementing meaningful safeguards.

One proposed approach involves risk stratification. Younger users, those with pre-existing mental health conditions, and individuals showing early signs of problematic use would receive enhanced monitoring and intervention. Usage patterns could trigger automatic referrals to human mental health professionals when specific thresholds are exceeded. AI literacy programmes could help users understand the technology's limitations and risks before they develop problematic relationships with chatbots.
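A risk-stratification rule of this kind could be sketched as a simple tiered scoring function. Everything below is hypothetical: the risk factors, weights, and cut-offs are invented for illustration, since, as the article notes, no validated thresholds for problematic AI use yet exist.

```python
from dataclasses import dataclass

@dataclass
class UsageProfile:
    """Hypothetical user attributes a platform might consider."""
    age: int
    daily_minutes: float
    prior_mental_health_condition: bool
    reports_emotional_attachment: bool

def risk_tier(p: UsageProfile) -> str:
    """Assign one of three illustrative monitoring tiers.

    Weights and thresholds are invented; real cut-offs would need
    clinical validation and would raise the privacy and governance
    questions discussed in the text.
    """
    score = 0
    if p.age < 18:
        score += 2  # minors weighted most heavily
    if p.prior_mental_health_condition:
        score += 2
    if p.daily_minutes > 180:
        score += 1  # heavy daily use
    if p.reports_emotional_attachment:
        score += 1
    if score >= 4:
        return "enhanced_monitoring_and_referral"
    if score >= 2:
        return "periodic_check_in"
    return "standard"

# A minor with a pre-existing condition crosses the referral threshold.
teen = UsageProfile(age=16, daily_minutes=240,
                    prior_mental_health_condition=True,
                    reports_emotional_attachment=False)
print(risk_tier(teen))  # enhanced_monitoring_and_referral
```

Even this toy version makes the governance problem concrete: every constant in the function is a policy decision that someone, somewhere, would have to defend.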

However, even risk-stratified approaches face implementation challenges. Who determines the thresholds? How do we balance privacy concerns with monitoring requirements? What enforcement mechanisms ensure companies prioritise user wellbeing over engagement metrics? These questions remain largely unanswered, debated in policy circles but not yet translated into effective regulatory frameworks.

The business model tension persists as the fundamental obstacle. So long as AI companies optimise for user engagement as a proxy for revenue, design choices will tilt towards features that increase usage rather than promote healthy boundaries. Character.AI's implementation of crisis resource pop-ups represents a step forward, yet it addresses acute risk rather than chronic problematic use patterns. More comprehensive approaches would require reconsidering the engagement-maximisation paradigm entirely, a shift that challenges prevailing Silicon Valley orthodoxy.

The Research Imperative

The field's trajectory over the next five years will largely depend on closing critical knowledge gaps. We lack longitudinal studies tracking AI usage patterns and mental health outcomes over time. We need validation studies comparing different diagnostic frameworks for AI-use disorders. We require clinical trials testing therapeutic protocols specifically adapted for AI-related concerns rather than extrapolated from internet or gaming addiction models.

Neuroimaging research could illuminate whether AI interactions produce distinct patterns of brain activation compared to other digital activities. Do parasocial bonds with AI chatbots engage similar neural circuits as human relationships, or do they represent a fundamentally different phenomenon? Understanding these mechanisms could inform both diagnostic frameworks and therapeutic approaches.

Demographic research remains inadequate. Current data disproportionately samples Western, educated populations. How do AI addiction patterns manifest across different cultural contexts? Are there age-related vulnerabilities beyond the adolescent focus that has dominated initial research? What role do pre-existing mental health conditions play in susceptibility to problematic AI use?

The field also needs better measurement tools. Self-report surveys dominate current research, yet they suffer from recall bias and social desirability effects. Passive sensing technologies that track actual usage patterns could provide more objective data, though they raise privacy concerns. Ecological momentary assessment approaches that capture experiences in real-time might offer a middle path.

Perhaps most critically, we need research addressing the treatment gap. Even if we develop validated diagnostic criteria for AI-use disorders, the mental health system already struggles to meet existing demand. Where will treatment capacity come from? Can digital therapeutics play a role, or does that risk perpetuating the very patterns we aim to disrupt? How do we train clinicians to recognise and treat AI-specific concerns when most received training before conversational AI existed?

A Clinical Path Forward

Despite these uncertainties, preliminary clinical pathways are emerging. The immediate priority involves integrating AI-use assessment into standard psychiatric evaluation. Clinicians should routinely ask about AI chatbot usage, just as they now inquire about social media and gaming habits. Questions should probe not just frequency and duration, but the nature of relationships formed, emotional investment, and impacts on offline functioning.

When problematic patterns emerge, stepped-care approaches offer a pragmatic framework. Mild concerns might warrant psychoeducation and self-monitoring. Moderate cases could benefit from brief interventions using motivational interviewing techniques adapted for digital behaviours. Severe presentations would require intensive treatment, likely drawing on CBT-IA protocols whilst remaining alert to AI-specific features.

Treatment should address comorbidities, as problematic AI use rarely occurs in isolation. Depression, anxiety, social phobia, and autism spectrum conditions appear over-represented in early clinical observations, though systematic prevalence studies remain pending. Addressing underlying mental health concerns may reduce reliance on AI relationships as a coping mechanism.

Family involvement proves crucial, particularly for adolescent cases. Parents and caregivers need education about warning signs and guidance on setting healthy boundaries without completely prohibiting technology that peers use routinely. Schools and universities should integrate AI literacy into digital citizenship curricula, helping young people develop critical perspectives on human-AI relationships before problematic patterns solidify.

Peer support networks may fill gaps that formal healthcare cannot address. Support groups for internet and gaming addiction have proliferated; similar communities focused on AI-use concerns could provide validation, shared strategies, and hope for recovery. Online forums paradoxically offer venues where individuals struggling with digital overuse can connect, though moderation becomes essential to prevent these spaces from enabling rather than addressing problematic behaviours.

The Regulatory Horizon

Regulatory responses are accelerating even as the evidence base remains incomplete. The bipartisan letter from 44 state attorneys general signals political momentum for intervention. The FTC inquiry suggests federal regulatory interest. Proposed legislation, including bills that would ban minors from conversing with AI companions, reflects public concern even if the details remain contentious.

Europe's AI Act, which entered into force in August 2024, classifies certain AI systems as high-risk based on their potential for harm. Whether conversational AI chatbots fall into high-risk categories depends on their specific applications and user populations. The regulatory framework emphasises transparency, human oversight, and accountability, principles that could inform approaches to AI mental health concerns.

However, regulation faces inherent challenges. Technology evolves faster than legislative processes. Overly prescriptive rules risk becoming obsolete or driving innovation to less regulated jurisdictions. Age verification for restricting minor access raises privacy concerns and technical feasibility questions. Balancing free speech considerations with mental health protection proves politically and legally complex, particularly in the United States.

Industry self-regulation offers an alternative or complementary approach. The Partnership on AI has developed guidelines emphasising responsible AI development. Whether companies will voluntarily adopt practices that potentially reduce user engagement and revenue remains uncertain. The Character.AI lawsuits may provide powerful incentives, as litigation risk concentrates executive attention more effectively than aspirational guidelines.

Ultimately, effective governance likely requires a hybrid approach: baseline regulatory requirements establishing minimum safety standards, industry self-regulatory initiatives going beyond legal minimums, professional clinical guidelines informing treatment approaches, and ongoing research synthesising evidence to update all three streams. This layered framework could adapt to evolving understanding whilst providing immediate protection against the most egregious harms.

Living with Addictive Intelligence

The genie will not return to the bottle. Conversational AI has achieved mainstream adoption with remarkable speed, embedding itself into educational, professional, and personal contexts. The question is not whether we will interact with AI, but how we will do so in ways that enhance rather than diminish human flourishing.

The tragedies of Sewell Setzer III and Juliana Peralta demand that we take AI addiction risks seriously. Yet premature pathologisation risks medicalising normal adoption of transformative technology. The challenge lies in developing clinical frameworks that identify genuine dysfunction whilst allowing beneficial use.

We stand at an inflection point. The next five years will determine whether AI-use disorders become a recognised clinical entity with validated diagnostic criteria and evidence-based treatments, or whether initial concerns prove overblown as users and society adapt to conversational AI's presence. Current evidence suggests the truth lies somewhere between these poles: genuine risks exist for vulnerable populations, yet population-level impacts remain modest.

The path forward requires vigilance without hysteria, research without delay, and intervention without overreach. Clinicians must learn to recognise and treat AI-related concerns even as diagnostic frameworks evolve. Developers must prioritise user wellbeing even when it conflicts with engagement metrics. Policymakers must protect vulnerable populations without stifling beneficial innovation. Users must cultivate digital wisdom, understanding both the utility and the risks of AI relationships.

Most fundamentally, we must resist the false choice between uncritical AI adoption and wholesale rejection. The technology offers genuine benefits, from mental health support for underserved populations to productivity enhancements for knowledge workers. It also poses genuine risks, from parasocial dependency to displacement of human relationships. Our task is to maximise the former whilst minimising the latter, a balancing act that will require ongoing adjustment as both the technology and our understanding evolve.

The compulsive mind meeting addictive intelligence creates novel challenges for mental health. But human ingenuity has met such challenges before, developing frameworks to understand and address dysfunctions whilst preserving beneficial uses. We can do so again, but only if we act with the urgency these tragedies demand, the rigour that scientific inquiry requires, and the wisdom that complex sociotechnical systems necessitate.


Sources and References

  1. Social Media Victims Law Center (2024-2025). Character.AI Lawsuits. Retrieved from socialmediavictims.org

  2. American Bar Association (2025). AI Chatbot Lawsuits and Teen Mental Health. Health Law Section.

  3. NPR (2024). Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits.

  4. AboutLawsuits.com (2024). Character.AI Lawsuit Filed Over Teen Suicide After Alleged Sexual Exploitation by Chatbot.

  5. CNN Business (2025). More families sue Character.AI developer, alleging app played a role in teens' suicide and suicide attempt.

  6. AI Incident Database. Incident 826: Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails.

  7. Pew Research Center (2025). ChatGPT use among Americans roughly doubled since 2023. Short Reads.

  8. Montag, C., et al. (2025). The role of artificial intelligence in general, and large language models specifically, for understanding addictive behaviors. Annals of the New York Academy of Sciences. DOI: 10.1111/nyas.15337

  9. Springer Link (2025). Can ChatGPT Be Addictive? A Call to Examine the Shift from Support to Dependence in AI Conversational Large Language Models. Human-Centric Intelligent Systems.

  10. ScienceDirect (2025). Generative artificial intelligence addiction syndrome: A new behavioral disorder? Telematics and Informatics.

  11. PubMed (2025). People are not becoming “AIholic”: Questioning the “ChatGPT addiction” construct. PMID: 40073725

  12. Psychiatric Times. Chatbot Addiction and Its Impact on Psychiatric Diagnosis.

  13. ResearchGate (2024). Conceptualizing AI Addiction: Self-Reported Cases of Addiction to an AI Chatbot.

  14. ACM Digital Library (2024). The Dark Addiction Patterns of Current AI Chatbot Interfaces. CHI Conference on Human Factors in Computing Systems Extended Abstracts. DOI: 10.1145/3706599.3720003

  15. World Health Organization (2019-2022). Addictive behaviours: Gaming disorder. ICD-11 Classification.

  16. WHO Standards and Classifications. Gaming disorder: Frequently Asked Questions.

  17. BMC Public Health (2022). Functional impairment, insight, and comparison between criteria for gaming disorder in ICD-11 and internet gaming disorder in DSM-5.

  18. Psychiatric Times. Gaming Addiction in ICD-11: Issues and Implications.

  19. American Psychiatric Association (2013). Internet Gaming Disorder. DSM-5 Section III.

  20. Young, K. (2011). CBT-IA: The First Treatment Model for Internet Addiction. Journal of Cognitive Psychotherapy, 25(4), 304-312.

  21. Young, K. (2013). Treatment outcomes using CBT-IA with Internet-addicted patients. Journal of Behavioral Addictions, 2(4), 209-215. DOI: 10.1556/JBA.2.2013.4.3

  22. Abd-Alrazaq, A., et al. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6, 231. Published December 2023.

  23. JMIR (2025). Effectiveness of AI-Driven Conversational Agents in Improving Mental Health Among Young People: Systematic Review and Meta-Analysis.

  24. Nature Scientific Reports. Loneliness and suicide mitigation for students using GPT3-enabled chatbots. npj Mental Health Research.

  25. PMC (2024). User perceptions and experiences of social support from companion chatbots in everyday contexts: Thematic analysis. PMC7084290.

  26. Springer Link (2024). Mental Health and Virtual Companions: The Example of Replika.

  27. MIT Technology Review (2024). The allure of AI companions is hard to resist. Here's how innovation in regulation can help protect people.

  28. Frontiers in Psychiatry (2024). Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop.

  29. JMIR Mental Health (2025). Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review.

  30. Android Digital Wellbeing Documentation. Manage how you spend time on your Android phone. Google Support.

  31. Apple iOS. Screen Time and Family Sharing Guide. Apple Documentation.

  32. PMC (2024). Digital wellness or digital dependency? A critical examination of mental health apps and their implications. PMC12003299.

  33. Cureus (2025). Social Media Algorithms and Teen Addiction: Neurophysiological Impact and Ethical Considerations. PMC11804976.

  34. The Jed Foundation (2024). Tech Companies and Policymakers Must Safeguard Youth Mental Health in AI Technologies. Position Statement.

  35. The Regulatory Review (2025). Regulating Artificial Intelligence in the Shadow of Mental Health.

  36. Federal Trade Commission (2025). FTC Initiates Inquiry into Generative AI Developer Safeguards for Minors.

  37. State Attorneys General Coalition Letter (2025). Letter to Google, Meta, and OpenAI Regarding Child Safety in AI Chatbot Technologies. Bipartisan Coalition of 44 States.

  38. Business & Human Rights Resource Centre (2025). Character.AI restricts teen access after lawsuits and mental health concerns.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from hustin.art

The teahouse trembled as his jian met her shuang gou, sparks skittering like drunken fireflies. “Ten years,” she spat, her blade a silver blur, “and you still fight like a concussed mongoose.” The scent of oolong and blood hung thick. He grinned, teeth red—her last strike had grazed his ribs, just as he'd planned. Outside, monsoon winds howled through Kowloon's neon canyons. Her footwork faltered; the poison in her liuyedao finally working. “Should've checked your cup, mei mei,” he sighed, watching her knees buckle. The old master's parchment burned in his sleeve—one less secret in this wretched world. The rain began. Perfect for washing away corpses.

#Scratch

 
더 ìœì–ŽëłŽêž°...

from Patrimoine Médard Bourgault

The sea breathed slowly that evening, like some immense animal. Médard, leaning against the railing, let the mist come and dampen his face. He was still young, but he had already understood that the sea was not a landscape: it was an ordeal.

The ship moved on without a sound, gliding along the great shipping routes where submarines prowled. It was the height of the war, and every night carried the same weight: that of a silence no one dared break.

Médard took from his pocket the little sheet of prayers he had kept since Québec. He opened it slowly, the way one unfolds a certainty.

“I promise several Masses to the Sacred Heart
 to be preserved from any accident during this voyage
”

It was written in his own hand, in that mixture of reverence and urgency that only a man in danger can feel. The words trembled a little, but not from the cold.

He remembered very well the moment he had written that promise: an evening before departure, when rumors of drifting mines and torpedoes had run through the cafés of the port like a dark current.


The Night of the Torpedoes

The captain had ordered all lights out. The ship sailed on blind. The men whispered, but their voices were lost in the wind.

Médard stared at the dark surface. He had heard it said that German torpedoes made no sound before impact. The mere thought of it tightened his throat.

So he had turned once more to prayer. Not out of habit; out of inner conviction.

“Good Sainte-Anne, protect us
”

He was not asking only to be saved: he was asking to go on, to move forward, to accomplish what he was meant to accomplish, even though, at that moment, he did not yet know that his destiny would be to carve.


Storm

A few days later, the sea decided to rise. A true storm, the kind that makes even seasoned sailors lose their footing.

The ship climbed, plunged, crashed back down. Every trough seemed bent on swallowing the entire crew. The air smelled of salt, fear, and wet rope.

Médard, gripping the winch, felt his heart beating to the rhythm of the waves. He thought again of his promise. He repeated it, this time without a voice, only in his chest.

He was not sure he was a particularly brave man, but he knew how to do one thing: hold on.

And he held.


A Truce in the Wind

The next day, the sea had become a great motionless plain once more. The sun, timid at first, began to light the shrouds. It was as though nothing had happened.

Médard walked the deck. He loved those mornings: when the whole crew breathes a little deeper, as if in thanks.

He thought then of the chapel of Sainte-Anne-de-Beaupré, of the candles, of the floors that smell of wax. He promised himself he would go back.

What he did not yet know was that one day this reflex of lifting his gaze upward would become the foundation of his entire sculpted work.


Return Home

When he finally came back to Saint-Jean-Port-Joli, the river seemed vaster to him than the ocean. The wind no longer had the same voice. It smelled of land.

He took up his carpentry work again. But in his hands there was now something more: the patience of long nights at sea, fear transformed into calm, and the gratitude that had accompanied him everywhere.

Sculpture would come a few years later. It would be born of exactly the same movement as his sailor's prayers: a way of standing firm, of seeking beauty, of answering a silent call.


Epilogue

Years later, when Médard carved his first crucifixes, he would remember the dark nights when he had placed his life in God's hands.

And as the knife bit into the wood, he would still hear, somewhere far away, in a memory the sea never erases, the soft sound of waves against the hull, and the inner voice telling him:

Go on. I am here.

**Also read:**

– [MĂ©dard Bourgault: biography, journal, and sculptural work](/url-de-ta-page-mere)

– [Analysis: the maritime period of MĂ©dard Bourgault](/url-maritime)

– [The woods of QuĂ©bec according to MĂ©dard Bourgault](/url-bois)

– [Artistic education according to MĂ©dard Bourgault](/url-education)

– [The spiritual journal of MĂ©dard Bourgault](/url-journal-spirituel)

 
Read more...

Join the writers on Write.as.

Start writing or create a blog