Want to join in? Respond to our weekly writing prompts, open to everyone.
from instantliveusblog
Trusted Online Sports Betting Platform | match44
Choosing the right online sports betting platform can significantly impact your overall experience. Players today look for secure transactions, competitive odds, fast withdrawals, and a smooth interface that works flawlessly across devices. match44 is designed to meet these expectations by offering a reliable, feature-rich, and user-friendly sports betting platform for enthusiasts who value performance and transparency.
Online betting is more than just placing wagers—it’s about strategy, timing, and staying informed. match44 provides real-time match updates, dynamic odds, and a secure system that allows users to focus on the excitement of the game without worrying about technical issues or payment delays. Whether you enjoy cricket, football, or other major sporting events, match44 delivers a professional and engaging betting environment.
Why Players Should Choose match44
At match44, we understand that trust and security are essential when selecting a betting platform. That’s why we use advanced technology and encrypted systems to safeguard user data and financial transactions. From registration to withdrawal, every step is streamlined to ensure efficiency and convenience.
Our platform offers access to major international tournaments, domestic leagues, and high-demand sporting events. With regularly updated odds and diverse betting markets, match44 gives players the tools they need to make informed decisions. We focus on creating a seamless experience that combines speed, accuracy, and reliability.
Our Expertise:
Extensive Sports Betting Markets: match44 provides a wide range of betting options, including match outcomes, total scores, player performances, session betting, and live in-play markets. This variety allows users to explore multiple strategies and tailor their betting approach according to their preferences.
Real-Time Live Betting Experience: Our live betting feature keeps you connected to the action as it unfolds. With instant score updates and continuously adjusted odds, players can react quickly to game developments and enhance their strategic advantage.
Secure and Fast Transactions: We offer multiple deposit and withdrawal options designed for convenience and speed. match44 ensures quick processing times while maintaining strict security measures to protect all financial information.
Mobile-Optimized and User-Friendly Platform: The match44 platform is fully responsive across desktops, smartphones, and tablets. Its intuitive layout and fast loading speeds make navigation simple for both new and experienced users.
Transparent Policies and Fair Practices: We believe in clear communication and honest operations. All terms, conditions, and promotional details are outlined clearly to avoid confusion. match44 promotes fair play and responsible betting practices.
24/7 Dedicated Customer Support: Our professional support team is available around the clock to assist with account setup, technical concerns, or payment inquiries. Prompt assistance ensures a smooth and stress-free experience for every user.
Why You Should Choose Us for Online Sports Betting
match44 stands out for its commitment to security, innovation, and customer satisfaction. Our platform is built for speed and stability, ensuring every bet is placed quickly and accurately.
We combine competitive odds with engaging promotions to provide added value to our users. Our focus on transparency and reliable service makes match44 a trusted choice for sports betting enthusiasts.
Get a Free Estimate with match44
If you’re looking for a secure and professional online sports betting platform, match44 is ready to assist you. Our team offers a free, no-obligation estimate to help you understand account setup, platform features, and exclusive promotional opportunities tailored to your preferences.
Contact match44 today to request your free estimate and discover how our platform can elevate your sports betting experience. Take the next step toward secure, fast, and exciting online betting with a platform built on trust and performance.
match44 is dedicated to delivering a premium sports betting experience backed by advanced technology, secure payment systems, competitive odds, and reliable customer support. With diverse betting markets and real-time updates, we provide everything you need in one trusted platform. Join match44 today and experience online sports betting with confidence and convenience.
from 下川友
"The other day, I saw you in the park, just standing there."
"You should've said something."
"No way. You really were just standing there. If you'd been looking at your phone, or drinking tea, or sitting on a bench, you'd at least have looked human."
"I'm broke right now. Everything I own has been cut off. My phone's been shut off. Obviously I can't buy a drink either."
"Then you could've at least sat there spacing out."
"Herniated disc."
"...Sorry."
"Nah, don't worry about it."
"But does just standing there really take your mind off things?"
"It does. Like, someone's laughter unraveling in the distance, or little footsteps crunching fallen leaves and fading away."
"Hm?"
"The evening light is like a letter delivered from somewhere far away. It lights up the world with a slight delay."
"Hey."
"Hm?"
"You've got so much time on your hands you're turning seriously poetic."
"What do I do?"
"At this rate you'll become a full-blown poet..."
"Don't say it like it's some kind of menace."
"You can't stay in a park at dusk like this."
"Where are we going?"
"Gyomu Super. No poetry ever comes out of that place."
"Buy me a drink."
"Sure."
That night, the two of us drank in my room. You came back from being a poet, just a little. But I had a feeling that the next time we met, you'd have sunk into some other color again. Gray, or something even quieter.
from Vida Pensada
Ever since I was little I've been a fan of play; I loved playing. I would spend hours with my toys, inventing lore and a script, with stories and character arcs, with endings, dramas, and betrayals. My creativity and imagination were in full flow. And all those stories were known to me alone, to no one else; they were for me and only me, and the enjoyment was constant.
I also drew, creating many characters: heroes, antiheroes, villains, and henchmen. I played a lot of sports, above all football, lots of football, plus basketball and volleyball, and spent far too many hours on video games.
Back then, life as a child and teenager was much more filled with play and, at least in my case, a lot more fun.
I didn't feel the weight and rigidity I feel now, those heavy expectations before others, before society, the classic "I should do this because I'm this age already," "a house, a car, getting married, etc." And although I had a teenager's responsibilities, like going to school or doing homework, they weren't what ruled or dominated my thoughts.
"The things that children learn through their own initiative, in free play, cannot be taught in other ways." Peter Gray.
When you become an adult, that enjoyment, that way of seeing life, slowly begins to fade. Suddenly your life revolves almost exclusively around work and the obligations of adult life. And even if you're among the privileged few who have a job they like and enjoy, there will be moments when it turns heavy: paperwork and bureaucratic chores you surely won't enjoy in full.
If you had a childhood full of play, you'll look back with nostalgia on those nights when you didn't want to go to sleep because you wanted to keep playing.
Now you're probably asking: "Sure, but we've grown up; we're supposed to be responsible adults. We can't live in fantasies."
I partly agree, but we don't have to give up play; we can bring it into our lives, and in fact it would do us a lot of good. It might even save a life.
For the historian Johan Huizinga, play is the fundamental category of human behavior: without play, civilization would not exist. And it is not mere children's entertainment. It is an activity that structures meaning, with specific characteristics:
Fun: Play is enjoyed. It doesn't bore; it entertains.
Free: Play is not imposed. If it's mandatory, it stops being play.
Set apart from ordinary life: It happens in an "as if": a space and time of its own (the field, the board, the stage). A stage shared for a while.
Charged with meaning: Even if it isn't "useful" in practical terms, it is deeply meaningful.
Community-building: Whoever plays enters a symbolic pact with others. It tends to be a shared enjoyment more than a solitary pleasure.
Spontaneous: Even though there are rules, some explicit, some implicit, and the game is played in earnest, even though its course is managed and aimed at a goal, it is usually not as rigid as work in an office or a factory.
Until then, the human being had been defined mainly as:
Homo sapiens (the one who thinks) and Homo faber (the one who makes).
Huizinga proposes a third, deeper root: the human being is, above all, a being that plays (Homo ludens).
On an individual, psychological level, in play we explore identities, test limits without fatal consequences, and rehearse roles (leading, losing, cooperating).
It is also a safe form of self-discovery:
Creativity is restored. It helps us enter a state of flow. The ego softens.
"What makes the human species exceptional is that we are designed to play throughout our whole lives." Stuart Brown.
I have never met a single person who doesn't enjoy, or hasn't at some point enjoyed, some kind of game, be it sports, individual or team, or children's games.
Who among us didn't enjoy at least one of these: hide-and-seek, freeze tag, plain old tag, or, failing that, board games like dominoes, poker, Monopoly, bingo, Ludo, Risk, Stop, etc.?
This is a foundational explanation of how human beings have developed (and continue to develop) through play.
Albert Camus said:
"Everything I know about morality and the obligations of men, I owe to football."
I feel this phrase in my soul. I've been a football fan and have played for a long time at a recreational and amateur level. I could write an entire article on how it has shaped my values and the way I see life: how to work in a team, how to lead, how to help, how to keep the ego in check, how to make mistakes and learn to lose, about camaraderie, about solidarity, and so on.
I don't think this happens only in football; it probably happens in any team sport. Camus said you can find parallels with all the vicissitudes of life.
For some years now I've noticed that when I play at something, or when I approach it as a "game," I have a different energy for facing its challenges.
On a psychological level, Marshall Rosenberg, creator of NVC (Nonviolent Communication) and one of my personal heroes, changed my life with something very simple (in truth, with his entire body of work). He said we should see life, and every action we take, as a game, as a way of contributing to life itself.
That we should rid ourselves of motivations driven by fear, guilt, shame, duty, or obligation.
In his book he tells a couple of personal stories. One was that he hated writing clinical reports, and he did it for a long time on the assumption that, as a psychiatrist, he had to; it was his job.
But when he examined his reasons carefully, he realized he was doing it for money, and he immediately understood that he could earn money another way, or that perhaps he didn't have to write all those reports at all. He realized he could choose.
"Perhaps one of the most dangerous of all behaviors is doing things because we are supposed to do them."
He also mentioned that driving his children to school felt terribly tedious. But this time, when he examined his reason for taking them, he became aware of the benefits his kids got from attending that school. It was far away, but it offered educational values Marshall cared about. Suddenly the energy he did it from changed completely, and the complaint vanished.
"I have to do this" becomes "I choose to do this because I value..."
Let's use a couple of examples:
"I have to diet and exercise."
"I choose to diet and exercise because I value my health and my energy levels."
"I have to see my friend because he feels lonely after his separation, even though I'm tired and don't feel like it."
"I choose to go see my friend and keep him company because I value our friendship and his well-being."
When we do something because we want to, because we choose to, even something difficult, body and mind experience it differently.
Let's ask ourselves: how often do we act out of obligation, out of a sense of duty, for money, for approval, or to avoid punishment or guilt?
Probably more often than we're willing to admit.
"You can discover more about a person in an hour of play than in a year of conversation." Plato.
In our education and our culture, indeed in our very language, the words "should" and "ought to" are so deeply embedded that we have forgotten we can choose; we have forgotten our capacity for agency.
That's what happened to me: I noticed that the energy I brought to playing, to enjoying the game, to doing everything possible to win while following the rules and without being abusive, taught me all sorts of things and made me feel better, more alive.
That, I feel, is what Rosenberg was pointing at.
Cultivating awareness of the energy behind our actions.
After reading Rosenberg, I started looking for ways to gamify my experiences, focusing on goals, challenges, and rewards instead of seeing them as obligations. It means adopting an active mindset, creating "missions," learning from "fails" (mistakes), and enjoying life's "journey" as growth rather than only the destination: applying strategies, making decisions, and building a strong "character."
I remember being very rigid in my personality, very shy, very structured, until about eight years ago. As a kid, whenever I got good grades I received plenty of external validation from my teachers and my family. That reinforced my desire to learn, but it also limited my chances of exploring other parts of myself, above all spontaneity. I don't know exactly why. I think that, because people knew me as that shy version, I didn't give myself permission to explore the other side.
My identity has been strongly tied to my mind and to logical-deductive thinking. I still wrestle with it: overthinking, trying to solve everything. It has brought me many benefits, but in a way it took away the enjoyment.
Until I came into contact with art: playing guitar, writing, doing improv. I enjoyed it, and I still do, getting to know myself and discovering new facets compassionately, without straying from the principles I value most. I realized my identity doesn't have to be rigid.
"You are under no obligation to be the same person you were five minutes ago." Alan Watts.
In his book The Myth of Normal, Maté writes that recovering enjoyment can save your life.
From his clinical perspective (trauma, addiction, illness), Maté observes a clear pattern:
Most people affected by traumatic situations who later develop autoimmune diseases or severe addictions tend to be very responsible, very self-demanding, very disconnected from pleasure, and very hard on themselves.
The personality they developed was simply a response to harsh events; it was the only way they could survive. But that personality is not set in stone, and it deserves to change; it needs to change and adapt once you are no longer in survival mode.
What to seek:
Not taking ourselves so seriously
Recovering enjoyment
Allowing play and curiosity
Letting go of rigid identity
I won't go into detail about trauma, but what I came to see reading that chapter of Maté's book is this: holding your identity less rigidly and making room for play in your life, allowing yourself a place where your authentic self can exist without being permanently on trial, is one of the greatest acts of love and self-compassion we can offer ourselves.
For me it's clear: playing is a form of deep self-compassion. When a person acts from the desire to contribute rather than from guilt, when they can laugh at themselves and let go of a rigid identity, their body rests and their life recovers its meaning.
Besides, we only get one life. Why make it so serious? We can be responsible and still play, and contribute to this single, wonderful experience on this floating rock, alongside other beings.
References
Huizinga, Johan. Homo Ludens. 1938.
Rosenberg, Marshall B. Nonviolent Communication: A Language of Life. 1999.
Maté, Gabor, and Daniel Maté. The Myth of Normal. 2022.
I confess that the paranormal world is not my strong suit. Maybe it exists, but I'm a skeptic. So skeptical I don't even play the lottery.
One night I dreamed of a number I could still remember when I woke up.
I also confess I was tempted to buy the number, but on principle I didn't. I know it hasn't come up in any of the lotteries people play. So the prize is still pending.
Imagine my surprise when, registering on a website and getting the password wrong, the endless recovery number began with the exact digits of the number from my dream.
When I mentioned it to Rosa, my wife, she slammed the oven door and said:
-There you have it, and you so skeptical.
Since then, she sometimes plays the number in the lottery, and I, though discreetly, study that website, in case luck is hiding there somehow.
But nothing. I don't see it happening, and deep down maybe I don't even want it to. Because the last thing we need is for both of us to take up the search for that thing you keep looking for even though deep down you know it won't appear, and, like everyone else, to keep moving here and there, side to side and back again, trying to work out why, being so special, we still haven't found who knows what.
-Paco, have you seen anything?
from Not Not Looking
This is the email address for Not Not Looking, and this text should appear there as a post.
Plenty of people want to get rich, but over there it's not that easy. They let you get ahead a little, just enough for you to start believing it; then you let your guard down, and just when your illusions are at their most inflated, that's the instant you get busted.
I was once offered, in petit comité, a treasure map and a founding father's sword. That said, I made a lot of friends. Very influential people, with great contacts in military, oil, and mining circles. I had a lot of fun and I don't regret what I spent, because I fell in love with Florecita, who dances like a spinning top. The trouble began when she told me she was the transport minister's niece, and when my brother Luis, who is a poet, heard it, he made up a story that he had some airplanes. Islanders, to be precise. He swore to me it was true, and I went sliding down the slope of wanting to get rich and marry the beauty at my side. I deserved something like that; it was about time. He would supply the planes and I would supply the minister's niece. Fifty-fifty or nothing.
When Luis said yes, Florecita and I went to see her uncle. What a character. The first thing he did was look us up and down and, pointing at me with his cigar, say:
-Give me a hug, nephew. He had me hooked right there; what a gesture.
The meeting: very productive. Then he took us to meet his compadre, the owner of La Plata, where we dined on ceviche, octopus, golden prawns, lobster bisque, ox rib steak, papaya-and-avocado ice cream, oceans of malt whisky with plenty of ice, cigars and more cigars dipped in rum; and when the bill came there was no minister, no Florecita, no compadre. There was nothing to hold on to, and we ended up deported.
And here we are, at my mom's house; she has to buy us clothes because everything got left at the hotel.
-Luis, can you tell me where you were going to get the Islanders? -From Florecita's petals.
I would rather not have woken up, but I had to when the doorbell started blasting. Believe me, that's not a doorbell; it's like a saw that bores in until it cuts through several parts of your eardrum.
It went on and on.
I jumped out of bed, pulled on my pants, and when I looked through the peephole, why drag it out: it was a bear.
But not just any bear. He had a frightened face, as if he were running from something.
Without thinking, one of those things you do as a karmic reflex, I let him in; he went into the living room and I asked him to sit down.
He seemed disoriented. While I made him a strong coffee, I noticed that, contrary to what one imagines a bear to be, he really wasn't that tall, fairly average, my size, although the fur gives him more bulk. He wasn't thin either. He smelled neither good nor bad. His shoes were big, brand-name. He wore a vest with an Italian-cut silk lining and carried a leather fanny pack. Nothing else.
Between sips of coffee, he told me:
-Although trust sometimes makes enemies, I want to open my heart to you. I have just escaped from your neighbor's house, where he has kept me locked up since I was a cub. Not that I'm complaining; he has treated me like a son, but I am not his son, I am a bear. I have a bear's self-awareness. I studied on the internet what I am, and I educated myself reading every free ebook I could download. I stored them in a cloud and even wrote some book summaries on a blog under the pseudonym Sinforoso. I am not illiterate. I tell you this so you'll understand that I have a certain knowledge of things. I managed to invest in options and win, really win, trading cryptocurrencies. I want to escape, I need your help, but I don't want to lose the fortune that has cost me so much. I need a human bank account, with its password, to make the transfers. I promise to make you rich. My head swelled with greed, but I summoned strength from weakness and, as best I could, said: -Bear, you're trying to scam me.
from An Open Letter
Today E really fucking got me. I told her that I've been feeling neglected by her lately. This week, moving and working from home has been very isolating; it got so bad that a few days ago I broke down crying. She didn't follow up to ask how I was feeling, and she was also very distant because she was focused on her school. I made several bids for connection and she rejected them. When I brought them up today, I told her I needed some space because I was frustrated and hurt by all of this. She asked me to talk about it, and then when I did she stopped responding, giving "I'm sorry" as a response to several texts. After a few hours I told her I felt shitty, because it felt like she asked me to explain how I was feeling and then, when I did, just shut down the conversation. I really hoped she would come out of the conversation with a sense of curiosity, trying to understand what hurt me, but instead it felt like she just shut down. Then she left me on read for over an hour. I think about flipping the roles and how people on social media would freak out and say what a shitty boyfriend he is.
from folgepaula
as if time itself had paused mid-sentence and was asking us to finish it. our late night talk still lingering in the air, half smoke, half memory. it felt like we already knew each other as old friends, though we never shared a past, only a coincidence that learned our names. and by the time we reached my door, the entire district seemed to smile, windows were blinking, streets leaning in to listen. and the absurdity of it all, how many times I wandered around without your address, orbiting you by mistake. it would be unnatural not to fall in love with this moment, because every sound I hear now is translated into our crooked slang, language bending itself just to sound like us. It feels as though my instructions were already written in you, and when you hold me, the world slows its spinning. And I think I like it exactly as it is.
/aug 23
from alexjohn
Depression is a serious mental health condition that affects millions of people worldwide. Many individuals who struggle with severe or treatment-resistant depression often search for effective and fast-acting solutions. In recent years, ketamine therapy has gained attention as an innovative option for improving symptoms when traditional methods do not work. Understanding how many ketamine treatments are needed for effective Depression Treatment can help patients make informed decisions about their mental health care.
Ketamine was originally developed as an anesthetic but is now widely used in mental health settings for managing severe depression. Unlike traditional antidepressants that may take weeks to show results, ketamine works quickly by affecting brain receptors related to mood and emotional regulation. This makes it a promising approach for individuals who have not responded well to other forms of Depression Treatment.
Ketamine therapy is usually administered under medical supervision in a clinic or hospital setting. The goal is to reduce symptoms such as persistent sadness, lack of motivation, anxiety, and suicidal thoughts. When included as part of a comprehensive Depression Treatment plan, ketamine can help improve overall emotional stability and quality of life.
The number of ketamine sessions required varies from person to person. Most patients begin with an initial series of treatments known as the induction phase. This phase typically involves six treatments over two to three weeks. During this time, healthcare providers monitor how the patient responds to therapy and adjust the treatment plan accordingly.
After the induction phase, some patients move to a maintenance phase. Maintenance sessions may be scheduled weekly, bi-weekly, or monthly depending on how well symptoms are controlled. This personalized approach ensures that Depression Treatment remains effective over time and prevents symptoms from returning.
Ketamine therapy is not usually a one-time solution. Multiple sessions help build and sustain positive changes in the brain. Each treatment supports neural pathways that influence mood and emotional balance. Over time, this process can lead to more stable and lasting improvements in mental health.
Consistency is key in any Depression Treatment plan. Skipping sessions or stopping treatment too early may reduce the benefits. Patients are encouraged to follow the schedule recommended by their healthcare provider to achieve the best possible results.
Several factors determine how many ketamine treatments a patient may need. These include the severity of depression, how long the patient has experienced symptoms, and whether other therapies have been effective. Some individuals notice improvement after just a few sessions, while others require ongoing maintenance to maintain progress.
Lifestyle, stress levels, and overall physical health can also influence how well a patient responds. A well-rounded Depression Treatment plan often includes counseling, lifestyle changes, and medication management alongside ketamine therapy to support long-term recovery.
Ketamine therapy is generally considered safe when administered by trained professionals. Treatments are conducted in controlled environments where patients are closely monitored. Mild side effects such as dizziness or nausea may occur but usually resolve quickly after the session.
Healthcare providers assess each patient carefully before starting therapy to ensure ketamine is an appropriate option. When integrated into a structured Depression Treatment plan, it can provide significant relief for individuals with severe symptoms.
Professional guidance plays a major role in achieving successful outcomes. At St George Hospital, mental health specialists develop personalized plans that focus on patient safety, comfort, and long-term recovery. Their approach combines modern therapies with compassionate support to ensure patients receive comprehensive Depression Treatment tailored to their needs.
Having access to experienced healthcare professionals allows patients to track progress, adjust treatment frequency, and address any concerns throughout the process. This structured support system increases the likelihood of sustained improvement.
Each ketamine session typically lasts between 40 minutes and one hour. Patients remain under observation for a short time afterward to ensure they feel stable before leaving. Most people can return home the same day, although they are usually advised not to drive immediately after treatment.
Improvements in mood and emotional clarity may appear within hours or days after the first few sessions. Continued therapy helps reinforce these positive changes, making Depression Treatment more effective over time.
Ketamine therapy offers hope for individuals who have struggled with persistent depression. While the number of treatments varies, most patients benefit from an initial series followed by maintenance sessions. When combined with therapy and lifestyle adjustments, this approach can significantly improve mental well-being.
Choosing the right medical team and following a structured plan ensures that Depression Treatment remains safe and effective. With proper care and monitoring, many individuals experience meaningful relief and regain control over their lives.
Many patients notice improvements within hours or days after the first few sessions, making it one of the fastest-acting options for Depression Treatment.
It is commonly used for treatment-resistant or severe cases, but a doctor can determine whether it is suitable for an individual’s condition.
Effects vary by patient. Some experience relief for weeks, while others require maintenance sessions to sustain results.
Ketamine is usually part of a broader Depression Treatment plan that may include therapy and medication for the best outcomes.
Hospitals and specialized mental health centers such as St George Hospital offer supervised ketamine therapy as part of comprehensive mental health care.
from Chemin tournant
We no longer hear the crack of the word when it gives way and falls, nor the final moan of the hanged. Only the infernal rotor cutting things in two, cutting the body, its syllable. The dead are being killed again. And I too drew the great curtain over them, the cover of foliage, offering them up to oblivion. Now I see only their shadow, sliced by the redness of the roofs, clustered with dirty birds, devourers of entrails.
The word "tree" appears 8 times in the singular and 13 times in the plural in Ma vie au Village.
#VoyageauLexique
In this second Voyage au Lexique, I continue to explore, while taking care not to exploit them, the words of Ma vie au village (in Journal de la brousse endormie) whose number of occurrences is significant.
from angelllyies
Am i right that we all hate trump??
from angelllyies
Hello!!!
Today i didnt eat sweets, thats SO SO SO BADD :(((
Yeah i dont remember anything clearly so i had to look at what i wrote about that, it was quite boring. Its weird fronting tho ;–; I wanna go back but i cant, the host doesnt want to, he's verryyy tired
By the way my name’s Ivy!!! ^^ Anyway i hope you guys had a great day i dont intend to make a whole “book” like the other loser Everyone should love PINK 💗💕💐💖💞💞💖💕💝💝🌷
Byee!! :3
from SmarterArticles
In June 2024, Goldman Sachs published a research note that rattled Silicon Valley's most cherished assumptions. The report posed what it called the “$600 billion question”: would the staggering investment in artificial intelligence infrastructure ever generate proportional returns? The note featured analysis from MIT economist Daron Acemoglu, who had recently calculated that AI would produce no more than a 0.93 to 1.16 percent increase in US GDP over the next decade, a figure dramatically lower than the techno-utopian projections circulating through investor presentations and conference keynotes. “Much of what we hear from the industry now is exaggeration,” Acemoglu stated plainly. Two months later, he was awarded the 2024 Nobel Memorial Prize in Economic Sciences, alongside his MIT colleague Simon Johnson and University of Chicago economist James Robinson, for research on the relationship between political institutions and economic growth.
That gap between what AI is promised to deliver and what it actually does is no longer an abstract concern for economists and technologists. It is reshaping public attitudes toward technology at a speed that should alarm anyone who cares about the long-term relationship between innovation and democratic society. When governments deploy algorithmic systems to deny healthcare coverage or detect welfare fraud, when corporations invest billions in tools that fail 95 percent of the time, and when the public is told repeatedly that superintelligence is just around the corner while chatbots still fabricate legal citations, something fundamental breaks in the social contract around technological progress.
The question is not whether AI is useful. It plainly is, in specific, well-defined applications. The question is what happens when an entire civilisation makes strategic decisions based on capabilities that do not yet exist and may never materialise in the form being sold.
By late 2025, the AI industry had entered what Gartner's analysts formally classified as the “Trough of Disillusionment.” Generative AI, which had been perched at the Peak of Inflated Expectations just one year earlier, had slid into the territory where early adopters report performance issues, low return on investment, and a growing sense that the technology's capabilities had been systematically overstated. The positioning reflected the difficulties organisations faced when attempting to move generative AI from pilot projects to production systems. Integration with existing infrastructure presented technical obstacles, while concerns about data security caused some companies to limit deployment entirely.
The numbers told a damning story. According to MIT's “The GenAI Divide: State of AI in Business 2025” report, published in July 2025 and based on 52 executive interviews, surveys of 153 leaders, and analysis of 300 public AI deployments, 95 percent of generative AI pilot projects delivered no measurable profit-and-loss impact. American enterprises had spent an estimated $40 billion on artificial intelligence systems in 2024, yet the vast majority saw zero measurable bottom-line returns. Only five percent of integrated systems created significant value.
The study's authors, from MIT's NANDA initiative, identified what they termed the “GenAI Divide”: a widening split between high adoption and low transformation. Companies were enthusiastically purchasing and deploying AI tools, but almost none were achieving the business results that had been promised. “The 95% failure rate for enterprise AI solutions represents the clearest manifestation of the GenAI Divide,” the report stated. The core barrier, the authors concluded, was not infrastructure, regulation, or talent. It was that most generative AI systems “do not retain feedback, adapt to context, or improve over time,” making them fundamentally ill-suited for the enterprise environments into which they were being thrust.
This was not an outlier finding. A 2024 NTT DATA analysis concluded that between 70 and 85 percent of generative AI deployment efforts were failing to meet their desired return on investment. The Autodesk State of Design & Make 2025 report found that sentiment toward AI had dropped significantly year over year, with just 69 percent of business leaders saying AI would enhance their industry, representing a 12 percent decline from the previous year. Only 40 percent of leaders said they were approaching or had achieved their AI goals, a 16-point decrease that represented a 29 percent drop. S&P Global data revealed that 42 percent of companies scrapped most of their AI initiatives in 2025, up sharply from 17 percent the year before.
The infrastructure spending, meanwhile, continued accelerating even as returns failed to materialise. Meta, Microsoft, Amazon, and Google collectively committed over $250 billion to AI infrastructure during 2025. Amazon alone planned $125 billion in capital expenditure, up from $77 billion in 2024, a 62 percent increase. Goldman Sachs CEO David Solomon publicly acknowledged that he expected “a lot of capital that was deployed that doesn't deliver returns.” Amazon founder Jeff Bezos called the environment “kind of an industrial bubble.” Even OpenAI CEO Sam Altman conceded that “people will overinvest and lose money.”
The gap between AI's promises and its performance is not occurring in a vacuum. It is landing on a public already growing sceptical of the technology industry's claims, and it is accelerating a decline in trust that carries profound implications for democratic governance.
The 2025 Edelman Trust Barometer, based on 30-minute online interviews conducted between October and November 2024, revealed a stark picture. Globally, only 49 percent of respondents trusted artificial intelligence as a technology. In the United States, that figure dropped to just 32 percent. Three times as many Americans rejected the growing use of AI (49 percent) as embraced it (17 percent). In the United Kingdom, trust stood at just 36 percent. In Germany, 39 percent. The Chinese public, by contrast, reported 72 percent trust in AI, a 40-point gap that reflects not just different regulatory environments but fundamentally different cultural relationships with technology and state authority.
These figures represent a significant deterioration. A decade ago, 73 percent of Americans trusted technology companies. By 2025, that number had fallen to 63 percent. Eight years ago, technology was the most trusted sector in 90 percent of the countries Edelman studies; by 2025 it held that position in only half. The barometer also found that 59 percent of global employees feared job displacement due to automation, and nearly one in two were sceptical of business use of artificial intelligence.
The Pew Research Center's findings painted an even more granular picture of public anxiety. In an April 2025 report examining how the US public and AI experts view artificial intelligence, Pew found that 50 percent of American adults said they were more concerned than excited about the increased use of AI in daily life, up from 37 percent in 2021. More than half (57 percent) rated the societal risks of AI as high, compared with only 25 percent who said the benefits were high. Over half of US adults (53 percent) believed AI did more harm than good in protecting personal privacy, and 53 percent said AI would worsen people's ability to think creatively.
Perhaps most revealing was the chasm between expert optimism and public unease. While 56 percent of AI experts believed AI would have a positive effect on the United States over the next 20 years, only 17 percent of the general public agreed. While 47 percent of experts said they were more excited than concerned, only 11 percent of ordinary citizens felt the same. And despite their divergent levels of optimism, both groups shared a common scepticism about institutional competence: roughly 60 percent of both experts and the public said they lacked confidence that US companies would develop AI responsibly.
The Stanford HAI AI Index 2025 Report reinforced these trends globally. Across 26 nations surveyed by Ipsos, confidence that AI companies protect personal data fell from 50 percent in 2023 to 47 percent in 2024. Fewer people believed AI systems were unbiased and free from discrimination compared to the previous year. While 18 of 26 nations saw an increase in the proportion of people who believed AI products offered more benefits than drawbacks, the optimism was concentrated in countries like China (83 percent), Indonesia (80 percent), and Thailand (77 percent), while the United States (39 percent), Canada (40 percent), and the Netherlands (36 percent) remained deeply sceptical.
The erosion of public trust in AI would be concerning enough if it were merely a matter of consumer sentiment. But the stakes become existential when governments and corporations use overestimated AI capabilities to make decisions that fundamentally alter people's lives, and when those decisions carry consequences that cannot be undone.
Consider healthcare. In November 2023, a class action lawsuit was filed against UnitedHealth Group and its subsidiary, alleging that the company illegally used an AI algorithm called nH Predict to deny rehabilitation care to seriously ill elderly patients enrolled in Medicare Advantage plans. The algorithm, developed by a company called Senior Metrics and later acquired by UnitedHealth's Optum subsidiary in 2020, was designed to predict how long patients would need post-acute care. According to the lawsuit, UnitedHealth deployed the algorithm knowing it had a 90 percent error rate on appeals, meaning that nine out of ten times a human reviewed the AI's denial, they overturned it. UnitedHealth also allegedly knew that only 0.2 percent of denied patients would file appeals, making the error rate commercially inconsequential for the insurer despite being medically devastating for patients.
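The commercial logic alleged in the complaint comes down to simple arithmetic. A back-of-envelope sketch (my own illustrative calculation, using the figures reported above, not code from the lawsuit) shows why a 90 percent error rate on appeals could remain inconsequential to the insurer when only 0.2 percent of patients appeal:

```python
# Illustrative arithmetic only: the denial count is hypothetical;
# the appeal and overturn rates are the figures alleged in the lawsuit.
denials = 100_000          # hypothetical number of algorithmic denials
appeal_rate = 0.002        # 0.2% of denied patients file an appeal
overturn_rate = 0.90       # 90% of appealed denials are overturned

appealed = denials * appeal_rate           # 200 appeals
overturned = appealed * overturn_rate      # 180 denials reversed
share_reversed = overturned / denials      # fraction of all denials ever corrected

print(f"{appealed:.0f} appeals, {overturned:.0f} overturned")
print(f"Only {share_reversed:.2%} of all denials are ever reversed")
```

On these numbers, fewer than two denials in a thousand are ever corrected, however wrong the algorithm is on review: the cost of error falls almost entirely on patients rather than on the insurer.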
The human cost was documented in court filings. Gene Lokken, a 91-year-old Wisconsin resident named in the lawsuit, fractured his leg and ankle in May 2022. After his doctor approved physical therapy, UnitedHealth paid for only 19 days before the algorithm determined he was safe to go home. His doctors appealed, noting his muscles were “paralysed and weak,” but the insurer denied further coverage. His family paid approximately $150,000 over the following year until he died in July 2023. In February 2025, a federal court allowed the case to proceed, denying UnitedHealth's attempt to dismiss the claims and waiving the exhaustion of administrative remedies requirement, noting that patients faced irreparable harm.
The STAT investigative series “Denied by AI,” which broke the UnitedHealth story, was a 2024 Pulitzer Prize finalist in investigative reporting. A US Senate report released in October 2024 found that UnitedHealthcare's prior authorisation denial rate for post-acute care had jumped to 22.7 percent in 2022 from 10.9 percent in 2020. The healthcare AI problem extends far beyond a single insurer. ECRI, a patient safety organisation, ranked insufficient governance of artificial intelligence as the number two patient safety threat in 2025, warning that medical errors generated by AI could compromise patient safety through misdiagnoses and inappropriate treatment decisions. Yet only about 16 percent of hospital executives surveyed said they had a systemwide governance policy for AI use and data access.
The pattern repeats across domains where algorithmic systems are deployed to process vulnerable populations. In the Netherlands, the childcare benefits scandal stands as perhaps the most devastating example of what happens when governments trust flawed algorithms with life-altering decisions. The Dutch Tax and Customs Administration deployed a machine learning model to detect welfare fraud that illegally used dual nationality as a risk characteristic. The system falsely accused over 20,000 parents of fraud, resulting in benefits termination and forced repayments. Families were driven into bankruptcy. Children were removed from their homes. Mental health crises proliferated. Seventy percent of those affected had a migration background, and fifty percent were single-person households, mostly mothers. In January 2021, the Dutch government was forced to resign after a parliamentary investigation concluded that the government had violated the foundational principles of the rule of law.
The related SyRI (System Risk Indication) system, which cross-referenced citizens' employment, benefits, and tax data to flag “unlikely citizen profiles,” was deployed exclusively in neighbourhoods with high numbers of low-income households and disproportionately many residents from immigrant backgrounds. In February 2020, the Hague court ordered SyRI's immediate halt, ruling it violated Article 8 of the European Convention on Human Rights. Amnesty International described the system's targeting criteria as “xenophobic machines.” Yet investigations by Lighthouse Reports later confirmed that similar algorithmic surveillance practices continued under slightly adapted systems, even after the ban, with the government having “silently continued to deploy a slightly adapted SyRI in some of the country's most vulnerable neighbourhoods.”
Understanding why AI hype is so dangerous requires understanding what these systems actually do, as opposed to what their makers claim they do.
Emily Bender, a linguistics professor at the University of Washington who was included in the inaugural TIME100 AI list of most influential people in artificial intelligence in 2023, co-authored a now-famous paper arguing that large language models are fundamentally “stochastic parrots.” They do not understand language in any meaningful sense. They draw on training data to predict which sequence of tokens is most likely to follow a given prompt. The result is an illusion of comprehension, a pattern-matching exercise that produces outputs resembling intelligent thought without any of the underlying cognition.
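The mechanism Bender describes can be seen in miniature. The toy model below (my own sketch, not code from the stochastic-parrots paper) simply counts which word tends to follow which in a tiny corpus and then greedily emits the most likely continuation. Real language models are vastly larger and operate on tokens rather than words, but the objective is the same in kind: predict what comes next from observed patterns, with no model of meaning.

```python
# Toy bigram "language model": fluent-looking continuations from
# nothing but co-occurrence counts. Illustrative only.
from collections import Counter, defaultdict

corpus = "the model predicts the next token the model repeats the pattern".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_from(word, length=4):
    """Greedily emit the most frequent next word, repeatedly."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_from("the"))
```

The output is grammatical English assembled purely from frequency statistics; nothing in the table "knows" what a model or a token is. Scaling the table up by many orders of magnitude changes the fluency, not the nature of the exercise.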
In 2025, Bender and sociologist Alex Hanna, director of research at the Distributed AI Research Institute and a former Google employee, published “The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.” The book argues that AI hype serves as a mask for Big Tech's drive for profit, with the breathless promotion of AI capabilities benefiting technology companies far more than users or society. “Who benefits from this technology, who is harmed, and what recourse do they have?” Bender and Hanna ask, framing these as the essential questions that the hype deliberately obscures. Library Journal called the book “a thorough, witty, and accessible argument against AI that meets the moment.”
The stochastic parrot problem has real-world consequences that compound the trust deficit. When AI systems fabricate information with perfect confidence, they undermine the epistemic foundations that societies rely on for decision-making. Legal scholar Damien Charlotin, who tracks AI hallucinations in court filings through his database, had documented at least 206 instances of lawyers submitting AI-generated fabricated case citations by mid-2025. Stanford University's RegLab found that even premium legal AI tools hallucinated at alarming rates: Westlaw's AI-Assisted Research produced hallucinated or incorrect information 33 percent of the time, providing accurate responses to only 42 percent of queries. LexisNexis's Lexis+ AI hallucinated 17 percent of the time. A 2025 study published in Nature Machine Intelligence found that large language models cannot reliably distinguish between belief and knowledge, or between opinions and facts, noting that “failure to make such distinctions can mislead diagnoses, distort judicial judgements and amplify misinformation.”
If the tools marketed as the most reliable in their field fabricate information roughly one-fifth to one-third of the time, what does this mean for the countless lower-stakes applications where AI outputs are accepted without verification?
The gap between marketing claims and actual capabilities has grown so pronounced that regulators have begun treating AI exaggeration as a form of securities fraud.
In March 2024, the US Securities and Exchange Commission brought its first “AI washing” enforcement actions, simultaneously charging two investment advisory firms, Delphia and Global Predictions, with making false and misleading statements about their use of AI. Delphia paid $225,000 and Global Predictions paid $175,000 in civil penalties. These firms had not been entirely without AI capabilities, but they had overstated what those systems could do, crossing the line from marketing enthusiasm into regulatory violation.
The enforcement actions escalated rapidly. In January 2025, the SEC charged Presto Automation, a formerly Nasdaq-listed company, in the first AI washing action against a public company. Presto had claimed its AI voice system eliminated the need for human drive-through order-taking at fast food restaurants, but the SEC alleged the vast majority of orders still required human intervention and that the AI speech recognition technology was owned and operated by a third party. In April 2025, the SEC and Department of Justice charged the founder of Nate Inc. with fraudulently raising over $42 million by claiming the company's shopping app used AI to process transactions, when in reality manual workers completed the purchases. The claimed automation rate was above 90 percent; the actual rate was essentially zero.
Securities class actions targeting alleged AI misrepresentations increased by 100 percent between 2023 and 2024. In February 2025, the SEC announced the creation of a dedicated Cyber and Emerging Technologies Unit, tasked with combating technology-related misconduct, and flagged AI washing as a top examination priority.
The pattern is instructive. When a technology is overhyped, the incentive to exaggerate capabilities becomes irresistible. Companies that accurately describe their modest AI implementations risk being punished by investors who have been conditioned to expect transformative breakthroughs. The honest actors are penalised while the exaggerators attract capital, creating a market dynamic that systematically rewards deception.
The AI hype cycle is not without historical precedent, and the parallels offer both warnings and qualified reassurance.
During the dot-com era, telecommunications companies laid more than 80 million miles of fibre optic cables across the United States, driven by wildly inflated claims about internet traffic growth. Companies like Global Crossing, Level 3, and Qwest raced to build massive networks. The result was catastrophic overcapacity: even four years after the bubble burst, 85 to 95 percent of the fibre laid remained unused, earning the nickname “dark fibre.” The Nasdaq composite rose nearly 400 percent between 1995 and March 2000, then crashed 78 percent by October 2002.
The parallels to today's AI infrastructure buildout are unmistakable. Meta CEO Mark Zuckerberg announced plans for an AI data centre “so large it could cover a significant part of Manhattan.” The Stargate Project aims to develop a $500 billion nationwide network of AI data centres. Goldman Sachs analysts found that hyperscaler companies had taken on $121 billion in debt over the past year, representing a more than 300 percent increase from typical industry debt levels. AI-related stocks had accounted for 75 percent of S&P 500 returns, 80 percent of earnings growth, and 90 percent of capital spending growth since ChatGPT launched in November 2022.
Yet there are important differences. Unlike many dot-com companies that had no revenue, major AI players are generating substantial income. Microsoft's Azure cloud service grew 39 percent year over year to an $86 billion run rate. OpenAI projects $20 billion in annualised revenue. The Nasdaq's forward price-to-earnings ratio was approximately 26 times in November 2023, compared to approximately 60 times at the dot-com peak.
The more useful lesson from the dot-com era is not about whether the bubble will burst, but about what happens to public trust and institutional decision-making in the aftermath. The internet survived the dot-com crash and eventually fulfilled many of its early promises. But the crash destroyed trillions in wealth, wiped out retirement savings, and created a lasting scepticism toward technology claims that took years to overcome. The institutions and individuals who made decisions based on dot-com hype, from pension funds that invested in companies with no path to profitability to governments that restructured services around technologies that did not yet work, bore costs that were never fully recovered.
Perhaps the most consequential long-term risk of the AI hype gap is its intersection with systemic inequality. When policymakers deploy AI systems in criminal justice, welfare administration, and public services based on inflated claims of accuracy and objectivity, the consequences fall disproportionately on communities that are already marginalised.
Predictive policing offers a stark illustration. The Chicago Police Department's “Strategic Subject List,” implemented in 2012 to identify individuals at higher risk of gun violence, disproportionately targeted young Black and Latino men, leading to intensified surveillance and police interactions in those communities. The system created a feedback loop: more police dispatched to certain neighbourhoods resulted in more recorded crime, which the algorithm interpreted as confirmation that those neighbourhoods were indeed high-risk, which led to even more policing. The NAACP has called on state legislators to evaluate and regulate the use of predictive policing, noting mounting evidence that these tools increase racial biases and citing the lack of transparency inherent in proprietary algorithms that do not allow for public scrutiny.
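The self-confirming character of that loop can be made concrete with a minimal simulation (illustrative numbers of my own, not CPD data). Two districts have identical true crime rates; district A merely starts with more recorded crime. Patrols follow recorded crime, and detections follow patrols, so the initial disparity never washes out and the data keeps "confirming" it:

```python
# Two districts with IDENTICAL underlying crime; A starts over-recorded.
# All parameters are illustrative assumptions.
true_rate = {"A": 10.0, "B": 10.0}
recorded  = {"A": 12.0, "B": 8.0}
TOTAL_PATROLS = 10

for step in range(5):
    total = sum(recorded.values())
    # Allocate patrols in proportion to *recorded* crime...
    patrols = {d: TOTAL_PATROLS * recorded[d] / total for d in recorded}
    # ...and record crime in proportion to how hard each district is watched.
    recorded = {d: true_rate[d] * 0.1 * patrols[d] for d in recorded}

share_A = recorded["A"] / sum(recorded.values())
print(f"District A's share of recorded crime after 5 rounds: {share_A:.0%}")
```

Despite identical true rates, district A retains 60 percent of recorded crime and 60 percent of patrols indefinitely: the algorithm's output reproduces its input bias, which is precisely the dynamic the NAACP and others have warned about.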
The COMPAS recidivism prediction tool, widely used in US criminal justice, was found to produce biased predictions against Black defendants compared to white defendants, trained on historical data saturated with racial bias. An audit by the LAPD inspector general found “significant inconsistencies” in how officers entered data into a predictive policing programme, further fuelling biased predictions. These are not edge cases or implementation failures. They are the predictable consequences of deploying pattern-recognition systems trained on data that reflects centuries of structural discrimination.
In welfare administration, the pattern is equally troubling. The Dutch childcare benefits scandal demonstrated how algorithmic systems can automate inequality at scale. The municipality of Rotterdam used a discriminatory algorithm to profile residents and “predict” social welfare fraud for three years, disproportionately targeting young single mothers with limited knowledge of Dutch. In the United Kingdom, the Department for Work and Pensions admitted, in documents released under the Freedom of Information Act, to finding bias in an AI tool used to detect fraud in universal credit claims. The tool's initial iteration correctly matched conditions only 35 percent of the time, and by the DWP's own admission, “chronic fatigue was translated into chronic renal failure” and “partially amputation of foot was translated into partially sighted.”
These failures share a common thread. The AI systems were deployed based on claims of objectivity and accuracy that did not withstand scrutiny. Policymakers, influenced by industry hype about AI's capabilities, trusted algorithmic outputs over human judgement, and the people who paid the price were those least equipped to challenge the decisions being made about their lives.
The long-term consequences of the AI hype gap extend beyond immediate harms to individual victims. They threaten to reshape the relationship between society and technological innovation in ways that could prove difficult to reverse.
First, there is the problem of misallocated resources. The MIT study found that more than half of generative AI budgets were devoted to sales and marketing tools, despite evidence that the best returns came from back-office automation, eliminating business process outsourcing, cutting external agency costs, and streamlining operations. When organisations chase the use cases that sound most impressive rather than those most likely to deliver value, they waste capital that could have funded genuinely productive innovation. The study also revealed a striking shadow economy: while only 40 percent of companies had official large language model subscriptions, 90 percent of workers surveyed reported daily use of personal AI tools for job tasks, suggesting that the gap between corporate AI strategy and actual AI utility is even wider than the headline figures suggest.
Second, the trust deficit creates regulatory feedback loops that can stifle beneficial applications. As public concern about AI grows, so does political pressure for restrictive regulation. The 2025 Stanford HAI report found that references to AI in draft legislation across 75 countries increased by 21.3 percent, continuing a ninefold increase since 2016. In the United States, 73.7 percent of local policymakers agreed that AI should be regulated, up from 55.7 percent in 2022. This regulatory momentum is a direct response to the trust deficit, and while some regulation is necessary and overdue, poorly designed rules driven by public fear rather than technical understanding risk constraining beneficial applications alongside harmful ones. Colorado became the first US state to enact legislation addressing algorithmic bias in 2024, with California and New York following with their own targeted measures.
Third, the hype cycle creates a talent and attention problem. When AI is presented as a solution to every conceivable challenge, researchers and engineers are pulled toward fashionable applications rather than areas of genuine need. Acemoglu has argued that “we currently have the wrong direction for AI. We're using it too much for automation and not enough for providing expertise and information to workers.” The hype incentivises building systems that replace human judgement rather than augmenting it, directing talent and investment away from applications that could produce the greatest social benefit.
Finally, and perhaps most critically, the erosion of public trust in AI threatens to become self-reinforcing. Each failed deployment, each exaggerated claim exposed, each algorithmic system found to be biased or inaccurate further deepens public scepticism. Meredith Whittaker, president of Signal, has warned about the security and privacy risks of granting AI agents extensive access to sensitive data, describing a future where the “magic genie bot” becomes a nightmare if security and privacy are not prioritised. When public trust in AI erodes, even beneficial and well-designed systems face adoption resistance, creating a vicious cycle where good technology is tainted by association with bad marketing.
The AI hype gap is not merely a marketing problem or an investment risk. It is a structural challenge to the relationship between technological innovation and public trust that has been building for years and is now reaching a critical inflection point.
The 2025 Edelman Trust Barometer found that the most powerful drivers of AI enthusiasm are trust and information, with hesitation rooted more in unfamiliarity than negative experiences. This finding suggests a path that does not require abandoning AI, but demands abandoning the hype. As people use AI more and experience its ability to help them learn, work, and solve problems, their confidence rises. The obstacle is not the technology itself but the inflated expectations that set users up for disappointment.
Gartner's placement of generative AI in the Trough of Disillusionment is, paradoxically, encouraging. As the firm's analysts note, the trough does not represent failure. It represents the transition from wild experimentation to rigorous engineering, from breathless promises to honest assessment of what works and what does not. The companies and institutions that emerge successfully from this phase will be those that measured their claims against reality rather than against their competitors' marketing materials.
The lesson from previous technology cycles is clear but routinely ignored. The dot-com bubble popped, but the internet did not disappear. What disappeared were the companies and institutions that confused hype with strategy. The same pattern will likely repeat with AI. The technology will mature, find its genuine applications, and deliver real value. But the path from here to there runs through a period of reckoning that demands honesty about what AI can and cannot do, transparency about the limitations of algorithmic decision-making, and accountability for the real harms caused by deploying immature systems in high-stakes contexts.
As Bender and Hanna urge, the starting point must be asking basic but important questions: who benefits, who is harmed, and what recourse do they have? As Acemoglu wrote in his analysis for “Economic Policy” in 2024, “Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing.” The potential is real. But potential is not performance, and treating it as such has consequences that a $600 billion question only begins to capture.
Acemoglu, D. (2024). “The Simple Macroeconomics of AI.” Economic Policy. Massachusetts Institute of Technology. https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf
Amnesty International. (2021). “Xenophobic Machines: Dutch Child Benefit Scandal.” Retrieved from https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/
Bender, E. M. & Hanna, A. (2025). The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. Penguin/HarperCollins.
CBS News. (2023). “UnitedHealth uses faulty AI to deny elderly patients medically necessary coverage, lawsuit claims.” Retrieved from https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/
Challapally, A., Pease, C., Raskar, R. & Chari, P. (2025). “The GenAI Divide: State of AI in Business 2025.” MIT NANDA Initiative. As reported by Fortune, 18 August 2025. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
Edelman. (2025). “2025 Edelman Trust Barometer.” Retrieved from https://www.edelman.com/trust/2025/trust-barometer
Edelman. (2025). “Flash Poll: Trust and Artificial Intelligence at a Crossroads.” Retrieved from https://www.edelman.com/trust/2025/trust-barometer/flash-poll-trust-artifical-intelligence

Tim Green
UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
from
Shad0w's Echos
#nsfw #CeCe
We had a satisfying orgasm together in that stairway. CeCe bare without any cover. Me with my clothes in a pile watching her use this moment to reset herself and find center. I knew deep down she was hurting. I knew in this moment this is all she had to help ground her. I chose her long ago before I even knew it. I didn't pity her. I still admired her. Even though the world was far different from how I saw it. I did my best to understand her. What made her tick, why she did things. Why she needed porn. Why she needed to be naked and risk it all. No matter how out of hand this got, I would always love her.
We looked each other in the eyes deeply as we rubbed ourselves to orgasm in the cool stairwell. It was late and the building was still. I wasn't worried about getting caught.
We made it back to the dorms that cold night. I was dressed now. CeCe's naked body still trembling from the stairwell release, her caramel skin chilled but her eyes a bit clearer, the orgasm having reset her just enough to function. I wrapped her in a blanket as soon as we slipped inside our room, the open blinds letting in the faint glow of streetlights and moonlight.
She curled up on the bed, silent at first, but I knew it was coming. She started sobbing quietly. I held her, whispering reassurances about jobs and apartments, our future together. “We'll be okay,” I murmured, stroking her thick thighs. “You're safe now.” She nodded, exhausted, and we drifted into an uneasy sleep, her head on my chest, the weight of her meltdown lingering like a shadow.
But the peace shattered around 4 a.m. A frantic pounding echoed through the hall, followed by shouting. Someone was yelling CeCe's name, over and over, laced with hysteria. I bolted upright, heart slamming, as CeCe stirred beside me, her eyes widening in terror. “Mom?” she whispered fearfully, but before we could react, the door burst open.
Her mother had sweet-talked, or more likely forced, her way past the night security at the building entrance, breaking every rule in a desperate bid to “save” her daughter. With an extreme show of force she had rammed the door open. It all happened so fast.
There she stood in the doorway, wild-eyed and disheveled, coat thrown over pajamas, her face a mask of frantic rage. “CeCe! What you said on the phone! Porn!? Masturbation!? You're coming home now!! This isn't you!”
CeCe scrambled back, clutching the blanket to her chest, but her mom lunged forward, grabbing her arm, trying to drag her out—naked, into the cold hallway, the winter air seeping through the building's drafty corridors. “No! Let go!” CeCe screamed, twisting away, her full breasts heaving as the blanket slipped, exposing her curves to the chaos. I jumped up, yelling for her to stop, but her mom was beyond reason, ranting about sin and family honor, all because of CeCe's raw confession about porn and refusal to conform to traditional marriage. It was a full mental breakdown—her mom clawing at CeCe, sobbing incoherently, the scene drawing neighbors out of their rooms in shock.
Someone down the hall must have called campus police; sirens wailed faintly in the distance, growing louder as officers arrived, pulling her mom off CeCe and restraining her as she thrashed and wailed. “She's ruined! My baby girl's ruined by that filth!” The arrest was swift, almost as quick as her arrival. Trespassing, disorderly conduct, and assault charges were pending. Almost every door was open with a resident peeking out.
CeCe was left standing there in the hallway, naked and exposed to the cold, her ass and pussy on full display under the fluorescent lights, neighbors gawking before averting their eyes. This was not the exposure she wanted or fantasized about. She was just in shock, curling up in a ball on the cold floor. The winter chill bit into her skin, goosebumps rising on her thighs. Everything was all wrong; she was wide-eyed and unresponsive. I was horrified.
I rushed to her side, grabbing her favorite hoodie and a stuffed animal from the room. I slid the hoodie over her naked body and gently placed the stuffed animal in her arms, hoping she would reach out and hold it. It just lay there in her arms, as if it didn't exist. She stared blankly, focused on nothing, not even me. She was in some faraway place.
The anxiety attack gripped her fully. She collapsed against me, hyperventilating, her body shaking uncontrollably, sobs turning to gasps as the world spun. “Tasha... I can't... breathe...” Campus security escorted us to the health center, where they called for professional counseling right then and there. The therapist on call helped stabilize her with breathing exercises and a mild sedative, but when CeCe started sessions the next day, she never revealed the full truth—nothing about her porn watching, her chronic masturbation, or her naked habits. She framed it as “family stress” and “independence issues,” her brilliant mind compartmentalizing to protect her core self. While it was not the whole truth, it still was the root of the problem. Her porn addiction was her discovering her true self and claiming independence from a toxic and oppressive situation. Porn was her safe space. I wasn't going to take that away from her.
After that night, CeCe cut her mom off completely. She had no desire to call, no desire to visit. She visibly shuddered when I asked if she was going to talk to her mom again.
She changed her number the next week. Then as calls from extended family rolled in, she blocked every family contact. Eventually she deleted all of her social media apps entirely. She was on a quest to be totally unreachable.
The cutoff was so complete that she never bothered retrieving her belongings from home. “It's not worth it,” she said flatly one evening, naked on the bed with porn muted on her phone, her fingers idly circling her clit as if on autopilot. Instead, she poured all her time and effort into landing a good-paying job. She had a scholarship, but her mom had been funding a good portion of her education, and she didn't want to rely on her family for anything.
Using her sheer will and determination, her engineering prowess was able to shine through in interviews. She aced a position at a local tech firm, something entry-level but solid, using her skills to design software prototypes—brilliant work that paid just enough for us to afford a small apartment off-campus by summer. We could live together on our own with my job and her entry level position. When she got the job offer, she smiled. But she was never quite the same.
That night had broken something in her; her dreams narrowed, ambitions stripped down to basics. No more talk of grad school or big career leaps, no talk of upper management or six-figure salaries. She just wanted a stable savings account to fund our life, endless porn to fuel her obsessions, and the freedom to be naked whenever and wherever she wanted.
Everything else felt hollow, tainted by the trauma. The need to go above her goals felt like something her mother wanted. It wasn't something she really wanted. I was a silent witness to a beautiful woman barely clinging onto normalcy trying to put parts of herself back together again. I knew she was lost. I was her compass and her rock. I didn't complain. I can't help but love her.
We moved out. We were now two college dropouts taking a different path in life. During the move I helped CeCe focus on small goals. I reminded her to focus on small wins and not think about the big stuff. We didn't have much at first, just a queen size mattress on the floor, some cheap furniture to make a small office area for our computer, and basic utensils to cook. I looked at the pitiful state of our living arrangement. CeCe reminded me daily that we had each other. She was right. She knew when I needed her the most.
A year passed in our shared life. We saved up for furniture together, we made financial decisions together. We thrived together. It was a seamless blend of companionship and unspoken intimacy that we never bothered to label. Tasha and CeCe. We never defined our relationship publicly. We never talked about marriage or slapped a title on it. We were just long-term roommates, best friends who shared everything.
CeCe tried, though, more than once, to nudge me toward dating, to “get away from the always naked chick,” as she'd self-deprecatingly call herself. She'd catch me staring during one of her open window goon sessions, fingers buried in her slick pussy as she moaned to a video of black women flashing in public.
She sighed, “Tasha, you deserve someone normal. Go out, find a guy or girl who doesn't spend half the day rubbing one out. I'm holding you back. You could have gotten your degree, but I was too busy on the verge of breaking down for you to focus. Yes I have this job now and I can provide for us, but I wound up dragging you down with me.”
There was a long silence. She didn't stop rubbing or watching, but I saw a tear stream down her cheek. The wound was still wide open from what she endured. I knew she needed more than therapy to make it out of this. When I made that silent vow to stay with her, that resolve never wavered. I wasn't going anywhere. I have shaped my whole world around her.
She was my world. On her good days, I loved the thrill of her escalations. The safety of our bond was perfect. Her autistic focus made her love so intensely, so unfiltered. I'd get up, pull her close, kissing her deeply, whispering, “You're all I need, CeCe. This is us.” And she'd melt into it, her thick thighs wrapping around me, but the guilt lingered in her eyes, even as she came undone under my touch. “You are my world, Tasha.” She turned and looked me in my eyes and kissed me softly.
Eventually, her job at the tech firm stabilized and my cafe gig evolved into management. We didn't have degrees, but we had stability. We started our careers. We were happy with each other.
Slowly, CeCe started smiling again. She dressed like a baddie for work, but she started wearing her hoodies again on casual days. We started going out on dates again. CeCe started exposing herself in public again. As we approached our second year living together, CeCe was almost back to her old self. I thought CeCe was ready to meet my mom. One day I asked, “Do you want to meet my family?” Her eyes twinkled and she smiled. “Yes, I would love that very much.”
from
Reflections
Freedom is what you do with what's been done to you.
—Unknown, often incorrectly attributed to Jean-Paul Sartre
There may be hundreds of quotes from Stoicism and other traditions that make a similar point. I also wrote something along the same lines in “You're the only person you can control”.