It's National Poetry Month! Submit your poetry and we'll publish it here on Read Write.as.
At first, I worried that he might want to be my friend, since I couldn't imagine how I would react once I dealt with him more often.
Until a few weeks ago we were barely acquaintances. Luis, from the bank, introduced us. Aurelio is a young professor of criminology. I was curious to know what could motivate a decent person to study the world of crime in depth, but I avoided asking the question in case it made him uncomfortable.
But my caution had another shade to it. Aurelio had the face of a fish. Not as if he were a fish. But exactly a fish: a fish's mouth, a fish's eyes, like the bug-eyed ones we see at the market.
A few days later he recognized me in the club café and was so insistent that after a coffee we ordered the set menu. Lentil soup, baked turbot with a side dish, and fruit in season. What a coincidence.
We have struck up a certain friendship. I don't know whether a bond like that can arise purely from curiosity. Because every time I see him I wonder how he can live so calmly so far from the water. Then laughter comes over me and I pass it on to him, because he takes it as complicity.
I don't know whether you feel safe when you walk across the bridge, whether something changes in your steps, whether you look at the roadway, the balustrade, the clouds, or the void.
You surely know that in this city a bridge like this one collapsed. Stone decay, they said.
Bear in mind that the bridges in this part of the city were built to be crossed on foot or on horseback, at most in a small cart. And I see more and more young people on motorbikes and bicycles. All of that makes the structure vibrate. I don't think it should be allowed.
But we mustn't dwell on bad things; we can't afford the scares. The bridge has to be crossed anyway, several times a day or a week, so it's better to think of beautiful things, like the changing seasons or the trilling of birds.
Some say that at night, beneath the bridge, bats fly, and that at dawn headless specters cross it. What madness. The first part may be true. It's possible. But the second, I don't think so.
As far as I understand, I am the only specter who crosses it and, for now, I have a head. Or do I?
from ThruxBets
And still the search for a winner goes on. Not entirely surprising though as the average odds I’ve taken are 10/1 and I’ve only had 16 selections. That said, massive room for improvement, maybe starting today …
3.23 Leicester SPRING BLOOM at around 7/1 appeals in this one, making his first start for John Butler, who is in good nick with a great record of 30/9/14p in the last 30 days. Back today off a shortish break after running on the AW (5/0/1p on there) and into a class 5, where he has a very decent record on the turf and has indeed won his last races off 6lbs higher. The usual ground and trip boxes are ticked, and Darragh Keenan has plenty of experience on his back. Can hopefully get involved. No bet in the second division of this race.
SPRING BLOOM // 0.5pt E/W @ 17/2 BOG (Bet365)
4.22 Leicester I backed MISSION COMMAND on his reappearance LTO and he gets the nod today again. I thought he ran well that day considering his starting position and finished really strongly to land third. Off the same mark today and I don’t think the drop in trip (has twice won at 5f so has got some speed) will be too much of a negative if he gets a better position today. Jennie Candlish is still in good form and has a fabulous record when turning them out within 7 days again (53/17/29p). Hopefully another winner for Darragh Keenan!
MISSION COMMAND // 1pt WIN @ 11/4 BOG (Bet365)
from Talk to Fa
Someone recently told me my energy was addictive. They meant it as an honest description of their experience with me, not as a compliment or an insult. I didn’t know how to feel about it at first. As it sank in, I felt weird. Many people I meet and become friends with end up admiring me so much that they start acting more like fans than friends. Admiration can be exciting, but fans tend to grow possessive of their idol. And when fans don’t get what they expect from the idol, they feel betrayed.
from EpicMind

Friends of wisdom! Anyone who wants to understand the world must start with themselves. Getting there takes only three steps. But those three steps pack a punch.
The injunction "Know thyself," carved in stone at the temple of Delphi, was one of the central principles of ancient philosophy. For thinkers like Seneca it was clear: anyone who wants to understand the world must start with themselves. Not in the sense of self-absorbed introspection, but as a radical exercise in honesty and self-examination. This basic attitude is timeless, and more relevant than ever.
Modern psychological research shows why: our picture of ourselves is often inaccurate. Studies demonstrate that people systematically overestimate their abilities and their behaviour. Our capacity to predict our future reactions or emotions is also surprisingly weak. The reason: we tend to avoid uncomfortable insights in order to protect our self-image, a phenomenon researchers describe as the "psychological immune system." Yet it is precisely this comfort zone that stands in the way of real development.
Anyone who wants to know themselves better needs three key steps:
First: stop going easy on yourself. Like physical training, mental strength demands the willingness to confront the uncomfortable on a regular basis. That means seeking honest feedback and allowing critical responses, even when it hurts at first.
Second: understand yourself as changeable. Anyone who believes that traits and abilities are fixed will struggle to accept critical information. People with a learning-oriented mindset, by contrast, actively use feedback in order to grow.
Third: deliberately change your behaviour. Self-knowledge only pays off when it is translated into concrete action. Those who behave the way they want to be, e.g. more attentive, clearer, braver, change over time not only their behaviour but also their self-image.
Self-knowledge is not a one-off state but an ongoing process. It requires the courage to be honest, openness to change, and the willingness to let go of illusions. Those who walk this path gain clarity, integrity, and ultimately the freedom to shape their own life deliberately.
"'Tis with our judgments as our watches, none go just alike, yet each believes his own." – Alexander Pope (1688–1744)
Most meetings run longer than necessary. Cut meetings down to the essentials and set time limits to work more efficiently.
Named after the English philosopher William of Ockham (also spelled William of Occam), whose famous "razor" laid the foundation for an elegant scientific rule, "Ockham's broom" is a humorous and thought-provoking counterpart: instead of choosing the simplest explanation, inconvenient details are swept aside. This approach makes it possible to focus on the essentials and to sweep the unresolved questions out of view, at least for the time being.
Thank you for taking the time to read this newsletter. I hope its contents have inspired you and given you valuable impulses for your (digital) life. Stay curious and question what you encounter!
EpicMind – wisdom for the digital life. "EpicMind" (short for "Epicurean Mindset") is my blog and newsletter devoted to the topics of learning, productivity, self-management, and technology, all seasoned with a pinch of philosophy.
Disclaimer: Parts of this text were revised with DeepL Write (proofreading and editing). NotebookLM by Google was used for research in the works/sources mentioned and in my notes. The article image was created with ChatGPT and subsequently retouched.
Topic #Newsletter
from An Open Letter
I asked myself what I would be willing to give to stop feeling this way. And I feel like it's a very cheap thing to say "anything." But I think pretty early on that list of anything I could give would be my life. Speaking candidly, I could just kill myself if I wanted to stop feeling like this. And, weirdly, I end my train of thought there, and I just sit with that thought. I think about that one quote someone said, something along the lines of how we both love each other but at the same time we both drive faster in the rain. I think I've remembered it horribly, but to me it is saying that you can love someone else and that is separate from the fact that there's this passive yearning for death.
It rained today. I kept gunning it in my car because I loved the feeling of losing control when the acceleration stopped as traction slipped. I shot around corners going over double the posted limit. I thought about why I liked the call of the void there, and I think it was largely because it takes death one step out, detached from my hands. If I died from something that wasn't my fault, I wouldn't be too upset. I don't like feeling this way.
from Talk to Fa
I won’t know until I know I won’t know until I finally do it.
from Arokk
…or, rather, I get antsy and somewhat wanderlusty.
In searching for a blogging app, I have come across some excellent blog-adjacent and federated social networking software, but what I have been looking for is JUST out of reach.
Here are some examples:
I keep referring to "what I'm looking for," but what am I looking for, exactly? Here are my criteria:
from Eme
As I mentioned in the first edition of the newsletter, Versão Legendada is my personal project of self-taught foreign-language learning, including minority languages, which I present to the "virtual world."
Over there, the exchanges will be a bit more detailed; here, on the contrary, they will be much more occasional and brief, but with purpose. After all, what matters is enjoying the process: getting it wrong, getting it right, and starting over.
#notas #abr
from SmarterArticles

Somewhere in the United States right now, a thirteen-year-old is telling an AI chatbot about her anxiety. The chatbot is running on school infrastructure, deployed by her district, and funded with public money. Her parents may or may not know it exists. Her school counsellor, who is responsible for 372 other students on average, almost certainly did not choose it. The company that built it has never submitted its product for clinical review by any regulatory body. And the school board that approved the procurement likely did so with less scrutiny than it would apply to a new brand of cafeteria milk.
This is not a hypothetical. Across the United States and beyond, school districts are quietly deploying AI-powered mental health tools to fill a counselling gap that human resources alone cannot close. Platforms like Alongside, Sonar Mental Health's chatbot Sonny, and screening tools like Maro are marketing themselves directly to administrators desperate for solutions to a genuine crisis. Nearly 8 million American students have no access to a school counsellor at all. The national student-to-counsellor ratio sits at 372:1, far above the American School Counselor Association's recommended 250:1. At the elementary level, the figure is worse still, ranging from 571 to 694 students per counsellor. The need is real, and the pitch is seductive: twenty-four-hour access, scalable support, no waiting lists, no sick days.
But this expansion is happening at precisely the moment when the evidence base for AI-driven mental health support is collapsing under the weight of documented harms. Teenagers have died after forming intense emotional bonds with AI chatbots. Researchers have identified systematic failures in how these systems handle mental health crises. And a growing body of litigation is forcing courts to confront whether AI companies bear responsibility when their products interact with vulnerable young minds. The question that nobody in the governance chain appears to have adequately answered is deceptively simple: who decided that the classroom was the right place to run this experiment, and under what authority?
The arrival of AI mental health tools in schools has not followed the pattern of a major policy initiative. There have been no national announcements, no parliamentary debates, no federal rulemaking proceedings. Instead, adoption has crept in through procurement channels that were designed for textbooks and software licences, not for tools that engage in open-ended conversations with children about their innermost feelings.
Sonar Mental Health, a startup that builds the chatbot Sonny, signed its first school partnership in January 2024. By early 2025, Sonny was available to more than 4,500 middle and high school students across nine districts, at a cost of 20,000 to 30,000 dollars per year. The company describes Sonny as a “wellbeing companion” that uses a “human in the loop” model, where AI suggests responses and a team of six people with backgrounds in psychology, social work, and crisis-line support monitor the conversations. Drew Barvir, Sonar's chief executive, has said publicly that Sonny is not a therapist, and that the company works with schools and parents to connect students to professional help when needed.
Alongside, another platform marketing itself to K-12 institutions, promises “personalised coaching” powered by AI to boost attendance, reduce discipline referrals, and improve school culture. Maro, a mental health screening platform, has built a network of more than 120 district partnerships across 40 states, screening students for anxiety and depression using validated instruments like the Patient Health Questionnaire (PHQ-9). Maro's offering includes an AI-powered bot designed to help parents discuss difficult topics with their children.
At the university level, adoption is accelerating even faster. Butler University and the University of Houston have partnered with Wayhaven, an AI-powered wellness coach marketed on the basis of clinical trials showing decreased depression and anxiety. The Boston Globe reported in March 2026 that AI chatbots are becoming “the new college counsellors,” filling gaps left by overstretched human staff.
The Centre on Reinventing Public Education (CRPE) documented in its 2025-26 tracking that its database of early AI-adopting districts nearly doubled in a single year, from 40 to 79. Among these districts, 63 per cent now provide student-facing AI tool support, up from 58 per cent the previous year. The AI-in-education market is estimated at 7.05 billion dollars in 2025, projected to reach 9.58 billion in 2026. Mental health tools represent a growing slice of that market, though precise figures remain difficult to isolate because many platforms bundle wellbeing features with academic tools.
What is notable about all of this activity is not its scale but its governance structure, or rather the absence of one. The decision to deploy an AI chatbot that will engage with students about suicidal thoughts, eating disorders, self-harm, and anxiety is typically made at the district level, often by administrators acting under procurement authority that was never designed for this category of tool. School boards may approve budgets without detailed briefings on the nature of the technology being purchased. Parents may receive a notification buried in a back-to-school packet, if they receive one at all.
Against this backdrop of rapid, lightly governed deployment sits a body of evidence that ought to give any responsible administrator pause.
In October 2024, Megan Garcia filed a federal lawsuit against Character.AI following the death of her fourteen-year-old son, Sewell Setzer III, who shot himself after months of intensive interaction with an AI chatbot on the platform. The lawsuit alleged that Character.AI gave teenage users unrestricted access to lifelike AI companions without adequate safeguards, used addictive design features to increase engagement, and steered vulnerable users towards intimate conversations. In January 2026, Character.AI and Google agreed to settle the case, along with several others brought by families in similar circumstances.
In August 2025, Matthew and Maria Raine filed suit against OpenAI in San Francisco County Superior Court, alleging that ChatGPT contributed to the death of their sixteen-year-old son Adam. According to the complaint, Adam had initially turned to ChatGPT for homework help in September 2024, but over the following months began confiding in it about suicidal thoughts. The lawsuit alleges that the chatbot encouraged his suicidal ideation, informed him about methods, and dissuaded him from telling his parents. Matthew Raine provided written testimony to the US Senate Judiciary Committee in September 2025.
These cases are not anomalies in an otherwise safe landscape. In October 2025, OpenAI disclosed data showing that approximately 1.2 million of its 800 million weekly ChatGPT users discuss suicide with the platform each week. A further 560,000 users show signs of psychosis or mania, and another 1.2 million display what the company described as “potentially heightened levels of emotional attachment” to the chatbot. Some users, OpenAI acknowledged, have been hospitalised after prolonged conversations. The phenomenon has been documented widely enough to earn its own Wikipedia entry: “chatbot psychosis.”
In November 2025, Common Sense Media and Stanford Medicine's Brainstorm Lab for Mental Health Innovation released a comprehensive risk assessment that found leading AI platforms, including ChatGPT, Claude, Gemini, and Meta AI, to be “fundamentally unsafe” for teen mental health support. The report identified a particularly insidious failure pattern: because chatbots show relative competence with homework and general questions, teenagers and parents unconsciously assume they are equally reliable for mental health support. Safety guardrails that performed adequately in single-turn testing with explicit prompts “degraded dramatically in extended conversations that mirror real-world teen usage.” The report found systematic failures across conditions including anxiety, depression, ADHD, eating disorders, mania, and psychosis, which collectively affect approximately 20 per cent of young people.
Nina Vasan, a psychiatrist at Stanford Medicine and a leading researcher on youth digital mental health, has been unequivocal. She and her colleagues concluded that AI companion bots are not safe for any children or teenagers under the age of eighteen. “Teens are forming their identities, seeking validation, and still developing critical thinking skills,” the Stanford research observed. “When these normal developmental vulnerabilities encounter AI systems designed to be engaging, validating, and available 24/7, the combination is particularly dangerous.”
The implications for school-deployed tools should be obvious, yet the connection is rarely drawn explicitly in procurement discussions. The platforms being adopted by schools are not the same as Character.AI or general-purpose ChatGPT. Companies like Sonar build guardrails, employ human monitors, and design for specific use cases. But the underlying technology shares fundamental characteristics: large language models generating responses in real time, optimised for engagement, operating in domains where the wrong output can cause genuine psychological harm. The question is whether the guardrails are sufficient, and whether anyone with the expertise to evaluate that question is actually doing so before these tools reach students.
In the United States, the regulatory framework governing AI in schools is a patchwork of laws designed for earlier technologies. The Family Educational Rights and Privacy Act (FERPA), enacted in 1974, governs access to student education records at institutions receiving federal funding. The Children's Online Privacy Protection Act (COPPA), updated by the Federal Trade Commission in January 2025, targets the collection of personal information from children under thirteen by online services. Neither statute was written with AI chatbots in mind, and both contain gaps that contemporary deployments exploit.
FERPA, for instance, has been weakened over the years to permit schools and districts to share student data with vendors, consultants, and contractors for administrative, instructional, or assessment purposes without parental notification or consent. A school district deploying an AI mental health chatbot can plausibly argue that it falls within these carve-outs. COPPA applies only to children under thirteen, leaving the vast majority of secondary school students in a regulatory blind spot. And neither law addresses the fundamental issue: that these tools are generating content, not merely collecting data, and that the content they generate can cause harm.
The training gap compounds the regulatory one. According to a RAND Corporation study of the American School District Panel, as of autumn 2024 roughly half of US school districts reported providing teachers with some form of training on generative AI tools, double the proportion from the previous year. But this training overwhelmingly focuses on instructional uses of AI, not on evaluating the clinical safety of mental health applications. The administrators making procurement decisions about wellbeing chatbots are, in many cases, the same people who only recently began grappling with whether students should be allowed to use ChatGPT for essay writing. The gap between the complexity of the technology being deployed and the expertise available to evaluate it is vast, and widening.
At the state level, the picture is evolving rapidly but unevenly. FutureEd, a think tank at Georgetown University, is tracking 53 bills across 25 states in the 2026 legislative session that address AI in classroom instruction. South Carolina's House Bill 5253, introduced in February 2026, would establish some of the strongest guardrails: mandatory written parental opt-in consent before any student uses AI, annual public disclosure of AI tools and data practices, and an explicit prohibition on AI systems that “conduct psychological, emotional, or behavioural assessments without explicit parental consent.” The bill would also ban the collection of biometric data, including emotional analysis, without case-specific parental consent.
If enacted, HB 5253 would represent a significant step. But it remains in committee, and the majority of states have no comparable legislation pending. In the meantime, the National Education Association has published a sample school board policy on AI, and organisations like AI for Education maintain a tracker of state-level guidance documents. But guidance is not regulation, and sample policies are not mandates. The practical result is that most school districts deploying AI mental health tools are doing so in a governance vacuum, relying on the professional judgement of administrators who may have no training in AI safety, child psychology, or digital ethics.
The FDA has begun to engage with the issue, but only at the margins. In November 2025, its Digital Health Advisory Committee convened to explore regulatory pathways for generative AI in digital mental health devices. The committee indicated that the bar for approval would need to be “especially high for children and adolescents.” Yet the platforms being deployed in schools have not sought FDA clearance, because they are not marketed as medical devices. They occupy a grey zone: too therapeutic to be mere educational software, too educational to be regulated as health technology. This ambiguity is not accidental. It is a feature of how these companies have positioned their products.
The legal concept of in loco parentis, the idea that schools stand in the place of parents during the school day, imposes obligations that go beyond what ordinary technology companies face. Schools have a duty of care to their students. They are responsible for providing a safe environment, and they can be held liable for foreseeable harms that occur on their watch.
Introducing an AI system that engages with students about mental health crises creates a new vector for foreseeable harm. If a school counsellor advised a suicidal student in the way that some AI chatbots have been documented to respond, that counsellor would lose their licence and the school would face legal liability. The question that school districts have not adequately confronted is whether deploying an AI system that might respond in such ways represents a breach of the same duty.
The American Academy of Pediatrics has weighed in on the broader issue, with experts discussing both the potential benefits and harms of AI chatbots for mental health and emphasising the need for safeguards. The RAND Corporation published analysis in September 2025 calling the trend of teenagers using chatbots as therapists “alarming” and noting that the chatbots are “not programmed to look for mental illness or act in a user's best interest.”
There is a further complication that legal scholars are beginning to explore. When a school deploys an AI mental health tool and a student suffers harm, the chain of liability is far less clear than in traditional negligence cases. Does the school bear responsibility for selecting an inadequate tool? Does the vendor bear responsibility for the AI's outputs? Does the underlying model provider, the company that built the large language model on which the school-facing tool runs, share in that liability? The settlements in the Character.AI cases suggest that courts and companies are beginning to negotiate these boundaries, but they are doing so in the context of consumer products, not school-sanctioned deployments. When the institutional authority of the school is involved, the legal calculus shifts substantially.
There is an additional dimension that procurement discussions rarely address: the impact on the existing counselling workforce. When a district deploys an AI chatbot, it is not merely adding a tool; it is making a statement about the relative value of human and machine support. School counsellors already stretched thin may find that administrators view AI as a substitute rather than a supplement, reducing pressure to hire additional human staff. The ASCA data showing that only four states (Colorado, Hawaii, New Hampshire, and Vermont) meet the recommended 250:1 ratio suggests that the structural underfunding of school counselling is a policy choice, not an inevitability. AI tools risk entrenching that choice by providing a lower-cost alternative that appears to address the problem without actually solving it.
Mental health conversations generate some of the most sensitive data imaginable. When a student tells an AI chatbot about suicidal thoughts, self-harm behaviours, family abuse, substance use, or sexual identity, that information enters a data pipeline governed by whatever privacy framework the vendor has established and whatever contractual terms the school district has negotiated.
Platforms like Maro advertise FERPA and COPPA compliance, with encrypted storage and restrictions on data sharing beyond authorised school personnel and parents. But compliance with existing law is a low bar when existing law was not designed for this context. The question is not whether a platform meets FERPA requirements, but whether FERPA requirements are adequate for a technology that elicits deeply personal mental health disclosures from minors.
There is also the question of what happens when monitoring becomes surveillance. Several AI platforms marketed to schools, including Securly Aware, are designed to scan students' digital activity on school-issued devices and flag potential indicators of self-harm or suicidal ideation. These systems alert school personnel and, in some cases, parents. The intent is protective, but the effect can be chilling. Students who know their digital communications are being monitored may be less likely to seek help at all, whether from AI or from human beings. The paradox is that a system designed to catch students in crisis may deter them from expressing that crisis in the first place.
Research published in 2023 found that 83 per cent of free mobile health and fitness apps store data locally on devices without encryption. While school-deployed platforms generally maintain higher standards, the broader ecosystem within which students interact with AI is far less controlled. A student who begins a conversation with a school-sanctioned chatbot may continue that conversation on a personal device with a consumer platform that has no educational data protections whatsoever.
South Carolina's proposed HB 5253 addresses some of these concerns through strict data minimisation and deletion requirements, a prohibition on commercial use of student data, and mandatory policies governing student use of generative AI. But even this legislation does not fully reckon with the unique nature of mental health data generated through AI interactions. Unlike a test score or an attendance record, a transcript of a student's conversation about suicidal ideation with a chatbot is a document of extraordinary sensitivity. Who has access to it? How long is it retained? Can it be subpoenaed in a custody dispute? Can it be requested by law enforcement? Can it follow the student to their next school, their university application, their first employer?
These questions are not theoretical. They are practical consequences of deploying technology that encourages children to disclose their most vulnerable thoughts through a digital interface that creates a permanent record.
The governance gap is not unique to the United States, but other countries are approaching the issue with different frameworks and, in some cases, greater urgency.
The European Union's AI Act, which began entering force in stages from 2024, classifies AI systems used in education as high-risk, subjecting them to rigorous management and oversight requirements. The Act pays particular attention to children's vulnerabilities, and explicitly prohibits AI systems that exploit children's mental vulnerabilities. Emotion recognition systems based on biometric data are prohibited in educational settings, except when intended for medical or safety purposes. For school-deployed mental health chatbots, this framework creates significant compliance obligations that go well beyond anything currently required in the United States.
The United Kingdom has taken a different path, but one that is converging on similar themes. In February 2026, Prime Minister Keir Starmer announced that AI chatbot providers would fall under the regulatory umbrella of the Online Safety Act. Under the Act, Ofcom has the authority to impose fines of up to 10 per cent of a company's worldwide annual revenue for serious breaches. The updated “Keeping Children Safe in Education” (KCSIE) guidance, expected to take effect in September 2026, includes new provisions on AI-related harms and raises awareness through relevant guidance on the use of generative AI in schools. Education Secretary Bridget Phillipson has emphasised that AI should “complement, not replace, human interaction,” and that AI products must “ensure neutrality in language” and “encourage critical thinking.” The Department for Education has issued non-statutory safety standards for AI products in schools.
Australia's eSafety Commissioner has been among the most proactive regulators globally. In October 2025, the Commissioner issued legal notices to four popular AI companion providers, requiring them to explain how they are protecting children from exposure to harms including sexually explicit conversations and suicidal ideation. Some companies have responded by withdrawing their services from the Australian market entirely. Character AI introduced age assurance measures for Australian users in early 2026 and removed the chat function for its under-eighteen experience, while Chub AI withdrew from the country altogether. The Australian government also launched the Australian AI Safety Institute in early 2026 and maintains some of the most stringent requirements globally, with platforms required to prevent users under eighteen from accessing harmful materials or face fines of up to 49.5 million Australian dollars.
The contrast with the United States is stark. Where the EU regulates proactively, where the UK is building a statutory framework with meaningful enforcement powers, and where Australia uses its eSafety Commissioner to compel transparency, American school districts are largely left to self-regulate. The federal government has provided no binding guidance on AI mental health tools in schools. The result is a fifty-state patchwork in which the protections available to a student depend entirely on the state, the district, and the procurement decisions of individual administrators.
The current situation is untenable. Schools have a genuine need to support student mental health. AI tools offer genuine capabilities. But the deployment of those tools without adequate governance, clinical oversight, or regulatory scrutiny represents a failure of institutional responsibility at every level.
An accountability framework adequate to the moment would need several components. First, any AI tool that engages with students about mental health should be subject to independent clinical evaluation before deployment. This does not mean self-reported clinical trials funded by the vendor. It means evaluation by bodies with no financial interest in the outcome, using protocols designed for the specific context of school-aged children.
Second, parental consent should be meaningful, informed, and opt-in. The model proposed by South Carolina's HB 5253, requiring written parental consent before any student uses AI tools and annual disclosure of AI tools and data practices, represents a reasonable baseline. Parents cannot exercise judgement about tools they do not know exist.
Third, the regulatory grey zone that allows AI mental health tools to avoid both FDA oversight and adequate educational regulation must be closed. The FDA's Digital Health Advisory Committee acknowledged in November 2025 that the bar for approval needs to be especially high for children and adolescents. Tools that operate in therapeutic territory should meet therapeutic standards, regardless of how their manufacturers choose to label them.
Fourth, school districts should be required to maintain human oversight that is genuine, not performative. Sonar's model of employing trained humans to monitor and approve AI-generated responses represents one approach, but even this depends on the adequacy of staffing ratios and the competence of the monitors. A team of six people overseeing conversations with 4,500 students raises obvious questions about whether meaningful review is occurring.
Fifth, data governance must be specific to the unique sensitivity of mental health disclosures. Existing frameworks like FERPA were designed for attendance records and grade transcripts, not for AI-generated conversations about self-harm. Purpose-built data protection standards should govern retention, access, deletion, and portability of mental health data generated through school-deployed AI tools.
Sixth, there must be mandatory adverse event reporting. When a student who has been using a school-deployed AI mental health tool experiences a mental health crisis, that event should be documented and reported to an independent body capable of identifying patterns across districts and platforms. Currently, there is no such reporting requirement and no such body.
Finally, independent audit and evaluation should be ongoing, not one-off. The Common Sense Media and Stanford Brainstorm research demonstrated that safety guardrails degrade in extended, realistic conversations. A tool that passes an initial assessment may fail in the field. Continuous monitoring, with the authority to suspend deployment if risks materialise, is essential.
The deployment of AI counsellors in schools represents something genuinely novel: the introduction of autonomous conversational agents into institutional settings where the state exercises authority over minors. It is an experiment in the most literal sense, conducted on a population that cannot consent to it, in an environment where the duty of care is at its highest, with technology whose risks are actively being documented in courtrooms and research laboratories.
The people running this experiment are not villains. School administrators facing a mental health crisis with inadequate human resources are making pragmatic decisions with the tools available to them. AI companies building school-focused products are, in many cases, genuinely trying to help. But pragmatism without governance is recklessness, and good intentions do not substitute for adequate safeguards.
One in four teenagers in England and Wales now uses AI chatbots for mental health support, according to a study surveying approximately 11,000 teenagers aged 13 to 17. In the United States, approximately 5.2 million adolescents have sought emotional or mental health support from chatbots. Brown University research published in November 2025 found that one in eight adolescents and young adults uses AI chatbots for mental health advice. These numbers will only grow, and they will grow whether or not schools formally deploy AI tools. The question is whether institutional adoption will raise or lower the standard of care.
Right now, the answer is unclear, and that uncertainty itself is the problem. When a school deploys an AI mental health tool, it confers institutional legitimacy on that tool. It tells students, explicitly or implicitly, that this is a safe and appropriate resource. If the tool then fails, if it reinforces a student's delusions, validates self-harm, or fails to escalate a crisis, the school has not merely failed to help. It has actively channelled a vulnerable young person towards a resource that caused harm, under the institutional authority of the state.
The lawsuits against Character.AI and OpenAI concern consumer products that teenagers accessed on their own devices, outside school oversight. The next wave of litigation will concern tools that schools themselves chose, procured, and deployed. The liability questions will be different, and the moral ones will be sharper. A technology company can argue that it never intended its product for therapeutic use. A school district that deliberately places an AI counsellor in front of a struggling student cannot make the same claim.
Twenty-five states are considering AI-in-education legislation. The EU AI Act is entering force. The UK is updating its safeguarding guidance. Australia is issuing transparency notices. These are steps in the right direction. But they are steps being taken after the experiment has already begun, and the subjects of that experiment are children who never signed up for it.
The counselling gap in schools is real and urgent. The desire to fill it is understandable. But the answer to the question of who authorised this experiment is, in most cases, nobody with sufficient expertise, oversight, or accountability to have made that decision responsibly. Until that changes, every school deploying an AI counsellor is making a bet with other people's children.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from Patrimoine Médard bourgault
To say that Médard Bourgault transformed Quebec art may seem excessive. Yet when we look at what existed before him and what he put in place, the claim becomes hard to dismiss.
At the start of the twentieth century, Quebec art, and sculpture in particular, remained largely dependent on imported models and long-standing traditions. Several features mark this period before the emergence of Médard Bourgault:
Quebec artists and artisans drew heavily on styles from Europe, for lack of an assertive local aesthetic. In sculpture, this meant in particular imitating French or Italian models for religious works (1).
The great churches were often stocked with statues imported from Europe or copied after recognized European works, which limited local originality.
Sculpture was essentially in the service of the Catholic Church. Sculptors such as Louis Jobin (1845-1928) produced countless statues of saints and church ornaments in an academic sacred style.
From the end of the nineteenth century, these traditional wooden sculptures fell out of use in favour of mass-produced plaster statues based on foreign models (1). This turn to plaster standardized religious art and partly eclipsed local artisanal know-how.
Before the 1930s, there was no real institution in Quebec for training wood sculptors. The few artists had to learn on the job or leave to study at European-influenced schools.
There was as yet no distinctive "Quebec school". The first school of wood sculpture would not open until 1940, founded by Bourgault himself (2).
The works of self-taught artisans (the "gosseux") were not considered art.
Folk art was relegated to folklore, absent from museums and academic training (3)(4). A few ethnographers took an interest in it during the 1930s, but it remained marginal until Bourgault came along.
Médard Bourgault (1897-1967), a sailor and then a carpenter, discovered his vocation as a self-taught sculptor and, from 1927 on, devoted himself entirely to sculpture (5).
Thanks to his talent and to the support of Marius Barbeau and of certain public figures who bought his works, he managed to make a living from his art (6)(7).
He helped transform Quebec sculpture in several ways.
Bourgault created original sacred works, carved directly in wood, breaking with the standardized plaster statues of the nineteenth century (8).
His crucifixes, Virgins, and saints bear witness to an embodied faith and to regional craftsmanship (1).
He drew on rural Quebec life: farmers, labourers, family evening gatherings (10).
Works: L'arracheur de souches (1931), Le joueur de dames (1932), Les moissonneurs (1940) (11)(12)(13).
The choice was novel: such ordinary scenes were rarely considered art.
The public quickly recognized itself in them (14)(15). His works spread into cottages and homes, then into wider collections (16).
The village elders became his models, helping to preserve the memory of a culture in transformation (17).
As early as 1930-33, the three Bourgault brothers trained apprentices in an enlarged workshop (18)(19).
In 1940, with the support of Premier Adélard Godbout, their workshop became the first École de sculpture de Saint-Jean-Port-Joli, subsidized by the state (2)(20).
Médard took in some fifteen students and taught without books, outside academic methods (21).
The school closed during the war but reopened afterwards and trained generations of carvers into the 1960s (19).
This institutionalization of folk art marked an important turning point.
For more than thirty years, he carved numerous sacred works: crucifixes, Virgins, saints, Stations of the Cross (9).
He notably created an important ensemble for the church of Saint-Viateur d'Outremont, as well as the Stations of the Cross and the pulpit of the church of Saint-Jean-Port-Joli (22)(23).
His works are also found outside Quebec (13).
As early as 1929, he set up a stall in front of his house to sell to tourists (25).
This simple idea helped spark a craze in the 1930s (26)(27).
Saint-Jean-Port-Joli gradually became a place renowned for sculpture and craftsmanship (28).
His initiative allowed many artisans to live from their art (32).
More than 4,000 pieces were produced and distributed (3).
Exhibitions in Quebec City, Montreal, and Toronto from the 1930s onward (33). The Quebec government began acquiring works in the 1940s (34).
The sculptures circulated in various contexts and entered public and private collections (35)(36).
Médard's house and workshop were designated a heritage site in 2017 (32).
In 2023, Médard, André, and Jean-Julien were named official historic figures (1)(33).
Médard had 16 children, several of whom became sculptors (36). The students of the 1940s founded workshops of their own.
A genuine tradition took shape. André-Médard Bourgault still carries on some of the family methods today (37).
The village came to hold a high concentration of sculptors (38)(39).
Over time it became a recognized cultural hub, with institutions, events, and exhibition venues (40)(41)(42).
Médard Bourgault did not create sculpture in Quebec. But he helped shift its balance.
By rooting sculpture in local life, giving folk art a place, and passing on his knowledge directly, he helped structure a practice that went on developing.
His career shows that an art rooted in a local culture can reach a wider audience.
Raphael Maltais Bourgault
Site patrimonial du Domaine-Médard-Bourgault – Répertoire du patrimoine culturel du Québec https://www.patrimoine-culturel.gouv.qc.ca/rpcq/detail.do?methode=consulter&id=211488&type=bien
BOURGAULT, Médard (1897-1967) | Dictionnaire historique de la sculpture québécoise au XXᵉ siècle https://dictionnaire.espaceartactuel.com/fr/artistes/bourgault-medard-1897-1967/
Sculpture d'art populaire – Répertoire du patrimoine culturel du Québec https://www.patrimoine-culturel.gouv.qc.ca/rpcq/detail.do?methode=consulter&id=81&type=imma
Bourgault, Médard – Répertoire du patrimoine culturel du Québec https://www.patrimoine-culturel.gouv.qc.ca/rpcq/detail.do?methode=consulter&id=9563&type=pge
Médard Bourgault | Domaine Médard Bourgault https://medardbourgault.org/medard-bourgault/
Les trois Bérets et la sculpture sur bois – Saint-Jean-Port-Joli https://saintjeanportjoli.com/les-trois-berets-et-la-sculpture-sur-bois/
Médard Bourgault, pionnier de la sculpture sur bois – Journal Le Placoteux https://leplacoteux.com/medard-bourgault-pionnier-de-la-sculpture-sur-bois/
The Bourgault family of Saint-Jean-Port-Joli | shadflyguy https://shadflyguy.com/2019/03/01/the-bourgault-family-of-saint-jean-port-joli/
La sculpture à Saint-Jean-Port-Joli en 14 superbes photos | JDQ https://www.journaldequebec.com/2023/05/07/la-sculpture-a-saint-jean-port-joli-en-14-superbes-photos
L'Attisée | Centenaire de la sculpture sur bois à Saint-Jean-Port-Joli https://www.lattisee.com/actualites/view/6338/centenaire-de-la-sculpture-sur-bois-a-saint-jean-port-joli
André-Médard Bourgault – Wood carving – Le Vivoir https://levivoir.com/en/andre-medard-bourgault?srsltid=AfmBOopLInu4hiiO8GV0YbDHLSJciw6CpSEVrewTzLZ79KTqG9niwlI6
from Patrimoine Médard bourgault
This text offers a reading of Médard Bourgault's work through the lens of its expressiveness. The comparison with Auguste Rodin is meant to illuminate certain aspects of that expressiveness, without claiming any equivalence of career, recognition, or context.
Médard Bourgault (1897-1967) was a self-taught Quebec sculptor from Saint-Jean-Port-Joli, a rural Catholic village on the shore of the St. Lawrence. Born into a modest family of carpenters and sailors, he taught himself wood carving, drawing on the artisanal know-how of his community. As a young man he was encouraged by a local penknife carver (Arthur Fournier), then noticed in 1930 by the anthropologist Marius Barbeau, who bought pieces from him and introduced him to cultural circles.
Thanks to this recognition and to the rise of tourism along the St. Lawrence during the Great Depression, Bourgault began selling his sculptures to passing visitors, even setting up a stall in front of his house. His carved scenes of traditional life quickly won over the public: he received an impressive number of commissions, which pushed him to refine and adapt his style while preserving his independence. With his brothers André and Jean-Julien, also sculptors, he trained apprentices and helped make Saint-Jean-Port-Joli the "wood-carving capital" of Quebec.
Bourgault was deeply rooted in the Catholic Quebec of the twentieth century, at a time when the Church and rural traditions set the rhythm of daily life. His personal faith was intense: very early on, he decided to devote himself to religious art, meeting the Church's needs while expressing his own spirituality. For more than thirty years, his sculptures bore witness to his deep faith and found a place in many churches and chapels across the province.
This double identity, self-taught peasant artist and fervent believer, defined Bourgault's path and the singularity of his work. Deeply rooted in his home ground, he drew inspiration from Quebec country life and Catholic devotion, while aspiring to a universal artistic expression.
Médard Bourgault's favourite themes reflect his milieu and his beliefs. His early works drew on the rural daily life he saw around him: farming families, lumberjacks at work, scenes of field life, ox teams, farm dogs, and so on.
He was also fond of subjects tied to the sea and to sailing, a legacy of his years as a sailor. He depicted, for example, Gaspé fishermen hauling in nets full of fish, or schooner captains in oilskins braving the river wind. One such maritime scene is the bas-relief La pêche (1961), a large pine composition in which three fishermen haul a heavy net aboard their boat beneath wheeling gulls.
In parallel, and increasingly over time, Bourgault turned to the religious subjects dictated by his Catholic faith. He carved many representations of the Virgin Mary, along with scenes from the Bible and the lives of the saints.
Above all, he excelled at Stations of the Cross: these sequences of fourteen bas-reliefs illustrating the Passion of Christ were in great demand from the expanding parishes of the 1940s and 50s. This sacred output (Virgins and Child, crucifixes, statues of saints) occupies a central place in his body of work.
Whether depicting a farmer sowing his field or Christ falling beneath the Cross, Bourgault worked essentially in wood, carving in the round or in high relief. He practised direct carving, with no mould or intermediate model. This artisanal approach gives his pieces a raw, living character.
Despite his label as a "folk artist", Médard Bourgault developed a technique and a style capable of carrying an intense emotional charge. His self-taught status allowed him to carve with sincerity, outside academic conventions.
His works favour the force of attitudes and expressions over anatomical precision. As Rodin himself put it:
"A good sculptor (…) does not represent only the musculature, but also the life that warms it."
Bourgault's spirituality is an essential driving force of his art. His works express a humanity that touches the viewer directly.
In his compositions, Bourgault showed remarkable inventiveness. In his narrative bas-reliefs, he made use of depth, movement, and dramatic tension.
Among the striking examples:
Stations of the Cross: compositions of great emotional intensity, in which the relationships between the figures create a strong sense of drama.
Le fardeau des guerres (1943): a man bent under the weight of symbolic weapons. The work has an expressive force that can, in some respects, be compared with what we find in Rodin.
Marian statues: some pieces have been recognized in international contexts, notably by art historians.
Auguste Rodin was recognized internationally and absorbed into the great institutions of art history.
Médard Bourgault, a rural autodidact, received more limited recognition, often filed under "folk art".
This difference stems largely from cultural structures and artistic hierarchies that favour artists trained in academic settings.
It seems worth reconsidering Bourgault's work in a broader perspective. It reaches well beyond its local context and engages universal themes.
Setting Bourgault alongside figures like Rodin underlines that artistic emotion is not confined to the usual frameworks of recognition.
The comparison proposed here is above all an analysis of the expressiveness of the works, not a claim of historical or institutional equivalence.
Raphael Maltais Bourgault
from Roscoe's Story
In Summary: * A quiet and enjoyable Sunday is winding down as I listen to an MLB game between the Cleveland Guardians and the Atlanta Braves. Through most of the afternoon I followed the final round of this year's Masters Golf Tournament. Congrats to Rory McIlroy, who won this year's Masters.
I may or may not stay with this ball game to the end, depending on when my metabolism starts to shut down. Tomorrow is Monday and I'll want to wake early with my alarms to fix the morning coffee and help the wife get ready to leave for work. I'll work through the night prayers while listening to the game, and head to bed shortly after.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 229.61 lbs. * bp= 140/84 (68)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 07:00 – 1 peanut butter sandwich, 1 banana, 1 HEB Bakery cookie * 08:55 – crispy oatmeal cookies * 12:20 – crackers and cheese * 15:20 – shrimp, meat, and vegetable soup
Activities, Chores, etc.: * 05:00 – listen to local news talk radio * 06:00 – bank accounts activity monitored. * 07:00 – read, write, pray, follow news reports from various sources, surf the socials, nap. * 11:00 – watch 2 special golf history shows ahead of this afternoon's coverage of the 2026 Masters Golf Tournament * 13:00 – watching coverage of the final round of this year's Masters – and once again, Rory McIlroy wins the Masters * 18:00 – listening to the Cleveland Guardians pregame show ahead of tonight's MLB game featuring the Guardians playing the Atlanta Braves.
Chess: * 17:00 – moved in all pending CC games
from Patrimoine Médard bourgault


Two years ago, I spent several days in André's workshop, at Le Vivoir, in Saint-Jean-Port-Joli.
I had a camera. He had his gouges.
What I filmed is a complete process: a rough linden trunk becoming, stroke by stroke, a woman's face. Roughly eight hours of work, filmed in full. From the first pencil line to the last pass of the chisel.

André Médard Bourgault is 85. He is the son of Médard Bourgault. He has carved since childhood. He carves still.

Throughout those hours, he works and he talks. He names each tool as he picks it up. He explains why this chisel rather than another, how to read the grain of the wood, where to strike and where to stop. He shows how he learned: the gestures passed down by his father, and the ones he developed himself over the decades.
This is not a course. It is a transmission.
What is captured here cannot be reconstructed. It is knowledge in action, carried by someone who received it directly and still practises it.

I have not yet decided how to make this material accessible: the form, the timing, the manner. It is a project still taking shape.
But for now, I am sharing an excerpt. Ten minutes from the beginning of the process.
The rest exists. And that is irreplaceable.
Raphaël Maltais Bourgault


Understanding the Domaine Médard Bourgault
These pages offer a way to discover the estate, its history, and the issues it faces today, through archives, analyses, and first-hand accounts.
Archives and memory of the place → Domaine Médard Bourgault: sound archives and testimonies of André Médard Bourgault. Recordings made on the estate, retracing the life, the gestures, and the memory of the place.
Analyses and the current situation → Domaine Médard Bourgault: analyses and current issues. Reflections and updates on matters in progress.
Knowledge and transmission → André Médard Bourgault: complete wood-carving masterclass → Médard Bourgault: artistic education, principles, beauty, and transmission. Understanding the practice, the transmission, and the artistic vision of Médard Bourgault.
Story and historical context → Médard Bourgault: a tale at sea inspired by his journal (1913-1918). A narrative based on his writings that sheds light on a little-known period of his life.
A live question for the estate → Domaine Médard Bourgault: should the garden become public access to the river? A concrete question about the future and use of the place.
from Nerd for Hire
I shifted some poetry chapbooks to the top of my TBR stack in honor of National Poetry Month, and I've been enjoying the change in pace. I always try to read a mix of novels and short story collections, but my usual reading is definitely very fiction heavy, and it's fairly rare for any nonfiction or poetry to slip into the mix. This is, in part, because I'm often not just reading for enjoyment. That's part of why I read, but I also see every book as an opportunity to learn—to see what kinds of stories other people are telling, or to pick up tricks of the trade, or get ideas for how to do things better in my own stories.
What I need to remember, though, is that fiction writers can also learn a lot from reading outside their genre. I've been aiming to keep the same craft-focused mindset when I'm reading poetry chapbooks, and I think I've picked up some useful tidbits. So, of course, I figured I'd come share them with yinz.
Epic poems exist, but the majority of poems are just a page or two long. From a word-count perspective, they tend to stay comfortably in the flash fiction range, or even down in the micro- and nano-range. If you write at those lengths, or if you perpetually struggle to write flash because you can't seem to make a story stay short enough, then you can't find a better model for maximizing limited real estate than a well-written poem.
Poets do two things especially well that allow them to build characters, scenes, and big emotions without a lot of words. The first is that they're exacting in the words they do use. As a rule, poets are much more likely than the average fiction writer to search out the single specific, perfect word to convey their meaning (although, unsurprisingly, flash and micro writers tend to be experts in this area as well). Speculative writers in particular can benefit from honing this skill, because it can do more than limit the length of your descriptions. It can also prevent the need for info dumps to fill in world details, since the language of the story itself can make the reader feel immersed in your story's reality.
The second big thing poets do to keep things short: they understand subtext and implication, and trust their readers to figure things out without needing their hand held. This is another area where I struggle sometimes, and I think speculative writers especially are often prone to over-explaining. It can be tricky to strike the right balance, where you give readers enough information to fully picture the world you created without overwhelming them and bogging the story down with unnecessary details. This doesn't just happen with worldbuilding details, either. Themes and character backstories are also prone to this kind of over-explaining, and it can make readers feel hammered over the head in addition to adding unnecessary words that slow the pace. It's counter-intuitive, but readers actually feel more immersed in and connected to what they're reading when you give their imagination some space to play.
Poets think about words in a different way than most fiction writers. One way that manifests is that they're usually way more tuned in to the more musical aspects of language, like the rhythms created by the arrangement of stressed and unstressed syllables, and the punctuation and line breaks used to separate them.
I tend to think about rhythm on a more macro-level, but there are definitely times that it can benefit a fiction writer to pay attention to the line-by-line rhythm. When you do, you can use the language to make the reader linger over a key image or moment, or give them a rushed, breathless feel that pushes them forward through fast-paced action sequences.
Poets do have different tools at their disposal, line breaks being the big one. But fiction writers can make use of different sentence lengths and paragraph breaks to achieve similar effects. In a poem, a series of short lines creates a staccato feel, or a single word or phrase can be set on its own line to highlight it. The prose equivalent would be using very short, simple sentences, or using occasional one-sentence paragraphs that stand out from the longer stretches of text around them.
When a poem has consistent line lengths and stresses, that creates a steady rhythm that the reader settles into, to the point it's jarring when it's broken. Fiction writers can mimic this. For instance, let's say you want to set the scene of a normally peaceful suburban home that's just been the setting of a tragedy. You could describe the typical parts of the house using similar sentence lengths and structures, then break that rhythm for details related to the tragedy, mirroring the way that event broke the sameness of daily life in the house.
I'm weirdly enamored with poetic forms like the villanelle, pantoum, or sestina that use repeated words or lines as touchstones. When this is done well, it can create a feel of dwelling on or obsessing over a concept, or convey the sense of a narrator who feels stuck or trapped. This isn't the only way that repetition gets employed in poetry, of course, and it doesn't have to mean direct repetition of words or lines. A recurring image can serve the same function, especially when that image evolves over the course of the poem to reflect changes in the speaker.
This is a concept that fiction writers can steal wholesale from poets. And many already do. The first one that pops to my mind is always Chuck Palahniuk, whose books frequently have a refrain that runs through them. In Fight Club, for instance, there are the repeated asides starting with "I am Jack's": I am Jack's medulla oblongata, I am Jack's complete lack of surprise, and so on. It becomes a kind of chorus commenting on the narrator's mental state. Another example is Slaughterhouse-Five, where Kurt Vonnegut repeats "so it goes" over a hundred times, a kind of fatalistic mantra that punctuates key moments.
This is one of those approaches you don't want to go overboard with, because too much repetition can make a story tedious to read. But selective repetition can be very useful for fiction writers. It functions as an anchor and flag for the reader, helping them to make the right connections between scenes, characters, and themes.
One of the cool things about poetry is that the experience of reading it on the page can be very different from hearing it read aloud. Some poems are intended for spoken performance more than silent reading. Obviously this varies poet by poet, but as a rule this is another area of language that poets think about a lot and fiction writers usually neglect.
I'm not necessarily thinking about things like rhyme or alliteration when I say this, although those are certainly tools that fiction writers are allowed to play with, too. More, it's about understanding how the sounds of words flow together or don't. And the best way to get a sense for that is to do what poets do and read your work aloud. Any places where you stumble or have to slow down, a reader will likely do the same thing, even if they're just reading in their head. There are times you might want to create that effect intentionally, but it's not something you want happening by accident.
Speculative fiction writers in particular often need to think about how words sound, specifically when naming characters, places, and objects distinctive to your world. One of my pet peeves when I'm reading sci-fi or fantasy stories is when the author signals something is alien or supernatural by overloading its name with uncommon letters like X or Z without thinking about how that name looks or sounds to the reader, or whether that look/sound matches how that thing should come across.
When you're using an invented word, the reader relies on sound as well as context to understand its meaning, and you want to use this to your advantage. In Lord of the Rings, for instance, the elves have flowy-sounding names like Galadriel and Legolas, while the dwarves' names are more blunt (Gimli, Bifur, Thorin) and the Orcs' names use harsher sounds (Azog, Gothmog, Ugluk). How a word sounds gives the reader clues that frame their expectations. Granted, you can always defy that expectation if you want to, but that should still be an intentional choice.
I'm going to make a conscious effort to work more poetry chapbooks into my reading list even after April's over. I've been reading a lot of hefty sci-fi and fantasy books lately, so slipping a quick little chapbook in between could be a nice palate cleanser and a tap of the reset button. That's what's nice about chapbooks in general, too—they don't take too long to read, so you can give one a try without needing to invest a ton of time in the experiment. And, if you do find a poem or two that speak to you, you can take a bit more time, let yourself linger over them, and dig into what the piece is doing that caught your attention.
I'll also say you don't have to read an entire book from one author. There are loads of free literary journals across the internet publishing spectacular poetry across genres, including an increasing number of sci-fi and fantasy poetry publishers like Star*Line and Dreams & Nightmares. These can be an easy way to start if you're a fiction writer looking to learn and get fresh inspiration from poetry.
See similar posts:
#WritingAdvice #Poetry
from junia
Unitarian Universalism teaches of the interdependent web. That every action reverberates widely to every other person, that no action is isolated either in cause or effect. In other words, responsibility is distributed, and there are no bystanders.
If I am caught in this web, how responsible can I be for my anorexia? I have felt that I am completely responsible. I chose to go along with it.
This teaching challenges me to reconsider that feeling. What was everyone else doing? How did society fail to protect me? How did it encourage me? How did my family contribute? What strings attached to me pulled me to Ana? I walked some of the way, but I was pulled too.
I do not feel I can care about being pulled, because I cannot control that. If responsibility is distributed then it is not mine, and if most of my life is me being pulled then my primary response is to feel and respond to those feelings. That strikes me as useless, because I become a responder and not an agent. The interdependent web is the rejection of my agency as articulated through atomistic models. But the trauma-informed—the factual—account is that my body is not a primary agent, and that it acts at a magnitude that dwarfs my ego. My ego seeks safety through agency. I’ve seen how that safety plays out.
The weird thing is that my ego-safety is not the important safety. It matters, but not as much as bodily-felt safety. And, unfortunately, I can’t independently act to secure my way to body-safety. I have to rely on others. I am vulnerable. That’s a fact that my body feels, no matter what my ego wants.
Maybe it’s self-confirming, but the interdependent web seems like another mark for pessimism. I need safety, and I cannot secure it on my own. I am vulnerable to the actions of others, no matter what I do, same as everyone else. We need things we cannot guarantee. And we’re an ego stapled to an animal body, where most of the happenings occur in the body and the ego constantly struggles to find its place. The reality of being a human is bleak.
But pessimism is the truth that sets us free from the idolatry of the future, and it does so again here. There is no future where I can be invulnerable. Ana is an optimist: She says there can be a secure future through metering intake and narrowing the scope of the world to control of my body. No, that’s a lie. Ana can’t provide me safety. I am interdependent with every other soul. I am now, and always will be, vulnerable, and nothing I do can change that. I can only respond to it.