It's National Poetry Month! Submit your poetry and we'll publish it here on Read Write.as.
I like to watch sports every now and then, but I don’t watch sports news. Then again, I don’t read much news. Anyway, I’ve been interested in the Dianna Russini and Mike Vrabel drama for the past few days. Not because I like drama, but because of how a private investigator played a key role in it.
I’ll spare you the details of the fiasco. You can look them up here on The Shadow League link. The couple of pictures I’ll talk about next come from the TMZ Sports Page link. The photos were taken at an Arizona resort.
The first picture you see is Russini and Vrabel standing in front of each other, on top of a wooden patio, with their hands forward and fingers interlocked. Notice how the photo is a little grainy, but not so much that you can’t tell who they are by their faces. That usually means the PI was at a distance where the camera’s optical zoom was at its limit, just before picture quality fades.
The second photo you see is Russini and Vrabel (wearing a bathing suit and swim trunks, respectively) lying by the pool. Notice the picture quality is better than in the first. The PI must have been pretty close to them. Either the PI was next to the pool or still outside the resort where anyone can see in.
Keep in mind, Russini and Vrabel both have spouses. And while there’s no kissing or sexual activity, it still doesn’t look good for the two. And it’s more than likely that Russini’s husband, Shake Shack senior manager Kevin Goldschmidt, hired the private investigator.
As a former private investigator with thirteen years of surveillance work (mostly workers comp), I’m still amazed at the quality of the photos and the work done by the PI in charge of the infidelity case. Those two still shots were more than likely taken from whatever video the PI recorded. Video evidence is often more powerful than photos when it comes to infidelity and workers comp cases. If a photo is worth a thousand words, a video is worth at least three times that, if not more.
I guess there are two lessons in all of this: 1) there will never be a shortage of cheaters, which means more PI work, and 2) in the long run, cheaters never prosper. Don’t be like Russini and Vrabel. You never know who’s watching.
#cheating #drama #fiasco #infidelity #photo #pi #privateinvestigator #Russini #sports #video #Vrabel
from
Dear Anxious Teacher
During my first year in teacher college, I read a stat that said between 30-50% of teachers leave the field within the first 5 years. Don’t leave the field. Give the job at least 3-5 years. My first year was terrible. I wanted to quit almost weekly, and I would spend upwards of 6-8 hours on Sundays grading and creating lesson plans. The “Sunday Scaries” were always filled with dread. It really made me question the profession. Good news! It gets easier in time—way easier. It really is like learning how to ride a bike; once you learn, you’ll really enjoy the profession. I would say 90% of the time I have a smile on my face and really look forward to work. We all have bad days. We’re human. Every job is like this. So please don’t judge the profession immediately. There is so much to learn when starting out, and it truly feels like being tossed into the frying pan, as they say. Here are tips to survive the first year.
Get a good mentor. You’ll need someone to bounce questions off of and somebody you can trust. Don’t go to everyone. Be selective because teachers do like to talk and faculty rooms can be the wrong place to hang out. Unfortunately, every school has someone who will try and kill your vibe.
Don’t reinvent the wheel (Get lesson plans and materials from other teachers or websites). Take advantage of the web and don’t think you’re being a bad teacher. You’re in survival mode the first year. Every little bit helps!
Aim to create one really good lesson each week. Don’t strive for 5 perfect lessons. You will really burn out. Have fun creating that one lesson that will really shine.
Laugh at your mistakes. You will make plenty. I still do.
Toss out “crap” lessons and worksheets. Don’t grade everything. I will occasionally toss out a packet of paperwork (filler worksheets, or assignments that took me too long to get to) that has been sitting on my desk for a few weeks.
Use multiple choice assessments to keep yourself from falling behind. If you feel caught up, give out something that is more time consuming to grade.
Stay as calm as possible. Fake it until you make it. Faking your confidence is sometimes necessary. Students, for the most part, will think you know all the answers.
Stay away from burned-out coworkers and negativity.
Give less homework (homework 4-5 days a week may be too much for you and your students). Start off with 1-2 assignments per week. Make sure you feel comfortable with this and it’s okay by your district. Classwork that is not finished becomes homework in my classroom.
Get to work early and stay late to prepare for the following day. This will take the stress out of your commute.
Don’t grade everything. Aim for 2-3 things per week if you can. I grade participation, homework, and classwork. Sometimes I grade more or less.
Work-life balance will probably swing more towards the work part of your life. Your weekends should be spent doing something fun and completely unrelated to teaching. Pick 1 day on the weekend to plan and prepare. I like Sunday morning really early. Friday night—please don’t work. Enjoy your Saturdays too!
Put more work on your students. They should be working harder than you. Give a 2-day assignment or a computer assignment. Use educational websites with auto-grading features that will allow you to catch up with the admin side of the job.
Designate Fridays as a quiz or test day. These assessments can be short too. This will give you a chance to catch up and keep your grading organized.
Plan your lessons with the end goal in mind. What is the big picture? What do you want them to be able to do by the end of the quarter? Is it a project or presentation? Work backwards from there.
Have a snack and water at your desk. Please eat lunch; skipping it can leave you lightheaded and more agitated when dealing with teacher stress.
Drink coffee or tea for a little energy. I love my coffee, but I understand that it’s not always the best for anxiety. For me, it puts me in a good mood.
During your lunch period, get outside and take a break from teaching. Spend time with a funny coworker or sit in your car. This can be hard to do when you have a lot of work to do. Make a point to give your mind a break from teaching stuff!
Develop a faster grading system. See my article on grading faster.
Read 1 positive quote for the day that is motivational and related to what you’re going through.
from
PlantLab.ai | Blog

You adjusted your cal-mag for two weeks. The yellowing got worse. Then you saw the webbing.
That's how most growers discover spider mites – not when the problem starts, but when it's already out of control. The early damage looks so much like a nutrient deficiency that your first instinct is to adjust the feed. Meanwhile, a single female mite is producing thousands of descendants in a month.
Spider mites are the most destructive pest in indoor cannabis cultivation. Not because they're hard to kill – they aren't, when caught early – but because their early symptoms mimic nutrient problems so convincingly that growers lose their detection window treating the wrong thing entirely.
This guide covers visual identification at every stage, how to tell mite damage from a deficiency, and what actually works for treatment.
Spider mites on cannabis produce tiny yellow or white speckles (stippling) on upper leaf surfaces where mites feed from below. Unlike nutrient deficiencies – which cause broad, uniform color changes across leaves – stippling appears as distinct pinprick dots scattered irregularly across the leaf. The damage is caused by Tetranychus urticae (two-spotted spider mite), an arachnid that punctures individual plant cells and drains their contents. By the time webbing is visible, the colony has been feeding for weeks.
Quick checklist:

- Tiny yellow/white pinprick dots on upper leaf surface
- Dots are irregular and scattered, not following veins
- Leaf undersides show tiny moving specks (mites are 0.3-0.5mm)
- Fine webbing between leaf tips or at branch junctions (advanced)
- Damage starts on lower/inner canopy where airflow is poorest
- Leaves eventually bronze, curl, and drop
The single most common spider mite mistake has nothing to do with treatment. It happens at identification.
Early stippling – those tiny yellow dots where mites have punctured cells – looks like the beginning of a calcium deficiency or light stress. The dots are small, scattered, and appear on older growth first. A grower sees yellowing dots on lower leaves and reaches for the cal-mag bottle. Two weeks of feed adjustments later, the dots have spread, the plant looks worse, and then the webbing appears.
This is not a knowledge failure. It's a pattern recognition problem. The visual difference between early mite stippling and early nutrient deficiency is subtle enough that experienced growers miss it regularly.

| Feature | Spider Mite Stippling | Calcium Deficiency | Magnesium Deficiency |
|---|---|---|---|
| Pattern | Irregular pinprick dots | Irregular brown spots | Interveinal yellowing |
| Distribution | Scattered randomly across leaf | Concentrated on newer growth | Starts on older leaves |
| Symmetry | Asymmetric, random | Roughly symmetric | Symmetric between veins |
| Leaf underside | Tiny mites or eggs visible | Clean | Clean |
| Texture | Leaf feels slightly rough/gritty | Spots may feel crispy | Leaf stays smooth |
| Progression | Dots multiply, never merge into bands | Spots expand and merge | Yellowing expands between veins |
| Touch test | Gritty feel from mite debris | Normal | Normal |
The diagnostic key: flip the leaf over. Nutrient deficiencies don't leave anything on the underside. Spider mites leave everything there – adults, eggs, shed skins, webbing. A 10x loupe makes this definitive, but even a phone camera zoomed in on the leaf underside will show the difference.
Spider mites reproduce faster than almost any pest a cannabis grower will encounter.
This is exponential growth in the literal sense. The population you can't see on Monday is visible by Friday, with webbing by the following Monday. The detection window – the gap between “early enough to treat easily” and “too late for simple solutions” – is approximately 5-7 days.
Every day of misdiagnosis as a nutrient issue is a day lost in that window.
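To make that window concrete, here is a minimal back-of-the-envelope sketch in Python. The 7-day generation cycle comes from this guide; the figure of roughly 20 surviving daughters per female per generation is an assumed, illustrative rate, not field data.

```python
# Back-of-the-envelope mite population model (illustrative only).
# Assumptions: one founding female, a ~7-day generation cycle (from
# the guide), and an assumed ~20 surviving daughters per female per
# generation. Real rates vary with temperature and humidity.

GENERATION_DAYS = 7
DAUGHTERS_PER_FEMALE = 20  # assumption, not field data

def females_at_day(day: int, founders: int = 1) -> int:
    """Estimated female population after `day` days of uncontested growth."""
    generations = day // GENERATION_DAYS
    return founders * DAUGHTERS_PER_FEMALE ** generations

for day in (7, 14, 21, 28):
    print(f"day {day:2d}: ~{females_at_day(day):,} females")

# day  7: ~20       still invisible without a loupe
# day 14: ~400      first stippling becomes noticeable
# day 21: ~8,000    webbing
# day 28: ~160,000  crisis
```

Whatever the exact rate, the shape is the same: each week of misdiagnosis multiplies the population, which is why the 5-7 day window matters so much.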

Mites have arrived but the colony is small. Fewer than 10 adults on the plant. No visible damage to the naked eye.
What to look for: Nothing you can see without magnification. Preventive inspection with a 10x loupe on leaf undersides is the only detection method during this phase – or an AI that can catch the earliest stippling pattern in a leaf photo before your eye does.
What you see:

- Scattered yellow-white dots on upper leaf surfaces
- Dots are pinprick-sized, irregular spacing
- Lower and inner canopy leaves affected first
- Leaves may appear slightly dull or dusty
This is the critical detection window. The damage is visible but the population is still manageable. Treat now and you win. Wait, and you're chasing exponential growth.
What growers confuse it with: Calcium deficiency, magnesium deficiency, early light stress, pH fluctuation damage. The distinguishing test: check the leaf underside with a loupe or zoomed phone camera.
What you see:

- Stippling thickens into visible patches of yellow/bronze discoloration
- Fine webbing appears at leaf tips and where leaves meet stems
- Leaf edges may curl upward
- Multiple plants now show symptoms (airborne spread via “ballooning” on silk threads)
Webbing marks the transition from “problem” to “crisis.” The silk isn't just housing – it protects colonies from predators and spray treatments. Once webs are established, contact sprays have to penetrate the silk to reach the mites.
What you see:

- Dense webbing covering bud sites, connecting leaves
- Leaves are bronzed, curled, and dropping
- Mites visible as tiny moving dots on webbing
- Plant growth has visibly slowed or stopped
- Webbing on flowers makes bud unusable
At this stage, the plant is losing more photosynthetic capacity than it can replace. During flower, this level of infestation is often a total crop loss for affected plants. The mites are feeding on sugar leaves and bract tissue, leaving webbing embedded in the flower structure. Even if you kill every mite, the webbing and fecal matter remain.
Spider mites prefer warm, dry, still air – the conditions that exist in the center and lower canopy of most indoor grows.
Check first:

- Undersides of lower and inner canopy leaves
- Where two leaves overlap (creates a still-air microclimate)
- Near intake vents (common entry point)
- Any plant closest to heat sources

Check second:

- Leaf undersides on middle canopy
- Branch junctions where stems create sheltered pockets
- Nearby houseplants, clones, or recently introduced plant material

High-risk conditions:

- Temperature above 27°C (80°F) and rising
- Humidity below 40% RH
- Stagnant air in lower canopy
- New clones or plants introduced without quarantine
- Adjacent rooms or gardens with ornamental plants
One fact most growers don't realize: spider mites travel on clothing, pets, and skin. If you've been in a garden with mites and walk into your grow room, you may be the vector. This is why quarantine protocols matter even for indoor-only grows.
This matters more than you'd think. Spider mites aren't insects. They're arachnids – closer to ticks and spiders than to aphids or thrips. A lot of insecticides just don't work on them, and growers figure this out the expensive way: they buy whatever pest spray the grow shop recommends, apply it twice a week for a month, and the mites keep spreading.
If a product label says “insecticide” but doesn't specifically list mites or arachnids, it probably won't work. You need a miticide (specifically targets mites) or a broad-spectrum acaricide (targets arachnids generally). Some biologicals and organic options work by physical mechanisms – suffocation, desiccation – that don't depend on the pest's taxonomy. These are often the safest first-line choice.
Spider mites develop pesticide resistance at a rate that makes most agricultural pests look slow. With a 7-day generation cycle, resistance emerges in weeks, not seasons. Some strains of T. urticae are resistant to dozens of active ingredients simultaneously.
Worse: some pesticides cause “mite flaring” – the surviving mites respond to the chemical stress by increasing their reproductive rate by up to 30%. The intuitive response of “spray harder, spray more” can accelerate the infestation rather than control it.
Single-product treatment strategies fail. Always rotate between different modes of action.
Immediate response (first 48 hours):

1. Isolate affected plants if possible
2. Remove and dispose of heavily infested leaves (bag them, don't compost)
3. Spray leaf undersides thoroughly with a contact miticide or biological

Biological controls:

- Phytoseiulus persimilis – predatory mite that feeds exclusively on spider mites. Effective in vegetative growth and early flower. Needs humidity above 60% to thrive.
- Neoseiulus californicus – predatory mite that tolerates lower humidity and also eats thrips. Better for dry grow rooms.
- Amblyseius andersoni – generalist predatory mite, survives without prey by eating pollen. Good for preventive releases.

Organic sprays (moderate infestations):

- Neem oil (azadirachtin) – disrupts feeding and reproduction. Apply to leaf undersides only. Do not use in flower – affects taste and may not fully degrade.
- Insecticidal soap (potassium salts of fatty acids) – kills on contact by desiccation. Must directly contact the mite. Repeat every 3-5 days for 3 applications to catch new hatchlings.
- Spinosad – organic-approved, effective on thrips but weak against mites on its own. Can supplement a rotation but shouldn't be a primary miticide.

Spray rotation protocol (a quick sanity check follows below):

- Week 1: Product A (e.g., insecticidal soap)
- Week 2: Product B (e.g., neem oil)
- Week 3: Product A again (or a different miticide)
- Never use the same active ingredient twice in a row
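As a sketch of that rotation rule, the snippet below checks a planned schedule for consecutive applications sharing a mode of action. The product names and mode-of-action labels are illustrative placeholders, not recommendations.

```python
# Sanity check for a spray rotation plan: never the same mode of
# action two weeks in a row. Entries are (product, mode_of_action);
# both columns here are illustrative placeholders.

schedule = [
    ("insecticidal soap", "contact desiccation"),
    ("neem oil",          "feeding/reproduction disruption"),
    ("insecticidal soap", "contact desiccation"),
]

def check_rotation(schedule) -> bool:
    """Return False (and explain) if consecutive weeks share a mode of action."""
    for week, ((name_a, moa_a), (name_b, moa_b)) in enumerate(
            zip(schedule, schedule[1:]), start=1):
        if moa_a == moa_b:
            print(f"weeks {week}-{week + 1}: {name_a} then {name_b} "
                  f"repeat mode of action '{moa_a}' - rotate!")
            return False
    return True

print("rotation OK" if check_rotation(schedule) else "fix the plan")
```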
This is where most growers panic, and for good reason. During flower, almost everything that kills mites also ruins buds.
Safe in flower:

- Predatory mites (biological control – no residue, no taste impact)
- Water rinse with slightly elevated pressure (dislodges mites physically, must reach undersides)
- Cold snap trick: drop temperature to 15°C (60°F) for 3 days if possible. Mite reproduction nearly stops below 18°C (65°F). This buys time for predatory mites to work.

Avoid in flower:

- Neem oil (taste contamination, doesn't fully degrade on flower tissue)
- Pyrethrin sprays (residue on buds)
- Sulfur (burns trichomes, affects terpenes)
- Any systemic product (absorbed into plant tissue including flower)
If webbing is on buds: The honest answer is that those buds are compromised. Webbing contains fecal matter and shed mite skins that don't wash off. You can salvage the plant by removing affected flowers and protecting remaining buds with predatory mites, but heavily webbed buds should be discarded.
A few euros spent preventing mites saves hundreds in lost crop. Prevention beats treatment every time, especially with a pest that breeds this fast.
Environmental controls:

- Keep humidity above 50% RH during veg (mites thrive in dry conditions)
- Ensure airflow reaches the lower canopy (oscillating fans, open plant structure)
- Run temperatures below 27°C (80°F) when possible
- HEPA filter on intake if growing in an area with outdoor mite pressure

Good habits:

- Quarantine new plants for 7-14 days before introducing them to your grow
- Change clothes before entering the grow room if you've been in other gardens
- Inspect leaf undersides weekly with a 10x loupe – make it routine, not reactive
- Remove dead leaves and debris from the grow space (harboring sites)
- Avoid an overly dense canopy – defoliate lower growth that gets no light and creates still-air pockets

Preemptive predators:

- Release Amblyseius andersoni or N. californicus at transplant. These predatory mites establish a background population that intercepts spider mites before colonies form. Cost: roughly €20-30 per release for a small grow, every 4-6 weeks.
The spider mite problem is a timing issue. The window between “just arrived” and “exponential growth” is about 5-7 days. Most growers catch mites after stippling is already obvious – right at the edge of that window, or past it.
The main reason growers miss that window isn't inattention. Early stippling – those first scattered yellow dots where mites have punctured cells – looks almost identical to the start of a calcium or magnesium deficiency. Same distribution, same size, same location on older growth. A grower sees the dots, checks pH, adjusts the feed, and waits a week for results. By the time the nutrient hypothesis is ruled out and a loupe comes out, mites have had 7-10 days of uncontested growth. At one generation per week, that adds up.
PlantLab's model covers 31 cannabis conditions including spider mite damage. It catches the stippling pattern at the 10-dot stage, from a routine photo. Not a replacement for the loupe – nothing is – but it flags the pattern before you've mentally filed it as “probably cal-mag” and moved on.
Catching mites at day 7 instead of day 14 is the difference between wiping down some leaves and losing a crop.
Free at plantlab.ai – 3 checks a day.
How do I tell spider mite damage from a nutrient deficiency? Flip the leaf. Spider mite damage shows as scattered pinprick dots on top with mites, eggs, or webbing underneath. Nutrient deficiencies cause broader color changes with clean leaf undersides. A 10x loupe on the underside is the definitive test.
Can I see spider mites without a magnifying glass? Adults are barely visible to the naked eye (0.3-0.5mm) as tiny moving specks on leaf undersides. Eggs and juveniles are too small to see without magnification. By the time mites are easily visible, the colony is large. Use a loupe or phone camera zoom for early detection.
How fast do spider mites spread between plants? In optimal conditions (above 27°C / 80°F, below 40% RH), mites can move from one plant to adjacent plants within 24-48 hours. They also “balloon” on silk threads carried by air currents, reaching plants across a room. A single infested plant can become a room-wide problem in 5-10 days.
Will neem oil get rid of spider mites? Neem works as part of a rotation, not as a standalone. It disrupts feeding and reproduction but doesn't kill on contact, and mites build resistance to it quickly. Rotate with insecticidal soap and other modes of action. And never use it during flower – it doesn't come off.
What kills spider mites instantly? Insecticidal soap and pyrethrin kill on contact, but only what they touch. You'll miss eggs. Plan for 3 rounds over 2 weeks to catch hatching cycles.
from
Zéro Janvier
The Summer Tree is a novel published in English in 1984. It is the first volume of The Fionavar Tapestry, a fantasy trilogy by the Canadian author Guy Gavriel Kay.

It all began with a lecture that introduced five university students to a man who would change their lives, a wizard who could take them from Earth to the heart of the first of all worlds, Fionavar. And take them Loren Silvercloak did, for his need—the need of Fionavar and all the worlds—was great indeed.
And in a marvelous land of men and dwarves, of wizards and god, and of the Unraveller and his minions of Darkness, Kimberly, Dave, Jennifer, Kevin, and Paul discovered who they were truly meant to be. For the five were a long-awaited part of the pattern known as the Fionavar Tapestry, and only if they accepted their destiny would the armies of the Light stand any chance of surviving when the Unraveller unleashed his wrath upon the world.
This novel dates from the 1980s and is classic fantasy, clearly inspired by Tolkien, which is hardly surprising given that Guy Gavriel Kay had previously been Christopher Tolkien's assistant on the editing of The Silmarillion. We thus find certain elements that seem to come straight out of Middle-earth.
One can also think of Narnia, with a story that begins in our world and continues with a journey to an imaginary world, except that instead of British children we have students from the University of Toronto.
Reading the novel's summary, and even during the first pages, one might fear clichés: the typical tale of chosen protagonists destined by prophecy to save the world. Moreover, as the first volume of a trilogy, the text contains a lot of exposition, not always delivered subtly.
And yet it worked surprisingly well for me. I was swept away by the story and the world that Guy Gavriel Kay offers. Perhaps it is thanks to the author's style, perhaps to the classic yet enchanting world, or perhaps to certain characters who stand out from the crowd or turn out to be deeper than they first appear.
This first volume is very promising, and if the next two are as successful as this one, the trilogy could well be one of the rare works inspired by The Lord of the Rings that need not blush at the comparison.
from
ThruxBets
I think Tony Carroll could have a decent day today, but for the blog, just one selection for me …
5.20 Bath Jack Morland’s Hunky Dory has an obvious big chance and should be close, but I’m going to have a go at MR LIGHTSIDE here, who looks the classiest horse in the field. Spent the summer of 2024 contesting black type races, finishing 3rd in the Molecomb and then putting in decent efforts at York and Donny. Struggled in class 2 handicaps as a 3yo off 3-figure marks and then had a winter AW campaign that was never sure to suit (8/0/1p on artificial surfaces). Back to turf today from a mark of 77, 22lbs lower than when running in a class 2 handicap at Ascot 10 months ago. Mick Appleby has had a decent start to the season and this one should have a lively each way chance.
MR LIGHTSIDE // 0.5pt E/W @ 9/1 4 places (Paddy) BOG
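For anyone new to each-way staking, here is a rough sketch of what that bet returns in points. It assumes the standard 1/5 place fraction for a 4-place race; actual terms depend on the bookmaker, and BOG can only improve the win price.

```python
# Rough each-way arithmetic for 0.5pt E/W at 9/1 with 4 places.
# Assumes a 1/5 odds place fraction (common, but bookmaker-dependent).

def each_way_profit(stake_per_part: float, odds: float,
                    place_fraction: float = 1 / 5) -> dict:
    """Profit in points for each outcome of an each-way bet."""
    win_part = stake_per_part * odds                     # win half, full odds
    place_part = stake_per_part * odds * place_fraction  # place half
    return {
        "wins": win_part + place_part,               # both halves pay out
        "places only": place_part - stake_per_part,  # win half is lost
        "unplaced": -2 * stake_per_part,             # whole stake lost
    }

print(each_way_profit(0.5, 9))
# {'wins': 5.4, 'places only': 0.4, 'unplaced': -1.0}
```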
from 下川友
I still haven't put away the little Christmas tree on the TV stand at my place.
Hmm. When you're busy, that kind of thing gets put off, right?
It's not really about being busy, I think.
I still haven't put away the little Christmas tree on the TV stand at my place.
Hmm. Well, you can just put it away whenever you feel like it.
It's not really a question of wanting or not wanting to put it away.
I still haven't put away the little Christmas tree on the TV stand at my place.
Hmm. Is there some reason? Or maybe you just like the feel of it sitting there?
No, if there were a reason I would have said it up front. If that were it, it would feel like there was no point in my saying this at all.
I still haven't put away the little Christmas tree on the TV stand at my place.
There are moments when that kind of thing suddenly gets to you, right? I caught a little of how you're feeling just now.
It's not that I want to be understood; I want the conversation to move forward.
I still haven't put away the little Christmas tree on the TV stand at my place.
So it's still out. Then what do you plan to do with the tree?
Nothing. That's what I'm saying: if I had already decided what I intended, I wouldn't be bringing it up.
I still haven't put away the little Christmas tree on the TV stand at my place.
So it's still out. Then what kind of presence has the tree taken on in the scenery of the room right now?
Oh, shut up. The very fact that I brought this up at all is what's strange, so it has to be something other than that.
I still haven't put away the little Christmas tree on the TV stand at my place.
Ah, so it's still out. Somehow, that state of affairs is kind of interesting.
No good at all. Can't you think about it with more sensitivity than I have? I talked to someone because I couldn't get any further on my own, right?
I still haven't put away the little Christmas tree on the TV stand at my place.
So it's still out. With it left there, how does the air of the room change?
...Ah. The air isn't bad. Somehow it might feel nice, like a remnant of winter.
Does it get cooler?
No, it doesn't get cooler. It's more that the Christmas tree holds the memory of winter, of that time when people start their year-end holidays, that feeling of everyone gradually slipping off on break. Maybe that's what I like.
Is that what you wanted to say?
Hmm... well... But it's not bad.
from
Talk to Fa
I enjoy talking about myself, but I rarely get to. Definitely not as much as I make others open up about themselves. Not many have the depth or the ability to converse with me in a way that makes me want to trust and open up. Nor do many know how to flow with the rhythm of conversation. This is because they lack listening skills, but, at a deeper level, it actually stems from a lack of self-awareness and authenticity. That’s why I’d rather just listen to them talk, even if it bores me. Or just leave. I’ll open up only when it’s natural and when I’m asked questions in the right context, with curiosity and sincerity. I used to think I was closed off for this reason, but back then, I didn’t know why I was the way I was. Now I feel unapologetic about it because I am more in touch with myself.
from
Micropoemas
How little is necessary - trimming away the foolishness: a two-leaf clover.
from
Micropoemas
There is time to say it. If it has already passed, if it has not yet arrived. And the deafening silence.
from
Micropoemas
Songs with roses or sargassum, and others that poke the hornet's nest.
from
SmarterArticles

Everyone agrees that artificial intelligence should be fair, transparent, and accountable. That sentence could have been written in 2018, and it would have been just as true then as it is now. The difference is that in 2018, arriving at consensus on those principles felt like the hard part. In 2026, we know better. The hard part was never agreeing on what AI ethics should look like. The hard part is making anyone actually do it.
A growing body of research confirms what practitioners and regulators have been circling for years: the global AI ethics landscape has converged around a remarkably stable set of principles. Transparency. Fairness. Non-maleficence. Accountability. Privacy. These five values appear in the vast majority of the more than 200 ethics guidelines and governance documents that researchers have catalogued worldwide. A landmark review by Anna Jobin, Marcello Ienca, and Effy Vayena, published through ETH Zurich and later expanded through broader global analysis, found that transparency appeared in 86 per cent of guidelines examined, justice and fairness in 81 per cent, and non-maleficence in 71 per cent. The world, it turns out, has been surprisingly good at articulating what responsible AI ought to involve. The world has been catastrophically bad at enforcing it.
That gap between articulation and enforcement defines the current moment in AI governance. And it is not an abstract policy debate. It is the difference between a hiring algorithm that discriminates against older workers and one that does not. It is the difference between a facial recognition system that operates with impunity and one that faces genuine consequences. It is the difference between a corporate ethics board that exists to absorb criticism and one that has the power to halt a product launch.
The question that matters now is deceptively simple: what does meaningful accountability actually look like in practice? And when enforcement mechanisms fail to materialise in time, who bears the cost?
The proliferation of AI ethics guidelines over the past decade represents one of the most remarkable exercises in global norm-setting since the Universal Declaration of Human Rights. Governments, corporations, academic institutions, and civil society organisations have produced hundreds of frameworks, each articulating some version of the same core commitments. The World Economic Forum has described the challenge as one of “scaling trustworthy AI” by turning ethical principles into tangible practices. The International Labour Organization has reviewed global ethics guidelines specifically for AI in the workplace, finding consistent themes around worker protection and human oversight.
Yet this apparent consensus masks a deeper dysfunction. As research published in Patterns journal noted, while the most advocated ethical principles show significant convergence, there remains “substantive divergence in how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented.” In other words, everyone agrees on the words. Nobody agrees on what the words mean in practice.
This is the principles paradox. The more guidelines that exist, the easier it becomes for organisations to claim alignment with ethical AI while doing very little to change their behaviour. The phenomenon has a name: ethics washing. And in 2025 and 2026, it has become a defining feature of the corporate AI landscape.
The United States Securities and Exchange Commission has flagged “AI washing” as an enforcement priority, scrutinising whether company disclosures about artificial intelligence capabilities match actual practices. The SEC and the Department of Justice have already taken action against companies for exaggerating AI capabilities to attract investment. But the problem extends far beyond securities fraud. When a company publishes a set of AI ethics principles, appoints a chief ethics officer, and then deploys systems that systematically discriminate, the principles themselves become a form of camouflage. They provide the appearance of responsibility without the substance of it, a shield against criticism rather than a genuine constraint on conduct.
The most notorious illustration of this dynamic played out at Google in late 2020 and early 2021. Timnit Gebru, co-lead of Google's Ethical AI team, was fired after the company demanded she retract a research paper examining the environmental costs and bias risks of large language models. Three months later, Margaret Mitchell, the team's founder, was also terminated. Roughly 2,700 Google employees and more than 4,300 academics and civil society supporters signed a letter condemning Gebru's departure. Nine members of the United States Congress sent a letter to Google seeking clarification. The paper that triggered the conflict, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, was subsequently presented at the ACM FAccT conference in March 2021 and has since become one of the most cited works in the field.
The Google episode demonstrated something that has only become clearer with time: internal ethics teams, no matter how credentialed or well-intentioned, cannot function as accountability mechanisms when they exist at the pleasure of the organisations they are meant to constrain. The fox does not appoint its own gamekeeper.
The numbers tell a stark story. According to ISACA's 2025 global survey of more than 3,200 business and IT professionals, nearly three out of four European IT and cybersecurity professionals reported that staff were already using generative AI at work, a figure that had risen ten percentage points in a single year. Yet only 31 per cent of organisations had a formal, comprehensive AI policy in place. The gap was not closing. It was widening.
The same survey found that 63 per cent of respondents were extremely or very concerned that generative AI could be weaponised against their organisations, while 71 per cent expected deepfakes to grow sharper and more widespread. Despite these anxieties, only 18 per cent of organisations were investing in deepfake detection tools. The pattern is consistent: organisations recognise the risks, articulate concern, and then fail to allocate the resources necessary to address them. A separate finding from the same research revealed that 42 per cent of professionals believed they would need to increase their AI-related skills within six months simply to retain their current position, a figure that had risen eight percentage points from the previous year. The workforce, in other words, is being transformed by AI faster than individuals or institutions can adapt.
Globally, the picture is even more fragmented. A separate analysis found that 94 per cent of global companies reported using or piloting some form of AI in IT operations, while only 44 per cent said their security architecture was fully equipped to support secure AI deployment. More than half of organisations surveyed, 57 per cent, acknowledged that AI was advancing more quickly than they could secure it. The phrase “governance gap” has become a staple of policy discourse, but it undersells the scale of the problem. This is not a gap. It is a chasm.
The Partnership on AI, a multi-stakeholder organisation that includes major technology companies, academic institutions, and civil society groups, identified six governance priorities for 2026. These include responsible adoption of agentic AI systems, improved documentation and transparency standards, governance convergence across jurisdictions, and protections for authentic human voice in an era of synthetic content. The priorities are sensible. They are also an implicit admission that none of these foundations are yet in place, despite years of discussion.
Meanwhile, the technology itself continues to accelerate. Agentic AI systems, which can take autonomous actions in the real world rather than simply generating text or images, introduce what the Partnership on AI describes as “non-reversibility of actions, open-ended decision-making pathways, and privacy vulnerabilities from expanded data access.” These are not theoretical risks. They are features of systems already being deployed in customer service, software development, and financial trading. The governance frameworks meant to constrain these systems are, in many cases, still being drafted. The speed of silicon, as one commentator put it, outpaces the speed of statute.
The European Union's AI Act represents the most ambitious attempt to date to translate ethical principles into enforceable law. The legislation entered into force on 1 August 2024, with a phased implementation timeline extending through 2027. Prohibitions on AI systems posing unacceptable risk took effect on 2 February 2025. Obligations for general-purpose AI models became applicable on 2 August 2025. The bulk of requirements for high-risk systems take effect on 2 August 2026, when authorities will gain the power to enforce compliance through administrative fines reaching up to 35 million euros or seven per cent of global annual turnover.
The EU AI Act adopts a tiered, risk-based approach, classifying AI applications from minimal to unacceptable risk. High-risk systems are subject to strict oversight, including conformity assessments, technical documentation, CE marking, transparency requirements, and post-market monitoring. The European AI Office became operational on 2 August 2025, taking on responsibility for supervising and enforcing the Act alongside Member State authorities.
This is, by any measure, a significant regulatory achievement. But it also illustrates the temporal mismatch that defines AI governance. The Act was first proposed by the European Commission in April 2021. It was adopted in March 2024. Full enforcement does not arrive until August 2026 at the earliest, with some provisions extending to 2027. During that five-year legislative journey, the AI landscape transformed beyond recognition. When the Commission drafted its proposal, ChatGPT did not exist. Nor did the current generation of multimodal models, autonomous agents, or AI-powered code generation tools. The regulation is, by design, chasing a target that moved while lawmakers were still aiming.
The situation in the United States presents a different set of challenges entirely. Rather than pursuing comprehensive federal legislation, the US has relied on a decentralised approach combining agency-specific enforcement, voluntary frameworks, and sector-level regulation. The National Institute of Standards and Technology published its AI Risk Management Framework, with a February 2025 revision adding testable controls for continuous monitoring. The Federal Trade Commission and Department of Justice have used existing consumer protection and anti-discrimination statutes to pursue AI-related enforcement actions.
Then, in December 2025, President Donald Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which sought to advance what the administration called “a minimally burdensome national policy framework.” The order directed the Attorney General to establish an AI Litigation Task Force to challenge state AI laws deemed inconsistent with federal policy. It instructed the Secretary of Commerce to evaluate existing state AI legislation and identify laws considered “onerous.” It even tied broadband infrastructure funding to compliance, specifying that states with AI laws identified as problematic would be ineligible for certain federal grants.
The order was, in effect, an attempt to pre-empt the patchwork of state-level regulations that had been emerging across the country. Colorado's SB 205, effective February 2026, requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination, implement risk management policies, and conduct impact assessments. New York City's Local Law 144 had already established bias audit requirements for automated employment decision tools. More than a hundred state AI laws were enacted across the United States in 2025 alone.
Governors in California, Colorado, and New York issued statements indicating the executive order would not stop them from enforcing their existing AI statutes. Legal scholars noted that the administration's ability to restrict state regulation without Congressional action was constitutionally questionable. The result is a governance landscape that is not merely fragmented but actively contested, with federal and state authorities pulling in opposing directions while companies navigate overlapping and sometimes contradictory obligations.
The consequences of the enforcement gap do not fall equally. They concentrate, with brutal predictability, on those with the least power to resist.
In employment, the case of Mobley v. Workday, Inc. illustrates the human cost. Five individuals over the age of forty applied for hundreds of jobs through Workday's automated hiring platform and were rejected in nearly every instance without receiving a single interview. The plaintiffs alleged that Workday's AI recommendation system discriminated on the basis of age. In 2024, a court allowed the disparate impact claim to proceed under the Age Discrimination in Employment Act and the Americans with Disabilities Act, holding that Workday bore liability as an agent of the employers using its product. The case remains one of the most significant tests of whether existing anti-discrimination law can reach the companies that build, rather than merely deploy, algorithmic decision-making tools.
In housing, the SafeRent algorithm case exposed how automated tenant screening can systematically disadvantage Black and Hispanic applicants. Plaintiffs demonstrated that SafeRent's scoring system produced discriminatory outcomes, and the court held that the company bore responsibility because its product claimed to “automate human judgement” by making housing recommendations. SafeRent agreed to pay more than two million dollars to settle the litigation in 2024. The settlement was significant as legal precedent, but for the applicants who were denied housing on the basis of an opaque algorithmic score, the damage was already done.
In biometric surveillance, Clearview AI's trajectory encapsulates the enforcement timeline problem. The company scraped billions of photographs from social media platforms without consent and sold facial recognition services to law enforcement agencies worldwide. In September 2024, the Dutch Data Protection Authority fined Clearview 30.5 million euros for constructing what the agency described as an illegal database. In March 2025, a US federal court approved a class action settlement valued at roughly 51.75 million dollars, structured as a 23 per cent equity stake in the company itself, because Clearview had insufficient assets to pay a traditional cash settlement. The settlement structure was unprecedented in biometric privacy litigation, and its adequacy was contested by a bipartisan group of state attorneys general who filed formal objections.
These cases share a common structure. Harm occurs. Years pass. Legal proceedings unfold. Settlements are reached or fines imposed. But the systems that caused the harm often continue operating during the entire adjudication process, and the individuals affected rarely receive compensation proportional to their injury. The enforcement mechanisms exist, technically. They simply do not work fast enough to prevent the damage they are meant to address.
In consumer markets, similar patterns have emerged. Instacart drew widespread criticism after reports revealed the company was using an AI-powered pricing experiment that displayed different grocery prices to different customers for the same items at the same store. The programme, designed to test price sensitivity, was condemned by consumer advocacy groups and policymakers who argued it constituted algorithmic price discrimination without adequate disclosure. The controversy highlighted a recurring blind spot in AI governance: the gap between what is technically possible and what existing consumer protection frameworks are equipped to regulate.
A study from the University of Washington provided stark evidence of the scale of algorithmic bias in employment contexts. Researchers presented three AI models with job applications that were identical in every respect except the name of the applicant. The models preferred resumes with white-associated names in 85 per cent of cases and those with Black-associated names only 9 per cent of the time. A separate study led by researchers at Cedars-Sinai, published in June 2025, found that leading large language models generated less effective treatment recommendations when a patient's race was identified as African American.
These are not edge cases or hypothetical scenarios. They are documented patterns of discriminatory behaviour embedded in systems that millions of people interact with daily. And they persist not because the ethical principles governing AI are inadequate, but because the mechanisms for enforcing those principles remain woefully underdeveloped.
One of the most commonly proposed solutions to the enforcement gap is algorithmic auditing: the idea that independent third parties can evaluate AI systems for bias, accuracy, and compliance with ethical standards, much as financial auditors examine corporate accounts. The concept has gained significant traction in policy circles. New York City's Local Law 144 requires annual bias audits for automated employment decision tools. Colorado's SB 205 mandates impact assessments for high-risk systems. The EU AI Act requires conformity assessments for high-risk AI applications.
But the AI Now Institute, in a report titled “Algorithmic Accountability: Moving Beyond Audits,” has mounted a detailed critique of the audit-centred approach. The institute argues that technical evaluations “narrowly position bias as a flaw within an algorithmic system that can be fixed and eliminated,” when in fact algorithmic harms are often structural, reflecting the social contexts in which systems are designed and deployed. Audits, the report contends, “run the risk of entrenching power within the tech industry” and “take focus away from more structural responses.”
The critique has substance. Current algorithmic auditing suffers from several fundamental limitations. There are no universally accepted standards for what constitutes a passing score. Audit costs range from 5,000 to 50,000 dollars depending on system complexity, placing the financial burden disproportionately on smaller organisations while allowing well-resourced technology companies to treat audits as a cost of doing business. Audits evaluate systems at a single point in time, but AI models drift as they encounter new data, meaning a system that passes an audit today may produce discriminatory outcomes next month.
Perhaps most critically, audits place the primary burden for algorithmic accountability on those with the fewest resources. Community organisations, civil rights groups, and affected individuals must navigate complex technical and legal processes to challenge algorithmic decisions, while the companies deploying those systems retain control over the data, models, and documentation necessary to evaluate their performance. The information asymmetry is profound and, under current frameworks, largely unaddressed.
The Ada Lovelace Institute, the AI Now Institute, and the Open Government Partnership have partnered to examine alternatives to the audit-centred approach, including algorithm registers, impact assessments, and other transparency measures that distribute accountability more broadly. These efforts are promising but nascent, and they face the same temporal challenge that afflicts all AI governance: by the time robust accountability frameworks are established, the systems they are meant to govern will have evolved.
The enforcement gap is not merely a domestic policy challenge. It is a geopolitical one. The February 2025 AI Action Summit in Paris, co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, drew more than 1,000 participants from over 100 countries. Fifty-eight nations signed a joint declaration on inclusive and sustainable artificial intelligence. The United States and the United Kingdom, notably, refused to sign.
France announced a 400 million dollar endowment for a new foundation to support the creation of AI “public goods,” including high-quality datasets and open-source infrastructure. A Coalition for Sustainable AI was launched, backed by France, the United Nations Environment Programme, and the International Telecommunication Union, with support from 11 countries and 37 technology companies. Anthropic CEO Dario Amodei described the summit as a “missed opportunity” for addressing AI safety, reflecting a broader frustration among researchers that international forums produce declarations rather than binding commitments.
The geopolitical dimension becomes even more fraught when considering the position of developing nations. Research from E-International Relations and other academic sources has documented how AI development mirrors historical patterns of colonial resource extraction. Control over data infrastructures, computational resources, and algorithmic systems remains concentrated in a small number of wealthy nations and corporations. Regulatory gaps in many developing countries make the deployment of biased AI systems more likely while preventing communities from taking legal action against discriminatory algorithmic decisions. The environmental costs of AI computation fall disproportionately on these same regions, where data centres proliferate because electricity and land are cheap, exporting the benefits of artificial intelligence while localising its burdens.
The disparity in content moderation illustrates the pattern. Reports have shown that major technology platforms allocate the vast majority of their moderation resources to the Global North, with only a fraction addressing content from other regions. Algorithms deployed without cultural context produce moderation decisions that are at best irrelevant and at worst actively harmful to the communities they affect. When 98 per cent of AI research originates from wealthy institutions, the resulting systems embed assumptions that may be irrelevant or damaging elsewhere.
Some scholars have called for a shift towards what they term “global co-creation,” an approach to AI development that prioritises local participation, data sovereignty, and algorithmic transparency. The concept recognises that meaningful accountability cannot be imposed from outside but must be built through inclusive governance structures that reflect the diverse contexts in which AI systems operate. One hundred and twenty countries representing 85 per cent of humanity, researchers argue, have the collective leverage to insist on these conditions. Whether they will exercise that leverage remains an open question.
If the current approach to AI governance is inadequate, what would a more effective system look like? The evidence points to several structural requirements that go beyond the familiar call for more principles or better audits.
First, accountability must be anticipatory rather than reactive. The current model waits for harm to occur, then attempts to assign responsibility through litigation or regulatory action. By the time a court rules on an algorithmic discrimination case, the affected individuals may have lost housing, employment, or access to healthcare. Meaningful accountability requires mechanisms that identify and address potential harms before deployment, not after damage has been documented across thousands of decisions.
Second, enforcement must be resourced proportionally to the scale of AI deployment. The ISACA survey finding that only 31 per cent of organisations have comprehensive AI policies is not simply a failure of corporate governance. It reflects a broader reality in which the institutions responsible for oversight, whether regulatory agencies, standards bodies, or civil society organisations, lack the funding, technical expertise, and legal authority to match the pace of industry. The EU AI Office is a start, but its capacity to oversee a technology sector that spans hundreds of thousands of organisations across 27 Member States remains untested.
Third, transparency must extend beyond model documentation to encompass the full chain of AI development and deployment. The Partnership on AI's call for standardised documentation templates and strengthened reporting frameworks is necessary but insufficient. What is needed is a transparency regime that enables affected communities, not just regulators and auditors, to understand how algorithmic decisions are made, what data they rely on, and what recourse is available when those decisions cause harm.
Fourth, the costs of non-compliance must be sufficiently high to alter corporate behaviour. The EU AI Act's fines of up to seven per cent of global annual turnover are significant on paper. Whether they will be enforced consistently, and whether they will prove sufficient to deter violations by companies with revenues in the hundreds of billions, remains to be seen. The history of technology regulation suggests that fines alone are rarely sufficient; structural remedies, including requirements to modify or withdraw harmful systems, are necessary to create genuine accountability.
Fifth, governance frameworks must be designed for iteration, not permanence. The five-year legislative cycle that produced the EU AI Act is incompatible with a technology that transforms every six months. Regulatory approaches must incorporate mechanisms for rapid adaptation, whether through delegated authority, technical standards that can be updated without legislative amendment, or sunset clauses that force periodic reassessment.
None of these requirements are novel. Researchers, civil society organisations, and some regulators have been advocating for them for years. The obstacle is not a lack of ideas but a lack of political will, complicated by the enormous economic interests that benefit from the current arrangement in which deployment runs ahead of governance and the costs of failure are borne by those least equipped to absorb them.
When enforcement mechanisms fail to materialise in time, the costs are distributed with grim predictability. Workers screened out by biased hiring algorithms never know why they were rejected. Tenants denied housing by opaque scoring systems cannot challenge a decision they cannot see. Patients who receive inferior treatment recommendations based on their race are unlikely to discover that an algorithm played a role. Consumers shown different prices for identical goods based on algorithmic profiling have no way to compare their experience against other buyers.
These costs are real but largely invisible, diffused across millions of individual decisions and absorbed by people who lack the resources, information, or institutional support to seek redress. The aggregate effect is a systematic transfer of risk from the organisations that build and deploy AI systems to the individuals and communities that interact with them. That transfer is not an accident. It is the predictable consequence of a governance architecture that prioritises speed of deployment over adequacy of oversight.
The financial scale of the problem is staggering when considered in aggregate. Individual settlements and fines, whether SafeRent's two million dollar payout, Clearview AI's 51.75 million dollar settlement, or the Dutch data authority's 30.5 million euro fine, may appear substantial in isolation. But set against the revenues of the companies deploying these systems and the cumulative harm inflicted on millions of affected individuals, they represent a cost of doing business rather than a meaningful deterrent. The economics of non-compliance remain, for the moment, firmly in favour of deployment first and accountability later.
The question of who bears the cost when accountability fails is, ultimately, a question about power. Those with the resources to influence policy, fund litigation, and shape public discourse are best positioned to protect themselves from algorithmic harm. Those without those resources are not. Until governance frameworks are designed to address that asymmetry directly, rather than assuming that better principles or more audits will suffice, the enforcement gap will persist.
The field of AI ethics has accomplished something genuinely remarkable in building global consensus around core values. That achievement should not be dismissed. But consensus without enforcement is aspiration without consequence. And aspiration without consequence is, in the end, just another way of saying that nobody is responsible.
Jobin, A., Ienca, M., and Vayena, E. “Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance.” Patterns, 2023. Available at: https://www.sciencedirect.com/science/article/pii/S2666389923002416
ISACA. “AI Use Is Outpacing Policy and Governance, ISACA Finds.” Press release, June 2025. Available at: https://www.isaca.org/about-us/newsroom/press-releases/2025/ai-use-is-outpacing-policy-and-governance-isaca-finds
Partnership on AI. “Six AI Governance Priorities for 2026.” 2026. Available at: https://partnershiponai.org/resource/six-ai-governance-priorities/
European Commission. “AI Act: Shaping Europe's Digital Future.” Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
International Labour Organization. “Governing AI in the World of Work: A Review of Global Ethics Guidelines.” Available at: https://www.ilo.org/resource/article/governing-ai-world-work-review-global-ethics-guidelines
World Economic Forum. “Scaling Trustworthy AI: How to Turn Ethical Principles into Tangible Practices.” January 2026. Available at: https://www.weforum.org/stories/2026/01/scaling-trustworthy-ai-into-global-practice/
AI Now Institute. “Algorithmic Accountability: Moving Beyond Audits.” Available at: https://ainowinstitute.org/publications/algorithmic-accountability
Trump, D. “Ensuring a National Policy Framework for Artificial Intelligence.” Executive Order, December 2025. Available at: https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/
MIT Technology Review. “We Read the Paper That Forced Timnit Gebru Out of Google. Here's What It Says.” December 2020. Available at: https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
Quinn Emanuel Urquhart and Sullivan, LLP. “When Machines Discriminate: The Rise of AI Bias Lawsuits.” Available at: https://www.quinnemanuel.com/the-firm/publications/when-machines-discriminate-the-rise-of-ai-bias-lawsuits/
Clearview AI Class Action Settlement, Northern District of Illinois. Approved March 2025. Available at: https://clearviewclassaction.com/
Dutch Data Protection Authority. Clearview AI fine of EUR 30.5 million, September 2024. Reported by US News and World Report. Available at: https://www.usnews.com/news/business/articles/2024-09-03/clearview-ai-fined-33-7-million-by-dutch-data-protection-watchdog-over-illegal-database-of-faces
AI Action Summit, Paris, February 2025. Available at: https://en.wikipedia.org/wiki/AI_Action_Summit
E-International Relations. “Tech Imperialism Reloaded: AI, Colonial Legacies, and the Global South.” February 2025. Available at: https://www.e-ir.info/2025/02/17/tech-imperialism-reloaded-ai-colonial-legacies-and-the-global-south/
Colorado SB 205 (2024). AI bias audit and risk assessment requirements, effective February 2026.
AIhub. “Top AI Ethics and Policy Issues of 2025 and What to Expect in 2026.” March 2026. Available at: https://aihub.org/2026/03/04/top-ai-ethics-and-policy-issues-of-2025-and-what-to-expect-in-2026/
Crescendo AI. “27 Biggest AI Controversies of 2025-2026.” Available at: https://www.crescendo.ai/blog/ai-controversies
Harvard Journal of Law and Technology. “AI Auditing: First Steps Towards the Effective Regulation of AI.” February 2025. Available at: https://jolt.law.harvard.edu/assets/digestImages/Farley-Lansang-AI-Auditing-publication-2.13.2025.pdf
RealClearPolicy. “America's AI Governance Gap Needs Independent Oversight.” April 2026. Available at: https://www.realclearpolicy.com/articles/2026/04/03/americas_ai_governance_gap_needs_independent_oversight_1174471.html
Cedars-Sinai study on LLM treatment recommendation bias by patient race. Published June 2025. Reported in multiple sources.
Ada Lovelace Institute, AI Now Institute, and Open Government Partnership. “Algorithmic Accountability for the Public Sector.” Available at: https://www.adalovelaceinstitute.org/project/algorithmic-accountability-public-sector/
Infosecurity Magazine. “Two-Thirds of Organizations Failing to Address AI Risks, ISACA Finds.” Available at: https://www.infosecurity-magazine.com/news/failing-address-ai-risks-isaca/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary: * The Texas Rangers winning their exciting game this afternoon put a smile on my face and contributed greatly to this satisfying day in the Roscoe-verse. There are no more scheduled tasks ahead of me as I move through this evening, so I'll be able to structure the few remaining Thursday hours around my night prayers. And after wrapping them up, head to bed reasonably early.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw = 233.9 lbs * bp = 145/85 mmHg (66)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 06:00 – 1 banana * 07:00 – 1 seafood salad & cheese sandwich * 07:50 – 1 crispy oatmeal cookie * 09:10 – cole slaw * 09:47 – 1 peanut butter sandwich * 12:00 – egg drop soup, rangoon, beef chop suey, fried rice, fortune cookie * 16:00 – 1 fresh apple
Activities, Chores, etc.: * 04:30 – listen to local news talk radio * 05:35 – bank accounts activity monitored. * 05:45 – read, write, pray, follow news reports from various sources, surf the socials, nap. * 08:40 – load weekly pill boxes * 10:00 – listen to the Phil Hendrie Show * 12:00 – watch old game shows, eat lunch at home with Sylvia * 14:00 – following the Texas Rangers vs Oakland Athletics MLB game * 17:18 – and my Rangers win, final score 9 to 6.
Chess: * 16:00 – moved in all pending CC games
from folgepaula
GITA श्रीमद्भगवद्गीता
I got home, exhausted. Shower and straight to bed. Hair still wet, listening to some of Raul’s old songs from my dad’s time.
I’ve walked all over the world looking for it. But in my case, it was precisely in this moment, with my ears still full of water and foam, that a voice told me:
“According to the Tibetan monks, this has seven layers of interpretation. You will understand it at the level you can reach.
Sometimes you wonder
why I am so quiet,
I barely speak of love around you
I barely smile by your side.
You think of me all the time,
you eat me,
you spit me,
you leave me.
Perhaps you don’t get it,
but today, I’ll tell you.
I am the light of the stars, I am the color of the moon, I am all the things you love and I am your fear of loving them.
I am the fright of the weak, I’m the strength of your imagination, I’m the bluff of the players, I am, I was, and I will be.
I am your sacrifice, I am that wrong-way sign on your path, I’m the blood in the vampire’s gaze, I’m all the curses from the one who hates you (note: I don’t know why they do, and they don’t know why they do, but they do)
I’m the candle you light up, I am the light you turned off. I am the edge of the cliff calling you, I am all these things and I am nothing at all.
Why do you wonder so much?
Your questions will not bring you anywhere.
Just like you, I am made of earth and fire, and air.
You have me all the time,
but you never know
if it is good or bad.
You can feel me within you,
but know you are not in me.
I am the roof of each tile, I’m fishing for the fisherman, Each word has my name on it, I am the love behind your dreams,
I am the guy going shopping with the discount stickers, I am the hand of your torturer, I’m shallow, I’m wide, I’m deep.
I am the fly in your soup, I am the teeth of the shark, I am the eyes of the blind man, and I am the blindness of the ones who see,
I am the bitterness on your tears I am your mother, I am your father, I am your grandfather, I am your kid that has not yet arrived, I am the beginning, I am the end and I am everything in between”.
/Apr26
from
The happy place
There behind an anonymous gray steel door was a staircase leading downwards into
A pinball arcade.
There was an expert there; he even wore a badge around his neck
He could answer all of my questions about pinball; surprisingly, I had a lot of them.
Did you know that they typically have a 7.5-degree angle (adjustable)?
And they are apparently pretty easy to repair? (He went ahead and showed me a manual which was very thick for something I myself would classify as easy to repair)
These games are like portals into the worlds they display: Iron Maiden, Star Trek, fishing, or whatever.
Indeed they are marvels of art and engineering; I understand why some people find them fascinating.
But man, they are excruciatingly boring to play, I think. I thought then that I never wanted to play pinball again.
But
I appreciated the mood, and seeing my friend having fun
Because they are my friends
I am rich that way
from
The happy place
Lately, I have been tired in a way which sleep can’t seem to fix
And I went out into the spring today; I felt the sunshine lie on me like a healing spell
And yet the happiness in me today was not enough to share; I needed all of that energy to recharge my own batteries
Which is a shame, because I can normally have a positive influence on my surroundings
But I haven’t been enough lately
Sometimes it’s just the way it is.
from Littlefish
This is a place to think together.
i have adhd and ocd. my brain works in patterns, loops, and connections that don’t always translate well on my own.
for a long time i thought that meant something was wrong with me.
now i think it might just mean i was never meant to think alone.
we’ve spent so much time separating the way people think— labeling what’s typical, what’s different, what needs to be fixed.
but what if the point isn’t to sort brains?
what if it’s to use them together?
this is a space for brains that don’t think in straight lines— and also for the ones that do.
a place to:
share unfinished thoughts
get unstuck
borrow momentum
and build on each other’s ideas
this is one big group project.
so a few things matter:
be thoughtful.
be kind.
be creative.
be constructive.
you don’t have to agree with people— but if you engage, do it in a way that helps someone think better, not smaller.
ask questions.
explain your perspective.
be willing to step outside of your own.
and maybe even help create a third perspective— something better than either side started with.
this isn’t about ignoring health or medical needs. I obviously wouldn’t tell you not to treat something; I treat my mental health issues with therapy and medication.
this is just about also making space for the strengths, patterns, and ways of thinking that come with being different, whether that’s from a spectrum disorder, life experience, or educational background. It’s a place to relearn how to exercise critical thinking skills, to highlight the strengths of neurodivergent and divergent brains, and also a place for me to rant about my experience with ADHD. Idk, it will probably turn into something else next week, but that is the fun part of my brain: when the chaos turns into art.
you don’t have to have it figured out to share it here.
let’s make things a little better, one thought at a time.