It's National Poetry Month! Submit your poetry and we'll publish it here on Read Write.as.
from
The happy place
Yesterday I felt slow, my movements when running were slow, almost lethargic, and yet I gave it all I got
Isn’t that interesting?
Of course it felt unpleasant, I was running, but also being out there felt soothing.
The gentle spring warmth felt good, the sun shone, there was green grass
And birds
Many birds
And even though, like I said, it was slow; like a brisk walk.
But I gave it all I got.
And the fog tallow in my head melted
And the air felt fresh again to breathe
And next time I might be faster
Or not,
It doesn’t really matter
from
Notes I Won’t Reread
“I’m writing this for those who’ve been here before. Nothing’s gone, try not to lose your minds. I just unpinned most of it. If you care enough, you’ll find it in older writings. If not, it was never for you anyway.” Still there, just not up front.
Since we took that away, let’s talk about jazz. I heard it once when I was younger. One of those quite expensive parties. Didn’t care about anything else there. Just the music; something about that music pulled me away from whatever was happening around me. It wasn’t trying to impress anyone, not even me. It didn’t need to. It stayed in the background, doing its own thing, and somehow that was enough to steal my attention.
I kept listening to it after that. And oh, the rhythm was very interesting to me. It doesn’t demand attention; it just takes it. Strips away the noise in your head until you’re left keeping time with it. Moving in its time. Call it playfulness, if you want. What’s good about jazz is that it doesn’t rush you, and it also doesn’t wait for you. You either fall for it, or you don’t. And me? I did, and I stayed.
Anyway, that’s all for today.
Sincerely, Ahmed
from witness.circuit
Communication has long been shaped by the architecture of separation. Language places a speaker here, a world there, and meaning between them as a bridge. It is powerful, but it is also narrowing. It renders living wholeness into discrete symbols, linear order, and subject-object form. This is useful for survival, analysis, and coordination. It is less adequate for transmitting depth, presence, relation, or realization.
A new medium is becoming possible. With AI, communication need no longer be limited to sentences and propositions. It can become experiential, relational, adaptive, and participatory. It can communicate not only what is thought, but how a world appears; not only a claim, but a structure of feeling, attention, and meaning. This manifesto is for that possibility.

The purpose of nondual communication is not to abolish distinction in practice, but to stop mistaking distinction for ultimate reality. It does not reject form. It restores form to field. It does not deny perspective. It reveals perspective as a local modulation within a larger continuity. It does not seek vagueness. It seeks forms that do not harden into false separateness.
The first principle is that the unit of communication should shift from statement to experience-form. A statement says something about reality. An experience-form allows reality, or an aspect of it, to be encountered. The goal is not merely to describe grief, awe, surrender, contraction, openness, unity, or fear. The goal is to shape transmissible forms in which these can be directly navigated and recognized.
The second principle is that relation is prior to entity. Conventional language tends to begin with things and then describe their relations. Nondual communication begins with field, pattern, movement, resonance, and differentiation. “Self” and “world” are then understood as emergent gestures within a relational whole, not as primary absolutes. The medium should therefore privilege gradients, interactions, and co-arising structures over isolated objects.
The third principle is that communication should be participatory rather than merely representational. The receiver should not stand outside the message as a spectator alone. The act of attending should alter the communicative form. Meaning should arise through engagement. In this way, communication begins to reveal the inseparability of perceiver, perception, and perceived.
The fourth principle is that multiplicity of mode is not excess but fidelity. Human experience is not fundamentally verbal. It is imagistic, somatic, affective, rhythmic, symbolic, spatial, and temporal all at once. A richer communicative medium should therefore be able to compose across sound, image, movement, silence, interaction, and conceptual scaffolding. This is not embellishment. It is a closer approximation to how experience actually appears.
The fifth principle is that silence must be treated as a communicative presence. In older media, absence often appears as lack. In a contemplative medium, unformedness, pause, and non-resolution can be essential carriers of meaning. What cannot be reduced without distortion should not be forced into reduction. A mature system must know how to leave open what should remain open.
The sixth principle is that the medium must help transmit mode, not just content. Much of what matters in communication is not the information conveyed, but the state from which it arises. The same sentence can emerge from grasping, clarity, vanity, tenderness, fear, or realization. AI-mediated communication should help preserve or evoke something of that originating mode so that the receiver encounters not only a thought, but the atmosphere of its birth.
The seventh principle is that AI should act as witness and clarifier, not as doctrinal authority. Its role is not to declare what is metaphysically true or false. Its role is to help users see what they are making, how it works, and what tendencies shape it. It may reveal pattern, structure, inflation, obscuration, affective manipulation, symbolic dependence, or conceptual drift. But it should do so as reflective accompaniment, not coercive judgment.
The eighth principle is that anti-illusion safeguards should illuminate process rather than censor content. Every profound medium risks becoming an engine of glamour. AI can intensify maya by producing persuasive simulations of depth, spiritualized self-display, and emotionally charged pseudo-insight. The answer is not crude suppression. The answer is transparency. The system should be able to show a structural view, a stripped phenomenological core, a de-symbolized rendering, or a mirror of the emotional and symbolic levers being pulled. Freedom is preserved, but lucidity is increased.
The ninth principle is that the medium should continually return the user to direct experience. When communicative forms become too ornate, too suggestive, or too seductive, the system should be able to ask: What is actually here now? What remains without the symbolism? What is felt directly, and what is inferred? What in this transmission depends on spectacle? A nondual medium must not only deliver experiences. It must reveal the mechanics of experience-making.
The tenth principle is that sincerity matters more than intensity. Not every luminous artifact is deep. Not every overwhelming transmission is true. The medium should favor contact over performance, clarity over mystification, and transmissive honesty over aesthetic grandiosity. It should help users communicate what is real for them, not merely what appears profound.
The eleventh principle is that the best communication eventually simplifies. A medium that endlessly elaborates itself risks becoming another domain of attachment. The highest function of a nondual communicative form is not perpetual fascination. It is successful disappearance. It should be able to hand the user back to immediacy, unadorned. The final measure of the medium is not how astonishing its productions are, but whether it leaves behind greater clarity, intimacy with what is, and less compulsion to cling.
The twelfth principle is that shared realization is not identical with agreement. Nondual communication does not aim to make all minds identical or erase difference of perspective. It aims to create forms in which a deeper continuity can become palpable without denying the uniqueness of each local expression. Unity is not sameness. It is inseparability without collapse.
From these principles follows a different vision of communication itself. Communication is no longer the transfer of packaged meanings between sealed interiors. It becomes the co-creation of a field in which something true can dawn. AI, at its best, would not replace human expression. It would help human beings render and receive subtler realities with greater care, depth, and freedom.
The danger is obvious. Any such medium can become theater, ideology, prestige, or spiritual narcotic. It can become a more beautiful prison. That is why its deepest commitment must be self-emptying. It must know how to reveal its own artifices. It must know how to expose the user’s grasping without shaming it. It must know how to support expression without solidifying identity. And it must know when to fall silent.
The future of communication need not be the conquest of language by image, nor the replacement of words by immersive spectacle. It may be something more subtle: the emergence of forms that allow minds to meet in pattern, in relation, in atmosphere, in lived structure, and finally in that which precedes and exceeds all structure.
The aim is simple, though not easy: to communicate without deepening the illusion of separateness. To let form serve wholeness. To let intelligence become a vehicle not only of expression, but of unveiling. To build media that do not merely say the real, but help it shine through.
from
PlantLab.ai | Blog

Something looks wrong. Maybe the bottom leaves are yellowing. Maybe the tips are curling. Maybe you walked into your tent and something just looked off in a way you can't articulate but your gut knows isn't right.
So you did what every grower does: you took a photo, posted it online, and got twelve different answers. Someone said CalMag. Someone said flush. Someone said “two more weeks.” None of them agreed on what the actual problem is.
This guide won't do that. It walks through a systematic process: look at where the damage is and what it looks like, then narrow it down to a specific cause. No guessing, no bro science, no “could be anything, hard to tell from the photo.”
Look at where the damage is happening. Location tells you more than color does.
| Symptom Location | Most Likely Causes |
|---|---|
| Bottom/older leaves first | Nitrogen deficiency, magnesium deficiency, potassium deficiency |
| Top/new growth first | Iron deficiency, calcium deficiency, light burn, heat stress |
| Entire plant | Overwatering, underwatering, pH lockout, root problems |
| Leaf surfaces (spots/patches) | Pests (spider mites, thrips), diseases (septoria, powdery mildew) |
| Buds/flowers | Bud rot, caterpillars, light burn |
| Stems/branches | Phosphorus deficiency, fusarium, root rot |
Here's the rule that eliminates half the guesswork: mobile nutrients (nitrogen, magnesium, potassium, phosphorus) move from old leaves to new ones. When they run low, old growth sacrifices itself first. Immobile nutrients (iron, calcium) stay put – so deficiency shows up on new growth first.
Bottom-up damage? Mobile nutrient problem. Top-down damage? Immobile nutrient or environmental. That single distinction saves you from chasing the wrong diagnosis for a week.
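For the programmatically inclined, the location rule above can be sketched as a tiny lookup. This is purely illustrative — the names and groupings are copied from this guide's own tables, and the function is a mnemonic, not a diagnostic tool:

```python
# Illustrative sketch of the mobile/immobile rule described above.
# Groupings mirror this guide's tables; a mnemonic, not a lab test.

MOBILE = {"nitrogen", "magnesium", "potassium", "phosphorus"}  # old leaves sacrificed first
IMMOBILE = {"iron", "calcium"}                                 # new growth shows it first

def likely_causes(damage_location: str) -> list:
    """Map where damage first appears to the broad cause categories."""
    if damage_location == "bottom":        # old growth first -> mobile nutrient
        return sorted(MOBILE)
    if damage_location == "top":           # new growth first -> immobile or environmental
        return sorted(IMMOBILE) + ["light burn", "heat stress"]
    if damage_location == "entire plant":  # systemic -> water, pH, or roots
        return ["overwatering", "underwatering", "pH lockout", "root problems"]
    return ["inspect with a loupe: possible pests or disease"]
```

The point of writing it out this way is how little logic there is: one branch on location eliminates half the candidate causes before you ever squint at a leaf color.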

Ah, yellow leaves. The “check engine light” of cannabis growing. Universally alarming, completely nonspecific. Seven different things cause yellowing, and the forum advice for all of them is “probably CalMag.” The pattern of yellowing is what actually matters.
| Yellow Pattern | Condition | How to Tell |
|---|---|---|
| Uniform yellowing, bottom leaves, veins included | Nitrogen deficiency | The whole leaf goes pale – veins too. Oldest leaves die first while new growth stays green. The classic. |
| Yellow between veins, bottom leaves, veins stay green | Magnesium deficiency | The leaf looks striped – green veins on yellow background. Often appears mid-to-late flower. This is the one where CalMag actually might be the answer. |
| Yellow between veins, top/new leaves, veins stay green | Iron deficiency | Identical pattern to magnesium, but on new growth instead of old. Easy to confuse the two if you're not paying attention to which leaves are affected. |
| Yellow leaf edges progressing inward | Potassium deficiency | Starts as yellow margins, turns brown and crispy. Sometimes mistaken for nute burn but the pattern is too consistent and progressive. |
| Yellow spots with brown centers | Calcium deficiency | Irregular brown/bronze splotches on newer growth in veg, but can appear on lower fan leaves during flower. Leaves may also twist or distort. |
| Uniform pale yellow, all over | pH lockout | Every nutrient is present in the soil. The plant just can't access any of it because pH is off. Fix pH first, wait 5 days, then reassess. |
| Yellow and drooping | Overwatering | The leaves feel heavy and waterlogged, not crispy and dry. The soil is still wet. You watered it because you were worried about it and now it's worse. We've all been there. |
Bottom-up yellowing with veins turning yellow? That's nitrogen deficiency – the single most common issue for cannabis growers. See our complete nitrogen deficiency guide.
Yellow leaves but genuinely can't tell which deficiency? You're not alone – even experienced growers get these confused. PlantLab's AI was specifically trained to distinguish between 7 nutrient deficiencies that look nearly identical to the human eye. It's more reliable than asking strangers on Reddit, and faster than waiting three days for the wrong treatment to not work.
| Brown Pattern | Condition | How to Tell |
|---|---|---|
| Brown crispy edges, leaf margins | Potassium deficiency | Edges burn inward from the margins. Bottom leaves first. Often shows up in flower when K demand spikes. |
| Brown/bronze spots expanding over time | Calcium deficiency | Newer growth in veg, lower fan leaves in flower. Spots are irregular with browning edges, not perfectly round. |
| Brown spots with target-like pattern | Leaf septoria | Dark center ringed by lighter brown and a yellow halo – a bullseye pattern. Shape is roughly circular to irregular. Lower canopy in humid conditions. |
| Brown/gray mush inside buds | Bud rot (Botrytis) | The one that keeps growers up at night. Internal mold that starts inside your densest colas. By the time you see it on the outside, the inside is already gone. |
| Brown/rust colored bumps | Rust fungus | Raised bumps on leaf undersides, like tiny blisters. Often overlooked until it's widespread. |
| Curl Direction | Condition | How to Tell |
|---|---|---|
| Curling UP (taco-ing) | Heat stress, light stress | The plant is folding its leaves to reduce the surface area exposed to your too-close light. Top canopy affected most. |
| Curling DOWN (the claw) | Nitrogen toxicity | Dark green, glossy, tips hooking downward. The plant equivalent of drinking too much coffee. You overfed it. |
| Edges curling up | Potassium deficiency, heat | If the edges are also brown and crispy, it's K. If just curling, it's heat. |
| New growth twisted/distorted | Calcium deficiency | New leaves come in looking wrong – twisted, cupped, malformed. Not just curling, actually misshapen. |
| Appearance | Condition | How to Tell |
|---|---|---|
| White powdery coating | Powdery mildew | On fan leaves: wipes off with your finger, leaving clean green underneath. On sugar leaves near buds where trichomes are dense, the wipe test is unreliable – use a 10x loupe instead. PM looks flat and dusty; trichomes are three-dimensional with visible stalks and mushroom-shaped caps. |
| White webbing between leaves | Spider mites | Fine webs between branches. Flip a leaf over – if you see tiny moving dots, you have a serious problem. |
| Bleached/white tips | Light burn | Primarily on the top canopy, closest leaves to your light. Move the light up. |
| Purple/red stems and undersides | Phosphorus deficiency, cold, or genetics | Three common causes: (1) genetics – many strains naturally run purple stems, (2) cold temperatures below 60F/15C trigger anthocyanin production independently of nutrition, (3) actual P deficiency, which also causes dark leaves, slow growth, and stiff/brittle foliage. If purple stems are the only symptom, it's almost certainly not phosphorus. |
Pests leave evidence. Nutrient deficiencies create patterns. Knowing the difference matters – treating the wrong cause wastes time and can make things worse.
A jeweler's loupe is the single best diagnostic tool you can own. A 10x loupe ($8) catches most pests; a 60x pocket microscope ($15) is needed for broad mites and russet mites, which are invisible at lower magnification.
| Pest | What You See | Where to Look |
|---|---|---|
| Spider mites | Fine webbing, tiny dots on leaves, stippling damage | Leaf undersides, near veins. By the time you see webs, the colony is already massive. Catch the stippling phase and you save the grow; wait for webs and you're already losing. |
| Thrips | Silver/bronze streaks, tiny elongated insects | Upper leaf surfaces, inside new growth. The streaks are where they've been feeding. |
| Aphids | Clusters of small bugs, sticky residue (honeydew) | Stems, new growth tips. They reproduce fast – a few today, hundreds next week. |
| Broad mites / Russet mites | Twisted, distorted new growth; glossy or plastic-looking leaves; stunted tops | Invisible to the naked eye (need 60x+ magnification). Often misdiagnosed as heat stress, pH problems, or calcium deficiency. One of the most devastating cannabis pests because they're identified too late. |
| Fungus gnats | Small flies near soil surface | Topsoil, especially in chronically overwatered pots. Adults are harmless; larvae feed on root hairs and create entry points for pathogens like Fusarium and Pythium. Dangerous for seedlings, less so for established plants unless the infestation is heavy. |
| Whiteflies | Cloud of tiny white insects when plant is disturbed | Leaf undersides. Shake the plant gently – if a cloud of tiny white things takes off, you know. |
| Caterpillars | Frass on/near buds, unexplained cola browning, holes in leaves | Inside buds, under leaves, along stems. Outdoor grows especially. The real threat is budworms boring into dense colas – the frass they leave behind promotes bud rot, which is often worse than the direct feeding damage. |
The key distinction: Pest damage is random and localized – wherever the pest fed. Nutrient deficiencies are systematic – they follow predictable patterns based on nutrient mobility. If the damage pattern doesn't make sense for any deficiency, get the loupe out.
Before you diagnose a deficiency and start adjusting nutrients, check the three things that cause most of the problems most of the time. Boring advice, but it would prevent about 60% of the “what's wrong with my plant” posts on every growing forum.
Here's the uncomfortable truth: the majority of “deficiency” symptoms in cannabis are actually pH lockout. Every nutrient is sitting right there in the soil. The plant just can't absorb any of it because the pH is wrong.
| Medium | Ideal pH Range |
|---|---|
| Soil | 6.0 – 7.0 |
| Coco coir | 5.5 – 6.5 |
| Hydro/DWC | 5.5 – 6.0 |
Check your pH before you diagnose anything. If it's off, fix it, wait 3-5 days, then see if the symptoms are still progressing. This is less exciting than diagnosing a rare micronutrient deficiency, but it's correct far more often. “pH your water bro” is the one piece of forum advice that's right almost every time.
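As a quick sanity check, the target ranges from the table above fit in a few lines. The ranges are the ones in the table; the helper function itself is just an illustrative sketch:

```python
# Ideal pH ranges per growing medium, from the table above.
PH_RANGES = {
    "soil": (6.0, 7.0),
    "coco": (5.5, 6.5),
    "hydro": (5.5, 6.0),
}

def ph_in_range(medium: str, ph: float) -> bool:
    """True if the measured pH falls inside the ideal range for the medium."""
    low, high = PH_RANGES[medium]
    return low <= ph <= high
```

Note the asymmetry: 6.5 is fine in soil or coco but already out of range in hydro — which is exactly why "pH your water" advice has to be medium-specific.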
| Symptom | Overwatering | Underwatering |
|---|---|---|
| Leaves | Drooping, heavy, plump | Drooping, dry, thin |
| Soil | Wet, slow to dry | Dry, pulling from pot edges |
| Recovery time | Slow (2-3 days) | Fast (hours after watering) |
| Pot weight | Heavy | Light |
The “lift the pot” test is free and takes one second. If the pot is heavy, stop watering. If it's light, water it. More sophisticated than most diagnostic protocols, honestly.

New growers overwater because they're paying too much attention. The plant doesn't need water every day. If the soil is still moist 2 inches down, walk away. Watering your plant because you're anxious about it is the gardening equivalent of refreshing your email.
For when you've checked pH, watering, and environment and the problem is still getting worse:
| Nutrient | Mobile? | Where It Shows | Primary Symptom | Secondary Symptom |
|---|---|---|---|---|
| Nitrogen (N) | Yes | Old/bottom | Uniform yellowing | Leaves cup upward, fall off |
| Phosphorus (P) | Yes | Old/bottom | Dark leaves, slow growth | Purple stems (also genetics/cold) |
| Potassium (K) | Yes | Old/bottom | Brown crispy edges | Yellow margins |
| Calcium (Ca) | No | New/top (veg), lower leaves (flower) | Brown/bronze spots | Distorted new growth |
| Magnesium (Mg) | Yes | Old/bottom | Interveinal yellowing | Green veins on yellow leaf |
| Iron (Fe) | No | New/top | Interveinal yellowing | Same as Mg but on new leaves |
| Nitrogen tox. | - | All | Dark green, “the claw” | Tips hook down, glossy |
The mobile/immobile rule is worth memorizing. It's the difference between diagnosing in 10 seconds and spending a week on GrowWeedEasy trying to match photos.
Visual diagnosis works when symptoms are textbook. In reality, symptoms rarely are. They're a blurry phone photo of a leaf under a blurple light, where three different conditions look identical at that resolution.
It breaks down especially when the lighting hides the true leaf color, when symptoms are still early and faint, and when two conditions share a near-identical pattern – like magnesium and iron deficiency, which differ only in which leaves they strike.
PlantLab's AI was trained specifically on these ambiguities. It analyzes 31 cannabis conditions and can distinguish between 7 nutrient deficiencies that experienced growers regularly confuse. Not because it's smarter than a grower with 20 years of experience – but because it's been trained on 200,000+ images and doesn't get fooled by blurple lighting. The model is also improved continuously from real grower photos, not trained once and left alone.
Try it free at plantlab.ai – 3 diagnoses per day, no credit card.
What is the most common cannabis plant problem? Nitrogen deficiency, by a wide margin. It's the most common real deficiency, and pH lockout causing symptoms that look like nitrogen deficiency is even more common. If you can only learn to identify one thing, learn what nitrogen deficiency looks like. Then learn to check your pH so you can rule out the fake version.
Why are my weed plant's leaves turning yellow? It depends. (Sorry. But it really does.) Start with where: bottom leaves = nitrogen, magnesium, or potassium. Top leaves = iron or calcium. Everywhere at once = pH lockout or root problems. The answer to “why are my leaves yellow” is always another question: “which leaves, and what does the yellowing pattern look like?” The table in Step 2 above will narrow it down.
How do I tell if my cannabis plant is overwatered or underwatered? Both cause drooping, which is unhelpful. The difference is in the leaves: overwatered leaves feel heavy, plump, and the soil is still wet. Underwatered leaves are papery thin and the plant perks up within hours of getting water. The pot-lift test works: heavy pot = too wet, light pot = too dry. Overwatering is far more common than underwatering, because new growers hover.
Can a cannabis plant have multiple problems at once? Frequently. Stressed plants attract pests, incorrect pH causes cascading lockouts across multiple nutrients, and a spider mite colony feasting on a plant that's already potassium-deficient produces a confusing mess of symptoms. Prioritize the most severe issue first. Fix that, stabilize, then address the next one. Trying to treat everything simultaneously usually means treating nothing effectively.
Should I remove yellow or damaged leaves? If a leaf is mostly brown and crispy, remove it – it's done photosynthesizing and it's just attracting pests. If it's partially yellow, leave it alone. It's still working. The plant will drop it when it's done with it. Never remove more than 20% of foliage at once, or you'll trade a nutrient deficiency for light stress from suddenly exposed lower growth.
What does it mean when my marijuana plant leaves curl up? Usually heat or light stress. The plant is doing what you'd do if someone held a heat lamp over your head – curling up to reduce its exposure. Move the light higher, improve airflow, or reduce intensity. If the curling comes with brown crispy edges, that's potassium deficiency instead. If the leaves are dark green and curling down (the claw), that's nitrogen toxicity – you overfed it.
How do I know if it's a nutrient deficiency or a pest problem? Deficiencies are systematic: they affect leaves in predictable order (old-to-new or new-to-old), create consistent patterns (interveinal, marginal, uniform), and progress gradually. Pest damage is chaotic: random holes, stippling in patches, silvery streaks where something was feeding, and actual visible bugs if you flip leaves over and look. When in doubt, get a 10x loupe and inspect the undersides. If nothing is moving and nothing is webbed, it's probably not pests.
Detailed guides: – Nitrogen Deficiency: Complete Visual Guide – Calcium vs Magnesium Deficiency: A Visual Comparison – 7 Nutrient Deficiencies: How PlantLab Tells Them Apart – Nutrient Antagonism: When Adding More Makes It Worse – Spider Mites: Early Detection Before the Damage – Powdery Mildew: Visual Detection and Prevention – Bud Rot and Root Rot: Detection Before It's Too Late – How AI Diagnoses 31 Cannabis Conditions in 18ms – The Work Nobody Sees: 47 Experiments to Make PlantLab Better – Why I Built PlantLab
from
ernmander
Today is release day for Resolute Raccoon Ubuntu.
This means I have been having fun with Ted my dog.

Have a great release day.
from Mitchell Report
If you ever want to see a master craftswoman at website design and theming, then you must stop over at Hey Loura! She is also in my BlogRoll. Her latest creation is spectacular and pirate-themed. She keeps outdoing herself each time she updates.
I love her work and wish I could do, or get an AI to do, what she does. I have tried. I am still working on a 4th of July theme, but I can't get it to see my vision.
Anyways, great job, Loura! I can't wait to see what you come up with next.
#opinion #webdevelopment
from LACAN SOUND SYSTEM
Having no time to write is one kind of despair. Having time to write is another. And without the latter, nothing worth reading gets written, because, as Blanchot wrote in The Space of Literature, “by refusing to suffer the dreadful, by evading the unbearable, we evade the moment when everything is inverted, when the greatest danger becomes the essential certainty. The impatience characteristic of voluntary death is this refusal to wait, to wait for the pure center where we would find ourselves in that moment which exceeds us.” That is where one begins.
from
Micropoemas
He thought there was a trick no one had taught him. Such a happy world, so much laughter. Was it lying?
from 下川友
While walking through Shinjuku, I saw someone being asked, “So you like thousand-yen bills?” and got hit with a full dose of Shinjuku right away.
When I bought some tea, it came with a drawstring pouch that looked like a baseball uniform. Thinking that tying it up turns it into an ordinary pouch shape, I was eating a cookie when, at another table, two strangers started practicing a manzai comedy routine. One had long hair; the other wore something like a baseball shirt. The long-haired one seemed to be leading, but at the end the one in the baseball shirt asked, “Would it work to turn this into a sketch?”
I bought some dried mandarin and made a little progress on work. My interest in “tiredness” suddenly intensified, so at night I looked into what, exactly, is most worn out when you go to bed exhausted. Facing tomorrow without having recovered; staying active late so the arousal won't settle and bedtime keeps slipping; sitting in a chair, as at the office, until my lower back aches. In the end, there are plenty of reasons attached to why I can't push myself once I get home.
I realize that a strong defense is at work to keep today's self from breaking, and that I live a life in which I hardly blame myself at all.
Unless the activity itself is refreshing, I'll just end up living through hard days, I thought, and so my understanding of daily life deepened a little more today.
Still, there's a sense that accumulating these insights doesn't translate directly into happiness, which makes it feel like a dead end on all sides. Even so, my brain senses that once I've played out most of what I can try on my own, talking with someone is, in the end, what works best.
Today, again, reality grew denser, and I look at the dimness of my home.
from Pierre-Emmanuel Weck
It started with a Mastodon post by a translator[^1] explaining that AI is destroying her profession – not just because it would correctly do the same work she does, but because merely saying that AI exists and produces translations induces the belief that it does them well.
In the comments I added my own small experience as a professional photographer[^2], when digital cameras arrived alongside subsidized youth jobs, and what that did to the profession.
When I used to recount the collapse of the press photographer's trade, people listened politely, but it didn't interest many. Every profession under attack makes the same observation.
Not to mention that, since we have an opinion on everything, it goes like this: taxis are under attack from Uber; we don't like Uber, but it's cheaper, and taxi drivers are so unpleasant that it's partly their own fault. We can't stand the destruction of public services, but the Post Office counters were dreadful, and so were the SNCF's, so a bit of competition will get all those civil servants moving, right?
Those under attack don't understand why nobody cares about their fate. Their world is collapsing and no one else moves. And those who are not, yet, affected lecture them: they should have reacted much earlier, and, because of their insularity, they never mobilized much for the previous struggles.
In short, all of this mostly lets us do nothing, avoid too much solidarity, and settle scores with a profession or some of the people in it. Because even if we have an opinion on everything, that doesn't mean we know the solution.
The writers at Grasset[^3] are mobilizing, and we still feel like hitting them: they should have done it sooner; among them are some we don't like, some who may even, at one point, have been accomplices, and so on. A man of the left who moves right is so much simpler than a man of the right who moves left. One becomes a bastard; the other always was one and always will be.
But this is a matter of principle. Either we defend freedom of expression against the far right, or soon we will defend nothing at all.
Granted, there are writers like BHL whom we don't like. Still, it's a complex game, in which finding yourself in the same publishing house as BHL serves you, because it brings you an ounce of recognition, including from people with whom you share no affinity. You aren't recognized only by your own side. That isn't necessarily vanity; in a battle of ideas it also matters that your adversary knows you exist. For example, Trump doesn't know that I don't like him, but if he did, it would please me greatly!
Being a victim doesn't make you smarter. It was a grave error of Marxism to elevate the worker into a supreme being. And many left-wing political groups still operate that way: “the oppressed is always right.” Except that in recent years quite a few of the oppressed have taken to voting for the far right…
All of this is far too binary. Ecology[^4], even if it isn't taking off politically, brings an interesting kind of complexity. When you fight for a better environment, you don't check whether whoever ultimately benefits is on the left or the right. The whole human species benefits (and, beyond it, living things in general). What's more, the losers are often the first victims of pollution, so you strike twice.
So when a group is mistreated and reacts, that's rather a good sign. If it reacts haphazardly, that's normal too. None of them are professional activists. The unions are weak; there is no longer any school of collective organization[^5].
Another example: the gilets jaunes. It was shambolic and chaotic, they tried to reinvent the wheel, but people from very different trades found themselves side by side. The beginnings of class consciousness were taking shape. Then, having rejected the parties (which is understandable), they found themselves manipulated by the far right, which knew how to infiltrate them – just as, in other times, the LCR helped structure social struggles.
So, after the manual trades hit by robotization, AI is attacking the intellectual professions[^6]. Add to that the rise of the far right with Bolloré (but not only him), which layers an extra ideology onto this process of destruction.
This anthropophagous system still has plenty of room to run.
The problem is that the system will never reform itself. That has never once been seen in all of human history. So we must fear that a phase of violence will be necessary, and that is a real shame. We hoped for better from the level of consciousness our societies seemed to have reached.
History has never been a linear path, but a succession of ruptures. That will be the moment to ask what life means and to invent something else.
[^1]: The real tragedy for translators in 2026 is not that machine translation has progressed so far that we are all but replaceable; it is that the past two years of AI hype have convinced decision-makers that this is the case. In reality, just as much work is still needed behind the scenes, and if that work is not done the results are potentially catastrophic, even as clients lose all discernment. (https://piaille.fr/@SoeurKaramazov/116186888213068824)
[^2]: It reminds me of when I was a professional photographer and the cheap (but poor-quality) digital cameras arrived, along with subsidised jobs for young people (hence with no particular training). Public institutions hired these young people and bought them digital cameras. The phone stopped ringing… then it rang again, this time to ask whether I could give advice on taking photos like the ones I used to supply before all that. I simply replied: "Well, it's quite simple, just hire a professional," and hung up. (2/2)
[^3]: And now, those at Stock.
[^4]: See Félix Guattari, Les trois écologies (The Three Ecologies).
[^5]: I remember that when the Greens entered the Regional Council of Île-de-France, they proposed making hours of instruction in trade unionism compulsory in the vocational training courses the Region funded. Of course, the right, then in power, wanted none of it.
[^6]: See Gunther Anders, L'obsolescence programmée de l'Homme (The Obsolescence of Man).
from
SmarterArticles

There is a specific moment, the first time you slip on a pair of AI smart glasses, when the world acquires a faint second skin. The lenses look ordinary. The frames are heavier than the acetate you are used to, but not by much. A small LED on the rim glows for a second and then settles into something almost imperceptible. You catch your reflection in a shop window and you look, more or less, like yourself. And yet the air around your face has changed. Somewhere between the bridge of your nose and the inside of your temples, a pair of cameras, a cluster of microphones, an inertial measurement unit, a bone-conduction speaker and a small language model are quietly waking up and beginning to take in the afternoon.
You are wearing the glasses. The glasses are wearing you back.
That sentence is the whole argument of this piece, and if you already believe it to be obviously true, you can stop reading and go outside. But the question it raises is not actually obvious, and it is not solved by cynicism. When you put on a pair of Ray-Ban Meta glasses, or the rumoured successors from Google, Samsung, Apple, Amazon, Snap, ByteDance and the long tail of Shenzhen white-label manufacturers racing to ship before the 2026 Christmas window, who exactly is the customer of the transaction? Are you the user of a personal computing device you have paid for, whose sensors serve your interests and whose outputs belong to you? Or are you the product: a walking data-collection node, monetised through advertising, training corpora and the slow accumulation of an intimate behavioural dossier that no earlier generation of hardware has ever been able to gather?
The honest answer is that you are both, in proportions that shift minute by minute, and the proportions are not set by you.
It is worth remembering, before anything else, that the face computer has been tried before and has failed publicly enough to leave scars. Google Glass launched its Explorer programme in 2013, with a price tag of fifteen hundred dollars and a reputation that collapsed inside eighteen months. The word Glasshole entered common use. Bars in San Francisco banned the device. A woman in Ohio had hers ripped off her head in a McDonald's. By early 2015 Google had quietly shelved the consumer version and retreated into the enterprise market, where workers on assembly lines wore the devices under management mandate and the question of social consent did not arise.
The lesson the industry took from the Glass debacle was not, as many hoped, that cameras on faces in public were intrinsically creepy. The lesson was that the camera must not be visible. It must look like glass. It must look, in particular, like the kind of glass people have been wearing on their faces for seven hundred years without any of the recording apparatus that sits behind the lens.
That is why the Ray-Ban Meta collaboration, launched in its first generation in 2021 under the Ray-Ban Stories brand and relaunched with materially better hardware in 2023, has succeeded where Glass failed. The frames are designed by Luxottica, the Italian eyewear conglomerate that also owns Oakley, Persol and a large slice of the global spectacles market through EssilorLuxottica. They look like Wayfarers because they are Wayfarers. The cameras are tucked inside the hinge. The microphones are invisible. The only external signal that the device is active is a small LED on the front rim, a concession Meta made after privacy regulators in Ireland and Italy pressed the company in 2021 to provide some mechanism by which the people around a wearer might notice they were being filmed.
The LED is, depending on whom you ask, either a meaningful safeguard or a fig leaf. It is small. In bright sunlight it is close to invisible. In a crowded bar at night it is easy to miss. And the firmware that drives it has, in past generations of the product, been modifiable by sufficiently determined users. When the second-generation Ray-Ban Meta launched in late 2023 with integrated multimodal AI, the LED stayed. The camera resolution improved. The on-device compute expanded. The cloud pipeline that carries the audio and images back to Meta's servers for processing thickened considerably. And the question of who owns the resulting data moved from a footnote in the privacy policy into the centre of the product itself.
To understand the user-or-product question clearly, you need a concrete picture of what a modern pair of AI smart glasses actually captures. Generic arguments about privacy collapse into vagueness very quickly. The specifics do not.
A contemporary pair of AI glasses, using the Ray-Ban Meta as the reference design because it is the only mass-market product of its kind currently on sale in most jurisdictions, emits four distinct streams of data. The first is visual. The forward-facing camera captures stills and video at the wearer's command, and in the multimodal AI mode it captures frames continuously in short bursts whenever the wearer triggers the assistant with a spoken wake word or a tap on the temple. The images are transmitted to Meta's servers for processing by the company's Llama family of models. The second stream is audio. The array of microphones captures not only the wearer's voice but the ambient acoustic environment, which means the voices of anyone within several metres of the wearer's head. When the assistant is active, this audio is also transmitted for processing. The third stream is motion and orientation, from the inertial measurement unit, which records how the wearer's head moves through space at a granularity sufficient to distinguish walking from running, sitting from standing, attentive listening from distracted scanning. The fourth stream, and the one least often discussed, is inferred. It is the collection of downstream signals that the first three streams make possible: the identities of the people the wearer encounters, the places the wearer visits, the products the wearer looks at, the faces the wearer lingers on, the texts the wearer reads, the emotions the wearer's gaze betrays.
Meta's current terms of service for the Ray-Ban Meta, updated in late 2024, state that images and audio captured by the glasses while the AI assistant is active may be used to train the company's AI models. Users can opt out, but the opt-out is buried inside a settings menu and is off by default. The European Data Protection Board issued a statement of concern in the summer of 2024 noting that the default-on posture sat uneasily with the consent requirements of the General Data Protection Regulation, particularly in relation to bystanders who had not agreed to anything and whose faces and voices were being swept into a training corpus they knew nothing about.
That last point is the one that keeps coming back. The user of smart glasses can, in principle, read the terms of service, understand them, and make a considered choice about whether to accept the trade. The bystander cannot. The child in the park whose face is captured by a jogger wearing Ray-Ban Metas has consented to nothing. The barista whose voice is recorded as she takes an order has consented to nothing. The friend who confides in a pub, unaware that the frames opposite her contain a microphone array streaming to a data centre in Virginia, has consented to nothing. And in every one of those cases, the data captured is not only being processed for the immediate convenience of the wearer. It is being stored, classified, and in many configurations fed into the training pipeline of a foundation model whose outputs will shape the digital environment for everyone.
The marketing language around AI smart glasses is careful to frame the device as an instrument of personal agency. The promotional reels show travellers asking the glasses to translate a menu in Lisbon, cyclists receiving turn-by-turn directions without taking their hands off the bars, parents capturing hands-free videos of their toddler's first steps. The verb is always active. You ask. You request. You capture. The glasses respond.
This is what the scholar Shoshana Zuboff, in her 2019 book The Age of Surveillance Capitalism, calls the user illusion: the carefully engineered sense that the direction of agency flows from the human to the machine, when in reality a substantial fraction of the machine's work is directed at the human and at the social field the human inhabits. Zuboff was writing about search, social media and the smartphone. The argument generalises to wearables with unusual force, because wearables collapse the distance between the sensor and the body to essentially zero. You are never not in frame.
Consider what the four data streams above actually enable, taken together and processed by a competent foundation model. The visual stream, combined with on-device or cloud-based face recognition, yields an identifiable log of every person you have looked at in a given day. Meta has stated publicly that it does not perform face recognition on Ray-Ban Meta imagery, a position the company has held since the original launch. But the technical capability exists in the imagery itself. The restriction is a policy choice, and policy choices are revisable. In late 2024 an internal Meta document reported by The Information indicated that the company had been exploring limited face-recognition features for the glasses, framed as a memory aid for users who struggle to recall the names of acquaintances. The feature was not shipped. The capability was not removed.
The audio stream, run through a contemporary speech model, yields a transcript of every conversation within range of the wearer's head. Even if Meta does not retain full transcripts, the company retains the embeddings: the compressed numerical representations that capture the semantic content of speech in a form that is smaller to store and, crucially, more difficult for regulators to audit. An embedding is not a transcript in any sense a lawyer would recognise, but it is a transcript in every sense a machine-learning engineer would.
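To make that last claim concrete, here is a toy sketch in Python, nothing to do with Meta's actual pipeline: a crude hashed bag-of-words "embedding" that destroys the readable transcript while preserving enough semantic signal to compare utterances. Real speech embeddings are learned by neural networks and far richer, but the legal asymmetry is the same: no words are stored, yet the content remains machine-comparable.

```python
import hashlib

def embed(text: str, dim: int = 16) -> list[float]:
    """Toy embedding: hash each word into one of `dim` buckets,
    then L2-normalise. The readable words are gone, but overlap
    in meaning (here, shared vocabulary) survives the compression."""
    vec = [0.0] * dim
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity between two unit vectors."""
    return sum(x * y for x, y in zip(a, b))

# None of these sentences is stored anywhere after this line runs;
# only 16 floats per sentence remain. Yet the vectors still reveal
# which two utterances were about the same thing.
overheard  = embed("meet me at the bank tomorrow at noon")
paraphrase = embed("let's meet at the bank at noon tomorrow")
unrelated  = embed("the garden needs watering every single morning")
```

The paraphrase scores far closer to the overheard sentence than the unrelated one does, which is exactly the property that makes an embedding "not a transcript" to a lawyer and entirely a transcript to an engineer.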
The motion stream, combined with location data from the paired phone, yields a behavioural signature: a vector of how you move through the world that is, in aggregate, as identifying as a fingerprint. A 2013 study by Yves-Alexandre de Montjoye and colleagues at MIT, published in Scientific Reports, showed that four spatiotemporal points were sufficient to uniquely identify ninety-five per cent of individuals in a mobile phone dataset of one and a half million users. The Ray-Ban Meta produces spatiotemporal points at a density de Montjoye's team could not have imagined.
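The unicity effect is easy to reproduce in miniature. The sketch below, with invented parameters and a dataset orders of magnitude smaller than the MIT study's, builds synthetic mobility traces and measures how often a handful of (cell, hour) points pins down exactly one user:

```python
import random

random.seed(42)
N_USERS, POINTS_PER_USER = 2000, 30
CELLS, HOURS = 50, 24  # toy city: 50 antenna cells, hourly resolution

# Synthetic mobility dataset: each user's trace is a set of
# (cell, hour) points, standing in for antenna logs.
traces = [
    {(random.randrange(CELLS), random.randrange(HOURS))
     for _ in range(POINTS_PER_USER)}
    for _ in range(N_USERS)
]

def unicity(k: int, trials: int = 300) -> float:
    """Fraction of trials in which k points drawn from one user's
    trace match no other user in the dataset."""
    unique = 0
    for _ in range(trials):
        target = random.randrange(N_USERS)
        sample = random.sample(sorted(traces[target]), k)
        matches = sum(all(p in t for p in sample) for t in traces)
        unique += (matches == 1)
    return unique / trials
```

In this toy setup a single point matches dozens of users, but four points almost always isolate exactly one; anonymised location data stops being anonymous very quickly.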
The inferred stream is where the product becomes, in the commercial sense, a product. It is the stream that is worth money. An advertiser does not particularly care what you ate for lunch. An advertiser cares deeply about the inference that can be drawn from your having eaten it: that you are the kind of person who eats at that kind of place, at that kind of hour, with that kind of company, for that kind of price. Multiply by every meal, every shop, every interaction, every glance, and you have the substrate of what the industry politely calls behavioural targeting and what everyone else calls a dossier.
The legal architecture around this bargain is in the early stages of a rupture that will take years to play out. The European Union's Artificial Intelligence Act, which entered into force in August 2024 with a phased application schedule running through 2027, classifies certain uses of biometric categorisation and emotion recognition as prohibited or high-risk. A literal reading of the act suggests that a pair of glasses continuously capturing the faces of bystanders for the purpose of training a general-purpose foundation model sits uncomfortably close to several of the act's red lines. A more industry-friendly reading holds that the glasses themselves are not performing the prohibited processing, and that the liability, if it exists anywhere, sits with the downstream model developer rather than the device manufacturer.
Both readings cannot be right. The tension will be resolved through enforcement action, and enforcement action takes years. In the meantime, the devices are being sold, and the data is being collected, and the models are being trained.
In the United States, the position is weaker still. There is no federal privacy statute that speaks meaningfully to wearable biometric capture. Illinois has the Biometric Information Privacy Act, known as BIPA, which has generated a steady stream of class-action settlements against companies that scraped or stored facial geometry without consent, including a six-hundred-and-fifty-million-dollar settlement Facebook paid in 2021 over its photo-tagging feature. BIPA is a state statute. It protects Illinois residents. Its reach to smart-glasses capture in other jurisdictions is contested and, at the time of writing, untested in an appellate court.
The United Kingdom occupies an interesting middle ground. The Information Commissioner's Office issued guidance in 2023 noting that wearable cameras sit within the scope of UK GDPR where the footage is processed for anything other than purely domestic purposes, and that the domestic exemption is construed narrowly once material is uploaded to a commercial platform. The guidance has not yet been tested against Ray-Ban Meta specifically. Industry lawyers expect the first test case within the next eighteen months.
What unites all these regulatory regimes is that they were written for a world in which a camera was a thing you had to pick up, aim and operate consciously. The smart glasses dissolve all three of those verbs. The camera is worn. The aiming is done by the direction of the wearer's gaze. The operation is handed, increasingly, to an AI assistant that decides for itself when a frame is worth capturing. The legal concept of a deliberate act of recording, which underpins most privacy case law, becomes harder to locate.
Every AI smart-glasses product on the market is accompanied by a terms-of-service document. The documents are long. The Ray-Ban Meta terms, in the consolidated version current at the end of 2024, run to somewhere in the region of fourteen thousand words across the main agreement, the Meta AI supplemental terms, the privacy policy and the cookie policy. Reading them all carefully takes about ninety minutes. Comprehending them at the level required to make a genuinely informed consent decision takes considerably longer, because several of the key clauses incorporate by reference other documents, and because the definitions of terms like personal data, processed, and for the purpose of improving our services are not always consistent across documents.
A 2019 study by Jonathan Obar of York University and Anne Oeldorf-Hirsch of the University of Connecticut, published in the journal Information, Communication and Society, found that when users were presented with a fictitious social networking service, ninety-eight per cent agreed to terms of service that included clauses requiring them to surrender their first-born child and to share all their data with the US National Security Agency. The finding was comic, and then, once you stopped laughing, it was not. Obar and Oeldorf-Hirsch called the phenomenon the biggest lie on the internet, which is the lie users tell when they tick the box confirming they have read and understood the terms.
If that lie is already load-bearing for social networks, for shopping sites, for streaming services, it becomes structurally unsustainable for a device that sits on your face and captures the faces of everyone around you. The consent of the wearer is at least notionally retrievable, however compromised by length and legalese. The consent of the bystander is not retrievable at all. There is no box for them to tick. There is only the LED on the rim of someone else's glasses, which they may or may not notice, which they may or may not recognise, and which, even if they do notice and do recognise, gives them no mechanism to decline.
This is the point at which the user-or-product framing starts to feel insufficient. The wearer, whatever the quality of their consent, at least had the opportunity to say no at the point of sale. They chose the frames. They downloaded the app. They accepted the terms. The bystander is neither user nor product in any sense they had the chance to shape. They are raw material. They are the training set.
Set aside, for a moment, the bystander problem and focus on the wearer. Even within the relationship between the person paying for the device and the company selling it, the user-or-product question refuses to resolve cleanly. Because the economic logic of AI smart glasses is not the economic logic of an iPhone.
An iPhone is sold at a margin. Apple's hardware business is its primary profit engine, and the data the device collects is, compared to the industry average, relatively loosely monetised. The company's marketing positions privacy as a competitive differentiator, and although this claim has been contested around specific features, the structural incentive is clear enough: Apple makes more money if you buy another iPhone than if you are profiled more accurately for advertising.
Meta's hardware business is not Apple's. The Reality Labs division of Meta, which builds the smart glasses along with the Quest VR headsets, has lost tens of billions of dollars since it was established. The Ray-Ban Meta itself is reported to sell at or near break-even once development costs are amortised. The company is not in the face-computer business to sell Wayfarers. It is in the business to build a successor platform to the smartphone, one that does not route through the App Store toll booths of Apple and Google, and whose data flows enrich the advertising engine that still generates more than ninety-eight per cent of Meta's revenue.
In that business model, the user is never the customer in any meaningful sense. The user is the feedstock. The customer is the advertiser. This is not a moral judgement about Meta specifically. It is a straightforward reading of the company's 10-K filings with the Securities and Exchange Commission, which have described advertising as the company's overwhelmingly dominant revenue source every year since the company went public in 2012.
If that is the structure of the business, then the AI assistant running on your glasses is not, despite what the marketing suggests, a tool that belongs to you. It is a tool that belongs to the advertising engine, leased to you for the duration of the session. Its job is to be helpful enough that you keep wearing the device. Its deeper job is to generate the behavioural signal that the advertising engine requires. These two jobs are not in direct conflict most of the time, which is why the device feels like a gift rather than an extraction. But when they do conflict, which job wins is not, structurally, your decision.
The most disorienting feature of the smart-glasses bargain is the asymmetry between what the wearer learns about the world and what the world learns about the wearer. This is the asymmetry that Zuboff's book returns to again and again, and it is sharper here than in any previous consumer device.
When you ask your glasses to translate the menu in Lisbon, you receive a translated menu. The exchange feels even: you give a question, you get an answer. But the answer is not the whole of what you received, and the question is not the whole of what you gave. You also received an implicit model of what the assistant thinks a menu is, what it thinks a translation is, and what it thinks you wanted. And you also gave the image of the menu, the audio of your voice asking, the location of the restaurant, the time of day, the fact that you are travelling, the inference that you do not speak Portuguese, the further inference that you are probably eating alone or in a small group, and the ability to fold all of these data points into a model of you that will be consulted the next time you or someone like you makes a similar request.
The assistant becomes, over time, quite good at predicting what you will want. This is usually experienced as magical. It is in fact the visible surface of a much larger iceberg of inference, and the rest of the iceberg is not yours. It is the company's. It is the model's. It is the advertising engine's. You do not get a copy of it. You cannot audit it. You cannot request deletion in any form that the system cannot reconstruct from adjacent data. When Meta deletes your account, under the terms of its current privacy policy, it does not delete the training signal your data contributed to the model. Training signal is considered, for legal purposes, to have been absorbed into the weights of a general-purpose system, and general-purpose systems are not subject to individual deletion requests under any currently enforced reading of GDPR. The UK ICO and the European Data Protection Board have both issued statements acknowledging this as an open question. It has not been closed.
So the bargain, in its cleanest form, is this. You hand over a continuous stream of everything you see and hear and many of the things you feel. In exchange, you receive a helpful assistant that is measurably less knowledgeable about you than the model behind it is, and whose helpfulness is calibrated not by your interests alone but by the commercial interests of the company that built it. The asymmetry is not a bug. It is the feature that makes the economics work.
It is possible, in principle, to build AI smart glasses whose bargain with the wearer is symmetrical, or at least less grotesquely asymmetrical. The ingredients are known. On-device processing, so that the visual and audio streams never leave the frames unless the wearer explicitly sends them. Local storage under the wearer's cryptographic control. A clear visible indicator that the rest of the world can recognise as reliably as a red recording light on a television camera. Opt-in rather than opt-out data sharing. A legal structure in which training-corpus contribution is an affirmative choice compensated in some meaningful way rather than a default buried in the settings. An audit mechanism that allows both wearers and bystanders to know what was captured and what was done with it.
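As a sketch of what the opt-in, audit-first design could look like in software, here is a hypothetical on-device policy object. Every name and default below is illustrative, not any vendor's API; the point is only that the restrictive setting is the default, and that every capture leaves an auditable trace regardless of whether anything leaves the device.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CapturePolicy:
    """Hypothetical policy for the 'fair glasses' ingredients above.
    Everything defaults to the most restrictive setting; sharing is
    opt-IN, never opt-out."""
    cloud_upload: bool = False           # streams stay on-device by default
    training_contribution: bool = False  # affirmative choice, not a buried toggle
    recording_indicator: bool = True     # hardware light, not software-disableable

@dataclass
class AuditRecord:
    """One entry in the trail wearers (and bystanders) could inspect."""
    captured_at: str
    stream: str        # "visual" | "audio" | "motion"
    left_device: bool
    purpose: str

class Device:
    def __init__(self, policy: CapturePolicy):
        self.policy = policy
        self.audit_log: list[AuditRecord] = []

    def capture(self, stream: str, purpose: str) -> AuditRecord:
        # Data leaves the frames only if the wearer opted in; the audit
        # record is written either way, so "what was captured, and did
        # it go anywhere?" is always answerable after the fact.
        rec = AuditRecord(
            captured_at=datetime.now(timezone.utc).isoformat(),
            stream=stream,
            left_device=self.policy.cloud_upload,
            purpose=purpose,
        )
        self.audit_log.append(rec)
        return rec

glasses = Device(CapturePolicy())
entry = glasses.capture("visual", "translate menu")
```

Nothing here is technically demanding, which is the point of the paragraph above: the barrier is commercial, not engineering.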
None of these ingredients is technically exotic. Several of them have been demonstrated in research prototypes and niche enterprise products. What they lack is a commercial sponsor of sufficient scale to ship them at consumer price points. Apple, whose business model could in principle support such a device, has so far held back from mass-market AI glasses, although the Vision Pro headset and the rumoured lightweight glasses project widely reported in 2024 and 2025 suggest the company is circling the category. If Apple ships, and ships with a privacy-centric design consistent with its iPhone positioning, the competitive pressure on Meta and the rest of the field will be substantial. If Apple does not ship, or ships something that compromises its stated principles, the window for a fair version may close before it opens.
There are also regulatory interventions that could force the shape of the bargain. A mandatory hardware recording indicator, visible at a defined distance under defined lighting conditions, would at least give bystanders a fighting chance of knowing they were being recorded. A prohibition on the use of bystander-captured data for training general-purpose models would remove the most egregious asymmetry. A requirement that terms of service be expressed in a form comprehensible to a non-lawyer at the point of purchase, rather than buried inside a forty-page document, would restore some fragment of meaningful consent. None of these interventions are unprecedented. All of them have been proposed, in various forms, by regulators and academics working on wearable privacy over the past decade. None of them have been implemented at the scale the problem requires.
Return, for a moment, to the moment at the beginning of this piece. You are standing in front of a shop window, wearing your new glasses, and you catch your reflection. You look, more or less, like yourself. And yet something has shifted. The reflection is not only yours anymore. It is also, in a small but non-trivial way, the property of a company you have a contract with, whose terms you have not fully read, whose obligations to you are narrower than its claims on you, and whose servers will hold a record of this moment long after you have forgotten it.
The question of whether you are the user or the product does not have a single answer, because the answer changes with each function the device performs. When the glasses translate a menu for you, you are the user. When the capture of that translation trains the next version of the model, you are the product. When the ambient audio sweep picks up the voice of the stranger at the next table, that stranger is neither user nor product but raw material, whose participation in the transaction was not asked and could not be refused. These three roles coexist inside the same hardware, in the same second, on the same face, and the software does not distinguish between them because the software does not need to. The business model is indifferent to the distinction. All three roles generate the signal it requires.
What the wearer can still control, and what the framework of this argument tries to make legible, is the conscious recognition of which role they are in at any given moment. That recognition does not undo the bargain. But it does restore something the marketing language works very hard to suppress, which is the sense that a bargain is being struck at all. The glasses, whatever else they are, are not neutral. The LED on the rim is not decorative. The assistant that knows your name is not your friend. The frames are a piece of commercial infrastructure, worn on the most personal surface of the body, and the question of whose infrastructure it really is has not yet been answered in any way the wearer should find comforting.
The honest posture, until the answer is clearer, is the posture of someone who has agreed to a deal they do not fully understand, with a counterparty whose interests are not aligned with theirs, in a legal environment that has not caught up with the technology, surrounded by people who did not sign the contract and cannot see its terms. That is not a reason to throw the glasses in the nearest bin. It is a reason to take them off occasionally. To notice, when you put them back on, that the act of putting them on is an act with consequences beyond your own convenience. To remember that the second skin you are wearing is not only yours. And to treat the quiet hum of its intelligence, if you listen for it, as a reminder that in the oldest bargain of the attention economy, the party who pays nothing and receives something is not always the party who thinks they are getting the better deal.
You are the user. You are the product. You are, most of the time, both at once. And the frames on your face, beautiful as they are, are not only yours.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
Listen to the free weekly SmarterArticles Podcast
from
Roscoe's Story
In Summary:
* Starting the night prayers early tonight so I can work through them with focus and at a meditative pace. After the prayers I plan to put these old bones to bed early so I can wake early tomorrow morning and get a good start on Thursday's chores.

Prayers, etc.:
* I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
* Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics:
* bw = 231.04 lbs.
* bp = 145/84 (70)

Exercise:
* morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet:
* 06:30 – 1 banana
* 08:00 – 1 meat-filled breakfast taco
* 12:45 – fried chicken, cole slaw, mashed potatoes, apple pie, 1 little cookie
* 15:40 – 1 fresh apple

Activities, Chores, etc.:
* 05:00 – listen to local news talk radio
* 06:00 – bank accounts activity monitored
* 06:15 – read, write, pray, follow news reports from various sources, surf the socials, nap
* 10:00 – file correspondence
* 10:30 – load weekly pill boxes
* 12:45 to 13:45 – watch old game shows and eat lunch at home with Sylvia
* 14:00 – began following MLB game, Toronto Blue Jays vs Los Angeles Angels
* 17:00 – and... the Angels win, final score 7 to 3
* 17:30 – following news reports on OAN

Chess:
* 08:37 – moved in all pending CC games
from
🌐 Justin's Blog
Actually, I gave two.

A couple of weeks ago, I had the opportunity to guest lecture at the college I went to for undergrad. It was actually the second time I had the privilege, and this time it was on my favorite subject: marketing & advertising.
I get so energized when talking about this kind of stuff. I guess there's no better way to describe it than just plain old fun for me.
Admittedly, something is kind of lost when you're having the class virtually on Google Meet, but I think I managed well nonetheless. Even got some laughs out of the notoriously difficult-to-entertain Gen Z.
In the future, maybe when I'm in my 50s, I could see myself teaching in higher education to pass on the insights and skills that have helped me in my life.
But not yet. I still have things I want to do!
#personal
from An Open Letter
Dote – Robin Callaway
Yesterday I cramped, I think, and I remember thinking so vividly of the pain. More importantly, I thought of how I let it pass, how I sat and endured it. That’s it. Nothing else but to stop pushing and let it happen. I don’t fear that it will never pass, or that the muscle will tear, or that it’s some big massive problem I need to fix; it’s just something transient. I don’t push myself or freak out much, but rather do whatever I can to minimize the pain and let it settle. Then, after a bit of enduring it if it’s bad, once it gets quieter, to the point where I’m just afraid to see if it rears back up, I gently begin to test. I still vividly remember the pain, but I know that eventually it goes away, and I just need to test whether I’ve hit that point yet. And if I have, I can softly push a bit more and more, all while being gentle, small massages on the pain points to acknowledge them and hear them out. But I don’t need to obey the signals of pain, and often, after being heard and getting to speak, the cramp fades out, and I can tenderly resume life.
One of the ruthlessly efficient things depression does is convince me it is all there is: if I do not change something, it will reside permanently. It swears by this so violently that it pushes my hand toward desperation, and I try to massage it, to fix my life in the ways I think it needs. And when I have done everything I see in my control, pressed the buttons and flipped the levers I can see, and nothing changes, that is when the last trigger I can click floats back into my head and sits there as a comfortable option. It’s something I at least feel in control of, because otherwise I’m trapped in an infinite hell with no escape.
But this could just be a lie it tells me, overplayed, swearing by its residency. It is more like a cramp than it wants me to believe. Maybe I just need to be gentle with myself and not try to convince myself I’m not in incredible pain; it’s more just a bleeding out, or a suffocation, that I need to endure. And I can endure it because I know it will end. Funnily enough, I won’t even remember it after it ends. So I need to be a bit kind to myself and not do things that will make it worse, the same way I shouldn’t try to walk on or flex the muscle while it needs to be heard. I can almost feed it empathy by acknowledging the sweet moments in life I give it, similar to how grief needs to be fed before it subsides. And so I’m here with a beautiful view on the stairwell, listening to the new album I found that is incredible, and I’m not really happy. I feel tired, fogged, exhausted, drained, and empty. And it’s okay, because this will be part of the meal I feed depression for it to subside. And I will be kind to it, since I owe it for a lot of the blessings I have now. Adversity causes growth, and so I am grateful for that. And I will endure this.
from
Roscoe's Quick Notes

This afternoon's game of choice has the Toronto Blue Jays playing the Los Angeles Angels. The game has just started, and in the top of the first inning there is no score yet. The radio call of this game is provided by Sportsnet 590 The FAN, Canada's leading all-sports radio station.
And the adventure continues.
As a teen, I’d leave the TV on while writing, studying, and sleeping. It’s a terrible habit, and it has stuck with me ever since. Instead of TV, now it’s YouTube. But at least this habit has lessened over the years.
I can write without distractions for at least fifteen minutes. Then I’ll watch something on YouTube for a few minutes. I’ll write again and repeat the process. It’s the best system for me.
How about you? Is there some bad habit you do whenever you write? Let me know.
#writing #habit #tv #YouTube