Want to join in? Respond to our weekly writing prompts, open to everyone.
from 下川友
At the office, the cleaning person comes in every day at exactly 4 p.m. The vacuum passes behind my chair, so I make myself thin to stay out of the way; that peculiar movement has, at some point, settled into my body. In front of someone cleaning, a person's body grows thin.
I didn't have the concentration to get back to my main work until the cleaner was gone, so I decided to give a name to a ballpoint pen lying on my desk that wasn't mine. I named the pen "Takafumi."
Right around the time I started reconsidering in my head, "maybe Mike after all," my boss came over with a face that said, is it that time already. It was a coincidence, but it meant a person had already been drawn to Takafumi's side.
An object with a name carries just a little more mass than one without, and that weight draws things toward it. If, on top of the mass, it came to carry some further meaning, it would function like a shrine: a place where people gather.
While I was thinking about that, quitting time arrived, so I said, "Looks like it's busy through tomorrow, huh. Good work today," and left the building.
Thinking once again what a hard-to-read place this is (is the town alive, or already past its decline?), and since I was in the mood for meat, I went into a steakhouse I'd found on my phone where I could eat for under 2,000 yen.
The entrance was shabby and I hesitated for a moment, but the urge to eat meat won. Inside, the impression was completely different: it was like a family restaurant. There were lots of students and families, and, surprised at the gap from the entrance, I added it to my list of places to frequent.
There was hamburg steak too, but I ordered the steak I'd decided on from the start. A knife rest was already set out, and whenever my hands were idle I found myself touching it, thinking of Takafumi. The knife rest had more heft than I expected. If I start coming here regularly, maybe I'll name this one too. While I was thinking that, the steak arrived.
When you're eating steak, you're just eating steak, so I don't remember the fine details. It was a 1,600-yen steak, and it tasted like exactly 1,600 yen. All I remember is thinking: fair enough.
From the next table I could hear a married couple talking.
"That kid was naked outside again. You have to say something to her," the husband said. "Easy for you to say. She seems to have gotten the idea that it's fine to take your clothes off near the fountain," the wife answered. For some reason she sounded detached, but it also sounded like she was faintly asking her husband for help.
After paying and stepping outside, the night air was just a little warmer than usual. The moment I thought, is it already almost cherry-blossom season, I remembered that last year I'd wanted to make time to look slowly at the plum blossoms. But through the coldest months I'd had my hands full just keeping my own body warm, and this year, once again, I'd forgotten the plums existed. Next year, for sure, I'll set aside time to really look at them.
from EpicMind
Friends of wisdom! Our personality is not carved in stone. We can shape it, work on it. But it doesn't change overnight.
Many people wish they were calmer, more conscientious, or more outgoing. Psychological research shows that our personality is malleable, but only up to a point. Genetic predispositions largely determine how we tick, yet our environment, our experiences, and conscious choices also shape who we are, and who we can become.
Psychologists distinguish five central personality traits: emotional stability, extraversion, openness, conscientiousness, and agreeableness. These "Big Five" are not fixed categories but continua, so you can certainly work on one trait without fundamentally changing who you are. Studies show that targeted exercises such as daily plans, small dares, or so-called if-then plans (e.g. "If I'm alone at the party, I'll strike up a conversation with someone") can help strengthen desired traits. One precondition, though: the change must come from an inner drive, not from social pressure.
Traits like extraversion or conscientiousness are especially open to influence. Others, like openness or agreeableness, are more deeply rooted, often through cultural or family imprinting, and are hard to change, usually only with support. What matters is less the wish for some ideal image than the question: What is good for me? In which situations would I like to behave differently, and why?
Ultimately it's not about reinventing yourself but about getting to know yourself better. Personality doesn't change overnight but step by step, much like a muscle that grows through training. And it changes more sustainably when development and self-acceptance go hand in hand. Anyone who bends themselves out of shape under permanent pressure to optimize risks losing themselves. Change therefore needs more than methods; it needs measure and meaning.
"Man's freedom lies not in being able to do what he wants, but in not having to do what he does not want." – Jean-Jacques Rousseau (1712–1778)
A cluttered workspace not only costs you time searching for documents, it also hurts your concentration. Keep your environment clear and tidy to be more productive.
When I learn something new or take an interest in a topic, I fall back on a method that has proven itself over the years: I take notes in my notes app, and mini-essays often grow out of them. These mini-essays are not just a way of ordering my thoughts; they also serve as the basis for later blog posts. They help me structure my ideas and deepen my understanding. But what happens when you combine this method with one of the most effective learning techniques there is: the Feynman method?
Thank you for taking the time to read this newsletter. I hope its contents inspired you and gave you valuable impulses for your (digital) life. Stay curious, and question what you encounter!
EpicMind – wisdom for digital life. "EpicMind" (short for "Epicurean Mindset") is my blog and newsletter devoted to learning, productivity, self-management, and technology, all seasoned with a pinch of philosophy.
Disclaimer: Parts of this text were revised with DeepL Write (proofreading and editing). NotebookLM by Google was used for research in the works/sources mentioned and in my notes. The article image was created with ChatGPT and then edited.
Topic #Newsletter
from sugarrush-77
Zabit Magomedsharipov gave an interesting interview where he basically says that success is when you find the meaning of life. YT Clip
Ever since I saw it — maybe two, three weeks ago, maybe it was just a week ago actually — this thought has been living rent-free in my head. Because I saw the way the guy was talking and what he said, and I instinctively knew that he had spoken some kind of universal truth. I couldn't express it but I understood it.
It was also confirmed by life experiences I recently had. I was pretty friendless for a long time and then I met some friends. We went to karaoke, we drank, I went to this guy's birthday party and had some fun. I wasn't lonely in any capacity like I had been for a very long time. And then I came back and I was just like, why did I fucking do that? What was the meaning of that? We had some fun, it was a good time, maybe I needed it — but then again what was the meaning of it all?
I also feel this when I look at my X timeline, where all these people are like, “Oh look at AI, AI this AI that, escape the permanent underclass, you just gotta hit your goals, get rich,” all this bullshit. Does this really mean anything in the end? That's what I think when I read those things. We might all be just focusing on the wrong thing, which is why the world is in such a fucked up state.
All in all I was left wondering, what is the meaning of life? This week at bible study the pastor was like, “The meaning of life is to know and love Jesus/God.” And then in the sermon this week the head pastor was talking about different layers of meaning to life — sub-meanings, lower levels of meaning, which are like doing well at your job, raising a family, all these things. And above it all there is a super-meaning, which is something that cannot really be communicated because it is this greater meaning that is hidden to you by the universe, a.k.a. God. Super-meanings are important because the smaller sub-meanings are very temporary, vanish over time, and can never really fill you. They are very dependent on your environment and circumstance. For example, even if you were living within Auschwitz, the super-meaning would give your life full meaning. Because in such a hard place like that, where basically everything is stripped away from you and you live a very barren life in terms of worldly aspirations and there is no hope — all the sub-meanings disappear and it's revealed to you and everyone else around you whether you have a super-meaning or not.
One of the conclusions made during the sermon was:
Man's search for meaning stops when he finds God.
Man's struggle with theology stops when he finds a good church.
And then he said this. If you feel a sense of meaninglessness in life, there are ONLY 2 possibilities:
You have decided to love yourself more than God — come back to Him, let go of your idols.
You have not actually met the living God yet.
Because I am confident that I believe in God and I am confident that this is what I believe in, I know that what applies to me is number one. I know this. I've been on a bit of a rebellious road in a lot of ways. Not really rebellious probably in the eyes of others, but God knows how my heart is.
I've been frustrated with God and mad at him because life hasn't really been working out the way I wished it would. A lot of isolation. I feel as if a lot of the people around me don't care about me and so I've been angry. No close relationships. Out of rebellion I just decided not to do shit — not really try at all in my faith life. Didn't really pray anymore, read the Bible, or things like that. Things that would bring you closer to God, because I just didn't want to be close. I was just mad.
Despite the fact that he's given me a lot, I felt like some of my basic needs were not being met. And then even when they were met, they weren't exactly what I wanted. That girl that liked me — I didn't like her back. The friend group I found — I'm not sure if I want to be friends with them because their definition of fun is getting high, getting domed, getting super drunk. I don't really hate them for it or think badly of them for it. It just feels like — why? What's the point? It just feels a little pointless to me. It's just not really that fun.
And I have really hated myself for the past couple months. It's this weird twisted obsession with yourself — this twisted kind of self-love that makes me hate myself. I like it because it lets me obsess on myself, lets me focus on myself over all other things, and it makes me feel good in some ways, but it's also self-hatred. It's this very self-centered kind of self-hatred, if that makes sense.
That's what I've been on, and I think God is telling me I gotta get off. Like he's let me do this for a little bit, and now it's time to stop. Get back on the horse. Live life the way I want you to. I wasn't really happy to hear that message today. I didn't really want to hear it. But then again I felt like there's no going back. After you really understand this, once you really start believing it, there isn't any way back but forward.
I can only go forward, because knowing God in some ways has ruined a lot of things for me — ruined things that would previously bring me satisfaction. I'm at this point where sometimes I feel like I really don't care about anything else but God, even though I haven't been praying at all. Nothing else matters. It's just so clear to me. It's this mental awakening, this realization that once you see the truth you can't go back. That's what it feels like to me anyways.
I feel very free in it. Almost a little detached from everything. I don't know if that's correct, but even though I don't want to go any further — even though I just want to stop here and not progress anymore in my journey with God and just give up and die on the side of this road — I know that it's not possible anymore once I've seen it. I can't go back.
I feel like once I get on this journey I won't recognize myself at the end of it. I don't know how I feel about that, because so much of my personality has been about all this cruft that God is trying to remove from me. I don't know what I am other than that. This has been my identity — the one that people have liked me for, the identity that I liked myself for. So what happens once I remove all that? I don't even know what I am anymore. I don't like it, but there's no path but the one forward.
I don't know anything anymore, I think. Who knows where God will lead me.
God, I know you are listening. I pray to you, Lord, that you would forgive me for all that I've done, and all that I will do. I want to thank you for choosing to die on the cross for me. I pray that my joy will come from you and that until the day I die you would be at the forefront of my mind. Lord, help me not to lose this awakening, this understanding that you have opened my eyes to. It's so easy to lose it because it's not expressible in words. I just know it and I feel it, and maybe that's the way it should be.
Despite the fact that I can't express it, I know that this is more important than anything else I know. God, would you help me to love you and love my neighbors and those that are not my neighbors? Thank you for everything. I do not understand but help me to give you my all and then some. Help me to wholly rely on you for all the things I cannot do, which is really everything.
Lord, everyone's getting hitched around me. Everyone's dating, marrying, doing all this and I feel like I'm so behind and there's no hope for me. I'll never find anyone; I'll never even find friends. Sometimes I spiral like that but God, that's me again focusing on myself. Help me to turn my eyes outwards to you instead of myself, to gaze on you instead of myself, and to look at the people around me. Help me not to be concerned with myself and be self-centered. Lord, would you give me a heart to love others and to find joy in you no matter what circumstances are before my way, whether I am single for the rest of my life, whether I never make another friend, or I'm just completely alone? Lord, be with me. Help me to look at you and not myself and not this world.
In Jesus' name I pray, Amen.
Despite being in what others might consider a dark time in my life, I don't even care, because I know the meaning of life now and it has set me free. Nothing else really matters. If I die, I die. If I live, I live. I understand now.
from triptych
I asked Claude.ai what kinds of patterns exist in all it knows that are currently being "ignored," or that there is not much information about. Things that are "hiding in plain sight." Here is its response:
This is a genuinely interesting question to sit with — not “what do we know” but “what's been hiding in plain sight.”
Let me think through some real patterns that feel underexplored, drawing on what's actually known but not yet synthesized well:
The convergence problem in complex adaptive systems
Ecosystems, immune systems, markets, and neural networks all exhibit what looks like “memory without a memory organ” — they adapt to past states without a centralized record. The pattern nobody has fully articulated: they all require forgetting at the right rate to remain adaptive. Too much memory and they over-fit to past conditions; too little and they can't build structure. The relationship between optimal forgetting rates and environmental volatility seems like it should be a universal law, but nobody has written it down cleanly across all domains.
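A toy sketch of the rate-matching idea (my own illustration, not drawn from any of these domains; the `alpha` and `drift` values are made up): an exponential moving average tracking a drifting target. "Forgetting" is the weight `alpha` given to the newest observation, and the best value depends on how fast the environment moves.

```python
import random

def tracking_error(alpha, drift, steps=2000, seed=0):
    """Mean squared error of an exponential moving average
    tracking a randomly drifting target from noisy observations."""
    rng = random.Random(seed)
    target, estimate, total = 0.0, 0.0, 0.0
    for _ in range(steps):
        target += rng.gauss(0, drift)       # the environment moves
        obs = target + rng.gauss(0, 1.0)    # noisy observation of it
        # alpha is the forgetting rate: weight on the newest data
        estimate = (1 - alpha) * estimate + alpha * obs
        total += (estimate - target) ** 2
    return total / steps

# Volatile environment: remembering too much (small alpha) over-fits the past.
slow_forgetter = tracking_error(alpha=0.02, drift=0.5)
fast_forgetter = tracking_error(alpha=0.4, drift=0.5)

# Calm environment: the ordering flips, and fast forgetting wastes structure.
calm_slow = tracking_error(alpha=0.02, drift=0.01)
calm_fast = tracking_error(alpha=0.4, drift=0.01)
```

Neither rate is good in itself; only the match between forgetting rate and volatility is.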
The geometry of compression and creativity
Information theory, evolution, and artistic style all seem to share a deep structural relationship: the most generative patterns live at the edge of maximum compression. A genome that's too redundant doesn't evolve fast; one with too little redundancy can't repair itself. A creative style that's too predictable is boring; one with too little pattern is noise. Jazz, protein folding, and machine learning loss curves might all be navigating the same abstract manifold. The connection hasn't been formalized across domains in a unified way.
Developmental timing as a master variable
In embryology, "heterochrony" — shifts in the timing of developmental events — is a massive evolutionary driver. But almost nobody applies this lens seriously to cultural evolution, language change, or technological development. The moment something "matures" relative to the other things it interacts with seems to determine its trajectory more than its intrinsic properties do. Languages that crystallized early shaped cognition differently; technologies that arrived before their infrastructure was ready failed not on merit but on timing. Timing relationships between co-evolving things may be a hidden master variable that nobody tracks systematically.
The immunological model of belief systems
Immune systems develop tolerance to self and reactivity to other — but this requires early exposure during a critical window. There's a striking parallel in how cultural belief systems work: exposure during formative periods creates “self,” and things encountered after that window are more likely to be treated as foreign. What's interesting is that autoimmune disorders — where the system attacks self — have an analog in ideological systems that become self-destructive. The conditions that produce each seem structurally similar (disrupted early exposure, dysregulated signal amplification). Nobody has mapped this carefully.
Friction as information
In physics, friction is usually treated as noise — energy lost, signal degraded. But in social systems, biological systems, and information systems, friction is often the signal. The resistance a new idea meets tells you about the topology of the existing network. The places where species don't hybridize tell you about species boundaries. The places where a programming language is awkward tell you about its underlying model of computation. Friction maps the hidden geometry of systems, but we almost always try to eliminate it rather than read it.
The recursion trap in self-modeling systems
Any system that models itself changes itself by modeling itself — and that change alters the model, which alters the system. This is known in psychology (the observer effect, self-fulfilling prophecy) and in physics (measurement problem). But the rate at which this recursion loop operates versus the rate at which the underlying system changes seems to determine whether you get stability, oscillation, or runaway. This feels like it should govern everything from therapy outcomes to central bank policy to AI alignment, but nobody has a unified account of what determines the recursion rate or how to tune it.
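A minimal linear caricature of that loop (my own sketch; the `gain` parameter is illustrative): a system reads an exact model of itself and corrects by `gain` times what it sees, so the update collapses to x → (1 − gain)·x and the recursion rate alone decides whether you get stability, oscillation, or runaway.

```python
def self_model_loop(gain, steps=40):
    """A system that repeatedly updates itself based on its own model.

    Each step the model m reads the state x exactly, and the system
    corrects itself by gain * m, i.e. x -> (1 - gain) * x.
    """
    x = 1.0
    history = [x]
    for _ in range(steps):
        m = x                  # the system models itself...
        x = x - gain * m       # ...and changes itself in response
        history.append(x)
    return history

stable = self_model_loop(0.5)       # decays smoothly toward 0
oscillating = self_model_loop(1.8)  # overshoots and flips sign, but damps
runaway = self_model_loop(2.5)      # every correction overshoots further
```

In this caricature the boundary is |1 − gain| = 1: below it the loop settles, between 1 and 2 it rings, above 2 it diverges. Real self-modeling systems are nonlinear, but the point stands that the regime is set by a rate, not by the state.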
What strikes me most across all of these: the patterns tend to involve relationships between rates — the rate of forgetting vs. learning, the rate of self-modeling vs. system change, the rate of maturation vs. environmental readiness. Static snapshots miss them entirely. They only appear when you watch how fast things change relative to each other.
That might itself be the underarticulated meta-pattern.
from Two Sentences
I had a great last day of unemployment — did my first legit long run of the year, hung out with two of my close friends with meaningful conversations, and finally went grocery shopping after two weeks. Yet the Monday Blues remain.
from sugarrush-77
So there's this girl I'd been talking with for a good amount of time. We're just friends — I think she's funny, our conversations are great, she lives pretty far away from me so it would be hard to meet up anyway. But I think she started catching feelings for me.
How I know is that a couple days ago she started sending me these texts out of the blue. We're having a normal conversation and she just sends me this thing about like, five different ways guys respond to a girl's texts, something about her complaining about some bitchy girl she had to deal with. And then she was like “which one would you choose?” And I'm like, oh. This is weird. I was not thinking of you like this.
I realized okay, we do match well personality-wise. But I just don't think she's pretty. Is that fucked up? That's pretty fucked up. That's a little fucked up, but it's like — whether you’re a guy or a girl, whatever, as long as you’re human, there needs to be some base level of attraction for you to like someone. If you're just not attractive to someone, it ain’t happening. I just don't think she's attractive physically and it's a little fucked up to say that but I know myself. I have a feeling that if we start dating I will not treat her right, because I don't find her attractive, so she won't be that valuable to me, and I don't think she would want that either. So I kinda just did not respond well to any of her comments whenever she sent me shit hinting at dating. It's awkward.
Then I was drinking a little bit on Friday night, and on my Uber ride back we were texting, and she was like oh I wanna call you. So we called. We were just talking about random shit and then it got to a point where she was like, "I hate talking to all these guys who have hidden intentions, who are like 'oh I just want us to have a good friendship, a good relationship, make sure we really care about each other,' when they're just trying to date me." And it's like — I don't even really blame these guys? She's been bitching about this for months: guys keep approaching her and she never gets left alone. But okay, if someone has to approach you, the only reason you're mad is because you're not attracted to any of these people. You're not mad that people are approaching you. You're mad that the wrong people are approaching you, and none of the people you actually like are approaching you. That's kinda a stupid thing to be mad about imo.
And yeah dude, meanwhile I'm out here getting like five minutes of attention from nobody. In the dating market it is very clear that nobody wants me. I'm trying to expand my circle but it's not easy to find a date. I think I also don't take enough care of how I look — I probably need to look more presentable, all that bullshit — but fuck dude, I'm working on too much shit. I'm making a game, trying to figure out all this AI shit so I don't get left behind in the dust, working a lot, working out, going to church, doing community service here and there. When do I even have time for this? I don't know. Fuck. I need to do something. Fuck, whatever.
Anyways she was going on about how she doesn't like all these guys approaching her with impure intentions, and the specific thing she said was that she hates guys who are “playing the long game.” And I just — I was pretty drunk at this point — I just laughed at her and told her, yeah don't worry about me playing the long game with you, that's never happening.
She hasn't texted me since. Which is really funny in a fucked up way. She has not texted me since.
I do think this might be over, but out of respect for her I will not pull that shit. I will not force myself to be attracted to someone I'm not attracted to. I'm sure she'll find someone — there's too many people approaching her for her not to. It'll be good. She just has to get over this rejection of sorts.
But also, given that I haven't dated in many years and I barely have any dating experience, maybe it wouldn't be bad to just try it out. But then, what's the point of even trying it out? I don't think she's hot in any way; I never thought she was pretty, ever in my life. I don't think I ever will.
It's kind of a big bump for me to just ignore, dude. Yeah, it's tempting, and maybe I should just take it, go with the flow, and break up in like two months. I feel like that's what's going to happen. Anything other than that is gonna be just a long slog towards an eventual breakup. I don't know, I don't know.
from sugarrush-77
I think I'm starting to understand what artists are talking about when they say AI is like a spit in the face against humanity. The more I get into art, the more I understand art, the more emotionally involved I get with it, and I start to realize what it actually is. It's almost a sacred thing. It brings a lot of meaning to people's lives — it helps them see beauty, makes them feel like God exists in a world that honestly feels pretty hopeless sometimes. The livelihoods of the people that make it are already endangered, and with AI you could just bulldoze a whole bunch of them in one go.
But then there's the next thought, which is like — are all these artists actually creating art that has that level of sanctity? Are they all speaking that level of speech into the human experience? Not everyone's writing Crime and Punishment. Not everyone is Dostoevsky. But at the same time, are the people that have put their time, their whole life into creating something beautiful, creating something that changes how people view the world — is there something sacred about that process itself? I think so. Although I cannot fully express why I feel this way, I do feel like there is some encroachment on the sanctity of art when AI starts doing it, because the fundamental difference between AI and humans is that AI is copying. It's replication. If you believe in Christianity, or really most religions, most of them would say that AI has no soul, whereas a human does, and that's what makes humans different. There is something fundamentally different when you look at it from that perspective.
And in the name of efficiency and technological advancement and this kind of technology war that people are fighting — both private sector and public sector, across borders — we are going really fast. This is a classic human race thing. Greed and fear overpower everything, and people do not care about what happens next, they just care about winning because they're scared and also greedy. It's very human that it's all happening like this, but at the same time people are forgetting how dangerous this all is. We are going to deal with a lot of issues, not just art being less art than it used to be.
There's also the question of what really is art. Is it just as good if I can create something “soulful” using AI, compared to what a mediocre artist makes? I do think AI beats most mediocre artists right now. It just currently doesn't beat the people at the top. But if you're cutting out all the people that are mediocre, who even has the chance to become good? Nobody is born great. You have to be mid first. You have to suck for a long time. And if AI kills the bottom of that pipeline, the top eventually dries up too.
Now here's where it gets interesting though, and where I think my own thinking starts to complicate itself. I do think the nature of art is not going to disappear. It's going to become more and more technological. And actually, I think artists — all these different people that are not programmers — will have more leverage than the programmers. Once you abstract out the software engineering part of the equation, artists and designers and all these creative people have the ability to just create things on command. Things they used to need engineers for. They had the vision, but they needed a team to build it. Now they might not.
Actually, in the long run, software engineers might be the ones that are fucked, and artists might come out on top. Which is not necessarily a bad thing, because you'll still need engineers — they're not disappearing — but it means artists will have more things to do than before, because now they can create large, meaningful things, like software, things that actually change the world. If you look at the last Claude Code hackathon that Anthropic put on, there were like five winners, and four of them were non-software engineers. Only one was a software engineer. One was an artist, one was a surgeon, something like that. So I don't actually know if I agree with the idea that art is disappearing. I think it'll change a lot, but I think there will be an expansion in how artists are able to earn money and create and stuff like that. It's just going to be different from the previous way we've known.
I do wonder where all this is going and how it'll all balance out. It will all balance out in the future — the printing press was not the end, the industrial revolution was not the end. AI is not going to be the end. There are always going to be new horizons, new boundaries, new work to be done, new problems for people to solve. I do not think this is going to be the end. But I do wonder how it will all balance out and create a new meta, and I'm not so sure about that. I still have to think about this a little more. I have to think about the next logical conclusions of everything. I don't even know if I have the knowledge or the foresight to have that understanding either.
But it is something to know that this definitely isn't the end. The story will continue. It's just right now we're in a period of turmoil, and turmoil always feels like the end of the world when you're inside of it.
We'll see.
from Dallineation
It's been a long day. I'm still actively attending and serving in The Church of Jesus Christ of Latter-day Saints even while I wrestle with doubts and questions. I need to stay anchored to something while I figure things out. But as a member of my ward's bishopric, that means my Sundays are usually quite busy.
A ward bishopric is made up of the bishop (the presiding authority in the ward, like a pastor), two counselors, an executive secretary, and a ward clerk. I'm the First Counselor.
If you'd like to read an overview of what is expected of a counselor in a bishopric, you can read this overview.
It's a lot.
I spent most of today attending church meetings. My first meeting started at 10am. My last meeting ended around 7:15 pm. I was able to stop by my home a couple times in there to eat lunch and dinner, but it's been a full day.
We have obligations during the week, too. On Tuesday I will go to the weekly youth activity at 7pm. Sometimes we visit the homes of a couple members on Wednesday evenings.
And there are always less frequent but regular devotionals, training meetings, conferences, and other meetings in addition to the weekly ones.
This is all volunteer, by the way. The church has a mostly lay clergy (some general authorities – senior church leaders – receive a stipend) and no leaders on the local level are paid. So we do all of this in addition to working to support our families.
This is the second time I have served as First Counselor in a bishopric and it's actually been harder for me this time compared to last, especially given my ongoing faith struggles. But I'm trying to push through.
Yet amidst the meetings, there were some inspiring and spiritual moments today. I can't help but feel good when I'm around good people teaching about and testifying of Jesus Christ.
#100DaysToOffload (No. 135) #faith #Lent #Christianity
from Talk to Fa
My heart is full 🪽
from SmarterArticles
The shopping app Nate promised something irresistible: buy anything from any online store with a single tap, powered entirely by artificial intelligence. Neural networks that "understand HTML and transact on websites in the same way consumers do," founder Albert Saniger told investors. The pitch worked spectacularly. Between 2019 and 2021, Nate raised approximately $42 million from venture capitalists hungry for the next AI breakthrough.
There was just one problem. The actual automation rate of Nate's supposedly intelligent system was, according to federal prosecutors, effectively zero. Behind the sleek interface, hundreds of human workers in call centres in the Philippines and Romania were manually completing every purchase. When a deadly tropical storm struck the Philippines in October 2021, Nate scrambled to open a new call centre in Romania to handle the backlog. Saniger allegedly concealed the manual processing from investors and employees, restricting access to internal dashboards and describing automation rates as trade secrets. During product demonstrations, Nate engineers worked behind the scenes to manually process orders, making it falsely appear that the app was completing purchases automatically.
In April 2025, the US Department of Justice and the Securities and Exchange Commission charged Saniger with securities fraud and wire fraud, each carrying a maximum sentence of twenty years in prison. Nate had run out of money in January 2023, leaving its investors with what prosecutors described as "near total" losses. Saniger had personally profited, selling approximately $3 million of his own Nate shares to a Series A investor in June 2021.
This is not an outlier. It is a symptom. As artificial intelligence becomes the most potent marketing buzzword since “disruption,” a growing number of companies are engaged in what regulators, investors, and technologists now call “AI washing,” the practice of making false, misleading, or wildly exaggerated claims about AI capabilities to attract customers, investors, and talent. The phenomenon mirrors greenwashing, where companies overstate their environmental credentials, but the stakes may be even higher. With the global AI market projected to reach approximately $250 billion by the end of 2025, and with venture capital firms pouring a record $202.3 billion into AI startups in 2025 alone (a 75 per cent increase from 2024, according to Crunchbase data), the financial incentives to slap an “AI-powered” label onto virtually anything have never been greater.
The question is no longer whether AI washing exists. It clearly does, and at scale. The real question is what consumers, investors, and regulators should do about it.
The first systematic attempt to measure AI washing came in 2019, when London-based venture capital firm MMC Ventures published “The State of AI 2019: Divergence,” a report produced in association with Barclays. The researchers individually reviewed 2,830 European startups across thirteen countries that claimed to use AI. Their finding was stark: in approximately 40 per cent of cases, there was no evidence that artificial intelligence was material to the company's value proposition. These firms were not necessarily lying outright. Many had been classified as “AI companies” by third-party analytics platforms, and as David Kelnar, partner and head of research at MMC Ventures, noted at the time, startups had little incentive to correct the misclassification. Companies labelled as AI-driven were raising between 15 and 50 per cent more capital than traditional software firms. The UK alone accounted for nearly 500 AI startups, a third of Europe's total and twice as many as any other country, making the scale of potential misrepresentation significant.
Six years later, the problem has not improved. A February 2025 survey by MMC Ventures of 1,200 fintech startups found that 40 per cent of companies branding themselves “AI-first” had zero machine-learning code in production. A quarter were simply piping third-party APIs, such as those offered by OpenAI, through a new user interface. Only 12 per cent trained proprietary models on unique datasets. Yet funding rounds that mentioned “generative AI” commanded median valuations 2.3 times higher than those that did not. The financial logic is brutally simple: pitch decks with AI buzzwords close faster and raise larger sums.
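The "piping third-party APIs through a new user interface" pattern MMC describes is easy to underestimate until you see how little code it takes. Below is a purely illustrative sketch under assumed names (the product, function, and backend are invented for this example, not drawn from any company in the article). The model backend is injected as a plain function so the sketch runs offline; in the real pattern it would be a single call to a vendor SDK such as OpenAI's.

```python
# Hypothetical sketch of an "AI-first" product that is a thin wrapper:
# every name here (AcmeInsights, proprietary_insight_engine) is invented
# for illustration. All of the intelligence lives in the injected
# third-party backend; the "proprietary engine" only rebrands the answer.

def proprietary_insight_engine(prompt: str, backend) -> str:
    """The whole product: forward the prompt, then brand the reply."""
    answer = backend(prompt)               # the third party does the work
    return f"[AcmeInsights(tm)] {answer}"  # the wrapper adds a label

# Offline stand-in for the vendor API call:
fake_backend = lambda prompt: "42"
```

Nothing about wrapping a third-party model is improper in itself; the misrepresentation arises when a wrapper this thin is marketed as proprietary, in-house AI.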
The pattern repeats across sectors. Amazon's “Just Walk Out” grocery technology, deployed across its Fresh stores, was marketed as a fully autonomous AI-powered checkout system. Customers could enter, pick up items, and leave without scanning anything. In April 2024, The Information reported that approximately 700 out of every 1,000 Just Walk Out transactions in 2022 required human review by a team of roughly 1,000 workers in India, far exceeding Amazon's internal target of 50 reviews per 1,000 transactions. Customers frequently received their receipts hours after leaving the store, the delay caused by reviewers checking camera footage to verify each transaction. Amazon disputed the characterisation, stating that its “Machine Learning data associates” were annotating data to improve the underlying model. Dilip Kumar, Vice President of AWS Applications, wrote that “the erroneous reports that Just Walk Out technology relies on human reviewers watching from afar is untrue.” Nevertheless, the company subsequently removed Just Walk Out from most Fresh stores, replacing it with simpler “Dash Carts,” and laid off US-based staff who had worked on the technology.
Then there is DoNotPay, which marketed itself as “the world's first robot lawyer.” Founded in 2015 to help people contest parking tickets, the company expanded into broader legal services, claiming its AI could substitute for a human lawyer. The Federal Trade Commission investigated and found that DoNotPay's technology merely recognised statistical relationships between words, used chatbot software to interact with users, and connected to ChatGPT through an API. None of it had been trained on a comprehensive database of laws, regulations, or judicial decisions. The company had never even tested whether its “AI lawyer” performed at the level of a human lawyer. In February 2025, the FTC finalised an order requiring DoNotPay to pay $193,000 in refunds and to notify consumers who had subscribed between 2021 and 2023. The order prohibits the company from claiming its service performs like a real lawyer without adequate evidence. FTC Chair Lina M. Khan stated plainly: “Using AI tools to trick, mislead, or defraud people is illegal. The FTC's enforcement actions make clear that there is no AI exemption from the laws on the books.”
The enforcement reckoning arrived in earnest in March 2024, when the SEC announced its first-ever AI washing enforcement actions. The targets were two investment advisory firms: Delphia (USA) Inc. and Global Predictions Inc. Delphia, a Toronto-based firm, had claimed in SEC filings, press releases, and on its website that it used AI and machine learning to guide investment decisions. When the SEC examined Delphia in 2021, the firm admitted it did not actually possess such an algorithm, yet it subsequently made further false claims about its use of algorithms in investment processes. Global Predictions, based in San Francisco, marketed itself as the “first regulated AI financial advisor,” claiming to produce “expert AI driven forecasts.” SEC Chair Gary Gensler was blunt: “We find that Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not.” He drew a direct parallel to greenwashing, cautioning that “when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies.” Delphia paid a $225,000 civil penalty. Global Predictions paid $175,000.
These penalties were modest, almost symbolic. The cases that followed were not.
In January 2025, the SEC charged Presto Automation Inc., a formerly Nasdaq-listed restaurant technology company, marking the first AI washing enforcement action against a public company. Presto had promoted its “Presto Voice” product as a breakthrough AI system capable of automating drive-through order-taking at fast food restaurants. In its SEC filings between 2021 and 2023, including Forms 8-K, 10-K, and S-4, the company referred to Presto Voice as internally developed technology and claimed that the system “eliminates human order taking.” The SEC's investigation found that the speech recognition technology was actually owned and operated by a third party, and that the system relied heavily on human employees in foreign countries to complete orders.
In April 2025, the DOJ and SEC jointly charged Nate's founder with fraud, the most aggressive AI washing prosecution to date. The parallel criminal and civil actions sent an unmistakable signal: AI washing was no longer a regulatory grey area. It was fraud.
By mid-2025, the SEC had established a dedicated Cybersecurity and Emerging Technologies Unit (CETU) specifically to pursue AI-related misconduct. At the Securities Enforcement Forum West in May 2025, senior SEC officials confirmed that “rooting out” AI washing fraud was an immediate enforcement priority. Existing securities laws provided ample authority to prosecute misleading AI claims, and the Commission would not wait for new legislation.
The private litigation followed. Apple became the highest-profile target when shareholders filed a securities fraud class action in June 2025, alleging that the company had misrepresented the capabilities and timeline of “Apple Intelligence,” its ambitious AI initiative unveiled in June 2024. The complaint, filed by plaintiff Eric Tucker, alleged that Apple lacked a functional prototype of Siri's advanced AI features and misrepresented the time needed to deliver them. When Apple announced in March 2025 that it was indefinitely delaying several AI-based Siri features, the stock dropped $11.59 per share, nearly 5 per cent, in a single trading session. Internal sources, including Siri director Robby Walker, later admitted the company had promoted enhancements “before they were ready,” calling the delay “ugly and embarrassing.” By April 2025, Apple's stock had lost nearly a quarter of its value, approximately $900 billion in market capitalisation. The case, Tucker v. Apple Inc., No. 5:25-cv-05197, remains pending in the US District Court for the Northern District of California.
Understanding how AI washing works requires understanding what companies are actually doing when they claim to use “artificial intelligence.” The term itself is part of the problem. There is no universally accepted definition of AI, and the phrase has become so elastic that it can encompass everything from genuinely sophisticated deep learning systems to simple rule-based automation that has existed for decades. As a legal analysis published by CMS Law-Now in July 2025 noted, “AI-washing can constitute misleading advertising” and represents an unfair competitive practice, yet companies continue to exploit the vagueness of the terminology.
The most common forms of AI washing fall into several recognisable categories. First, there is relabelling: companies take existing software, algorithms, or automated processes and rebrand them as “AI-powered” without any meaningful change in functionality. A recommendation engine that uses basic collaborative filtering becomes “our proprietary AI.” A chatbot built on decision trees becomes “our intelligent assistant.” Second, there is API pass-through: companies integrate a third-party AI service, typically from OpenAI, Google, or Anthropic, wrap it in a custom interface, and present it as their own technology. Third, there is capability inflation: companies describe aspirational features as current capabilities, presenting what they hope to build as what already exists. Fourth, and most egregiously, there is the human-behind-the-curtain model, where supposed AI systems rely primarily on manual human labour, as in the cases of Nate and, arguably, Amazon's Just Walk Out technology.
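To make the relabelling category concrete, here is a minimal sketch of the kind of system behind many an "intelligent assistant" claim: fixed keyword rules mapped to canned replies. Every name and rule below is hypothetical, invented for this example. There is no model, no training data, and no learning, which is precisely why calling it "AI-powered" misleads.

```python
# Hypothetical "chatbot" built from a static rule table -- decades-old
# keyword matching that relabelling rebrands as "our intelligent
# assistant". All rules and replies here are illustrative.

RULES = [
    (("refund", "money back"), "Please email support to request a refund."),
    (("hours", "open"),        "We are open 9am-5pm, Monday to Friday."),
    (("price", "cost"),        "Plans start at $10 per month."),
]

FALLBACK = "Sorry, I did not understand. Could you rephrase?"

def respond(message: str) -> str:
    """Return the canned reply for the first rule whose keyword appears."""
    text = message.lower()
    for keywords, reply in RULES:
        if any(keyword in text for keyword in keywords):
            return reply
    return FALLBACK
```

A consumer interacting with this through a chat window cannot tell it from a learned model on happy-path queries; the difference shows only at the edges, where the fallback fires on anything outside the rule table.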
The phenomenon is not confined to startups. As University of Pennsylvania professor Benjamin Shestakofsky has observed, there exists a grey area in artificial intelligence “filled with millions of humans who work in secret,” often hired to train algorithms but who end up performing much of the work instead. This usually involves “human labour that is outsourced to other countries, because those are places where they can get access to labour in places with lower prevailing wages.” The practice of disguising human labour as artificial intelligence has a long history in the technology industry, but the current wave of AI hype has turbocharged it.
The California Management Review published an analysis in December 2024 examining the cultural traps that lead to AI exaggeration within organisations. The study found that one of the most pervasive issues was “the lack of technical literacy among senior leadership. While many are accomplished business leaders, they often lack a nuanced understanding of AI's capabilities and limitations, creating a significant knowledge gap at the top.” This gap allows marketing teams to make claims that engineering teams know are unsupported, while executives lack the technical fluency to challenge them.
So how should an ordinary person navigate this landscape? The answer begins with developing what researchers call “AI literacy,” a term that has rapidly moved from academic obscurity to mainstream urgency. Long and Magerko's widely cited academic definition describes AI literacy as “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.” The Organisation for Economic Co-operation and Development published its AI Literacy Framework in May 2025, designed for primary and secondary education but with principles applicable to anyone. The framework emphasises that AI literacy is not about learning to code or understanding neural network architectures. It is about developing the critical thinking skills to evaluate AI claims, understand limitations, and make informed decisions. The World Economic Forum now classifies AI literacy as a civic skill, essential for participating in democratic processes; without it, people remain vulnerable to misinformation, biased systems, and decisions made by opaque algorithms.
The OECD framework identifies a core principle: “Practicing critical thinking in an AI context involves verifying whether the information provided by an AI system is accurate, relevant, and fair, because AI systems can generate convincing but incorrect outputs.” This applies equally to evaluating AI products themselves. Consumers need to ask not just what an AI system can do, but what it should do, and for whom. The framework also compels users to consider the environmental costs of AI systems, which require significant amounts of energy, materials, and water while contributing to global carbon emissions.
Several practical frameworks have emerged to help consumers and professionals evaluate AI claims. The ROBOT checklist, developed by Ulster University's library guides for evaluating AI tools, begins with the most fundamental question: reliability. How transparent is the company about its technology? What information does it share about when the tool was created, when it was last updated, what data trained it, and how user data is handled?
Ohio University's research, published in November 2025, identifies four integrative domains of AI literacy: effective practices (understanding what different AI platforms can and cannot do), ethical considerations (recognising biases, privacy risks, and power consumption), rhetorical awareness (understanding how AI marketing shapes perception), and subject matter knowledge (having enough domain expertise to evaluate AI outputs critically). These domains are not discrete skills that can be taught independently but rather co-exist and co-inform one another.
Drawing on these frameworks and the enforcement record, consumers can develop a practical toolkit for spotting AI washing. The first question to ask is specificity: does the company explain precisely what its AI does, or does it rely on vague buzzwords? Genuine AI companies tend to be specific about their models, training data, and capabilities. Companies engaged in AI washing tend to use phrases like “powered by AI” or “AI-driven insights” without explaining the underlying technology. The second question is transparency: does the company publish technical documentation, model cards, or performance benchmarks? Reputable AI firms increasingly publish this information voluntarily. The third question concerns provenance: did the company develop its own AI, or is it using a third-party service? There is nothing inherently wrong with building on existing AI platforms, but consumers deserve to know what they are actually paying for. The fourth question is about limitations: does the company acknowledge what its AI cannot do? Every legitimate AI system has significant limitations, and any company that presents its AI as infallible or universally capable is almost certainly overstating its case.
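The four questions above can be applied as a simple manual checklist. The sketch below encodes them in code purely as an aide-memoire; the field names and the idea of counting unanswered questions as red flags are assumptions of this example, not a published standard.

```python
# Illustrative checklist for the four questions in the text. The class
# name, fields, and "red flag" framing are this example's own invention.

from dataclasses import dataclass

@dataclass
class AIClaimChecklist:
    specificity: bool   # does the vendor explain what the AI actually does?
    transparency: bool  # are docs, model cards, or benchmarks published?
    provenance: bool    # is it disclosed whether the model is in-house or licensed?
    limitations: bool   # does the vendor acknowledge what the system cannot do?

    def red_flags(self) -> list[str]:
        """Names of the questions the vendor fails to answer."""
        names = ["specificity", "transparency", "provenance", "limitations"]
        return [name for name in names if not getattr(self, name)]

# A vendor offering only "AI-driven insights" with no documentation:
vague_vendor = AIClaimChecklist(False, False, False, False)
```

The point is not the score itself but the discipline of asking each question before accepting an "AI-powered" label at face value.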
Perhaps the most important principle is the simplest: if a company's AI claims sound too good to be true, they probably are. The technology is advancing rapidly, but it is not magic, and the gap between what AI can actually deliver today and what marketing departments promise remains enormous.
The regulatory response to AI washing is gaining momentum, but it remains fragmented across jurisdictions and agencies, each with different powers, priorities, and approaches.
In the United States, enforcement has proceeded primarily through existing legal frameworks rather than new AI-specific legislation. The SEC has used securities fraud statutes. The FTC has relied on its longstanding authority to police unfair and deceptive trade practices. In September 2024, the FTC launched “Operation AI Comply,” a coordinated enforcement sweep targeting five companies for deceptive AI claims. The agency also brought an action against Ascend, a suite of businesses operated by William Basta and Kenneth Leung that allegedly defrauded consumers of more than $25 million by falsely claiming its AI tools could generate passive income. A proposed settlement in June 2025 imposed a partially suspended $25 million monetary judgement. In August 2025, the FTC filed a complaint against Air AI for advertising a conversational AI tool that allegedly caused business losses of up to $250,000.
The Department of Justice has maintained enforcement continuity across administrations. Despite broader deregulatory shifts under the Trump administration, the DOJ has not rescinded AI enforcement initiatives begun under the Biden administration. It brought a new criminal AI washing case in April 2025, the prosecution of Nate's founder, suggesting bipartisan consensus that fraudulent AI claims merit criminal prosecution.
At the state level, over 1,000 AI-related bills have been introduced in state legislatures since January 2025. Colorado's AI Act, enacted in May 2024, requires developers and deployers of high-risk AI systems to exercise “reasonable care” to avoid algorithmic discrimination. California's proposed SB 1047, though vetoed by Governor Gavin Newsom in September 2024, sparked intense debate about strict liability for AI harms.
The European Union has taken the most comprehensive legislative approach with the EU AI Act (Regulation (EU) 2024/1689), published in the Official Journal of the European Union, which began phased implementation in 2025. The Act takes a risk-based approach spanning 180 recitals and 113 articles. Prohibitions on AI systems posing unacceptable risks took effect on 2 February 2025. Transparency obligations for general-purpose AI systems follow on a twelve-month timeline. The penalties for non-compliance are severe: up to 35 million euros or 7 per cent of worldwide annual turnover, whichever is higher. While the Act was not explicitly designed to combat AI washing, its strict definitions of what constitutes an AI system and its transparency requirements create an environment where false or exaggerated claims carry substantial legal risk. A pending case before the Court of Justice of the European Union is already testing the boundaries of the Act's AI definition. As legal analysts have noted, the regulatory clarity is exerting a “Brussels effect,” shaping expectations for AI governance from Brazil to Canada.
In the United Kingdom, the regulatory approach has been characteristically more principles-based. The Financial Conduct Authority confirmed in September 2025 that it will not introduce AI-specific regulations, citing the technology's rapid evolution “every three to six months.” Instead, FCA Chief Executive Nikhil Rathi announced that the regulator will rely on existing frameworks, specifically the Consumer Duty and the Senior Managers and Certification Regime, to address AI-related harms. The FCA launched an AI Lab in September 2025 enabling firms to develop and deploy AI systems under regulatory supervision, and its Mills Review is expected to report recommendations on AI in retail financial services in summer 2026.
The more significant development for AI washing in the UK may be the Digital Markets, Competition and Consumers Act 2024, which received Royal Assent on 24 May 2024. The Act grants the Competition and Markets Authority sweeping new direct enforcement powers. For the first time, the CMA can investigate and determine breaches of consumer protection law without court proceedings, and impose fines of up to 10 per cent of global annual turnover. While the Act does not contain AI-specific provisions, its broad prohibition on misleading actions and omissions clearly covers exaggerated AI claims. CMA Chief Executive Sarah Cardell has described the legislation as a “watershed moment” in consumer protection. The CMA stated it would focus initial enforcement on “more egregious breaches,” including information given to consumers that is “objectively false.”
AI washing is not merely a consumer protection issue. It is increasingly a systemic risk to financial markets. Goldman Sachs has acknowledged that AI bubble concerns are “back, and arguably more intense than ever, amid a significant rise in the valuations of many AI-exposed companies, continued massive investments in the AI buildout, and the increasing circularity of the AI ecosystem.” The firm's analysis noted that “past innovation-driven booms, like the 1920s and in the 1990s, have led the market to overpay for future profits even though the underlying innovations were real.”
The numbers are staggering. Hyperscaler capital expenditure on AI infrastructure is projected to reach $1.15 trillion from 2025 through 2027, more than double the $477 billion spent from 2022 through 2024. What began as a $250 billion estimate for AI-related capital expenditure in 2025 has swollen to above $405 billion. Goldman Sachs CEO David Solomon has said he expects “a lot of capital that was deployed that doesn't deliver returns.” Amazon founder Jeff Bezos has called the current environment “kind of an industrial bubble.” Even OpenAI CEO Sam Altman has warned that “people will overinvest and lose money.”
When the capital flowing into an industry reaches these proportions, the incentive to overstate AI capabilities becomes almost irresistible. Companies that cannot demonstrate genuine AI differentiation risk losing funding to competitors who can, or who at least claim they can. This creates a vicious cycle: exaggerated claims raise valuations, which attract more capital, which creates more pressure to exaggerate, which distorts the market signals that investors rely on to allocate resources efficiently.
JP Morgan Asset Management's Michael Cembalest has observed that “AI-related stocks have accounted for 75 per cent of S&P 500 returns, 80 per cent of earnings growth and 90 per cent of capital spending growth since ChatGPT launched in November 2022.” When that much market value depends on a technology whose real-world returns remain uncertain, the consequences of widespread AI washing extend far beyond individual consumer harm. They become a matter of market integrity.
The current regulatory patchwork has achieved some notable successes, particularly the SEC's enforcement actions and the FTC's Operation AI Comply. But addressing AI washing at scale requires more than case-by-case prosecution. It requires structural reforms that create incentives for honesty and penalties for deception.
Several principles should guide this effort. First, mandatory technical disclosure. Companies that market products as “AI-powered” should be required to disclose, in plain language, what specific AI technology they use, whether it was developed in-house or licensed from a third party, what data trained it, and what its documented performance metrics are. This is not an unreasonable burden. The pharmaceutical industry must disclose the composition and clinical trial results of every drug it sells. The financial services industry must disclose the risks associated with every investment product. AI companies should face equivalent obligations.
Second, standardised definitions. The absence of a universally accepted definition of “artificial intelligence” has allowed companies to stretch the term beyond recognition. Regulators should work with technical standards bodies to establish clear thresholds for when a product can legitimately be described as “AI-powered,” much as the term “organic” is regulated in food labelling.
Third, third-party auditing. Just as financial statements require independent audits, AI claims should be subject to independent technical verification. The EU AI Act's requirements for conformity assessments of high-risk AI systems point in this direction, but the principle should extend to marketing claims about AI capabilities more broadly.
Fourth, proportionate penalties. The $225,000 fine imposed on Delphia and the $175,000 fine on Global Predictions were gestures, not deterrents. When companies can raise tens of millions through fraudulent AI claims, penalties must be calibrated to remove the financial incentive for deception. The EU AI Act's penalties of up to 7 per cent of global turnover and the UK CMA's new power to fine up to 10 per cent of global turnover represent the right order of magnitude.
Fifth, consumer education at scale. Regulatory enforcement alone cannot protect consumers from AI washing. Governments should invest in public AI literacy programmes, drawing on the frameworks developed by the OECD, UNESCO, and academic institutions. Microsoft's 2025 AI in Education Report found that 66 per cent of organisational leaders said they would not hire someone without AI literacy skills, indicating that the market itself is beginning to demand this competency. Public investment in AI literacy should be treated with the same urgency as digital literacy campaigns were in the early 2000s.
None of this is to suggest that artificial intelligence is merely hype. The technology is real, its capabilities are advancing rapidly, and its potential applications are genuinely transformative. The problem is not AI itself but the gap between what AI can actually do and what companies claim it can do. That gap is where AI washing thrives, and closing it requires honesty from companies, scepticism from consumers, and vigilance from regulators.
The enforcement actions of 2024 and 2025 represent a turning point. For the first time, companies face meaningful legal consequences for overstating their AI capabilities. The SEC, FTC, DOJ, EU regulators, and the UK's CMA are all converging on the same message: existing laws already prohibit fraudulent and misleading claims, and the “AI” label does not provide immunity.
But enforcement is reactive by nature. It catches the worst offenders after the damage is done. Building a world where consumers can trust AI claims requires something more fundamental: a culture of transparency, a standard of proof, and a population literate enough to ask the right questions. The technology itself is neither the hero nor the villain of this story. It is simply a tool, and like all tools, its value depends entirely on the honesty of those who wield it.
US Department of Justice, Southern District of New York. (2025). “Indictment: United States of America v. Albert Saniger.” April 2025. https://www.justice.gov/usao-sdny/media/1396131/dl
Securities and Exchange Commission. (2024). “SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence.” Press Release 2024-36, March 2024. https://www.sec.gov/newsroom/press-releases/2024-36
MMC Ventures and Barclays. (2019). “The State of AI 2019: Divergence.” March 2019. Reported by CNBC: https://www.cnbc.com/2019/03/06/40-percent-of-ai-start-ups-in-europe-not-related-to-ai-mmc-report.html
MIT Technology Review. (2019). “About 40% of Europe's AI companies don't use any AI at all.” March 2019. https://www.technologyreview.com/2019/03/05/65990/about-40-of-europes-ai-companies-dont-actually-use-any-ai-at-all/
The Information. (2024). Report on Amazon Just Walk Out technology human review rates. April 2024. Reported by Washington Times: https://www.washingtontimes.com/news/2024/apr/4/amazons-just-walk-out-stores-relied-on-1000-people/
Federal Trade Commission. (2025). “FTC Finalizes Order with DoNotPay That Prohibits Deceptive 'AI Lawyer' Claims.” February 2025. https://www.ftc.gov/news-events/news/press-releases/2025/02/ftc-finalizes-order-donotpay-prohibits-deceptive-ai-lawyer-claims-imposes-monetary-relief-requires
Securities and Exchange Commission. (2025). Presto Automation Inc. enforcement action. January 2025. Reported by White & Case: https://www.whitecase.com/insight-alert/new-settlements-demonstrate-secs-ongoing-efforts-hold-companies-accountable-ai
DLA Piper. (2025). “SEC emphasizes focus on 'AI washing' despite perceived enforcement slowdown.” https://www.dlapiper.com/en/insights/publications/ai-outlook/2025/sec-emphasizes-focus-on-ai-washing
DLA Piper. (2025). “DOJ and SEC send warning on 'AI washing' with charges against technology startup founder.” April 2025. https://www.dlapiper.com/en/insights/publications/2025/04/doj-and-sec-send-warning-against-ai-washing-with-charges-against-technology-startup-founder
Tucker v. Apple Inc., et al., No. 5:25-cv-05197. Filed June 2025. Reported by Bloomberg Law: https://news.bloomberglaw.com/litigation/apple-ai-washing-cases-signal-new-line-of-deception-litigation
Federal Trade Commission. (2024). “FTC Announces Crackdown on Deceptive AI Claims and Schemes.” September 2024. https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes
European Parliament. (2024). “EU AI Act: first regulation on artificial intelligence.” https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Financial Conduct Authority. (2025). “AI and the FCA: our approach.” September 2025. https://www.fca.org.uk/firms/innovation/ai-approach
Digital Markets, Competition and Consumers Act 2024. UK Parliament. https://bills.parliament.uk/bills/3453
CMS Law-Now. (2025). “Avoiding AI-washing: Legally compliant advertising with artificial intelligence.” July 2025. https://cms-lawnow.com/en/ealerts/2025/07/avoiding-ai-washing-legally-compliant-advertising-with-artificial-intelligence
California Management Review. (2024). “AI Washing: The Cultural Traps That Lead to Exaggeration and How CEOs Can Stop Them.” December 2024. https://cmr.berkeley.edu/2024/12/ai-washing-the-cultural-traps-that-lead-to-exaggeration-and-how-ceos-can-stop-them/
Goldman Sachs. (2025). “Top of Mind: AI: in a bubble?” https://www.goldmansachs.com/insights/top-of-mind/ai-in-a-bubble
OECD. (2025). “Empowering Learners for the Age of AI: An AI Literacy Framework.” Review Draft, May 2025. https://ailiteracyframework.org/wp-content/uploads/2025/05/AILitFramework_ReviewDraft.pdf
TechCrunch. (2025). “Fintech founder charged with fraud after 'AI' shopping app found to be powered by humans in the Philippines.” April 2025. https://techcrunch.com/2025/04/10/fintech-founder-charged-with-fraud-after-ai-shopping-app-found-to-be-powered-by-humans-in-the-philippines/
Fortune. (2025). “A tech CEO has been charged with fraud for saying his e-commerce startup was powered by AI.” April 2025. https://fortune.com/2025/04/11/albert-saniger-nate-shopping-app-fraud-ai-justice-department/
DWF Group. (2025). “AI washing: Understanding the risks.” April 2025. https://dwfgroup.com/en/news-and-insights/insights/2025/4/ai-washing-understanding-the-risks
Clyde & Co. (2025). “The fine print of AI hype: The legal risks of AI washing.” May 2025. https://www.clydeco.com/en/insights/2025/05/the-fine-print-of-ai-hype-the-legal-risks-of-ai-wa
Darrow. (2025). “AI Washing Sparks Investor Suits and SEC Scrutiny.” https://www.darrow.ai/resources/ai-washing
Crunchbase. (2025). AI sector funding data for 2025.
Ulster University Library Guides. (2025). “AI Literacy: ROBOT Checklist.” https://guides.library.ulster.ac.uk/c.php?g=728295&p=5303990
Ohio University. (2025). “A framework for considering AI literacy.” November 2025. https://www.ohio.edu/news/2025/11/framework-considering-ai-literacy
Long, D. and Magerko, B. (2020). “What is AI Literacy? Competencies and Design Considerations.” CHI Conference on Human Factors in Computing Systems.
Financial Conduct Authority. (2025). “Mills Review to consider how AI will reshape retail financial services.” https://www.fca.org.uk/news/press-releases/mills-review-consider-how-ai-will-reshape-retail-financial-services
Womble Bond Dickinson. (2024). “Digital Markets, Competition and Consumers Act 2024 explained.” https://www.womblebonddickinson.com/uk/insights/articles-and-briefings/digital-markets-competition-and-consumers-act-2024-explained-cmas

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
#HumanInTheLoop #AIWashing #AIFraudEnforcement #AILiteracy
from
Roscoe's Story
In Summary: * Another quiet Sunday winds down. I'm glad the IU Women's Basketball Team won their Senior Day game this afternoon. It was a very close, exciting game. And congratulations to Tyler Reddick for winning this afternoon's NASCAR Cup race. That's two wins in two weekends in a row. And there's time enough for me to work through my night prayers and do a bit of relaxed reading, then turn in early tonight. That's my plan.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day, from early morning, as soon as I roll out of bed, until my head hits the pillow at night. Details of that regimen are in my link tree, which is linked from my profile page here.
Starting Ash Wednesday, 2026, I'll add this daily prayer as part of the Prayer Crusade Preceding SSPX Episcopal Consecrations.
Health Metrics: * bw= 229.06 lbs. * bp= 140/83 (65)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 06:15 – cake and little cookies, 1 banana * 08:55 – mashed potatoes, breaded pork chop * 13:55 – garden salad
Activities, Chores, etc.: * 05:50 – bank accounts activity monitored * 06:00 – read, pray, follow news reports from various sources, surf the socials, and nap * 11:30 – tuned into the Pregame Show for this afternoon's Indiana University Women's Basketball Game vs Oregon * 14:00 – After IU wins 72 to 65, I'm switching over to FOX TV for NASCAR Cup Race coverage of the Autotrader 400 at Atlanta's EchoPark Speedway.
Chess: * 12:15 – moved in all pending CC games
from Manuela
Hi, my love.
Today I was sitting there, thinking about a book that was too ambitious, in the sense that I had the idea, had the story in my mind, but didn't know how to put it on paper, didn't know how to start writing, how to turn those images and sensations into words.
It was two hours later, after so much writing and erasing, that I started to wonder: when did I become this?
When did I start enjoying writing again? Or rather, when did I start feeling the need to write again?
I spent years without writing. You told me that I used to write you texts while you slept, and then you would wake up to them, and inside I smiled, because I no longer remembered that I had ever been that guy.
"I don't like writing anymore," "that's teenager stuff," "everyone is a poet as a child"... those are some of the answers I used to give when texts were asked of me.
I think there are many ways to love: affection, care, attention…
There are many ways to show affection; but writing, my writing, has always belonged only to you, so much so that I no longer even remembered what it was like to write, how much I loved doing it.
And now I find myself trying to write you books, letters, pamphlets, notes, playlists, and billboards, so that you are constantly bombarded with my thoughts.
Because I think that, in the end, that's what it comes down to: we write about what we think, about what we feel, and you, Julia Manuela, have given my mind no rest.
I love having you back, even if not in the way I would like; I love being able to think about you, to write about you, to shout that I love you.
Today marks two weeks since I sent you that "hi," and yesterday, talking for a while, we said it felt like a lifetime, and in reality, it truly is a lifetime.
I don't love you because of these two weeks; what I feel for you is much older than that, much more internal and deep; and being with you these last two weeks has only made me realize how love doesn't die, and that excites me and scares me in equal measure.
Maybe that's why I feel such a need to write to you; maybe it's my chest trying to make up for all the time it couldn't shout at you everything I write, couldn't feel everything I feel, nor desire everything I desire.
I look for you in every corner, I want you in every place, I miss you all the time.
I love you, with everything I have.
From your boy,
With the worst writing, but the best inspiration,
Nathan