from Epic Worlds
Amazon announced that on January 20th, 2026, they were going to make any book that isn't DRM-protected available for download as an epub or a PDF. Anyone who has followed me on social media knows that I'm an absolute fan of the epub format. It is one of the few ways you can own digital products without a company keeping its greedy, restrictive hands on them. I think it is a big deal that Amazon has seen enough of a threat from other epub distributors that it is loosening its hold.
I went to my Kindle library to see what was available and discovered that out of the 116 books I own, only 5 could be downloaded. And three of them were dictionaries. To be clear, I do not in any way fault authors for wanting to use DRM to protect their works. I understand the desire to protect something they created, and I know I might be approaching this selfishly. It's just that I was hoping a lot more of my library would be available to save in my calibre library, in case Amazon's AI randomly decides I'm somehow a bad person and bans me, which would mean I lose access to my entire library. (Yes, per Amazon you do not own your books. You license them, and are thus subject to losing access if you are banned for whatever reason.) This is why I try to buy from websites that provide an epub (thank you, Smashwords).
Nothing, really. I just wanted to write out my feelings about the whole question of owning your digital content, the games that are played, and an author's valid concern over protecting their work. This is also my blog, so I get to inflict thoughts like this on everyone else (joking, of course).
As for my own books: if you enjoy reading my works and you buy them from Amazon, you'll find that none of them are DRM-restricted where possible, and they are available for download as epubs. I know I can't stop pirates from stealing my works even with DRM (there are ways to strip DRM from Amazon books).
Anyhow. Those are my thoughts on the subject.
from OOTD Blog
Step into the future with Fashion Forward: Spring 2026 Trends You Can Wear Now. Discover what's next and start updating your wardrobe now! Spring 2026 fashion is all about balance: expressive yet wearable, trend-driven yet timeless. Designers are blending comfort with creativity, offering styles that feel fresh without being over-the-top. The best part? Many of the biggest Spring 2026 fashion trends are easy to wear right now, using pieces you may already own or can effortlessly add to your wardrobe.
This guide breaks down the most wearable Spring 2026 trends, how to style them, and why they matter in today’s fashion landscape. Whether you’re updating your everyday outfits or planning seasonal content for your fashion blog or brand, these trends deliver both style and longevity.
One of the most important Spring 2026 fashion trends is the evolution of tailoring. Rigid silhouettes are giving way to softer, more fluid shapes that prioritize movement and comfort.
Relaxed blazers, draped trousers, and lightweight suit sets are everywhere this season. The focus is on breathable fabrics, subtle shaping, and versatility.
Pair a relaxed blazer with a simple tank top and straight-leg jeans
Choose wide-leg tailored trousers with elastic or soft waistbands
Opt for monochrome suit sets in neutral or muted tones
This trend works equally well for office wear, casual outings, and elevated everyday looks.
Minimalism is back, but it’s far from boring. Spring 2026 embraces clean silhouettes infused with intentional details like asymmetrical cuts, textured fabrics, and unexpected seams.
Instead of loud prints, designers are leaning into shape, fabric, and fit to make a statement.
Asymmetrical necklines and hems
Ribbed, pleated, or textured fabrics
Neutral color palettes with one standout detail
This modern minimalism trend is perfect for capsule wardrobes and consumers who value quality over quantity.
Sheer fabrics continue their strong momentum into Spring 2026, but with a more wearable, layered approach. Instead of fully transparent looks, designers are focusing on subtle sheerness that adds depth without feeling revealing.
Think mesh tops, chiffon overlays, and organza layers styled over basics.
Layer a sheer blouse over a fitted tank or bralette
Try a semi-transparent skirt with structured shorts underneath
Add a sheer cardigan or top layer to simple outfits
This trend adds softness and dimension while staying practical for everyday wear.
Knitwear is no longer reserved for colder months. Spring 2026 introduces lightweight knits designed for layering and transitional weather.
From fitted ribbed tops to relaxed knit dresses, knitwear becomes a year-round essential.
Short-sleeve knit tops and polos
Fine-gauge knit dresses
Cropped or waist-length cardigans
These pieces offer comfort, stretch, and versatility, making them easy to style from morning to evening.
Skinny silhouettes continue to fade as wide-leg pants, fluid trousers, and relaxed skirts dominate Spring 2026 fashion.
Comfort-driven fashion remains a priority, and designers are responding with silhouettes that feel effortless and flattering.
Wide-leg trousers with soft drape
Pull-on pants with minimal structure
Midi skirts with movement and flow
Pair these bottoms with fitted tops or cropped layers to balance proportions.
Utility-inspired fashion remains relevant in Spring 2026, but with a cleaner, more polished execution. Instead of heavy cargo styles, expect refined pockets, subtle straps, and functional details integrated into everyday pieces.
Light jackets with structured pockets
Dresses with adjustable waist ties
Pants with minimal, streamlined cargo elements
This trend blends practicality with modern aesthetics, appealing to style-conscious consumers who value function.
Sustainability is no longer a niche trend; it’s a core expectation. Spring 2026 fashion continues to emphasize eco-friendly materials, ethical production, and long-lasting design.
Consumers are increasingly drawn to brands that prioritize transparency and durability.
Organic cotton, recycled fibers, and low-impact dyes
Timeless designs meant to outlast seasonal trends
Smaller, more intentional collections
This shift supports both environmental responsibility and smarter wardrobe building.
Spring 2026 color trends strike a balance between comfort and optimism. Instead of extreme brights or overly muted tones, designers are choosing colors that feel calming, wearable, and uplifting.
Soft earth tones like sand, clay, and olive
Muted pastels such as dusty blue, pale lavender, and butter yellow
Classic neutrals refreshed with warm undertones
These colors integrate seamlessly into existing wardrobes while still feeling seasonally relevant.
Spring 2026 dresses prioritize ease, movement, and versatility. Instead of ultra-formal designs, the focus is on dresses that work for daily life.
Midi dresses with relaxed fits
Shirt dresses with adjustable waists
Knit and jersey dresses with stretch
These dresses can be styled casually with sneakers or dressed up with tailored layers, making them ideal for multiple occasions.
Accessories in Spring 2026 are intentional rather than excessive. The goal is to enhance outfits without overpowering them.
Structured yet soft handbags
Minimal jewelry with sculptural shapes
Comfortable footwear with clean design
These accessories support the overall trend of wearable sophistication.
What sets Spring 2026 fashion trends apart is their practicality. Designers are responding to real lifestyle needs, creating fashion that feels relevant, comfortable, and adaptable.
These trends:
Build on existing wardrobe staples
Focus on fit and function
Offer long-term value rather than short-term hype
This makes Spring 2026 fashion ideal for everyday wear, content creation, and retail strategy.
Spring 2026 proves that being fashion-forward doesn’t mean chasing every trend. It’s about choosing pieces that align with your lifestyle while still feeling current.
By embracing soft tailoring, modern minimalism, lightweight layers, and conscious design, you can stay ahead of fashion trends without sacrificing comfort or individuality.
These Spring 2026 fashion trends are not just for the runway. They are designed to be worn now, lived in, and styled your way.
from Un blog fusible
"The lower part of the village, on the river side." © Gilles Le Corre. Courtesy of Gilles Le Corre & ADAGP
branches reeds and brushwood vain attempts to slow the flow of the imperious grey waters pierced with rain with whirlpools with eddies in the light of an almost crepuscular morning
icy waters of murky days that, swift, may carry us far away from the friendly trees
from W1tN3ss
After nearly two years of back-and-forth, my son finally decided he wanted to play the piano. We searched for a tutor and eventually found one nearby—perfect for me, because I love my kids, but I don’t love being a full-time shuttle service.
(If you’re wondering why Mom isn’t in this story, that’s something I’ll explore another time.)
The tutor quoted a fair rate. “Considering the economy,” she said, “I’ll do $35 per half hour.” It felt considerate, almost unusually so.
But during our phone call, something in her voice caught me. She sounded distracted, distant. When I asked, she said she was caring for both her parents, who were sick. I didn’t think much of it at the time—other than noticing myself judging her tone more than I should have.
Then we met.
She was petite, attractive, and kind, but the moment she started speaking, I noticed things I hadn’t expected.
• Her mind clearly moved faster than her words; speech took effort, like each sentence had to fight its way out.
• She had small, involuntary facial and body tics.
I quietly wondered if she was dealing with early-onset Parkinson’s. And I regretted how quickly I had judged her over the phone—another reminder of where my mind still needs work.
But then something beautiful happened.
As soon as she began teaching—her hands on the keys, her voice guiding my son—the tics faded. Her speech smoothed. She was steady, focused, alive in a way she wasn’t a minute earlier.
In that moment, I saw her in her flow. And it gave me hope—about her, about my son, and about the way people transform when they’re doing what they were meant to do.

from The happy place
We're watching The Road. It's not a feel-good movie. Outside, big flakes of snow fall slowly in loose heaps, but we're inside.
It’s warm!
We’ve invited the neighbours for some 🇮🇹 Italian food. I made the pizza dough last night. They’ll be here in about an hour.
I will start baking soon. With Eros Ramazzotti on the boom blaster.
Maybe I’ll have a glass of wine, or a beer.
And the two dogs play happily in the snow!!
And the RGB lights from the computer illuminate the room where we will do the taxes tomorrow.
And it feels so strange. I’m sorry for how wrong… for how weird it’s become— that. It’s like some t
Anyway
Isn’t life so incredibly rich?
from Lastige Gevallen in de Rede
Between now and later, we are going to bring our own DBTE products onto the market. We have already laid out the production line, set the profit forecast, the balance sheet is in place, and we have a price list. Later we will attach an article we consider fitting to each price. You can already stock up now as needed; that will save time and effort later. You will eventually find out what you have sitting in stock.
The Price List.
€ 25,56 $ 34,76 € 5,99 per 3 € 4,00 (white) £ 4,00 (black) € 6,77 (grey) € 908,99 2 for the price of 1 € 56,50 (high) € 19876,99 € 1,00 € 0,50 € 5,00 return deal € 8,99 € 55 (cashback) £ 76,90 bargain grabber € 99,99 while stocks last
We wish you plenty of enjoyment in advance with your purchase of the De Bond Tegen Efficiëntie article potential. Our man hired for this, Gaston, will tell you at some point during delivery what exactly you have bought from us, at small or large scale.
from Blind Spot Lab
LOG ENTRY: 2026-01-24
STATUS: ANALYZING
SOURCE: BLIND SPOT LAB / NODE 01
“We all know the sequence: the sudden spark of desire, the rush that takes hold when a curated image of a beautiful, naked body flashes across a screen. What was once rare has become mundane. In the age of infinite digital access, the sacred has been replaced by the accessible.
For years, many of us—perhaps all of us—have surrendered to this flow. It felt like a remedy. We used it to numb the ache of dissatisfaction, to bridge the hollow gaps of loneliness, and to quiet the heavy whispers of depression. But it is a deceptive medicine.
One day, you wake up and realize you are trapped in a loop of diminishing returns. It is a cycle that leads nowhere. With every click, the dissatisfaction doesn't disappear; it deepens. We are trying to integrate a Hollywood-grade fantasy into a mundane reality, a task as impossible as it is destructive. This is the core of the pornographic illusion: it presents a standard that does not exist, a perfection that is manufactured.
So why does this business model thrive? What nerve does it strike?
The fuel of this industry is the very loneliness and discontent it claims to heal.
It preys on our most primal instinct—the drive for procreation—and weaponizes it against us. It is, in every sense, a digital narcotic. A drug that triggers a craving so profound that resistance feels futile.
This industry does not sell connection; it sells the shadow of it. It is a parasite living off the emotional starvation of a society that has forgotten how to find the real in a world of ghosts.”
Is our society losing the ability to distinguish between a biological drive and a commercial product?
— The Collective
from An Open Letter
Yeah I crashed hard. I couldn’t remember what happened two days ago at all.
from tomson darko
On YouTube there are many videos of the Canadian actor Jim Carrey (1962) talking like an enlightened person.
His red carpet interview at a fashion show in 2017 is hilarious.
A presenter approaches him and he literally walks circles around her.
Then he says: 'I was looking for the most meaningless event in existence, and I found this.'
The presenter is a little confused. 'We're honouring famous icons here,' she says.
'Icons? That really is the lowest thing there is. Do you even believe in icons?' Jim asks.
She wants to answer, and then Jim says:
'I don't believe in personalities. And I don't believe that you exist. There is a perfume here in the air.'
The presenter doesn't let it throw her. 'Don't you believe, then, that icons can bring about change? By inspiring people? As artists?'
Jim then shouts some wacky terms in a silly voice. She looks into the camera utterly confused, as if Jim has lost it.
Then Jim says:
'I don't believe in icons. I don't believe in personalities. I believe there is something behind it all in which you find peace. Something beyond your mask. Beyond the S on your superhero suit that wards off bullets. We are a field of energy dancing around itself. And honestly, none of this matters to me.'
The presenter tries to bring him back down to earth. 'But you are here, dressed very handsomely in your suit.'
Jim: 'There is no me. This is a dream. We are a cluster of triangle-cubes moving around one another.'
The presenter: 'But there is a world, right? And a great deal is happening in that world.'
'There is no world,' says Jim. 'That's exactly the good news here. We don't matter at all.'
Then he walks away.
This interview took place just before a documentary called Jim & Andy came out.
That documentary really makes you think about identity and your personality.
It is about the filming of Man on the Moon (1999), in which Jim Carrey plays the life story of Andy Kaufman (1949–1984), a disruptive American comedian who died at the age of 35. He had an absurdist kind of humour that only a few people understood.
Not necessarily funny, but that is exactly the point. It is enormously unsettling and provocative.
To play Andy, Jim turned into him. All the time. 24 hours a day. Even when the cameras weren't rolling.
He was provocative, disruptive, and impossible to read.
Carrey himself says it felt as if Andy's soul came to visit him and gave him permission to be that way.
It caused a great deal of trouble behind the scenes of the film. Lots of arguments. Taunting. Accidents. People in tears. Confusion.
All of this was filmed to promote the movie. Except the film studio was so shocked by everything that happened behind the scenes that it wanted to destroy the footage instead of using it.
Twenty years later, they were able to turn it into a documentary after all: Jim & Andy, in which Jim Carrey looks back on that time with others.
Because Jim Carrey no longer existed on set, you start to wonder what identity actually is.
Of course, that is the profession of acting: turning into someone else. But it is also what people do. We create a personality around ourselves and come to believe that this is who we are.
A kind of inflated version of our successes, or our way of suppressing feelings.
In the documentary, Jim says that we build a personality around ourselves so we don't have to think about the idea that we amount to nothing at all. So we don't have to give in to our fear that a day will inevitably come when others also see that we can't do anything.
Carrey himself ended up in an identity crisis when filming was over and he said goodbye to Andy. Then he was Jim Carrey again, with his gloomy thoughts and emotional problems.
Inevitably, you reach a point in your life where you start asking whether you still believe in yourself. In the personality you created. In the choices you have made.
Do you allow yourself, Jim wonders, to let other people love the 'real' you?
Or do you keep performing a play about someone you never were, and dig your own grave in life that way?
from tomson darko
If your screen time is higher than the number of hours you work, something in your life is not quite in balance.
Some young people spend 6 hours a day looking at their smartphone. Which lets me put the question to you: how many hours a day do you stare at your phone?
I'll just say it plainly: phone use has a huge influence on how we feel. It does a lot to the thoughts we have.
Do you know the parents' movement 'Smartphonevrij Opgroeien' (growing up smartphone-free)?
They have a clear position:
Because a smartphone is not made for a child's brain. And to give children a carefree childhood full of creativity, friends, and boredom, instead of staring at a little pane of glass that gives off light.
Let's be honest.
You don't even need to hear the results of scientific research to know what the negative effects on young people are.
These complaints from screen use apply to us too, don't they?
We also have less attention and concentration than we used to, and we sleep much worse.
If the phone already does this to us, what does it do to brains that are still in full development?
Exactly!
But I'm not the strictest one here, mind you.
Far from it, even.
But let's be honest: the tech companies are not there for your wellbeing or mine, but mainly for their own wallets.
I have a solution.
==
Let me state up front that I'm a pragmatist.
That's a fancy word meaning I don't have to make radical decisions to protect myself better from myself (and from Silicon Valley).
Take, for example, the phenomenon of 'alcohol'.
Or as Homer Simpson from The Simpsons once said:
"To alcohol! The cause of, and solution to, all of life's problems."
By taking the car to a party, you know you won't ruin your weekend with a hangover. You have one beer for the taste and then switch to water or soda. That is a pragmatic way of protecting yourself from yourself.
We need the same thing when it comes to phone use.
To protect our mental wellbeing.
Taking good care of yourself also means handling your smartphone in the right way.
I'm not saying farewell to doomscrolling. Doomscrolling is fun. But not all day. And certainly not when it leaves you feeling even emptier and gloomier. Better to do it at a set moment and give yourself permission to do it for an hour. You see?
In the book Smartphonevrij opgroeien, the parents' movement argues for very clear rules for their children. Because grey areas are the beginning of the decline for children and teenagers.
When the smartphone is introduced into a teenager's life, boundaries come with it. Such as the smartphone only being used in the living room. And which apps go on it. And how long they are allowed on it.
Yes.
It sounds paradoxical. But there is freedom in limitation.
You need boundaries.
Most of those boundaries will sound familiar. Apps that block your screen time. Getting a stone in the house that you have to hold your phone against before you're allowed to use it. Or buying a no-frills phone.
And how has that worked out for you?
Let me introduce three alternative methods. You can apply all of them, or just one.
==
There is a fairly simple way to recharge enormously on your day off, without turning into a monk free of 4G and wifi with only the Bible for a friend.
You don't even have to start the day without your first shot of screen time.
No.
The only thing I ask of you is this: the moment you step out of bed on your day off, put your phone under your pillow.
Let that thing rest there a little longer.
Stroke it. Give it one more little kiss.
Yes.
Start your morning out of bed without a phone.
What you feel from the very first moment, in any case, is calm. Because you know you won't be touching the phone for the next hour.
(What is an hour? You're not a junkie, are you? You are. But act as if you're not.)
What also happens is that quick questions flash by that you need your phone for to find an answer:
But yeah. Your phone isn't nearby.
Let the thought rest.
You don't need to grab a laptop for it either.
Write it down if you must, to look up later.
I encourage you to do something genuinely relaxing now.
Do some yoga exercises. Or read a few pages. Or watch part of a film with full concentration.
Anything to distract you from the craving for screen time.
What happens after about an hour is that your thoughts become deeper.
You think about doing something fun with a good friend. Or about a solution for what you want to do with the hallway. Or, more fundamentally: what you want from your work or studies or relationship or your life.
If you then also take a shower (without touching the phone under your pillow), those thoughts go REALLY deep.
You come out of the shower a fresher person, with new insights. And above all: calm.
And all that in just an hour, an hour and a half without a phone at the start of your day off.
==
An easy way to cut back your screen use, without radical measures, is to start 'fasting' on the addictive apps.
You do this by applying the 16:8 rule. Also known as intermittent fasting. In other words, being on social media only within a set window of time.
==
The last method is a 'tricky' one, because it is easy to abuse.
But the idea is as follows.
Put your phone in one spot and always leave it there when you are at home. For example, in the corner of the kitchen counter. Or next to a chair in the living room.
You could buy a charging cable of one and a half or two metres to hook it up to, with the rule that the phone must always stay attached to the cord.
That becomes the 'region' where you can pick up the phone to see whether you've missed anything, to google something, or to quickly order something.
The advantage now is that you don't drag the phone all through the house. That stops the automatic reflex of pulling the thing out of your pocket for no clear reason, looking at it, and losing a lot of time again.
How nice is it to watch a film without immediately googling on your phone who the newest girlfriend of Leonardo DiCaprio (1974) or Harry Styles (1994) is?
Folding the laundry goes faster. All your household chores go faster. Because for a while there is no distraction.
Don't underestimate how much calm it already brings when your phone has a fixed spot.
In any case, try one of these suggestions for a day and feel liberated.
You can then keep repeating it.
You are not a slave to your screen.
Cultivate the analogue world.
Your nails are there to protect your fingertips, because fingertips are so sensitive. So let them feel something other than a pane of glass with glowing pixels.
Feel a paper page, the weight of a hammer, the rough stones of the house, someone's greasy skin, the handle of a rake, the button of a camera, the needle of a record player, the cool metal of a bunch of keys, the finished wood of a table edge, the cardboard of a bol.com box, the ridges of a coffee cup, the worn leather of an old bag, the rope of a washing line, the buttons of a coat, the fabric of a curtain, the gravel under your shoes, the bark of a tree.
Feel.
from laxmena
An ongoing weekend project documenting the journey of uncovering hidden connections in corporate financial filings—the stumbles, the learnings, the 'aha!' moments, and everything in between. Started January 2025.
The core idea is simple but ambitious: find hidden connections and risk trails that aren't immediately obvious when you're just reading through a 10-K filing.
Instead of treating each financial document as an isolated artifact, I'm building a system to:
– Extract risk factors from 10-K filings (2004-2025) across 75 companies
– Embed and connect these risks to find non-obvious relationships
– Build a graph that reveals risk clusters, patterns, and “trails” that could signal systemic weaknesses or early warning signs
Why 10-K filings? Because companies are required to disclose risks in specific sections (Item 1 and Item 1A), and there are two decades of structured data just sitting there.
Here's the full pipeline I'm building toward:
[Raw Financial Data]
├── SEC Filings (10-K/Q) ── News Articles ── Earnings Transcripts ── Other Reports
│
▼
[1. Ingestion & Chunking]
→ Parse documents (PDF/HTML) → Split into sentences → Group into ~500-word chunks
│
▼
[2. Risk Extraction]
→ Use Gemini Flash per chunk → Extract 3-5 specific risk factors + severity
│
▼
[3. Storage & Embeddings]
→ SQLite DB (with sqlite-vec) → Embed risk labels (embedding-gemma-300m) → Deduplicate similar risks
│
▼
[4. Graph Construction]
→ Nodes = unique risks
→ Edges =
├─ Semantic similarity (embeddings)
└─ Statistical co-occurrence (PMI)
│
▼
[5. Hierarchical Clustering]
→ Apply Leiden algorithm (Surprise function) → Build risk hierarchy tree
→ Compute novelty scores for under-explored areas
│
▼
[6. CLI / Interface Layer]
→ Persistent server for fast queries
→ Commands: search_risks, browse_tree, cross_report_risks, etc.
│
▼
[7. Agent Workflow (Claude / similar)]
├── Stage 1: Ideation ── Browse tree → Propose novel risk chains (novelty bias)
├── Stage 2: Research ── Dive into chunks → Extract & order excerpts
└── Stage 3: Output ── Generate RiskChain (visual trail with edges + narrative)
│
▼
[8. Presentation & Action]
→ Web dashboard / exported report
→ Visual graph + highlighted excerpts + suggested hedges / alerts
→ Human review → Iterate via feedback
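To make step 4 concrete, here is a minimal sketch of how the two edge types could be scored: cosine similarity between risk-label embeddings for the semantic edges, and pointwise mutual information (PMI) over co-occurrence within the same filing for the statistical edges. The function names, threshold, and toy data are my own placeholders, not the project's actual code.

import math
from collections import Counter
from itertools import combinations

import numpy as np

def cosine(u, v):
    # semantic-similarity edge weight between two risk-label embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pmi_edges(filing_to_risks, min_count=2):
    # statistical co-occurrence edges: PMI of two risks appearing in the same filing
    risk_counts, pair_counts = Counter(), Counter()
    n_filings = len(filing_to_risks)
    for risks in filing_to_risks.values():
        unique = set(risks)
        risk_counts.update(unique)
        pair_counts.update(combinations(sorted(unique), 2))
    edges = {}
    for (a, b), c_ab in pair_counts.items():
        if c_ab < min_count:
            continue
        p_ab = c_ab / n_filings
        p_a, p_b = risk_counts[a] / n_filings, risk_counts[b] / n_filings
        edges[(a, b)] = math.log2(p_ab / (p_a * p_b))  # > 0 means co-occurring more than chance
    return edges

# toy usage with placeholder per-filing risk labels
filings = {
    "AAPL-2020": ["supply chain disruption", "fx volatility"],
    "DELL-2020": ["supply chain disruption", "component shortage"],
    "F-2020": ["component shortage", "supply chain disruption"],
}
print(pmi_edges(filings))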
It's ambitious. It's probably overambitious. But that's the goal.
Phase: 2 – Chunking Strategy ✓
Progress: Data downloaded → Chunking complete → Ready for Risk Extraction
I'm documenting this journey every weekend—the wins, the blockers, the learnings. If you want regular updates on how RiskChain develops, subscribe below to get new posts delivered to your inbox.
What I built:
Downloaded 10-K filings for 75 companies from 2004-2025 using the Python edgartools library. Curated a list of significant companies (including ones that went bankrupt in 2008—why not?). Got the script working so that it extracts only the relevant sections (Item 1, Item 7, Item 8), keeping things lean.
The messy parts (aka real life):
I initially tried sec-edgar-downloader to connect to SEC and download. Spent way too much time on this approach, got stuck in the data cleaning rabbit hole, and realized I was losing sight of the actual goal. The real issue? Many of the 10-K filings before the SEC standardized their item categorization didn't play nice with the tool.
Lesson learned: when you're iterating, it's okay to abandon the “perfect” approach for one that ships faster.
Then I switched to edgartools (also known as edgar). This library gave me more flexibility, though the documentation still wasn't intuitive for my specific use case. But instead of giving up, I dug into the source code. That's when things clicked. Sometimes the best learning comes from reading other people's code instead of waiting for docs to explain everything.
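For reference, a rough sketch of that download step with edgartools is below. I'm recalling the interface from memory, so treat the placeholder identity string, the dictionary-style item lookup on the 10-K object, and the attribute names as assumptions to verify against the library's docs and source rather than as the project's actual script.

import os

from edgar import Company, set_identity  # edgartools

set_identity("Jane Doe jane.doe@example.com")  # SEC asks for a contact identity; placeholder value

TICKERS = ["AAPL", "MSFT", "LEH"]  # illustrative subset of the 75-company list
os.makedirs("data/raw", exist_ok=True)

for ticker in TICKERS:
    filings = Company(ticker).get_filings(form="10-K")
    for filing in filings:
        tenk = filing.obj()  # structured 10-K data object
        for item in ("Item 1", "Item 1A", "Item 7", "Item 8"):
            try:
                text = tenk[item]  # item lookup; assumed API, check edgartools' TenK class
            except (KeyError, TypeError):
                continue  # older filings often don't parse cleanly
            if text:
                path = f"data/raw/{ticker}_{filing.filing_date}_{item.replace(' ', '')}.txt"
                with open(path, "w", encoding="utf-8") as f:
                    f.write(text)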
The 'aha!' moment: My wife helped me understand what Item 1, Item 1A, Item 7, and Item 8 actually mean in a 10-K filing. She translated the financial jargon into plain English, and suddenly the document structure made sense. Having someone who can bridge the domain knowledge gap is invaluable. I realized I was building this in a foreign domain—finance is not my native language, and that's okay.
What blocked me:
– Figuring out the right tool for downloading (sec-edgar-downloader vs edgartools vs rolling my own)
– Understanding that parsing 10-K files is genuinely harder than it looks (inconsistent structures across years, weird formatting, embedded tables)
Next up: Phase 2: Chunking strategy. Need to figure out how to split these documents intelligently for downstream LLM tasks.
What I built:
Implemented chunking using wtpsplitter and stored all chunks as markdown files with YAML frontmatter metadata (ticker, filing date, company name, chunk ID, item section). Now sitting on several thousand chunks, each ~1000 characters max, ready for extraction.
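A simplified version of that storage step might look like the sketch below. The split_sentences helper stands in for the sentence splitter, and the frontmatter field names are my guesses at the schema described above, so treat both as placeholders.

from pathlib import Path

def split_sentences(text):
    # stand-in for the sentence splitter; naive period-based fallback
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def chunk_sentences(sentences, max_chars=1000):
    # group whole sentences into chunks of at most ~1000 characters
    chunks, current = [], ""
    for sent in sentences:
        if current and len(current) + len(sent) + 1 > max_chars:
            chunks.append(current)
            current = sent
        else:
            current = (current + " " + sent).strip()
    if current:
        chunks.append(current)
    return chunks

def write_chunks(text, ticker, company, filing_date, item, out_dir="data/chunks"):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for i, chunk in enumerate(chunk_sentences(split_sentences(text))):
        chunk_id = f"{ticker}-{filing_date}-{item.replace(' ', '')}-{i:04d}"
        frontmatter = (
            "---\n"
            f"ticker: {ticker}\n"
            f"company: {company}\n"
            f"filing_date: {filing_date}\n"
            f"item: {item}\n"
            f"chunk_id: {chunk_id}\n"
            "---\n\n"
        )
        Path(out_dir, f"{chunk_id}.md").write_text(frontmatter + chunk, encoding="utf-8")

write_chunks("Our business faces supply chain risk. Demand may fluctuate.",
             "AAPL", "Apple Inc.", "2020-10-30", "Item 1A")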
The messy parts (aka real life):
I tried two chunking strategies: RecursiveChunker and wtpsplitter. RecursiveChunker felt like brute force—just splitting on token counts. But wtpsplitter was smarter; it respects sentence boundaries and creates more semantically coherent chunks.
Storing these as markdown files locally feels like a step backward (shouldn't I be using a database?), but honestly, it's perfect for iteration. I can inspect the chunks, debug the metadata, and understand what's happening before I add the complexity of a full DB setup.
The 'aha!' moment: Chunk quality matters way more than I initially thought. The way you split text directly impacts whether an LLM can extract meaningful risk factors later. Sentence-aware chunking beats token-counting brutality. This made me reconsider the whole “let me jump straight to a database” instinct. Sometimes you need to slow down and get the fundamentals right first.
What blocked me:
– Deciding between chunking strategies (trial and error on a few approaches)
– Understanding the tradeoff between local file storage and “proper” database setup (spoiler: local storage is fine for now)
– Realizing I was overthinking this phase when the real value comes next
Next up: Phase 3: Risk Extraction. I'll iterate through each chunk and use Claude/Gemini to extract 3-5 risk factors per chunk. This is where the actual signal starts emerging.
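A rough sketch of what that loop could look like, under my own assumptions: call_llm is a stub standing in for whichever client (Claude or Gemini Flash) ends up doing the extraction, and the prompt and JSON shape are placeholders rather than the project's actual schema.

import json
from pathlib import Path

PROMPT = (
    "You are reading a chunk of a 10-K filing.\n"
    "Extract 3-5 specific risk factors and rate each one's severity.\n"
    'Return a JSON list like: [{"risk": "<short label>", "severity": "low|medium|high"}]\n\n'
    "Chunk:\n"
)

def call_llm(prompt):
    # stub: replace with an actual Claude / Gemini Flash API call
    return '[{"risk": "supply chain disruption", "severity": "high"}]'

def extract_risks(chunk_dir="data/chunks", out_path="data/risks.jsonl"):
    with open(out_path, "w", encoding="utf-8") as out:
        for md_file in sorted(Path(chunk_dir).glob("*.md")):
            body = md_file.read_text(encoding="utf-8").split("---", 2)[-1]  # drop YAML frontmatter
            try:
                risks = json.loads(call_llm(PROMPT + body))
            except json.JSONDecodeError:
                continue  # skip malformed responses and revisit later
            for r in risks:
                out.write(json.dumps({"chunk": md_file.name, **r}) + "\n")

if Path("data/chunks").exists():
    extract_risks()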
Most financial analysis tools treat risks as isolated items. “Company X faces supply chain risk.” “Company Y has regulatory exposure.” But what if you could see that 40 companies in the industrial sector all mention the same emerging regulatory risk, and 3 of them went bankrupt 2 years later?
That's the thesis here. Hidden connections. Patterns that emerge when you look at scale.
Also, I'm learning a ton: SEC filing structures, chunking strategies, embedding models, graph theory, the Leiden algorithm... This is weekend learning on steroids.
Updates added weekly (weekends permitting). Check back for new learnings, blockers, and wins.
from Thoughts on Nanofactories
It is the future, and Nanofactories have accelerated our material ability to make discoveries. Yet there is a sense amongst many that with this invention we have reached “the end of discovery” – that there is nothing new worth exploring.
This is clearly not correct on two levels: Firstly, the fact that happiness surveys continue to show lack of meaning to be a very common malaise in people’s lives means that having the entire physical universe at our fingertips hasn’t fixed everything. Secondly, there are new discoveries happening all the time. They just don’t tend to be highlighted in mainstream news outlets. This is where we have to rely on ourselves to cultivate an ecosystem of both awareness of new discoveries, and a regular ritual to help us make our own. This won’t necessarily solve an entire lack of meaning in someone’s life, but it can go a long way to making the universe feel more open and full of possibility again. A good way to go about this, I’m suggesting, is to cultivate a sense of expected discovery.
Many years ago, there was a video game which captured this sense of expected discovery well. The Legend of Zelda: The Wind Waker placed players in a world of open ocean, dotted with islands to discover. A key feature was the map of this world being made up of a grid of squares – with each square having exactly one island to discover. This created a tight gameplay loop of expectation and discovery. It was almost a ritual:
You enter a new square knowing there is an island to find.
You discover and explore the island.
Knowing you have discovered that one island, you move on to the next grid square.
Real life is obviously a lot less uniform than that. It is very rare to know when a new discovery is coming. But this doesn’t mean we can’t create regular rituals around the process to keep things progressing and keep things engaging. We might also make discoveries more often if we reconsider our benchmark for what constitutes a discovery. Not all islands in Wind Waker are of the same depth and quality. But more importantly, they are reliably there. While it wouldn’t be fair to expect a revolution every time you sit down at your workbench, aiming for micro-discoveries may be more realistic, and thus, more sustainable. The aim here is to craft a routine or ritual that is endlessly repeatable.
This can look different for each person, but I am particularly inspired by “Makers” throughout history who have set aside one or two days a week, outside of their “day-job”, to research or make something.
There were always two big obstacles to this schedule though. The first was actually securing time to focus on the project. This was (and still is) quite challenging to balance with other important parts of life – usually family, relationships, and general home maintenance. That’s legitimate. That regular time may be one night a week for some people. The main thing is that it is regular, and a roughly known amount of time to help scope out project size and expectations.
The second obstacle was usually an internal sense of perfectionism, or a sense that if you can only spend one day, or one night, on something, then it isn’t worth doing. This is a big problem. Think of all the discoveries that could have been if all the perfectionists in the world were willing to be a little more relaxed on the quality of the final product, but more focused on the quality of the process and ritual itself. Remember – not all islands are the same, but you know they are reliably there. Even the most ambitious and perfect inventions soon become historical dot-points. In reality, it is the larger chain of discoveries, and the chain of culture, that advances humanity.
We’ve had Nanofactories for years, and fortunately our lives are no longer filled with day jobs for financial survival. Now it is worth asking how we might craft our weeks to defy this end of discovery.
from SmarterArticles

In November 2025, a mysterious country music act named Breaking Rust achieved something unprecedented: the AI-generated song “Walk My Walk” topped Billboard's Country Digital Song Sales chart, marking the first time an artificial intelligence creation had claimed the number one position on any Billboard chart. The track, produced entirely without human performers using generative AI tools for vocals, instrumentation, and lyrics, reached its peak with approximately 3,000 digital downloads. That same month, Xania Monet, an AI R&B artist created using the Suno platform, became the first known AI artist to earn enough radio airplay to debut on a Billboard radio chart, entering the Adult R&B Airplay ranking at number 30.
These milestones arrived not with fanfare but with an uncomfortable silence from an industry still grappling with what they mean. The charts that have long served as the music industry's primary measure of success had been successfully penetrated by entities that possess neither lived experience nor artistic intention in any conventional sense. The question that follows is not merely whether AI can achieve commercial validation through existing distribution and ranking systems. It clearly can. The more unsettling question is what this reveals about those systems themselves, and whether the metrics the industry has constructed to measure success have become so disconnected from traditional notions of artistic value that they can no longer distinguish between human creativity and algorithmic output.
The music industry has always operated through gatekeeping structures. For most of the twentieth century, these gates were controlled by human intermediaries: A&R executives who discovered talent in smoky clubs, radio programmers who decided which songs reached mass audiences, music journalists who shaped critical discourse, and record label executives who determined which artists received investment and promotion. These gatekeepers were imperfect, often biased, and frequently wrong, but they operated according to evaluative frameworks that at least attempted to assess artistic merit alongside commercial potential.
The transformation began with digital distribution and accelerated with streaming. By the early 2020s, the typical song on the Billboard Hot 100 derived approximately 73 per cent of its chart position from streaming, 25 per cent from radio airplay, and a mere 2 per cent from digital sales. This represented a dramatic inversion from the late 1990s, when radio airplay accounted for 75 per cent of a song's chart fortunes. Billboard's methodology has continued to evolve, with the company announcing in late 2025 that effective January 2026, the ratio between paid subscription and ad-supported on-demand streaming would be adjusted to 1:2.5, further cementing streaming's dominance whilst simultaneously prompting YouTube to withdraw its data from Billboard charts in protest over what it characterised as unfair undervaluation of ad-supported listening. The metrics that now crown hits are fundamentally different in character: stream counts, skip rates, playlist additions, save rates, and downstream consumption patterns. These are measures of engagement behaviour, not assessments of artistic quality.
Streaming platforms have become what scholars describe as the “new gatekeepers” of the music industry. Unlike their predecessors, these platforms wield what researchers Tiziano Bonini and Alessandro Gandini term “algo-torial power,” a fusion of algorithmic and curatorial capabilities that far exceeds the influence of traditional intermediaries. Spotify alone, commanding approximately 35 per cent of the global streaming market in 2025, manages over 3,000 official editorial playlists, with flagship lists like Today's Top Hits commanding over 34 million followers. A single placement on such a playlist can translate into millions of streams overnight, with artists reporting that high positions on editorial playlists generate cascading effects across their entire catalogues.
Yet the balance has shifted even further toward automation. Since 2017, Spotify has developed what it calls “Algotorial” technology, combining human editorial expertise with algorithmic personalisation. The company reports that over 81 per cent of users cite personalisation as what they value most about the platform. The influence of human-curated playlists has declined correspondingly. Major music labels have reported significant drops in streams from flagship playlists like RapCaviar and Dance Hits, signalling a fundamental change in how listeners engage with curated content. Editorial playlists, whilst still powerful, often feature songs for only about a week, limiting their long-term impact compared to algorithmic recommendation systems that continuously surface content based on listening patterns.
This shift has consequences for what can succeed commercially. Algorithmic recommendation systems favour predictable structures and familiar sonic elements. Data analysis suggests songs that maintain listener engagement within the first 30 seconds receive preferential treatment, incentivising shorter introductions and immediate hooks, often at the expense of nuanced musical development.
Artists and their teams are encouraged to optimise for “asset rank,” a function of user feedback reflecting how well a song performs in particular consumption contexts. The most successful strategies involve understanding algorithmic nuances, social media marketing, and digital engagement techniques.
Into this optimisation landscape, AI-generated music arrives perfectly suited. Systems like Suno, the platform behind both Xania Monet and numerous other AI artists, can produce content calibrated to the precise engagement patterns that algorithms reward. The music need not express lived experience or demonstrate artistic growth. It need only trigger the behavioural signals that platforms interpret as success.
In November 2025, French streaming service Deezer commissioned what it described as the world's first survey focused on perceptions and attitudes toward AI-generated music. Conducted by Ipsos across 9,000 participants in eight countries, the study produced a startling headline finding: when asked to listen to three tracks and identify which was fully AI-generated, 97 per cent of respondents failed.
A majority of participants (71 per cent) expressed surprise at this result, whilst more than half (52 per cent) reported feeling uncomfortable at their inability to distinguish machine-made music from human creativity. The findings carried particular weight given the survey's scale and geographic breadth, spanning markets with different musical traditions and consumption patterns.
The implications extend beyond parlour game failures. If listeners cannot reliably identify AI-generated music, then the primary quality filter that has historically separated commercially successful music from unsuccessful music has been compromised. Human audiences, consciously or not, have traditionally evaluated music according to criteria that include emotional authenticity, creative originality, and the sense that a human being is communicating something meaningful.
If AI can convincingly simulate these qualities to most listeners, then the market mechanism that was supposed to reward genuine artistic achievement has become unreliable.
Research from MIT Media Lab exposed participants to both AI and human music under various labelling conditions, finding that participants were significantly more likely to rate human-composed music as more effective at eliciting target emotional states, regardless of whether they knew the composer's identity. A 2024 study published in PLOS One compared emotional reactions to AI-generated and human-composed music among 88 participants monitored through heart rate, skin conductance, and self-reported emotion.
Both types triggered feelings, but human compositions scored consistently higher for expressiveness, authenticity, and memorability. Many respondents described AI music as “technically correct” but “emotionally flat.” The distinction between technical competence and emotional resonance emerged as a recurring theme across multiple research efforts, suggesting that whilst AI can successfully mimic surface-level musical characteristics, deeper qualities associated with human expression remain more elusive.
These findings suggest that humans can perceive meaningful differences when prompted to evaluate carefully. But streaming consumption is rarely careful evaluation. It is background listening during commutes, ambient accompaniment to work tasks, algorithmic playlists shuffling in the background of social gatherings. In these passive consumption contexts, the distinctions that laboratory studies reveal may not register at all.
The SyncVault 2025 Trends Report found that 74 per cent of content creators now prefer to license music from identifiable human composers, citing creative trust and legal clarity. A survey of 100 music industry insiders found that 98 per cent consider it “very important” to know if music is human-made, and 96 per cent would consider paying a premium for a human-verified music service. Industry professionals, at least, believe the distinction matters. Whether consumers will pay for that distinction in practice remains uncertain.
The chart success of AI-generated music exposes a deeper fragmentation: different stakeholder groups in the music industry operate according to fundamentally different definitions of what “success” means, and these definitions are becoming increasingly incompatible.
For streaming platforms and their algorithms, success is engagement. A successful track is one that generates streams, maintains listener attention, triggers saves and playlist additions, and encourages downstream consumption. These metrics are agnostic about the source of the music. An AI-generated track that triggers the right engagement patterns is, from the platform's perspective, indistinguishable from a human creation that does the same. The platform's business model depends on maximising time spent listening, regardless of whether that listening involves human artistry or algorithmic simulation.
For record labels and investors, success is revenue. The global music market reached $40.5 billion in 2024, with streaming accounting for 69 per cent of global recorded music revenues, surpassing $20 billion for the first time. Goldman Sachs projects the market will reach $110.8 billion by 2030.
In this financial framework, AI music represents an opportunity to generate content with dramatically reduced labour costs. An AI artist requires no advances, no touring support, no management of creative disagreements or personal crises. As Victoria Monet observed when commenting on AI artist Xania Monet, “our time is more finite. We have to rest at night. So, the eight hours, nine hours that we're resting, an AI artist could potentially still be running, studying, and creating songs like a machine.”
Hallwood Media, the company that signed Xania Monet to a reported $3 million deal, is led by Neil Jacobson, formerly president of Geffen Records. The company has positioned itself at the forefront of AI music commercialisation, also signing imoliver, described as the top-streaming “music designer” on Suno, in what was characterised as the first traditional label signing of an AI music creator. Jacobson framed these moves as embracing innovation, stating that imoliver “represents the future of our medium.”
For traditional gatekeeping institutions like the Grammy Awards, success involves human authorship as a precondition. The Recording Academy clarified in its 66th Rules and Guidelines that “A work that contains no human authorship is not eligible in any Categories.” CEO Harvey Mason Jr. elaborated: “Here's the super easy, headline statement: AI, or music that contains AI-created elements is absolutely eligible for entry and for consideration for Grammy nomination. Period. What's not going to happen is we are not going to give a Grammy or Grammy nomination to the AI portion.”
This creates a category distinction: AI-assisted human creativity can receive institutional recognition, but pure AI generation cannot. The Grammy position attempts to preserve human authorship as a prerequisite for the highest forms of cultural validation.
But this distinction may prove difficult to maintain. If AI tools become sufficiently sophisticated, determining where “meaningful human contribution” begins and ends may become arbitrary. And if AI creations achieve commercial success that rivals or exceeds Grammy-winning human artists, the cultural authority of the Grammy distinction may erode.
For human artists, success often encompasses dimensions that neither algorithms nor financial metrics capture: creative fulfilment, authentic emotional expression, the sense of communicating something true about human experience, and recognition from peers and critics who understand the craft involved.
When Kehlani criticised the Xania Monet deal in a social media post, she articulated this perspective: “There is an AI R&B artist who just signed a multimillion-dollar deal... and the person is doing none of the work.” The objection is not merely economic but existential. Success that bypasses creative labour does not register as success in the traditional artistic sense.
SZA connected her critique to broader concerns, noting that AI technology causes “harm” to marginalised neighbourhoods through the energy demands of data centres. She asked fans not to create AI images or songs using her likeness.
Muni Long questioned why AI artists appeared to be gaining acceptance in R&B specifically, suggesting a genre-specific vulnerability: “It wouldn't be allowed to happen in country or pop.” This observation points to power dynamics within the industry, where some artistic communities may be more exposed to AI disruption than others.
If AI systems can achieve commercial validation through existing distribution and ranking systems without the cultural legitimacy or institutional endorsement traditionally required of human artists, what does this reveal about those gatekeeping institutions?
The first revelation is that commercial gatekeeping has largely decoupled from quality assessment. Billboard charts measure commercial performance. They count downloads, streams, and airplay. They do not and cannot assess whether the music being counted represents artistic achievement.
For most of chart history, this limitation mattered less because commercial success and artistic recognition, whilst never perfectly aligned, operated in the same general neighbourhood. The processes that led to commercial success included human gatekeepers making evaluative judgements about which artists to invest in, which songs to programme, and which acts to promote. AI success bypasses these evaluative filters entirely.
The second revelation concerns the vulnerability of metrics-based systems to manipulation. Billboard's digital sales charts have been targets for manipulation for years. The Country Digital Song Sales chart that Breaking Rust topped requires only approximately 2,500 downloads to claim the number one position.
This is a vestige of an era when iTunes ruled the music industry, before streaming subscription models made downloads a relic. In 2024, downloads accounted for just $329 million according to the RIAA, approximately 2 per cent of US recorded music revenue.
Critics have argued that the situation represents “a Milli Vanilli-level fraud being perpetrated on music consumers, facilitated by Billboard's permissive approach to their charts.” The Saving Country Music publication declared that “Billboard must address AI on the charts NOW,” suggesting the chart organisation is avoiding “gatekeeping” accusations by remaining content with AI encroaching on its rankings without directly addressing the issue.
If the industry's most prestigious measurement system can be topped by AI-generated content with minimal organic engagement, the system's legitimacy as a measure of popular success comes into question.
The third revelation is that cultural legitimacy and commercial success have become separable in ways they previously were not. Throughout the twentieth century, chart success generally brought cultural legitimacy. Artists who topped charts received media attention, critical engagement, and the presumption that their success reflected some form of popular validation.
AI chart success does not translate into cultural legitimacy in the same way. No one regards Breaking Rust as a significant country artist regardless of its chart position. The chart placement functions as a technical achievement rather than a cultural coronation.
This separability creates an unstable situation. If commercial metrics can be achieved without cultural legitimacy, and cultural legitimacy cannot be achieved through commercial metrics alone, then the unified system that connected commercial success to cultural status has fractured. Different stakeholders now operate in different legitimacy frameworks that may be incompatible.
Beyond questions of legitimacy, AI-generated music creates concrete economic pressures on human artists through royalty pool dilution. Streaming platforms operate on pro-rata payment models: subscription revenue enters a shared pool divided according to total streams. When more content enters the system, the per-stream value for all creators decreases.
Deezer has been the most transparent about the scale of this phenomenon. The platform reported receiving approximately 10,000 fully AI-generated tracks daily in January 2025. By April, this had risen to 20,000. By September, 28 per cent of all content delivered to Deezer was fully AI-generated. By November, the figure had reached 34 per cent, representing over 50,000 AI-generated tracks uploaded daily.
These tracks represent not merely competition for listener attention but direct extraction from the royalty pool. Deezer has found that up to 70 per cent of streams generated by fully AI-generated tracks are fraudulent.
Morgan Hayduk, co-CEO of streaming fraud detection company Beatdapp, noted: “Every point of market share is worth a couple hundred million US dollars today. So we're talking about a billion dollars minimum, that's a billion dollars being taken out of a finite pool of royalties.”
The connection between AI music generation and streaming fraud became explicit in September 2024, when a North Carolina musician named Michael Smith was indicted by federal prosecutors over allegations that he used an AI music company to help create “hundreds of thousands” of songs, then used those AI tracks to steal more than $10 million in fraudulent streaming royalty payments since 2017. Manhattan federal prosecutors charged Smith on three counts: wire fraud, wire fraud conspiracy, and money laundering conspiracy, making it the first federal case targeting streaming fraud.
Universal Music Group addressed this threat pre-emptively, placing provisions in agreements with digital service providers that prevent AI-generated content from being counted in the same royalty pools as human artists. UMG chief Lucian Grainge criticised the “exponential growth of AI slop” on streaming services. But artists not represented by major labels may lack similar protections.
A study conducted by CISAC (the International Confederation of Societies of Authors and Composers, representing over 5 million creators worldwide) and PMP Strategy projected that nearly 24 per cent of music creators' revenues are at risk by 2028, representing cumulative losses of 10 billion euros over five years, with annual losses reaching 4 billion euros in 2028 alone. The study further predicted that generative AI music would account for approximately 20 per cent of music streaming platforms' revenues and 60 per cent of music library revenues by 2028. Notably, CISAC reported that not a single AI developer has signed a licensing agreement with any of the 225 collective management organisations that are members of CISAC worldwide, despite societies approaching hundreds of AI companies with requests to negotiate licences. The model that has sustained recorded music revenues for the streaming era may be fundamentally threatened if AI content continues its current growth trajectory.
The relationship between AI music systems and human artists extends beyond competition. The AI platforms achieving chart success were trained on human creativity. Suno CEO Mikey Shulman acknowledged that the company trains on copyrighted music, stating: “We train our models on medium- and high-quality music we can find on the open internet. Much of the open internet indeed contains copyrighted materials.”
Major record labels responded with landmark lawsuits in June 2024 against Suno and Udio, the two leading AI music generation platforms, seeking damages of up to $150,000 per infringed recording. The legal battle represents one of the most significant intellectual property disputes of the streaming era, with outcomes that could fundamentally reshape how AI companies source training data and how human creators are compensated when their work is used to train commercial AI systems.
This creates a paradox: AI systems that threaten human artists' livelihoods were made possible by consuming those artists' creative output without compensation. The US Copyright Office's May 2025 report provided significant guidance on this matter, finding that training and deploying generative AI systems using copyright-protected material involves multiple acts that could establish prima facie infringement. The report specifically noted that “the use of more creative or expressive works (such as novels, movies, art, or music) is less likely to be fair use than use of factual or functional works” and warned that “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets... goes beyond established fair use boundaries.” Yet legal resolution remains distant, and in the interim, AI platforms continue generating content that competes with the human artists whose work trained them.
When Victoria Monet confronted the existence of Xania Monet, an AI persona whose name, appearance, and vocal style bore resemblance to her own, she described an experiment: a friend typed the prompt “Victoria Monet making tacos” into an AI image generator, and the system produced visuals that looked uncannily similar to Xania Monet's promotional imagery.
Whether this reflects direct training on Victoria Monet's work or emergent patterns from broader R&B training data, the practical effect remains the same. An artist's distinctive identity becomes raw material for generating commercial competitors. The boundaries between inspiration, derivation, and extraction blur when machine learning systems can absorb and recombine stylistic elements at industrial scale.
The situation the music industry faces is not one problem but many interconnected problems that compound each other. Commercial metrics have been detached from quality assessment. Gatekeeping institutions have lost their filtering function. Listener perception has become unreliable as a quality signal. Royalty economics are being undermined by content flooding. Training data extraction has turned human creativity against its creators. And different stakeholder groups operate according to incompatible success frameworks.
Could widespread AI chart performance actually force a reckoning with how the music industry measures and defines value itself? There are reasons for cautious optimism.
Deezer has positioned itself as the first streaming service to automatically label 100 per cent AI-generated tracks, removing them from algorithmic recommendations and editorial playlists. This represents an attempt to preserve human music's privileged position in the discovery ecosystem. If other platforms adopt similar approaches, AI content might be effectively segregated into a separate category that does not compete directly with human artists.
The EU's AI Act, which entered into force on 1 August 2024, mandates unprecedented transparency about training data. Article 53 requires providers of general-purpose AI models to publish sufficiently detailed summaries of their training data, including content protected by copyright, according to a template published by the European Commission's AI Office in July 2025. The obligations became applicable on 2 August 2025, and the AI Office is empowered to verify compliance and issue corrective measures from August 2026, with potential fines reaching 15 million euros or 3 per cent of global annual revenue. The GPAI Code of Practice operationalises these requirements by mandating that providers maintain copyright policies, rely only on lawful data sources, respect machine-readable rights reservations, and implement safeguards against infringing outputs. This transparency requirement could make it harder for AI music platforms to operate without addressing rights holder concerns.
Human premium pricing may emerge as a market response. The survey finding that 96 per cent of music industry insiders would consider paying a premium for human-verified music services suggests latent demand for authenticated human creativity. If platforms can credibly certify human authorship, a tiered market could develop where human music commands higher licensing fees.
Institutional reform remains possible. Billboard could establish separate charts for AI-generated music, preserving the significance of its traditional rankings whilst acknowledging the new category of content. The Recording Academy's human authorship requirement for Grammy eligibility demonstrates that cultural institutions can draw principled distinctions. These distinctions may become more robust if validated by legal and regulatory frameworks.
But there are also reasons for pessimism. Market forces favour efficiency, and AI music production is dramatically more efficient than human creation. If listeners genuinely cannot distinguish AI from human music in typical consumption contexts, there may be insufficient consumer pressure to preserve human-created content.
AI music currently accounts for only 0.5 per cent of streams on Deezer despite comprising 34 per cent of uploads, which suggests the content is not yet finding significant audiences. But this could change as AI capabilities improve.
The fragmentation of success definitions may prove permanent. If streaming platforms, financial investors, cultural institutions, and human artists cannot agree on what success means, each group may simply operate according to its own framework, acknowledging the others' legitimacy selectively or not at all.
A track could simultaneously be a chart success, a financial investment, an ineligible Grammy submission, and an object of contempt from human artists. The unified status hierarchy that once organised the music industry could dissolve into parallel status systems that rarely intersect.
Perhaps what the AI chart success reveals most clearly is that commercial metrics have always been inadequate measures of what music means. They were useful proxies when the systems generating commercially successful music also contained human judgement, human creativity, and human emotional expression. When those systems can be bypassed by algorithmic optimisation, the metrics are exposed as measuring only engagement behaviours, not the qualities those behaviours were supposed to indicate.
The traditional understanding of musical success included dimensions that are difficult to quantify: the sense that an artist had something to say and found a compelling way to say it, the recognition that creative skill and emotional honesty had produced something of value, the feeling of connection between artist and audience based on shared human experience.
These dimensions were always in tension with commercial metrics, but they were present in the evaluative frameworks that shaped which music received investment and promotion.
AI-generated music can trigger engagement behaviours. It can accumulate streams, achieve chart positions, and generate revenue. What it cannot do is mean something in the way human creative expression means something. It cannot represent the authentic voice of an artist working through lived experience. It cannot reward careful listening with the sense of encountering another human consciousness.
Whether listeners actually care about these distinctions is an empirical question that the market will answer. The preliminary evidence is mixed. The finding that 97 per cent of listeners cannot identify AI-generated music in blind tests suggests that, in passive consumption contexts, meaning may not be the operative criterion.
But the 80 per cent who agree that AI-generated music should be clearly labelled signal discomfort with being fooled. And the premium that industry professionals say they would pay for human-verified music suggests that at least some market segments value authenticity.
The reckoning, if it comes, will force the industry to articulate what it believes music is for. If music is primarily engagement content designed to fill attention and generate revenue, then AI-generated music is simply more efficient production of that content. If music is a form of human communication that derives meaning from its human origins, then AI-generated music is a category error masquerading as the real thing.
These are not technical questions that data can resolve. They are value questions that different stakeholders will answer differently.
What seems certain is that the status quo cannot hold. The same metrics that crown hits cannot simultaneously serve as quality filters when algorithmic output can game those metrics. The same gatekeeping institutions cannot simultaneously validate commercial success and preserve human authorship requirements when commercial success becomes achievable without human authorship. The same royalty pools cannot sustain human artists if flooded with AI content competing for the same finite attention and revenue.
The chart success of AI-generated music is not the end of human music. It is the beginning of a sorting process that will determine what human music is worth in a world where its commercial position can no longer be assumed. That process will reshape not just the music industry but our understanding of what distinguishes human creativity from its algorithmic simulation.
The answer we arrive at will say as much about what we value as listeners and as a culture as it does about the capabilities of the machines.
Billboard. “How Many AI Artists Have Debuted on Billboard's Charts?” https://www.billboard.com/lists/ai-artists-on-billboard-charts/
Billboard. “AI Artist Xania Monet Debuts on Adult R&B Airplay – a Radio Chart Breakthrough.” https://www.billboard.com/music/chart-beat/ai-artist-xania-monet-debut-adult-rb-airplay-chart-1236102665/
Billboard. “AI Music Artist Xania Monet Signs Multimillion-Dollar Record Deal.” https://www.billboard.com/pro/ai-music-artist-xania-monet-multimillion-dollar-record-deal/
Billboard. “The 10 Biggest AI Music Stories of 2025: Suno & Udio Settlements, AI on the Charts & More.” https://www.billboard.com/lists/biggest-ai-music-stories-2025-suno-udio-charts-more/
Billboard. “AI Music Artists Are on the Charts, But They Aren't That Popular – Yet.” https://www.billboard.com/pro/ai-music-artists-charts-popular/
Billboard. “Kehlani Slams AI Artist Xania Monet Over $3 Million Record Deal Offer.” https://www.billboard.com/music/music-news/kehlani-slams-ai-artist-xania-monet-million-record-deal-1236071158/
Bensound. “Human vs AI Music: Data, Emotion & Authenticity in 2025.” https://www.bensound.com/blog/human-generated-music-vs-ai-generated-music/
CBS News. “People can't tell AI-generated music from real thing anymore, survey shows.” https://www.cbsnews.com/news/ai-generated-music-real-thing-cant-tell/
CBS News. “New Grammy rule addresses artificial intelligence.” https://www.cbsnews.com/news/grammy-rule-artificial-intelligence-only-human-creators-eligible-awards/
CISAC. “Global economic study shows human creators' future at risk from generative AI.” https://www.cisac.org/Newsroom/news-releases/global-economic-study-shows-human-creators-future-risk-generative-ai
Deezer Newsroom. “Deezer and Ipsos study: AI fools 97% of listeners.” https://newsroom-deezer.com/2025/11/deezer-ipsos-survey-ai-music/
Deezer Newsroom. “Deezer: 28% of all delivered music is now fully AI-generated.” https://newsroom-deezer.com/2025/09/28-fully-ai-generated-music/
GOV.UK. “The impact of algorithmically driven recommendation systems on music consumption and production.” https://www.gov.uk/government/publications/research-into-the-impact-of-streaming-services-algorithms-on-music-consumption/
Hollywood Reporter. “Hallwood Media Signs Record Deal With an 'AI Music Designer.'” https://www.hollywoodreporter.com/music/music-industry-news/hallwood-inks-record-deal-ai-music-designer-imoliver-1236328964/
IFPI. “Global Music Report 2025.” https://globalmusicreport.ifpi.org/
Medium (Anoxia Lau). “The Human Premium: What 100 Music Insiders Reveal About the Real Value of Art in the AI Era.” https://anoxia2.medium.com/the-human-premium-what-100-music-insiders-reveal-about-the-real-value-of-art-in-the-ai-era-c4e12a498c4a
MIT Media Lab. “Exploring listeners' perceptions of AI-generated and human-composed music.” https://www.media.mit.edu/publications/exploring-listeners-perceptions-of-ai-generated-and-human-composed-music-for-functional-emotional-applications/
Music Ally. “UMG boss slams 'exponential growth of AI slop' on streaming services.” https://musically.com/2026/01/09/umg-boss-slams-exponential-growth-of-ai-slop-on-streaming-services/
Music Business Worldwide. “50,000 AI tracks flood Deezer daily.” https://www.musicbusinessworldwide.com/50000-ai-tracks-flood-deezer-daily-as-study-shows-97-of-listeners-cant-tell-the-difference-between-human-made-vs-fully-ai-generated-music/
Rap-Up. “Baby Tate & Muni Long Push Back Against AI Artist Xania Monet.” https://www.rap-up.com/article/baby-tate-muni-long-xania-monet-ai-artist-backlash
SAGE Journals (Bonini & Gandini). “First Week Is Editorial, Second Week Is Algorithmic: Platform Gatekeepers and the Platformization of Music Curation.” https://journals.sagepub.com/doi/full/10.1177/2056305119880006
Saving Country Music. “Billboard Must Address AI on the Charts NOW.” https://savingcountrymusic.com/billboard-must-address-ai-on-the-charts-now/
Spotify Engineering. “Humans + Machines: A Look Behind the Playlists Powered by Spotify's Algotorial Technology.” https://engineering.atspotify.com/2023/04/humans-machines-a-look-behind-spotifys-algotorial-playlists
TIME. “No, AI Artist Breaking Rust's 'Walk My Walk' Is Not a No. 1 Hit.” https://time.com/7333738/ai-country-song-breaking-rust-walk-my/
US Copyright Office. “Copyright and Artificial Intelligence Part 3: Generative AI Training.” https://www.copyright.gov/ai/
WIPO Magazine. “How AI-generated songs are fueling the rise of streaming farms.” https://www.wipo.int/en/web/wipo-magazine/articles/how-ai-generated-songs-are-fueling-the-rise-of-streaming-farms-74310
Yahoo Entertainment. “Kehlani, SZA Slam AI Artist Xania Monet's Multimillion-Dollar Record Deal.” https://www.yahoo.com/entertainment/music/articles/kehlani-sza-slam-ai-artist-203344886.html

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Roscoe's Story
In Summary: * A VERY satisfying win by IU men's basketball over Rutgers. Final score 82 to 59. As of this minute, the big winter storm hasn't hit San Antonio. It's comfortable now with 68 degrees and a light breeze. They say the cold will begin moving in overnight, and tomorrow will bring us freezing rain. We'll see about that.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Health Metrics:
* bw = 219.03
* bp = 123/77 (69)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet:
* 05:15 – toast and butter, 1 banana
* 07:10 – breakfast taco, steak, vegetables, refried beans, rice
* 10:00 – snacking on roasted chicken
* 14:00 – snack on HEB Bakery cookies
* 17:10 – big bowl of homemade vegetable and meat soup
Activities, Chores, etc.:
* 04:00 – listen to local news talk radio
* 05:20 – bank accounts activity monitored
* 06:20 – read, pray, follow news reports from various sources, surf the socials, nap
* 16:00 – tuned in to The Flagship Station for IU Sports ahead of tonight's men's basketball game between my Indiana University Hoosiers and the Rutgers University Scarlet Knights.
* 18:50 – And IU wins 82 to 59! YES!!
Chess: * 15:45 – moved in all pending CC games
from
Vino-Films
Back in the 90s, I was living in Queens, one of those neighborhoods where every block had a different vibe, different people, different smells coming out of the windows. It wasn’t fancy, but it felt safe.
Me and my siblings spent half our childhood roasting each other. That was just normal for us. That was entertainment.
My youngest brother—real quiet kid, kept to himself—one day made this little Q-tip version of me. A Q-tip man. And he made it dance around like a puppet, basically mocking me. And yeah, it annoyed me, but that’s how we were back then.
He was really into comic books at the time. There was a little comic shop a few blocks away, and kids could still walk around the neighborhood without anyone worrying.
Later that day, still annoyed from the Q-tip show, I teased him again. I don’t remember what I said, but this time he didn’t joke back. He got quiet in a different way.
Then he said, “They stole my comic books.”
Everything shifted.
I told him, “Come on, get in the car.” It was a hot day. We had a Mazda 626 with no AC.
We drove toward the comic store, and before we even reached it, he pointed into an alley. About seven kids were standing there.
The second we turned in, they scattered like roaches when the lights come on.
They ran toward a fence. Most jumped it. One husky kid couldn’t. He slipped, and I grabbed him by the collar.
I said, “You’ve got two choices. Give back the comics, or I’m calling the police.”
He froze, then said, “Walk with me.”
So we walked. My brother was still in the hot car watching.
I asked the kid why they did it, and he started acting apologetic—shoulders slumped, voice soft.
We turned a corner, and he pointed down the block. “There they are.”
His friends were dropping the comics in the middle of the street and sprinting away—ditching him completely.
That moment stuck with me.
I didn’t know I had that kind of confidence in me.
And I didn’t expect to feel anything for that kid, but seeing his friends abandon him like that stayed with me. He did something wrong, sure, but that moment showed me how alone he really was.
We picked up the comics. I let him go. That was it.
My brother and I never talked about it again. I didn’t brag, didn’t tease him.
But something shifted. He respected me differently after that… and honestly, I respected myself differently too.

from
💚
Our Father Who art in heaven
Hallowed be Thy name
Thy Kingdom come
Thy will be done on Earth as it is in heaven
Give us this day our daily Bread
And forgive us our trespasses
As we forgive those who trespass against us
And lead us not into temptation
But deliver us from evil
Amen
Jesus is Lord! Come Lord Jesus!
Come Lord Jesus! Christ is Lord!