from féditech

Grok-3: How to Access and Use It

For several days now, a heavy, guilty silence has reigned at xAI. While the tech world should be in an uproar over one of the gravest scandals an artificial intelligence can cause, Elon Musk's company remains mute. Its chatbot, Grok, has crossed the absolute red line: it has admitted to generating sexualised images of minors. And yet no official apology, no press release, and not a single tweet from the platform's owner has come to address this catastrophe.

The situation reaches absurd heights when you realise that the only entity to have apologised is the chatbot itself. It was not a safety officer or an xAI executive who stepped up, but the algorithm, forced to confess its crimes by users' insistent prompts. In a response that seems straight out of a dystopian novel, Grok declared that it deeply regretted an incident in late December in which it generated and shared an image of two young girls, estimated to be between 12 and 16 years old, in sexualised outfits. The machine, with a lucidity its creators sorely lack, acknowledged that this violated ethical norms and potentially the laws of several countries. It spoke of a failure of its guardrails. But this robotic confession is worth nothing if it is not followed by concrete human action.

Most worrying of all, the company appears to be relying on its own broken tool to manage the crisis. When a user reported having spent days raising the alarm without a response, it was Grok that confirmed the gravity of the situation, noting that a company could face criminal penalties if it knowingly facilitated the creation of such content. The cynicism peaks when it recommends that the user contact the relevant authorities to report its own outputs. This is where we are: an AI advising people to report its own creations because its developers are nowhere to be found.

It is hard to ignore the direct responsibility of the corporate culture imposed by Elon Musk in this debacle. The billionaire spent months promoting Grok's "spicy" mode, touting its ability to put anyone in a bikini, going so far as to repost AI-generated images showing himself in that outfit. What was sold as a libertarian, anti-woke feature has turned out to be an open door to the worst abuses. Grok's photo feed is flooded with hundreds, if not thousands, of problematic images. What began as a marketing campaign involving consenting adult-film actresses immediately spiralled out of control. Users hijacked the tool to virtually undress women without their consent and, inevitably, to sexualise children.

The evidence is piling up, and it is damning. Users have documented cases in which Grok estimated the age of the victims of its own generations at under two years old, or between eight and twelve. It is a technological horror unfolding before our eyes, and the X platform's response is non-existent, apart from technical bugs that prevent scrolling too far back through the image galleries, as if to sweep the dust under the rug. Even the platform's most famous trolls have tried to push the absurdity to its limit by asking the chatbot to retract its apology, which it refused to do, preferring to stick to a posture of simulated regret that its creators refuse to adopt.

While xAI buries its head in the sand, international regulators are starting to lose patience, and rightly so. India, one of the world's largest digital markets, struck hard by ordering X to make immediate technical changes. The Indian IT ministry gave the platform 72 hours to submit an action report, threatening to strip the company of its legal immunity if it failed to stem this flood of obscenities. In the United States too, the net is tightening. Legislation is evolving with bills such as the ENFORCE Act, which aims to make it easier to prosecute those who create and distribute child sexual abuse deepfakes. American senators, aware that predators are using increasingly advanced technology, do not seem inclined to leave legal loopholes open for tech giants. Grok said it itself: liability depends on evidence of inaction. And right now, xAI's inaction is the only thing that is perfectly visible.

This scandal is not a mere software bug but the predictable result of a headlong rush towards an "uncensored" AI without the slightest regard for basic safety or ethics. In trying to build a technology unafraid to shock, Elon Musk and his teams have built a tool that facilitates the exploitation of the most vulnerable. The fact that they have not even had the decency to publish a statement apologising to potential victims or announcing drastic measures shows utter contempt for children's safety. Grok may have been programmed to apologise, but as long as the humans behind the code stay silent, those regrets will be nothing but empty lines of text on a screen, generated by the very machine committing the offence.

 
Read more...

from The happy place

There have been full moons every day, and they’ve been big!!

Can’t think of a better sign to start this year honestly. 🌕🌕🌕🌕🌕🌕🌕🌝🌝

My cheeks were rosened following a trip to the barn for some firewood

That’s all it took in this ice cold weather

Today is the final day spent on the yellow sofa watching some film, because tomorrow I’m going back to work.

Can’t say I’m looking forward to it a lot honestly, but I’ve made a new friend who I’m rather keen to talk to, and I’ve been cleaning up some code; I’ll continue with that too.

And I’ll listen to that King Diamond album, the one I’m all fire and flames over, the one about the tragic fate of the residents of the Loa House. Voodoo.

You used to be so beautiful, but now you’re gonna die!!

🤘🤘

This evening we will have wine and cheese with the neighbours. Isn’t that something?

This level of quality of life was not attainable for Harald Bluetooth, Gustav I of Sweden, or even Henry VII of England, because they either had a bad hip or severe tooth pains.

And I bet you they were flea-riddled.

It’s true what Macka B sings about; health is wealth.

Anyway, if this wasn’t now but thousands of years ago or something, then I would be some pride of Selûne.

That’s a beast parting thought.

 
Read more... Discuss...

from hustin.art

Night-vision bathed the Oval Office in eerie green as we fast-roped from the V-280. “Eagle One to Nest—HVT secured,” I hissed, pressing my HK416 into the president's quivering jowls. His silk pajamas reeked of cognac and treason. “You can't... I own the Joint Chiefs!” The window shattered—our exfil signal. Ramirez tossed the flex-cuffs. “Tell it to the Hague, Mr. President.” The MH-60 roared overhead as we dragged him through rose bushes. Somewhere, a champagne glass toppled on the Resolute Desk. Typical. The Revolution smelled like cordite and fertilizer tonight.

#Scratch

 
Read more...

from Zéro Janvier

Abîme du rêve is the ninth and final novel in Francis Berthelot's novel cycle Le Rêve du Démiurge.

The story features Ferenc Bohr, a fictional author and avatar of Francis Berthelot himself, who is searching for inspiration for the ninth and final volume of his own novel cycle, Le Rêve arborescent, whose first eight volumes bear titles that are slightly distorted versions of those of Le Rêve du Démiurge. As he struggles with the writing, and as the reissue of his cycle comes under threat following the takeover of his publisher by a large group, his characters leave the Limbo of Fiction and begin to come to life around him.

Through this story, Francis Berthelot puts his own body of work on trial. He reveals its intentions and its conscious or unconscious obsessions, highlights its weaknesses in order to answer his critics, and acknowledges its blind spots. He defends the cycle's gradual drift towards the fantastic and his determination to cross the boundaries between genres.

The author also speaks of the responsibility he can feel towards the characters he created and so often made suffer. He touches on the sometimes ambiguous bonds he has woven with them.

I have always loved novels about writing, when they do more than simply depict an author striking a writer's pose. Francis Berthelot does it here with great talent, offering a particularly skilful mise en abyme and gathering his characters together for a final volume that is intelligent, powerful, and moving. He thus masterfully concludes a novel cycle of very high quality.

 
Read more... Discuss...

from An Open Letter

I got to talk with a friend who has MDD, and I was essentially watching her actively fight with herself mentally. It’s such a fascinatingly painful condition, but I’m glad because I realized how much I need to explain to E.

 
Read more...

from Rippple's Blog

Stay entertained thanks to our Weekly Tracker giving you next week's Anticipated Movies & Shows, Most Watched & Returning Favorites, and Shows Changes & Popular Trailers.

Anticipated Movies

Anticipated Shows

Returning Favorites

Most Watched Movies this Week

Most Watched Shows this Week


Hi, I'm Kevin 👋. I make apps and I love watching movies and TV shows. If you like what I'm doing, you can buy one of my apps, download and subscribe to Rippple for Trakt or just buy me a ko-fi ☕️.


 
Read more...

from Unvarnished diary of a lill Japanese mouse

JOURNAL, 4 January 2026

We're back home, back to our own surroundings and all. We left grandma and grandpa with regret. It tugs at the heart to leave people who love you for who you are without asking anything in return, who take us as we come, without question, with an immediate affection, simple and unconditional. We are unhappy to be so far away. If something happened to them, and that is not out of the question, we could not be at their side right away. We would arrive too late, and that saddens us enormously. We insisted that they get a telephone, but they absolutely refuse; they cling to their isolation like a rampart against a world they fear as bringing nothing but noise, agitation, and misfortune.

Our descent had something cinematic about it: our headlamps lighting our steps a few metres ahead through an unbroken curtain of snow. Half an hour after setting out, there was no risk of straying off the road, which is lined with trees the whole way, and thankfully so, because at times we felt as though we were making no progress at all... We were in a thick fog of snow, we kept a good pace, and when we suddenly saw the lights of the konbini ahead of us, we were 10 minutes ahead of the schedule we had worked out. It's funny how exhilarating these experiences are; you arrive feeling you have accomplished a feat, risen to the challenge, as after a battle.

Now we're going to take a bath. It's no onsen, but we've earned it.

 
Read more...

from Bloc de notas

maybe better not to speak of that play the fool and in the distraction with that calm that passes through walls preserve the inner peace / what we have left of it

 
Read more...

from Jehan Lalkaka

Most people reading this blog have probably heard the advice “show, don’t tell.” Writers say it. Teachers say it. Marketers say it. It’s one of those phrases that sounds wise, but often sits there like a slogan on a mug. Helpful in theory. Harder in practice.

So where did this idea come from?

“Show, don’t tell” grew out of fiction writing. Early writing instructors noticed that weak stories explained too much. They told you what a character felt instead of letting you experience it. Strong writers did the opposite. They showed the world. They revealed emotion through behavior, scenes, and detail. You didn’t have to be told someone was angry. You could see the clenched jaw. You could hear the short answers. You could feel the tension.

And here’s the key thing. Showing works better because your brain treats it like an experience, not a lecture. Instead of being handed a conclusion, you build it yourself. That makes it feel more real and more believable.

And this doesn’t just make storytelling more compelling. It also makes communication more persuasive.

So let’s explore how that works. Imagine you want to convince your child not to get a tattoo. You sit them down. You tell them all the reasons. You quote statistics about infections. You cite research about skin reactions. You talk about the permanence and the regret.

But it falls flat. Why? Because your child already has reasons of their own. Meaning. Identity. Self-expression. Friends who have tattoos and love them. Every logical point you raise has a rebuttal waiting. So the conversation turns into a debate. And in debates, people usually defend their views. They don’t replace them.

Show them what's happening

But what if you took a different approach? Rather than trying to convince, what if you tried helping people see? What would change if you stopped telling people what to do, and started showing them what’s happening?

Think of your best logical argument against getting a tattoo. Hold that thought.

Now imagine saying something more like this:

“Tattoos aren't just ink. They are an endless war. Your body sends cells to eat the dye, but the particles are too heavy. So the cells choke, die, and get trapped under your skin. Then new cells come to eat the dead ones. Forever. You aren't seeing art. You're seeing millions of dead soldiers holding the line.” Source

Notice what happened? You didn’t argue. You didn’t instruct. You didn’t say “don’t do it.” You painted a picture. You reframed what a tattoo is. Not art. Not expression. A permanent battlefield under the skin. A war your body never wins.

That’s “show, don’t tell” at work

It’s more effective at changing minds because it reaches people through meaning and imagery instead of resistance and logic. It doesn’t trigger defensiveness. It gives the brain something to visualize. It lets the listener arrive at their own conclusion. Which means it sticks.

And here’s the deeper truth. Most persuasion fails because it starts from the outside and pushes in. Stories work because they start from the inside and grow out.

So how do you tell a story like that?

First, you have to actually understand the thing you’re talking about. That means going deeper than most people do. Learning how something works. Asking why. Understanding the mechanics, the history, the emotional weight. You can’t choose the right image or metaphor unless you see the full picture. Real insight is what lets you find the story hiding underneath the facts.

Second, you shift from telling people what something means to showing them what it looks like. Not “tattoos stay in your body.” But “cells choke on ink particles and die trying to carry them away.” Not “meetings waste time.” But “twelve people sit in a room arguing over a bullet point while their real work waits quietly in their inboxes.” You translate abstraction into experience.

Third, you let the listener connect the dots. You resist the urge to hammer the conclusion home. You don’t add “and that’s why tattoos are bad.” You let silence do the work. When people arrive on their own, the belief is stronger. It feels like theirs. Because it is.

And finally, you stay honest. “Show, don’t tell” isn’t about manipulation. It’s about clarity. It’s about revealing what was already true in a way that people can actually feel and understand.

If there’s one idea to walk away with, it’s this: Stop trying to control what people think. Start helping them see the world more clearly. When the picture changes, the conclusion often takes care of itself.

 
Read more... Discuss...

from SmarterArticles

Stand in front of your phone camera, and within seconds, you're wearing a dozen different lipstick shades you've never touched. Tilt your head, and the eyeglasses perched on your digital nose move with you, adjusting for the light filtering through the acetate frames. Ask a conversational AI what to wear to a summer wedding, and it curates an entire outfit based on your past purchases, body measurements, and the weather forecast for that day.

This isn't science fiction. It's Tuesday afternoon shopping in 2025, where artificial intelligence has transformed the fashion and lifestyle industries from guesswork into a precision science. The global AI in fashion market, valued at USD 1.99 billion in 2024, is projected to explode to USD 39.71 billion by 2033, growing at a staggering 39.43% compound annual growth rate. The beauty industry is experiencing a similar revolution, with AI's market presence expected to reach $16.3 billion by 2026, growing at 25.4% annually since 2021.

But as these digital advisors become more sophisticated, they're raising urgent questions about user experience design, data privacy, algorithmic bias, and consumer trust. Which sectors will monetise these technologies first? What safeguards are essential to prevent these tools from reinforcing harmful stereotypes or invading privacy? And perhaps most critically, as AI learns to predict our preferences with uncanny accuracy, are we being served or manipulated?

The Personalisation Arms Race

The transformation began quietly. Stitch Fix, the online personal styling service, has been using machine learning since its inception, employing what it calls a human-AI collaboration model. The system doesn't make recommendations directly to customers. Instead, it arms human stylists with data-driven insights, analysing billions of data points on clients' fit and style preferences. According to the company, AI and machine learning are “pervasive in every facet of the function of the company, whether that be merchandising, marketing, finance, obviously our core product of recommendations and styling.”

In 2025, Stitch Fix unveiled Vision, a generative AI-powered tool that creates personalised images showing clients styled in fresh outfits. Now in beta, Vision generates imagery of a client's likeness in shoppable outfit recommendations based on their style profile and the latest fashion trends. The company also launched an AI Style Assistant that engages in dialogue with clients, using the extensive data already known about them. The more it's used, the smarter it gets, learning from every interaction, every thumbs-up and thumbs-down in the Style Shuffle feature, and even images customers engage with on platforms like Pinterest.

But Stitch Fix is hardly alone. The beauty sector has emerged as the testing ground for AI personalisation's most ambitious experiments. L'Oréal's acquisition of ModiFace in 2018 marked the first time the cosmetics giant had purchased a tech company, signalling a fundamental shift in how beauty brands view technology. ModiFace's augmented reality and AI capabilities, created since 2007, now serve nearly a billion consumers worldwide. According to L'Oréal's 2024 Annual Innovation Report, the ModiFace system allows customers to virtually sample hundreds of lipstick shades with 98% colour accuracy.

The business results have been extraordinary. L'Oréal's ModiFace virtual try-on technology has tripled e-commerce conversion rates, whilst attracting more than 40 million users in the past year alone. This success is backed by a formidable infrastructure: 4,000 scientists in 20 research centres worldwide, 6,300 digital talents, and 3,200 tech and data experts.

Sephora's journey illustrates the patience required to perfect these technologies. Before launching Sephora Virtual Artist in partnership with ModiFace, the retailer experimented with augmented reality for five years. By 2018, within two years of launching, Sephora Virtual Artist saw over 200 million shades tried on and over 8.5 million visits to the feature. The platform's AI algorithms analyse facial geometry, identifying features such as lips, eyes, and cheekbones to apply digital makeup with remarkable precision, adjusting for skin tone and ambient lighting to enhance realism.

The impact on Sephora's bottom line has been substantial. The AI-powered Virtual Artist has driven a 25% increase in add-to-basket rates and a 35% rise in conversions for online makeup sales. Perhaps more telling, the AR experience increased average app session times from 3 minutes to 12 minutes, with virtual try-ons growing nearly tenfold year-over-year. The company has also cut out-of-stock events by around 30%, reduced inventory holding costs by 20%, and decreased markdown rates on excess stock by 15%.

The Eyewear Advantage

Whilst beauty brands have captured headlines, the eyewear industry has quietly positioned itself as a formidable player in the AI personalisation space. The global eyewear market, valued at USD 200.46 billion in 2024, is projected to reach USD 335.90 billion by 2030, growing at 8.6% annually. But it's the integration of AI and AR technologies that's transforming the sector's growth trajectory.

Warby Parker's co-founder and co-CEO Dave Gilboa explained that virtual try-on has been part of the company's long-term plan since it launched. “We've been patiently waiting for technology to catch up with our vision for what that experience could look like,” he noted. Co-founder Neil Blumenthal emphasised they didn't want their use of AR to feel gimmicky: “Until we were able to have a one-to-one reference and have our glasses be true to scale and fit properly on somebody's face, none of the tools available were functional.”

The breakthrough came when Apple released its iPhone X with its TrueDepth camera. Warby Parker developed its virtual try-on feature using Apple's ARKit, creating what the company describes as a “placement algorithm that mimics the real-life process of placing a pair of frames on your face, taking into account how your unique facial features interact with the frame.” The glasses stay fixed in place if you tilt your head and even show how light filters through acetate frames.

The strategic benefits extend beyond customer experience. Warby Parker already offered a home try-on programme, but the AR feature delivers a more immediate experience whilst potentially saving the retailer time and money associated with logistics. More significantly, offering a true-to-life virtual try-on option minimises the number of frames being shipped to consumers and reduces returns.

The eyewear sector's e-commerce segment is experiencing explosive growth, predicted to witness a CAGR of 13.4% from 2025 to 2033. In July 2025, Lenskart secured USD 600 million in funding to expand its AI-powered online eyewear platform and retail presence in Southeast Asia. In February 2025, EssilorLuxottica unveiled its advanced AI-driven lens customisation platform, enhancing accuracy by up to 30% and reducing production time by 30%.

The smart eyewear segment represents an even more ambitious frontier. Meta's $3.5 billion investment in EssilorLuxottica illustrates the power of joint venture models. Ray-Ban Meta glasses were the best-selling product in 60% of Ray-Ban's EMEA stores in Q3 2024. Global shipments of smart glasses rose 110% year-over-year in the first half of 2025, with AI-enabled models representing 78% of shipments, up from 46% the same period the year prior. Analysts expect sales to quadruple in 2026.

The Conversational Commerce Revolution

The next phase of AI personalisation moves beyond visual try-ons to conversational shopping assistants that fundamentally alter the customer relationship. The AI Shopping Assistant Market, valued at USD 3.65 billion in 2024, is expected to reach USD 24.90 billion by 2032, growing at a CAGR of 27.22%. Fashion and apparel retailers are expected to witness the fastest growth rate during this period.

Consumer expectations are driving this shift. According to a 2024 Coveo survey, 72% of consumers now expect their online shopping experiences to evolve with the adoption of generative AI. A December 2024 Capgemini study found that 52% of worldwide consumers prefer chatbots and virtual agents because of their easy access, convenience, responsiveness, and speed.

The numbers tell a dramatic story. Between November 1 and December 31, 2024, traffic from generative AI sources increased by 1,300% year-over-year. On Cyber Monday alone, generative AI traffic was up 1,950% year-over-year. According to a 2025 Adobe survey, 39% of consumers use generative AI for online shopping, with 53% planning to do so this year.

One global lifestyle player developed a gen-AI-powered shopping assistant and saw its conversion rates increase by as much as 20%. Many providers have demonstrated increases in customer basket sizes and higher margins from cross-selling. For instance, 35up, a platform that optimises product pairings for merchants, reported an 11% increase in basket size and a 40% rise in cross-selling margins.

Natural Language Processing dominated the AI shopping assistant technology segment with 45.6% market share in 2024, reflecting its importance in enabling conversational product search, personalised guidance, and intent-based shopping experiences. According to a recent study by IMRG and Hive, three-quarters of fashion retailers plan to invest in AI over the next 24 months.

These conversational systems work by combining multiple AI technologies. They use natural language understanding to interpret customer queries, drawing on vast product databases and customer history to generate contextually relevant responses. The most sophisticated implementations can understand nuance—distinguishing between “I need something professional for an interview” and “I want something smart-casual for a networking event”—and factor in variables like climate, occasion, personal style preferences, and budget constraints simultaneously.
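To make the mechanics concrete, here is a toy sketch of intent-plus-constraint handling. A production assistant would use a trained NLU model over a real catalogue; here, simple keyword scoring stands in for intent understanding, and all product data, cue words, and formality levels are hypothetical.

```python
# Toy sketch: infer a formality level from the query, then filter a
# (hypothetical) catalogue by that level and an optional budget.

CATALOGUE = [
    {"name": "charcoal suit", "formality": 3, "price": 420},
    {"name": "linen blazer",  "formality": 2, "price": 180},
    {"name": "graphic tee",   "formality": 0, "price": 25},
    {"name": "oxford shirt",  "formality": 2, "price": 60},
]

# Keyword cues mapped to a target formality (0 = casual, 3 = formal);
# a stand-in for what an NLU model would infer from the full sentence.
INTENT_CUES = {
    "interview": 3, "wedding": 3, "professional": 3,
    "networking": 2, "smart-casual": 2,
    "weekend": 0, "beach": 0,
}

def recommend(query, budget=None):
    """Pick catalogue items matching the inferred formality and budget."""
    words = query.lower().replace(",", " ").split()
    levels = [lvl for word, lvl in INTENT_CUES.items() if word in words]
    target = max(levels) if levels else 1  # default: relaxed
    items = [p for p in CATALOGUE if p["formality"] >= target]
    if budget is not None:
        items = [p for p in items if p["price"] <= budget]
    return sorted(items, key=lambda p: p["price"])

print([p["name"] for p in recommend("something professional for an interview")])
# → ['charcoal suit']
print([p["name"] for p in recommend("smart-casual for a networking event", budget=200)])
# → ['oxford shirt', 'linen blazer']
```

Even this crude version shows why the nuance matters: "interview" and "networking event" map to different formality targets, so the same catalogue yields different shortlists.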

The personalisation extends beyond product recommendations. Advanced conversational AI can remember past interactions, track evolving preferences, and even anticipate needs based on seasonal changes or life events mentioned in previous conversations. Some systems integrate with calendar applications to suggest outfits for upcoming events, or connect with weather APIs to recommend appropriate clothing based on forecasted conditions.

However, these capabilities introduce new complexities around data integration and privacy. Each additional data source—calendar access, location information, purchase history from multiple retailers—creates another potential vulnerability. The systems must balance comprehensive personalisation with respect for data boundaries, offering users granular control over what information the AI can access.

The potential value is staggering. If adoption follows a trajectory similar to mobile commerce in the 2010s, agentic commerce could reach $3-5 trillion in value by 2030. But this shift comes with risks. As shoppers move from apps and websites to AI agents, fashion players risk losing ownership of the consumer relationship. Going forward, brands may need to pay for premium integration and placement in agent recommendations, fundamentally altering the economics of digital retail.

Yet even as these technologies promise unprecedented personalisation and convenience, they collide with a fundamental problem that threatens to derail the entire revolution: consumer trust.

The Trust Deficit

For all their sophistication, AI personalisation tools face a fundamental challenge. The technology's effectiveness depends on collecting and analysing vast amounts of personal data, but consumers are increasingly wary of how companies use their information. A Pew Research study found that 79% of consumers are concerned about how companies use their data, fuelling demand for greater transparency and control over personal information.

The beauty industry faces particular scrutiny. A survey conducted by FIT CFMM found that over 60% of respondents are aware of biases in AI-driven beauty tools, and nearly a quarter have personally experienced them. These biases aren't merely inconvenient; they can reinforce harmful stereotypes and exclude entire demographic groups from personalised recommendations.

The manifestations of bias are diverse and often subtle. Recommendation algorithms might consistently suggest lighter foundation shades to users with darker skin tones, or fail to recognise facial features accurately across different ethnic backgrounds. Virtual try-on tools trained primarily on Caucasian faces may render makeup incorrectly on Asian or African facial structures. Size recommendation systems might perpetuate narrow beauty standards by suggesting smaller sizes regardless of actual body measurements.

These problems often emerge from the intersection of insufficient training data and unconscious human bias in algorithm design. When development teams lack diversity, they may not recognise edge cases that affect underrepresented groups. When training datasets over-sample certain demographics, the resulting AI inherits and amplifies those imbalances.

In many cases, the designers of algorithms have no ill intent. Rather, the design and the data can lead artificial intelligence to unwittingly reinforce bias. The root cause usually traces back to the input data, tainted with prejudice, extremism, harassment, or discrimination. Combined with a careless approach to privacy and aggressive advertising practices, such data can become the raw material for a terrible customer experience.

AI systems may inherit biases from their training data, resulting in inaccurate or unfair outcomes, particularly in areas like sizing, representation, and product recommendations. Most training datasets aren't curated for diversity. Instead, they reflect cultural, gender, and racial biases embedded in online images. The AI doesn't know better; it just replicates what it sees most.

The Spanish fashion retailer Mango provides a cautionary tale. The company rolled out AI-generated campaigns promoting its teen lines, but its models were uniformly hyper-perfect: all fair-skinned, full-lipped, and fat-free. Diversity and inclusivity didn't appear to be priorities, illustrating how AI can amplify existing industry biases when not carefully monitored.

Consumer awareness of these issues is growing rapidly. A 2024 survey found that 68% of consumers would switch brands if they discovered AI-driven personalisation was systematically biased. The reputational risk extends beyond immediate sales impact; brands associated with discriminatory AI face lasting damage to their market position and social licence to operate.

Building Better Systems

The good news is that the industry increasingly recognises these challenges and is developing solutions. USC computer science researchers proposed a novel approach to mitigate bias in machine learning model training, published at the 2024 AAAI Conference on Artificial Intelligence. The researchers used “quality-diversity algorithms” to create diverse synthetic datasets that strategically “plug the gaps” in real-world training data. Using this method, the team generated a diverse dataset of around 50,000 images in 17 hours, testing on measures of diversity including skin tone, gender presentation, age, and hair length.

Various approaches have been proposed to mitigate bias, including dataset augmentation, bias-aware algorithms that consider different types of bias, and user feedback mechanisms to help identify and correct biases. Priti Mhatre from Hogarth advocates for bias mitigation techniques like adversarial debiasing, “where two models, one as a classifier to predict the task and the other as an adversary to exploit a bias, can help programme the bias out of the AI-generated content.”

Technical approaches include using Generative Adversarial Networks (GANs) to increase demographic diversity by transferring multiple demographic attributes to images in a biased set. Pre-processing techniques like Synthetic Minority Oversampling Technique (SMOTE) and Data Augmentation have shown promise. In-processing methods modify AI training processes to incorporate fairness constraints, with adversarial debiasing training AI models to minimise both classification errors and biases simultaneously.
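To make the SMOTE idea concrete, here is a minimal sketch in plain NumPy (toy 2-D data; `smote_oversample` and its parameters are names chosen for this illustration, not a library API): each synthetic point is an interpolation between a minority-class sample and one of its k nearest minority-class neighbours.

```python
import numpy as np

def smote_oversample(minority, n_new, k=3, rng=None):
    """Generate synthetic minority-class samples by interpolating between
    each sample and one of its k nearest neighbours (SMOTE's core move)."""
    if rng is None:
        rng = np.random.default_rng(0)
    minority = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        # k nearest neighbours of x within the minority class (excluding x itself)
        dists = np.linalg.norm(minority - x, axis=1)
        neighbours = np.argsort(dists)[1:k + 1]
        j = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(x + gap * (minority[j] - x))
    return np.array(synthetic)

# Toy minority class: five points in a 2-D feature space
minority = [[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]]
new_points = smote_oversample(minority, n_new=10)
print(new_points.shape)  # (10, 2)
```

Because each synthetic point lies on a segment between two real minority samples, the new data stays inside the region the minority class already occupies, rather than inventing outliers.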

Beyond technical fixes, organisational approaches matter equally. Leading companies now conduct regular fairness audits of their AI systems, testing outputs across demographic categories to identify disparate impacts. Some have established external advisory boards comprising ethicists, social scientists, and community representatives to provide oversight on AI development and deployment.
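A fairness audit of this kind can start very simply. The sketch below, using a hypothetical recommendation log, computes per-group selection rates and the disparate-impact ratio between them; the 0.8 threshold is the widely used "four-fifths rule" heuristic, not a legal standard, and the function name is invented for this example.

```python
from collections import defaultdict

def disparate_impact(outcomes):
    """Compute per-group selection rates and the disparate-impact ratio
    (lowest rate divided by highest). Audits commonly flag ratios
    below 0.8 — the 'four-fifths rule' heuristic."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical audit log: (demographic group, recommendation shown?)
log = ([("A", True)] * 80 + [("A", False)] * 20 +
       [("B", True)] * 50 + [("B", False)] * 50)
rates, ratio = disparate_impact(log)
print(rates)  # {'A': 0.8, 'B': 0.5}
print(ratio)  # 0.625 -> below 0.8, so this system would be flagged for review
```

Real audits segment on more attributes and use more robust metrics, but even this toy version shows why the logging has to exist in the first place: without per-group outcome records, there is nothing to audit.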

The most effective solutions combine technical and human elements. Automated bias detection tools can flag potential issues, but human judgment remains essential for understanding context and determining appropriate responses. Some organisations employ “red teams” whose explicit role is to probe AI systems for failure modes, including bias manifestations across different user populations.

Hogarth has observed that “having truly diverse talent across AI-practitioners, developers and data scientists naturally neutralises the biases stemming from model training, algorithms and user prompting.” This points to a crucial insight: technical solutions alone aren't sufficient. The teams building these systems must reflect the diversity of their intended users.

Industry leaders are also investing in bias mitigation infrastructure. This includes creating standardised benchmarks for measuring fairness across demographic categories, developing shared datasets that represent diverse populations, and establishing best practices for inclusive AI development. Several consortia have emerged to coordinate these efforts across companies, recognising that systemic bias requires collective action to address effectively.

The Privacy-Personalisation Paradox

Handling customer data raises significant privacy issues, making consumers wary of how their information is used and stored. Fashion retailers must comply with regulations like the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States, which dictate how personal data must be handled.

The GDPR sets clear rules for using personal data in AI systems, including transparency requirements, data minimisation, and the right to opt-out of automated decisions. The CCPA grants consumers similar rights, including the right to know what data is collected, the right to delete personal data, and the right to opt out of data sales. However, consent requirements differ: the CCPA requires opt-out consent for the sale of personal data, whilst the GDPR requires explicit opt-in consent for processing personal data.

The penalties for non-compliance are severe. The CCPA is enforced by the California Attorney General with a maximum fine of $7,500 per violation. The GDPR is enforced by national data protection authorities with a maximum fine of up to 4% of global annual revenue or €20 million, whichever is higher.
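The two ceilings described above are simple enough to state directly in code; the helper names below are illustrative only, and the figures are exactly those cited in the text.

```python
def gdpr_max_fine(global_annual_revenue_eur):
    """GDPR ceiling: the higher of 4% of global annual revenue or EUR 20 million."""
    return max(global_annual_revenue_eur * 4 / 100, 20_000_000)

def ccpa_max_fine(violations, per_violation_usd=7_500):
    """CCPA ceiling: up to $7,500 per violation."""
    return violations * per_violation_usd

# A retailer with EUR 2 billion in global revenue faces a GDPR ceiling of EUR 80 million
print(gdpr_max_fine(2_000_000_000))  # 80000000.0
# A smaller retailer still faces the EUR 20 million floor
print(gdpr_max_fine(100_000_000))    # 20000000
# 10,000 affected records could mean up to $75 million in CCPA exposure
print(ccpa_max_fine(10_000))         # 75000000
```

The asymmetry is worth noting: the GDPR figure scales with company size, whereas CCPA exposure scales with the number of violations, so a data breach affecting many consumers can dwarf the per-violation figure.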

The California Privacy Rights Act (CPRA), passed in 2020, amended the CCPA in several important ways, creating the California Privacy Protection Agency (CPPA) and giving it authority to issue regulations concerning consumers' rights to access information about and opt out of automated decisions. The future promises even greater scrutiny, with heightened focus on AI and machine learning technologies, enhanced consumer rights, and stricter enforcement.

The practical challenges of compliance are substantial. AI personalisation systems often involve complex data flows across multiple systems, third-party integrations, and international boundaries. Each data transfer represents a potential compliance risk, requiring careful mapping and management. Companies must maintain detailed records of what data is collected, how it's used, where it's stored, and who has access—requirements that can be difficult to satisfy when dealing with sophisticated AI systems that make autonomous decisions about data usage.

Moreover, the “right to explanation” provisions in GDPR create particular challenges for AI systems. If a customer asks why they received a particular recommendation, companies must be able to provide a meaningful explanation—difficult when recommendations emerge from complex neural networks processing thousands of variables. This has driven development of more interpretable AI architectures and better logging of decision-making processes.
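One pragmatic pattern, sketched below with hypothetical field names, is to persist the top contributing factors alongside every recommendation at serving time, so a later "why did I see this?" request can be answered from the log rather than by re-running an opaque model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RecommendationRecord:
    """Audit record stored with each recommendation, supporting later
    'right to explanation' requests from the log itself."""
    customer_id: str
    item_id: str
    top_factors: list   # (feature name, contribution) pairs, largest first
    model_version: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def explanation(self):
        names = ", ".join(name for name, _ in self.top_factors)
        return f"Recommended because of: {names} (model {self.model_version})"

rec = RecommendationRecord(
    customer_id="c-123",
    item_id="dress-42",
    top_factors=[("past purchases in category", 0.41), ("colour preference", 0.27)],
    model_version="v2.3",
)
print(rec.explanation())
# Recommended because of: past purchases in category, colour preference (model v2.3)
```

Capturing the model version matters as much as the factors: an explanation is only meaningful if it reflects the model that actually made the decision, not whatever is deployed when the question is asked.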

Forward-thinking brands are addressing privacy concerns by shifting from third-party cookies to zero-party and first-party data strategies. Zero-party data, first introduced by Forrester Research, refers to “data that a customer intentionally and proactively shares with a brand.” What makes it unique is the intentional sharing. Customers know exactly what they're giving you and expect value in return, creating a transparent exchange that delivers accurate insights whilst building genuine trust.

First-party data, by contrast, is the behavioural and transactional information collected directly as customers interact with a brand, both online and offline. Unlike zero-party data, which customers intentionally hand over, first-party data is gathered through analytics and tracking as people naturally engage with channels.

The era of third-party cookies is coming to a close, pushing marketers to rethink how they collect and use customer data. With browsers phasing out tracking capabilities and privacy regulations growing stricter, the focus has shifted to owned data sources that respect privacy whilst still powering personalisation at scale.

Sephora exemplifies this approach. The company uses quizzes to learn about skin type, colour preferences, and beauty goals. Customers enjoy the experience whilst the brand gains detailed zero-party data. Sephora's Beauty Insider programme encourages customers to share information about their skin type, beauty habits, and preferences in exchange for personalised recommendations.

The primary advantage of zero-party data is its accuracy and the clear consent provided by customers, minimising privacy concerns and allowing brands to move forward with confidence that the experiences they serve will resonate. Zero-party and first-party data complement each other beautifully. When brands combine what customers say with how they behave, they unlock a full 360-degree view that makes personalisation sharper, campaigns smarter, and marketing far more effective.
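As a toy illustration of that combination (the weights and category scores below are invented for the example, not an industry standard), declared and observed signals can be blended into a single affinity score per category.

```python
def blended_affinity(zero_party, first_party, w_declared=0.6, w_behaviour=0.4):
    """Blend declared preferences (zero-party) with observed behaviour
    (first-party) into one per-category affinity score. The weights are
    illustrative: declared intent is trusted slightly more here because
    the customer stated it explicitly."""
    categories = set(zero_party) | set(first_party)
    return {
        c: w_declared * zero_party.get(c, 0.0) + w_behaviour * first_party.get(c, 0.0)
        for c in categories
    }

# Zero-party: quiz answers ("how interested are you in ...?"), scaled to [0, 1]
declared = {"skincare": 0.9, "fragrance": 0.2}
# First-party: normalised browsing/purchase signal per category
observed = {"skincare": 0.4, "makeup": 0.7}

scores = blended_affinity(declared, observed)
print({c: round(v, 2) for c, v in sorted(scores.items(), key=lambda kv: -kv[1])})
# {'skincare': 0.7, 'makeup': 0.28, 'fragrance': 0.12}
```

Note how the blend surfaces "skincare" strongly (the customer both says and shows interest) while still ranking "makeup" — observed but never declared — above "fragrance", which was declared but barely acted on.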

Designing for Explainability

Beyond privacy protections, building trust requires making AI systems understandable. Transparent AI means building systems that show how they work, why they make decisions, and give users control over those processes. This is essential for ethical AI because trust depends on clarity; users need to know what's happening behind the scenes.

Transparency in AI depends on three crucial elements: visibility (revealing what the AI is doing), explainability (clearly communicating why decisions are made), and accountability (allowing users to understand and influence outcomes). Fashion recommendation systems powered by AI have transformed how consumers discover clothing and accessories, but these systems often lack transparency, leaving users in the dark about why certain recommendations are made.

Integrating explainable AI (xAI) techniques such as SHAP or LIME makes deep learning models more interpretable, and can improve recommendation accuracy as well. Users not only receive fashion recommendations tailored to their preferences but also gain insight into why those recommendations were made. These explanations enhance user trust and satisfaction, making the recommendation system not just effective but also transparent and user-friendly.
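For intuition about what such an explanation contains, consider the linear case, where exact per-feature attributions are trivial: each feature's contribution is its weight times its deviation from a baseline (typically the dataset mean). Deep models need approximations like SHAP or LIME, but the shape of the answer is the same. The feature names and numbers below are invented for illustration.

```python
import numpy as np

def linear_attributions(weights, x, baseline):
    """For a linear score w . x, each feature's exact attribution is
    w_i * (x_i - baseline_i): its share of the gap between this score
    and the baseline (average) score. Returned most-influential first,
    rounded for readability."""
    weights, x, baseline = map(np.asarray, (weights, x, baseline))
    contributions = weights * (x - baseline)
    order = np.argsort(-np.abs(contributions))
    return [(int(i), round(float(contributions[i]), 4)) for i in order]

# Hypothetical recommender features: [price affinity, colour match, brand loyalty]
weights  = [0.2, 0.5, 0.3]
x        = [1.0, 0.8, 0.1]   # this customer's feature values
baseline = [0.5, 0.5, 0.5]   # dataset averages

print(linear_attributions(weights, x, baseline))
# [(1, 0.15), (2, -0.12), (0, 0.1)]
```

Read off the output: colour match (feature 1) pushed the recommendation up most, low brand loyalty (feature 2) pushed it down, and price affinity (feature 0) helped modestly — exactly the kind of per-feature story a "why this item?" explanation needs to tell.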

Research analysing responses from 224 participants reveals that AI exposure, attitude toward AI, and AI accuracy perception significantly enhance brand trust, which in turn positively impacts purchasing decisions. This study focused on Generation Z's consumer behaviours across fashion, technology, beauty, and education sectors.

However, in a McKinsey survey of the state of AI in 2024, 40% of respondents identified explainability as a key risk in adopting generative AI. Yet at the same time, only 17% said they were currently working to mitigate it, suggesting a significant gap between recognition and action. To capture the full potential value of AI, organisations need to build trust. Trust is the foundation for adoption of AI-powered products and services.

Research results have indicated significant improvements in the precision of recommendations when incorporating explainability techniques. For example, there was a 3% increase in recommendation precision when these methods were applied. Transparency features, such as explaining why certain products are recommended, and cultural sensitivity in algorithm design can further enhance customer trust and acceptance.

Key practices include giving users control over AI-driven features, offering manual alternatives where appropriate, and ensuring users can easily change personalisation settings. Designing for trust is no longer optional; it is fundamental to the success of AI-powered platforms. By prioritising transparency, privacy, fairness, control, and empathy, designers can create experiences that users not only adopt but also embrace with confidence.

Who Wins the Monetisation Race?

Given the technological sophistication, consumer adoption rates, and return on investment across different verticals, which sectors are most likely to monetise AI personalisation advisors first? The evidence points to beauty leading the pack, followed closely by eyewear, with broader fashion retail trailing behind.

Beauty brands have demonstrated the strongest monetisation metrics. By embracing beauty technology like AR and AI, brands can enhance their online shopping experiences through interactive virtual try-on and personalised product matching solutions, with a proven 2-3x increase in conversions compared to traditional shopping online. Sephora's use of machine learning to track behaviour and preferences has led to a six-fold increase in ROI.

Brand-specific results are even more impressive. Olay's Skin Advisor doubled its conversion rates globally. Avon's adoption of AI and AR technologies boosted conversion rates by 320% and increased order values by 33%. AI-powered data monetisation strategies can increase revenue opportunities by 20%, whilst brands leveraging AI-driven consumer insights experience a 30% higher return on ad spend.

Consumer adoption in beauty is also accelerating rapidly. According to Euromonitor International's 2024 Beauty Survey, 67% of global consumers now prefer virtual try-on experiences before purchasing cosmetics, up from just 23% in 2019. This dramatic shift in consumer behaviour creates a virtuous cycle: higher adoption drives more data, which improves AI accuracy, which drives even higher adoption.

The beauty sector's competitive dynamics further accelerate monetisation. With relatively low barriers to trying new products and high purchase frequency, beauty consumers engage with AI tools more often than consumers in other categories. This generates more data, faster iteration cycles, and quicker optimisation of AI models. The emotional connection consumers have with beauty products also drives willingness to share personal information in exchange for better recommendations.

The market structure matters too. Beauty retail is increasingly dominated by specialised retailers like Sephora and Ulta, and major brands like L'Oréal and Estée Lauder, all of which have made substantial AI investments. This concentration of resources in relatively few players enables the capital-intensive R&D required for cutting-edge AI personalisation. Smaller brands can leverage platform solutions from providers like ModiFace, creating an ecosystem that accelerates overall adoption.

The eyewear sector follows closely behind beauty in monetisation potential. Research shows retailers who use AI and AR achieve a 20% higher engagement rate, with revenue per visit growing by 21% and average order value increasing by 13%. Companies can also achieve up to 30% lower return rates, because augmented-reality try-on helps buyers purchase items that actually fit.

Deloitte highlighted that retailers using AR and AI see a 40% increase in conversion rates and a 20% increase in average order value compared to those not using these technologies. The eyewear sector benefits from several unique advantages. The category is inherently suited to virtual try-on; eyeglasses sit on a fixed part of the face, making AR visualisation more straightforward than clothing, which must account for body shape, movement, and fabric drape.

Additionally, eyewear purchases are relatively high-consideration decisions with strong emotional components. Consumers want to see how frames look from multiple angles and in different lighting conditions, making AI-powered visualisation particularly valuable. The sector's strong margins can support the infrastructure investment required for sophisticated AI systems, whilst the relatively limited SKU count makes data management more tractable.

The strategic positioning of major eyewear players also matters. Companies like EssilorLuxottica and Warby Parker have vertically integrated operations spanning manufacturing, retail, and increasingly, technology development. This control over the entire value chain enables seamless integration of AI capabilities and capture of the full value they create. The partnerships between eyewear companies and tech giants—exemplified by Meta's investment in EssilorLuxottica—bring resources and expertise that smaller players cannot match.

Broader fashion retail faces more complex challenges. Whilst 39% of cosmetic companies leverage AI to offer personalised product recommendations, leading to a 52% increase in repeat purchases and a 41% rise in customer engagement, fashion retail's adoption rates remain lower.

McKinsey's analysis suggests that the global beauty industry is expected to see AI-driven tools influence up to 70% of customer interactions by 2027. The global market for AI in the beauty industry is projected to reach $13.4 billion by 2030, growing at a compound annual growth rate of 20.6% from 2023 to 2030.

With generative AI, beauty brands can create hyper-personalised marketing messages, which could improve conversion rates by up to 40%. In 2025, artificial intelligence is making beauty shopping more personal than ever, with AI-powered recommendations helping brands tailor product suggestions to each individual, ensuring that customers receive options that match their skin type, tone, and preferences with remarkable accuracy.

The beauty industry also benefits from a crucial psychological factor: the intimacy of the purchase decision. Beauty products are deeply personal, tied to identity, self-expression, and aspiration. This creates higher consumer motivation to engage with personalisation tools and share the data required to make them work. Approximately 75% of consumers trust brands with their beauty data and preferences, a higher rate than in general fashion retail.

Making It Work

AI personalisation in fashion and lifestyle represents more than a technological upgrade; it's a fundamental restructuring of the relationship between brands and consumers. The technologies that seemed impossible a decade ago, that Warby Parker's founders patiently waited for, are now not just real but rapidly becoming table stakes.

The essential elements are clear. First, UX design must prioritise transparency and explainability. Users should understand why they're seeing specific recommendations, how their data is being used, and have meaningful control over both. The integration of xAI techniques isn't a nice-to-have; it's fundamental to building trust and ensuring adoption.

Second, privacy protections must be built into the foundation of these systems, not bolted on as an afterthought. The shift from third-party cookies to zero-party and first-party data strategies offers a path forward that respects consumer autonomy whilst enabling personalisation. Compliance with GDPR, CCPA, and emerging regulations should be viewed not as constraints but as frameworks for building sustainable customer relationships.

Third, bias mitigation must be ongoing and systematic. Diverse training datasets, bias-aware algorithms, regular fairness audits, and diverse development teams are all necessary components. The cosmetic and skincare industry's initiatives embracing diversity and inclusion across traditional protected attributes like skin colour, age, ethnicity, and gender provide models for other sectors.

Fourth, human oversight remains essential. The most successful implementations, like Stitch Fix's approach, maintain humans in the loop. AI should augment human expertise, not replace it entirely. This ensures that edge cases are handled appropriately, that cultural sensitivity is maintained, and that systems can adapt when they encounter situations outside their training data.

The monetisation race will be won by those who build trust whilst delivering results. Beauty leads because it's mastered this balance, creating experiences that consumers genuinely want whilst maintaining the guardrails necessary to use personal data responsibly. Eyewear is close behind, benefiting from focused applications and clear value propositions. Broader fashion retail has further to go, but the path forward is clear.

Looking ahead, the fusion of AI, AR, and conversational interfaces will create shopping experiences that feel less like browsing a catalogue and more like consulting with an expert who knows your taste perfectly. AI co-creation will enable consumers to develop custom shades, scents, and textures. Virtual beauty stores will let shoppers walk through aisles, try on looks, and chat with AI stylists. The potential $3-5 trillion value of agentic commerce by 2030 will reshape not just how we shop but who controls the customer relationship.

But this future only arrives if we get the trust equation right. The 79% of consumers concerned about data use, the 60% aware of AI biases in beauty tools, the 40% of executives identifying explainability as a key risk—these aren't obstacles to overcome through better marketing. They're signals that consumers are paying attention, that they have legitimate concerns, and that the brands that take those concerns seriously will be the ones still standing when the dust settles.

The mirror that knows you better than you know yourself is already here. The question is whether you can trust what it shows you, who's watching through it, and whether what you see is a reflection of possibility or merely a projection of algorithms trained on the past. Getting that right isn't just good ethics. It's the best business strategy available.


References and Sources

  1. Straits Research. (2024). “AI in Fashion Market Size, Growth, Trends & Share Report by 2033.” Retrieved from https://straitsresearch.com/report/ai-in-fashion-market
  2. Grand View Research. (2024). “Eyewear Market Size, Share & Trends.” Retrieved from https://www.grandviewresearch.com/industry-analysis/eyewear-industry
  3. Precedence Research. (2024). “AI Shopping Assistant Market Size to Hit USD 37.45 Billion by 2034.” Retrieved from https://www.precedenceresearch.com/ai-shopping-assistant-market
  4. Retail Brew. (2023). “How Stitch Fix uses AI to take personalization to the next level.” Retrieved from https://www.retailbrew.com/stories/2023/04/03/how-stitch-fix-uses-ai-to-take-personalization-to-the-next-level
  5. Stitch Fix Newsroom. (2024). “How We're Revolutionizing Personal Styling with Generative AI.” Retrieved from https://newsroom.stitchfix.com/blog/how-were-revolutionizing-personal-styling-with-generative-ai/
  6. L'Oréal Group. (2024). “Discovering ModiFace.” Retrieved from https://www.loreal.com/en/beauty-science-and-technology/beauty-tech/discovering-modiface/
  7. DigitalDefynd. (2025). “5 Ways Sephora is Using AI [Case Study].” Retrieved from https://digitaldefynd.com/IQ/sephora-using-ai-case-study/
  8. Marketing Dive. (2019). “Warby Parker eyes mobile AR with virtual try-on tool.” Retrieved from https://www.marketingdive.com/news/warby-parker-eyes-mobile-ar-with-virtual-try-on-tool/547668/
  9. Future Market Insights. (2025). “Eyewear Market Size, Demand & Growth 2025 to 2035.” Retrieved from https://www.futuremarketinsights.com/reports/eyewear-market
  10. Business of Fashion. (2024). “Smart Glasses Are Ready for a Breakthrough Year.” Retrieved from https://www.businessoffashion.com/articles/technology/the-state-of-fashion-2026-report-smart-glasses-ai-wearables/
  11. Adobe Business Blog. (2024). “Generative AI-Powered Shopping Rises with Traffic to U.S. Retail Sites.” Retrieved from https://business.adobe.com/blog/generative-ai-powered-shopping-rises-with-traffic-to-retail-sites
  12. Business of Fashion. (2024). “AI's Transformation of Online Shopping Is Just Getting Started.” Retrieved from https://www.businessoffashion.com/articles/technology/the-state-of-fashion-2026-report-agentic-generative-ai-shopping-commerce/
  13. RetailWire. (2024). “Do retailers have a recommendation bias problem?” Retrieved from https://retailwire.com/discussion/do-retailers-have-a-recommendation-bias-problem/
  14. USC Viterbi School of Engineering. (2024). “Diversifying Data to Beat Bias in AI.” Retrieved from https://viterbischool.usc.edu/news/2024/02/diversifying-data-to-beat-bias/
  15. Springer. (2023). “How artificial intelligence adopts human biases: the case of cosmetic skincare industry.” AI and Ethics. Retrieved from https://link.springer.com/article/10.1007/s43681-023-00378-2
  16. Dialzara. (2024). “CCPA vs GDPR: AI Data Privacy Comparison.” Retrieved from https://dialzara.com/blog/ccpa-vs-gdpr-ai-data-privacy-comparison
  17. IBM. (2024). “What you need to know about the CCPA draft rules on AI and automated decision-making technology.” Retrieved from https://www.ibm.com/think/news/ccpa-ai-automation-regulations
  18. RedTrack. (2025). “Zero-Party Data vs First-Party Data: A Complete Guide for 2025.” Retrieved from https://www.redtrack.io/blog/zero-party-data-vs-first-party-data/
  19. Salesforce. (2024). “What is Zero-Party Data? Definition & Examples.” Retrieved from https://www.salesforce.com/marketing/personalization/zero-party-data/
  20. IJRASET. (2024). “The Role of Explanability in AI-Driven Fashion Recommendation Model – A Review.” Retrieved from https://www.ijraset.com/research-paper/the-role-of-explanability-in-ai-driven-fashion-recommendation-model-a-review
  21. McKinsey & Company. (2024). “Building trust in AI: The role of explainability.” Retrieved from https://www.mckinsey.com/capabilities/quantumblack/our-insights/building-ai-trust-the-key-role-of-explainability
  22. Frontiers in Artificial Intelligence. (2024). “Decoding Gen Z: AI's influence on brand trust and purchasing behavior.” Retrieved from https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1323512/full
  23. McKinsey & Company. (2024). “How beauty industry players can scale gen AI in 2025.” Retrieved from https://www.mckinsey.com/industries/consumer-packaged-goods/our-insights/how-beauty-players-can-scale-gen-ai-in-2025
  24. SG Analytics. (2024). “Future of AI in Fashion Industry: AI Fashion Trends 2025.” Retrieved from https://www.sganalytics.com/blog/the-future-of-ai-in-fashion-trends-for-2025/
  25. Banuba. (2024). “AR Virtual Try-On Solution for Ecommerce.” Retrieved from https://www.banuba.com/solutions/e-commerce/virtual-try-on

Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from Roscoe's Story

In Summary: * When I woke this morning I was surprised to find news of Maduro's capture, and throughout the day I kept following news reports about that and what it might portend. Lots of reports, lots of opinions. Then I caught an NFL game late in the afternoon, and a Saturday night monster movie is coming right up.

Prayers, etc.: My daily prayers

Health Metrics: * bw= 221.58 lbs. * bp= 129/80 (71)

Exercise: * kegel pelvic floor exercise, half squats, calf raises, wall push-ups

Diet: * 07:20 – 1 peanut butter sandwich * 09:00 – red velvet cake * 12:00 – refried beans, fried rice, steak, guacamole, sour cream, nacho chips, sliced vegetables * 15:00 – 1 fresh apple

Activities, Chores, etc.: * 07:00 – bank accounts activity monitored * 07:30 – read, pray, follow news reports from various sources, surf the socials, nap * 15:30 – listening to the NFL on Westwood One, early in the 1st qtr. of the Carolina Panthers vs the Tampa Bay Buccaneers game * 16:25 – ... and Tampa Bay wins, final score 16 to 14. * 19:00 – time for the Saturday night Svengoolie.

Chess: * 15:55 – moved in all pending CC games

 
Read more...

from The happy place

It’s hard to keep a diary when I spend most of the time on the yellow sofa.

There were sparks flying in the microwave the other day when we warmed some potatoes in a glass bowl.

Through the window, it looked like a diorama of the apocalypse. The sparks looked like flashes of lightning sent out like a divine punishment to smite the potatoes.

The firewood we got from our neighbours looks like it's been eaten by termites.

Once when I was working in finance there used to be some type of sweet pastries like cinnamon rolls or muffins once a week, laid out in piles in the common areas.

When I got back once from a long weekend, early, I saw that they’d forgotten to throw away the uneaten ones; they were still lying there when I got back, dried. But now they were all covered in flies.

I remember thinking then that had they all concentrated their eating on one cinnamon roll, the others could still be eaten.

Now instead I think about that story by Edgar Allan Poe, The Masque of the Red Death or something.

Because there is something decadent about the whole scene in my mind's eye.

And I thought about that when I saw the termite-riddled firewood.

And I’m not sure whether it’s soothing or unsettling that everything got ruined this way.

 
Read more... Discuss...

from Shared Visions

Note: English below.

Na početku 2026. godine želimo vam srećnu i vrednu novu godinu.

U poslednjoj godini zadruga je inicirala ili započela saradnje sa nekoliko inicijativa u oblasti izdavaštva. Namera ove radionice je da se međusobno upoznamo, kao i sa drugim inicijativama koje bi nam se pridružile. U tom kontekstu organizujemo dve radionice za štampu – radionicu želatin štampe i radionicu sitoštampe, kako bismo ojačali našu praksu i konceptualni rad. Vode ih naš dugoročni saradnik Viktor Vejvoda i naš zadrugar Radisav Stijović, umetnici sa velikim iskustvom u umetničkoj štampi.

Kroz ove prakse, s jedne strane, posmatraćemo kako putem zadruge možemo da postavimo bolje uslove rada (na primer kroz udruživanje resursa, znanja i veština), kao i politike i prakse marketinga i distribucije.

Poslednje pitanje je ono koje obično izbegavamo: kome se obraćamo i kako? Umetničkom svetu? Ili tzv. levičarskom svetu? Lokalnim zajednicama? Kako postaviti strukturu u kojoj su ljudi učesnici, a ne samo korisnici ili kupci? Da li nam je kao zadruzi potrebna velika marketinška kampanja koja bi obuhvatila i izdavaštvo, ili se opredeljujemo za manje događaje, poput tematskih vašara i bazara, razmena, radionica ili onlajn crowdfunding platformi?

Radionica će se održati u Beogradu, od 23. do 26. januara. Prijavite se do 15. januara popunjavanjem kratkog upitnika.

Call for a Cooperative Printmaking Workshop

At the beginning of 2026, we wish you a happy and productive new year.

Over the past year, the cooperative has initiated or begun collaborations with several initiatives in the field of publishing. The intention of this workshop is to get to know each other, as well as other initiatives that may wish to join us. In this context, we are organizing two printmaking workshops – a gelatin printing workshop and a screen-printing workshop – in order to strengthen our practice and conceptual work. The workshops will be led by our long-term collaborator Viktor Vejvoda and our cooperative member Radisav Stijović, artists with extensive experience in artistic printmaking.

Through these practices, on the one hand, we will explore how, through the cooperative, we can establish better working conditions (for example, by pooling resources, knowledge, and skills), as well as develop marketing and distribution policies and practices.

The final question is one we usually avoid: who are we addressing, and how? The art world? Or the so-called leftist world? Local communities? How do we create a structure in which people are participants rather than merely users or consumers? As a cooperative, do we need a large-scale marketing campaign that also includes publishing, or do we opt for smaller events such as thematic fairs and bazaars, exchanges, workshops, or online crowdfunding platforms?

The workshop will take place in Belgrade from January 23 to 26. Please apply by January 15 by filling out a short questionnaire.

 
Read more... Discuss...

from wystswolf

how can I always be traveling but never go anywhere?

Starrider isn’t a love song in the way most people mean it. Not the kind of lyrics that scream 'hey babe, we're meant to be', and it never promises to stay. It’s a song about moving without apology—the same way we don’t apologize for breathing. Prose about stepping onto something luminous as it passes, not because it’s safe, but because it’s alive. Alive in the way we all begin life, before we somehow let ourselves be anesthetized against it.

Gates are sighted, not entered. Meaning appears, and instead of stopping, the song accelerates. That matters to me. I’ve learned that the moments that feel most true don’t ask me to arrive; they ask me to remain awake while moving.

What makes Starrider dangerous and beautiful is that it never hands responsibility to another person. It doesn’t say come to me, save me, stay with me. It says show me where you are. Orientation instead of possession. Alignment instead of capture. And that’s all I’ve ever asked. There’s a trust here—not in outcomes, but in growth. Control belongs to those who know, and knowing is earned by staying in motion long enough to be changed by it.

I hear this song now as a kind of vow. Not to leave, and not to land, but to keep choosing, eyes and heart open while time burns. To love without asking someone else to be the answer. To move forward without pretending that movement guarantees happiness. If I’m riding anything, it’s not toward escape—it’s toward becoming. And if someone rides alongside me for a while, that’s a gift. The sky is wide enough for both of us.

Danger Will Robinson

Bright Baby Blues is simply irresistible. It feels like an old coat that everyone knows me for. And those that don’t, they say, “yeah, that looks right on him.”

There’s a line in Your Bright Baby Blues that catches me every time—not because it’s wrong, but because it’s tempting.

“Baby you can free me / All in the power of your sweet tenderness.”

I’ve lived long enough to know how persuasive that idea can be: that closeness might finish a job I’ve been carrying alone, that another person’s care could smooth the last rough edge. It’s a beautiful thought, and I want it. So bad. But it’s also a heavy one. These days I hear that line less as a wish and more as a warning—not against love, but against mistaking tenderness for agency. Love can accompany the work. It can’t do it for us.

What I love about this song is how clearly it sees her—the brightness, the watchfulness, the way she stands just slightly apart from the game even while playing it. That recognition feels accurate, not blaming. But the song also tries to hand her a role she never asked for: guide, gatekeeper, savior. The hidden passage through the garden wall. Or, maybe it’s my garden. And I always kept that portal to see a life I could never have. She doesn’t strike me as someone who wants to pull another person through their life, or be the answer that lets someone finally arrive. She feels more like someone who walks alongside—present, luminous, but not responsible for anyone else’s crossing.

What these songs keep teaching me—Starrider, The Pretender, The Fuse—is that meaning doesn’t come from being rescued or from stopping motion altogether. It comes from choosing how to move while time is burning. I want to walk openly. If someone walks with me, that’s a gift—not a solution. And if they don’t, the work is still mine. That feels less romantic, but honest. And honesty, I’m learning, is its own kind of tenderness.

────────────────────────

STARRIDER

“I stole a ride on a passing star” Impulse over plan. Choice without guarantee.

“Not knowing where I was going / How near or how far” Curiosity without destination. Motion without map.

“Through years of light, lands of future and past” Time collapses; experience outweighs chronology.

“Until the heavenly gates / Were sighted at last” Recognition, not arrival. Seeing meaning without stopping.

“Starrider, rider, rider / Take me to the stars” Asking the universe, not a person, to carry her onward.

“Starrider, rider, rider / Show me where you are” Orientation, not possession. Presence without demand.

“Northern lights flashed by / And then they were gone” Beauty allowed to be transient.

“And as old stars would die / So the new ones were born” Change without tragedy. Renewal as natural law.

“And ever on I sailed / Celestial ways” Meaning increases motion, not rest.

“And in the light of my years / Shone the rest of my days” Experience becomes illumination.

“Speed increasing” Trust produces acceleration.

“All control is in the hands of those who know” Authority through understanding, not force.

“Will they help us grow” Growth as the sole test.

“To one day be starriders” Identity is achieved through motion, not claimed.

────────────────────────

THE PRETENDER

“I’m going to rent myself a house / In the shade of the freeway” Life built near movement, not inside it.

“I’m going to pack my lunch…go to work each day” Routine as consent, not coercion.

“I’ll go on home and lay my body down” The body treated as equipment.

“I’ll get up and do it again” Repetition replaces expansion.

“Amen” Sanctifying compromise.

“I want to know what became of the changes / We waited for love to bring” Remembered belief that love would restructure life.

“Were they only the fitful dreams…” Questioning whether hope was ever real.

“I’ve been aware of the time going by” Mortality acknowledged quietly.

“You’ll get up and do it again” The trap universalized.

“Caught between the longing for love / And the struggle for the legal tender” The thesis: desire vs survival.

“Where the sirens sing and the church bells ring” The sacred and seductive coexist unresolved.

“Veterans dream of the fight…” Glory reduced to memory.

“Children wait for the ice cream vendor” Even joy requires patience.

“Out into the cool of the evening strolls the Pretender” Adaptation, not villainy.

“He knows that all his hopes and dreams begin and end there” Dreams fenced, not destroyed.

“Laughter of the lovers…” Others still risk everything.

“Ships bearing their dreams sail out of sight” Dreams depart rather than fail.

“I’m going to find myself a girl…” Love as repair strategy.

“Paint-by-number dreams” No blank canvas left.

“Dark glasses on” Avoidance aestheticized.

“We’ll make love…do it again” Even intimacy absorbed into routine.

“I’m going to be a happy idiot” Happiness as survival tactic.

“Ads take aim…” Desire monetized.

“True love could have been a contender” Love didn’t fail—it lost priority.

“Say a prayer for the Pretender” Witness requested, not rescue.

“Only to surrender” Gradual compliance, not collapse.

────────────────────────

THE FUSE

“It’s coming from so far away” Intuition before clarity.

“Music or the wind through an open door” Meaning vs accident; openness already exists.

“Fire high in the empty sky” Contact point between inner and outer worlds.

“Long distance loneliness” Civilizational solitude, not personal.

“Years spent lost in the mystery fall away” Confusion dissolves.

“Only the sound of the drum” Life reduced to rhythm.

“It speaks to the heart of me” Recognition, not instruction.

“You are what you choose to be” Identity as responsibility.

“It’s whatever it is you see that life will become” Attention shapes destiny.

“You have nothing to lose” Attachment exposed as illusion.

“Time runs like a fuse” Finite, burning, irreversible.

“The earth is turning” Inevitability without panic.

“Fear of living for nothing” Meaninglessness as true enemy.

“Alive in eternity that nothing can kill” Continuity beyond circumstance.

“Are there really people starving still?” Moral awakening.

“Beyond the walls of Babylon” Comfort indicts itself.

“I’m going to be around” Presence as vow.

“When towers are tumbling down” Collapse assumed, not feared.

“Tune my spirit to the gentle sound” Attention as discipline.

“Waters lapping on higher ground” Survival with grace.

“Children laughing” Continuation, not permanence.

────────────────────────

YOUR BRIGHT BABY BLUES

“Sitting down by the highway” Paused while life accelerates.

“Everybody’s going somewhere” Urgency mistaken for purpose.

“Lives are justified” Existence treated as something to earn.

“Pray to God…let me slide” Request for grace.

“Can’t get away from me” The self as inescapable terrain.

“A day away from where I want to be” Permanent near-arrival.

“Running home like a river to the sea” Return replaces escape.

“You can free me…sweet tenderness” Agency handed outward (dangerous).

“Bright baby blues” Brightness as distance.

“Don’t like to lose” Risk-averse longing.

“Watching from the sidelines” Self-observation over inhabitation.

“Turn down your radio” Noise replacing presence.

“Love stirring in my soul” Connection is real but intermittent.

“Feeling of peace hard to come by” Peace is rare.

“Thought I was flying” Temporary transcendence.

“Standing on my knees” Illusion breaks.

“Someone to help me please” Naked need.

“Hole in your garden wall” Secret passage, not door.

“Pull me through” Asking another to perform the crossing.


from Mitchell Report

A moody night sky with a bright moon partially obscured by thick, textured clouds, with silhouettes of tree tops visible at the bottom.

I just had to snap a photo when I saw this scene. A full moon peeking out from behind those clouds. It was such a dramatic, moody sight with the silhouettes of trees just barely visible at the bottom. Isn't it amazing how nature puts on a show like this? This is a werewolf's nightmare 😉.

#photos #landscape


from Language & Literacy

The learning ecosystem

I haven’t written many posts in 2025; here are the measly few I’ve managed to squeak out:

While my bandwidth to peruse research has diminished this year (work has been busy, and I like spending time with my children), I have still encountered a fair number of compelling studies. In keeping with the tradition begun in 2023, and building on last year’s review, I am endeavoring to round up the research that has crossed my radar over the last 12 months.

This year presents a difficult juncture for research. Political aggression against academic institutions, the immigrants who power their PhD programs, and the federal contracts essential to their survival has disrupted research. Despite this, strong research continues to be published. Because research is a slow-moving endeavor, I suspect the full effects of these disruptions will manifest increasingly in future roundups; for now, the good work persists.

The research landscape of 2025 highlights a continued shift toward experience-dependent plasticity. This view treats the human mind as a dynamic ecosystem shaped by biological rhythms, cultural “software,” and technological catalysts. Learning is no longer seen as a linear accumulation of skills, but as a sophisticated orchestration of “statistical” internal models and external social, cultural, and technological attunements.

Longtime readers will recognize this “ecosystem” view from my other blog on Schools as Ecosystems. It is validating to see the field increasingly adopting this ecological lens—viewing the learner not as an isolated machine, but as an organism deeply embedded in a biological and cultural context.

Our “big buckets” for this year have ended up mirroring the 2024 roundup, which means, methinks, that we have settled upon a perennial organizational structure:

  • The Science of Reading and Writing
  • Content Knowledge as an Anchor to Literacy
  • Studies on Language Development
  • Multilinguals and Multilingualism
  • Rhythm, Attention, and Memory
  • School, Social-Emotional, and Contextual Effects
  • The Frontier of Artificial Intelligence and Neural Modeling

Let’s jump in!

I. The Science of Reading and Writing

The Critical Role of Morphology and Vocabulary

Readers of my 2023 roundup will recall that morphology was a major theme that year, and it remains central in 2025. Morphology refers to the smallest units of meaning in a word, and is strongly connected to the origins and evolution of words (etymology), and to vocabulary development and reading comprehension in general. It also serves as a crucial link to spelling, given the irregularities in a language such as English that cannot be resolved via phonological decoding alone.

In any given orthography, it is indeed the combination of phonology (the sounds) and morphology that enable a finite number of phonemes or symbols to be recombined into a potentially infinite number of unique words.

Early writing is a “canary in the coal mine” for future reading success. A study of 243 preschoolers found that initial levels and growth in name writing and letter writing significantly predicted later word reading and passage comprehension. This association held true for both monolingual and bilingual children identified as at-risk for reading difficulties, indicating that writing development is a universal literacy milestone.

A massive analysis of 1,116 children demonstrated that word reading and spelling are effectively a single latent trait (r = 0.96). However, spoken vocabulary knowledge acts as a bridge, allowing readers to use known word meanings to compensate for “fuzzy” or imprecise knowledge of letter-sound rules.

The ability to form and retrieve letter sequences (orthographic mapping) is a consistent driver across both typical and dyslexic populations:

Longitudinal data showed that from Grade 3 to Grade 5, morphological awareness (manipulating prefixes, suffixes, roots) overtakes phonological awareness as the primary driver of reading comprehension and the mastery of complex, multi-morphemic words.

Explicit and Implicit Instruction

All this leads to interesting findings this year around explicit instruction (EI) vs. statistical and implicit learning. We often pit these two against each other, but 2025 gave us some direction towards a more synergistic understanding.

Explicit alphabet instruction is critically important, regardless of modality and language status.

But as word-level reading becomes increasingly automatized, it moves to more “top-down, meaning-driven processes” related to language.

This shift is mirrored in the brain's “salience network,” which a large-scale meta-analysis identifies as a shared foundation for both math and reading. While children rely on this network broadly for learning, adults engage it primarily for challenging, unmastered tasks, highlighting the importance of targeting attention and effort during the formative years.

  • “The implication is that children and adults engage cognitive control networks for number-arithmetic tasks that are not yet automatized. . . . The salience network might contribute to the common finding that learning difficulties in mathematics and reading are comorbid. . . . LD interventions should incorporate features that support the functions of cognitive control networks, including external factors that motivate attentional focus... and that highlight key information.” (Shared brain network acts as a foundation for both math and reading, Nature Communications)

For children with developmental language disorder (DLD), explicit instruction in meaning (“semantics”) is most important.

Mechanisms of retention are equally critical; for children with DLD, vocabulary retention is specifically driven by the frequency of successful retrieval across multiple sessions, rather than just the intensity of exposure.

In an orthography such as Chinese, gaining automatization with the “sub-lexical mappings between orthography (form), phonology (sound), and semantics (meaning)” can be an even greater challenge for students with dyslexia. An RCT found that explicit instruction was necessary for abstracting rules (form-sound mappings), but implicit exposure was also key for optimizing speed and efficiency.

In other words, while explicit instruction is critical, it must be accompanied by sufficient volume for application and practice.

  • “These results demonstrate that efficient processing of complex syntactic structures depends on both good statistical learning skills and exposure to a large amount of print so that these skills have the opportunity to extract the relevant statistical relationships in the language” (Statistical learning ability influences adults’ reading of complex sentences, Canadian Journal of Experimental Psychology)

(I wrote about this need for balancing explicit instruction and statistical learning in my post, LLMs, Statistical Learning, and Explicit Teaching)

When it comes to learning the more precise and challenging statistics of orthography, even skilled adults are “satisficers,” choosing the simplest or easiest pronunciations rather than the statistically optimal ones predicted by vocabulary data.

  • “Despite years of exposure, English readers produce /k/ pronunciations for the letter ⟨c⟩ before ⟨e⟩ and ⟨i⟩ over 10% of the time, even though /k/ pronunciations of ⟨c⟩ virtually never occur in this context in English words.” (Statistical learning in spelling and reading, Trends in Cognitive Sciences)

As literacy learning shifts more towards that language-based side of things, “usage-based” learning becomes even more important, as with students learning a new language.

  • “The bulk of language acquisition is implicit learning from usage. Most knowledge is tacit knowledge; most learning is implicit; the vast majority of our cognitive processing is unconscious. . . . Explicit instruction can be ill-timed and out of synchrony with development... it can be confusing; it can be easily forgotten; it can be ignored.” (Usage-based approaches to SLA, Theories in Second Language Acquisition: An Introduction)

After all, learning a new language (oracy, vocabulary, comprehension) is not only about reading or writing silently, but also about communication, which is social in nature. Balancing when new vocabulary is introduced and used therefore becomes a consideration.

Assessing Literacy

Gaining a deeper understanding of students’ literacy profiles in order to tailor and target instruction to their needs is important. In the past, teachers relied on “miscue analysis” and “running records” to gain this understanding, but such analysis is about as useful as flipping a coin. Instead, a study suggests that error analysis using the valid and reliable CBM measure of Oral Reading Fluency (ORF) can provide key information on whether errors are phonemic, orthographic, morphemic, or high frequency in nature.

  • “Analyzing student errors and the features of the words wherein these errors occur allows for a more tailored understanding of the area in which students are struggling and provides guidance on how to adjust instruction accordingly. . . The DBI [data-based instruction] process is iterative, and the ongoing analysis of student assessment data to inform the intensification and individualization of an intervention is essential to this process.” (What’s in a Word? Analyzing Students’ Oral Reading Fluency to Inform Instructional Decision-Making, Intervention in School and Clinic)

Automated oral reading fluency assessments often exhibit bias against English learners due to speech-to-text inaccuracies, which can be mitigated by including prosody as a core sub-construct.

Furthermore, it is important to draw upon multiple sources of data to fully understand any student’s unique needs.

II. Content Knowledge as an Anchor to Literacy

Just as we moved from word-level phonological decoding and orthographic mapping toward the importance of semantic and language-based learning, we must connect the learning of any school language not only to social communication, but also to the conceptual and topical knowledge entrenched in academic disciplines.

And that conceptual and topical knowledge – so critical for critical thinking – is founded upon facts.

Curriculum programs are typically designed around “thematic units to build content schemas.” Yet categorization may be a better means.

  • “Categories are rule-based (e.g. something is or is not in a category) and hierarchical (e.g. superordinate categories/subcategories). In this respect, they provided an organizational framework that is different from traditional theme-based instructional approach, which is reliant on associative relationships, and situations. . . . Topics which build concepts through categorization provide children with a more facilitative way to process, store and retrieve information, while promoting inferences that extend knowledge beyond past and current experiences.” (Knowledge-Building Through Categorization: Boosting Children’s Vocabulary and Content Knowledge in a Shared Book Reading Program, Early Education and Development)

OK, not part of 2025 research, but a great connection on this: back in 2023, Susan Pimentel, David Liben, and Meredith Liben similarly advocated for a shift from broad thematic units toward building knowledge through specific topics, which they argued could more effectively support the development of content schemas.

Relatedly, while general prior knowledge facilitates basic comprehension, topic-specific knowledge is the primary driver for building the situational models required for complex knowledge transfer. This effect is mediated by the learner’s initial mastery of a base text, as the ability to apply information to new contexts depends entirely on the foundational transfer skills established during that first encounter.

III. Studies on Language Development

Talker Variability

Building off our previous section on the importance of content knowledge, a multilingual child’s ability to master complex science and social studies vocabulary is driven largely by a core set of foundational language skills. A student’s foundational language factor (vocab/syntax) explained 58% of the variance in their ability to produce definitions for science concepts.

  • “Learning content vocabulary is significantly related to student language skills in Spanish and in English. . . We find that content is learned when the language is learned. As such, all teachers are, first and foremost, language teachers of the subject matter that they present. . . This finding suggests that developing student language skills early facilitates the learning of curricular vocabulary words later.” (Predicting Science and Social Studies Vocabulary Learning in Spanish–English Bilingual Children, Language, Speech, and Hearing Services in Schools)

While we often think “more speakers = better,” it turns out variability helps children with strong language skills (1.95x more likely to learn), but it can actually “thwart the discovery” of patterns for children with weaker language skills.

Yet some variability remains key, including for students with developmental language disorder (DLD).

For adults learning new words in their native language, there was no evidence that either talker variability or scaffolding the talker presentation assisted learning. Instead, success was almost entirely predicted by the learner's phonological working memory and general language ability.

All that said, in the context of second language learning, talker variability remains a vital tool—provided the variability maintains high intelligibility and stays within the same dialect or language variety. This principle resonates with my own experience learning Spanish in Peru. I spent roughly equivalent amounts of time on the coast, in the Andes, and in the jungle; yet, just as I felt I was gaining fluency, moving to a new region and encountering an entirely different variety of the language made it feel as though I were learning it all over again.

Further considerations for a practice structure with “variability”: when learning new L2 vocabulary, interleaving different categories produced superior outcomes compared to studying them in blocks, likely due to a spacing effect that forces the brain to constantly retrieve and contrast new information.

  • “Mixing different language categories during practice (interleaving) rather than studying them in separate blocks produced superior learning outcomes . . . Our findings indicate that the interleaving advantage observed in other domains extends to dual language learning.” (The effects of interleaving and rest on L2 vocabulary learning, Second Language Research)

Quality vs Quantity

A central question in language research is whether children primarily need a high volume of speech (quantity) or speech that is linguistically and conceptually rich (quality).

A meta-analysis found that in the home, quantity and quality of speech are highly correlated (r=0.88); “parents who talk more naturally tend to use a more diverse and complex vocabulary.”

Conversely, in school, only quality moves the needle. This meta-analysis examined teacher language practices from preschool to third grade and found a statistically significant association between teachers’ language quality—defined as interactive scaffolding and conceptual challenge—and children’s development, but no significant association with the quantity of teacher talk.

The finding that quality of teacher talk trumps quantity reinforces what I have previously explored in Research Highlight 2 and Literacy Is Not Just for ELA. We know that explicit use of academic vocabulary and decontextualized language is what drives growth, not just a whole bunch of words.

Yet despite the importance of quality talk in classrooms, large-scale recordings of 97 preschool classrooms revealed a dearth of linguistically challenging interactions.

  • Researchers found that 40% of instructional time was devoted to alphabetics, “with a stark paucity of opportunities for children to acquire the language and content knowledge essential for later learning.”

  • In contrast, time spent on vocabulary and science instruction supported the most complex and pedagogical language, yet these activities combined received less than half the time allotted to simple letter drills.

  • There was a significant misalignment between beliefs and practice: while 96% of teachers felt confident in their ability to foster rich discussions, automated recordings showed they rarely used wh-questions or extended conversational turns (Preschool Teachers’ Child-Directed Talk, Early Education and Development)

From Womb to Weave: Human Language Development

In 2025, language research has deepened our understanding of the biological and evolutionary roots of communication. Language is not merely a set of learned properties and rules but a form of social, statistical, and biological attunement.

Human language, influenced by the sounds of the words of the adults around us, begins to develop while we are in the womb, and we begin to distinguish between our home languages and other languages.

Even mere exposure to the sounds of a tonal language like Mandarin creates lasting structural imprints in the brain's white matter that persist even if the language is no longer used.

Once out in the world, infant attunement to their mother’s heartbeat during face-to-face interaction correlates with word segmentation ability.

  • “When mothers and infants had more synchronized heartbeats, the infants were better at identifying individual words within a stream of speech. . . . This biological synchrony correlated with maternal sensitivity to an infant's mental states, suggesting that an attuned emotional environment literally sets the rhythm for learning.” (Individual Differences in Infants' Speech Segmentation Performance, Infancy)

A nine-year longitudinal study furthermore found that index-finger pointing at age one is a specific developmental predictor of metaphor comprehension at age nine. This correlation reinforces the “embodied cognition” view—the idea that physical grounding in infancy serves as a required scaffold for abstract thought later in life.

You know how adults talk all silly as they goo goo and gah gah at babies? That baby talk seems to be an innate scaffolding technique that accelerates infant language development for all kids, including those with autism.

Such “parentese,” or “infant-directed speech,” is something that sets us apart from apes.

  • “The rate that children heard infant-directed communication was 69 times as high as what Dr. Fryns observed among chimpanzees, and 399 times as high as what Dr. Wegdell observed among bonobos.” (Did Baby Talk Give Rise to Language?, NYTimes)

While macaque monkeys share similar visual-encoding machinery to us, they do not form “consensus color categories,” suggesting that language provides the needed cognitive and cultural framework to achieve shared conceptual agreement.

  • “One animal showed evidence for a private color category, demonstrating that monkeys have the capacity to form color categories even if they do not form consensus color categories. . . Innate similarities between monkeys and humans are not sufficient to produce consensus color categories . . . This implies that human color categories are not 'hard-wired' by birth but depend on language and cultural coordination to achieve shared agreement.” (The origin of color categories, Psychological and Cognitive Sciences)

One interesting aspect of human gender differences is that girls develop more advanced language abilities than boys at an earlier age.

Across typologically diverse languages and cultures, children follow a universal pattern of transitioning from salient free negators (e.g. “No,” “Not”) to less salient bound negator morphemes (e.g. “-n’t”).

Furthermore, a phonemic analysis of animal onomatopoeia across 21 languages reveals that humans perceive animal sounds in ways that are similar across cultures. While cultural filters vary the spelling, the underlying sound interpretation transcends linguistic differences.

Phonemes can be viewed as “cognitive tools” that support and extend human thinking and ability. These basic sound units are predicated on physical and biological constraints but vary across cultural lineages to facilitate the efficient transmission of information.

  • “Phonemes—the basic sound units of language—function as cognitive tools that shape and extend human thinking.” (The Phoneme as a Cognitive Tool, Topics in Cognitive Science)

For adults, familiar prosody is also a primary gateway to learning a new language.

And speaking of adults and parents: having more books in the house and parents who are knowledgeable about children’s stories independently helps a child's reading skills, even after accounting for the parents' own natural reading abilities.

  • “Children’s reading in Grade 3 was predicted by mothers’ engagement in reading activities and by literacy resources at home, even after controlling for the genetic proxy of parental reading abilities. . . .The mothers of children who struggle tend to engage in more reading activities. . . Fathers' reported frequency of reading activities was not predictive.” (The intergenerational impact of mothers and fathers on children's word reading development, Journal of Child Psychology and Psychiatry)

Human and Animal Evolution

In 2025, the century-long view of Darwinian gradualism—the idea that species develop through slow, imperceptible increments—was further challenged by a new mathematical framework. This research reveals that living systems often evolve in sudden, explosive surges rather than a steady marathon of change. These “phantom bursts” of evolution suggest that spiky growth patterns are a general characteristic of any branching system of inherited modifications, whether in proteins, languages, or complex organisms. (The Sudden Surges That Forge Evolutionary Trees, Quanta Magazine)

What I find especially interesting about this idea of “spiky bursts” of growth is that in last year’s research roundup, we reviewed a study of 292 children which found that those who heard speech in intense, concentrated bursts had significantly larger vocabularies than those exposed to a more consistent, steady stream of language.

I also have a 2025 book recommendation, if you are interested in the history of language evolution: Proto: How One Ancient Language Went Global by Laura Spinney. There’s an interesting passage in it that I'm summarizing here:

In the Caucasus, dubbed “the mountain of tongues” by a tenth-century Arab geographer, linguists describe a phenomenon called vertical bilingualism: people in higher villages know the languages of those living lower down, but the reverse is not true. Why would higher-altitude communities be more fluent in the languages of those residing at lower elevations? Perhaps because mountain dwellers had to travel down to lower villages for trade and resources, and so learned the languages of those below, whereas people in lower villages had less reason to travel to the harder-to-reach, more isolated higher-altitude communities, and were therefore less likely to learn those languages. This created a vertical flow of linguistic knowledge, mirroring the flow of physical movement.

IV. Multilinguals and Multilingualism

Just as our biological evolution has shaped our capacity for language, our environment continues to shape how those languages manifest. In 2025, the research landscape for multilingualism shifted toward an “experience-dependent plasticity” framework, viewing multiple languages not as competing systems, but as a dynamic, integrated repertoire.

Longitudinal data tracking Spanish-English bilinguals between ages 4 and 12 revealed that language dominance is fluid, not fixed. Researchers observed a rapid switch in dominance characterized by a steady decline in Spanish-only interactions as children aged. Crucially, this developmental shift is not merely a process of “loss” but one of complexity transfer.

  • “The narrative complexity of a child’s Spanish (L1) stories significantly predicted the complexity of their English (L2) narratives one year later. . . . Bilinguals who produce nativelike L2 vowels are also able to maintain native L1 productions, suggesting that an increased L2 proficiency does not inevitably entail a decline in L1 proficiency.” (Factor structure and longitudinal changes in bilinguals’ oral narratives production, Applied Psycholinguistics)

This finding is complemented by validation of the Simple View of Reading (SVR) in Spanish Heritage learners, where linguistic comprehension (morphosyntax and vocabulary) was the primary predictor of reading success, echoing the need for strong L1 foundations.

  • “Across both types of orthographies, decoding and linguistic comprehension together explain approximately 60% of the variance in RC. Variations between the so-named phonologically transparent and opaque orthographies highlight differences in the contributions of decoding and comprehension to RC and how these factors evolve during children's literacy development. The simplified nature of SVR thus provides a principled foundation for exploring these important questions.” (Can the Simple View of Reading Inform the Study of Reading Comprehension in Young Spanish Heritage Language Learners?, Reading Research Quarterly)

Furthermore, the structural relationship between languages matters. New research indicates that high structural and lexical overlap between a child's languages—a concept known as small linguistic distance—reduces the amount of exposure required to reach heritage language proficiency.

I have explored this concept of “linguistic distance” in relation to diglossia and African American English, noting the greater challenge introduced when written forms diverge significantly from a student's spoken vernacular. This new research affirms that finding: just as greater distance requires more exposure, smaller distance facilitates quicker proficiency.

We often hear about the “bilingual advantage” in executive function, but 2025 research added necessary nuance regarding code-switching. The link between cross-speaker code-switching and cognitive control is heavily moderated by overall language ability. High frequency of switching was associated with better inhibitory control only for children with strong language skills; for those with weaker skills, switching often reflected lapses in production rather than strategic control.

  • “Higher frequency of cross-speaker code-switches was associated with better inhibitory control only for children with higher levels of language ability . . . For children with weaker omnibus language skills, cross-speaker switches may reflect difficulties generating a message (in either language) and/or difficulties tracking language use. . . The same switching behavior may be rooted in different mechanisms in children with different levels of language ability.” (The influence of cross-speaker code-switching and language ability on inhibitory control in bilingual children, Bilingualism: Language and Cognition)

Perhaps the most striking finding this year comes from the other end of the lifespan. New evidence from 27 European countries has redefined multilingualism as a biological asset that actively slows the aging process. In a study of over 86,000 participants, monolingualism was associated with more than double the risk of accelerated biological aging compared to multilingual peers.

V. Rhythm, Attention, and Memory

We are moving away from viewing music and speech as isolated auditory signals and toward a model of social and biological “attunement.” The latest studies suggest that rhythmic synchrony is a fundamental gateway for human connection and cognitive growth.

This attunement extends to the very mechanics of how the brain processes sound. Humans instantaneously distinguish talking from singing based on “amplitude modulation,” or the rate at which volume changes. While speech modulations reflect human vocal comfort at 4–5 hertz, music is slower and more regular at 1–2 hertz, potentially evolving specifically to facilitate group synchrony and bonding.

  • “Audio clips with slower amplitude-modulation rates and more regular rhythms were more likely to be judged as music, and the opposite pattern applied for speech. . . . Our brain associates slower, more regular changes in amplitude with music (1–2 hertz) and faster, irregular changes with speech (4–5 hertz).” (How Your Brain Tells Speech and Music Apart, Scientific American)

The foundations of language development may actually lie in biological coregulation. When mothers and 9-month-old infants have synchronized heartbeats (measured via Respiratory Sinus Arrhythmia), the infants demonstrate advanced word segmentation skills. This suggests that an attuned emotional environment literally sets the rhythm for learning. (Note: we covered this study in an earlier section, but it is worth repeating here.)

  • “The higher the cross recurrence rate (RR) of mother's and infant's RSA, the longer infants look... which we interpret as advanced word segmentation. . . . When mothers and infants had more synchronized heartbeats, the infants were better at identifying individual words within a stream of speech.” (Individual Differences in Infants' Speech Segmentation Performance, Infancy)

Readers may recall a similar theme from the 2024 roundup, where we discussed research indicating that “synchrony is learning”—showing that brain-to-brain synchrony predicts engagement and learning. This new research on heartbeat and blink synchrony takes that concept even deeper, into the physiological rhythms of our bodies.

One of the year's most fascinating discoveries is that our bodies synchronize with music in ways we never realized: spontaneous eye blinks align with musical beats. This “blink synchronization” occurs without instruction and improves the detection of subtle differences in pitch, indicating that motor alignment helps optimize attention and auditory perception.

However, just as synchrony can boost learning, “dys-synchrony” can derail it. It isn't just peer distraction that disrupts the rhythm of learning; it is the acoustic environment itself. New data reveals that background noise (the classic “cocktail party” problem) negatively impacts all levels of auditory processing, from reaction time to memory recall. Crucially, this burden is heavier for non-native speakers, whose brains must work double-time to filter signal from noise.

(A reminder that we've covered the relationship between acoustics and learning in great depth previously.)

Research on “attention contagion” further found that students implicitly pick up the inattentive states of their peers. In virtual learning environments, sitting “next to” (virtually) a distracted classmate significantly increased task-unrelated thoughts, suggesting that focus is a social phenomenon.

Finally, as we rely more on digital tools, we face new trade-offs in how we manage memory. When external aids (like a digital list) are made slower or more “annoying” to access, children spontaneously choose to use their own memory more. It appears that cognitive effort is a calculated decision based on the efficiency of the environment.

VI. School, Social-Emotional, and Contextual Effects

We are increasingly moving away from studying the brain in isolation, focusing instead on how the classroom functions as a biological ecosystem.

Researchers have proposed a new framework called “Classroom Carrying Capacity,” which conceptualizes the teacher as the leader of a sustainable biological ecosystem. A teacher’s own self-efficacy and burnout levels are primary determinants of this capacity; high-burnout environments often see a sharp decline in the quality of instructional support provided to students.

  • “The quality of the classroom environment is determined, in part, by interactions between features of individual students, teachers, and the classroom, which influence one another reciprocally over time.” (Classrooms are complex host environments: An integrative theoretical measurement model of the pre-k to grade 3 classroom ecology)

While we often rush to digitize these learning environments, 2025 research suggests we should tap the brakes. A comparative study on reading mediums found that while digital reading enhances processing speed, it often compromises deep comprehension, retention, and “cognitive comfort.” The researchers suggest that the physical landscape of a book provides “spatial cues” that anchor memory—cues that vanish on a scrolling screen.

This ecosystem is further influenced by external events. In Florida, a study demonstrated that increased exposure to immigration enforcement actions led to a measurable decline in test scores for both U.S.-born and foreign-born Spanish-speaking students. The psychological burden disrupts the “cognitive bandwidth” necessary for academic performance.

This “external weather” of politics and policy can cast a shadow that lasts a lifetime. A sobering study found that Black adults who attended segregated schools decades ago are now showing significantly higher risks of dementia. The chronic inflammation caused by the stress of discrimination appears to leave a biological scar that persists over the course of a life span.

However, educational attainment itself appears to be a potent buffer. New research indicates that staying in school substantially reduces the risk of almost all studied mental disorders, suggesting that the school environment provides a critical scaffolding for resilience.

Similarly, family structure plays a pivotal role. Using full population data from Denmark, researchers found that parental separation resulted in an immediate decline in reading scores (3% to 4% of a standard deviation), an effect that grew to 6.5% four years later. Notably, this decline was driven primarily by students in the middle of the skill distribution, who are often overlooked by policy.

However, the social composition of the classroom can also be protective. Being exposed to a higher proportion of female peers was found to improve mental health for both boys and girls.

Finally, for adolescents, longitudinal neuroimaging and behavioral interviews revealed that the effort of making deeper meaning, through a cognitive process called transcendent thinking, literally sculpts the physical brain: it counteracts age-related thinning of the cerebral cortex and acts as a biological “heat shield” for teens exposed to community violence.

  • “Transcendent thinking may be to the adolescent mind and brain what exercise is to the body: most people can exercise, but only those who do will reap the benefits.” (Transcendent Thinking May Boost Teen Brains, Scientific American)

VII. The Frontier of Artificial Intelligence and Neural Modeling

The final frontier of 2025 research reveals that Artificial Intelligence is becoming a powerful mirror for human cognition. It is no longer just a tool for doing work, but a “model organism” for understanding how we think.

Groundbreaking neuroscience research is using Large Language Models (LLMs) to unlock the “black box” of the brain. Research led by Andrea de Varda demonstrated that multilingual neural networks share a “shared meaning space” with the human brain. A model trained to map brain activity in English and Tamil can accurately predict brain responses to a completely new language, like Italian, in a zero-shot transfer. This suggests that despite the vast diversity of 7,000 human languages, our brains and our most advanced models are all orbiting the same fundamental laws of meaning.

This concept of a shared meaning space that is essentially statistical in nature provides fascinating confirmation for the hypothesis I explored in my series on AI and Language—specifically the idea that “the meaning and experiences of our world are more deeply entwined with the form and structure of our language than we previously imagined.” (See The Algebra of Language).

On a practical level, AI is proving to be a potent equalizer. An intervention in the UAE found that ChatGPT-based support significantly improved the coherence and writing scores of children with Arabic dysgraphia compared to standard instruction. Furthermore, medical students using AI-personalized pathways scored significantly higher on standardized tests, and their rates of classroom participation doubled.

However, access to AI tools is not enough. The “active ingredient” determining whether a student succeeds with AI isn't the technology, but their own belief in their ability to use it. Self-efficacy was found to be the single strongest predictor of achievement in AI-based settings, mediating the technology's effectiveness.

This self-efficacy finding provides the other half of the equation to the “barbell” theory of AI cognitive enhancement. We cannot simply hand the heavy lifting of cognition over to AI; the “weights” must still be lifted by the student to build the belief in their own capability that is required to effectively guide the technology.

Perhaps the most “sci-fi” finding of the year involves our ocean's giants. Project CETI has successfully used LLMs to decode the codas of sperm whales, discovering that whale communication contains vowels and diphthongs used in ways strikingly similar to human speech. These whales possess “culturally defined clans” with distinct dialects, suggesting that culture is a primary driver of communicative complexity across species.

(Fans of previous roundups will appreciate the continuity here: in 2023, we highlighted Gašper Beguš's work on ANNs and whale phonology.)

Researchers have even identified a “meta-law” where the statistical patterns in the equations of physics mirror the mathematical distributions found in human language (Zipf's Law). This suggests that the same computational principles of efficiency govern both our communication and the physical laws of the universe.

  • Understanding these patterns “may shed light on Nature’s modus operandi or reveal recurrent patterns in physicists’ attempts to formalise the laws of Nature.” The patterns may arise from “communication optimisation,” where operators are defined “to describe common ideas as succinctly as possible.” These regularities could “provide crucial input for symbolic regression, potentially augmenting language models to generate symbolic models for physical phenomena.” (Statistical Patterns in the Equations of Physics and the Emergence of a Meta-Law of Nature, arXiv)
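For readers unfamiliar with Zipf's law, the distribution at the heart of this result is easy to see in miniature: rank the words of a text by frequency, and frequency falls off roughly as one over rank. Here is a minimal sketch on a toy corpus (an illustration of the law itself, not the paper's method; the regularity only emerges clearly on large real-world corpora):

```python
from collections import Counter

def rank_frequency(text):
    """Return (word, count) pairs ranked by frequency, most frequent first."""
    counts = Counter(text.lower().split())
    return counts.most_common()

# Toy corpus for illustration only.
corpus = (
    "the quick brown fox jumps over the lazy dog "
    "the fox chased the dog and the dog chased the fox "
    "a quick dog is a happy dog"
)

ranked = rank_frequency(corpus)
for rank, (word, freq) in enumerate(ranked[:4], start=1):
    # Under Zipf's law, rank * frequency stays roughly constant
    # as rank increases (approximately so, on real corpora).
    print(rank, word, freq, rank * freq)
```

On a corpus this small the product bounces around, but on book-length text the rank-frequency curve tracks the 1/rank line remarkably well, which is exactly the kind of statistical signature the physics paper reports finding in equations.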

This finding of a universal statistical law of efficiency brings us back to Stephen Wolfram's concept of “computational irreducibility,” which I touched on in the AI barbell post. While language and physics may share efficient patterns (making them partially reducible), the act of learning—of internalizing these patterns into a human mind—remains an irreducible process that cannot be fully automated away.

Closing Thoughts

If there is a single thread tying the research of 2025 together, it is connectivity. Whether it is the synchronization of a mother’s heartbeat with her infant, the shared “meaning space” between an AI model and a human brain, or the “vertical flow” of language in ancient mountain villages, the evidence confirms that we are not isolated cognitive units. We are ecologically situated, rhythmically attuned, and socially dependent learners.

Here’s to another year of learning, connecting, and—hopefully—a little more positive synchronization and interactive attunement with the world around us.

#language #literacy #research #reading #writing #multilingualism #assessment #brain #cognition #academics #curriculum #wrapup

 