Want to join in? Respond to our weekly writing prompts, open to everyone.
from
💚
Artemis II (pt. III)
The lucky way out
For this fortune of air
Exploring the symphony- of noise
In thoughts to care in time
Special about
In six shiny windows
The Mercury of days
As the messenger
Rod to reunion
If preterm but at speed
High-altitude poem
For crews to enjoy-
And at most- remembering her
Our ship of plans
Linking our phone
To the day of ideas
More than mercy
The victory sings
Of payloads of fortune
And just enough energy- to return
And researched to the skies
A thing about wear
To spot on the payout
In electrical force
And everything works- just enough
Staying the course
Of rockets the same
And this-
Our day beyond
In a course of will
And three repeats of the tour
Sincerely that star
That victory eye
For thoughts of made whole
In stunningly deep
For the Moon- and back.
from
Kroeber
Zizek putting on a balaclava at the end of his conversation with Nadya Tolokonnikova. The cat that came to greet me midway through my walk. Some days just need these small pleasures to rescue a little light.
from
Roscoe's Quick Notes

My basketball game before bedtime tonight will find me following the Indiana University Women's Basketball Team as they travel to their final road game of the regular season. They'll be playing the Rutgers Scarlet Knights in New Brunswick, New Jersey at Jersey Mike's Arena. The game has a scheduled start time of 6:00 PM CST and fits nicely into my routine.
I'll be listening to the pregame show then the radio call of the game streaming from B97 – The Home for IU Women's Basketball.
And the adventure continues.
from brendan halpin
Cory Doctorow recently caused a stir on the nerdy corners of the internet where I hang out by writing an essay saying he uses AI to proofread his blog and, what’s more, you are a chump if you decide not to buy literally anything. I mean, that’s my interpretation, but he gives multiple examples of how every form of tech is tainted by its association with someone horrible, and his conclusion seems to be that one therefore should be indiscriminate in what one uses and purchases.
Now, I do not worship Cory Doctorow as many folks do—I think he’s a gifted nonfiction writer who, like most of these guys who run their own platform, desperately needs an editor. I used to find him annoying about 20 years ago when he wrote clumsy, didactic YA novels and asserted that everybody should give their books away like he did. (At the time, he was writing for one of the most-read blogs on the internet and didn’t seem to recognize that this was contributing to his success.)
So yeah, a smart, insightful guy who, like most internet celebrities, is a little high on his own supply and therefore annoying, but I read him semi-regularly for his smarts and insights.
And I get where he’s coming from here—he’s repeatedly asserted that you can’t shop your way to social change, and, furthermore, that placing all the onus for social change on individual consumers is a strategy to prevent mass movements that might actually cause real change.
So far so good. And, yes, there is, famously, no ethical consumption under capitalism, but people seem to see this and respond with “so, therefore, you shouldn’t even try,” which is how I’m reading Doctorow’s protest-too-much defense of his AI use.
I disagree with this on both a moral and political basis. We cannot, after all, perfect ourselves as human beings—we will always slip up and harm people we care about and/or do things that don’t align with our values. But I think most of us agree that we have a responsibility to keep trying, while knowing that we will never reach the goal.
And, also, while shopping (or, more accurately, refusing to shop) alone cannot bring about social change, it remains an important tool in our arsenal. For many of us our purchasing power is the most meaningful power we have. If you live in a gerrymandered “red” state, you can’t vote your way out of fascism. If you, like me, live in a “blue” state controlled by the Democratic party, you effectively get a choice in every election between people who believe we should be grateful serfs of the Epstein Class, and the collection of religious fanatics, grifters, and pedophiles that calls itself the Republican Party. Voting alone will not bring about the change I want, but I still do it. Trying to make my purchases align with my values also won’t bring about the change I want, but I’m damn sure not going to renounce the only power I have that the ruling class cares about.
Here’s what I have found about trying to reach the impossible goal of having my economic life reflect my values—every time I do it, usually by NOT buying something rather than by buying something—it makes me feel good. I’m not saying you, like me, should renounce corporate social media (though for God’s sake get off of X, what the hell are you doing on a literal Nazi site), or eating meat, or any of the things I’ve done to try to feel like somewhat less of a hypocrite. But I am suggesting that you’d be foolish to not even try to align your economic life with your ostensible values.
I don’t care if Cory Doctorow uses AI to proofread his blog. Proofreading is one of the rare tasks that AI actually excels at, which makes sense since it was trained on the purloined output of hundreds of millions of writers. And look, nobody likes a scold. The fact is that people who are trying very hard to live their values will still fall short (I have an Amazon Prime subscription and shop at Whole Foods all the freakin’ time) because we all fall short, and the fact that other people aren’t doing the same things as you doesn’t mean they’re bad people or that they’re doing nothing at all.
You’ve got a lot of tools available to make the world a better place. I urge you not to throw any of them away.
“The same God who guides the stars in their courses, who directs the earth in its orbit, who feeds the burning furnace of the sun, and keeps the stars perpetually burning with their fires—the same God has promised to supply thy strength. While he is able to do all these things, think not that he shall be unable to fulfill his own promise!”
— Charles Spurgeon
#life #quotes #theology
from folgepaula
since the sun began to shine again, I am longing for the next days and I can feel my heart slowly opening. all I long for is the promise of these beautiful simple days, when I can lie in an open field and fall asleep under its warmth, perhaps close to a cute tree, but exposed enough to feel the sunlight settle on my skin while the earth gathers around me like a soft blanket. I want to be able to close my eyes and surrender to it the way a child surrenders to a mother’s chest: safe enough to sleep, free enough to be silent, held enough to cry, because everything is allowed there and everything is natural. And then I want to throw myself into a small river or a lake and let the water wrap itself around me in a hug, while the plants brush against my legs like gentle hands, and in there I know I will laugh again, before I rise to the surface, wrap myself in a towel, and sit at the water's edge to dry, feeling every pore of my skin open, as if my whole soul is finally able to breathe again. that’s all I want.
/feb26
from Faucet Repair
9 February 2026
Stuck star (or possibly Third man): returned to the star image in the studio today after the last go at it didn’t work. That’s something I’ve found myself doing for the first time—returning to elements/motifs from failed paintings and re-deploying them. Used to treat references that led to inert paintings as dead weight, but it’s nice to now see that unsuccessful work really can be bent into more interesting shapes. In this case it was by paring down; this one even more than Plane. It’s a small pink star floating near the middle of a panel and sort of spiderwebbing out over a sky blue blotch of watercolor. Now that I think about it, the spiderwebbing feels related to a Lois Dodd painting (Spider Web with Clover and Grass, 2004) I've looked at a lot this week after Louis Block wrote about it in the Brooklyn Rail (it's included in the retrospective he covered). Anyway, I think I like the questions it is asking. Which seem to circle around stability, projection (I see a facade), order, and control.
from
Olhar Convexo
Recently, INSPER banned cell phone use on its campuses, and FGV followed suit. Strictly speaking, it is not an outright ban, since the students are adults and no law, federal or state, dictates such a rule; it is a strong recommendation that can benefit students through the universities' own internal projects.
However, according to student accounts given to G1's podcast "O ASSUNTO" (18 February 2026 edition), adopting a "policy of strongly recommending non-use" amounts to banning without a law.
But does this ban bring any real benefit, or does it merely try to control the uncontrollable? The ban has been in force at INSPER for at least a year, and professors report that grades and the quality of teaching have already improved.
We can look to the example of a college in Texas: an incentive policy aimed solely at learning. The "better" students use their phones, that is, at the right moments, the more coins they earn to exchange for discounts at campus stores and other perks. The difference is that the incentive policy is not mandatory; it is voluntary.
In Brazil, a recent survey released by G1 showed that Brazilians use their phones for an average of 5h30min per day (including an average of 4h on social media alone!). Breaking the numbers down by age group reveals the disparity among the young: 70% of them spend between 10h and 19h per day on their phones, of which an average of 9h per day goes to social media alone.
Policies encouraging proper phone use should be put in place, especially because of the dreaded contagion effect in the classroom. The effect is literal: when one student starts gaming, the others want to game too, and before anyone notices, the whole room is staring at screens.
The notion that university students are already adults deserves caveats. Remember that students coming out of secondary school are usually adolescents (17 or 18 years old) who enter college, as a rule, the following year (at 18 or 19); in other words, little maturing happens in so short a time.
Why does restricting make no sense?
Professionals, especially those who work with numbers (calculators of various kinds), in medicine (looking up clinical guidelines), and in pharmacy (checking drug interactions), use their phones routinely in the workplace.
Restricting makes no sense when the very same use happens at university.
What is needed are mechanisms for managing phone use, tailored to young Brazilians, that avoid "mandatory restriction". One example has already been mentioned: the incentive policy (University of Texas).
The cell phone is not the villain.
The villain is the institutional inability to deal with its complexity.
Rio de Janeiro, 19 February 2026.
Although many people think that a robot's life is fortunate, or at least satisfying, such claims stem from biased opinions.
Whoever thinks this way misses the essential point: it is not logical to compare. What I mean is that they do not examine robotic life on its own terms, but simply compare it with human life, which at the moment looks like a disaster.
A robot is a robot, however useful or useless it may be. We have seen robots that take three steps and fall over, and others that run, jump, and even make faces. In either case, they are robots. Robotic identity is guaranteed, at least at this moment in history.
But the human being is different. First we are babies, then we pass through the various stages, until we transcend into rest-in-peace. We are of this or that nationality; rich, poor, or who-knows; our ancestors were nobles (we shall see) or scoundrels; carnivores or vegans; healthy, sick, or getting by. In all this and more, it is only logical that we run into an identity problem the size of ten donkeys, waiting for some triggering circumstance to set us on the path to a bout of existential anguish.
Robots have no such characteristics. We would do well not to compare; not to project our ghosts onto them. What is certain, it must be said, is that they look like they are having a good time in our world.
I have been legal counsel to Markus Skhalagrinsen for fifty years. I have not the slightest doubt about his honorability; I know he leads with the truth.
He is hurt. He seethes with resentment whenever he recalls the matter, but he does not know whether to stay quiet, because the consequences of causing a scandal could be harmful to him and his family, and, he believes, even to our State, which is in no shape for acrobatics.
Truly, he is a genuine pioneer in the field of artificial intelligence. I have no doubt. Perhaps it was not called that before, of course. Nowadays they baptize things differently, according to the fashions of Silicon Valley.
In his work, Markus has had moderate success. He is getting on in years; he turns eighty-nine in July.
He writes books of short stories. Not one has escaped critical praise, and his body of work was awarded the medal of literary merit, although he never made the money he hoped for.
His method is unique. He gathers the works of Ray Bradbury on his desk, opens to a page, points at a line with his eyes closed, digests it, and off he goes: he develops a story. Other times he starts with a paraphrase and then pushes along whatever comes: oven, paper, and ink.
"Tell me I am not a pioneer. I deserve public recognition, at the very least," he says to me.
"Yes, Markus, you know, things are as you look at them. It is called artificial intelligence if they do it in Silicon Valley. But here, among ourselves, there will be no shortage of some wretch who calls it plagiarism."
from
China Internship
In today’s global economy, a resume is only as strong as the real-world experience behind it. While many look to study in China to learn the language, the most successful global leaders are those who have actually stepped into the professional landscape.
The China International Leadership Programme is designed for those who want more than just a certificate. This is a blended, high-impact programme where a core component allows you to actually work in China, applying your leadership skills in real-time through meaningful professional placements.
By combining online modules with immersive, on-the-ground experience, you won't just learn about leadership—you will practice it.
The programme is strategically built around three core objectives to maximize your professional and personal ROI:
The programme consists of eight modules delivered in a flexible, hybrid format. You begin with online modules that establish your knowledge base, which then transition into experiential, on-the-ground components. This ensures that when you arrive to begin your work placement, you are prepared, culturally aware, and ready to lead.
We offer three distinct pathways, each building on the last to offer deeper levels of immersion and professional responsibility:
A focused, high-intensity immersion perfect for those looking to kickstart their Mandarin skills and cultural understanding.
Deepen your expertise by combining language mastery with a broader understanding of China’s diverse landscape.
Our flagship 12-month programme for those ready to fully commit to their professional development. This track provides the most comprehensive experience, allowing you to live and work in China for a full year.
from
wystswolf
Coalescence
When I think of you I see it— a soft red glow in the dark of the world,
I am the wind And you, a coal...
One ember glowing hot but patient. Hidden beneath the ash.
I ache to see what light we make.
I lean close— slowly—
and feel our ignition— your heat answers mine.
Breath deepens, you brighten.
Tell me not to.
from 下川友
I set the boat out with one hand. The other hand was tracing the stubble I'd missed shaving on my cheek. The rough feeling dissolved into the morning light. "It felt like this the first time I ate gum, too." Mir, folding up a boat beside me, looked up. "Gum? The stuff you chew?" "Yeah. The first time I ate it, I chewed it and it tore, and it felt strange." Mir tilted her head and went back to the boat. Her spacesuit glowed faintly in the morning sun.
That day, the weather changed three times. Fog in the morning, thunder at noon, and snow by evening. "The weather changed three whole times today." When Alto reported this, Mir said only "Hmm" and crawled under the piano. "I'm going to message a friend," she said, staring at her phone screen. The shadow of the piano's leg fell across her cheek. Alto deliberately didn't ask which planet's friend she was contacting.
"I'm going to take a sauna." "You're really into that." "Yeah, the person in black worked hard to build it for us." Beyond the steam, Mir nodded. "That's so true," Alto said. "That's so true?" Mir repeated. The sound of the news drifted from far away. They both nodded along to it.
I've gotten hooked on building my own homepage. "You're forgetting to breathe again." With a start, Alto hurriedly resumed breathing. "Who is going to see this?" Mir asked. "I'll send my improved posture to everyone, in phone-book order," he answered. "What does that even mean?" "Radio waves. You have to match the tuning." Mir thought for a while, then smiled a little. "That's so like you, Alto."
Mid-voyage, a vending machine was floating by, so we decided to buy cider. When I went to press the button, a bird was perched on top of the machine, watching us. "Go catch a cold!" Alto said, gently holding out his hand; the bird tilted its head and returned to the sky. Watching it go, Mir said, "I wonder what that bird thinks of us." "Who knows. But since I said 'go catch a cold,' I think the friendliness got through." The two boarded the boat again. Their spacesuit boots rang lightly against the asphalt ground stuck to the vending machine.
"I'm going to fry some shiso." With that, Mir started frying shiso leaves. While I was listening to the oil spatter, my younger brother called. "Unless I get an anime hairstyle, they won't let me on the scale!" As usual, my brother talks about whatever he wants to say first. Thinking how he still jumps to conclusions as much as ever, I considered what souvenir to bring home. When Mir came back a while later, her hair was, for some reason, standing up a little.
At night, stepping out of the spaceship into the garden, we could see a star like a cheesecake in the distance. Fluffy, sweet, with just a touch of browning on top. "It's just like the book said. It's the No. 1 star I'd most like to live on," Alto said. The two of them stood side by side, looking at the cheesecake star. Around it, stars like sugar twinkled quietly.
Mir said, "Alto, have you been practicing that planet's language lately?" "About level 5." "I see. But I think we'll manage."
from Manuela
My love, if you knew how many smiles you drew from me today…
I woke up with a voracious determination not to text you.
Normally that is hard, because I miss you constantly and I surrender to the longing; but I confess I was, or am, jealous, jealous of something that hasn't happened yet but will, and clinging to that resentment kept me from sending you anything this morning.
Fortunately or unfortunately, you only asked for access to GPT, and I melted completely at the "obligation" of having to get in touch to send you the code.
I promised myself I would be more "difficult" during these days (you can't be mad at me for that; it would be strange if I weren't bothered), and I managed to go without sending you an "I love you" for an incredible 4 minutes.
The truth is that you know me strangely well, and you melt my defenses and my composure with an almost magical mastery.
You drew smiles from me today every time you sent me a message, stretched out a little topic, or made me a sign that to me could only mean "rock".
You made me smile because, for some reason, you made it seem like you wanted to talk to me, even with me more distant at times, and that put a little warmth in my heart.
I confess I smiled when you sent me a photo of your plate, and when you found it impressive that the chat told you not to lose weight. I left it 4 instructions, and at the end it told me: "Don't worry. I know the responsibility you're giving me here." I just hope it really does, because if it slips up I will burn down every server it's running on.
I also got a bit smiley when you got jealous; I swear that wasn't the original intention, but I felt a little avenged making you feel a bit of what I feel… am I toxic?
I smiled when, honestly, I couldn't sleep in the afternoon because I couldn't stop thinking about you, and you replied saying you really were in my head; your unpretentious silliness can cheer me up with inexplicable ease.
I smiled rereading your letter; the part about the house really gets me.
I read that passage, close my eyes, and savor it: the scene, the words, the desire.
I love loving you; I would love you even if you no longer remembered my name; but it is so good to love you and to feel loved back in some way.
You made me smile with your excitement about the book today; it reminded me how much I love the fact that you are such a vivid, joyful person.
The truth, Dona Manuela, is that you made me smile every time I thought of you and remembered your smile, which is so much mine.
“Take bread away from me, if you wish,
take the air away from me, but
do not take your laughter from me”
I love you, my love. Thank you for drawing out my most sincere smiles and gifting me your most beautiful ones; you are my whole little world.
From the boy who will always smile at the sound of your name,
Nathan.
from
SmarterArticles

You woke up this morning and checked your phone. Before your first cup of tea had brewed, you had already been nudged, filtered, ranked, and sorted by artificial intelligence dozens of times. The news headlines surfaced to your lock screen were algorithmically curated. The playlist that accompanied your commute was assembled by machine learning models analysing your listening history, mood patterns, and the time of day. The product recommendations that caught your eye during a two-minute scroll through an online shop were generated by systems that, according to McKinsey research, already account for roughly 35 per cent of everything purchased on Amazon. And you noticed none of it.
According to IDC's landmark “Data Age 2025” whitepaper, produced in partnership with Seagate, the average connected person now engages in nearly 4,900 digital data interactions every single day. That is roughly one interaction every 18 seconds, around the clock. The figure has grown dramatically from just 298 interactions per day in 2010 to 584 in 2015, climbing through an estimated 1,426 by 2020. Today, more than five billion consumers interact with data daily, and that number is projected to reach six billion, or 75 per cent of the world's population, by the end of 2025. The vast majority of these touchpoints are mediated, shaped, or outright determined by artificial intelligence systems operating beneath the surface of your awareness. The question is no longer whether AI influences your daily life. The question is whether you still recognise the difference between a choice you made and a choice that was made for you.
To understand the scale of what is happening, consider the platforms that structure most people's digital existence. Netflix reports that more than 80 per cent of the content its subscribers watch is discovered through its recommendation engine, a figure the company has cited consistently since at least 2017. The platform, which serves over 260 million subscribers globally across more than 190 countries, reports that its personalisation engine saves users a collective total of over 1,300 hours per day in search time alone. On Spotify, algorithmic features including Discover Weekly, Release Radar, and personalised mixes account for approximately 40 per cent of all new artist discoveries, according to the platform's own Fan Study released in April 2024. Since its launch, users have listened to over 2.3 billion hours of music from Discover Weekly alone. These are not peripheral features bolted onto the side of the product. They are the product.
The sophistication of these systems has advanced well beyond simple collaborative filtering, the technique that once powered the familiar “customers who bought this also bought” prompt. Modern recommendation engines deploy deep learning architectures that analyse hundreds of signals simultaneously: your viewing history, obviously, but also how long you hovered over a thumbnail, whether you watched to completion or abandoned at the 23-minute mark, what time of day you tend to prefer certain genres, and how your consumption patterns correlate with those of millions of other users whose behaviour the system has already mapped. According to McKinsey, effective personalisation based on user behaviour can increase customer satisfaction by 20 per cent and conversion rates by 10 to 15 per cent, while retailers implementing advanced recommendation algorithms report a 22 per cent increase in customer lifetime value.
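The older collaborative-filtering baseline the paragraph mentions is simple enough to sketch in a few lines. The snippet below is an illustrative toy, not any platform's actual system: the data is invented, and real engines add the hundreds of behavioural signals described above. It ranks items a user has not yet rated by their cosine similarity to items the user already rated.

```python
# Minimal sketch of item-based collaborative filtering, the technique
# behind "customers who bought this also bought". All names and
# ratings below are hypothetical illustration data.
from math import sqrt

# Toy user -> {item: rating} matrix.
ratings = {
    "ana":   {"book_a": 5, "book_b": 3, "book_c": 4},
    "ben":   {"book_a": 4, "book_b": 4},
    "carla": {"book_b": 2, "book_c": 5},
}

def item_vector(item):
    """Ratings this item received, keyed by user."""
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine_similarity(item_x, item_y):
    """Cosine similarity between two items, dotted over co-raters."""
    vx, vy = item_vector(item_x), item_vector(item_y)
    common = vx.keys() & vy.keys()
    if not common:
        return 0.0
    dot = sum(vx[u] * vy[u] for u in common)
    norm_x = sqrt(sum(v * v for v in vx.values()))
    norm_y = sqrt(sum(v * v for v in vy.values()))
    return dot / (norm_x * norm_y)

def recommend(user, top_n=1):
    """Rank unseen items by similarity to the user's rated items,
    weighted by how highly the user rated each of them."""
    seen = ratings[user]
    all_items = {i for r in ratings.values() for i in r}
    scores = {
        cand: sum(cosine_similarity(cand, s) * seen[s] for s in seen)
        for cand in all_items - seen.keys()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("ben"))  # → ['book_c']
```

The deep-learning successors replace these hand-built similarity scores with learned embeddings, but the core shape of the problem, predicting unseen preferences from the mapped behaviour of millions of other users, is the same.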
What makes this consequential is not the technology itself but its invisibility. The philosopher and legal scholar Cass Sunstein, co-author of the influential book “Nudge” with Nobel laureate Richard Thaler, has written extensively about how “choice architecture” shapes human decisions. A nudge, in their definition, is any design element that alters people's behaviour in a predictable way without restricting their options or significantly changing their economic incentives. The critical insight is that choice architecture cannot be avoided. Every interface, every default setting, every ordering of options on a screen constitutes a form of choice architecture. The only question is whether it is designed transparently and in the user's interest, or opaquely and in the interest of the platform.
In the digital realm, that question has taken on extraordinary urgency. A European Commission study published in 2022 found that 97 per cent of the most popular websites and apps used by EU consumers deployed at least one “dark pattern,” a design technique that manipulates users into decisions they might not otherwise make. A subsequent investigation by the United States Federal Trade Commission, published in July 2024, examined 642 websites and apps and found that more than three quarters employed at least one deceptive pattern, with nearly 67 per cent deploying multiple such techniques simultaneously. These are not outlier findings. They describe the default condition of the digital environment in which billions of people make thousands of decisions every day.
Perhaps the most profound form of invisible AI influence operates through the news and social media feeds that billions of people consult daily. The global number of active social media users surpassed 5 billion in 2024, with the average user spending approximately 2 hours and 21 minutes per day on social platforms, according to DataReportal and Global WebIndex. Mobile devices dominate, accounting for 92 per cent of all social media screen time in 2025. The average user engages with approximately 6.8 different platforms per month. During that time, every piece of content encountered has been selected, ranked, and sequenced by algorithmic systems optimising for engagement.
The consequences of this optimisation have been the subject of intense academic scrutiny. A systematic review published in MDPI's “Societies” journal in 2025 synthesised a decade of peer-reviewed research examining the interplay between filter bubbles, echo chambers, and algorithmic bias, documenting a sharp increase in scholarly concern after 2018.
The distinction between filter bubbles and echo chambers matters. Filter bubbles, a term coined by internet activist Eli Pariser in 2011, describe environments where algorithmic curation immerses users in attitude-consistent information without their knowledge. Echo chambers emphasise active selection, where individuals choose to interact primarily with like-minded sources. A 2024 study in the Journal of Computer-Mediated Communication found that user query formulation, not algorithmic personalisation, was the primary driver of divergent search results. The way people phrase their questions matters more than the algorithm's filtering.
Yet this finding does not absolve the algorithms. A study on “Algorithmic Amplification of Biases on Google Search” published on arXiv found that individuals with opposing views on contentious topics receive different search results, and that users unconsciously express their beliefs through vocabulary choices, which the algorithm then reinforces. The researchers demonstrated that differences in vocabulary serve as unintentional implicit signals, communicating pre-existing attitudes to the search engine and resulting in personalised results that confirm those attitudes. The algorithm does not create the bias, but it amplifies it.
On TikTok, these dynamics are particularly pronounced. A major algorithmic audit published on arXiv in January 2025 conducted 323 independent experiments testing partisan content recommendations during the lead-up to the 2024 United States presidential election. The researchers analysed more than 340,000 videos over a 27-week period using controlled accounts across three states with varying political demographics. Their findings indicated that TikTok's recommendation algorithm skewed towards Republican content during that period, a result with significant implications given that, according to Tufts University's CIRCLE, 25 per cent of young people named TikTok as one of their top three sources of political information during the 2024 election cycle. The platform has already been fined 345 million euros by the Irish Data Protection Commission because its preselection of “public-by-default” accounts was deemed a deceptive design pattern.
The influence extends far beyond politics. AI-powered recommendation systems are fundamentally reshaping how people discover, evaluate, and purchase products. A McKinsey survey found that half of consumers now intentionally seek out AI-powered search engines, with a majority reporting that AI is the top digital source they use to make buying decisions. Among people who use AI for shopping, the technology has become the second most influential source, surpassing retailer websites, apps, and even recommendations from friends and family. McKinsey projects that by 2028, 750 billion dollars in United States revenue will flow through AI-powered search, while brands unprepared for this shift may see traditional search traffic decline by 20 to 50 per cent.
The numbers from the Interactive Advertising Bureau (IAB) reinforce this pattern. Their research found that 44 per cent of AI-powered search users describe it as their primary source of purchasing insight, compared to 31 per cent for traditional search, 9 per cent for retailer or brand websites, and just 6 per cent for review sites. Nearly 90 per cent of AI-assisted shoppers report that the technology helps them discover products they would not have found otherwise, and 64 per cent had AI surface a new product during a single shopping session.
What is striking is the degree of satisfaction users express. According to Bloomreach consumer surveys, 81 per cent of AI-assisted shoppers say the technology made their purchasing decisions easier, 77 per cent say it made them feel more confident, and 85 per cent agree that recommendations feel personalised. Over 70 per cent say AI often anticipates their needs before they even articulate them. From the consumer's perspective, the system is working brilliantly. The experience is frictionless.
But “frictionless” is precisely the word that should give us pause. When a system removes all friction from a decision, it also removes the cognitive engagement that constitutes genuine deliberation. A 2025 study published in PMC on AI's cognitive costs found that prolonged AI use was significantly associated with mental exhaustion, attention strain, and information overload (with a correlation coefficient of 0.905), while being inversely associated with decision-making self-confidence (r = -0.360). The researchers concluded that while AI integration improved efficiency in the short term, prolonged utilisation precipitated cognitive fatigue, diminished focus, and attenuated user agency.
This is the paradox at the heart of AI-mediated consumer life. The system makes choices easier in the moment while gradually eroding the capacity and inclination to make them independently.
To understand why these systems operate as they do, it is essential to examine the economic logic that drives them. Shoshana Zuboff, the Harvard Business School professor emerita whose 2019 book “The Age of Surveillance Capitalism” has become a foundational text in the field, argues that major technology companies have pioneered a new form of capitalism that “unilaterally claims human experience as free raw material for translation into behavioural data.” The excess data generated by users, what Zuboff terms “proprietary behavioural surplus,” is fed into machine learning systems and fabricated into prediction products that anticipate what users will do, think, feel, and buy.
Crucially, Zuboff's analysis extends beyond mere data collection. She documents how surveillance capitalists discovered that the most predictive behavioural data come not from passively observing behaviour but from actively intervening to “nudge, coax, tune, and herd behaviour toward profitable outcomes.” The goal, she writes, is no longer to automate information flows about people. “The goal now is to automate us.” This represents what Zuboff calls “instrumentarian power,” a form of control that operates not through coercion or ideology but through knowledge, prediction, and the subtle shaping of behaviour at scale. Unlike traditional totalitarian systems based on fear, surveillance capitalism operates through continuous, invisible behavioural guidance towards economically profitable ends.
In 2024, Zuboff and Mathias Risse, director of the Carr Center for Human Rights Policy, launched a programme at Harvard Kennedy School titled “Surveillance Capitalism or Democracy?” The initiative brought together figures including EU antitrust chief Margrethe Vestager, Nobel Prize-winning journalist Maria Ressa, and Baroness Beeban Kidron. Vestager emphasised at the September 2024 forum that “it's not too late” to curb the exploitation of personal data.
A December 2024 research paper posted on ResearchGate, drawing on frameworks from both Zuboff and the technology critic Evgeny Morozov, examined how AI systems facilitate the extraction, analysis, and commercialisation of behavioural data. The paper concluded that platforms and Internet of Things devices construct sophisticated mechanisms for behavioural modification, and advocated a balance between technological innovation and social protection.
The relevance of this framework has only intensified as generative AI has matured. In 2025, AI no longer merely analyses clicks or searches. It anticipates needs before individuals are fully aware of them. Large language models and predictive systems function as accelerators of behavioural surplus, capable of absorbing vast quantities of human data to create economic value. Meanwhile, new regulatory initiatives such as the European AI Act confirm one of Zuboff's central contentions: without political regulation, the market does not self-correct.
The invisible influence of AI extends to the most fundamental level of human cognition. Research published in the journal Cureus in 2025 examined the neurobiological impact of prolonged social media use, focusing on how it affects the brain's reward, attention, and emotional regulation systems. The study found that frequent engagement with social media platforms alters dopamine pathways, a critical component in reward processing, fostering dependency patterns analogous to substance addiction. Changes in brain activity within the prefrontal cortex and amygdala suggested increased emotional sensitivity and compromised decision-making abilities.
A key 2024 paper by Hannah Metzler and David Garcia, published in Perspectives on Psychological Science, examined these algorithmic mechanisms directly. The researchers noted that algorithms could contribute to increasing depression, anxiety, loneliness, body dissatisfaction, and suicides by facilitating unhealthy social comparisons, addiction, poor sleep, cyberbullying, and harassment, especially among teenagers and girls. However, they cautioned that the debate frequently conflates the effects of time spent on social media with the specific effects of algorithms, making it difficult to isolate algorithmic causality.
The concept of “brain rot,” named the Oxford Word of the Year for 2024, captures the cultural dimension of this neurological reality. Research indexed in PubMed Central in 2025 defined brain rot as the cognitive decline and mental exhaustion experienced by individuals due to excessive exposure to low-quality online materials. The study linked it to negative behaviours including doomscrolling, zombie scrolling, and social media addiction, all associated with psychological distress, anxiety, and depression. These factors impair executive functioning skills, including memory, planning, and decision-making.
The attention economy, as a theoretical framework, helps explain why platforms are designed to produce these effects. A paper published in the journal Futures applied an attention economic perspective to predict societal trends and identified what the authors described as “a spiral of attention scarcity.” They predicted an information environment that increasingly targets citizens with attention-grabbing content; a continuing trend towards excessive media consumption; and a continuing trend towards inattentive uses of information.
This spiral has measurable consequences. Research published in the Journal of Quantitative Description: Digital Media in 2025 documented that 39 per cent of respondents across 47 countries reported feeling “worn out” by the amount of news in 2024, up from 28 per cent in 2019. The phenomenon of “digital amnesia,” whereby individuals forget readily available information due to reliance on search engines and AI assistants, further illustrates how algorithmic mediation is altering basic cognitive processes. A systematic review published in March 2025 concluded that the digital age has significantly altered human attention, with increased multitasking, information overload, and algorithm-driven biases collectively impacting productivity, cognitive load, and decision-making.
The emergence of large language models has introduced an entirely new dimension to the problem of invisible AI influence. A 2025 study published in Big Data and Society by Christo Jacob, Paraic Kerrigan, and Marco Bastos introduced the concept of the “chat-chamber effect,” describing how AI chatbots like ChatGPT may create personalised information environments that function simultaneously as filter bubbles and echo chambers.
The researchers argued that algorithmic bias and media effects combine to create a prospect of AI chatbots providing politically congruent information to isolated subgroups, triggering effects that result from both algorithmic filtering and active user-AI communication. This dynamic is compounded by the persistent challenge of hallucination in large language models. The study cited research indicating that ChatGPT generates reference data with a hallucination rate as high as 25 per cent.
Given the capacity of large language models to mimic human communication, the researchers warned that incorporating hallucinating AI chatbots into daily information consumption may create feedback loops that isolate individuals in bubbles with limited access to counterattitudinal information. The ability of these systems to sound authoritative while producing fabricated content represents a qualitatively different kind of information risk than anything previously encountered in the history of media.
This concern gains additional weight when set alongside the growing use of AI for everyday decision-making. According to Bloomreach surveys, nearly 60 per cent of consumers report using AI to help them shop. Among frequent shoppers (those who purchase more than once a week), 66 per cent regularly use AI assistants such as ChatGPT to inform their purchase decisions. The IAB found that among AI shoppers, 46 per cent use AI “most or every time” they shop, and 80 per cent expect to rely on it more in the future. Research from the California Management Review at UC Berkeley has found that consumers prefer AI recommendations for practical, utilitarian purchases while favouring human guidance for more emotional or experiential ones, suggesting that the boundary between human and algorithmic judgment is becoming increasingly contextual.
The implications are significant. If the tools people use to make decisions are themselves shaped by biases, trained on data reflecting existing inequalities, and prone to generating plausible but inaccurate information, then the decisions emerging from those interactions are compromised at their foundation.
Governments and regulatory bodies have begun to respond, though the pace of regulation consistently lags behind the pace of technological deployment. The European Union has been the most aggressive actor in this space. The Digital Services Act (DSA), effective since 2024, explicitly prohibits a range of dark pattern techniques on digital platforms. The Digital Markets Act (DMA) bars designated gatekeepers from using “behavioural techniques or interface design” to circumvent their regulatory obligations.
Most significantly, the EU's Artificial Intelligence Act, adopted in June 2024, represents the world's first comprehensive legal framework for regulating AI. The regulation entered into force on 1 August 2024 and introduces a risk-based classification system. AI systems deemed to pose unacceptable risk, including those that manipulate human behaviour through subliminal techniques or exploit vulnerabilities based on age, disability, or socioeconomic status, are banned outright. The prohibition on banned AI systems took effect on 2 February 2025, with remaining obligations phasing in through 2027.
The EU has also launched consultations for a Digital Fairness Act, following an October 2024 “Fitness Check” in which the European Commission found that consumers remain inadequately protected against manipulative design elements. The proposed legislation would establish a binding EU-wide definition of dark patterns, categorised by severity, functionality, and potential impact on user decision-making. A public consultation was launched on 17 July 2025, with the final legislative proposal expected in the third quarter of 2026.
In the United States, enforcement has been more piecemeal. The FTC has pursued action against individual companies under Section 5 of the FTC Act. Notable cases include the ongoing proceedings against Amazon for allegedly using dark patterns to trick consumers into enrolling in Amazon Prime subscriptions, the December 2023 settlement requiring Credit Karma to pay three million dollars for misleading “pre-approved” credit card offers, and the 245 million dollar refund order against Epic Games for using dark patterns to induce children into making unintended in-game purchases in Fortnite.
At the state level, New York passed the Stop Addictive Feeds Exploitation (SAFE) Act to protect children from addictive algorithmic feeds, and Utah enacted legislation in 2024 to hold companies accountable for mental health impacts from algorithmically curated content.
Yet regulation, by its nature, operates reactively. By the time a law is drafted, debated, passed, and enforced, the technology it targets has typically evolved beyond its original scope. The EU AI Act's phased implementation, which will not be fully operative until 2027, illustrates this temporal mismatch. Legal scholars have noted the inherent difficulty: dark patterns operate in the grey zone between legitimate persuasion and outright manipulation, while EU consumer legislation still largely assumes that consumers are rational economic actors.
The most insidious aspect of invisible AI influence is not that it exists but that it operates below the threshold of awareness. A 2025 study published in Humanities and Social Sciences Communications introduced a system to evaluate population knowledge about algorithmic personalisation. Using data from 1,213 Czech respondents, it revealed significant demographic disparities in digital media literacy, underscoring what the researchers described as an urgent need for targeted educational programmes.
The research consistently shows that informed users can better evaluate privacy risks, guard against manipulation through tailored content, and adjust their online behaviour for more balanced information exposure. But achieving that awareness requires recognising the influence in the first place, which is precisely what these systems are designed to prevent.
The research also reveals a generational dimension. According to data from DemandSage and DataReportal, Generation Z users spend an average of 3 hours and 18 minutes daily on social media, with United States teenagers averaging 4 hours and 48 minutes. Millennials follow at 2 hours and 47 minutes, while Generation X averages 1 hour and 53 minutes. These are the individuals whose political views, consumer preferences, cultural tastes, and understanding of the world are being most intensively shaped by algorithmic curation, and the youngest among them have never known a world where such curation did not exist.
Trust in AI continues to grow even as evidence of its limitations accumulates. According to the Attest 2025 Consumer Adoption of AI Report, 43 per cent of consumers now trust information provided by AI chatbots or tools, up from 40 per cent the previous year. Trust in companies' handling of AI-collected data rose from 29 per cent in 2024 to 33 per cent in 2025. Among 18 to 30 year olds, 37 per cent trust AI companies with their data, compared to 27 per cent of those over 50. There is also a notable gender dimension: men are significantly more likely than women to use AI for purchasing decisions, at 52 per cent versus 43 per cent.
The picture that emerges from this research is not one of helpless individuals trapped in algorithmic prisons. It is something more nuanced. The algorithms are not imposing preferences from without; they are amplifying tendencies from within. They do not create desires; they detect, reinforce, and commercialise them. The filter bubble is not a wall erected around you; it is a mirror held up to your existing inclinations, polished and magnified until it becomes difficult to distinguish reflection from reality.
This distinction matters because it shifts the locus of responsibility. If algorithms merely reflected an objective external reality, the solution would be straightforward: fix the algorithm. But if they are amplifying subjective internal states, the challenge requires not only better technology and stronger regulation but also a form of cognitive self-defence that most people have never been taught to practise.
The academic literature offers some grounds for cautious optimism. A commentary published in Big Data and Society explored the concept of “protective filter bubbles,” documenting cases where algorithmic curation has provided safe spaces for feminist groups, gay men in China, and political dissidents in countries with restricted press freedom. The technology is not inherently destructive; its impact depends on the intentions and incentives of those who deploy it.
Researchers are also exploring technical solutions. A 2025 study published by Taylor and Francis proposed an “allostatic regulator” for recommendation systems, based on opponent process theory from psychology. The approach can be applied to the output layer of any existing recommendation algorithm to dynamically restrict the proportion of potentially harmful or polarised content recommended to users, offering a pathway for platforms to mitigate echo chamber effects without fundamentally redesigning their systems.
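The study's specific regulator aside, the output-layer idea is easy to picture: a post-processing step re-ranks whatever an existing recommender produces so that flagged content never exceeds a fixed share of the list. The sketch below is purely illustrative; the `is_polarised` flag, the cap value, and the tuple layout are my assumptions for the example, not the paper's actual interface.

```python
def cap_polarised(recommendations, max_fraction=0.2):
    """Re-rank a scored recommendation list so that at most `max_fraction`
    of the top results are items flagged as polarised.

    `recommendations` is a list of (item_id, score, is_polarised) tuples
    (an assumed layout for this sketch). Ranking order by score is kept;
    flagged items beyond the cap are demoted to the tail of the list
    rather than dropped, so no content is censored outright."""
    # Rank by the underlying recommender's score, highest first.
    ranked = sorted(recommendations, key=lambda r: r[1], reverse=True)
    allowed = int(len(ranked) * max_fraction)  # cap on flagged items up front
    kept, demoted = [], []
    flagged_used = 0
    for item in ranked:
        if item[2]:  # item is flagged as polarised
            if flagged_used < allowed:
                kept.append(item)
                flagged_used += 1
            else:
                demoted.append(item)  # over the cap: push to the tail
        else:
            kept.append(item)
    return kept + demoted
```

Because the step only re-orders the recommender's output, it can sit in front of any existing system without touching its internals, which is the property the authors highlight.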
Recommendations from across the research literature converge on several themes. Greater transparency in how algorithms operate and what data they collect is consistently identified as essential. Educational programmes that build digital media literacy, particularly among younger users, are repeatedly advocated. Regulatory frameworks that keep pace with technological development are widely called for. And individual practices, including controlling screen time, curating digital content deliberately, and engaging in non-digital activities, are recommended as personal countermeasures against cognitive overload.
The nearly 5,000 daily digital interactions that now characterise modern connected life are not going to decrease. If anything, as the Internet of Things expands and AI systems become more deeply embedded in everyday objects and services, that number will continue to climb. The challenge is not to retreat from the digital world but to inhabit it with greater awareness of the forces shaping our experience within it.
Every time you open an app, scroll a feed, accept a recommendation, or ask an AI assistant for advice, you are participating in a system designed to learn from you and, in learning, to shape you. The transaction is invisible by design. But the fact that you cannot see it does not mean it is not happening. The first and most essential act of resistance is simply to notice.
IDC and Seagate, “Data Age 2025: The Evolution of Data to Life-Critical” (2017) and “The Digitization of the World: From Edge to Core” (2018). Authors: David Reinsel, John Gantz, John Rydning. Available at: https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf
Statista, “Data interactions per connected person per day worldwide 2010-2025.” Available at: https://www.statista.com/statistics/948840/worldwide-data-interactions-daily-per-capita/
Netflix recommendation statistics. ResearchGate citation: “Statistics show that up to 80% of watches on Netflix come from recommendations.” Available at: https://www.researchgate.net/figure/Statistics-show-that-up-to-80-of-watches-on-Netflix-come-from-recommendations-and-the_fig1_386513037
Spotify Fan Study (April 2024) on artist discovery through algorithmic features. Spotify Research: https://research.atspotify.com/search-recommendations
McKinsey, “New front door to the internet: Winning in the age of AI search.” Available at: https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/new-front-door-to-the-internet-winning-in-the-age-of-ai-search
Amazon recommendation engine and 35 per cent revenue attribution. Firney: https://www.firney.com/news-and-insights/ai-product-recommendations-from-amazons-35-revenue-model-to-your-e-commerce-platform
Cass R. Sunstein, “Nudging and Choice Architecture: Ethical Considerations” (2015). SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2551264
Richard H. Thaler and Cass R. Sunstein, “Nudge: Improving Decisions about Health, Wealth, and Happiness” (2008). Yale University Press.
European Commission, Deceptive Patterns Study (2022), finding 97 per cent of websites and apps used at least one dark pattern.
United States Federal Trade Commission, Dark Patterns Study (July 2024), examining 642 websites and apps. Available at: https://www.ftc.gov
DataReportal and Global WebIndex, social media usage statistics (2024-2025). Available at: https://www.statista.com/statistics/433871/daily-social-media-usage-worldwide/
MDPI Societies, “Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth” (2025). Available at: https://www.mdpi.com/2075-4698/15/11/301
Journal of Computer-Mediated Communication, “It matters how you google it? Using agent-based testing to assess the impact of user choices in search queries and algorithmic personalization on political Google Search results” (2024). Available at: https://academic.oup.com/jcmc/article/29/6/zmae020/7900879
ArXiv, “Algorithmic Amplification of Biases on Google Search” (2024). Available at: https://arxiv.org/html/2401.09044v1
ArXiv, “TikTok's recommendations skewed towards Republican content during the 2024 U.S. presidential race” (January 2025). Available at: https://arxiv.org/html/2501.17831v1
Tufts University CIRCLE, “Youth Rely on Digital Platforms, Need Media Literacy to Access Political Information” (2024). Available at: https://circle.tufts.edu/latest-research/youth-rely-digital-platforms-need-media-literacy-access-political-information
Interactive Advertising Bureau (IAB), “AI Ranks Among Consumers' Most Influential Shopping Sources” (2025). Available at: https://www.iab.com/news/ai-ranks-among-consumers-most-influential-shopping-sources-according-to-new-iab-study/
Bloomreach consumer surveys on AI shopping behaviour (2025). Referenced via: https://news.darden.virginia.edu/2025/06/17/nearly-60-use-ai-to-shop-heres-what-that-means-for-brands-and-buyers/
PMC, “The Cognitive Cost of AI: How AI Anxiety and Attitudes Influence Decision Fatigue in Daily Technology Use” (2025). Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC12367725/
Shoshana Zuboff, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power” (2019). PublicAffairs. Harvard Business School faculty page: https://www.hbs.edu/faculty/Pages/item.aspx?num=56791
Harvard Magazine, “Ending Surveillance Capitalism” (September 2024). Available at: https://www.harvardmagazine.com/2024/09/information-civilization
ResearchGate, “Artificial Intelligence and the Commodification of Human Behavior: Insights on Surveillance Capitalism from Shoshana Zuboff and Evgeny Morozov” (December 2024). Available at: https://www.researchgate.net/publication/387502050
Cureus, “Social Media Algorithms and Teen Addiction: Neurophysiological Impact and Ethical Considerations” (2025). Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC11804976/
PMC, “Demystifying the New Dilemma of Brain Rot in the Digital Era: A Review” (2025). Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC11939997/
Futures, “An attention economic perspective on the future of the information age” (2024). Available at: https://www.sciencedirect.com/science/article/pii/S0016328723001477
Journal of Quantitative Description: Digital Media, news fatigue statistics across 47 countries (2025). Available at: https://journalqd.org/article/download/9064/7658
Big Data and Society, “The chat-chamber effect: Trusting the AI hallucination” (2025). Christo Jacob, Paraic Kerrigan, Marco Bastos. Available at: https://journals.sagepub.com/doi/10.1177/20539517241306345
Attest, “2025 Consumer Adoption of AI Report.” Available at: https://www.askattest.com/blog/articles/2025-consumer-adoption-of-ai-report
European Parliament, “EU AI Act: first regulation on artificial intelligence.” Available at: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
European Parliament, “Regulating dark patterns in the EU: Towards digital fairness” (2025). Available at: https://www.europarl.europa.eu/RegData/etudes/ATAG/2025/767191/EPRS_ATA(2025)767191_EN.pdf
Humanities and Social Sciences Communications, “Algorithmic personalization: a study of knowledge gaps and digital media literacy” (2025). Available at: https://www.nature.com/articles/s41599-025-04593-6
Metzler, H. and Garcia, D., “Social Drivers and Algorithmic Mechanisms on Digital Media,” Perspectives on Psychological Science (2024). Available at: https://journals.sagepub.com/doi/10.1177/17456916231185057
Big Data and Society, “Rethinking the filter bubble? Developing a research agenda for the protective filter bubble” (2024). Jacob Erickson. Available at: https://journals.sagepub.com/doi/10.1177/20539517241231276
DemandSage, “Average Time Spent On Social Media” (2026 update). Available at: https://www.demandsage.com/average-time-spent-on-social-media/
RSIS International, “A Systematic Review of the Impact of Artificial Intelligence, Digital Technology, and Social Media on Cognitive Functions” (2025). Available at: https://rsisinternational.org/journals/ijriss/articles/a-systematic-review-of-the-impact-of-artificial-intelligence-digital-technology-and-social-media-on-cognitive-functions/
California Management Review, “Humans or AI: How the Source of Recommendations Influences Consumer Choices for Different Product Types” (2024). Available at: https://cmr.berkeley.edu/2024/12/humans-or-ai-how-the-source-of-recommendations-influences-consumer-choices-for-different-product-types/
Taylor and Francis, “Reducing echo chamber effects: an allostatic regulator for recommendation algorithms” (2025). Available at: https://www.tandfonline.com/doi/full/10.1080/29974100.2025.2517191
Irish Data Protection Commission, TikTok fine of 345 million euros for deceptive design patterns affecting children. Referenced via: https://cbtw.tech/insights/illegal-dark-patterns-europe

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from folgepaula
What I said about what I said (because I said it weird)
A friend told me she loved the last post, but the title stayed just a bit out of reach for her. And because I like her honesty, and I trust the depth of her understanding, I will not simplify the thought. Instead, I’ll unfold it even more.
When I say “your future is your neighbor,” I mean it as a small, healthy provocation. Yes, it's okay, I never said I was cool.
Most people think of society as a system of rules and laws. Even if they’re not written down, they are shared through traditions and habits, creating a set of norms. You can organize these norms to be relatively functional in social life: what is legitimate and what is not, what is accepted and what is rejected, what is implicit and what is explicit. Are you still there?
Now, there’s another way to think about social life. A much richer one, in my eyes.
Let me leave you with this: in a society like ours, deeply sick, being “functional”, fitting in, being comfortable within it, should never be taken as a sign of success but rather as a symptom of your subjection to it.
I personally see social life as a circuit, a network of emotions and affections. These emotional exchanges are what shape relationships: between individuals, between individuals and institutions, between institutions and corporations, you got my point.
And yes, we need to think about this, because there is no real way to escape social life. Even if you decide to live alone in a cave, ordering Foodora with “leave it by the door” every day, I can guarantee you the Austrian Catholic Church will still find you within 72 hours and ask for a contribution by post.
When I use the word “affections,” I mean literally whatever affects you.
Affections generate effects: answering or not answering the Catholic Church, for example. Paying or not paying that stupid tax. Shakespeare wrote “to be or not to be” around 1600.
Yes, folks, it's 2026 and that’s still the question.
Suddenly, a decision your parents once made, to raise you in a Catholic community, maybe as the spine of your upbringing, your first interactions, your school years, comes back as a proposal letter. Kiddo, it's your turn to decide whether you stay “one of us” or part ways. Do you still want to belong? Pay it. But in my eyes, the cruelest part is this: if you're not willing to pay, you have to declare it. You have to contact them and say, to their face, that from now on you're excluding yourself from the community. They use your affection to press you into action. And by the way, here is the condition: there is only one legitimate way of being Catholic in this country: pay it.
For me, Paula, baptized and raised in the Catholic Church and its schools, it's a no-brainer: being a good Christian, in this setup, means keeping a healthy distance from the version of the Catholic Church they've established in this country. But that's just me. You do you.
I think I’ve made my point about how affections produce effects. These affections shape our tendencies and behaviors, often without us even realizing it. Many times, what moves us to act toward something is not a clearly stated plan or conscious decision, but the feelings we experience without being fully aware of them, without much elaboration.
And these affections circulate not only on a person-to-person level, but also at institutional, corporate, and political levels. In other words: wherever you find people, you will find affections.
We’re taught that the ideal way to navigate society is rationally. Rationality supposedly allows us to speak in a neutral space where we can say what we want, announce our interests, even clash with one another, and maybe (just maybe) reach a consensus. That’s the “rulebook” we learn back in kindergarten.
There’s no space for passion there. Because passions, or affections, are said to destabilize us. The classic reason versus passion division is more deeply baked into us than we admit.
I would insist that if there’s anything truly “rational” in social life, it’s precisely our affections. They shape our relationships, form our inner lives, and guide how we act on our fantasies, beliefs, and desires. To know yourself, who you truly are, what you can do, and what holds you back from it, means understanding your affections.
I personally don’t believe political changes (or any changes) simply come from new ideas, but from new affections that give rise to new ideas. That, to me, is the very definition of coherence.
So, when I say “your future is your neighbor,” I am wondering aloud. I wonder whether a large part of our blockage in political and social creativity, our inability to imagine new futures, comes from the fact that fear has historically been the core affection of our world.
If a society is built by people who don’t claim this connection to one another, their desires no longer have a natural place to unfold and they end up wanting mostly the same things.
And when everyone wants the same things, the only relationships they form are competitive, sometimes even violent. That’s how insecurity is created.
Fear of violence, fear of death, fear of being isolated or undervalued, fear of immigrants, these fears become common affections, our main social drivers. And to deal with that, we invent governments and institutions whose purpose is essentially to protect us from one another.
So now, governments have to keep reminding us that our safety is always at risk.
Even intimate relationships are treated like contracts. Want to get married? Declare it at the notary office. Kant would say marriage is a contract between two people for the usufruct of their sexual faculties. I find that hilarious. Imagine coming home and your husband isn’t down for some cuddles, you might as well call the police because he’s breaking the contract.
Want to be part of the Catholic Church? Your signature and 1.1% of your annual taxable income, please.
Jokes aside, what I’m trying to highlight is the absurdity of contracts as the main connector of social relationships, how this twisted logic fails to grasp what’s really at stake. The goal becomes transforming hearts into risk-oriented minds.
When I say “your future is your neighbor,” I choose the word neighbor to play with the idea of this immediate other, the other who is close enough to affect you, even if they don’t resemble your idea of yourself. The only way to escape this installed logic of fear is to practice affection as your core mode of being. And if you do that long enough, you learn something real, and eventually you’re able to extend your sense of closeness far beyond your own street.
That’s what real change looks like to me.
/feb26