from Tony's Little Logbook

It's been a season of grief and loss.

But kind souls are holding space for me to soothe this pain, in community. Feelings of gratefulness and gladness wash over me.

Some nice things that have helped me to navigate surges of sadness and other emotions:

  • gelato
  • sharing my sorrows with a friend whom I feel emotionally safe with
  • impromptu sing-along sessions with strangers at public pianos (featuring pop songs with sad tear-jerking lyrics)
  • long solo walks, on both quiet nights and sun-drenched days
  • Anne Lamott's (hilarious) book “Operating Instructions: A Journal of My Son's First Year”
  • organic vegetables at dinner-time

I could go on and on, but you get the idea.

May I direct you now to Anne Lamott's Substack (e-newsletter). She's like an auntie who stays far away, lucid-eyed and pithily humorous when she comes over suddenly and gives you uncomfortable kisses that you never asked for, but which you appreciate anyway.

https://annelamott.substack.com/

#lunaticus

 

from EpicMind

Illustration of an ancient philosopher in a toga, sitting exhausted at a modern office workstation in front of a computer, surrounded by empty office chairs and urban architecture.

Friends of wisdom, welcome to the third edition of the weekly EpicMonday newsletter!

Productivity tools, time-management methods, and focus techniques are supposed to help make the workday efficient. But anyone who relies exclusively on efficiency risks blocking their creative potential, because good ideas rarely emerge in a mode of maximum control. Psychologist Jennifer Haase points to the so-called cocktail-party phenomenon: our brain processes information even when we are not consciously attending to it, which is essential for creative thinking. Tools like Trello or the Pomodoro technique are useful for routine tasks, but they can stifle innovation when applied too rigidly.

A proven model of the creative process (developed by the social psychologist Graham Wallas in his 1926 book The Art of Thought) identifies four phases: preparation, incubation, illumination, and verification. The incubation phase in particular, those periods of apparent idleness, is central to genuine breakthroughs. Walks, conversations, manual work, or an hour in the office kitchen can foster exactly the mental agility that efficient workflows often suppress. The innovation consultant Tim Leberecht therefore warns against a “cult of efficiency” that tempts companies to settle for mediocre results instead of creating room for their best work.

Research on time management paints a similarly nuanced picture: good self-management does increase subjective well-being, but not necessarily performance. Those who plan too much risk getting lost in to-do lists and falling prey to the “planning fallacy”, the chronic underestimation of effort. The recommendation is therefore to build in deliberate breaks, question your tasks, and occasionally take off the efficiency glasses. Creativity needs not more tools, but more air.

Food for thought to start the week

“As long as a person is writing a book, he cannot be unhappy.” – Jean Paul (1763–1825)

ProductivityPorn tip of the week: saying no

You can't do everything. If you constantly say “yes,” you overload yourself and risk letting the quality of your work suffer. Learn to decline politely but firmly when something doesn't fit your priorities.

From the archive: Dealing sensibly with procrastination

Procrastination is a complex phenomenon, deeply rooted in our psychological patterns. By understanding its underlying causes and applying targeted strategies, you can learn to handle procrastination and lead a more productive, fulfilling life. Structured procrastination can be a helpful method for staying productive even while putting tasks off.

read more …

Thank you for taking the time to read this newsletter. I hope its contents inspired you and gave you valuable impulses for your (digital) life. Stay curious, and question what you encounter!


EpicMind – Wisdom for the Digital Life

“EpicMind” (short for “Epicurean Mindset”) is my blog and newsletter devoted to learning, productivity, self-management, and technology, all seasoned with a pinch of philosophy.


Disclaimer

Parts of this text were revised with DeepL Write (proofreading and editing). NotebookLM by Google was used for research in the works/sources mentioned and in my notes. The article image was created with ChatGPT and then post-processed.

#Newsletter

 

from The Poet Sky

I hear the way you talk
The unkind words you use
The cruel jokes and jabs
Rationalizing while insulting

All aimed at yourself

“I'm meant to be alone”
“It's fine, no one notices me”
“Silly, why would anyone care about me?”
“It's okay, I always mess everything up”

Why not stop?

I know kindness is hard
Complimenting yourself feels impossible
Little by little, you can do it
I believe in you

Start with small steps

End the cruelty
Silence the harsh words
Cease the insults
Stop being so mean

Because you deserve better than that

#Poetry #SelfLove

 

from tomson darko

It is a sad fact that Dutch, Greek, and Hungarian Jews in particular died relatively quickly in the German labour camps.

The reason?

They were less able to withstand the inhumane conditions than Jews from other parts of Europe.

Other Jews were already used to a hard life, including mental and physical humiliation.

Belgian Jews also fared relatively better, because most of them had fled Poland, Russia, and Lithuania earlier in the twentieth century.

Isn't that absurd to read?

The Netherlands' best-known criminal defence lawyer, Max Moszkowicz (1926–2022), ended up in a concentration camp as a teenager. He was one of the few who survived the Second World War.

That had a lot to do with luck, because Max, too, looked death in the eye there more than once.

But other factors also increased his chances of survival.

To begin with, Moszkowicz came from Limburg, but he was born in Germany. His parents had fled Poland years earlier.

He therefore spoke German and Polish, which helped him in the Birkenau camp (Auschwitz II).

Another thing that helped was that his father was in the camp with him. They gave each other enormous mental support.

Sadly, his father did not live to see the end of the Second World War. He eventually died in another camp, probably of exhaustion.

Max ended up at the camp's small bricklaying school. That was one of the “better” places to be. The extermination camp kept on growing, so there was always bricklaying to be done.

But don't think there was anything luxurious about it. If you arrived late, or lost focus because you fainted from hunger, you were beaten. Sexual abuse took place as well.

Man. Man. Man.

Tears sprang to my eyes again and again as I read about the barbarity of the Nazis: the arbitrary humiliation, destruction, beatings, and starvation.

Physically it was barely survivable: fourteen hours of heavy labour every day, and at the end of it half a piece of bread and some boiled water with potato peels.

But mentally it was barely survivable either.

Yet Max did something brilliant.

As one of the few in his block of 800 men.

Self-care.

He made sure he looked well groomed every day. He washed his face in the snow. He kept his clothes neat.

That gave him not only a sense of self-respect and self-worth. It was also a counter-image to the Nazis, who saw him as an inferior, filthy, useless animal.

It helped him not only to command respect, but also to get a job in the bakery, a place where a Jew was normally not welcome. But he always worked hard and looked clean and tidy, and that apparently gave the Germans enough confidence.

Working in the bakery meant he could steal pieces of bread to hand out or trade with the other prisoners.

This is going to sound very strange.

But when your thoughts are not being kind to you, make sure you keep taking care of yourself.

It is also the first thing a psychologist looks at when you walk in.

Have you combed your hair? Clipped your nails? Put on clean clothes?

Because it is an indication of how you are feeling mentally.

However rotten you feel. However heavy your thoughts are.

  • Find the energy to get up.
  • To brush your teeth.
  • To eat breakfast.
  • To put on a clean jumper.
  • To take a short walk.
  • To reply to your messages.
  • To go to bed on time.

Self-care doesn't just give your day structure while there is chaos in your head. It also preserves your self-respect. It is the last line of defence of your mental well-being.

Take good care of yourself.
Especially when life is against you.

It is what kept Max going in the concentration camp. Even after surviving the Holocaust, he kept taking good care of himself. In the courtroom, Max was known for his immaculate, spotless gown with its snow-white bands.

It was a trademark of his, meant to impress opponents and judges alike.

That points to another truth.

How you dress is how you feel.

Love,

tomson

 

from the ultimate question

Why do humans feel the need to indulge their senses in something like art? What is the purpose of art?

Our minds are double-edged swords. All is well when everything is hunky-dory up there.

But when mental health problems start cropping up, we need either to learn to control our minds through practices like meditation, or to keep our minds and bodies busy by finding solace in art.

Art is supposed to help you express how you feel.

It doesn't matter whether your painting or sketch looks good. It doesn't matter whether your singing is melodious or your dancing is pleasing to watch.

What matters is how you feel when you express yourself through art. Immerse yourself in the process. Submit to the way it feels in that moment of expression. Lose yourself, relax and breathe.

 

from Mitchell Report

⚠️ SPOILER WARNING: FULL SPOILERS

Promotional poster for "TRON Ares" featuring a futuristic motorcycle and rider in a reflective suit, standing on a rain-soaked city street bathed in red light. The towering buildings fade into a foggy, overcast sky.

My Rating: ⭐⭐⭐½ (3.5/5 stars)

A solid, if unremarkable, entry in the Tron series. Jared Leto stands out, and the plot introduces a novel twist: the digital world invades ours, spotlighting AI. It's a fine way to kill almost 2 hours. However, it's not worth a theater visit. Watching it on Disney+ is your best bet.

TMDb
This product uses the TMDb API but is not endorsed or certified by TMDb.

#review #movies

 

from SmarterArticles

The promotional materials are breathtaking. Artificial intelligence systems that can analyse medical scans with superhuman precision, autonomous vehicles that navigate complex urban environments, and vision-language models that understand images with the fluency of a seasoned art critic. The benchmark scores are equally impressive: 94% accuracy here, state-of-the-art performance there, human-level capabilities across dozens of standardised tests.

Then reality intrudes. A robotaxi in San Francisco fails to recognise a pedestrian trapped beneath its chassis and drags her twenty feet before stopping. An image recognition system confidently labels photographs of Black individuals as gorillas. A frontier AI model, asked to count the triangles in a simple geometric image, produces answers that would embarrass a primary school student. These are not edge cases or adversarial attacks designed to break the system. They represent the routine failure modes of technologies marketed as transformative advances in machine intelligence.

The disconnect between marketed performance and actual user experience has become one of the defining tensions of the artificial intelligence era. It raises uncomfortable questions about how we measure machine intelligence, what incentives shape the development and promotion of AI systems, and whether the public has been sold a vision of technological capability that fundamentally misrepresents what these systems can and cannot do. Understanding this gap requires examining the architecture of how AI competence is assessed, the economics that drive development priorities, and the cognitive science of what these systems actually understand about the world they purport to perceive.

The Benchmark Mirage

To understand why AI systems that excel on standardised tests can fail so spectacularly in practice, one must first examine how performance is measured. The Stanford AI Index Report 2025 documented a striking phenomenon: many benchmarks that researchers use to evaluate AI capabilities have become “saturated,” meaning systems score so high that the tests are no longer useful for distinguishing between models. This saturation has occurred across domains including general knowledge, reasoning about images, mathematics, and coding. The Visual Question Answering Challenge, for instance, now sees top-performing models achieving 84.3% accuracy, while the human baseline sits at approximately 80%.

The problem runs deeper than simple test exhaustion. Research conducted by MIT's Computer Science and Artificial Intelligence Laboratory revealed that “traditionally, object recognition datasets have been skewed towards less-complex images, a practice that has led to an inflation in model performance metrics, not truly reflective of a model's robustness or its ability to tackle complex visual tasks.” The researchers developed a new metric called “minimum viewing time” which quantifies the difficulty of recognising an image based on how long a person needs to view it before making a correct identification. When researchers at MIT developed ObjectNet, a dataset comprising images collected from real-life settings rather than curated repositories, they discovered substantial performance gaps between laboratory conditions and authentic deployment scenarios.

This discrepancy reflects a phenomenon that economists have studied for decades: Goodhart's Law, which states that when a measure becomes a target, it ceases to be a good measure. A detailed 68-page analysis from researchers at Cohere, Stanford, MIT, and the Allen Institute for AI documented systematic distortions in how companies approach AI evaluation. The researchers found that major technology firms including Meta, OpenAI, Google, and Amazon were able to “privately pit many model versions in the Arena and then only publish the best results.” This practice creates a misleading picture of consistent high performance rather than the variable and context-dependent capabilities that characterise real AI systems.
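The inflation this "publish only the best run" practice produces is easy to simulate. The sketch below is illustrative only, with invented numbers (a fixed true skill of 70 and Gaussian measurement noise); it shows how a best-of-20 selection policy systematically reports scores above the model's actual ability:

```python
import random
import statistics

def evaluate(true_skill: float, noise: float = 3.0) -> float:
    """One noisy benchmark run: measured score = true skill + random noise."""
    return true_skill + random.gauss(0, noise)

def published_score(true_skill: float, n_private_runs: int) -> float:
    """Privately evaluate many variants, then publish only the best result."""
    return max(evaluate(true_skill) for _ in range(n_private_runs))

random.seed(0)
TRUE_SKILL = 70.0  # invented: the model's actual ability on this benchmark

honest = statistics.mean(evaluate(TRUE_SKILL) for _ in range(2000))
gamed = statistics.mean(published_score(TRUE_SKILL, 20) for _ in range(2000))

print(f"average honest single run:  {honest:.1f}")  # close to 70
print(f"average best-of-20 publish: {gamed:.1f}")   # several points higher
```

No individual run is dishonest here; the distortion comes entirely from the selection step, which is exactly what makes it hard to spot from the outside.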

The problem of data contamination compounds these issues. When testing GPT-4 on benchmark problems from Codeforces in 2023, researchers found the model could regularly solve problems classified as easy, provided they had been added before September 2021. For problems added later, GPT-4 could not solve a single question correctly. The implication is stark: the model had memorised questions and answers from its training data rather than developing genuine problem-solving capabilities. As one research team observed, the “AI industry has turned benchmarks into targets, and now those benchmarks are failing us.”
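The contamination audit described above amounts to splitting benchmark problems by their publication date relative to the model's training cutoff and comparing accuracy on each side. The records and cutoff in this sketch are hypothetical stand-ins, not the actual Codeforces data:

```python
from datetime import date

# Hypothetical evaluation records: (problem_id, date_added, solved_by_model).
# A real audit would use the benchmark's actual metadata.
results = [
    ("A1", date(2021, 3, 1), True),
    ("A2", date(2021, 8, 15), True),
    ("B1", date(2022, 1, 10), False),
    ("B2", date(2023, 5, 2), False),
]

CUTOFF = date(2021, 9, 1)  # assumed training-data cutoff

def accuracy(records):
    """Fraction of records the model solved; 0.0 for an empty split."""
    return sum(solved for _, _, solved in records) / len(records) if records else 0.0

before = [r for r in results if r[1] < CUTOFF]
after = [r for r in results if r[1] >= CUTOFF]

# A large gap between the splits is evidence of memorisation,
# not genuine problem-solving ability.
print(f"pre-cutoff accuracy:  {accuracy(before):.0%}")
print(f"post-cutoff accuracy: {accuracy(after):.0%}")
```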

The consequence of this gaming dynamic extends beyond misleading metrics. It shapes the entire trajectory of AI development, directing research effort toward whatever narrow capabilities will boost leaderboard positions rather than toward the robust, generalisable intelligence that practical applications require.

Counting Failures and Compositional Collapse

Perhaps nothing illustrates the gap between benchmark performance and real-world competence more clearly than the simple task of counting objects in an image. Research published in late 2024 introduced VLMCountBench, a benchmark testing vision-language models on counting tasks using only basic geometric shapes such as triangles and circles. The findings were revealing: while these sophisticated AI systems could count reliably when only one shape type was present, they exhibited substantial failures when multiple shape types were combined. This phenomenon, termed “compositional counting failure,” suggests that these systems lack the discrete object representations that make counting trivial for humans.
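The structure of a compositional counting test can be sketched without any actual images: represent a scene as a bag of shape labels, and accept a model's answer only if every per-type count is exactly right. The shape vocabulary and exact-match scoring rule here are illustrative assumptions, not VLMCountBench's actual format:

```python
import random
from collections import Counter

SHAPES = ["triangle", "circle", "square"]  # illustrative shape vocabulary

def make_scene(n_shapes: int, n_types: int) -> list:
    """A synthetic 'image': a bag of shape labels standing in for rendering."""
    types = random.sample(SHAPES, n_types)
    return [random.choice(types) for _ in range(n_shapes)]

def score(scene: list, predicted: dict) -> bool:
    """Exact-match scoring: every per-type count must be correct."""
    return predicted == dict(Counter(scene))

random.seed(1)
scene = make_scene(7, 2)   # seven shapes drawn from two types
truth = dict(Counter(scene))

print(score(scene, truth))  # True: a perfect count

wrong = dict(truth)
wrong[next(iter(wrong))] += 1  # off by one on a single type
print(score(scene, wrong))     # False: the whole answer is rejected
```

Varying `n_types` while holding `n_shapes` fixed is the knob that exposes the compositional failure: single-type scenes are counted reliably, mixed scenes are not.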

This limitation has significant implications for practical applications. A study using Bongard problems, visual puzzles that test pattern recognition and abstraction, found that humans achieved an 84% success rate on average, while the best-performing vision-language model, GPT-4o, managed only 17%. The researchers noted that “even elementary concepts that may seem trivial to humans, such as simple spirals, pose significant challenges” for these systems. They observed that “most models misinterpreted or failed to count correctly, suggesting challenges in AI's visual counting capabilities.”

Text-to-image generation systems demonstrate similar limitations. Research on the T2ICountBench benchmark revealed that “all state-of-the-art diffusion models fail to generate the correct number of objects, with accuracy dropping significantly as the number of objects increases.” When asked to generate an image of ten oranges, these systems frequently produce either substantially more or fewer items than requested. The failure is not occasional or marginal but systematic and predictable. As one research paper noted, “depicting a specific number of objects in the image with text conditioning often fails to capture the exact quantity of details.”

These counting failures point to a more fundamental issue in how current AI architectures process visual information. Unlike human cognition, which appears to involve discrete object representations and symbolic reasoning about quantities, large vision-language models operate on statistical patterns learned from training data. They can recognise that images containing many objects of a certain type tend to have particular visual characteristics, but they lack what researchers call robust “world models” that would allow them to track individual objects and their properties reliably.

The practical implications extend far beyond academic curiosity. Consider an AI system deployed to monitor inventory in a warehouse, assess damage after a natural disaster, or count cells in a medical sample. Systematic failures in numerical accuracy would render such applications unreliable at best and dangerous at worst.

The Architectural Divide

The question of whether these failures represent fundamental limitations of current AI architectures or merely training deficiencies remains actively debated. Gary Marcus, professor emeritus of psychology and neural science at New York University and author of the 2024 book “Taming Silicon Valley: How We Can Ensure That AI Works for Us,” has argued consistently that neural networks face inherent constraints in tasks requiring abstraction and symbolic reasoning.

Marcus has pointed to a problem he first demonstrated in 1998: neural networks trained on even numbers could generalise to some new even numbers, but when tested on odd numbers, they would systematically fail. He concluded that “these tools are good at interpolating functions, but not very good at extrapolating functions.” This distinction between interpolation within known patterns and extrapolation to genuinely novel situations lies at the heart of the benchmark-reality gap.
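A toy model makes the interpolation/extrapolation distinction concrete. This is not Marcus's 1998 neural-network setup, just a nearest-neighbour analogue: a pure pattern-matcher trained on the identity function over even numbers only can never emit an odd output, no matter how simple the underlying rule is:

```python
# A memorising pattern-matcher: it answers every query with the output
# of the nearest training example it has seen.
def nearest_neighbour_model(train_pairs):
    def predict(x):
        _, y = min(train_pairs, key=lambda pair: abs(pair[0] - x))
        return y
    return predict

# Train on the identity function, but only over even inputs.
train = [(x, x) for x in range(0, 20, 2)]
model = nearest_neighbour_model(train)

print(model(8))  # 8: inside the training distribution, correct
print(model(7))  # 6: an odd input is mapped onto a memorised even output
```

The rule "output equals input" is trivially simple, yet the model cannot extrapolate to odd numbers because it interpolates between memorised cases rather than representing the rule itself.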

Marcus characterises current large language models as systems that “work at the extensional level, but they don't work at the intensional level. They are not getting the abstract meaning of anything.” The chess-playing failures of models like ChatGPT, which Marcus has documented attempting illegal moves such as having a queen jump over a knight, illustrate how a system can “approximate the game of chess, but can't play it reliably because it never induces a proper world model of the board and the rules.” He has emphasised that these systems “still fail at abstraction, at reasoning, at keeping track of properties of individuals. I first wrote about hallucinations in 2001.”

Research on transformer architectures, the technical foundation underlying most modern AI systems, has identified specific limitations in spatial reasoning. A 2024 paper titled “On Limitations of the Transformer Architecture” identified “fundamental incompatibility with the Transformer architecture for certain problems, suggesting that some issues should not be expected to be solvable in practice indefinitely.” The researchers documented that “when prompts involve spatial information, transformer-based systems appear to have problems with composition.” Simple cases where temporal composition fails cause all state-of-the-art models to return incorrect answers.

The limitations extend to visual processing as well. Research has found that “ViT learns long-range dependencies via self-attention between image patches to understand global context, but the patch-based positional encoding mechanism may miss relevant local spatial information and usually cannot attain the performance of CNNs on small-scale datasets.” This architectural limitation has been highlighted particularly in radiology applications where critical findings are often minute and contained within small spatial locations.

Melanie Mitchell, professor at the Santa Fe Institute whose research focuses on conceptual abstraction and analogy-making in artificial intelligence, has offered a complementary perspective. Her recent work includes a 2025 paper titled “Do AI models perform human-like abstract reasoning across modalities?” which examines whether these systems engage in genuine reasoning or sophisticated pattern matching. Mitchell has argued that “there's a lot of evidence that LLMs aren't reasoning abstractly or robustly, and often over-rely on memorised patterns in their training data, leading to errors on 'out of distribution' problems.”

Mitchell identifies a crucial gap in current AI systems: the absence of “rich internal models of the world.” As she notes, “a tenet of modern cognitive science is that humans are not simply conditioned-reflex machines; instead, we have inside our heads abstracted models of the physical and social worlds that reflect the causes of events rather than merely correlations among them.” Current AI systems, despite their impressive performance on narrow benchmarks, appear to lack this causal understanding.

An alternative view holds that these limitations may be primarily a consequence of training data rather than architectural constraints. Some researchers hypothesise that “the limited spatial reasoning abilities of current VLMs is not due to a fundamental limitation of their architecture, but rather is a limitation in common datasets available at scale on which such models are trained.” This perspective suggests that co-training multimodal models on synthetic spatial data could potentially address current weaknesses. Additionally, researchers note that “VLMs' limited spatial reasoning capability may be due to the lack of 3D spatial knowledge in training data.”

When Failures Cause Harm

The gap between benchmark performance and real-world capability becomes consequential when AI systems are deployed in high-stakes domains. The case of autonomous vehicles provides particularly sobering examples. According to data compiled by researchers at Craft Law Firm, between 2021 and 2024, there were 3,979 incidents involving autonomous vehicles in the United States, resulting in 496 reported injuries and 83 fatalities. The Stanford AI Index Report 2025 noted that the AI Incidents Database recorded 233 incidents in 2024, a 56.4% increase compared to 2023, marking a record high.

In May 2025, Waymo recalled over 1,200 robotaxis following disclosure of a software flaw that made vehicles prone to colliding with certain stationary objects, specifically “thin or suspended barriers like chains, gates, and even utility poles.” These objects, which human drivers would navigate around without difficulty, apparently fell outside the patterns the perception system had learned to recognise. Investigation revealed failures in the system's ability to properly classify and respond to stationary objects under certain lighting and weather conditions. As of April 2024, Tesla's Autopilot system had been involved in at least 13 fatal crashes according to NHTSA data, with Tesla's Full Self-Driving system facing fresh regulatory scrutiny in January 2025.

The 2018 Uber fatal accident in Tempe, Arizona, illustrated similar limitations. The vehicle's sensors detected a pedestrian, but the AI system failed to classify her accurately as a human, leading to a fatal collision. The safety driver was distracted by a mobile device and did not intervene in time. As researchers have noted, “these incidents reveal a fundamental problem with current AI systems: they excel at pattern recognition in controlled environments but struggle with edge cases that human drivers handle instinctively.” The failure to accurately classify the pedestrian as a human being highlighted a critical weakness in object recognition capabilities, particularly in low-light conditions and complex environments.

A particularly disturbing incident involved General Motors' Cruise robotaxi in San Francisco, where the vehicle struck a pedestrian who had been thrown into its path by another vehicle, then dragged her twenty feet before stopping. The car's AI systems failed to recognise that a human being was trapped underneath the vehicle. When the system detected an “obstacle,” it continued to move, causing additional severe injuries.

These cases highlight how AI systems that perform admirably on standardised perception benchmarks can fail catastrophically when encountering situations not well-represented in their training data. The gap between laboratory performance and deployment reality is not merely academic; it translates directly into physical harm.

The Gorilla Problem That Never Went Away

One of the most persistent examples of AI visual recognition failure involves the 2015 incident in which Google Photos labelled photographs of Black individuals as “gorillas.” In that incident, a Black software developer tweeted that Google Photos had labelled images of him with a friend as “gorillas.” The incident exposed how image recognition algorithms trained on biased data can produce racist outputs. Google's response was revealing: rather than solving the underlying technical problem, the company blocked the words “gorilla,” “chimpanzee,” “monkey,” and related terms from the system entirely.

Nearly a decade later, that temporary fix remains in place. By censoring these searches, the service can no longer find primates such as “gorilla,” “chimp,” “chimpanzee,” or “monkey.” Despite enormous advances in AI technology since 2015, Google Photos still refuses to label images of gorillas. This represents a tacit acknowledgement that the fundamental problem has not been solved, only circumvented. The workaround creates a peculiar situation where one of the world's most advanced image recognition systems cannot identify one of the most recognisable animals on Earth. As one analysis noted, “Apple learned from Google's mistake and simply copied their fix.”

The underlying issue extends beyond a single company's product. Research has consistently documented that commercially available facial recognition technologies perform far worse on darker-skinned individuals, particularly women. Three commercially available systems made by Microsoft, IBM, and Megvii misidentified darker female faces nearly 35% of the time while achieving near-perfect accuracy (99%) on white men.

These biases have real consequences. Cases such as Ousmane Bah, a teenager wrongly accused of theft at an Apple Store because of faulty face recognition, and Amara K. Majeed, wrongly accused of participating in the 2019 Sri Lanka bombings after her face was misidentified, demonstrate how AI failures disproportionately harm marginalised communities. The technology industry's approach of deploying these systems despite known limitations and then addressing failures reactively raises serious questions about accountability and the distribution of risk.

The Marketing Reality Gap

The discrepancy between how AI capabilities are marketed and how they perform in practice reflects a broader tension in the technology industry. A global study led by Professor Nicole Gillespie at Melbourne Business School surveying over 48,000 people across 47 countries between November 2024 and January 2025 found that although 66% of respondents already use AI with some regularity, less than half (46%) are willing to trust it. Notably, this represents a decline in trust compared to surveys conducted prior to ChatGPT's release in 2022. People have become less trusting and more worried about AI as adoption has increased.

The study found that consumer distrust is growing significantly: 63% of consumers globally do not trust AI with their data, up from 44% in 2024. In the United Kingdom, the situation is even starker, with 76% of shoppers feeling uneasy about AI handling their information. Research from the Nuremberg Institute for Market Decisions showed that only 21% of respondents trust AI companies and their promises, and only 20% trust AI itself. These findings reveal “a notable gap between general awareness of AI in marketing and a deeper understanding or trust in its application.”

Emily Bender, professor of linguistics at the University of Washington and one of the authors of the influential 2021 “stochastic parrots” paper, has been a prominent voice challenging AI hype. Bender was recognised in TIME Magazine's first 100 Most Influential People in Artificial Intelligence and is the author of the upcoming book “The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.” She has argued that “so much of what we read about language technology and other things that get called AI makes the technology sound magical. It makes it sound like it can do these impossible things, and that makes it that much easier for someone to sell a system that is supposedly objective but really just reproduces systems of oppression.”

The practical implications of this marketing-reality gap are significant. A McKinsey global survey in early 2024 found that 65% of respondents said their organisations use generative AI in some capacity, nearly double the share from ten months prior. However, despite widespread experimentation, “comprehensive integration of generative AI into core business operations remains limited.” A 2024 Deloitte study noted that “organisational change only happens so fast” despite rapid AI advances, meaning many companies are deliberately testing in limited areas before scaling up.

The gap is particularly striking in mental health applications. Despite claims that AI is replacing therapists, only 21% of the 41% of adults who sought mental health support in the past six months turned to AI, representing only 9% of the total population. The disconnect between hype and actual behaviour underscores how marketing narratives can diverge sharply from lived reality.

Hallucinations and Multimodal Failures

The problem of AI systems generating plausible but incorrect outputs, commonly termed “hallucinations,” extends beyond text into visual domains. Research published in 2024 documented that multimodal large language models “often generate outputs that are inconsistent with the visual content, a challenge known as hallucination, which poses substantial obstacles to their practical deployment and raises concerns regarding their reliability in real-world applications.”

Object hallucination represents a particularly problematic failure mode, occurring when models identify objects that do not exist in an image. Researchers have developed increasingly sophisticated benchmarks to evaluate these failures. ChartHal, a benchmark featuring a taxonomy of hallucination scenarios in chart understanding, demonstrated that “state-of-the-art LVLMs suffer from severe hallucinations” when interpreting visual data.

The VHTest benchmark introduced in 2024 comprises 1,200 diverse visual hallucination instances across eight modes. Medical imaging presents particular risks: the MediHall Score benchmark was developed specifically to assess hallucinations in medical contexts through a hierarchical scoring system. When AI systems hallucinate in clinical settings, the consequences can be life-threatening.

Mitigation efforts have shown some promise. One recent framework operating entirely with frozen, pretrained vision-language models and requiring no gradient updates “reduces hallucination rates by 9.8 percentage points compared to the baseline, while improving object existence accuracy by 4.7 points on adversarial splits.” Research by Yu et al. (2023) explored human error detection to mitigate hallucinations, successfully reducing them by 44.6% while maintaining competitive performance.

However, Gary Marcus has argued that there is “no principled solution to hallucinations in systems that traffic only in the statistics of language without explicit representation of facts and explicit tools to reason over those facts.” This perspective suggests that hallucinations are not bugs to be fixed but fundamental characteristics of current architectural approaches. He advocates for neurosymbolic AI, which would combine neural networks with symbolic AI, making an analogy to Daniel Kahneman's System One and System Two thinking.

The ARC Challenge and the Limits of Pattern Matching

Francois Chollet, the creator of Keras, an open-source deep learning library adopted by over 2.5 million developers, introduced the Abstraction and Reasoning Corpus (ARC) in 2019 as a benchmark designed to measure fluid intelligence rather than narrow task performance. ARC consists of 800 puzzle-like tasks designed as grid-based visual reasoning problems. These tasks, trivial for humans but challenging for machines, typically provide only a small number of example input-output pairs, usually around three.

What makes ARC distinctive is its focus on measuring the ability to “generalise from limited examples, interpret symbolic meaning, and flexibly apply rules in varying contexts.” Unlike benchmarks that can be saturated through extensive training on similar problems, ARC tests precisely the kind of novel reasoning that current AI systems struggle to perform. The benchmark “requires the test taker to deduce underlying rules through abstraction, inference, and prior knowledge rather than brute-force or extensive training.”
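To make the task format concrete, here is a toy Python sketch of an ARC-style problem: a few input-output grid pairs from which a rule must be induced and then applied to a fresh input. The grids and the solver below are illustrative inventions, not actual ARC data, and the solver handles only one trivially narrow rule family (a consistent colour remapping), whereas real ARC tasks draw on an open-ended space of rules:

```python
def infer_and_apply(train_pairs, test_input):
    """Induce a uniform colour remapping from example pairs and apply it.

    This covers only the narrowest imaginable rule family; real ARC
    solvers must cope with rules they have never seen before.
    """
    mapping = {}
    for grid_in, grid_out in train_pairs:
        for row_in, row_out in zip(grid_in, grid_out):
            for a, b in zip(row_in, row_out):
                if mapping.setdefault(a, b) != b:
                    raise ValueError("not a simple colour remapping")
    return [[mapping[c] for c in row] for row in test_input]

# A made-up task with the typical ~3 demonstration pairs (2 shown here):
train = [
    ([[1, 0], [0, 1]], [[2, 0], [0, 2]]),  # implied rule: colour 1 -> 2
    ([[1, 1], [0, 0]], [[2, 2], [0, 0]]),
]
print(infer_and_apply(train, [[0, 1], [1, 0]]))  # [[0, 2], [2, 0]]
```

The point of the benchmark is precisely that no single narrow solver like this one generalises: each task may demand a different, previously unseen rule.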

From its introduction in 2019 until late 2024, ARC remained essentially unsolved by AI systems, maintaining its reputation as one of the toughest benchmarks available for general intelligence. The ARC Prize competition, co-founded by Mike Knoop and Francois Chollet, saw 1,430 teams submit 17,789 entries in 2024. The state-of-the-art score on the ARC private evaluation set increased from 33% to 55.5% during the competition period, propelled by techniques including deep learning-guided program synthesis and test-time training. More than $125,000 in prizes were awarded across top papers and top scores.

While this represents meaningful progress, it remains far below human performance and the 85% threshold set for the $500,000 grand prize. The persistent difficulty of ARC highlights a crucial distinction: current AI systems excel at tasks that can be solved through pattern recognition and interpolation within training distributions but struggle with the kind of abstract reasoning that humans perform effortlessly.

Trust Erosion and the Normalisation of Failure

Research on human-AI interaction has documented asymmetric trust dynamics: building trust in AI takes more time compared to building trust in humans, but when AI encounters problems, trust loss occurs more rapidly. Studies have found that simpler tasks show greater degradation of trust following errors, suggesting that failures on tasks perceived as easy may be particularly damaging to user confidence.

This pattern reflects what researchers term “perfect automation schema,” the tendency for users to expect flawless performance from AI systems and interpret any deviation as evidence of fundamental inadequacy rather than normal performance variation. The marketing of AI as approaching or exceeding human capabilities may inadvertently amplify this effect by setting unrealistic expectations.

Research comparing early and late errors found that initial errors affect trust development more negatively than late ones in some studies, while others found that trust dropped most for late mistakes. The explanation may be that early mistakes allow people to adjust expectations over time, whereas trust damaged at a later stage proves more difficult to repair. Research has found that “explanations that combine causal attribution (explaining why the error occurred) with boundary specification (identifying system limitations) prove most effective for competence-based trust repair.”

The normalisation of AI failures presents a concerning trajectory. If users come to expect that AI systems will periodically produce nonsensical or harmful outputs, they may either develop excessive caution that undermines legitimate use cases or, alternatively, become desensitised to failures in ways that increase risk. Neither outcome serves the goal of beneficial AI deployment.

Measuring Intelligence or Measuring Training

The fundamental question underlying these failures concerns what benchmarks actually measure. The dramatic improvement in AI performance on new benchmarks shortly after their introduction, documented by the Stanford AI Index, suggests that current systems are exceptionally effective at optimising for whatever metrics researchers define. In 2023, AI systems could solve just 4.4% of coding problems on SWE-bench. By 2024, this figure had jumped to 71.7%. Performance on MMMU and GPQA saw gains of 18.8 and 48.9 percentage points respectively.

This pattern of rapid benchmark saturation has led some researchers to question whether improvements reflect genuine capability gains or increasingly sophisticated ways of matching test distributions. The Stanford report noted that despite strong benchmark performance, “AI models excel at tasks like International Mathematical Olympiad problems but still struggle with complex reasoning benchmarks like PlanBench. They often fail to reliably solve logic tasks even when provably correct solutions exist.”

The narrowing performance gaps between models further complicate the picture. According to the AI Index, the Elo score difference between the top and tenth-ranked model on the Chatbot Arena Leaderboard was 11.9% in 2023. By early 2025, this gap had narrowed to just 5.4%. Similarly, the difference between the top two models shrank from 4.9% in 2023 to just 0.7% in 2024.

The implications for AI development are significant. If benchmarks are increasingly unreliable guides to real-world performance, the incentive structure for AI research may be misaligned with the goal of building genuinely capable systems. Companies optimising for benchmark rankings may invest disproportionately in test-taking capabilities at the expense of robustness and reliability in deployment.

Francois Chollet has framed this concern explicitly, arguing that ARC-style tasks test “the ability to generalise from limited examples, interpret symbolic meaning, and flexibly apply rules in varying contexts” rather than the ability to recognise patterns encountered during training. The distinction matters profoundly for understanding what current AI systems can and cannot do.

Reshaping Expectations and Rebuilding Trust

Addressing the gap between marketed performance and actual capability will require changes at multiple levels. Researchers have begun developing dynamic benchmarks that are regularly updated to prevent data contamination. LiveBench, for example, is updated with new questions monthly, many from recently published sources, ensuring that performance cannot simply reflect memorisation of training data. This approach represents “a close cousin of the private benchmark” that keeps benchmarks fresh without worrying about contamination.

Greater transparency about the conditions under which AI systems perform well or poorly would help users develop appropriate expectations. OpenAI's documentation acknowledges that their models struggle with “tasks requiring precise spatial localisation, such as identifying chess positions” and “may generate incorrect descriptions or captions in certain scenarios.” Such candour, while not universal in the industry, represents a step toward more honest communication about system limitations.

The AI Incidents Database, maintained by the Partnership on AI, and the AIAAIC Repository provide systematic tracking of AI failures. The AIAAIC documented that in 2024, while incidents declined to 187 from the previous year, issues surged to 188, the highest number recorded, for a combined total of 375 occurrences, ten times more than in 2016. Accuracy, reliability, and safety topped the list of incident categories. OpenAI, Tesla, Google, and Meta account for the highest number of AI-related incidents in the repository.

Academic researchers have proposed that evaluation frameworks should move beyond narrow task performance to assess broader capabilities including robustness to distribution shift, calibration of confidence, and graceful degradation when facing unfamiliar inputs. Melanie Mitchell has argued that “AI systems ace benchmarks yet stumble in the real world, and it's time to rethink how we probe intelligence in machines.”

Mitchell maintains that “just scaling up these same kinds of models will not solve these problems. Some new approach has to be created, as there are basic capabilities that current architectures and training methods aren't going to overcome.” She notes that current models “are not learning from their mistakes in any long-term sense. They can't carry learning from one session to another. They also have no 'episodic memory,' unlike humans who learn from experiences, mistakes, and successes.”

The gap between benchmark performance and real-world capability is not simply a technical problem awaiting a technical solution. It reflects deeper questions about how we define and measure intelligence, what incentives shape technology development, and how honest we are prepared to be about the limitations of systems we deploy in consequential domains. The answers to these questions will shape not only the trajectory of AI development but also the degree to which public trust in these technologies can be maintained or rebuilt.

For now, the most prudent stance may be one of calibrated scepticism: appreciating what AI systems can genuinely accomplish while remaining clear-eyed about what they cannot. The benchmark scores may be impressive, but the measure of a technology's value lies not in how it performs in controlled conditions but in how it serves us in the messy, unpredictable complexity of actual use.




Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from Roscoe's Story

In Summary:
* Another quiet Sunday winds down. Looking ahead to the upcoming week I see one phone call I'll need to make tomorrow morning, two other calls I'll probably need to make on Wednesday, and other than that ... smooth sailing. After a good night's sleep, which I anticipate having, Monday morning will find me ready and willing to take it on.

Prayers, etc.:
* I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Health Metrics:
* bw = 219.03 lbs.
* bp = 143/85 (67)

Exercise:
* morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet:
* 07:45 – crispy oatmeal cookies
* 09:00 – lasagna
* 10:20 – toast and butter
* 13:10 – egg rolls, spinach, eggplant, pancit, white rice
* 15:45 – ensaymada

Activities, Chores, etc.:
* 06:30 – bank accounts activity monitored
* 06:45 – read, pray, follow news reports from various sources, surf the socials, nap
* 08:00 – pray the Propers of the Day according to the 1962 Roman Missal for the Second Sunday after Epiphany, January 18th, 2026
* 08:30 – follow news reports from various sources, surf the socials
* 12:30 – watching NFL Gameday on NFL Network
* 14:00 – watching the Texans / Patriots game on my phone because that's the only way my NFL+ membership lets me follow the game
* 17:19 – game over, Patriots win, 28 to 16
* 18:00 – listening to relaxing music

Chess:
* 13:10 – moved in all pending CC games

 
Read more...

from Chemin tournant

Another place where one drifts east of everything, where one can banish without fear the I, the we, those ferocious tribunals, and throw one's whole body into the rhythms of this outside world one listens to.

I still hear her speaking, despite my desertion, talking without reproach, in a voice so clear that I still turn back toward her text. But I have written too much about her, she who is now nothing but the void within me, of me, the space of her speech left to silence.

Number of occurrences: 15

#VoyageauLexique

In this second Voyage au Lexique, I continue to explore, while taking care not to exploit them, the words of Ma vie au village (in Journal de la brousse endormie) whose number of occurrences is significant.

 
Read more... Discuss...

from Café histoire

In his book How I Take Photographs, Daido Moriyama presents some of his working methods. One of the first he describes consists of walking up and down a busy street, in both directions. As he puts it: “There is no better place to start than an ordinary shopping street – the kind you find in front of railway stations in any town or city in Japan.”

No ordinary shopping street this time, since it is Sunday, but the lakeside quay at Montreux, on the Territet side, which we walked in both directions for this photographic stroll inspired by Daido Moriyama. Without pretension.

First pass

The descent to the lakeshore.

The start of the quay, near the Montreux youth hostel

The port of Territet

Le Contre Temps, off-season and awaiting the summer

The fisherman

The call of the open water, or the fisherman's hoped-for joy

What would Montreux be without its palm trees and the promise of a pleasant stay?

On the way back, or the second pass

Piscator lacustrus. Labubu des Espaces

The following text accompanied this creation by the commune of Montreux: “Characters from the fantastical universe of the artist Kasing Lung. The expression of this plush figure is often described as impish, mischievous, or even slightly wild, which gives it a strong and endearing personality. These figurines come in various colours and shapes, each with its own name. These plant sculptures were imagined and created by the gardeners of the Commune of Montreux.”

“For a pure cause with a pure sword”

Though I have passed along this quay many times, this is the first time I have lingered over this monument and paid it any attention. The off-season feel of this Sunday stroll probably made the monument stand out more than usual. The text on the visible face of the obelisk reads:

TO THE GLORY OF FINLAND AND ITS HEROIC PEOPLE. IN MEMORY OF THE NOBLE KNIGHT BARON CARL GUSTAF MANNERHEIM, MARSHAL OF FINLAND, 1867–1951. CANDIDA PRO CAUSA ENSE CANDENDO

Searching the internet using the text of the inscription leads to a page of the Swiss army presenting the monument. There we learn that it was erected in 1955. Baron Carl Gustaf Mannerheim (1867–1951), Marshal of Finland, became the first commander-in-chief of the young Finnish army created when the country gained independence after the Russian Revolution of 1917. The Historical Dictionary of Switzerland tells us that during the “Winter War” (1939–1940) he organised his country's resistance against the Soviet forces and became a symbol of national independence. Under his influence, Finland drew closer to Germany from mid-1940 onwards and entered the war at its side against the Soviet Union (the “Continuation War”, 1941–1944). He then served as President of the Republic until 1946. From 1943 he came regularly for health stays in Lugano, Lausanne and Montreux (the Valmont sanatorium, where he wrote his memoirs). The Wikipedia article about him explains that the quotation Pro causa candida Ense candido (“For a pure cause with a pure sword”) on the Territet monument is the Mannerheim family motto; Wikipedia also notes that the same motto appears on his tomb in the Hietaniemi military cemetery in Helsinki (https://fr.wikipedia.org/wiki/Helsinki).

So much for the studious side of this weekend stroll. Back on the ground, we are almost at the end of the route.

One last glance at the quays.

Before starting the climb back up…

I hope you enjoyed this stroll and that it encourages you both to use your camera on your own wanderings and to undertake this kind of walk yourself.

Tags : #aucafé #Histoire #Roadbook #suisse🇨🇭 #montreux #photographie #twice #sonya6000 #sigma1850f28

 
Read more... Discuss...

from Nerd for Hire

In last week's post, I mentioned that my main current writing goal is to finish the draft of a novel that I've been thinking about for a couple of years now but have been struggling to get down on paper. Normally, I'm a pantser. I might have a rough idea of where I want a story to go when I start it (though I don't always), but I don't sit down and plan it out. My preferred approach is to discover the story as I write it, then refine the arc and give it a more intentional-feeling pacing and flow during edits. This has worked for me thus far for the majority of my projects. It works especially well for short stories, but I've also written a couple of novellas and four novels this way, so I have tangible proof that it can work for longer stories, too. 

That said, I have written some select projects in the past that I planned out before writing. Any time I write a choose-your-own-adventure style story, for instance, I at least have a big-picture plan for how the pieces are going to flow together from the start. And when I'm ghostwriting novels, those always start from an outline—it's the only way to wrangle the project and make sure the client and I are on the same page from the start.

Of course, just because I know how to outline doesn't mean I enjoy it. To me, pantsing feels more organic and allows for more natural points of surprise. When I write a character into a corner, I need to be creative to get them out of it, in exactly the same way the character needs to be creative to get out of whatever bind I've put them into. What I've been reminding myself of lately, though, is that outlining doesn't need to mean putting rigid controls on what you write. There's a middle ground where you can get the thought-organizing, momentum-driving, rewrite-reducing benefits of an outline while still letting your story breathe and surprise you. With that in mind, here is my top advice for pantsers who are outline-curious on how to make the technique work for you.

#1: Don't limit yourself to just plot movement. 

One issue with stories written from an outline is that they can feel formulaic or overly architected. Sometimes you read them and can see the author moving the pieces around, or the characters feel like they're being directed through a series of actions rather than having those choices seem like their own, ones that arise out of their motivations, beliefs, and identity rather than something imposed by the person creating the story. 

While I can't confirm exactly why this happens in every instance, I suspect the problem often starts with what the writer focuses on when creating their outline. If you only think about how the plot will move, you're missing a critical ingredient of a compelling story: the development of the characters, and how their emotions, relationships, and motivations influence the choices they make and actions they take. 

An outline does need to clarify the plot movement, but that's not the only thing that should be in it. At each stage of the outline, think about the key players involved, how their prior experiences and beliefs influence what actions they'd take, how those actions move them closer to (or away from) their ultimate goal, and what impact each plot point would have on their emotional state, their relationships with other characters, and the choices they'd make in the future. An authentic character that's well-integrated into the plot and setting shouldn't be static. They change in response to the experiences you write for them, and planning out that evolution is just as important to creating a fully realized story as plotting out the story's action. 

#2: Let yourself take tangents.

One of the exciting things about pantsing is that I sometimes end up discovering new ways for the story to play out as I write, things that were never even in my brain when I first sat down to work on it. But you don't need to sacrifice this when you start from an outline if you take the same exploratory approach to writing it. 

Instead of just seeing the outline as a straight line from A to B to C, let yourself linger at each step and think about the different ways your characters might approach the situation. If you find yourself at a plot crossroads where you could take multiple paths forward, use the outline process to “audition” those paths and see which one will serve the story the best. One might stand out as the best option once you've finished the outline and know where you want the story to go. In other cases, you can wait to decide which path you’ll take until you're in the writing stage, when you'll be able to better assess which one seems like the most logical decision for your characters at that point of their journey.

The same idea can apply to filling in details of the characters' and world's history. When you reach a point that this backstory feels necessary to understand the choices characters make, or to get a full sense for the cultural, political, economic, etc. landscape that they're operating in, give yourself permission to go on a sidebar. Outline those backstory details the same way you would forward plot movement. That doesn't mean you'll necessarily include all of that information at this specific point in the story—these may be worldbuilding details that you want to sprinkle in through descriptions, or character context that you'll establish through conversations and flashbacks as you're building their identity on the page. But by brainstorming those background details as they come up, you'll give yourself a roadmap for which aspects of the world history or characters' past are actually need-to-know for the reader. Once you know that, all you need to do is find the right time and place to bring readers in on that knowledge. 

#3: Use whatever format makes sense for your brain.

Most people hear the word “outline” and think about the specific structured document that we're all taught to write for high school English, the kind that involves various levels of numbering and indenting and bullet points. This is one way to approach outlining a creative work, but that's not the only option. If thinking about things that way automatically kills all of your creativity or gives you flashbacks to writing five-paragraph essays, you can still get the same value out of doing things in a different format.

I’ll give some examples of other options. One way you can approach it is by writing a script-style outline. This can be especially effective for character-driven stories where conversations are going to be key plot drivers. With this style of outline, you write out many of the dialogue passages, surrounded by scene direction style summaries of their actions and expressions, as well as the setting and any other background information that you plan to work into the narrative. This is a kind of middle ground between outlining and pantsing. On your next pass, you convert this into full prose by filling in the narration and descriptive details around the dialogue, using the scene directions you wrote as a guide. 

Another option is to use a notecard outlining system. The basic idea here is that each chunk of the story (scene, chapter, plot point, etc.) gets its own notecard, where you can also write down things like the characters involved, where it's happening, and other details you'll want to make sure to include. This is the approach I default to when I'm doing choose-your-own narratives, since it makes it easier to visualize how the different plot choices branch off from each other, but it can be just as useful for other types of stories. I would say this is an especially good approach for more complex novels that have multiple plot threads or large casts of characters, because it also allows you to easily isolate each of these threads and experiment with different approaches to weaving them together. 

There are other options too, I'm sure, or you could come up with your own if none of the other approaches that people have tried seem like they'd work. The big-picture takeaway here is that there's more than one way to outline, and you don't need to lock yourself into anybody else's system. 

Outlines are tools, not rules

This was the big thing I needed to get into my own head before I could start to take advantage of outlining as a part of my process. I've heard a similar thing from other pantsers—that the idea of writing an outline feels restrictive, like it's preventing your creativity from having full room to blossom. But here's the thing about an outline: literally nobody else is going to see it. It doesn’t matter if it follows the rules or adheres to someone else’s standard. It's just a way to plan and organize your story before you start writing it. If you feel too constrained with a chapter-by-chapter outline, for instance, then you don't need to use that format. Maybe instead you just give yourself some key plot points to shoot for, and wait until you're writing it to decide where the chapter breaks will go. 

For the current novel, I'm starting with a big-picture outline divided into three acts. I've sketched out the basic plot movement and which characters will be involved, as well as how their motivations or allegiances will change over the course of the book. I plan to gradually reveal certain aspects of the world and characters to the reader, so I've also marked the points where key bits of information are going to be dropped. But there are some places where I haven't yet planned out exactly how the characters are going to get from one plot moment to the next—I know where I want them to end up, but I'm going to let myself figure out exactly how they get there when I sit down to write the thing. This kind of half-outlining gives me the structure I need to construct a complex plot involving a large cast of characters, so I'm not just stumbling around in the metaphorical woods for 30,000 words (like I did on my first attempt to write this novel a couple of years back), but it still leaves me some room to play when it's time for writing. That's important, because the actual act of writing a novel can be obnoxiously long and tedious, and it's even more so for me when I'm following a detailed outline and know exactly what comes next. I know that leaving myself some places to explore and make creative decisions during the writing phase is going to be crucial to forcing myself through those points where the writing doesn't feel exciting.

That's the last tidbit of advice I'll end on. Writing a book takes a while. Experienced writers on the fast side of things can churn out a manuscript in 2-3 months, but for most people I'd say 6 months to a year is more realistic. In either case, though, you're going to be following this outline for a while, so it's smart to think about your writing process. Structure the outline in a way that will be easy for you to follow and matches up with how you prefer to write. If you like to work in chunks instead of writing through chronologically, for example, then doing a notecard system might be smart because it'll let you isolate or rearrange sections easily. The goal is to organize your thoughts and the story's structure, so whatever strategy will allow you to do that the best is the right tool for you, whether or not it matches with someone else's idea of what an outline should be.

See similar posts:

#WritingAdvice #NovelWriting

 
Read more...

from eutychus

image

Almost two thousand years ago, a young man sat in a window, listening to the great teachings of St Paul. As the great teacher would be leaving the next day, he taught late into the night, well past midnight. The young man fell asleep, falling three stories down to his death. St Paul, being the responsible teacher that he was, brought him back from the dead through the power of the Holy Spirit.

Afterward, they had some food, talked until dawn (probably having some laughs at St Paul's expense – I mean, how dull were those teachings??), St Paul left, and everyone went home. It is written in Acts 20 that the people were greatly comforted that the young man was brought back to life, as nothing kills the mood of a Bible study like death.

That young man's name was Eutychus. The name Eutychus means “lucky”.

I have to imagine that the second conversation had a somewhat different tone than the first. St Paul had seen and experienced many extraordinary things, and he would have been an amazing teacher, but still… when you have a student falling asleep in class, well, it just stands to reason that there’s room for improvement. Regardless, once someone in a group of people that you know personally has just died and come back to life, right in front of you, that’s certainly going to capture your attention!

Not only that – but it also provided added evidence that St Paul’s words were Truth. Lots of people can claim to talk about God, and the right path to Him, and His Divine Will. But when that same person performs a miraculous healing in front of you, it lends an air of credibility to your words that teaching alone doesn’t possess. I’m not saying we should blindly follow after any miracle worker who comes our way (in fact, Scripture specifically tells us not to do that!), but the miracle was one sign that St Paul’s words were from God. St Paul’s teachings were also consistent with Scripture, which is another essential sign that his words were from God. The healing provided additional support and proof that St Paul was teaching in a way that would open up the Word of God to the listeners, and not mislead them.

We live in misleading times. There are a lot of people with a lot of opinions, and a lot of them believe that those opinions are righteous. Many of those people even believe that those opinions are Godly – and I do not believe that they can all be right, because there is such a diversity of opinion that I find it difficult to believe that God would have all those differing viewpoints. I recognize that God is much greater than I will ever comprehend, but some of these viewpoints are so different that it may well be beyond the ability of the Divine Majesty to contain all of them.

Which puts me in the position of having to choose; to decide where I make a stand when it comes to what I believe about God. I am Catholic, so if I choose to follow the rules of the Church (which I do), then a lot of the decisions have been made for me. I believe in God the Father, God the Son, and God the Holy Spirit. I believe in the Truth of the Sacred Scripture and Sacred Tradition, as codified in the Bible and the Catechism. I believe in the efficacy of prayer, so I request prayer from both some of the living (the people I trust) and some of the dead (the Saints). I believe a lot of things.

I have believed different things at different points in my life, and I have always tried to live true to the beliefs that I had, when I had them. When I found those beliefs to be contrary to God, I did my best to let go of them; but for the most part, I found that my beliefs didn’t get replaced as much as they grew and changed.

My theological issues were very helpful to me in the beginning of my journey, of course, as they are for many people. But after the first few years passed, I found that I needed something a little more practical, a little more substantive; so, here I am. I’m trying to get to a place where I can reconcile a gentle, loving, all-powerful God with a world full of Sin. I need some way to take what I understand from the Scriptures and apply it to my life.

And there have been a few times when I felt like I almost understood something – like I was close to some sort of truth – and it just slipped away, like the memory of a dream in the morning. So I thought that if I started writing things down, perhaps at some point I’d catch one of those truths. Or at least, I’ll be able to remember what I was thinking when I thought these things.

Catholicism addresses many older issues through writings, and more contemporary issues through leadership. I live in the country of America, in the state of Minnesota. One very contemporary issue for us here is illegal immigration; in my opinion, that is not because we are constantly being accosted by illegal immigrants, but because we are constantly being told that illegal immigrants are such a problem for our society. A few months ago, the US Conference of Catholic Bishops came out with a statement expressing concern over the treatment of illegal immigrants in our country. Pope Leo XIV (the head of the Roman Catholic Church) supported it.

As I’m writing this, there are approximately 2,000 ICE and Homeland Security agents being deployed to Minneapolis (where I live), ostensibly to rid the state of illegal Somali immigrants. As far as I know, that would make this by far the largest effort of the current administration’s crackdown on a metropolitan area to deport illegal immigrants. Interestingly, the administration came out with a study four days ago (“Mass Deportations Are Improving Americans’ Quality of Life”) that doesn’t include Minneapolis (or anywhere in Minnesota) among the top 20 metro areas with the largest illegal migrant populations.

There was a large fraud case involving some Somali persons; as far as I know, they have all been identified and are going through our legal process. If our justice system is fair, they will all serve jail terms and pay restitution, as will the person who was the head of this fraud (who is not Somali). My understanding is that some of them are already in jail. Since we don’t have a large illegal immigrant population, the deployment of agents must be due to the fraud case.

But the actions of some do not justify the persecution of all. I’m not nearly as concerned about the fraud case, or even the deportations, as I am with the conduct of the persons who are coming here, and the way they are treating other people – whether they are citizens or not, legal or illegal. Whatever a person’s status may be, whether they broke the law by coming into this country illegally, or by shoplifting, or by fraud, or by murder – they should be treated with respect and decency.

Our own President was previously convicted of 34 felony counts of falsifying business records; I highly doubt that he would have wanted ICE agents and Homeland Security going after him and his friends over that. And the ICE agent who recently ended the life of a local citizen – I have to imagine that he would want to have a fair trial before his peers, at the very least; not to be thrown into a cage somewhere and forgotten.

But this isn’t the world I live in today. My world is split between people who seem to think that those whose sole crime is being here illegally should be treated with less respect than we give animals in shelters, and those who would like to go back to how we were handling illegal immigration prior to 2017. And my perspective is fairly simple – the Catholic Church teaches (Catechism paragraph 2241):

The more prosperous nations are obliged, to the extent they are able, to welcome the foreigner in search of the security and the means of livelihood which he cannot find in his country of origin. Public authorities should see to it that the natural right is respected that places a guest under the protection of those who receive him.

Political authorities, for the sake of the common good for which they are responsible, may make the exercise of the right to immigrate subject to various juridical conditions, especially with regard to the immigrants' duties toward their country of adoption. Immigrants are obliged to respect with gratitude the material and spiritual heritage of the country that receives them, to obey its laws and to assist in carrying civic burdens.

The first sentence reads, “The more prosperous nations are obliged, to the extent they are able, to welcome the foreigner…”. We are able. We should, and we must, welcome. Now, I realize that our immigration system needs a lot of improvement. A lot. And there are people who will come to this country to do bad things, I’m sure. But there are also people who are already in this country who do bad things. We have a process to deal with those people; we send them to jail (unless, as some manage to do, they find a way around our legal system). And I’m certainly not saying that we never need to deport anyone, ever.

But there is a certain type of teaching that I have heard, and it drones on and on and on, and it tells me about how important it is that America be just for Americans, and how much more prosperous we’ll all be once all of the illegal immigrants will be kicked out. And it tells me that once all of the right people are using the correct restrooms, and once all of the pronouns are being used correctly, and once all of the correct surgeries have been legislated away, then society will be a better place to live………

It’s enough to put a person to sleep.

Here’s my problem. The argument has no LIFE in it. I’m not talking about miraculous falling-out-of-a-window-and-being-brought-back-to-life life. I mean the kind of LIFE that came from Jesus walking the streets of Jerusalem. He was constantly arguing with Pharisees over the Law, remember? It wasn’t because they didn’t understand the Law – they certainly understood it.

It was because they chose to use the Law to restrict people to the extent that the people felt oppressed by God, rather than freed by God. The intent of the Law was to help us to understand that we are all creatures of sin and limitation, BUT CALLED TO A HIGHER PURPOSE. When we use the Scriptures to focus on our own limitations, and when we are humbling our hearts in front of God, then we are at the beginning of wisdom. When we use the Scriptures to focus on other people’s limitations, and when we are judging others using God’s righteousness, we are becoming Pharisees ourselves.

I don’t want to hate immigrants. I don’t want to hate the government. I don’t want to hate anyone. And I don’t want to make anyone’s life more difficult, though I know that there’s no way to avoid that. And I know that my opinion isn’t really worth anything except to me, God, and whomever might want to read about it. But I don’t believe that my battle should ever be against people – it should be against spiritual forces of wickedness. People themselves – all people – are sacred.

But there are many, many words being said and written. I have a few words that I would like to write, and so here I am, writing them down. I have fallen asleep, more than a few times. In some ways, you might even say that I’ve died, and been brought back to life; maybe I’ll write about that some time, too. It will, without doubt, be far more interesting than what I have written here. Because really, the only teaching I’m interested in is the kind that is going to bring life.

Call me Eutychus.

 

from Suranyami

This is my Docker Compose file for beszel (saved as beszel.yml):

services:
  beszel:
    image: henrygd/beszel:latest
    x-ports:
      - beszel.your-domain.com:8090/https
    volumes:
      - ./beszel_data:/beszel_data
      - ./beszel_socket:/beszel_socket

  • Deploy the beszel webapp with uc deploy -f beszel.yml
  • Sign up and log in
  • Go to settings/tokens and activate “Universal Token”
  • Under the ••• drop-down menu, select “Copy Docker Compose”. This will give you something like this:
services:
  beszel-agent:
    image: henrygd/beszel-agent
    container_name: beszel-agent
    restart: unless-stopped
    network_mode: host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./beszel_agent_data:/var/lib/beszel-agent
      # monitor other disks / partitions by mounting a folder in /extra-filesystems
      # - /mnt/disk/.beszel:/extra-filesystems/sda1:ro
    environment:
      LISTEN: 45876
      KEY: 'ssh-ed25519 xxxxxxxxxxxxxxxxxxxxxxxxx'
      TOKEN: xxxx-xxxxx-xxxxx-xxxxx
      HUB_URL: https://beszel.your-domain.com

Add these lines to the bottom of the beszel-agent service:

    deploy:
      mode: global

This will ensure that the agent is installed on all your machines.
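Putting the two snippets together, the merged beszel.yml ends up looking roughly like this (the domain, KEY, and TOKEN values are the placeholders from the snippets above; substitute your own):

```yaml
services:
  beszel:
    image: henrygd/beszel:latest
    x-ports:
      - beszel.your-domain.com:8090/https
    volumes:
      - ./beszel_data:/beszel_data
      - ./beszel_socket:/beszel_socket

  beszel-agent:
    image: henrygd/beszel-agent
    container_name: beszel-agent
    restart: unless-stopped
    network_mode: host
    volumes:
      # read-only Docker socket so the agent can report container stats
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./beszel_agent_data:/var/lib/beszel-agent
    environment:
      LISTEN: 45876
      KEY: 'ssh-ed25519 xxxxxxxxxxxxxxxxxxxxxxxxx'
      TOKEN: xxxx-xxxxx-xxxxx-xxxxx
      HUB_URL: https://beszel.your-domain.com
    # run one agent instance on every machine in the cluster
    deploy:
      mode: global
```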

I usually just paste the beszel-agent bit into the first docker-compose, then re-run:

uc deploy -f beszel.yml

This will give you some output like this:

[+] Deploying services 8/8
 ✔ Container beszel-agent-xmai on eon    Started         1.4s 
 ✔ Container beszel-agent-os6i on itx    Started         0.6s 
 ✔ Container beszel-agent-hkhd on node2  Started         0.6s 
 ✔ Container beszel-agent-w84p on node3  Started         1.4s 
 ✔ Container beszel-agent-qd42 on node4  Started         0.6s 
 ✔ Container beszel-agent-c79q on pico   Started         0.5s 
 ✔ Container beszel-agent-v7ff on rock4  Started         0.8s 
 ✔ Container beszel-agent-odec on rock5  Started         0.7s 

Then you might want to rename the nodes in the beszel web UI for easier machine identification. I still haven't worked out how to make that process automatic, but it's not a big deal.

 

from nayavia

Nayavia is an early-stage project exploring how students experience college learning environments. It begins from a simple observation: the same college can feel enabling to some students and quietly misaligned for others, even when preparation and ability appear similar. Rather than focusing on rankings, predictions, or outcomes, Nayavia is interested in understanding what learning environments actually feel like from the inside.

Where this work currently stands

At the moment, Nayavia exists as a research notebook. This work is focused on:

  • thinking carefully about how college environments shape day-to-day learning
  • listening to student experience without rushing to conclusions
  • questioning assumptions that are often taken for granted in college guidance

No data has been collected yet, and no analysis has been completed. The emphasis is on forming the right questions before attempting answers.

What this is not

Nayavia is not a ranking system. It is not a recommendation engine. It is not a promise of better outcomes or a guide to choosing the right college. There is no advice being offered here, and no decisions being optimized.

Research

The core work currently lives in an ongoing research notebook. The writing is primarily for internal clarity. External readers may follow along, but the purpose is to document how the thinking evolves over time, including uncertainty, revisions, and dead ends.

 
