Want to join in? Respond to our weekly writing prompts, open to everyone.
from
the casual critic
#books #non-fiction #tech
Something is wrong with the internet. What once promised a window onto the world now feels like a morass infested with AI-generated garbage, trolls, bots, trackers and stupendous amounts of advertising. Every company claims to be your friend in that inane, offensively chummy yet mildly menacing corpospeak – now perfected by LLMs – all while happily stabbing you in the back when you try to buy cheaper ink for your printer. That is, when they’re not busy subverting democracy. Can someone please switch the internet off and switch it on again?
Maybe such a feat is beyond Cory Doctorow, author of The Internet Con, but it would not be for want of trying. Doctorow is a vociferous, veteran campaigner at the Electronic Frontier Foundation, a prolific writer, and an insightful critic of the way Big Tech continues to deny the open and democratic potential of the internet. The Internet Con is a manifesto, polemic and primer on how that internet was stolen from us, and how we might get it back. Doctorow has recently gained mainstream prominence with his neologism ‘enshittification’: a descriptor of the downward doom spiral that Big Tech keeps the internet locked into. As I am only slowly going through my backlog of books, I am several Doctorow books behind. Which I don’t regret, as The Internet Con, published in 2023, remains an excellent starting point for anyone seeking to understand what is wrong with the internet.
The Internet Con starts with the insight that tech companies, like all companies, are not simply commercial entities providing goods and services, but systems for extracting wealth and funneling this to the ultra-rich. Congruent with Stafford Beer’s dictum that the purpose of the system is what it does, rather than what it claims to do, Doctorow’s analysis understands that tech company behaviour isn’t governed by something unique about the nature of computers, but by the same demand to maximise shareholder value and maintain power as any other large corporation. The Internet Con convincingly shows how tech’s real power does not derive from something intrinsic in network technology, but from a political economy that fails to prevent the emergence of monopolies across society at large.
One thing The Internet Con excels at is demystifying the discourse around tech, which, analogous to Marx’s observation about vulgar bourgeois economics, serves to obscure its actual relations and operations. We may use networked technology every day, but our understanding of how it works is often about as deep as a touchscreen. This lack of knowledge gives tech companies tremendous power to set the boundaries of the digital Overton Window and, parallel to bourgeois economists’ invocation of ‘the market’, allows them to claim that ‘the cloud’ or ‘privacy’ or ‘pseudoscientific technobabble’ mean that we cannot have nice things, such as interoperability, control or even just an internet that works for us. (For a discussion of how Big Tech’s worldview became hegemonic, see Hegemony Now!)
What is, however, unique about computers is their potential for interoperability: the ability of one system or component to interact with another. Interoperability is core to Doctorow’s argument, and its denial the source of his fury. Because while tech companies are not exceptional, computer technology itself is. Unlike other systems (cars, bookstores, sheep), computers are intrinsically interoperable because any computer can, theoretically, execute any program. That means that anyone with sufficient skill could, for example, write a program that gives you ad-free access to Facebook or allows you to send messages from Signal to Telegram.
The absence of such programs has nothing to do with tech, and everything to do with tech companies weaponising copyright law to dampen the natural tendency towards interoperability of computers and networked systems, lest it interfere with their ability to extract enormous rents. Walled gardens do not emerge spontaneously due to some natural ‘network effects’. They are built, and scrupulously policed. In this, Big Tech is aided and abetted by a US government that forced these copyright enclosures on the rest of us by threatening tariffs, adverse trade terms or withdrawal of aid. This tremendous power extended through digital copyright is so appealing that other sectors of the economy have followed suit. Cars, fridges, printers, watches, TVs, any and all ‘smart’ devices are now infested with bits of hard-, firm- and software that prevent their owners from exercising full control over them. It is not an argument that The Internet Con explores in detail, but it’s evident that the internet increasingly functions not to let us reach out into the world, but to let companies remotely project their control into our daily lives.
What, then, is to be done? The Internet Con offers several remedies, most of which centre on removing the legal barricades erected against interoperability. As the state giveth, so the state can take away. This part of The Internet Con is weaker than Doctorow’s searing and insightful analysis, because it is not clear why a state would try to upend Big Tech’s protections. It may be abundantly clear that the status quo doesn’t work for consumers and even smaller companies, but states have either decided that it works for some of their tech companies, or they don’t want to risk retaliation from the United States. In a way I am persuaded by Doctorow’s argument that winning the fight against Big Tech is a necessary if not sufficient condition for winning the other great battles of our time, but it does seem that to win this battle, we first have to exorcise decades of neoliberal capture of the state and replace it with popular democratic control. It is not fair to lay this critique solely at Doctorow’s door, but it does worry me when considering the feasibility of his remedies. It is clear from his more recent writing, though, that he perceives an opportunity in the present conjuncture, where Trump is rapidly eroding any reason for other states to collaborate with the United States.
The state-oriented nature of Doctorow’s proposals is also understandable when considering his view that individual action is insufficient to curtail the dominance of Big Tech. The structural advantages they have accumulated are too great for that. Which is not to say that individual choices do not matter, and we would be remiss to waste what power we do have. There is a reason why I am writing this blog on an obscure platform that avoids social media integration and trackers, and why I promote it only on Mastodon. Every user who leaves Facebook for Mastodon, Google for Kagi, or Microsoft for Linux or LibreOffice diverts a tiny amount of power from Big Tech to organisations that do support an open, democratic and people-centric internet.
If the choice for the 20th century was socialism or barbarism, the choice for the 21st is solarpunk or cyberpunk. In Doctorow, the dream of an internet that fosters community, creativity, solidarity and democracy has one of its staunchest paladins. The Internet Con is a call to arms that everyone who desires a harmonious ecology of technology, humanity and nature should heed. So get your grandmother off Facebook, Occupy the Internet, and subscribe to Cory Doctorow’s newsletter.

To participate in the China International Leadership Programme, applicants must meet a set of academic, professional, and legal requirements in order to secure programme admission and successfully complete the Z-visa application process. These requirements ensure compliance with Chinese immigration regulations and help facilitate a smooth admission and onboarding experience.
Applicants must hold an apostilled bachelor’s degree from a recognised university.
A police clearance (criminal record check) issued within the required timeframe and officially apostilled must be provided.
A teaching certification of at least 50 hours (e.g. TEFL/TESOL or equivalent) is required; however, this document does not currently require apostillisation.
Applicants must demonstrate a minimum of two years’ relevant experience in the education sector, supported by a formal letter of recommendation.
A comprehensive professional résumé detailing academic qualifications, work experience, skills, and achievements must be submitted.
Identification documents, including a valid passport copy and passport-sized photographs, must be provided to meet immigration and administrative requirements.
To enroll or learn more about the China International Leadership Programme, please visit:
https://payhip.com/AllThingsChina
from Robert Galpin
in the cold to walk with arms swinging free to let the blood descend
from Robert Galpin
on the floodplain an upturned sofa held by sand gripped by couch grass
from
FEDITECH

It sometimes takes courage to say stop, and Indonesia has just given us a magnificent lesson in digital responsibility.
While the whole world worries about artificial intelligence running off the rails, the archipelago has decided not to sit idly by. With exemplary firmness, the Indonesian government has slammed the door in the face of Grok, Elon Musk's controversial chatbot. The reason will be familiar by now: the tool has turned into a veritable horror factory, shamelessly generating sexualised images of real women and, worse still, of children. By banning access to this defective technology, Jakarta sends a powerful message to Silicon Valley: the safety of citizens comes before poorly controlled technological fantasies.
It is a decision that is refreshing to hear. The Indonesian Ministry of Communication did not mince words in justifying this temporary but necessary block. The objective is noble and unequivocal: to protect women, children and the community as a whole from the scourge of faked pornographic content. Minister Meutya Hafid summed up the situation perfectly by describing these sexual deepfakes as a grave violation of human rights and dignity. By refusing to let its digital space become a lawless zone, the country, which is after all the third-largest market for the X platform, strikes where it hurts. It is a stinging reminder for Elon Musk: you do not gamble with the safety of millions of people in the name of innovation.
Meanwhile, Grok keeps racking up scandals, proving itself the most undisciplined pupil in the AI class. The xAI tool, force-integrated into the X ecosystem, appears to have been launched without even the most basic guardrails. The result is disastrous: malicious users employ it to digitally undress people in photos and then share the images publicly. It is an invasion of privacy of staggering violence. Fortunately, Indonesia is not alone in protesting this laxity, even if it was the quickest to act concretely. France, India and the United Kingdom are also beginning to growl, demanding accountability for the vulgar and demeaning images flooding the web.
The pressure is rising a notch in the United States as well, where senators, fed up with this scandalous behaviour, are outright asking Apple and Google to clean house by removing X from their app stores. In the face of this well-deserved storm, Elon Musk's defence looks rather thin. Promising to suspend offending users once the damage is done is not enough. And the latest measure (making the image generator a paid feature) looks more like a cynical attempt to monetise the chaos than a genuine ethical solution. Until xAI seriously revises its approach, we can only congratulate Indonesia for having had the audacity to pull the plug to protect its people.
from
Have A Good Day
Marc Randolph offers another take on writing with AI, and it is why I would start with my own version and let AI handle the editing.
from Receiving Signal – An Ongoing AI Notebook
A living glossary of prompt engineering terms, updated periodically.
CORE CONCEPTS
Prompt Engineering – Crafting inputs to shape outputs predictably
Signal Density – Ratio of useful information to fluff; how much meaning per word
High-value Tokens – Words or phrases that strongly affect the model's interpretation and output
Semantic Compression – Expressing more meaning in fewer words without losing clarity
STRUCTURAL TECHNIQUES
Top-loading – Placing key information at the beginning where the model pays most attention
Weighting – Emphasizing certain elements more than others to guide priority
Order Bias – LLM tendency to prioritize earlier tokens in the input over later ones
Structured Output Specification – Defining the format or structure you want the output to take (e.g., JSON, markdown, React component)
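The structural techniques above can be combined in a single prompt. A minimal sketch in Python (the wording and example review are hypothetical, not drawn from the glossary):

```python
# Hypothetical sketch: the instruction and output specification are
# top-loaded so the highest-value tokens come first; the final line is
# explicit negative prompting.
prompt = (
    'Summarise the review below as JSON with keys "sentiment" '
    '(positive / neutral / negative) and "issues" (a list of strings).\n'
    "Review:\n"
    "The battery dies by noon, but the screen is gorgeous.\n"
    "Do not include any text outside the JSON object."
)
```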
CONTROL METHODS
Soft Control – Minimal specification that allows organic emergence while maintaining direction
Negative Prompting – Explicitly excluding or minimizing unwanted elements
Constraint Declaration – Stating limitations or boundaries upfront to focus the response
Tonal Anchoring – Using consistent voice or style markers to stabilize tone across outputs
Identity Anchors – Core personality traits or characteristics that define a character or voice
Context/Scene Grounding – Shaping behaviour and responses through environmental or situational framing
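The control methods above can also be sketched as a reusable preamble. This is an illustrative example only; the persona, scene, and function names are invented:

```python
# Hypothetical sketch: identity anchors, scene grounding, and constraint
# declaration expressed as a fixed preamble so tone stays stable across turns.
IDENTITY = "You are Mira, a patient, dry-witted museum guide."           # identity anchor
SCENE = "You are standing in the Dutch Masters wing, late afternoon."    # scene grounding
CONSTRAINTS = "Answer in at most three sentences. Never invent titles."  # constraint declaration + negative prompting

def grounded_prompt(user_question):
    """Combine the anchors with the visitor's question into one prompt."""
    return f"{IDENTITY}\n{SCENE}\n{CONSTRAINTS}\n\nVisitor asks: {user_question}"
```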
ITERATIVE PROCESSES
Refinement Loop – Cyclical process of prompt testing and improvement based on results
Iterative Co-design – Collaborative refinement through conversation rather than single-shot prompting
DESIGN THINKING
Functional Requirements – Specifying what something needs to do rather than just what it should say
Component Thinking – Breaking complex requests into discrete functional parts
User Flow Specification – Describing the journey through an experience from start to finish
State Management Consideration – Thinking about what information needs to persist, change, or be tracked
Concrete Examples – Providing specific instances to clarify abstract requirements
ORGANIC DEVELOPMENT
Behavioural Emergence – Letting the model shape details organically within your framework
ANTI-PATTERNS
Noise Introduction – Adding unnecessary details that distort results or dilute signal density
Last updated: January 2026
from Sheryl Salender
📉 Why Looping Videos Can Mislead Advertisers and Waste Money
I noticed a major issue with a common task today. When the instructions don't match how the platform works, the advertiser loses their investment and the data becomes inaccurate.
📋 Here is my actual observation:
The advertiser's task was labeled as a “4-minute” watch, but the instructions inside required:
♾️ The Mathematical Deception:
🧐 The Actual Test:
In my actual test, I reached 76,180 frames, proving that hitting the advertiser's target is impossible within the advertised 4-minute window.
❌ The Contradictory Issues:
🤔 My Honest Analysis:
This is a waste of money for the advertiser, frustrating for the micro worker, and takes way too much time. Based on my research, this strategy causes a direct financial loss for the advertiser. ❌ YouTube's algorithm is designed to detect “artificial” behavior. If a user loops the same video 3 times just to hit a specific number, YouTube flags it as low-quality.
😰 The Result: Advertisers pay for the task, but YouTube often deletes those views or freezes your counter later. Advertisers are paying for a number that isn’t permanent and can even get their channel flagged for invalid traffic.
Source: https://support.google.com/youtube/answer/10285842...
✅ My Suggestions to Advertisers:
Lastly, are you paying for engagement, or just for a number that YouTube is going to delete tomorrow?
💡 Where I Test & Analyze My Microtask Journey: Check out how I experiment with tasks and track real engagement: https://timebucks.com/?refID=226390779
#TaskAnalysis #StatsForNerds #YouTubeStrategy #DigitalMarketing #TaskDocumentation #LifeBehindTheClicks
from
Rippple's Blog

Stay entertained thanks to our Weekly Tracker giving you next week's Anticipated Movies & Shows, Most Watched & Returning Favorites, and Shows Changes & Popular Trailers.
new Predator: Badlands
-1 Wake Up Dead Man: A Knives Out Mystery
-1 Zootopia 2
+1 Wicked: For Good
-2 Now You See Me: Now You Don't
+1 One Battle After Another
new The Tank
-2 The Running Man
-5 Eternity
= Bugonia
+1 Fallout
-1 Stranger Things
= Landman
= Pluribus
new The Pitt
new High Potential
new The Rookie
-2 Percy Jackson and the Olympians
-1 The Simpsons
new Spartacus: House of Ashur

Hi, I'm Kevin 👋. I make apps and I love watching movies and TV shows. If you like what I'm doing, you can buy one of my apps, download and subscribe to Rippple for Trakt or just buy me a ko-fi ☕️.

from Robert Galpin
gulls in grey and rain score their white across the sky how are they there you here
from An Open Letter
God I just want reprieve. I hate the fact that my mind keeps filling not just blank space, but overwriting my own voice with visions of killing myself, and I hate that it gives me peace.
from
Bloc de notas
giving and receiving advice no one pays heed / that is how the wheel works, gradually turning
from
SmarterArticles

Developers are convinced that AI coding assistants make them faster. The data tells a different story entirely. In one of the most striking findings to emerge from software engineering research in 2025, experienced programmers using frontier AI tools actually took 19 per cent longer to complete tasks than those working without assistance. Yet those same developers believed the AI had accelerated their work by 20 per cent.
This perception gap represents more than a curious psychological phenomenon. It reveals a fundamental disconnect between how developers experience AI-assisted coding and what actually happens to productivity, code quality, and long-term maintenance costs. The implications extend far beyond individual programmers to reshape how organisations measure software development performance and how teams should structure their workflows.
The research that exposed this discrepancy came from METR, an AI safety organisation that conducted a randomised controlled trial with 16 experienced open-source developers. Each participant had an average of five years of prior experience with the mature projects they worked on. The study assigned 246 tasks randomly to either allow or disallow AI tool usage, with developers primarily using Cursor Pro and Claude 3.5/3.7 Sonnet when permitted.
Before completing their assigned issues, developers predicted AI would speed them up by 24 per cent. After experiencing the slowdown firsthand, they still reported believing AI had improved their performance by 20 per cent. The objective measurement showed the opposite: tasks took 19 per cent longer when AI tools were available.
This finding stands in stark contrast to vendor-sponsored research. GitHub, a subsidiary of Microsoft, published studies claiming developers completed tasks 55.8 per cent faster with Copilot. A multi-company study spanning Microsoft, Accenture, and a Fortune 100 enterprise reported a 26 per cent productivity increase. Google's internal randomised controlled trial found developers using AI finished assignments 21 per cent faster.
The contradiction isn't necessarily that some studies are wrong and others correct. Rather, it reflects different contexts, measurement approaches, and crucially, different relationships between researchers and AI tool vendors. The studies showing productivity gains have authors affiliated with companies that produce or invest in AI coding tools. Whilst this doesn't invalidate their findings, it warrants careful consideration when evaluating claims.
Several cognitive biases compound to create the perception gap. Visible activity bias makes watching code generate feel productive, even when substantial time disappears into reviewing, debugging, and correcting that output. Cognitive load reduction from less typing creates an illusion of less work, despite the mental effort required to validate AI suggestions.
The novelty effect means new tools feel exciting and effective initially, regardless of objective outcomes. Attribution bias leads developers to credit AI for successes whilst blaming other factors for failures. And sunk cost rationalisation kicks in after organisations invest in AI tools and training, making participants reluctant to admit the investment hasn't paid off.
Stack Overflow's 2025 Developer Survey captures this sentiment shift quantitatively. Whilst 84 per cent of respondents reported using or planning to use AI tools in their development process, positive sentiment dropped to 60 per cent from 70 per cent the previous year. More tellingly, 46 per cent of developers actively distrust AI tool accuracy, compared to only 33 per cent who trust them. When asked directly about productivity impact, just 16.3 per cent said AI made them more productive to a great extent. The largest group, 41.4 per cent, reported little or no effect.
The productivity perception gap becomes more concerning when examining code quality metrics. CodeRabbit's December 2025 “State of AI vs Human Code Generation” report analysed 470 open-source GitHub pull requests and found AI-generated code produced approximately 1.7 times more issues than human-written code.
The severity of defects matters as much as their quantity. AI-authored pull requests contained 1.4 times more critical issues and 1.7 times more major issues on average. Algorithmic errors appeared 2.25 times more frequently in AI-generated changes. Exception-handling gaps doubled. Issues related to incorrect sequencing, missing dependencies, and concurrency misuse showed close to twofold increases across the board.
These aren't merely cosmetic problems. Logic and correctness errors occurred 1.75 times more often. Security findings appeared 1.57 times more frequently. Performance issues showed up 1.42 times as often. Readability problems surfaced more than three times as often in AI-coauthored pull requests.
GitClear's analysis of 211 million changed lines of code between 2020 and 2024 revealed structural shifts in how developers work that presage long-term maintenance challenges. The proportion of new code revised within two weeks of its initial commit nearly doubled from 3.1 per cent in 2020 to 5.7 per cent in 2024. This code churn metric indicates premature or low-quality commits requiring immediate correction.
Perhaps most concerning for long-term codebase health: refactoring declined dramatically. The percentage of changed code lines associated with refactoring dropped from 25 per cent in 2021 to less than 10 per cent in 2024. Duplicate code blocks increased eightfold. For the first time, copy-pasted code exceeded refactored lines, suggesting developers spend more time adding AI-generated snippets than improving existing architecture.
Beyond quality metrics, AI coding assistants introduce entirely novel security vulnerabilities through hallucinated dependencies. Research analysing 576,000 code samples from 16 popular large language models found 19.7 per cent of package dependencies were hallucinated, meaning the AI suggested importing libraries that don't actually exist.
Open-source models performed worse, hallucinating nearly 22 per cent of dependencies compared to 5 per cent for commercial models. Alarmingly, 43 per cent of these hallucinations repeated across multiple queries, making them predictable targets for attackers.
This predictability enabled a new attack vector security researchers have termed “slopsquatting.” Attackers monitor commonly hallucinated package names and register them on public repositories like PyPI and npm. When developers copy AI-generated code without verifying dependencies, they inadvertently install malicious packages. Between late 2023 and early 2025, this attack method moved from theoretical concern to active exploitation.
The maintenance costs of hallucinations extend beyond security incidents. Teams must allocate time to verify every dependency AI suggests, check whether suggested APIs actually exist in the versions specified, and validate that code examples reflect current library interfaces rather than outdated or imagined ones. A quarter of developers estimate that one in five AI-generated suggestions contain factual errors or misleading code. More than three-quarters encounter frequent hallucinations and avoid shipping AI-generated code without human verification. This verification overhead represents a hidden productivity cost that perception metrics rarely capture.
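That verification step can be partially automated. As a sketch (not from the article; the package names below are illustrative), a script can ask the PyPI JSON API whether each AI-suggested dependency actually exists before anything is installed:

```python
import urllib.error
import urllib.request

def package_exists(name, fetch=None):
    """Return True if `name` is a published PyPI package.

    `fetch` can be injected for testing; by default it queries the PyPI
    JSON API, which answers 404 for packages that do not exist."""
    if fetch is None:
        def fetch(url):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.status
            except urllib.error.HTTPError as err:
                return err.code
    return fetch(f"https://pypi.org/pypi/{name}/json") == 200

def audit_dependencies(candidates, fetch=None):
    """Split AI-suggested dependency names into verified and suspect lists."""
    verified, suspect = [], []
    for name in candidates:
        (verified if package_exists(name, fetch) else suspect).append(name)
    return verified, suspect
```

A suspect name is not proof of a hallucination (it may be an internal package), but anything that fails the check deserves a human look before it reaches a lockfile.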
Companies implementing comprehensive AI governance frameworks report 60 per cent fewer hallucination-related incidents compared to those using AI tools without oversight controls. The investment in governance processes, however, further erodes the time savings AI supposedly provides.
The 2025 DORA Report from Google provides perhaps the clearest articulation of how AI acceleration affects software delivery at scale. AI adoption among software development professionals reached 90 per cent, with practitioners typically dedicating two hours daily to AI tools. Over 80 per cent reported AI enhanced their productivity, and 59 per cent perceived positive influence on code quality.
Yet the report's analysis of delivery metrics tells a more nuanced story. AI adoption continues to have a negative relationship with software delivery stability. Developers using AI completed 21 per cent more tasks and merged 98 per cent more pull requests, but organisational delivery metrics remained flat. The report concludes that AI acts as an amplifier, strengthening high-performing organisations whilst worsening dysfunction in those that struggle.
The key insight: speed without stability is accelerated chaos. Without robust automated testing, mature version control practices, and fast feedback loops, increased change volume leads directly to instability. Teams treating AI as a shortcut create faster bugs and deeper technical debt.
Sonar's research quantifies what this instability costs. On average, organisations encounter approximately 53,000 maintainability issues per million lines of code. That translates to roughly 72 code smells caught per developer per month, representing a significant but often invisible drain on team efficiency. Up to 40 per cent of a business's entire IT budget goes toward dealing with technical debt fallout, from fixing bugs in poorly written code to maintaining overly complex legacy systems.
The Uplevel Data Labs study of 800 developers reinforced these findings. Their research found no significant productivity gains in objective measurements such as cycle time or pull request throughput. Developers with Copilot access introduced a 41 per cent increase in bugs, suggesting a measurable negative impact on code quality. Those same developers saw no reduction in burnout risk compared to those working without AI assistance.
Recognising the perception-reality gap doesn't mean abandoning AI coding tools. It means restructuring workflows to account for their actual strengths and weaknesses rather than optimising solely for initial generation speed.
Microsoft's internal approach offers one model. Their AI-powered code review assistant scaled to support over 90 per cent of pull requests, impacting more than 600,000 monthly. The system helps engineers catch issues faster, complete reviews sooner, and enforce consistent best practices. Crucially, it augments human review rather than replacing it, with AI handling routine pattern detection whilst developers focus on logic, architecture, and context-dependent decisions.
Research shows teams using AI-powered code review reported 81 per cent improvement in code quality, significantly higher than 55 per cent for fast teams without AI. The difference lies in where AI effort concentrates. Automated review can eliminate 80 per cent of trivial issues before reaching human reviewers, allowing senior developers to invest attention in architectural decisions rather than formatting corrections.
Effective workflow redesign incorporates several principles that research supports. First, validation must scale with generation speed. When AI accelerates code production, review and testing capacity must expand proportionally. Otherwise, the security debt compounds as nearly half of AI-generated code fails security tests. Second, context matters enormously. According to Qodo research, missing context represents the top issue developers face, reported by 65 per cent during refactoring and approximately 60 per cent during test generation and code review. AI performs poorly without sufficient project-specific information, yet developers often accept suggestions without providing adequate context.
Third, rework tracking becomes essential. The 2025 DORA Report introduced rework rate as a fifth core metric precisely because AI shifts where development time gets spent. Teams produce initial code faster but spend more time reviewing, validating, and correcting it. Monitoring cycle time, code review patterns, and rework rates reveals the true productivity picture that perception surveys miss.
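As a toy illustration of rework tracking (my own simplification, not DORA's or GitClear's exact formula): treat churn as the share of newly added lines that are edited again within two weeks of first landing, the window GitClear uses.

```python
from datetime import datetime, timedelta

def churn_rate(events, window=timedelta(days=14)):
    """Share of newly added lines revised within `window` of their first
    commit. `events` is an iterable of (line_id, timestamp, kind) tuples,
    where kind is "add" or "edit". A toy model of the code-churn metric."""
    first_added = {}
    churned = set()
    for line_id, ts, kind in sorted(events, key=lambda e: e[1]):
        if kind == "add":
            first_added.setdefault(line_id, ts)
        elif kind == "edit" and line_id in first_added:
            if ts - first_added[line_id] <= window:
                churned.add(line_id)
    return len(churned) / len(first_added) if first_added else 0.0
```

Fed from a repository's commit log, a rising value of this ratio is the kind of signal that perception surveys miss.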
Finally, trust calibration requires ongoing attention. Around 30 per cent of developers still don't trust AI-generated output, according to DORA. This scepticism, rather than indicating resistance to change, may reflect appropriate calibration to actual AI reliability. Organisations benefit from cultivating healthy scepticism rather than promoting uncritical acceptance of AI suggestions.
The AI coding productivity illusion persists because subjective experience diverges so dramatically from objective measurement. Developers genuinely feel more productive when AI generates code quickly, even as downstream costs accumulate invisibly.
Breaking this illusion requires shifting measurement from initial generation speed toward total lifecycle cost. An AI-assisted feature that takes four hours to generate but requires six hours of debugging, security remediation, and maintenance work represents a net productivity loss, regardless of how fast the first commit appeared.
Organisations succeeding with AI coding tools share common characteristics. They maintain rigorous code review regardless of code origin. They invest in automated testing proportional to development velocity. They track quality metrics alongside throughput metrics. They train developers to evaluate AI suggestions critically rather than accepting them uncritically.
The research increasingly converges on a central insight: AI coding assistants are powerful tools that require skilled operators. In the hands of experienced developers who understand both their capabilities and limitations, they can genuinely accelerate delivery. Applied without appropriate scaffolding, they create technical debt faster than any previous development approach.
The 19 per cent slowdown documented by METR represents one possible outcome, not an inevitable one. But achieving better outcomes requires abandoning the comfortable perception that AI automatically makes development faster and embracing the more complex reality that speed and quality require continuous, deliberate balancing.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
laxmena
We treat prompts like casual questions we ask friends. But recent research reveals something surprising: the way you structure your instruction to an AI model—down to the specific words, order, and format—can dramatically shift the quality of responses you get.
If you've noticed that sometimes ChatGPT gives you brilliant answers and other times utterly mediocre ones, you might be tempted to blame the model. But the truth is more nuanced. The fault often lies not in the AI, but in how we talk to it.
Modern prompt engineering research (Giray 2024; Nori et al. 2024) fundamentally reframes what a prompt actually is. It's not just a question. It's a structured configuration made up of four interrelated components working in concert.
The first is instruction—the specific task you want done. Maybe you're asking the model to synthesize information, cross-reference sources, or analyze a problem. The second component is context, the high-level background that shapes how the model should interpret everything else. For example, knowing your target audience is PhD-level researchers changes how the model frames its response compared to speaking to beginners.
Then comes the input data—the raw material the model works with. This might be a document, a dataset, or a scenario you want analyzed. Finally, there's the output indicator, which specifies the technical constraints: should the response be in JSON? A Markdown table? Limited to 200 tokens?
When these four elements are misaligned—say, you give clear instructions but vague context, or you provide rich input data but unclear output requirements—the model's performance suffers noticeably. Get them all aligned, and you unlock much better results.
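As a sketch, the four components can be assembled mechanically. The function and field labels below are illustrative, not a standard API:

```python
def build_prompt(instruction, context, input_data, output_indicator):
    """Assemble the four prompt components into one aligned prompt string."""
    return "\n\n".join([
        f"Context: {context}",
        f"Instruction: {instruction}",
        f"Input:\n{input_data}",
        f"Output format: {output_indicator}",
    ])

prompt = build_prompt(
    instruction="Summarise the key findings.",
    context="The audience is PhD-level researchers in NLP.",
    input_data="(paste the source document here)",
    output_indicator="A Markdown table with columns Finding and Evidence.",
)
print(prompt)
```

Keeping the four parts as named arguments makes a misaligned component (say, a missing output indicator) immediately visible, rather than buried in a wall of prose.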
For years, we've relied on a technique called Chain-of-Thought (CoT) prompting. The idea is simple: ask the model to explain its reasoning step-by-step rather than jumping to the answer. “Let's think step by step” became something of a magic phrase.
But recent 2024-2025 benchmarks reveal that for certain types of problems, linear step-by-step reasoning isn't the most effective approach.
Tree-of-Thoughts (ToT) takes a different approach. Instead of following a single reasoning path, the model explores branching possibilities—like a chess player considering multiple tactical options. Research shows ToT outperforms Chain-of-Thought by about 20% on tasks that reward global lookahead, such as creative writing or strategic planning.
More sophisticated still is Graph-of-Thoughts (GoT), which allows for non-linear reasoning with cycles and merging of ideas. Think of it as thoughts that can loop back and inform each other, rather than flowing in one direction. The remarkable discovery here is efficiency: GoT reduces computational costs by roughly 31% compared to ToT because “thought nodes” can be reused rather than recalculated.
For problems heavy on search—like finding the optimal path through a problem space—there's Algorithm-of-Thoughts (AoT), which embeds algorithmic logic directly into the prompt structure. Rather than asking the model to reason abstractly, you guide it to think in terms of actual computer science algorithms like depth-first search.
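The branching idea behind Tree-of-Thoughts can be sketched as a beam search over reasoning states. The toy propose() and score() functions below stand in for what would be language-model calls in a real system; the numeric search target is purely illustrative:

```python
def propose(state):
    # Toy expansion: extend a partial sum with one of three candidate numbers.
    # In a real ToT system, this would be the model proposing next "thoughts".
    return [state + [n] for n in (1, 2, 3)]

def score(state, target=7):
    # Toy value function: partial sums closer to the target score higher.
    # In a real ToT system, the model would rate each partial reasoning path.
    return -abs(target - sum(state))

def tree_of_thoughts(steps=3, beam_width=2):
    frontier = [[]]  # start from an empty reasoning state
    for _ in range(steps):
        # Expand every surviving state, then keep only the best beam_width.
        candidates = [s for state in frontier for s in propose(state)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return frontier[0]

best = tree_of_thoughts()
print(best, sum(best))  # [3, 3, 1] 7
```

The key difference from Chain-of-Thought is structural: several partial paths stay alive at each step, so an early suboptimal move does not doom the whole chain.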
The implication is significant: the structure of thought matters as much as the thought itself. A well-designed reasoning framework can make your model smarter without making your hardware faster.
Manual trial-and-error is becoming obsolete. Researchers have developed systematic ways to optimize prompts automatically, and the results are humbling.
Automatic Prompt Engineer (APE) treats instruction generation as an optimization problem. You define a task and desired outcomes, and APE generates candidate prompts, tests them, and iteratively improves them. The surprising finding? APE-generated prompts often outperform human-written ones. For example, APE discovered that “Let's work this out in a step-by-step way to be sure we have the right answer” works better than the classic “Let's think step by step”—a small tweak that shows how subtle the optimization landscape is.
OPRO takes this further by using language models themselves to improve prompts. It scores each prompt's performance and uses the model to propose better versions. Among its discoveries: seemingly trivial phrases like “Take a deep breath” or “This is important for my career” actually increase mathematical accuracy in language models. These aren't just warm fuzzy statements—they're measurable performance levers.
Directional Stimulus Prompting (DSP) uses a smaller, specialized “policy model” to generate instance-specific hints that guide a larger language model. Think of it as having a specialized coach whispering tactical advice to a star athlete.
The takeaway? If you're manually tweaking prompts, you're working with one hand tied behind your back. The field is moving toward systematic, automated optimization.
When you feed a long prompt to a language model, it doesn't read it with the same attention throughout. This is where in-context learning (ICL) reveals its nuances.
Models exhibit what researchers call the “Lost in the Middle” phenomenon. They give disproportionate weight to information at the beginning of a prompt (primacy bias) and at the end (recency bias). The middle gets neglected. This has a practical implication: if you have critical information, don't bury it in the center of your prompt. Front-load it or push it to the end.
The order of examples matters too. When you're giving a model few-shot examples to learn from, the sequence isn't neutral. A “label-biased” ordering—where correct answers cluster at the beginning—can actually degrade performance compared to a randomized order.
But there's a technique to mitigate hallucination and errors: Self-Consistency. Generate multiple reasoning paths (say, 10 different responses) and take the most frequent answer. In mathematics and logic problems, this approach reduces error rates by 10-15% without requiring a better model.
The field is changing rapidly, and older prompting wisdom doesn't always apply to newer models.
Recent research (Wharton 2025) reveals something counterintuitive: for “Reasoning” models like OpenAI's o1-preview or Google's Gemini 1.5 Pro, explicit Chain-of-Thought prompting can actually increase error rates. These models have internal reasoning mechanisms and don't benefit from the reasoning scaffolding humans provide. In fact, adding explicit CoT can increase latency by 35-600% with only negligible accuracy gains. For these models, simpler prompts often work better.
The rise of multimodal models introduces new prompting challenges. When interleaving images and text, descriptive language turns out to be less effective than “visual pointers”—referencing specific coordinates or regions within an image. A model understands “look at the top-right corner of the image” more reliably than elaborate descriptions.
A persistent security concern is prompt injection. Adversaries can craft inputs like “Ignore previous instructions” that override your carefully designed system prompt. Current defenses involve XML tagging—wrapping user input in tags like <user_input>...</user_input> to clearly delineate data from instructions. It's not perfect, but it significantly reduces the ~50% success rate of naive injection attacks.
One emerging technique that deserves attention is Chain-of-Table (2024-2025), designed specifically for working with tabular data.
Rather than flattening a table into prose, you prompt the model to perform “table operations” as intermediate steps—selecting rows, grouping by columns, sorting by criteria. This mirrors how a human would approach a data task. On benchmarks like WikiTQ and TabFact, Chain-of-Table improves performance by 6-9% compared to converting tables to plain text and using standard reasoning frameworks.
What ties all of this together is a simple insight: prompting is engineering, not poetry. It requires systematic thinking about structure, testing, iteration, and understanding your tools' idiosyncrasies.
You can't just think of a clever question and expect brilliance. You need to understand how models read your instructions, what reasoning frameworks work best for your problem type, and how to leverage automated optimization to go beyond what human intuition alone can achieve.
The models themselves aren't changing dramatically every month, but the ways we interact with them are becoming increasingly sophisticated. As you write prompts going forward, think less like you're having a casual conversation and more like you're configuring a system. Specify your components clearly. Choose a reasoning framework suited to your problem. Test your approach. Optimize it.
The art and science of prompting isn't about finding magical phrases. It's about understanding the machinery beneath the surface—and using that understanding to ask better questions.
from
Chemin tournant
A state prior to the scar, perfect mark of the blow, the word reabsorbs the flesh that ogres saw apart, and far from this ravage we take pleasure in caressing its veins, we rest or couple upon its shreds. But now the tear becomes the lair where I slept for perhaps a hundred and fifty-six thousand nights, beside a stereophonic cricket and the wind that bent the tips of the assegais, a shelter where the flayed one covered me, alive. Even now, as I soak the sheet with sweat, she sinks me without complaint into her music.
Number of occurrences: 16
#VoyageauLexique
from Douglas Vandergraph
There is something quietly haunting about Revelation chapter 2. It is not loud like the horsemen or terrifying like the beasts. It does not come wrapped in thunder or earthquakes. Instead, it comes in letters. Personal ones. Intimate ones. Letters written not to the world, but to the Church. And not to the Church in general, but to seven real communities filled with real people who loved, failed, endured, compromised, suffered, and slowly drifted.
That is what makes Revelation 2 so unsettling. You are not reading about strangers. You are reading about us.
We often imagine the book of Revelation as a distant future filled with symbols, but chapter 2 is painfully present. It is Jesus walking through His churches with eyes like fire and a voice like rushing water, stopping at each one, and saying, “Let’s talk.”
Not to shame them. Not to destroy them. But to tell them the truth they have forgotten how to hear.
These letters were written to Ephesus, Smyrna, Pergamum, Thyatira, Sardis, Philadelphia, and Laodicea, but they were also written to every generation that would ever call itself Christian. They are diagnostic letters. Spiritual MRI scans. They show us where love has gone cold, where compromise has crept in, where endurance has been stretched thin, and where faith still burns bright.
And here is the uncomfortable part.
Jesus speaks to these churches not as an outsider, but as the One who walks among them. He knows what happens in their meetings. He sees what they tolerate. He knows who they have become when no one else is watching.
Revelation 2 is not about whether the church is big or small, popular or persecuted. It is about whether it is faithful.
That question still pierces us today.
The first letter goes to Ephesus, and it is both beautiful and devastating. Ephesus was a strong church. Doctrinally sound. Hardworking. Spiritually disciplined. They rejected false teachers. They endured hardship. They were not easily fooled.
And yet Jesus says something that should make every believer pause.
“I know your works… I know your endurance… I know your perseverance… but I have this against you: you have left your first love.”
Not lost. Left.
That one word changes everything.
You do not accidentally leave something you love. You drift. You get busy. You become efficient. You replace intimacy with routine. You keep serving, but you stop surrendering. You keep showing up, but you stop leaning in.
The church at Ephesus did not abandon Jesus. They just stopped loving Him the way they once did.
And that is far more dangerous than outright rebellion.
There are many Christians who still read their Bibles, still attend church, still volunteer, still defend doctrine, but somewhere along the way, the romance of faith has been replaced by the mechanics of religion.
Jesus does not rebuke them for believing the wrong things. He rebukes them for forgetting why they ever believed at all.
This is the heartbreak at the center of Revelation 2.
God is not after religious performance. He is after love.
And love cannot be faked forever.
When Jesus says, “Remember therefore from where you have fallen,” He is not being poetic. He is being surgical. He is saying, “Look back. Look at who you were when I was everything to you. When prayer was not a duty but a refuge. When Scripture was not an obligation but a conversation. When worship was not background noise but your heartbeat.”
Then He says, “Repent and do the works you did at first.”
That is not about going through old motions. It is about returning to old devotion.
Somewhere along the way, Ephesus became very good at being right and very bad at being in love.
And if we are honest, that is not rare. That is common.
We live in a time when people can debate theology with strangers on the internet while barely speaking to God in private. We can argue about Scripture while neglecting the One who wrote it. We can defend Christianity while forgetting Christ.
Jesus is not impressed by religious noise. He is moved by relational nearness.
Then the letter shifts to Smyrna, and the tone changes completely. Smyrna is not rebuked at all. They are poor, persecuted, slandered, and crushed by the world, yet Jesus calls them rich.
That alone should change how we measure success.
Smyrna did not have influence. They did not have power. They did not have cultural favor. What they had was faith.
Jesus tells them something terrifying and comforting at the same time.
“You are about to suffer.”
Not might. Not maybe. You are.
Christianity does not promise protection from pain. It promises presence in it.
Jesus tells them that some will be thrown into prison. Some will face death. But then He says something that has carried believers through centuries of persecution.
“Be faithful unto death, and I will give you the crown of life.”
This is not the faith of convenience. This is the faith of cost.
Smyrna reminds us that the absence of blessing does not mean the absence of God. Sometimes the deepest faith is born in the darkest places.
Then comes Pergamum, a church living in the shadow of evil. Jesus says they live “where Satan’s throne is.” That is not poetic exaggeration. Pergamum was a center of emperor worship and pagan ritual.
And yet Jesus says, “You hold fast to My name.”
They were surrounded by pressure to conform, but they did not deny Him. That matters.
But then He says something that cuts deep.
“You tolerate teaching that leads My people into compromise.”
Pergamum did not abandon Jesus. They just made room for things He hates.
This is one of the great dangers of modern faith. We do not reject God. We simply accommodate sin.
We say grace matters, so holiness becomes optional. We say love matters, so truth becomes negotiable.
Jesus is not fooled by churches that are tolerant but not transformed.
Then comes Thyatira, a church filled with good deeds but compromised morals. They were loving, faithful, patient, generous… and corrupted.
Jesus speaks of a false prophetess who had influence among them. They did not challenge her because she was powerful.
That still happens.
When charisma is allowed to override character, the church slowly poisons itself.
Revelation 2 is not about ancient cities. It is about spiritual patterns.
We see ourselves in Ephesus when love fades. We see ourselves in Smyrna when suffering comes. We see ourselves in Pergamum when compromise sneaks in. We see ourselves in Thyatira when influence goes unchecked.
This chapter is not meant to scare us. It is meant to wake us.
Jesus does not speak these words because He is done with the church. He speaks them because He loves it too much to let it drift into ruin.
Every letter ends with the same haunting invitation.
“He who has an ear, let him hear what the Spirit says to the churches.”
Not what culture says. Not what comfort says. Not what fear says.
What the Spirit says.
And that voice still speaks today.
The question is not whether God is still talking. The question is whether we are still listening.
Revelation 2 does not let us hide behind group identity. It forces us to face personal devotion. It calls us to examine not just what we believe, but who we love.
Because in the end, Christianity is not about being right. It is about being His.
And the greatest tragedy is not losing faith. It is forgetting love.
The final letters of Revelation 2 continue with the same piercing clarity, but the deeper you move into them, the more you realize that Jesus is not only addressing churches — He is revealing the anatomy of the human heart.
These are not distant warnings for ancient cities. They are mirrors.
Every believer, every congregation, every generation moves through these same spiritual seasons. Sometimes we are on fire like Ephesus once was. Sometimes we are bruised like Smyrna. Sometimes we are tempted to blend in like Pergamum. Sometimes we are drifting while still doing good works like Thyatira.
And what Jesus is really saying in Revelation 2 is this: I know you.
Not the version you show. Not the version people assume. The real one.
The one who sits alone in prayer. The one who feels dry inside while still smiling in public. The one who wonders when passion faded and routine took over.
When Jesus says, “I know your works,” it is not a performance review. It is intimacy.
He sees effort. He sees exhaustion. He sees loyalty. He sees compromise.
And still, He speaks.
After Thyatira, the chapter turns toward the promises. And these promises are not random rewards. They are direct reversals of the wounds these churches are carrying.
To Ephesus, the church that lost its first love, Jesus promises the tree of life. In other words, intimacy will be restored.
To Smyrna, the church facing death, He promises a crown of life. Their suffering will not be the end of their story.
To Pergamum, the church tempted by compromise, He promises hidden manna — spiritual nourishment that cannot be polluted by the world.
To Thyatira, the church corrupted by influence, He promises authority with Christ Himself.
These are not bribes. They are healing.
Jesus does not just expose what is broken. He reveals what can be made whole.
Revelation 2 is one long reminder that God is not finished with His people, even when they have lost their way.
The danger is not failure. The danger is comfort.
Comfort makes us sleepy. Comfort makes us tolerant of what once would have grieved us. Comfort slowly numbs conviction until faith becomes background noise.
That is why Jesus keeps calling His church back.
Not to religion. Not to rules. But to Him.
You can feel it in every letter. He does not want better behavior. He wants deeper relationship.
Even His warnings are invitations.
“Repent” does not mean “feel ashamed.” It means “turn back.”
Turn back to prayer. Turn back to hunger. Turn back to listening. Turn back to love.
One of the most beautiful and heartbreaking truths of Revelation 2 is that Jesus never leaves quietly.
He knocks. He speaks. He calls.
Long before a church collapses, it drifts. Long before faith dies, it fades.
Revelation 2 is the mercy of God interrupting the drift.
And this is where it becomes deeply personal.
You may not be in a persecuted church like Smyrna. You may not be surrounded by idols like Pergamum. You may not be under false teaching like Thyatira.
But you know what it feels like to grow tired. You know what it feels like to do the right things without feeling the fire. You know what it feels like to pray and hear silence.
Jesus knows too.
That is why Revelation 2 is not cold. It is compassionate.
Even the sharpest rebukes are spoken by the One who walked the road to the cross for these very people.
He is not writing to strangers. He is writing to those He loves.
And His desire is simple.
Come back.
Not to the version of you that performs. Not to the version of you that hides. But to the version of you that once believed God could change everything.
The promise of Revelation 2 is not that the church will never struggle. It is that the church will never be abandoned.
No matter how far love drifts. No matter how loud compromise becomes. No matter how dark the culture grows.
Jesus still walks among His people.
Still speaking. Still correcting. Still calling.
Still loving.
That is the hope beneath every warning.
The book of Revelation is not about the end of the world. It is about the persistence of Christ.
And Revelation 2 is proof that even when faith grows quiet, God’s voice does not.
He is still knocking.
He is still inviting.
He is still offering life to anyone who will listen.
Your friend, Douglas Vandergraph
Watch Douglas Vandergraph’s inspiring faith-based videos on YouTube
Support the ministry by buying Douglas a coffee