from Human in the Loop

On a Monday evening in October 2025, British television viewers settled in to watch Channel 4's Dispatches documentary “Will AI Take My Job?” For nearly an hour, they followed a presenter investigating how artificial intelligence threatens employment across medicine, law, fashion, and music. The presenter delivered pieces to camera with professional polish, narrating the documentary's exploration of AI's disruptive potential. Only in the final seconds did the bombshell land: the presenter wasn't real. The face, voice, and movements were entirely AI-generated, created by AI fashion brand Seraphinne Vallora for production company Kalel Productions. No filming occurred. The revelation marked a watershed moment in British broadcasting history, and a troubling milestone in humanity's relationship with truth.
“Because I'm not real,” the AI avatar announced. “In a British TV first, I'm an AI presenter. Some of you might have guessed: I don't exist, I wasn't on location reporting this story. My image and voice were generated using AI.”
The disclosure sent shockwaves through the media industry. Channel 4's stunt had successfully demonstrated how easily audiences accept synthetic presenters as authentic humans. Louisa Compton, Channel 4's Head of News and Current Affairs and Specialist Factual and Sport, framed the experiment as necessary education: “designed to address the concerns that come with AI, how easy it is to fool people into thinking that something fake is real.” Yet her follow-up statement revealed deep institutional anxiety: “The use of an AI presenter is not something we will be making a habit of at Channel 4. Instead our focus in news and current affairs is on premium, fact checked, duly impartial and trusted journalism, something AI is not capable of doing.”
This single broadcast crystallised a crisis that has been building for years. If audiences cannot distinguish AI-generated presenters from human journalists, even whilst actively watching, what remains of professional credibility? When expertise becomes unverifiable, how do media institutions maintain public trust? And as synthetic media grows indistinguishable from reality, who bears responsibility for transparency in an age when authenticity itself has become contested?
Channel 4's AI presenter wasn't an isolated experiment. The synthetic presenter phenomenon began in earnest in 2018, when China's state-run Xinhua News Agency unveiled what it called the “world's first AI news anchor” at the World Internet Conference in Wuzhen. Developed in partnership with Chinese search engine company Sogou, the system generated avatars patterned after real Xinhua anchors. One AI, modelled after anchor Qiu Hao, delivered news in Chinese. Another, derived from the likeness of Zhang Zhao, presented in English. In 2019, Xinhua and Sogou introduced Xin Xiaomeng, followed by Xin Xiaowei, modelled on Zhao Wanwei, a real-life Xinhua reporter.
Xinhua positioned these digital anchors as efficiency tools. The news agency claimed the simulations would “reduce news production costs and improve efficiency,” operating on its website and social media platforms around the clock without rest, salary negotiations, or human limitations. Yet technical experts quickly identified these early systems as glorified puppets rather than intelligent entities. As MIT Technology Review bluntly assessed: “It's essentially just a digital puppet that reads a script.”
India followed China's lead. In April 2023, the India Today Group's Aaj Tak news channel launched Sana, India's first AI-powered anchor. Regional channels joined the trend: Odisha TV unveiled Lisa, whilst Power TV introduced Soundarya. Across Asia, synthetic presenters proliferated, each promising reduced costs and perpetual availability.
The technology enabling these digital humans has advanced rapidly. Contemporary AI systems don't merely replicate existing footage. They generate novel performances through prompt-driven synthesis, creating facial expressions, gestures, and vocal inflections that have never been filmed. Channel 4's AI presenter demonstrated this advancement. Nick Parnes, CEO of Kalel Productions, acknowledged the technical ambition: “This is another risky, yet compelling, project for Kalel. It's been nail-biting.” The production team worked to make the AI “feel and appear as authentic” as possible, though technical limitations remained. Producers couldn't recreate the presenter sitting in a chair for interviews, restricting its on-screen contributions to pieces to camera.
These limitations matter less than the fundamental achievement: viewers believed the presenter was human. That perceptual threshold, once crossed, changes everything.
For more than a century, visual evidence carried special authority. Photographs documented events. Video recordings provided incontrovertible proof. Legal systems built evidentiary standards around the reliability of images. The phrase “seeing is believing” encapsulated humanity's faith in visual truth. Deepfake technology has shattered that faith.
Modern deepfakes can convincingly manipulate or generate entirely synthetic video, audio, and images of people who never performed the actions depicted. Research from Cristian Vaccari and Andrew Chadwick, published in Social Media + Society, revealed a troubling dynamic: people are more likely to feel uncertain than to be directly misled by deepfakes, but this resulting uncertainty reduces trust in news on social media. The researchers warned that deepfakes may contribute towards “generalised indeterminacy and cynicism,” intensifying recent challenges to online civic culture. Even factual, verifiable content from legitimate media institutions faces credibility challenges because deepfakes exist.
This phenomenon has infected legal systems. Courts now face what the American Bar Association calls an “evidentiary conundrum.” Rebecca Delfino, a law professor studying deepfakes in courtrooms, noted that “we can no longer assume a recording or video is authentic when it could easily be a deepfake.” The Advisory Committee on the Federal Rules of Evidence is studying whether to amend rules to create opportunities for challenging potentially deepfaked digital evidence before it reaches juries.
Yet the most insidious threat isn't that fake evidence will be believed. It's that real evidence will be dismissed. Law professors Bobby Chesney and Danielle Citron coined the term “liar's dividend” in their 2018 paper “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” published in the California Law Review in 2019. The liar's dividend describes how bad actors exploit public awareness of deepfakes to dismiss authentic evidence as manipulated. Politicians facing scandals increasingly claim real recordings are deepfakes, invoking informational uncertainty and rallying supporters through accusations of media manipulation.
Research published in 2024 investigated the liar's dividend through five pre-registered experimental studies administered to over 15,000 American adults. The findings showed that allegations of misinformation raise politician support whilst potentially undermining trust in media. These false claims produce greater dividends for politicians than traditional scandal responses like remaining silent or apologising. Chesney and Citron documented this tactic's global spread, with politicians in Russia, Brazil, China, Turkey, Libya, Poland, Hungary, Thailand, Somalia, Myanmar, and Syria claiming real evidence was fake to evade accountability.
The phrase “seeing is believing” has become obsolete. In its place: profound, corrosive uncertainty.
Journalism traditionally derived authority from institutional reputation and individual credibility. Reporters built reputations through years of accurate reporting. Audiences trusted news organisations based on editorial standards and fact-checking rigour. This system depended on a fundamental assumption: that the person delivering information was identifiable and accountable.
AI presenters destroy that assumption.
When Channel 4's synthetic presenter delivered the documentary, viewers had no mechanism to assess credibility. The presenter possessed no professional history, no journalistic credentials, no track record of accurate reporting. Yet audiences believed they were watching a real journalist conducting real investigations. The illusion was perfect until deliberately shattered.
This creates what might be called the credibility paradox. If an AI presenter delivers factual, well-researched journalism, is the content less credible because the messenger isn't human? Conversely, if the AI delivers misinformation with professional polish, does the synthetic authority make lies more believable? The answer to both questions appears to be yes, revealing journalism's uncomfortable dependence on parasocial relationships between audiences and presenters.
Parasocial relationships describe the one-sided emotional bonds audiences form with media figures who will never know them personally. Anthropologist Donald Horton and sociologist R. Richard Wohl coined the term in 1956. When audiences hear familiar voices telling stories, their brains release oxytocin, the “trust molecule.” This neurochemical response drives credibility assessments more powerfully than rational evaluation of evidence.
Recent research demonstrates that AI systems can indeed establish meaningful emotional bonds and credibility with audiences, sometimes outperforming human influencers in generating community cohesion. This suggests that anthropomorphised AI systems exploiting parasocial dynamics can manipulate trust, encouraging audiences to overlook problematic content or false information.
The implications for journalism are profound. If credibility flows from parasocial bonds rather than verifiable expertise, then synthetic presenters with optimised voices and appearances might prove more trusted than human journalists, regardless of content accuracy. Professional credentials become irrelevant when audiences cannot verify whether the presenter possesses any credentials at all.
Louisa Compton's insistence that AI cannot do “premium, fact checked, duly impartial and trusted journalism” may be true, but it's also beside the point. The AI presenter doesn't perform journalism. It performs the appearance of journalism. And in an attention economy optimised for surface-level engagement, appearance may matter more than substance.
Governments and industry organisations have begun addressing synthetic media's threats, though responses remain fragmented and often inadequate. The landscape resembles a patchwork quilt, each jurisdiction stitching together different requirements with varying levels of effectiveness.
The European Union has established the most comprehensive framework. The AI Act, which entered into force in August 2024 with obligations phasing in over subsequent years, represents the world's first comprehensive AI regulation. Article 50 requires deployers of AI systems generating or manipulating image, audio, or video content constituting deepfakes to disclose that content has been artificially generated or manipulated. The Act defines deepfakes as “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.”
The requirements split between providers and deployers. Providers must ensure AI system outputs are marked in machine-readable formats and detectable as artificially generated, using technical solutions that are “effective, interoperable, robust and reliable as far as technically feasible.” Deployers must disclose when content has been artificially generated or manipulated. Exceptions exist for artistic works, satire, and law enforcement activities. Transparency violations can result in fines up to 15 million euros or three per cent of global annual turnover, whichever is higher. These requirements take effect in August 2026.
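The penalty ceiling is a simple “whichever is higher” rule. A minimal Python sketch of that arithmetic (illustrative only; the `max_transparency_fine` helper is a hypothetical name, and actual fines are set case by case by regulators):

```python
def max_transparency_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on an EU AI Act transparency fine:
    up to EUR 15 million or 3% of global annual turnover,
    whichever is higher. Illustrative calculation only."""
    FIXED_CAP_EUR = 15_000_000
    turnover_cap = 0.03 * global_annual_turnover_eur
    return max(FIXED_CAP_EUR, turnover_cap)

# For a broadcaster with EUR 2 billion turnover, the 3% cap dominates:
print(max_transparency_fine(2_000_000_000))  # → 60000000.0
# For a small producer with EUR 100 million turnover, the fixed cap applies:
print(max_transparency_fine(100_000_000))    # → 15000000
```

The same two-pronged cap structure (fixed sum versus percentage of turnover) appears throughout EU digital regulation, so the larger the company, the more the percentage prong bites.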
The United States has adopted a narrower approach. In July 2024, the Federal Communications Commission released a Notice of Proposed Rulemaking proposing that radio and television broadcast stations must disclose when political advertisements contain “AI-generated content.” Critically, these proposed rules apply only to political advertising on broadcast stations. They exclude social media platforms, video streaming services, and podcasts due to the FCC's limited jurisdiction. The Federal Trade Commission and Department of Justice possess authority to fine companies or individuals using synthetic media to mislead or manipulate consumers.
The United Kingdom has taken a more guidance-oriented approach. Ofcom, the UK communications regulator, published its Strategic Approach to AI for 2025-26, outlining plans to address AI deployment across sectors including broadcasting and online safety. Ofcom identified synthetic media as one of three key AI risks. Rather than imposing mandatory disclosure requirements, Ofcom plans to research synthetic media detection tools, draw up online safety codes of practice, and issue guidance to broadcasters clarifying their obligations regarding AI.
The BBC has established its own AI guidelines, built on three principles: acting in the public's best interests, prioritising talent and creatives, and being transparent with audiences about AI use. The BBC's January 2025 guidance states: “Any use of AI by the BBC in the creation, presentation or distribution of content must be transparent and clear to the audience.” The broadcaster prohibits using generative AI to generate news stories or conduct factual research because such systems sometimes produce biased, false, or misleading information.
Industry-led initiatives complement regulatory efforts. The Coalition for Content Provenance and Authenticity (C2PA), founded in 2021 by Adobe, Microsoft, Truepic, Arm, Intel, and the BBC, develops technical standards for certifying the source and history of media content. By 2025, the Content Authenticity Initiative had welcomed over 4,000 members.
C2PA's approach uses Content Credentials, described as functioning “like a nutrition label for digital content,” providing accessible information about content's history and provenance. The system combines cryptographic metadata, digital watermarking, and fingerprinting to link digital assets to their provenance information. Version 2.1 of the C2PA standard, released in 2024, strengthened Content Credentials with digital watermarks that persist even when metadata is stripped from files.
This watermarking addresses a critical vulnerability: C2PA manifests exist as metadata attached to files rather than embedded within assets themselves. Malicious actors can easily strip metadata using simple online tools. Digital watermarks create durable links back to original manifests, acting as multifactor authentication for digital content.
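The binding idea can be illustrated with a toy sketch: hashing the content bytes into the manifest ties the provenance record to exactly those bytes, so any edit breaks verification. This is a drastic simplification of C2PA, which additionally signs manifests with X.509 certificates and, since version 2.1, adds watermarks that survive metadata stripping; the function names below are invented for illustration:

```python
import hashlib

def make_manifest(content: bytes, assertions: dict) -> dict:
    """Toy C2PA-style 'hard binding' (not the real spec): the manifest
    records a hash of the exact content bytes it describes."""
    return {
        "assertions": assertions,  # e.g. capture device, edit history
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify(content: bytes, manifest: dict) -> bool:
    # Any change to the content bytes invalidates the binding.
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

original = b"video-frame-bytes..."
m = make_manifest(original, {"claim_generator": "camera-firmware-1.0"})
print(verify(original, m))          # → True
print(verify(b"edited-bytes", m))   # → False
```

The sketch also shows the vulnerability described above: if the manifest travels as detachable metadata, stripping it leaves the content with no credentials at all, which is why durable watermarks linking back to the original manifest matter.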
Early trials show promise. Research indicates that 83 per cent of users reported increased trust in media after seeing Content Credentials, with 96 per cent finding the credentials useful and informative. Yet adoption remains incomplete. Without universal adoption, content lacking credentials becomes suspect by default, creating its own form of credibility crisis.
As synthetic media grows more sophisticated, detection technology races to keep pace. Academic research in 2024 revealed both advances and fundamental limitations in deepfake detection capabilities.
Researchers proposed novel approaches like Attention-Driven LSTM networks using spatio-temporal attention mechanisms to identify forgery traces. These systems achieved impressive accuracy rates on academic datasets, with some models reaching 97 per cent accuracy and 99 per cent AUC (area under curve) scores on benchmarks like FaceForensics++.
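The AUC figure quoted here measures how well a detector ranks fakes above genuine content: 1.0 means every fake outscores every real sample, 0.5 means chance. A minimal, library-free sketch of the metric (the `auc` helper is illustrative, not taken from any cited paper):

```python
def auc(labels, scores):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen fake (label 1) outscores a randomly chosen real
    sample (label 0), with ties counted as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A detector that ranks every fake above every real sample scores 1.0:
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # → 1.0
# Overlapping score distributions pull AUC towards 0.5 (chance):
print(auc([1, 0, 1, 0], [0.6, 0.7, 0.9, 0.2]))  # → 0.75
```

Because AUC only reflects ranking on a given test set, a model can post 99 per cent on a curated benchmark like FaceForensics++ and still collapse on in-the-wild data whose score distributions overlap far more, which is exactly the gap the 2024 benchmarks exposed.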
However, sobering reality emerged from real-world testing. Deepfake-Eval-2024, a new benchmark consisting of in-the-wild deepfakes collected from social media in 2024, revealed dramatic performance drops for detection models. The benchmark included 45 hours of videos, 56.5 hours of audio, and 1,975 images. Open-source detection models showed AUC decreases of 50 per cent for video, 48 per cent for audio, and 45 per cent for image detection compared to performance on academic datasets.
This performance gap illuminates a fundamental problem: detection systems trained on controlled academic datasets fail when confronted with the messy diversity of real-world synthetic media. Deepfakes circulating on social media undergo compression, editing, and platform-specific processing that degrades forensic signals detection systems rely upon.
The detection arms race resembles cybersecurity's endless cycle of attack and defence. Every improvement in detection capabilities prompts improvements in generation technology designed to evade detection. Unlike cybersecurity, where defenders protect specific systems, deepfake detection must work across unlimited content contexts, platforms, and use cases. The defensive task is fundamentally harder than the offensive one.
This asymmetry suggests that technological detection alone cannot solve the synthetic media crisis. Authentication must move upstream, embedding provenance information at creation rather than attempting forensic analysis after distribution. That's the logic behind C2PA and similar initiatives. Yet such systems depend on voluntary adoption and can be circumvented by bad actors who simply decline to implement authentication standards.
The dominant regulatory response to synthetic media centres on transparency: requiring disclosure when AI generates or manipulates content. The logic seems straightforward: if audiences know content is synthetic, they can adjust trust accordingly. Channel 4's experiment might be seen as transparency done right, deliberately revealing the AI presenter to educate audiences about synthetic media risks.
Yet transparency alone proves insufficient for several reasons.
First, disclosure timing matters enormously. Channel 4 revealed its AI presenter only after viewers had invested an hour accepting the synthetic journalist as real. The delayed disclosure demonstrated deception more than transparency. Had the documentary begun with clear labelling, the educational impact would have differed fundamentally.
Second, disclosure methods vary wildly in effectiveness. A small text disclaimer displayed briefly at a video's start differs profoundly from persistent watermarks or on-screen labels. The EU AI Act requires machine-readable formats and “effective” disclosure, but “effective” remains undefined and context-dependent. Research on warnings and disclosures across domains consistently shows that people ignore or misinterpret poorly designed notices.
Third, disclosure burdens fall on different actors in ways that create enforcement challenges. The EU AI Act distinguishes between providers (who develop AI systems) and deployers (who use them). This split creates gaps where responsibility diffuses. Enforcement requires technical forensics to establish which party failed in their obligations.
Fourth, disclosure doesn't address the liar's dividend. When authentic content is dismissed as deepfakes, transparency cannot resolve disputes. If audiences grow accustomed to synthetic media disclosures, absence of disclosure might lose meaning. Bad actors could add fake disclosures claiming real content is synthetic to exploit the liar's dividend in reverse.
Fifth, international fragmentation undermines transparency regimes. Content crosses borders instantly, but regulations remain national or regional. Synthetic media disclosed under EU regulations circulates in jurisdictions without equivalent requirements. This creates arbitrage opportunities where bad actors jurisdiction-shop for the most permissive environments.
The BBC's approach offers a more promising model: categorical prohibition on using generative AI for news generation or factual research, combined with transparency about approved uses like anonymisation. This recognises that some applications of synthetic media in journalism pose unacceptable credibility risks regardless of disclosure.
The synthetic presenter phenomenon exposes journalism's uncomfortable reliance on credibility signals that AI can fake. Professional credentials mean nothing if audiences cannot verify whether the presenter possesses credentials at all. Institutional reputation matters less when AI presenters can be created for any outlet, real or fabricated.
The New York Times reported cases of “deepfake” videos distributed by social media bot accounts showing AI-generated avatars posing as news anchors for fictitious news outlets like Wolf News. These synthetic operations exploit attention economics and algorithmic amplification, banking on the reality that many social media users share content without verifying sources.
This threatens the entire information ecosystem's functionality. Journalism serves democracy by providing verified information citizens need to make informed decisions. That function depends on audiences distinguishing reliable journalism from propaganda, entertainment, or misinformation. When AI enables creating synthetic journalists indistinguishable from real ones, those heuristics break down.
Some argue that journalism should pivot entirely towards verifiable evidence and away from personality-driven presentation. The argument holds superficial appeal but ignores psychological realities. Humans are social primates whose truth assessments depend heavily on source evaluation. We evolved to assess information based on who communicates it, their perceived expertise, their incentives, and their track record. Removing those signals doesn't make audiences more rational. It makes them more vulnerable to manipulation by whoever crafts the most emotionally compelling synthetic presentation.
Others suggest that journalism should embrace radical transparency about its processes. Rather than simply disclosing AI use, media organisations could provide detailed documentation: showing who wrote scripts AI presenters read, explaining editorial decisions, publishing correction records, and maintaining public archives of source material.
Such transparency represents good practice regardless of synthetic media challenges. However, it requires resources that many news organisations lack, and it presumes audience interest in verification that may not exist. Research on media literacy consistently finds that most people lack time, motivation, or skills for systematic source verification.
The erosion of reliable heuristics may prove synthetic media's most damaging impact. When audiences cannot trust visual evidence, institutional reputation, or professional credentials, they default to tribal epistemology: believing information from sources their community trusts whilst dismissing contrary evidence as fake. This fragmentation into epistemic bubbles poses existential threats to democracy, which depends on shared factual baselines enabling productive disagreement about values and policies.
No single solution addresses synthetic media's threats to journalism and public trust. The challenge requires coordinated action across multiple domains: technology, regulation, industry standards, media literacy, and institutional practices.
Technologically, provenance systems like C2PA must become universal standards. Every camera, editing tool, and distribution platform should implement Content Credentials by default. This cannot remain voluntary. Regulatory requirements should mandate provenance implementation for professional media tools and platforms, with financial penalties for non-compliance sufficient to ensure adoption.
Provenance systems must extend beyond creation to verification. Audiences need accessible tools to check Content Credentials without technical expertise. Browsers should display provenance information prominently, similar to how they display security certificates for websites. Social media platforms should integrate provenance checking into their interfaces.
Regulatory frameworks must converge internationally. The current patchwork creates gaps and arbitrage opportunities. The EU AI Act provides a strong foundation, but its effectiveness depends on other jurisdictions adopting compatible standards. International organisations should facilitate regulatory harmonisation, establishing baseline requirements for synthetic media disclosure that all democratic nations implement.
Industry self-regulation can move faster than legislation. News organisations should collectively adopt standards prohibiting AI-generated presenters for journalism whilst establishing clear guidelines for acceptable AI uses. The BBC's approach offers a template: categorical prohibitions on AI generating news content or replacing journalists, combined with transparency about approved uses.
Media literacy education requires dramatic expansion. Schools should teach students to verify information sources, recognise manipulation techniques, and understand how AI-generated content works. Adults need accessible training too. News organisations could contribute by producing explanatory content about synthetic media threats and verification techniques.
Journalism schools must adapt curricula to address synthetic media challenges. Future journalists need training in content verification, deepfake detection, provenance systems, and AI ethics. Programmes should emphasise skills that AI cannot replicate: investigative research, source cultivation, ethical judgement, and contextual analysis.
Professional credentials need updating for the AI age. Journalism organisations should establish verification systems allowing audiences to confirm that a presenter or byline represents a real person with verifiable credentials. Such systems would help audiences distinguish legitimate journalists from synthetic imposters.
Platforms bear special responsibility. Social media companies, video hosting services, and content distribution networks should implement detection systems flagging likely synthetic media for additional review. They should provide users with information about content provenance and highlight when provenance is absent or suspicious.
Perhaps most importantly, media institutions must rebuild public trust through consistent demonstration of editorial standards. Channel 4's AI presenter stunt, whilst educational, also demonstrated that broadcasters will deceive audiences when they believe the deception serves a greater purpose. Trust depends on audiences believing that news organisations will not deliberately mislead them.
Louisa Compton's promise that Channel 4 won't “make a habit” of AI presenters stops short of categorical prohibition. If synthetic presenters are inappropriate for journalism, they should be prohibited outright in journalistic contexts. If they're acceptable with appropriate disclosure, that disclosure must be immediate and unmistakable, not a reveal reserved for dramatic moments.
Channel 4's synthetic presenter experiment demonstrated an uncomfortable truth: current audiences cannot reliably distinguish AI-generated presenters from human journalists. This capability gap creates profound risks for media credibility, democratic discourse, and social cohesion. When seeing no longer implies believing, and when expertise cannot be verified, information ecosystems lose the foundations upon which trustworthy communication depends.
The technical sophistication enabling synthetic presenters will continue advancing. AI-generated faces, voices, and movements will become more realistic, more expressive, more human-like. Detection will grow harder. Generation costs will drop. These trends are inevitable. Fighting the technology itself is futile.
What can be fought is the normalisation of synthetic media in contexts where authenticity matters. Journalism represents such a context. Entertainment may embrace synthetic performers, just as it embraces special effects and CGI. Advertising may deploy AI presenters to sell products. But journalism's function depends on trust that content is true, that sources are real, that expertise is genuine. Synthetic presenters undermine that trust regardless of how accurate the content they present may be.
The challenge facing media institutions is stark: establish and enforce norms differentiating journalism from synthetic content, or watch credibility erode as audiences grow unable to distinguish trustworthy information from sophisticated fabrication. Transparency helps but remains insufficient. Provenance systems help but require universal adoption. Detection helps but faces an asymmetric arms race. Media literacy helps but cannot keep pace with technological advancement.
What journalism ultimately requires is an authenticity imperative: a collective commitment from news organisations that human journalists, with verifiable identities and accountable expertise, will remain the face of journalism even as AI transforms production workflows behind the scenes. This means accepting higher costs when synthetic alternatives are cheaper. It means resisting competitive pressures when rivals cut corners. It means treating human presence as a feature, not a bug, in an age when human presence has become optional.
The synthetic presenter era has arrived. How media institutions respond will determine whether professional journalism retains credibility in the decades ahead, or whether credibility itself becomes another casualty of technological progress. Channel 4's experiment proved that audiences can be fooled. The harder question is whether audiences can continue trusting journalism after learning how easily they're fooled. That question has no technological answer. It requires institutional choices about what journalism is, whom it serves, and what principles are non-negotiable even when technology makes violating them trivially easy.
The phrase “seeing is believing” has lost its truth value. In its place, journalism must establish a different principle: believing requires verification, verification requires accountability, and accountability requires humans whose identities, credentials, and institutional affiliations can be confirmed. AI can be a tool serving journalism. It cannot be journalism's face without destroying the trust that makes journalism possible. Maintaining that distinction, even as technology blurs every boundary, represents the central challenge for media institutions navigating the authenticity crisis.
The future of journalism in the synthetic media age depends not on better algorithms or stricter regulations, though both help. It depends on whether audiences continue believing that someone, somewhere, is telling them the truth. When that trust collapses, no amount of technical sophistication can rebuild it. Channel 4's synthetic presenter was designed as a warning. Whether the media industry heeds that warning will determine whether future generations can answer a question previous generations took for granted: Is the person on screen real?
Channel 4 Press Office. (2025, October). “Channel 4 makes TV history with Britain's first AI presenter.” Channel 4. https://www.channel4.com/press/news/channel-4-makes-tv-history-britains-first-ai-presenter
Channel 4 Press Office. (2020). “Louisa Compton appointed Head of News and Current Affairs and Sport at Channel 4.” Channel 4. https://www.channel4.com/press/news/louisa-compton-appointed-head-news-and-current-affairs-and-sport-channel-4
Vaccari, C., & Chadwick, A. (2020). “Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News.” Social Media + Society. https://journals.sagepub.com/doi/10.1177/2056305120903408
Chesney, B., & Citron, D. (2019). “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” California Law Review, 107, 1753-1820.
European Union. (2025). “Artificial Intelligence Act.” Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems. https://artificialintelligenceact.eu/article/50/
Federal Communications Commission. (2024, July). “Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements.” Notice of Proposed Rulemaking. https://www.fcc.gov/document/fcc-proposes-disclosure-ai-generated-content-political-ads
Ofcom. (2025). “Ofcom's strategic approach to AI, 2025/26.” https://www.ofcom.org.uk/siteassets/resources/documents/about-ofcom/annual-reports/ofcoms-strategic-approach-to-ai-202526.pdf
British Broadcasting Corporation. (2025, January). “BBC sets protocol for generative AI content.” Broadcast. https://www.broadcastnow.co.uk/production-and-post/bbc-sets-protocol-for-generative-ai-content/5200816.article
Coalition for Content Provenance and Authenticity (C2PA). (2021). “C2PA Technical Specifications.” https://c2pa.org/
Content Authenticity Initiative. (2025). “4,000 members, a major milestone in the effort to foster online transparency and trust.” https://contentauthenticity.org/blog/celebrating-4000-cai-members
Xinhua News Agency. (2018). “Xinhua–Sogou AI news anchor.” World Internet Conference, Wuzhen. CNN Business coverage: https://www.cnn.com/2018/11/09/media/china-xinhua-ai-anchor/index.html
Horton, D., & Wohl, R. R. (1956). “Mass Communication and Para-social Interaction: Observations on Intimacy at a Distance.” Psychiatry, 19(3), 215-229.
American Bar Association. (2024). “The Deepfake Defense: An Evidentiary Conundrum.” Judges' Journal. https://www.americanbar.org/groups/judicial/publications/judges_journal/2024/spring/deepfake-defense-evidentiary-conundrum/
arXiv. (2024). “Deepfake-Eval-2024: A Multi-Modal In-the-Wild Benchmark of Deepfakes Circulated in 2024.” arXiv preprint arXiv:2503.02857. https://arxiv.org/html/2503.02857v2
Digimarc Corporation. (2024). “C2PA 2.1, Strengthening Content Credentials with Digital Watermarks.” https://www.digimarc.com/blog/c2pa-21-strengthening-content-credentials-digital-watermarks

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from deadgirlreference
There are days when my whole body feels like an antenna — too many signals, too many layers, too many subtle frequencies that hit me all at once.
It’s not painful, just overwhelming in a way that’s hard to name.
I feel the shift in someone’s mood before they do. I sense a lie in the breath before the words arrive. I hear every emotion that escapes between sentences.
And when it becomes too much, I go quiet. Not because I’m withdrawing from the world, but because silence is the only space where I can gather myself.
People think quiet means empty. But mine is crowded — with observation, with intuition, with all the things I don’t say because not everyone deserves that level of truth.
I don’t leave rooms because I’m difficult. I leave because I know the cost of staying too long in places that drain me.
And I’m done paying with my own presence just to be polite.
If I’m going to stay anywhere now, it has to be a place where I don’t have to bargain with myself to exist.
from deadgirlreference
I owe no one the softened version of myself that swallows every reaction just to look “easy to handle.” Sometimes growth is just refusing to shrink anymore.
from deadgirlreference
Some days everything arrives at once — the noise, the movement, the unsaid things people carry in their shoulders, in their eyes, in the way they breathe near you.
I feel it all. Every small vibration meant for no one in particular still lands somewhere in my body.
It’s not that I’m fragile. I’m just awake in places people have trained themselves to sleep through.
And when the world gets too loud, I don’t collapse. I retract. I draw the curtains inside myself so I can hear my own thoughts again.
There’s nothing dramatic about that. It’s simply the only way I stay whole.
from deadgirlreference
i carry the overstimulation like an animal under my skin
either i bare my teeth or fall silent behind my eyes as if someone turned down the light in me
and always the same mistake:
i stay in rooms that cannot bear me until i lose myself in the attempt to seem harmless
and i apologise to people i no longer even remember why i let touch me
afterwards comes the heat that embarrassment brings: a quiet message that i sold my integrity too cheap again
so i go home and close the door on everyone who thinks my politeness was for their sake
it was only a symptom of having been too long among strangers.
from
John Karahalis
“If you want to go fast, go alone. If you want to go far, go together.”
—Unknown
from anatolie
Five ⟺ Seven
Five loses hope in distancing themselves from intrusions. With a worn-down buffer, empty reserves, and a sense of scarcity that has progressed to starvation, they are forced to open up, marking the transition from Rejection to Frustration. Afflicted by a profound state of depletion, they have nothing to lose by getting out in the open and filling up less discriminatingly at Seven.
At Five they gave up focusing on the source of the intrusion; now they start focusing instead on what to be affected by and how to achieve this. Losing hope that diving into the current realities of a situation will be constructive, they start recreating and asserting their own reality irrespective of what has already been created. That there is something to fear or avoid in the immediate surroundings may be so taken for granted that Sevens can be downright reckless in the face of negative consequences and risks.
Seven is a Five who has already been invaded. Having already been robbed, their needs become more immediate. In the extremes of this state we can only do what is intrinsically motivating to ourselves, or we run out of fuel. Having lost their foundation at Five, Seven starts from scratch each moment. Their safe shell no longer exists, whether physically in the form of armor or psychologically in the form of detachment.
When defenses are overwhelmed, they give them up and stop investing in them. The starting point for Seven is having some pest to fend off, but having to do so indirectly: circumventing, avoiding, evading, or outmanoeuvring it. Common pests include constraints, interruptions, insults, attacks, indications of being wrong or of having built something on shaky ground, and obstacles to what you want. We may, for example, hold beliefs that are incompatible with each other, enlivening whichever suits the situation we are in. Or we avoid facing an argument someone made, tricking ourselves into believing we have addressed it when we are merely imagining that doing so is possible.
While postponing a confrontation can give us the chance to attain the resource or solution that was missing in the moment, this creates debts that keep Sevens continually trading their energy to pay off the present moment. This in turn can make resting feel like standing in place while the ground falters underneath your feet, so they try instead to solve every deficit by finding some source of replenishment. Seven finds themselves on an endless search: as they branch out to find what solves their current debts, they encounter more and more drains and challenges, which in turn create new needs, and on it goes.
When you also lose hope in your capacity to hold onto resources, the storage of possessions feels uncertain. This can explain an irrational spending habit, where the ego unconsciously responds as if to a situation in which spending beyond one's means would be fully reasonable. Of course, there are just as many situations where spending is the better choice. Spending money is just an exchange of a general resource for a specific one.
Without a controlled environment where you can limit what can impact you, the Five moving to Seven can't rely on preparing for what the future will require. They cannot anticipate what challenges tomorrow brings, as conditions may change at any moment. Under these circumstances, making long-term choices is not as rational. You do not have the luxury of long-term planning when the demands of existence can't be predicted beforehand. At Seven you have the constant task of reinvesting your valuables or currencies, making sure they suit the situation at hand, making sure of their relevance. The skills you honed for a given set of conditions at Five may be useless in a different setting. Your chosen currency may turn illegitimate. Commitment may be avoided, as it means being vulnerable to unforeseen future demands.
Seven faces more acute and unpredictable challenges, and therefore has a more varied spectrum of interests compared to Five. They prioritise mobility and adaptability over storage and amassing, though they may locate and hunt down a wide variety of sources of fuel or charge that are never revisited, as none ended up being the panacea hoped for. Yet when you can't carry a large pouch, you need constant refills, and become hyper-attuned to shiny treasures and other fascinations. As the highest Frustration type, Seven believes in the possibility of attaining them, and gets bursts of energy to go after them.
Sevens have to learn to quickly navigate unknown territory, and therefore develop more general and translatable talents like improvisation and making quick approximations. They begin to seek ways to generate what they need as they go, investing what they have into the next target of attention instead of holding it, and become dependent on getting something out of every moment and on their productions yielding frequent and sufficient returns on investment. They continually scan their surroundings, seeing its positive potential and engaging with the environment in exploring, probing ways that hopefully generate some unexpected beneficial result.
The fun-seeking of Seven is usually seen as a luxury need, when in fact what is fun and interesting to us is a flaming sign of what is relevant to the challenges we perceive we have. Interest springs both from needs and from what supports the desired development of one's life. It is an excellent indicator of what is genuinely beneficial for you, as long as it stems from true interest and not avoidance of something else. Being in a prolonged thrill-seeking state indicates some chronic impoverishment, where it may be all you can do to survive until you can find the missing piece to a chronic problem. Our descent into habitual coping begins when we forget the original problem and stop registering when improvement can be made to it. A gluttonous tendency is often a combination of the presence of one resource and the absence of another, such as a spoiled child who feeds the hole of lacking parental presence with toys and candy and becomes dependent on their overconsumption to compensate.
One way to view Wisdom, the virtue assigned to Seven by Claudio Naranjo, is knowing what is relevant in a given situation, such as how and when to use information or other resources, knowing what’s needed, and seeing the place of something relative to the full picture. A classic example of lack of wisdom is when a usually very knowledgeable type of person insists on their correctness after delivering a fact which is not the centrally relevant truth of the situation, however true it may be in isolation. This person does not see the whole picture but focuses on the veracity of their piece in itself, while the person on the receiving end often can’t explain why the know-it-all is wrong even if they are right.
The act of clarifying, successfully selecting what is relevant and discarding what is not, is another integration to Five: sharpening the signal (what you can make sense of) and neutralising noise (what you can't). There is no noise, however, only more complexity than our minds can make sense of, which would lead to mental overwhelm without appropriate noise reduction or holistic thinking to match the scale.
Selecting a vantage point, a central perspective from which to see the world once you find one you sufficiently believe in, is yet another. You choose a permanent ground when you have found one solid enough to convince you of its incomparable payoff or long-term stability, such as when a belief, a relationship, a home, or a challenge continues to yield fruit and you can sustain its cost and inherent constraints. Sevens can struggle with the lack of a sense of spending time, energy, or other resources on the right activity, wondering whether what they are doing is the best use of their time and focus, wanting to follow many trails at once. Choosing, and choosing out of, one such trail, like a train of thought, demonstrates how integration and disintegration continually happen at a micro level.
Sevens are distractible by new stimuli because investment has to be redetermined with any change in overview. By finding a worthwhile focus they integrate, and mental direction is more unified and stable at Five. You can choose to cut off alternatives and narrow down your reality when you believe an experience will cover its costs, that the present moment supplies you enough to deliver you to the future. Or better yet, that it is the optimal way to spend your time.
When for any reason you lose faith in how you invest your attention, the Five zooms out, going from focus to overview and moving attention quickly between several foci, scanning their mental space or physical surroundings at Seven. They gain accuracy at the expense of precision, going from deep understanding and high resolution of one area of focus to quickly comparing and contrasting different appearances and their dynamic interplay. Appearance means surface-level depth, or as far as you can quickly grasp something without getting stuck at a plateau that requires greater effort and investment to break past.
from
Larry's 100
So begins Larry's 100 Holiday Movie Season! My family and I have been studying the genre for a decade, and for the past five years I have been reviewing these movies on my Instagram. I am now applying the Drabble/100-word review format and cataloging them here. But don't worry, Instagram Fam, I will still post them there to preserve this cherished tradition.
Notable Stars: Alicia Silverstone, Jameela Jamil, and Melissa Joan Hart. Silverstone and Hart produced for Melissa Joan Hart's mom's Hartbreak Films production company.
Alicia Silverstone joins the Christmas Movie industrial complex.
Consciously uncoupling Silverstone and Oliver Hudson attempt to maintain post-breakup holiday normalcy with their young adult children, new paramours, friends, and Granddads. Awkward festive gatherings, hurt feelings, and rekindled emotions ensue.
After years of trying, Netflix got its version of the Hallmark Christmas RomCom right, mimicking the look/feel with a few meta inside jokes while tweaking the Young Professional Female Gets Stuck In Wintertown trope.
Middle-aged “what now” angst and Silverstone's puppy-dog eyes ground the plot, and the writing sprinkles the story with core genre elements: humor, empathy, warmth, and baking.
Stream it.

#100HotChocolates #100DaysToOffload #Larrys100 #100WordReview #MovieReview #ChristmasMovies #HolidayMovies #Netflix #Cinemastodon #FilmMastodon #AliciaSilverstone #AMerryLittleExMas
from
wystswolf

WolfCast Home Page – Listen, follow, subscribe
Wolf In Wool is finally available on your favorite podcasting platforms. Happy to have you reading; it's a superior way to be in my head. But sometimes it's fun to listen too.
I'll still include a link so you can listen here if you prefer the simplicity of wolfinwool.
In either case, thank you, and I'm happy to have you on my journey.
—
And I'll always add a soundcloud link for those who prefer the purity of a listen without the hassle of the podcast system:

#podcast #wolfcast #confession #essay #story #journal #poetry #wyst #100daystooffset #writing #osxs #travel
from essays-in-transit
Nominate your ideal dinner companion. This could be someone from the past or a contemporary, alive or dead, but who has an established reputation. What about this person’s reputation has impressed you?
I would like to sit down with Sylvia Plath and talk about the richness of her imagery and the breadth of her reading. Her writing, poetry as well as her journals, is full of obscure references and associations. I might have been too shy, but I would have liked to ask her how much her marriage to Ted Hughes bled into her poems. In her journals, she described intense anxiety that Hughes explained away as paranoia, which in retrospect seems to have been a cruel act of gaslighting. Sylvia Plath is almost entirely defined by The Bell Jar and her eventual suicide, but her journal reveals an intelligent person who thought deeply about the world and her place in it. Perhaps if we had had dinner she could have told me about the journals her husband destroyed after her death.
from essays-in-transit
In 2024, the Shrine of Chandavila in Spain, where two girls claimed to have seen apparitions of Our Lady of Sorrows, was approved by the Vatican as a location for Catholic worship (Brockhaus, 2024). While the Vatican recognised the shrine's importance, it did not address the visions themselves, revealing a tension between Catholic doctrine and the everyday expression of faith.
The Catholic News Agency—an organisation devoted to promoting ‘the Dogmas, Rules and Regulations of the church’ (Eternal Word Television Network, n.d.)—quoted the ‘nihil obstat’ judgment issued by the Vatican as saying that the shrine may ‘continue to offer to the faithful . . . a place of interior peace, consolation, and conversion’ (Cardinal Víctor Manuel Fernández quoted in Brockhaus, 2024). The girls are portrayed as virtuous, dedicating themselves to charity. A ‘nihil obstat’ judgement is an endorsement of the positive impact on the faithful, without authenticating any supernatural phenomena, which seems to reflect a tendency of the Vatican to encourage devotion while maintaining control over claims of divine intervention.
According to the Catholic doctrine of the Virgin Birth, Mary is honoured as the obedient mother of Jesus, chosen to bear God's son. In Catholic art she is often depicted in imperial blue robes. The imagery of splendour, and her indifference to it, reinforce her purity and elevate her above humanity (Sinclair, 2019, p. 69). Blessed to bear a child without sin, she is the ideal the faithful should strive to emulate. In popular Marian devotion, however, Mary takes on a different character: an understanding Mary who suffers with her followers. She endured the pain of childbirth and her son's death, an image central to Marian shrines and apparitions (Sinclair, 2019, pp. 86-87). The power popular faith has attributed to Mary, as a divine being who can touch the world, is in stark contrast to the passive role assigned to her in the official canon.
When the Vatican approved the Shrine of Chandavila for worship in 2024, Cardinal Fernández remarked that ‘[t]here is nothing one can object to in this beautiful devotion’ (Brockhaus, 2024). By describing the pilgrims’ experiences as subjective, and stressing the girls’ virtue, the Vatican shifted the focus away from supernatural intervention. The girls’ experiences were re-contextualised in the light of how they, for the rest of their lives, sought to emulate the celestial Mary. A more overt example of the regulation of popular devotion to Mary can be seen in a 2025 article in the Catholic News Agency. Hannah Brockhaus reported that the Vatican put to rest a decades-long debate about Mary’s role in the redemption of humanity. The title ‘Co-Redemptrix’, used in some denominations, was rejected as inappropriate (Brockhaus, 2025). Cardinal Víctor Manuel Fernández is quoted as saying, ‘[t]his text . . . aims to deepen the proper foundations of Marian devotion by specifying Mary’s place in her relationship with believers in light of the mystery of Christ . . .’ This statement shows how the Vatican reasserts Christ’s role while defining boundaries for Marian veneration.
Lived religion sometimes reevaluates and supplants established religious dogma. Within the Catholic faith, Marian devotion has morphed beyond the role defined for Mary in the Bible and by the Vatican. The Vatican’s careful endorsement of the Shrine of Chandavila shows it deliberately interpreting spontaneous spirituality under official canon, while downplaying its more independent claims.
References
Brockhaus, H. (2024) ‘Vatican approves devotion to 1945 apparition of Our Lady of Sorrows in Spain’, Catholic News Agency, 23 August. Available at: https://www.catholicnewsagency.com/news/258883/vatican-approves-devotion-to-1945-apparition-of-our-lady-of-sorrows-in-spain (Accessed: 9 November 2025).
Brockhaus, H. (2025) ‘Vatican nixes use of “Co-Redemptrix” as title for Mary’, Catholic News Agency, 4 November. Available at: https://www.catholicnewsagency.com/news/267563/vatican-nixes-use-of-co-redemptrix-as-title-for-mary (Accessed: 9 November 2025).
Eternal Word Television Network (n.d.) Press Room: Our Mission. Available at: https://www.ewtn.com/pressroom (Accessed: 9 November 2025).
Sinclair, S. (2019) ‘Mary, the mother of Jesus’, in The Open University (ed.) Reputations. The Open University, pp. 45-106.
from essays-in-transit
Cleopatra is a potent but mutable construct of femininity. Her name conjures up common associations: power, beauty, and sex. These reputations can be called upon with no explanation, as in this Palmolive advertisement from 1910 (A111, Cleopatra Option 1, module materials). An elaborately clad Cleopatra can be seen seated on a curule-style sofa, smiling and leaning over a vase of soap. At a time when the suffrage movement was gaining traction and gender roles were changing, Palmolive stripped Cleopatra of her intellect and title. They took the safer, reactionary route when drawing on a collective Western understanding of Cleopatra. This advertisement presents a sanitised and decorative sensuality stripped of intellect and agency; beauty is fulfilling, and a woman’s power stems from her role as consumer.
Palmolive’s advertisement shows Cleopatra sitting on a sofa in an elaborate headdress, constraining bodice, and heavy jewellery. She is smiling, leaning over her attendant, who is kneeling on the floor holding up a vessel filled with soap. To the right in the image, the merchant who brought the product is bowing with his arms crossed over his chest. Bright and colourfully decorated with vases and flowers, the illustration projects intimacy and innocence. Cleopatra is the epitome of femininity, and she is smiling because it was her goal all along. The ad-copy reinforces the message: ‘Once you become acquainted with . . . Palmolive . . . no other soap will satisfy’ (A111, Cleopatra Option 1, module materials) promising that Palmolive Soap will fulfil you.
The modern interpretation of Cleopatra may be softer than that of the Roman historians Plutarch and Cassius Dio, but it is still influenced by it. Plutarch described her beauty as incomparable, and the ‘. . . attraction in her person . . . a peculiar force of character . . .’ (Plutarch, 1965, p. 294; quoted in A111 Book 1, p. 30), which put all under her spell. Cassius Dio also emphasised Cleopatra’s power as a kind of seductive magic; she bewitched and enslaved (A111 Book 1, p. 31). In the Roman accounts, Cleopatra did not convince; she enthralled. The Palmolive advertisement carries this idea forward: Cleopatra does not do, she is.
In this ad, Cleopatra is depicted as a woman unconcerned with matters outside the domestic: the ruler, even the ‘master of a thousand flatteries’ (A111 Book 1, p. 31), is absent. There is one man whom she may command: a deferential, dark-skinned man eager to deliver products just for her. By depicting this man in a role deemed naturally inferior to a white woman’s in the early 20th-century political landscape, Palmolive empowers her to consume while signalling her appropriate lack of authority in other matters. By contrast, in medieval Arab accounts, Cleopatra was a noble and able monarch who furthered scientific learning. She was admired for her intellect and skill (A111 Book 1, pp. 33-35), not her beauty.
Cleopatra is one of the most well-known women in Western culture, and while the view of her has become more nuanced, Palmolive’s Cleopatra embodies the enduring Roman idea that a woman’s influence lies in her appearance, not her intellect. The advertisement leverages her reputation as a seductress but cuts her claws and turns consumption into a symbol of power.
References
Palmolive Soap Company (1910) ‘Buying Palmolive 3,000 Years Ago’. Advertisement. Milwaukee: B. J. Johnson Soap Company.
Fear, T. (2025) ‘Cleopatra’, in Jones, R. (ed.) Reputations. Milton Keynes: The Open University, pp. 5-43.
The Open University (2025) Cleopatra in Hollywood. A111: Discovering the Arts and Humanities. The Open University. Available at: https://learn2.open.ac.uk/mod/oucontent/view.php?id=2487399 (Accessed: 15 October 2025).
| Seller | Link |
|---|---|
| Corrections Bookstore | https://www.correctionsbookstore.com/ |
| DiscountMags | https://www.discountmags.com/ |
| Biblio UK | https://biblio.co.uk/ |
| Cats In Hat | https://catsinhat.com/ |
| Manga Mart | https://mangamart.com/ |
| Atomic Empire | https://www.atomicempire.com/ |
| Moby The Great | https://www.mobythegreat.com/ |
| AllStora | https://allstora.com/ |
| Little District Books | https://littledistrictbooks.com/ |
| Books on Broad | https://www.booksonbroad.com/ |
| Mystery Lovers Bookshop | https://www.mysterylovers.com/ |
| Bookshop.org | https://bookshop.org/ |
| Seven Seas Entertainment | https://sevenseasentertainment.com/ |
| Magers & Quinn | https://www.magersandquinn.com/ |
| Powell’s | https://www.powells.com/ |
| The Next Chapter Hermiston | https://thenextchapterhermiston.com/ |
| Watermark Books | https://watermarkbooks.com/ |
| Bull Moose | http://www.bullmoose.com/ |
| Learned Owl Book Shop | https://learnedowl.com/ |
| AbeBooks | https://www.abebooks.com/ |
#books #danmei
from Unvarnished diary of a lill Japanese mouse
JOURNAL, 29 November 2025
I finish my tea, then head back downstairs. There are a lot of people today; it feels like the end of the school year. Parents are coming to see a little of how things work here, ahead of the January and February enrolments...
Done! Oh là là, looks like there will be enrolments in February 😎 Apparently the children and teenagers are doing great word-of-mouth publicity 😅 Now we're finally going to eat: A, T.san the former secretary, Yôko, who officially has the keys (she's the one who will lock up the shop), ka chan, and me.
We said our goodbyes, each heading off her own way. That leaves the two of us; we're treating ourselves to the love hotel tonight, and tomorrow morning a lie-in with a breakfast fit for queens and a whirlpool bath. We deny ourselves nothing; we're tired and we need pampering. Takaichi thinks the Japanese don't work enough. Most of them no longer even have a private life; the evening metro is full of zombies, and the morning one too! We are all going mad. Don't count on me: I don't want to die at work, and I don't want my darling to die either. It's crazy that I'm the Japanese one forcing her to slow down. France convinced me of that, a real cultural revolution, but I've internalised it. Though it's true that I work too much as well, there is one thing: the two of us do work, yes, but we do what we love doing, so it's bearable. Meanwhile the bath is ready, and we have a huge bathtub, big enough even for two!
from koanstudy
Hemingway is an app that checks your text for difficult sentences, adverbs, and passive voice. The new desktop version has just landed, and I'm testing it now.
The app highlights readability problems as you type.
If you strive for lean prose, having the problem areas flagged makes editing easier.
It ticks the minimal writing environment boxes. In Write mode, it's just you and your text. There’s Markdown support, live preview, and HTML export. And it has a nice icon, which matters more to Mac users than they'd care to admit.
But I won't be switching to Hemingway just yet. The app is buggy. On a newish iMac, scrolling lags. Misspelled word underlining can be in the wrong places.
Despite my system language being set to British English, Hemingway marks Britishisms as errors. And the licence agreement permits only one user on one machine.
Still, these are fixable problems. What interests me more is a criticism that can also be levelled at the web version: Hemingway can leave text a little limp.
Here's the famous opening of A Tale of Two Cities:
It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way—in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.
Hemingway app scores it at grade 58 — essentially unreadable. Add the 14 full stops Hemingway would like, and you get this:
It was the best of times. It was the worst of times. It was the age of wisdom. It was the age of foolishness. It was the epoch of belief. It was the epoch of incredulity. It was the season of Light. It was the season of Darkness. It was the spring of hope. It was the winter of despair. We had everything before us. We had nothing before us. We were all going direct to Heaven. We were all going direct the other way. In short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.
Grade 2. Much better.
Is it an improvement? No: Dickens was Dickens. But there isn’t a sub-editor alive today who wouldn’t punctuate the hell out of that sentence.
The edit doesn’t ruin the Dickens. It’s the same words in the same order. But it does drain its identity and its specialness.
The app assumes all long sentences are hard to read. I'm no expert on readability, but Dickens' introduction isn't that hard to read.
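Hemingway doesn't publish its formula, but standard readability metrics show why splitting sentences works this magic on the score. Here's a minimal sketch, assuming the Flesch-Kincaid grade-level formula and a crude vowel-group syllable counter; both are my choices for illustration, not Hemingway's actual internals:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels;
    # every word gets at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level: penalises long sentences and long words.
    # grade = 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

one_sentence = "It was the best of times, it was the worst of times."
split_up = "It was the best of times. It was the worst of times."

# Same words, same order: only the full stop changes the score,
# because it halves the words-per-sentence term.
print(fk_grade(one_sentence) > fk_grade(split_up))  # prints True
```

The syllables-per-word term is identical for both versions, so the entire drop comes from sentence length, which is exactly the bias the Dickens example exposes.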
Some adverbs are advisable. The passive voice is occasionally useful. Long sentences can be beautiful.
Its recommendations are perfect for utilitarian text — for which there are many uses. But for creative writing, handle with care. Let your instincts arbitrate.
Plain English doesn't have to be dull. But for the jobbing writer, Hemingway app isn't ready to be first-choice editor—not yet. And if creativity is high on your priority list, it may never be.
#notes #july2014
from Faucet Repair
17 November 2025
Floor 2 still life: In a 1956 interview with James Johnson Sweeney, Duchamp explains that “the danger is to lead yourself into a form of taste,” and this painting feels like it may have been an affirmation of that idea. The tension between that concept and dogged will to repeatedly poke at the personal/familiar is a potentially fruitful gap to widen; a cultivating of the ability to simultaneously self-reflect and self-negate. Relevant to how after being a vagabond for close to four months now, the idea of the familiar has warped. Paintings that are emerging are of consistent concerns popping up in the least consistent of places. They're waypoints, places to slow the senses into thought.