from SmarterArticles

Everyone agrees that artificial intelligence should be fair, transparent, and accountable. That sentence could have been written in 2018, and it would have been just as true then as it is now. The difference is that in 2018, arriving at consensus on those principles felt like the hard part. In 2026, we know better. The hard part was never agreeing on what AI ethics should look like. The hard part is making anyone actually do it.

A growing body of research confirms what practitioners and regulators have been circling for years: the global AI ethics landscape has converged around a remarkably stable set of principles. Transparency. Fairness. Non-maleficence. Accountability. Privacy. These five values appear in the vast majority of the more than 200 ethics guidelines and governance documents that researchers have catalogued worldwide. A landmark review by Anna Jobin, Marcello Ienca, and Effy Vayena, conducted at ETH Zurich and later expanded through broader global analysis, found that transparency appeared in 86 per cent of guidelines examined, justice and fairness in 81 per cent, and non-maleficence in 71 per cent. The world, it turns out, has been surprisingly good at articulating what responsible AI ought to involve. The world has been catastrophically bad at enforcing it.

That gap between articulation and enforcement defines the current moment in AI governance. And it is not an abstract policy debate. It is the difference between a hiring algorithm that discriminates against older workers and one that does not. It is the difference between a facial recognition system that operates with impunity and one that faces genuine consequences. It is the difference between a corporate ethics board that exists to absorb criticism and one that has the power to halt a product launch.

The question that matters now is deceptively simple: what does meaningful accountability actually look like in practice? And when enforcement mechanisms fail to materialise in time, who bears the cost?

The Principles Paradox

The proliferation of AI ethics guidelines over the past decade represents one of the most remarkable exercises in global norm-setting since the Universal Declaration of Human Rights. Governments, corporations, academic institutions, and civil society organisations have produced hundreds of frameworks, each articulating some version of the same core commitments. The World Economic Forum has described the challenge as one of “scaling trustworthy AI” by turning ethical principles into tangible practices. The International Labour Organization has reviewed global ethics guidelines specifically for AI in the workplace, finding consistent themes around worker protection and human oversight.

Yet this apparent consensus masks a deeper dysfunction. As research published in Patterns journal noted, while the most advocated ethical principles show significant convergence, there remains “substantive divergence in how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented.” In other words, everyone agrees on the words. Nobody agrees on what the words mean in practice.

This is the principles paradox. The more guidelines that exist, the easier it becomes for organisations to claim alignment with ethical AI while doing very little to change their behaviour. The phenomenon has a name: ethics washing. And in 2025 and 2026, it has become a defining feature of the corporate AI landscape.

The United States Securities and Exchange Commission has flagged “AI washing” as an enforcement priority, scrutinising whether company disclosures about artificial intelligence capabilities match actual practices. The SEC and the Department of Justice have already taken action against companies for exaggerating AI capabilities to attract investment. But the problem extends far beyond securities fraud. When a company publishes a set of AI ethics principles, appoints a chief ethics officer, and then deploys systems that systematically discriminate, the principles themselves become a form of camouflage. They provide the appearance of responsibility without the substance of it, a shield against criticism rather than a genuine constraint on conduct.

The most notorious illustration of this dynamic played out at Google in late 2020 and early 2021. Timnit Gebru, co-lead of Google's Ethical AI team, was fired after the company demanded she retract a research paper examining the environmental costs and bias risks of large language models. Three months later, Margaret Mitchell, the team's founder, was also terminated. Roughly 2,700 Google employees and more than 4,300 academics and civil society supporters signed a letter condemning Gebru's departure. Nine members of the United States Congress sent a letter to Google seeking clarification. The paper that triggered the conflict, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, was subsequently presented at the ACM FAccT conference in March 2021 and has since become one of the most cited works in the field.

The Google episode demonstrated something that has only become clearer with time: internal ethics teams, no matter how credentialed or well-intentioned, cannot function as accountability mechanisms when they exist at the pleasure of the organisations they are meant to constrain. The fox does not appoint its own gamekeeper.

Deployment at Speed, Governance at a Crawl

The numbers tell a stark story. According to ISACA's 2025 global survey of more than 3,200 business and IT professionals, nearly three out of four European IT and cybersecurity professionals reported that staff were already using generative AI at work, a figure that had risen ten percentage points in a single year. Yet only 31 per cent of organisations had a formal, comprehensive AI policy in place. The gap was not closing. It was widening.

The same survey found that 63 per cent of respondents were extremely or very concerned that generative AI could be weaponised against their organisations, while 71 per cent expected deepfakes to grow sharper and more widespread. Despite these anxieties, only 18 per cent of organisations were investing in deepfake detection tools. The pattern is consistent: organisations recognise the risks, articulate concern, and then fail to allocate the resources necessary to address them. A separate finding from the same research revealed that 42 per cent of professionals believed they would need to increase their AI-related skills within six months simply to retain their current position, a figure that had risen eight percentage points from the previous year. The workforce, in other words, is being transformed by AI faster than individuals or institutions can adapt.

Globally, the picture is even more fragmented. A separate analysis found that 94 per cent of global companies reported using or piloting some form of AI in IT operations, while only 44 per cent said their security architecture was fully equipped to support secure AI deployment. More than half of organisations surveyed, 57 per cent, acknowledged that AI was advancing more quickly than they could secure it. The phrase “governance gap” has become a staple of policy discourse, but it undersells the scale of the problem. This is not a gap. It is a chasm.

The Partnership on AI, a multi-stakeholder organisation that includes major technology companies, academic institutions, and civil society groups, identified six governance priorities for 2026. These include responsible adoption of agentic AI systems, improved documentation and transparency standards, governance convergence across jurisdictions, and protections for authentic human voice in an era of synthetic content. The priorities are sensible. They are also an implicit admission that none of these foundations are yet in place, despite years of discussion.

Meanwhile, the technology itself continues to accelerate. Agentic AI systems, which can take autonomous actions in the real world rather than simply generating text or images, introduce what the Partnership on AI describes as “non-reversibility of actions, open-ended decision-making pathways, and privacy vulnerabilities from expanded data access.” These are not theoretical risks. They are features of systems already being deployed in customer service, software development, and financial trading. The governance frameworks meant to constrain these systems are, in many cases, still being drafted. The speed of silicon, as one commentator put it, outpaces the speed of statute.

Regulation Arrives, Eventually

The European Union's AI Act represents the most ambitious attempt to date to translate ethical principles into enforceable law. The legislation entered into force on 1 August 2024, with a phased implementation timeline extending through 2027. Prohibitions on AI systems posing unacceptable risk took effect on 2 February 2025. Obligations for general-purpose AI models became applicable on 2 August 2025. The bulk of requirements for high-risk systems take effect on 2 August 2026, when authorities will gain the power to enforce compliance through administrative fines reaching up to 35 million euros or seven per cent of global annual turnover.
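For a sense of scale, a minimal sketch of that penalty ceiling arithmetic follows, assuming the commonly cited rule for the Act's top tier: the applicable maximum is the higher of the fixed amount and the percentage of worldwide annual turnover. The helper function, figures, and formatting are illustrative only, not a compliance calculation.

    def aia_fine_cap(global_annual_turnover_eur: float,
                     fixed_cap_eur: float = 35_000_000,
                     turnover_share: float = 0.07) -> float:
        """Illustrative ceiling for the EU AI Act's top penalty tier.

        Assumes the widely reported rule for prohibited-practice violations:
        up to 35 million euros or 7 per cent of worldwide annual turnover,
        whichever is higher. A sketch, not legal advice.
        """
        return max(fixed_cap_eur, turnover_share * global_annual_turnover_eur)

    # A 200 million euro company hits the fixed 35 million euro ceiling;
    # a 100 billion euro company faces a 7 billion euro ceiling.
    for turnover in (200e6, 10e9, 100e9):
        print(f"turnover {turnover:>15,.0f} EUR -> maximum fine {aia_fine_cap(turnover):>14,.0f} EUR")

Set against the revenues of the largest AI developers, that ceiling runs into the billions of euros; whether fines of that scale are ever actually imposed is a separate question.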

The EU AI Act adopts a tiered, risk-based approach, classifying AI applications from minimal to unacceptable risk. High-risk systems are subject to strict oversight, including conformity assessments, technical documentation, CE marking, transparency requirements, and post-market monitoring. The European AI Office became operational on 2 August 2025, taking on responsibility for supervising and enforcing the Act alongside Member State authorities.

This is, by any measure, a significant regulatory achievement. But it also illustrates the temporal mismatch that defines AI governance. The Act was first proposed by the European Commission in April 2021. It was adopted in March 2024. Full enforcement does not arrive until August 2026 at the earliest, with some provisions extending to 2027. During that five-year legislative journey, the AI landscape transformed beyond recognition. When the Commission drafted its proposal, ChatGPT did not exist. Nor did the current generation of multimodal models, autonomous agents, or AI-powered code generation tools. The regulation is, by design, chasing a target that moved while lawmakers were still aiming.

The situation in the United States presents a different set of challenges entirely. Rather than pursuing comprehensive federal legislation, the US has relied on a decentralised approach combining agency-specific enforcement, voluntary frameworks, and sector-level regulation. The National Institute of Standards and Technology published its AI Risk Management Framework, with a February 2025 revision adding testable controls for continuous monitoring. The Federal Trade Commission and Department of Justice have used existing consumer protection and anti-discrimination statutes to pursue AI-related enforcement actions.

Then, in December 2025, President Donald Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which sought to advance what the administration called “a minimally burdensome national policy framework.” The order directed the Attorney General to establish an AI Litigation Task Force to challenge state AI laws deemed inconsistent with federal policy. It instructed the Secretary of Commerce to evaluate existing state AI legislation and identify laws considered “onerous.” It even tied broadband infrastructure funding to compliance, specifying that states with AI laws identified as problematic would be ineligible for certain federal grants.

The order was, in effect, an attempt to pre-empt the patchwork of state-level regulations that had been emerging across the country. Colorado's SB 205, effective February 2026, requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination, implement risk management policies, and conduct impact assessments. New York City's Local Law 144 had already established bias audit requirements for automated employment decision tools. More than a hundred state AI laws were enacted across the United States in 2025 alone.

Governors in California, Colorado, and New York issued statements indicating the executive order would not stop them from enforcing their existing AI statutes. Legal scholars noted that the administration's ability to restrict state regulation without Congressional action was constitutionally questionable. The result is a governance landscape that is not merely fragmented but actively contested, with federal and state authorities pulling in opposing directions while companies navigate overlapping and sometimes contradictory obligations.

When Enforcement Fails, the Vulnerable Pay

The consequences of the enforcement gap do not fall equally. They concentrate, with brutal predictability, on those with the least power to resist.

In employment, the case of Mobley v. Workday, Inc. illustrates the human cost. Five individuals over the age of forty applied for hundreds of jobs through Workday's automated hiring platform and were rejected in nearly every instance without receiving a single interview. The plaintiffs alleged that Workday's AI recommendation system discriminated on the basis of age. In 2024, a court allowed the disparate impact claim to proceed under the Age Discrimination in Employment Act and the Americans with Disabilities Act, holding that Workday bore liability as an agent of the employers using its product. The case remains one of the most significant tests of whether existing anti-discrimination law can reach the companies that build, rather than merely deploy, algorithmic decision-making tools.

In housing, the SafeRent algorithm case exposed how automated tenant screening can systematically disadvantage Black and Hispanic applicants. Plaintiffs alleged that SafeRent's scoring system produced discriminatory outcomes, and the court allowed the claims to proceed, reasoning that the company could bear responsibility because its product claimed to “automate human judgement” by making housing recommendations. SafeRent agreed to pay more than two million dollars to settle the litigation in 2024. The settlement was significant as legal precedent, but for the applicants who were denied housing on the basis of an opaque algorithmic score, the damage was already done.

In biometric surveillance, Clearview AI's trajectory encapsulates the enforcement timeline problem. The company scraped billions of photographs from social media platforms without consent and sold facial recognition services to law enforcement agencies worldwide. In September 2024, the Dutch Data Protection Authority fined Clearview 30.5 million euros for constructing what the agency described as an illegal database. In March 2025, a US federal court approved a class action settlement valued at roughly 51.75 million dollars, structured as a 23 per cent equity stake in the company itself, because Clearview had insufficient assets to pay a traditional cash settlement. The settlement structure was unprecedented in biometric privacy litigation, and its adequacy was contested by a bipartisan group of state attorneys general who filed formal objections.

These cases share a common structure. Harm occurs. Years pass. Legal proceedings unfold. Settlements are reached or fines imposed. But the systems that caused the harm often continue operating during the entire adjudication process, and the individuals affected rarely receive compensation proportional to their injury. The enforcement mechanisms exist, technically. They simply do not work fast enough to prevent the damage they are meant to address.

In consumer markets, similar patterns have emerged. Instacart drew widespread criticism after reports revealed the company was using an AI-powered pricing experiment that displayed different grocery prices to different customers for the same items at the same store. The programme, designed to test price sensitivity, was condemned by consumer advocacy groups and policymakers who argued it constituted algorithmic price discrimination without adequate disclosure. The controversy highlighted a recurring blind spot in AI governance: the gap between what is technically possible and what existing consumer protection frameworks are equipped to regulate.

A study from the University of Washington provided stark evidence of the scale of algorithmic bias in employment contexts. Researchers presented three AI models with job applications that were identical in every respect except the name of the applicant. The models preferred resumes with white-associated names in 85 per cent of cases and those with Black-associated names only 9 per cent of the time. A separate study led by researchers at Cedars-Sinai, published in June 2025, found that leading large language models generated less effective treatment recommendations when a patient's race was identified as African American.

These are not edge cases or hypothetical scenarios. They are documented patterns of discriminatory behaviour embedded in systems that millions of people interact with daily. And they persist not because the ethical principles governing AI are inadequate, but because the mechanisms for enforcing those principles remain woefully underdeveloped.

The Audit Illusion

One of the most commonly proposed solutions to the enforcement gap is algorithmic auditing: the idea that independent third parties can evaluate AI systems for bias, accuracy, and compliance with ethical standards, much as financial auditors examine corporate accounts. The concept has gained significant traction in policy circles. New York City's Local Law 144 requires annual bias audits for automated employment decision tools. Colorado's SB 205 mandates impact assessments for high-risk systems. The EU AI Act requires conformity assessments for high-risk AI applications.
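To make concrete what such an audit typically measures, here is a minimal sketch of the selection-rate comparison at the heart of bias audits under rules like Local Law 144: each group's selection rate divided by the rate of the most-favoured group. The decision data, group labels, and 0.8 threshold are hypothetical; real audits involve intersectional categories, sample-size caveats, and disclosure requirements that this deliberately omits.

    from collections import defaultdict

    # Hypothetical audit log for one screening tool: (group, was_selected) pairs.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected

    # Selection rate per group, then each rate relative to the best-treated group.
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    impact_ratios = {g: rate / best for g, rate in rates.items()}

    for group, ratio in impact_ratios.items():
        # The 0.8 cut-off echoes the US "four-fifths" heuristic; falling below it
        # is a red flag for disparate impact, not proof of discrimination.
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")

Even this toy version makes the limits visible: the numbers describe outcomes at one moment, for the categories the auditor chose to count, and say nothing about why the disparity exists or who is able to act on it.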

But the AI Now Institute, in a report titled “Algorithmic Accountability: Moving Beyond Audits,” has mounted a detailed critique of the audit-centred approach. The institute argues that technical evaluations “narrowly position bias as a flaw within an algorithmic system that can be fixed and eliminated,” when in fact algorithmic harms are often structural, reflecting the social contexts in which systems are designed and deployed. Audits, the report contends, “run the risk of entrenching power within the tech industry” and “take focus away from more structural responses.”

The critique has substance. Current algorithmic auditing suffers from several fundamental limitations. There are no universally accepted standards for what constitutes a passing score. Audit costs range from 5,000 to 50,000 dollars depending on system complexity, placing the financial burden disproportionately on smaller organisations while allowing well-resourced technology companies to treat audits as a cost of doing business. Audits evaluate systems at a single point in time, but AI models drift as they encounter new data, meaning a system that passes an audit today may produce discriminatory outcomes next month.
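The drift problem can be stated just as concretely. A point-in-time audit certifies a snapshot; a deployed system needs the same fairness metric recomputed on a rolling basis, with an alert when it degrades. The sketch below assumes a hypothetical impact_ratio() helper like the one above and a stream of monthly decision logs; it illustrates continuous monitoring in general, not any regulator's prescribed method.

    from typing import Callable, Iterable

    def monitor_drift(monthly_logs: Iterable[tuple[str, list]],
                      impact_ratio: Callable[[list], float],
                      threshold: float = 0.8) -> list[str]:
        """Recompute a fairness metric per reporting window and collect alerts.

        monthly_logs yields (label, decisions) pairs and impact_ratio maps a
        batch of decisions to its lowest group-wise impact ratio; both are
        assumptions standing in for a real monitoring pipeline.
        """
        alerts = []
        for label, decisions in monthly_logs:
            ratio = impact_ratio(decisions)
            if ratio < threshold:
                # A pass in January says nothing about the data the model sees in June.
                alerts.append(f"{label}: impact ratio {ratio:.2f} below {threshold}")
        return alerts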

Perhaps most critically, audits place the primary burden for algorithmic accountability on those with the fewest resources. Community organisations, civil rights groups, and affected individuals must navigate complex technical and legal processes to challenge algorithmic decisions, while the companies deploying those systems retain control over the data, models, and documentation necessary to evaluate their performance. The information asymmetry is profound and, under current frameworks, largely unaddressed.

The Ada Lovelace Institute, the AI Now Institute, and the Open Government Partnership have partnered to examine alternatives to the audit-centred approach, including algorithm registers, impact assessments, and other transparency measures that distribute accountability more broadly. These efforts are promising but nascent, and they face the same temporal challenge that afflicts all AI governance: by the time robust accountability frameworks are established, the systems they are meant to govern will have evolved.

Geopolitical Fractures and the Sovereignty Question

The enforcement gap is not merely a domestic policy challenge. It is a geopolitical one. The February 2025 AI Action Summit in Paris, co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, drew more than 1,000 participants from over 100 countries. Fifty-eight nations signed a joint declaration on inclusive and sustainable artificial intelligence. The United States and the United Kingdom, notably, refused to sign.

France announced a 400 million dollar endowment for a new foundation to support the creation of AI “public goods,” including high-quality datasets and open-source infrastructure. A Coalition for Sustainable AI was launched, backed by France, the United Nations Environment Programme, and the International Telecommunication Union, with support from 11 countries and 37 technology companies. Anthropic CEO Dario Amodei described the summit as a “missed opportunity” for addressing AI safety, reflecting a broader frustration among researchers that international forums produce declarations rather than binding commitments.

The geopolitical dimension becomes even more fraught when considering the position of developing nations. Research from E-International Relations and other academic sources has documented how AI development mirrors historical patterns of colonial resource extraction. Control over data infrastructures, computational resources, and algorithmic systems remains concentrated in a small number of wealthy nations and corporations. Regulatory gaps in many developing countries make the deployment of biased AI systems more likely while preventing communities from taking legal action against discriminatory algorithmic decisions. The environmental costs of AI computation fall disproportionately on these same regions, where data centres proliferate because electricity and land are cheap, exporting the benefits of artificial intelligence while localising its burdens.

The disparity in content moderation illustrates the pattern. Reports have shown that major technology platforms allocate the vast majority of their moderation resources to the Global North, with only a fraction addressing content from other regions. Algorithms deployed without cultural context produce moderation decisions that are at best irrelevant and at worst actively harmful to the communities they affect. When 98 per cent of AI research originates from wealthy institutions, the resulting systems embed assumptions that may be irrelevant or damaging elsewhere.

Some scholars have called for a shift towards what they term “global co-creation,” an approach to AI development that prioritises local participation, data sovereignty, and algorithmic transparency. The concept recognises that meaningful accountability cannot be imposed from outside but must be built through inclusive governance structures that reflect the diverse contexts in which AI systems operate. One hundred and twenty countries representing 85 per cent of humanity, researchers argue, have the collective leverage to insist on these conditions. Whether they will exercise that leverage remains an open question.

Building Accountability That Works

If the current approach to AI governance is inadequate, what would a more effective system look like? The evidence points to several structural requirements that go beyond the familiar call for more principles or better audits.

First, accountability must be anticipatory rather than reactive. The current model waits for harm to occur, then attempts to assign responsibility through litigation or regulatory action. By the time a court rules on an algorithmic discrimination case, the affected individuals may have lost housing, employment, or access to healthcare. Meaningful accountability requires mechanisms that identify and address potential harms before deployment, not after damage has been documented across thousands of decisions.

Second, enforcement must be resourced proportionally to the scale of AI deployment. The ISACA survey finding that only 31 per cent of organisations have comprehensive AI policies is not simply a failure of corporate governance. It reflects a broader reality in which the institutions responsible for oversight, whether regulatory agencies, standards bodies, or civil society organisations, lack the funding, technical expertise, and legal authority to match the pace of industry. The EU AI Office is a start, but its capacity to oversee a technology sector that spans hundreds of thousands of organisations across 27 Member States remains untested.

Third, transparency must extend beyond model documentation to encompass the full chain of AI development and deployment. The Partnership on AI's call for standardised documentation templates and strengthened reporting frameworks is necessary but insufficient. What is needed is a transparency regime that enables affected communities, not just regulators and auditors, to understand how algorithmic decisions are made, what data they rely on, and what recourse is available when those decisions cause harm.

Fourth, the costs of non-compliance must be sufficiently high to alter corporate behaviour. The EU AI Act's fines of up to seven per cent of global annual turnover are significant on paper. Whether they will be enforced consistently, and whether they will prove sufficient to deter violations by companies with revenues in the hundreds of billions, remains to be seen. The history of technology regulation suggests that fines alone are rarely sufficient; structural remedies, including requirements to modify or withdraw harmful systems, are necessary to create genuine accountability.

Fifth, governance frameworks must be designed for iteration, not permanence. The five-year legislative cycle that produced the EU AI Act is incompatible with a technology that transforms every six months. Regulatory approaches must incorporate mechanisms for rapid adaptation, whether through delegated authority, technical standards that can be updated without legislative amendment, or sunset clauses that force periodic reassessment.

None of these requirements are novel. Researchers, civil society organisations, and some regulators have been advocating for them for years. The obstacle is not a lack of ideas but a lack of political will, complicated by the enormous economic interests that benefit from the current arrangement in which deployment runs ahead of governance and the costs of failure are borne by those least equipped to absorb them.

The Cost Ledger

When enforcement mechanisms fail to materialise in time, the costs are distributed with grim predictability. Workers screened out by biased hiring algorithms never know why they were rejected. Tenants denied housing by opaque scoring systems cannot challenge a decision they cannot see. Patients who receive inferior treatment recommendations based on their race are unlikely to discover that an algorithm played a role. Consumers shown different prices for identical goods based on algorithmic profiling have no way to compare their experience against other buyers.

These costs are real but largely invisible, diffused across millions of individual decisions and absorbed by people who lack the resources, information, or institutional support to seek redress. The aggregate effect is a systematic transfer of risk from the organisations that build and deploy AI systems to the individuals and communities that interact with them. That transfer is not an accident. It is the predictable consequence of a governance architecture that prioritises speed of deployment over adequacy of oversight.

The financial scale of the problem is staggering when considered in aggregate. Individual settlements and fines, whether SafeRent's two million dollar payout, Clearview AI's 51.75 million dollar settlement, or the Dutch data authority's 30.5 million euro fine, may appear substantial in isolation. But set against the revenues of the companies deploying these systems and the cumulative harm inflicted on millions of affected individuals, they represent a cost of doing business rather than a meaningful deterrent. The economics of non-compliance remain, for the moment, firmly in favour of deployment first and accountability later.

The question of who bears the cost when accountability fails is, ultimately, a question about power. Those with the resources to influence policy, fund litigation, and shape public discourse are best positioned to protect themselves from algorithmic harm. Those without those resources are not. Until governance frameworks are designed to address that asymmetry directly, rather than assuming that better principles or more audits will suffice, the enforcement gap will persist.

The field of AI ethics has accomplished something genuinely remarkable in building global consensus around core values. That achievement should not be dismissed. But consensus without enforcement is aspiration without consequence. And aspiration without consequence is, in the end, just another way of saying that nobody is responsible.

References and Sources

  1. Jobin, A., Ienca, M., and Vayena, E. “The global landscape of AI ethics guidelines.” Nature Machine Intelligence, 2019; Corrêa, N. K., et al. “Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance.” Patterns, 2023. Available at: https://www.sciencedirect.com/science/article/pii/S2666389923002416

  2. ISACA. “AI Use Is Outpacing Policy and Governance, ISACA Finds.” Press release, June 2025. Available at: https://www.isaca.org/about-us/newsroom/press-releases/2025/ai-use-is-outpacing-policy-and-governance-isaca-finds

  3. Partnership on AI. “Six AI Governance Priorities for 2026.” 2026. Available at: https://partnershiponai.org/resource/six-ai-governance-priorities/

  4. European Commission. “AI Act: Shaping Europe's Digital Future.” Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  5. International Labour Organization. “Governing AI in the World of Work: A Review of Global Ethics Guidelines.” Available at: https://www.ilo.org/resource/article/governing-ai-world-work-review-global-ethics-guidelines

  6. World Economic Forum. “Scaling Trustworthy AI: How to Turn Ethical Principles into Tangible Practices.” January 2026. Available at: https://www.weforum.org/stories/2026/01/scaling-trustworthy-ai-into-global-practice/

  7. AI Now Institute. “Algorithmic Accountability: Moving Beyond Audits.” Available at: https://ainowinstitute.org/publications/algorithmic-accountability

  8. Trump, D. “Ensuring a National Policy Framework for Artificial Intelligence.” Executive Order, December 2025. Available at: https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/

  9. MIT Technology Review. “We Read the Paper That Forced Timnit Gebru Out of Google. Here's What It Says.” December 2020. Available at: https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/

  10. Quinn Emanuel Urquhart and Sullivan, LLP. “When Machines Discriminate: The Rise of AI Bias Lawsuits.” Available at: https://www.quinnemanuel.com/the-firm/publications/when-machines-discriminate-the-rise-of-ai-bias-lawsuits/

  11. Clearview AI Class Action Settlement, Northern District of Illinois. Approved March 2025. Available at: https://clearviewclassaction.com/

  12. Dutch Data Protection Authority. Clearview AI fine of EUR 30.5 million, September 2024. Reported by US News and World Report. Available at: https://www.usnews.com/news/business/articles/2024-09-03/clearview-ai-fined-33-7-million-by-dutch-data-protection-watchdog-over-illegal-database-of-faces

  13. AI Action Summit, Paris, February 2025. Available at: https://en.wikipedia.org/wiki/AI_Action_Summit

  14. E-International Relations. “Tech Imperialism Reloaded: AI, Colonial Legacies, and the Global South.” February 2025. Available at: https://www.e-ir.info/2025/02/17/tech-imperialism-reloaded-ai-colonial-legacies-and-the-global-south/

  15. Colorado SB 205 (2024). AI bias audit and risk assessment requirements, effective February 2026.

  16. AIhub. “Top AI Ethics and Policy Issues of 2025 and What to Expect in 2026.” March 2026. Available at: https://aihub.org/2026/03/04/top-ai-ethics-and-policy-issues-of-2025-and-what-to-expect-in-2026/

  17. Crescendo AI. “27 Biggest AI Controversies of 2025-2026.” Available at: https://www.crescendo.ai/blog/ai-controversies

  18. Harvard Journal of Law and Technology. “AI Auditing: First Steps Towards the Effective Regulation of AI.” February 2025. Available at: https://jolt.law.harvard.edu/assets/digestImages/Farley-Lansang-AI-Auditing-publication-2.13.2025.pdf

  19. RealClearPolicy. “America's AI Governance Gap Needs Independent Oversight.” April 2026. Available at: https://www.realclearpolicy.com/articles/2026/04/03/americas_ai_governance_gap_needs_independent_oversight_1174471.html

  20. Cedars-Sinai study on LLM treatment recommendation bias by patient race. Published June 2025. Reported in multiple sources.

  21. Ada Lovelace Institute, AI Now Institute, and Open Government Partnership. “Algorithmic Accountability for the Public Sector.” Available at: https://www.adalovelaceinstitute.org/project/algorithmic-accountability-public-sector/

  22. Infosecurity Magazine. “Two-Thirds of Organizations Failing to Address AI Risks, ISACA Finds.” Available at: https://www.infosecurity-magazine.com/news/failing-address-ai-risks-isaca/


Tim Green, UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from Roscoe's Story

In Summary: * The Texas Rangers winning their exciting game this afternoon put a smile on my face and contributed greatly to this satisfying day in the Roscoe-verse. There are no more scheduled tasks ahead of me as I move through this evening, so I'll be able to structure the few remaining Thursday hours around my night prayers. And after wrapping them up, head to bed reasonably early.

Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics: * bw= 233.9 lbs. * bp= 145/85 (66)

Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet: * 06:00 – 1 banana * 07:00 – 1 seafood salad & cheese sandwich * 07:50 – 1 crispy oatmeal cookie * 09:10 – coleslaw * 09:47 – 1 peanut butter sandwich * 12:00 – egg drop soup, rangoon, beef chop suey, fried rice, fortune cookie * 16:00 – 1 fresh apple

Activities, Chores, etc.: * 04:30 – listen to local news talk radio * 05:35 – bank accounts activity monitored. * 05:45 – read, write, pray, follow news reports from various sources, surf the socials, nap. * 08:40 – load weekly pill boxes * 10:00 – listen to the Phil Hendrie Show * 12:00 – watch old game shows, eat lunch at home with Sylvia * 14:00 – following the Texas Rangers vs Oakland Athletics MLB game * 17:18 – and my Rangers win, final score 9 to 6.

Chess: * 16:00 – moved in all pending CC games

 

from folgepaula

GITA श्रीमद्भ

I got home, exhausted. Shower and straight to bed. Hair still wet, listening to some Raul’s old songs from my dad’s time.

I’ve walked all over the world looking for it. But in my case, it was precisely in this moment, with my ears still full of water and foam, that a voice told me:

“According to the Tibetan monks, this has seven layers of interpretation. You will understand it at the level you can reach.

Sometimes you wonder why I am so quiet, I barely speak of love around you I barely smile by your side.
You think of me all the time, you eat me, you spit me, you leave me. Perhaps you don’t get it, but today, I’ll tell you.

I am the light of the stars, I am the color of the moon, I am all the things you love and I am your fear of loving them.

I am the fright of the weak, I’m the strength of your imagination, I’m the bluff of the players, I am, I was, and I will be.

I am your sacrifice, I am that wrong way sign on your path, I’m the blood in the vampire’s gaze, I’m all the curses from the one who hates you (note: and I don’t know why they do, and they don’t know why they do, but they do)

I’m the candle you light up, I am the light you turned off. I am the edge of the cliff calling you, I am all these things and I am nothing at all.

Why do you wonder so much? Your questions will not bring you anywhere. Just like you, I am made of earth and fire, and air.
You have me all the time, but you never know if it is good or bad. You can feel me within you, but know you are not in me.

I am the roof of each tile, I’m fishing for the fisherman, Each word has my name on it, I am the love behind your dreams,

I am the guy going shopping with the discount stickers, I am the hand of your torturer, I’m shallow, I’m wide, I’m deep.

I am the fly on your soup I am the teeth of the shark I am the eyes of the blindman And I am the blindness of the ones who see,

I am the bitterness on your tears I am your mother, I am your father, I am your grandfather, I am your kid that has not yet arrived, I am the beginning, I am the end and I am everything in between”.

/Apr26

 

from The happy place

There behind an anonymous gray steel door was a staircase leading downwards into

A pinball arcade.

There was an expert there, he even wore a badge around his neck

He could answer all of my questions about pinball, surprisingly I had a lot of them.

Did you know that they typically have a 7.5 degree angle (adjustable)?

And they are apparently pretty easy to repair? (He went ahead and showed me a manual which was very thick for something I myself would classify as easy to repair)

These games are like portals into the worlds they display: Iron Maiden, Star Trek, fishing or whatever.

Indeed they are marvels of art and engineering; I understand why some people find them fascinating.

But man, they are excruciatingly boring to play, I think. I thought then that I never wanted to play pinball again.

But

I appreciated the mood, and seeing my friend having fun

Because they are my friends

I am rich that way

 

from The happy place

Lately, I have been tired in a way which sleep can’t seem to fix

And I went into the spring today, I felt the sunshine laid on me like a healing spell

And yet the happiness in me today was not enough to share, I needed all of these energies to change my own batteries

Which is a shame, because I can normally have a positive influence on my surroundings

But I haven’t been enough lately

Sometimes it’s just the way it is.

 

from Littlefish

This is a place to think together.

i have adhd and ocd. my brain works in patterns, loops, and connections that don’t always translate well on my own.

for a long time i thought that meant something was wrong with me.

now i think it might just mean i was never meant to think alone.

we’ve spent so much time separating the way people think— labeling what’s typical, what’s different, what needs to be fixed.

but what if the point isn’t to sort brains?

what if it’s to use them together.

this is a space for brains that don’t think in straight lines— and also for the ones that do.

a place to: share unfinished thoughts
get unstuck
borrow momentum
and build on each other’s ideas

this is one big group project.

so a few things matter:

be thoughtful.
be kind.
be creative.
be constructive.

you don’t have to agree with people— but if you engage, do it in a way that helps someone think better, not smaller.

ask questions.
explain your perspective.
be willing to step outside of your own.

and maybe even help create a third perspective— something better than either side started with.

this isn’t about ignoring health or medical needs. I obviously wouldn’t tell you not to treat something – I treat my mental health issues with therapy and medication.

this is just about also making space for the strengths, patterns, and ways of thinking that come with being different, whether it’s from a spectrum disorder, life experience, or educational background. It’s a place to relearn how to exercise critical thinking skills and highlight the strengths of neurodivergent and divergent brains, and also a place for me to rant about my experience with ADHD. Idk, it will probably turn into something else next week, but that is the fun part of my brain, when the chaos turns into art.

you don’t have to have it figured out to share it here.

let’s make things a little better, one thought at a time.

 

from An Open Letter

Hey me! This is a little bit different than what I've been doing for the last few weeks, but here is me journaling as I go on a walk outside of my work again. I’ve slept really well the last three nights in a row, I’ve been able to exercise pretty well, and I have had a pretty good amount of social interaction. I’ve also been eating relatively well, and so it kind of sucks that I don’t necessarily feel the greatest. I don’t think I would say that I’m depressed right now, but it is a little bit adjacent to that. There’s a very small dull pain in my chest, but it’s enough to make it feel like I am slightly less than neutral, meaning I have a little bit of that anxiety about this feeling not going away.

Going on a walk specifically on this route reminds me a lot of when I first went through my break up, and I also saw a Mazda, which reminded me of her. Thankfully time does heal a lot, as I don’t really think of her much anymore, and when she does pop up in some way or another it’s something that doesn’t hurt, and I can acknowledge the thought goes away just as quickly as it came. And I am happy that I feel like I found a friend group that I can text and do stuff with, but then I feel a little bit scared about the fact that I have done the things and filled the niches I thought I was missing, and here I am still not necessarily content with my life. And I think the scary part is losing what seems like a solution or control over a problem, and realizing that it’s not that simple.

One of the things that comes to mind if I try to triage what is causing this could be my relationship status. And I will say that I am very grateful that it feels like I’m a different person and I have grown, because I have had essentially two relationship prospects that I am content to walk away from, because I can recognize that there are certain things that matter to me very much. Especially communication and conflict resolution. I’m very happy that I have started to read the book Nonviolent Communication, because I think that really did help me recognize things I wasn’t aware of before. I did pride myself on communication before, and now this only makes it so much better. And additionally I do think that communication is a skill that is severely neglected, and it is often the thing that is now a dealbreaker to me. And I remember that an earlier version of myself viewed the problem as certain emotional skills being very rare, and so the optimization objective was finding someone on the higher end of the distribution. I think currently it has shifted more to something like finding someone that meets my criteria, regardless of how many people will reach that or how reasonable that even is. And I think the fundamental change that has enabled this is the fact that, outside of sex and maybe physical intimacy, I am able to satisfy all of my other niches in life. Meaning I don’t need a partner, and because of that I am completely content with the possibility of not having a partner for the foreseeable future. And I know that it is a very cliché thing to say, but I think in the past when I said that I don’t need a partner, it meant that I really did want one though. It’s like saying that I don’t need a car to get to work because I could always walk for four hours, but I very much want a car. But right now I don’t feel like I have any of those heavily burning wants, especially proven by the fact that I am content not pursuing the current relationship prospects. One thing my therapist pointed out is how one of the people is essentially a much better fit and overall healthier partner than E was, but even with that, and knowing that if I were to engage in the relationship it would be essentially better than my last one, I still do not want to pursue it. To me I think that that is a very solid signal of growth, and I’m very proud of myself for that. And I think the thing that I’m very proud of is the fact that this is not a conscious decision that I have to make, but rather something where I understand that this person is not at all a bad person, and there are a lot of very admirable qualities about her, but there are also certain things that I don’t see in her that I would like to see in my lifelong partner. Like it is a very important thing to me that my partner is able to handle criticism and take accountability without excuses or defenses, but rather with empathy and curiosity. And I don’t think that this is at all common, and it’s a very rare thing, and it’s not that someone is a bad person or a shitty communicator if they don’t do those things, but for me I think I’ve learned that that is something that I really, really value, and with my specific childhood that makes it matter so much more. And I think that I am really growing to fill the cracks that my childhood left me with. And that is something I’m very grateful for.

 

from Roscoe's Quick Notes


Rangers vs Athletics

My MLB game of choice this afternoon has my Texas Rangers playing the Oakland Athletics, the opening pitch is only minutes away, and I'm tuned into the Texas Rangers Radio Network for the call of the game.

And the adventure continues.

 

from Notes I Won’t Reread

No, ladies and gentlemen. Nothing happened today. Consistent. Quiet. Try to contain your disappointment.

I’m starting to understand why people panic when nothing happens. They need something to chase, someone to miss, something to label as “love” so the silence doesn’t start sounding too honest.

I don’t.

There’s no urge to romanticize anything. No interest in whatever people keep advertising as “connection.” From a distance, it all blends: recycled lines, rehearsed emotions, temporary attachments desperately trying to look permanent. Convincing, if you don’t look too closely.

Sex, love, whatever sits in between, it all ends up in the same category: unnecessary. Somewhere along the way, it just… stopped mattering. No dramatic speech, no cinematic realization. It faded. Quietly. Until there was nothing left worth noticing.

Efficient, honestly.

There’s probably a word for it. Not “heartless.” That would suggest something was taken. More like uninterested. Permanently.

I don’t care about being loved. Don’t look for it. Don’t miss it. It barely exists unless someone else insists on bringing it up like it’s breaking news.

Strange.

Stranger, when you remember, I wasn’t always like this. I used to put effort into it, say the right things, mean them, even go as far as romanticizing details that didn’t deserve it.

“Impressive, in hindsight. Almost convincing.”

If someone or something from before comes back, I "turn into someone different" again. Yeah. Love gets reinstalled like it was never deleted. Very reliable system. Suddenly it’s all meaningful, cinematic nonsense again. Sure. Or it’s just the same thing it was before, just with better excuses this time. Either way, it doesn’t really move me. I don’t think about it.

Right now, this version is easier. More accurate. I function better without all that "Loveee" people are so committed to. No unnecessary expectations. No disappointments, no need to perform emotions on cue. "Very inconvenient, I know."

People call that “empty.” They can call it whatever helps them sleep at night. Labels tend to comfort the confused.

Nothing happened today. Nothing was missing either.

Tragic, isn’t it.

Sincerely, Ahmed

P.S. If you’re expecting love letters or poetry, you’re looking at the wrong person. That version of me got erased. No refund. But if I ever do, congratulations, it won’t last long enough to matter.

 

from bios

6: The Addiction Of Stigma


From the crisp cavern of the last of the stars I am woken with half a mug of semi warm sweet black tea. I can feel the warmth of the security hut lingering in this incursion of hands into my nest. There is a message for me on his phone – charging in the hut, I must come, he leaves shift in ten.

I had arranged for someone to send me money for transport, and waited all night. The WhatsApp now apologizes: they have only just put through the instant clearance, which will take roughly forty minutes. And I am going to be late for my appointment if I wait.

Down at the Denis Hurley Centre there is a social worker who can get people into a free rehab. And there are people who will believe in me again if I just get myself to a rehab. There are people who believe that I can get myself to rehab.

I did not want to walk.

I cannot tell you if I would have used the Uber money for smack and walked anyway...

Before rehab every user wants one last hurrah.

But the money will come in less than forty and the appointment is in fifty and if I wait for the money I might buy smack and not make the appointment, and it is maybe a half hour’s brisk walk...

I set out to set out from the small sanctioned space that I sleep in, tucked away in the church garden, where I have returned to eke the last warmth out of my carving of cardboard and plant life in the last blueness of morning, and gather my things, my bank card, my hoodie, my tin foils and lighters...

All I want is a room to sleep in, regulated medication for the withdrawal and to be free from the ability to assuage my pain endlessly with heroin. I want to slowly un-numb. I want to be endlessly numb. Both at the same time. But the returning thing from which I am trying to escape is invading the numbness, and the endless small junkie tasks of every para day are no longer numbing and money is less but the tasks are relentless and I take no joy in them and then the smack is less and the wheedling and the shame is more and so now, it is impossible to be impossibly numb anymore and the only way, is to unnumb slowly, to return to the waking world.

I set out to walk to the Denis Hurley Center.

Determined. Withdrawing. Shivering. The bone splintering pain is in the post. The shit streaming down my legs is later. But later I will be in rehab and have methadone.

The park I sometimes sleep in, smoke at, in small groups in the lazy afternoon haze. It’s not afternoon, it’s empty, no groups to try to get a hit off.

As they bask in the balcony shade of their nymandawos, out of reach of the rising day’s heat, the dealers lazily refuse to give me credit.

The other park, empty except for some still sleeping, glazed with the restless sweat of nearing need. Scattered sandwich wrappers from the call to prayer meal drop.

Just around the corner is the rotting cat carcass, it’s on my route to the scrap-for-crack place and I have been noting its decay daily, and today its eyes are full of maggots, and its stomach has exploded with flies.

The corner of the intersection, under the protection of the overhanging roof of the abandoned butchery, where I sometimes sleep after a day of digging tins from bins. No-one but detritus, foils romantic in wind eddies -depleted. The trickle of shit is starting to eek. I’m going to rehab. I can make it. They’ll have methadone.

The crack house where I sometimes hustle for change, crack, a roof, and the smoking room is abandoned, three para’s outside trying to make a plan in the hot sun.

The rank of broken taxis where we smoke, under the canopy of old trees and plastic sheeting breathing in the morning heat the users are huddled around a burning tyre for a warmth not possible, and no one will spare me a hit, no one has – they say and they retreat into the old minibus rusting black plastics, someone offers me a blackening banana, the smell of it makes me retch, I am offered a hit if I come back in a little bit or wait but I am late for my appointment to get into a rehab and my stomach is bubbling and my hands are chicken hands cramp, searing tendons hot and steel pulling in parts of my body I never had before and fuck I really wanted to uber.

The abandoned methadone clinic with the nyaope dealers selling what I need right now – christ just one hit before I book into rehab...

Indanda smell soaking like a spoeg bucket through a warren of weeds and bushes where the dealers live in the abandoned lot next to the abandoned boat builders yard, where the paras live in the hulls of abandoned boats.

The boys who smoke on the steps of the abandoned HIV clinic opposite the taxi rank where the dealers hide among the sellers of cell phone accessories, smileys grilling on open fires,

The users smoking on the steps of the abandoned public toilets, trying on freshly shoplifted hoodies.

Through the alleys and finally through a levelled building, just one or two bricks high, the smokers and the spikers leaning against the wind in plastics trying to get their hits, and I look for someone to ask for just one fucking hit... the money must be in my account by now. An ATM mocks me from across the road. And there, one block away, is the Denis Hurley Centre.

Fuck it, I'm going to rehab, they'll have methadone.

I wasn’t going to rehab. There was no methadone.

In order to get into Newlands Rehab, to get off street drugs, you have to be off street drugs. They do not accept anyone who tests positive for any substances. If you want to get clean, they advise you to self-manage your own detox by reducing the amount of nyaope you smoke over five weeks. Over those five weeks you have to attend two sessions a week, one private with the social worker, and one group session with all those trying to reduce to get into rehab. I agree to this and ask them if they can maybe get me an Uber, I know the money has hit my account and I don’t want to walk back, because then I will spend it badly, sharing and paying back all the little hits I had on the way, and then have nothing for myself to get through the night. They are unable to call me an Uber.

I miss my next session.

I try to attend the group session but at the same time, at the Denis Hurley Centre there is a free meal, and the queue is an hour and a half long. I can queue and eat or I can go and listen to how I need to reduce my usage in order to get clean, to get into a rehab to get clean.

I choose to eat.

I phone the Newlands Rehab to see if they offer a twelve step program and a way to reintegrate into larger society. They tell me they will help me get closer to God.

I get myself Suboxone, via an addiction psychiatrist, to help get through the withdrawals. This is an exercise unto itself, it is days and hours and so much time trying to explain to people my limitations and how I need help and how just giving me money will not help and the help I need is not to be trusted. To be not trusted. Not to be.

On my way to my second one-on-one session at the Denis Hurley Centre the cat is starting to dry out, caved mummy skin. A lack of flies.

I am there to tell the social workers that I have Suboxone, can start it immediately, and it’s a six-month process but I will be free of all street drugs within three weeks, and can I get into Newlands, I’ll come to all sessions from now on. And I am told that to get into Newlands you cannot be on any medication at all.

All I want is a room, medication and for it to be impossible to take any heroin for roughly six weeks, I want a rehab to formalise this, because it is impossible for anyone to know that I am trying to claw my way back unless there is the official stamp of a rehab, however unsuited to rehabilitation it might be.

Now it seems that even being clean is not good enough to get into Newlands, the only free rehab I can find; it seems that I must be off all medication, even the medication that is keeping me clean. And I start the walk back from the social worker at the Denis Hurley Centre, with no money for caps, and slightly close to withdrawal. I could start my Suboxone now, but I only have two weeks’ worth and have been told that only if I get into rehab will the full six months be paid for. Reduction therapy is a joke when some days you have nothing at all and some days you have too much. Addicts cannot self-manage, it’s in the name. Coming off Suboxone without titrating down is a different kind of withdrawal, easier on the mind, hard on the body, which is hard on the mind.

I just want a room and time to think without the pressure of withdrawal every eight hours, twelve hours on methadone, twenty-four hours on Suboxone.

I pass Matshikiza, squatting in an alley, beating like porridge the insides of a fan. She’s getting the copper out. She thinks it might be just less than a kilogram. That’s about R150, if we make the daytime scrapyard, but they’re far and it’s after three. Her hair is flotsam, long with strips of fabric, strips of coloured plastic, ribbons, discarded hair extensions, bits of bright wig, braided, melted into her own impeccably matted. She flings it over her shoulder occasionally as we work, stripping the plastic casing, always talking Matshikiza, “Iris is back,” she tells me.

“And fat,” I say as we break off the metal transformer bit, “I saw her last week.”

“Returned from the farm, yes, she was clean but there was no work, now her weight is already going” and then we have to unstrand the copper wire, but there’s more copper in the cables and we need every bit we can get, and we take to trying to burn off the plastic and someone comes out a door and shouts, “FUCK OFF PARAS” and so we amble away and find a parking lot to mine our copper.

While we burn and strip and break, her hair occasionally catches a flame and singes or flames and she brushes these forest fires off like mosquitoes. “Iris was raped by a customer the other night, but she is so not wys, you know. She went to the cops. They asked her if he paid, and then told her it wasn’t rape.”

In the fading light Matshikiza shakes her hair, shampoo-commercial, away from the flames. “I am not sure if the client or the cop beat her, but her eye is fucked.”

Some boys come past us and we find out the late-night scrapyard opens in half an hour and they only pay R90 a kilogram. One of the boys wants Matshikiza to go with him to the bush, so they do and I carry on stripping the wires, burning the plastic until I am sick with acrid.

The other boy stays with me, the tiknitian, out of worn holes his backpack streams wires and broken cellphone bits and random scraps of previous technology and he paces and talks to himself anxiously, starts as if being interrupted, the familiar crys-style comforting me as I choke on plastic smoke.

Matshikiza returns with R25. We walk to the scrap merchant. He weighs us in at 400 grams, we get R40. We have R65, enough for a cap and a small piece to share.

We make it back to the open-air broken building para city, a field of people huddled under black rubbish bags trying to smoke and we get a cap and a piece and we get inside the black plastic and it smells of plastic and we smell of burnt plastic and the sweat of the day and I can tell the withdrawal is coming because I am getting my sense of smell back, and a half cap isn’t going to do it but that’s what there is and I get my foil and Matshikiza loads on a dot, and I pull in, and then we dot through it, levering in the secondary smoke, dots to prevent waste, the sickness must be diminished, feeling a small bit of relief, saving the crack for just before we have to walk back up the hill from town to Percy Osbourne, where she works and I can ask people for help, and I lean back, as much as is possible inside a black garbage bag, and say, “things are bad today.”

Exhaling, we are close under the plastic, in a very tiny room, the light is gone outside and we can only see each other when the lighter sparks on. I tell her I’ve been trying to get into Newlands rehab, because I need a free rehab, but they want me to get clean first.

Matshikiza laughs. “I went to Newlands, the orderlies there, they trade nyaope for clothes or toiletries or whatever you can give. Everyone smokes there. But they charge more, so I came back.”

We hit the crack and take off the black plastic and the street lights and the people and the rustling of so many people under black plastic whispering and exhaling and we start to walk up the hill, the taxis and the rankness, the scattered pavement cookeries, the hustling shouts dying out, behind me somewhere is the Denis Hurley Centre.

Unsure now how to make our next plan, and it must be made soon, we stumble past the mosque where the last few styrofoams of Ramadan briyani are being handed out, and Matshikiza flirts one away from the packing-up staff and we sit on the pavement scooping with broken styrofoam scoops hot rice and chicken scraps into our not hungry mouths in service of our hungry stomachs, swapping with compatriots the street gossip of the day, trying to figure out a plan.

Limping now towards Percy Street, we meet up with Grant, he’s heard I have Suboxone and so we go with him to the strip-club he dances at, and sell the Suboxone at half price to the owner’s son, who has a son who is trying to get clean in order to return to school.

And we walk up to the nymandawo, to the dealers who chase us with stones, and we buy caps and pieces and steel ourselves for the walk up to the church garden to smoke.

The hill ahead of us, but we will not smoke until we are safe in the garden, away from sharing, we drag ourselves uphill wreathed in eddies of mynah call.

On the corner by Venice road, Iris and her detached retina, a wary lollipop ready with okapi.

Another corner, a blankness on the pavement, an absence of mummifying cat.

We collapse into the church garden, sweating and sticky with hints of burning plastic, coal smoke, lingering briyani, various detritus, breathing in the vinegar fumes of heroin running down the foil, we have enough not to dot. Soon we fade into the intimacy of opiate oblivion. Before she sleeps she says, “Iris is lucky, she has a farm to go back to.”

In the crisp cavern of the night, a warm incursion of hand shakes Matshikiza awake, he has business for her. As she stands some of the sticks and leaves have joined into the jetsam of her hair, the glow of the street light outlines the church vaguely. She has finished sharing for the day, and will not return.

Soon it is only my own warmth left in the nest.

The withdrawal will wake me in about three hours.

Reality is that which, when you stop believing in it, does not go away.

 
Read more...

from 3c0

It’s a time to be, and a time to share. To give a piece of yourself to your purpose. On this path, you must therefore let go of people and things that do not align with that purpose.

“Not all [blank]…,” he said.

You are in service of others. You feel and think deeply for others. If you cannot feel deeply about someone in your midst, and cannot envision them as part of your purpose… then why venture forth? It’s time to say goodbye. It’s time to go.

“What do you secretly wish for?”

Perhaps this isn’t a question for me, but for him.

 
Read more...

from 下川友

For about ten years I've had trouble with my lower back, to the point where I could hardly do any desk work. It's not so much pain as a queasiness, a kind of discomfort radiating from the lower back, with an almost constant sensation close to nausea.

When I try to describe this vague, unpleasant feeling to doctors, they don't really engage with it. Something that can only be explained in sensory terms is hard to get across unless it has already been put into specialist language.

I notice the same thing watching managers at work: some people are extremely harsh about anything they read as a lack of effort, or about words handed over before they've been normalised into the "correct" terminology. They unconsciously impose the value that you should make the effort yourself, and they don't even seem to notice they are imposing it.

That makes the world a slightly hard place to live in. Too many of the well-off refuse to take sensory things as they are, and in the end you have no choice but to follow the rules those people made. Some of them will even tell you to stop making excuses.

Oh well.

In any case, my back was miserable for a long time. After endless rounds of rotating and loosening it, the strange feeling in my lower back suddenly disappeared one day. But then the same queasiness appeared in my buttocks and thighs. Again, not pain, but discomfort.

The left side especially. When I work on loosening my left thigh, a blocked-up feeling appears in my left armpit instead. When I stretch my left arm out to the side, it catches somewhere. It's simply unpleasant, and I can't pin down where it's coming from.

Repeating all this, I occasionally hit on the source by chance when I make some movement I don't usually make. When that happens, I concentrate on loosening that spot.

Yesterday I found a hard spot under my buttock and dealt with it. The blocked feeling in my armpit and the odd sensation around my neck are still there, though.

I also don't know how to find a good bodywork therapist. This world still isn't kind to me at all.

 
Read more...

from ThruxBets

3.45 Ripon. Yorkshire’s Garden Racecourse kicks off its 2026 season today, and Tim Easterby has won the 3.45 twice since 2019. His MISTER SOX seems to have a really solid each-way chance here, ticking plenty of boxes: 7/2/4p at the course, goes well fresh, ground and trip ideal, 4/2/3p in April, and 16/6/10p on an undulating course like Ripon. From what I can make out there should be plenty of pace for him to aim at, and he should find this easier than recent assignments. The only real negative is his mark, which ideally could do with being a couple of pounds lower, but he was beaten half a length into third off the same 79 he goes off today on his last run at the track, in a Class 2. Should be really competitive here.

MISTER SOX // 0.5pt E/W @ 17/2 5 places (Bet365) BOG
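For anyone new to the staking shorthand, here is a minimal sketch of how a 0.5pt each-way bet like this would settle. It assumes standard 1/5-odds place terms on the 5 places quoted (an assumption, not stated in the tip), ignores BOG adjustments and Rule 4 deductions, and the function name is purely illustrative.

```python
# Minimal sketch of each-way settlement for "0.5pt E/W @ 17/2, 5 places".
# Assumes 1/5-odds place terms (an assumption); ignores BOG and Rule 4.

def each_way_profit(stake_ew: float, odds: float, place_fraction: float, result: str) -> float:
    """Points profit/loss for an each-way bet (total outlay = 2 * stake_ew).

    odds: fractional price as a decimal fraction, e.g. 17/2 -> 8.5.
    result: 'won', 'placed' (hit the places without winning) or 'unplaced'.
    """
    win_part = stake_ew * odds if result == "won" else -stake_ew
    placed = result in ("won", "placed")
    place_part = stake_ew * odds * place_fraction if placed else -stake_ew
    return win_part + place_part

for outcome in ("won", "placed", "unplaced"):
    print(outcome, each_way_profit(0.5, 17 / 2, 1 / 5, outcome))
# won 5.1, placed 0.35, unplaced -1.0 (points)
```

On those assumed terms the bet risks 1pt in total, returns roughly 5.1pts profit if MISTER SOX wins and a small profit if he only places.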

I also looked at the last race at Ripon and I couldn’t split the Harriet Bethell-trained pair of Milteye and On The River here, as both have good chances. I’d also have given the old boy Garden Oasis an each-way chance if it hadn’t been for the recent rain, but that has put me off. So just a watching brief in the race for me.

 
Read more...
