from angelllyies

Hello!!!

Today i didn't eat sweets, thats SO SO SO BADD :(((

Yeah i don't remember anything clearly so you'd have to look at what i wrote about that, it was quite boring. Its weird fronting tho ;–; I wanna go back but i can't, the host doesn't want to, he's verryyy tired

By the way my name's Ivy!!! ^^ Anyway i hope you guys had a great day. I don't intend to make a whole “book” like the other loser. Everyone should love PINK 💗💕💐💖💞💞💖💕💝💝🌷

Byee!! :3

 

from SmarterArticles

In June 2024, Goldman Sachs published a research note that rattled Silicon Valley's most cherished assumptions. The report posed what it called the “$600 billion question”: would the staggering investment in artificial intelligence infrastructure ever generate proportional returns? The note featured analysis from MIT economist Daron Acemoglu, who had recently calculated that AI would produce no more than a 0.93 to 1.16 percent increase in US GDP over the next decade, a figure dramatically lower than the techno-utopian projections circulating through investor presentations and conference keynotes. “Much of what we hear from the industry now is exaggeration,” Acemoglu stated plainly. Two months later, he was awarded the 2024 Nobel Memorial Prize in Economic Sciences, alongside his MIT colleague Simon Johnson and University of Chicago economist James Robinson, for research on the relationship between political institutions and economic growth.

That gap between what AI is promised to deliver and what it actually does is no longer an abstract concern for economists and technologists. It is reshaping public attitudes toward technology at a speed that should alarm anyone who cares about the long-term relationship between innovation and democratic society. When governments deploy algorithmic systems to deny healthcare coverage or detect welfare fraud, when corporations invest billions in tools that fail 95 percent of the time, and when the public is told repeatedly that superintelligence is just around the corner while chatbots still fabricate legal citations, something fundamental breaks in the social contract around technological progress.

The question is not whether AI is useful. It plainly is, in specific, well-defined applications. The question is what happens when an entire civilisation makes strategic decisions based on capabilities that do not yet exist and may never materialise in the form being sold.

The Great Correction Arrives

By late 2025, the AI industry had entered what Gartner's analysts formally classified as the “Trough of Disillusionment.” Generative AI, which had been perched at the Peak of Inflated Expectations just one year earlier, had slid into the territory where early adopters report performance issues, low return on investment, and a growing sense that the technology's capabilities had been systematically overstated. The positioning reflected difficulties organisations face when attempting to move generative AI from pilot projects to production systems. Integration with existing infrastructure presented technical obstacles, while concerns about data security caused some companies to limit deployment entirely.

The numbers told a damning story. According to MIT's “The GenAI Divide: State of AI in Business 2025” report, published in July 2025 and based on 52 executive interviews, surveys of 153 leaders, and analysis of 300 public AI deployments, 95 percent of generative AI pilot projects delivered no measurable profit-and-loss impact. American enterprises had spent an estimated $40 billion on artificial intelligence systems in 2024, yet the vast majority saw zero measurable bottom-line returns. Only five percent of integrated systems created significant value.

The study's authors, from MIT's NANDA initiative, identified what they termed the “GenAI Divide”: a widening split between high adoption and low transformation. Companies were enthusiastically purchasing and deploying AI tools, but almost none were achieving the business results that had been promised. “The 95% failure rate for enterprise AI solutions represents the clearest manifestation of the GenAI Divide,” the report stated. The core barrier, the authors concluded, was not infrastructure, regulation, or talent. It was that most generative AI systems “do not retain feedback, adapt to context, or improve over time,” making them fundamentally ill-suited for the enterprise environments into which they were being thrust.

This was not an outlier finding. A 2024 NTT DATA analysis concluded that between 70 and 85 percent of generative AI deployment efforts were failing to meet their desired return on investment. The Autodesk State of Design & Make 2025 report found that sentiment toward AI had dropped significantly year over year, with just 69 percent of business leaders saying AI would enhance their industry, a 12 percent decline from the previous year. Only 40 percent of leaders said they were approaching or had achieved their AI goals, a 16-point decrease amounting to a 29 percent relative drop. S&P Global data revealed that 42 percent of companies scrapped most of their AI initiatives in 2025, up sharply from 17 percent the year before.

The infrastructure spending, meanwhile, continued accelerating even as returns failed to materialise. Meta, Microsoft, Amazon, and Google collectively committed over $250 billion to AI infrastructure during 2025. Amazon alone planned $125 billion in capital expenditure, up from $77 billion in 2024, a 62 percent increase. Goldman Sachs CEO David Solomon publicly acknowledged that he expected “a lot of capital that was deployed that doesn't deliver returns.” Amazon founder Jeff Bezos called the environment “kind of an industrial bubble.” Even OpenAI CEO Sam Altman conceded that “people will overinvest and lose money.”

Trust in Freefall

The gap between AI's promises and its performance is not occurring in a vacuum. It is landing on a public already growing sceptical of the technology industry's claims, and it is accelerating a decline in trust that carries profound implications for democratic governance.

The 2025 Edelman Trust Barometer, based on 30-minute online interviews conducted between October and November 2024, revealed a stark picture. Globally, only 49 percent of respondents trusted artificial intelligence as a technology. In the United States, that figure dropped to just 32 percent. Three times as many Americans rejected the growing use of AI (49 percent) as embraced it (17 percent). In the United Kingdom, trust stood at just 36 percent. In Germany, 39 percent. The Chinese public, by contrast, reported 72 percent trust in AI, a 40-point gap that reflects not just different regulatory environments but fundamentally different cultural relationships with technology and state authority.

These figures represent a significant deterioration. A decade ago, 73 percent of Americans trusted technology companies. By 2025, that number had fallen to 63 percent. Technology, the most trusted sector in 90 percent of the countries Edelman studied eight years ago, now held that position in only half. The barometer also found that 59 percent of global employees feared job displacement due to automation, and nearly one in two were sceptical of business use of artificial intelligence.

The Pew Research Center's findings painted an even more granular picture of public anxiety. In an April 2025 report examining how the US public and AI experts view artificial intelligence, Pew found that 50 percent of American adults said they were more concerned than excited about the increased use of AI in daily life, up from 37 percent in 2021. More than half (57 percent) rated the societal risks of AI as high, compared with only 25 percent who said the benefits were high. Over half of US adults (53 percent) believed AI did more harm than good in protecting personal privacy, and 53 percent said AI would worsen people's ability to think creatively.

Perhaps most revealing was the chasm between expert optimism and public unease. While 56 percent of AI experts believed AI would have a positive effect on the United States over the next 20 years, only 17 percent of the general public agreed. While 47 percent of experts said they were more excited than concerned, only 11 percent of ordinary citizens felt the same. And despite their divergent levels of optimism, both groups shared a common scepticism about institutional competence: roughly 60 percent of both experts and the public said they lacked confidence that US companies would develop AI responsibly.

The Stanford HAI AI Index 2025 Report reinforced these trends globally. Across 26 nations surveyed by Ipsos, confidence that AI companies protect personal data fell from 50 percent in 2023 to 47 percent in 2024. Fewer people believed AI systems were unbiased and free from discrimination compared to the previous year. While 18 of 26 nations saw an increase in the proportion of people who believed AI products offered more benefits than drawbacks, the optimism was concentrated in countries like China (83 percent), Indonesia (80 percent), and Thailand (77 percent), while the United States (39 percent), Canada (40 percent), and the Netherlands (36 percent) remained deeply sceptical.

When Algorithms Replace Judgement

The erosion of public trust in AI would be concerning enough if it were merely a matter of consumer sentiment. But the stakes become existential when governments and corporations use overestimated AI capabilities to make decisions that fundamentally alter people's lives, and when those decisions carry consequences that cannot be undone.

Consider healthcare. In November 2023, a class action lawsuit was filed against UnitedHealth Group and its subsidiary, alleging that the company illegally used an AI algorithm called nH Predict to deny rehabilitation care to seriously ill elderly patients enrolled in Medicare Advantage plans. The algorithm, developed by a company called Senior Metrics and later acquired by UnitedHealth's Optum subsidiary in 2020, was designed to predict how long patients would need post-acute care. According to the lawsuit, UnitedHealth deployed the algorithm knowing it had a 90 percent error rate on appeals, meaning that nine out of ten times a human reviewed the AI's denial, they overturned it. UnitedHealth also allegedly knew that only 0.2 percent of denied patients would file appeals, making the error rate commercially inconsequential for the insurer despite being medically devastating for patients.

The human cost was documented in court filings. Gene Lokken, a 91-year-old Wisconsin resident named in the lawsuit, fractured his leg and ankle in May 2022. After his doctor approved physical therapy, UnitedHealth paid for only 19 days before the algorithm determined he was safe to go home. His doctors appealed, noting his muscles were “paralysed and weak,” but the insurer denied further coverage. His family paid approximately $150,000 over the following year until he died in July 2023. In February 2025, a federal court allowed the case to proceed, denying UnitedHealth's attempt to dismiss the claims and waiving the exhaustion of administrative remedies requirement, noting that patients faced irreparable harm.

The STAT investigative series “Denied by AI,” which broke the UnitedHealth story, was a 2024 Pulitzer Prize finalist in investigative reporting. A US Senate report released in October 2024 found that UnitedHealthcare's prior authorisation denial rate for post-acute care had jumped to 22.7 percent in 2022 from 10.9 percent in 2020. The healthcare AI problem extends far beyond a single insurer. ECRI, a patient safety organisation, ranked insufficient governance of artificial intelligence as the number two patient safety threat in 2025, warning that medical errors generated by AI could compromise patient safety through misdiagnoses and inappropriate treatment decisions. Yet only about 16 percent of hospital executives surveyed said they had a systemwide governance policy for AI use and data access.

The pattern repeats across domains where algorithmic systems are deployed to process vulnerable populations. In the Netherlands, the childcare benefits scandal stands as perhaps the most devastating example of what happens when governments trust flawed algorithms with life-altering decisions. The Dutch Tax and Customs Administration deployed a machine learning model to detect welfare fraud that illegally used dual nationality as a risk characteristic. The system falsely accused over 20,000 parents of fraud, resulting in benefits termination and forced repayments. Families were driven into bankruptcy. Children were removed from their homes. Mental health crises proliferated. Seventy percent of those affected had a migration background, and fifty percent were single-person households, mostly mothers. In January 2021, the Dutch government was forced to resign after a parliamentary investigation concluded that the government had violated the foundational principles of the rule of law.

The related SyRI (System Risk Indication) system, which cross-referenced citizens' employment, benefits, and tax data to flag “unlikely citizen profiles,” was deployed exclusively in neighbourhoods with high numbers of low-income households and disproportionately many residents from immigrant backgrounds. In February 2020, the Hague court ordered SyRI's immediate halt, ruling it violated Article 8 of the European Convention on Human Rights. Amnesty International described the system's targeting criteria as “xenophobic machines.” Yet investigations by Lighthouse Reports later confirmed that similar algorithmic surveillance practices continued under slightly adapted systems, even after the ban, with the government having “silently continued to deploy a slightly adapted SyRI in some of the country's most vulnerable neighbourhoods.”

The Stochastic Parrot Problem

Understanding why AI hype is so dangerous requires understanding what these systems actually do, as opposed to what their makers claim they do.

Emily Bender, a linguistics professor at the University of Washington who was included in the inaugural TIME100 AI list of most influential people in artificial intelligence in 2023, co-authored a now-famous paper arguing that large language models are fundamentally “stochastic parrots.” They do not understand language in any meaningful sense. They draw on training data to predict which sequence of tokens is most likely to follow a given prompt. The result is an illusion of comprehension, a pattern-matching exercise that produces outputs resembling intelligent thought without any of the underlying cognition.

In 2025, Bender and sociologist Alex Hanna, director of research at the Distributed AI Research Institute and a former Google employee, published “The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.” The book argues that AI hype serves as a mask for Big Tech's drive for profit, with the breathless promotion of AI capabilities benefiting technology companies far more than users or society. “Who benefits from this technology, who is harmed, and what recourse do they have?” Bender and Hanna ask, framing these as the essential questions that the hype deliberately obscures. Library Journal called the book “a thorough, witty, and accessible argument against AI that meets the moment.”

The stochastic parrot problem has real-world consequences that compound the trust deficit. When AI systems fabricate information with perfect confidence, they undermine the epistemic foundations that societies rely on for decision-making. Legal scholar Damien Charlotin, who tracks AI hallucinations in court filings through his database, had documented at least 206 instances of lawyers submitting AI-generated fabricated case citations by mid-2025. Stanford University's RegLab found that even premium legal AI tools hallucinated at alarming rates: Westlaw's AI-Assisted Research produced hallucinated or incorrect information 33 percent of the time, providing accurate responses to only 42 percent of queries. LexisNexis's Lexis+ AI hallucinated 17 percent of the time. A 2025 study published in Nature Machine Intelligence found that large language models cannot reliably distinguish between belief and knowledge, or between opinions and facts, noting that “failure to make such distinctions can mislead diagnoses, distort judicial judgements and amplify misinformation.”

If the tools marketed as the most reliable in their field fabricate information roughly one-fifth to one-third of the time, what does this mean for the countless lower-stakes applications where AI outputs are accepted without verification?

The AI Washing Economy

The gap between marketing claims and actual capabilities has grown so pronounced that regulators have begun treating AI exaggeration as a form of securities fraud.

In March 2024, the US Securities and Exchange Commission brought its first “AI washing” enforcement actions, simultaneously charging two investment advisory firms, Delphia and Global Predictions, with making false and misleading statements about their use of AI. Delphia paid $225,000 and Global Predictions paid $175,000 in civil penalties. These firms had not been entirely without AI capabilities, but they had overstated what those systems could do, crossing the line from marketing enthusiasm into regulatory violation.

The enforcement actions escalated rapidly. In January 2025, the SEC charged Presto Automation, a formerly Nasdaq-listed company, in the first AI washing action against a public company. Presto had claimed its AI voice system eliminated the need for human drive-through order-taking at fast food restaurants, but the SEC alleged the vast majority of orders still required human intervention and that the AI speech recognition technology was owned and operated by a third party. In April 2025, the SEC and Department of Justice charged the founder of Nate Inc. with fraudulently raising over $42 million by claiming the company's shopping app used AI to process transactions, when in reality manual workers completed the purchases. The claimed automation rate was above 90 percent; the actual rate was essentially zero.

Securities class actions targeting alleged AI misrepresentations increased by 100 percent between 2023 and 2024. In February 2025, the SEC announced the creation of a dedicated Cyber and Emerging Technologies Unit, tasked with combating technology-related misconduct, and flagged AI washing as a top examination priority.

The pattern is instructive. When a technology is overhyped, the incentive to exaggerate capabilities becomes irresistible. Companies that accurately describe their modest AI implementations risk being punished by investors who have been conditioned to expect transformative breakthroughs. The honest actors are penalised while the exaggerators attract capital, creating a market dynamic that systematically rewards deception.

Echoes of Previous Bubbles

The AI hype cycle is not without historical precedent, and the parallels offer both warnings and qualified reassurance.

During the dot-com era, telecommunications companies laid more than 80 million miles of fibre optic cables across the United States, driven by wildly inflated claims about internet traffic growth. Companies like Global Crossing, Level 3, and Qwest raced to build massive networks. The result was catastrophic overcapacity: even four years after the bubble burst, 85 to 95 percent of the fibre laid remained unused, earning the nickname “dark fibre.” The Nasdaq composite rose nearly 400 percent between 1995 and March 2000, then crashed 78 percent by October 2002.

The parallels to today's AI infrastructure buildout are unmistakable. Meta CEO Mark Zuckerberg announced plans for an AI data centre “so large it could cover a significant part of Manhattan.” The Stargate Project aims to develop a $500 billion nationwide network of AI data centres. Goldman Sachs analysts found that hyperscaler companies had taken on $121 billion in debt over the past year, representing a more than 300 percent increase from typical industry debt levels. AI-related stocks had accounted for 75 percent of S&P 500 returns, 80 percent of earnings growth, and 90 percent of capital spending growth since ChatGPT launched in November 2022.

Yet there are important differences. Unlike many dot-com companies that had no revenue, major AI players are generating substantial income. Microsoft's Azure cloud service grew 39 percent year over year to an $86 billion run rate. OpenAI projects $20 billion in annualised revenue. The Nasdaq's forward price-to-earnings ratio was approximately 26 times in November 2023, compared to approximately 60 times at the dot-com peak.

The more useful lesson from the dot-com era is not about whether the bubble will burst, but about what happens to public trust and institutional decision-making in the aftermath. The internet survived the dot-com crash and eventually fulfilled many of its early promises. But the crash destroyed trillions in wealth, wiped out retirement savings, and created a lasting scepticism toward technology claims that took years to overcome. The institutions and individuals who made decisions based on dot-com hype, from pension funds that invested in companies with no path to profitability to governments that restructured services around technologies that did not yet work, bore costs that were never fully recovered.

Algorithmic Bias and the Feedback Loop of Injustice

Perhaps the most consequential long-term risk of the AI hype gap is its intersection with systemic inequality. When policymakers deploy AI systems in criminal justice, welfare administration, and public services based on inflated claims of accuracy and objectivity, the consequences fall disproportionately on communities that are already marginalised.

Predictive policing offers a stark illustration. The Chicago Police Department's “Strategic Subject List,” implemented in 2012 to identify individuals at higher risk of gun violence, disproportionately targeted young Black and Latino men, leading to intensified surveillance and police interactions in those communities. The system created a feedback loop: more police dispatched to certain neighbourhoods resulted in more recorded crime, which the algorithm interpreted as confirmation that those neighbourhoods were indeed high-risk, which led to even more policing. The NAACP has called on state legislators to evaluate and regulate the use of predictive policing, noting mounting evidence that these tools increase racial biases and citing the lack of transparency inherent in proprietary algorithms that do not allow for public scrutiny.

The COMPAS recidivism prediction tool, widely used in US criminal justice, was found to produce biased predictions against Black defendants compared to white defendants, trained on historical data saturated with racial bias. An audit by the LAPD inspector general found “significant inconsistencies” in how officers entered data into a predictive policing programme, further fuelling biased predictions. These are not edge cases or implementation failures. They are the predictable consequences of deploying pattern-recognition systems trained on data that reflects centuries of structural discrimination.

In welfare administration, the pattern is equally troubling. The Dutch childcare benefits scandal demonstrated how algorithmic systems can automate inequality at scale. The municipality of Rotterdam used a discriminatory algorithm to profile residents and “predict” social welfare fraud for three years, disproportionately targeting young single mothers with limited knowledge of Dutch. In the United Kingdom, the Department for Work and Pensions admitted, in documents released under the Freedom of Information Act, to finding bias in an AI tool used to detect fraud in universal credit claims. The tool's initial iteration correctly matched conditions only 35 percent of the time, and by the DWP's own admission, “chronic fatigue was translated into chronic renal failure” and “partially amputation of foot was translated into partially sighted.”

These failures share a common thread. The AI systems were deployed based on claims of objectivity and accuracy that did not withstand scrutiny. Policymakers, influenced by industry hype about AI's capabilities, trusted algorithmic outputs over human judgement, and the people who paid the price were those least equipped to challenge the decisions being made about their lives.

What Sustained Disillusionment Means for Innovation

The long-term consequences of the AI hype gap extend beyond immediate harms to individual victims. They threaten to reshape the relationship between society and technological innovation in ways that could prove difficult to reverse.

First, there is the problem of misallocated resources. The MIT study found that more than half of generative AI budgets were devoted to sales and marketing tools, despite evidence that the best returns came from back-office automation, eliminating business process outsourcing, cutting external agency costs, and streamlining operations. When organisations chase the use cases that sound most impressive rather than those most likely to deliver value, they waste capital that could have funded genuinely productive innovation. The study also revealed a striking shadow economy: while only 40 percent of companies had official large language model subscriptions, 90 percent of workers surveyed reported daily use of personal AI tools for job tasks, suggesting that the gap between corporate AI strategy and actual AI utility is even wider than the headline figures suggest.

Second, the trust deficit creates regulatory feedback loops that can stifle beneficial applications. As public concern about AI grows, so does political pressure for restrictive regulation. The 2025 Stanford HAI report found that references to AI in draft legislation across 75 countries increased by 21.3 percent, continuing a ninefold increase since 2016. In the United States, 73.7 percent of local policymakers agreed that AI should be regulated, up from 55.7 percent in 2022. This regulatory momentum is a direct response to the trust deficit, and while some regulation is necessary and overdue, poorly designed rules driven by public fear rather than technical understanding risk constraining beneficial applications alongside harmful ones. Colorado became the first US state to enact legislation addressing algorithmic bias in 2024, with California and New York following with their own targeted measures.

Third, the hype cycle creates a talent and attention problem. When AI is presented as a solution to every conceivable challenge, researchers and engineers are pulled toward fashionable applications rather than areas of genuine need. Acemoglu has argued that “we currently have the wrong direction for AI. We're using it too much for automation and not enough for providing expertise and information to workers.” The hype incentivises building systems that replace human judgement rather than augmenting it, directing talent and investment away from applications that could produce the greatest social benefit.

Finally, and perhaps most critically, the erosion of public trust in AI threatens to become self-reinforcing. Each failed deployment, each exaggerated claim exposed, each algorithmic system found to be biased or inaccurate further deepens public scepticism. Meredith Whittaker, president of Signal, has warned about the security and privacy risks of granting AI agents extensive access to sensitive data, describing a future where the “magic genie bot” becomes a nightmare if security and privacy are not prioritised. When public trust in AI erodes, even beneficial and well-designed systems face adoption resistance, creating a vicious cycle where good technology is tainted by association with bad marketing.

Rebuilding on Honest Foundations

The AI hype gap is not merely a marketing problem or an investment risk. It is a structural challenge to the relationship between technological innovation and public trust that has been building for years and is now reaching a critical inflection point.

The 2025 Edelman Trust Barometer found that the most powerful drivers of AI enthusiasm are trust and information, with hesitation rooted more in unfamiliarity than negative experiences. This finding suggests a path that does not require abandoning AI, but demands abandoning the hype. As people use AI more and experience its ability to help them learn, work, and solve problems, their confidence rises. The obstacle is not the technology itself but the inflated expectations that set users up for disappointment.

Gartner's placement of generative AI in the Trough of Disillusionment is, paradoxically, encouraging. As the firm's analysts note, the trough does not represent failure. It represents the transition from wild experimentation to rigorous engineering, from breathless promises to honest assessment of what works and what does not. The companies and institutions that emerge successfully from this phase will be those that measured their claims against reality rather than against their competitors' marketing materials.

The lesson from previous technology cycles is clear but routinely ignored. The dot-com bubble popped, but the internet did not disappear. What disappeared were the companies and institutions that confused hype with strategy. The same pattern will likely repeat with AI. The technology will mature, find its genuine applications, and deliver real value. But the path from here to there runs through a period of reckoning that demands honesty about what AI can and cannot do, transparency about the limitations of algorithmic decision-making, and accountability for the real harms caused by deploying immature systems in high-stakes contexts.

As Bender and Hanna urge, the starting point must be asking basic but important questions: who benefits, who is harmed, and what recourse do they have? As Acemoglu wrote in his analysis for “Economic Policy” in 2024, “Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing.” The potential is real. But potential is not performance, and treating it as such has consequences that a $600 billion question only begins to capture.


References and Sources

  1. Acemoglu, D. (2024). “The Simple Macroeconomics of AI.” Economic Policy. Massachusetts Institute of Technology. https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf

  2. Amnesty International. (2021). “Xenophobic Machines: Dutch Child Benefit Scandal.” Retrieved from https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/

  3. Bender, E. M. & Hanna, A. (2025). The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. Penguin/HarperCollins.

  4. CBS News. (2023). “UnitedHealth uses faulty AI to deny elderly patients medically necessary coverage, lawsuit claims.” Retrieved from https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/

  5. Challapally, A., Pease, C., Raskar, R. & Chari, P. (2025). “The GenAI Divide: State of AI in Business 2025.” MIT NANDA Initiative. As reported by Fortune, 18 August 2025. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/

  6. Edelman. (2025). “2025 Edelman Trust Barometer.” Retrieved from https://www.edelman.com/trust/2025/trust-barometer

  7. Edelman. (2025). “Flash Poll: Trust and Artificial Intelligence at a Crossroads.” Retrieved from https://www.edelman.com/trust/2025/trust-barometer/flash-poll-trust-artifical-intelligence

  8. Edelman. (2025). “The AI Trust Imperative: Navigating the Future with Confidence.” Retrieved from https://www.edelman.com/trust/2025/trust-barometer/report-tech-sector

  9. Gartner. (2025). “Hype Cycle for Artificial Intelligence, 2025.” Retrieved from https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence

  10. Goldman Sachs. (2024). “Top of Mind: AI: in a bubble?” Goldman Sachs Research. Retrieved from https://www.goldmansachs.com/insights/top-of-mind/ai-in-a-bubble

  11. Healthcare Finance News. (2025). “Class action lawsuit against UnitedHealth's AI claim denials advances.” Retrieved from https://www.healthcarefinancenews.com/news/class-action-lawsuit-against-unitedhealths-ai-claim-denials-advances

  12. Lighthouse Reports. (2023). “The Algorithm Addiction.” Retrieved from https://www.lighthousereports.com/investigation/the-algorithm-addiction/

  13. Magesh, V., Surani, F., Dahl, M., Suzgun, M., Manning, C. D. & Ho, D. E. (2025). “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools.” Journal of Empirical Legal Studies, 0:1-27. https://doi.org/10.1111/jels.12413

  14. MIT Technology Review. (2025). “The great AI hype correction of 2025.” Retrieved from https://www.technologyreview.com/2025/12/15/1129174/the-great-ai-hype-correction-of-2025/

  15. NAACP. (2024). “Artificial Intelligence in Predictive Policing Issue Brief.” Retrieved from https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief

  16. Nature Machine Intelligence. (2025). “Language models cannot reliably distinguish belief from knowledge and fact.” https://doi.org/10.1038/s42256-025-01113-8

  17. Novara Media. (2025). “How Labour Is Using Biased AI to Determine Benefit Claims.” Retrieved from https://novaramedia.com/2025/04/15/how-the-labour-party-is-using-biased-ai-to-determine-benefit-claims/

  18. NTT DATA. (2024). “Between 70-85% of GenAI deployment efforts are failing to meet their desired ROI.” Retrieved from https://www.nttdata.com/global/en/insights/focus/2024/between-70-85p-of-genai-deployment-efforts-are-failing

  19. Pew Research Center. (2025). “How the US Public and AI Experts View Artificial Intelligence.” Retrieved from https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/

  20. Radiologybusiness.com. (2025). “'Insufficient governance of AI' is the No. 2 patient safety threat in 2025.” Retrieved from https://radiologybusiness.com/topics/artificial-intelligence/insufficient-governance-ai-no-2-patient-safety-threat-2025

  21. SEC. (2024). “SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence.” Press Release 2024-36. Retrieved from https://www.sec.gov/newsroom/press-releases/2024-36

  22. Stanford HAI. (2025). “The 2025 AI Index Report.” Stanford University Human-Centered Artificial Intelligence. Retrieved from https://hai.stanford.edu/ai-index/2025-ai-index-report

  23. STAT News. (2023). “UnitedHealth faces class action lawsuit over algorithmic care denials in Medicare Advantage plans.” Retrieved from https://www.statnews.com/2023/11/14/unitedhealth-class-action-lawsuit-algorithm-medicare-advantage/

  24. Wikipedia. “Dutch childcare benefits scandal.” Retrieved from https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal

  25. Washington Post. (2024). “Big Tech is spending billions on AI. Some on Wall Street see a bubble.” Retrieved from https://www.washingtonpost.com/technology/2024/07/24/ai-bubble-big-tech-stocks-goldman-sachs/


Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from Shad0w's Echos

The Incident

#nsfw #CeCe

We had a satisfying orgasm together in that stairway. CeCe bare without any cover. Me with my clothes in a pile watching her use this moment to reset herself and find center. I knew deep down she was hurting. I knew in this moment this is all she had to help ground her. I chose her long ago before I even knew it. I didn't pity her. I still admired her. Even though the world was far different from how I saw it. I did my best to understand her. What made her tick, why she did things. Why she needed porn. Why she needed to be naked and risk it all. No matter how out of hand this got, I would always love her.

We looked each other in the eyes deeply as we rubbed ourselves to orgasm in the cool stairwell. It was late and the building was still. I wasn't worried about getting caught.

We made it back to the dorms that cold night. I was dressed now. CeCe's naked body still trembling from the stairwell release, her caramel skin chilled but her eyes a bit clearer, the orgasm having reset her just enough to function. I wrapped her in a blanket as soon as we slipped inside our room, the open blinds letting in the faint glow of streetlights and moonlight.

She curled up on the bed, silent at first, but I knew it was coming. She started sobbing quietly. I held her, whispering reassurances about jobs and apartments, our future together. “We'll be okay,” I murmured, stroking her thick thighs. “You're safe now.” She nodded, exhausted, and we drifted into an uneasy sleep, her head on my chest, the weight of her meltdown lingering like a shadow.

But the peace shattered around 4 a.m. A frantic pounding echoed through the hall, followed by shouting. Someone was yelling CeCe's name, over and over, laced with hysteria. I bolted upright, heart slamming, as CeCe stirred beside me, her eyes widening in terror. “Mom?” she whispered fearfully, but before we could react, the door burst open.

Somehow, her mother had sweet-talked, or more likely forced, her way past the night security at the building entrance, breaking every rule in a desperate bid to “save” her daughter. Then, with an extreme show of force and strength, she had rammed the door open. It all happened so fast.

There she stood in the doorway, wild-eyed and disheveled, coat thrown over pajamas, her face a mask of frantic rage. “CeCe! What you said on the phone! Porn!? Masturbation!? You're coming home now!! This isn't you!”

CeCe scrambled back, clutching the blanket to her chest, but her mom lunged forward, grabbing her arm, trying to drag her out—naked, into the cold hallway, the winter air seeping through the building's drafty corridors. “No! Let go!” CeCe screamed, twisting away, her full breasts heaving as the blanket slipped, exposing her curves to the chaos. I jumped up, yelling for her to stop, but her mom was beyond reason, ranting about sin and family honor, all because of CeCe's raw confession about porn and refusal to conform to traditional marriage. It was a full mental breakdown—her mom clawing at CeCe, sobbing incoherently, the scene drawing neighbors out of their rooms in shock.

Someone down the hall must have called campus police; sirens wailed faintly in the distance, growing louder as officers arrived, pulling her mom off CeCe and restraining her as she thrashed and wailed. “She's ruined! My baby girl's ruined by that filth!” The arrest was almost as swift as her mom's arrival. Trespassing, disorderly conduct, and assault charges were pending. Almost every door was open with a resident peeking out.

CeCe was left standing there in the hallway, naked and exposed to the cold, her ass and pussy on full display under the fluorescent lights, neighbors gawking before averting their eyes. This was not the exposure she wanted or fantasized about. She was just in shock, curled up in a ball on the cold floor. The winter chill bit into her skin, goosebumps rising on her thighs. Everything was all wrong and she was wide eyed and non-responsive. I was horrified.

I rushed to her side, grabbing her favorite hoodie and a stuffed animal from the room. I slid the hoodie over her naked body and gently placed the stuffed animal in her arms, hoping she would reach out and hold it. It just lay there in her arms... as if it didn't exist. She stared blankly, unfocused on anything. She didn't focus on me. She was in some far away place.

The anxiety attack gripped her fully. She collapsed against me, hyperventilating, her body shaking uncontrollably, sobs turning to gasps as the world spun. “Tasha... I can't... breathe...” Campus security escorted us to the health center, where they called for professional counseling right then and there. The therapist on call helped stabilize her with breathing exercises and a mild sedative, but when CeCe started sessions the next day, she never revealed the full truth—nothing about her porn watching, her chronic masturbation, or her naked habits. She framed it as “family stress” and “independence issues,” her brilliant mind compartmentalizing to protect her core self. While it was not the whole truth, it still was the root of the problem. Her porn addiction was her discovering her true self and claiming independence from a toxic and oppressive situation. Porn was her safe space. I wasn't going to take that away from her.

After that night, CeCe cut her mom off completely. She had no desire to call, no desire to visit. She visibly shuddered when I asked if she was going to talk to her mom again.

She changed her number the next week. Then as calls from extended family rolled in, she blocked every family contact. Eventually she deleted all of her social media apps entirely. She was on a quest to be totally unreachable.

The cutoff was so complete that she never bothered retrieving her belongings from home. “It's not worth it,” she said flatly one evening, naked on the bed with porn muted on her phone, her fingers idly circling her clit as if on autopilot. Instead, she poured all her time and effort into landing a good-paying job. She had a scholarship, but her mom had been funding a good portion of her education. She didn't want to rely on anything from her family for any reason.

Through sheer will and determination, her engineering prowess shone through in interviews. She aced a position at a local tech firm, something entry-level but solid, using her skills to design software prototypes—brilliant work that paid just enough for us to afford a small apartment off-campus by summer. We could live together on our own with my job and her entry-level position. When she got the job offer, she smiled. But she was never quite the same.

That night had broken something in her; her dreams narrowed, ambitions stripped down to basics. No more talk of grad school or big career leaps. No talk of upper management or six-figure salaries. She just wanted a stable savings account to fund our life, endless porn to fuel her obsessions, and the freedom to be naked whenever and wherever she wanted.

Everything else felt hollow, tainted by the trauma. The need to go above her goals felt like something her mother wanted. It wasn't something she really wanted. I was a silent witness to a beautiful woman barely clinging onto normalcy trying to put parts of herself back together again. I knew she was lost. I was her compass and her rock. I didn't complain. I can't help but love her.

We moved out. We were now two college dropouts taking a different path in life. During the move I helped CeCe focus on small goals. I reminded her to focus on small wins and not think about the big stuff. We didn't have much at first, just a queen size mattress on the floor, some cheap furniture to make a small office area for our computer, and basic utensils to cook. I looked at the pitiful state of our living arrangement. CeCe reminded me daily that we had each other. She was right. She knew when I needed her the most.

A year passed in our shared life. We saved up for furniture together, we made financial decisions together. We thrived together. It was a seamless blend of companionship and unspoken intimacy that we never bothered to label. Tasha and CeCe. We never defined our relationship publicly. We never talked about marriage or slapped a title on it. We were just long-term roommates, best friends who shared everything.

CeCe tried, though, more than once, to nudge me toward dating, to “get away from the always naked chick,” as she'd self-deprecatingly call herself. She'd catch me staring during one of her open window goon sessions, fingers buried in her slick pussy as she moaned to a video of black women flashing in public.

She sighed, “Tasha, you deserve someone normal. Go out, find a guy or girl who doesn't spend half the day rubbing one out. I'm holding you back. You could have gotten your degree, but I was too busy on the verge of breaking down for you to focus. Yes I have this job now and I can provide for us, but I wound up dragging you down with me.”

There was a long silence. She didn't stop rubbing or watching, but I saw a tear stream down her cheek. The wound was still wide open from what she endured. I knew she needed more than therapy to make it out of this. When I made that silent vow to stay with her, that resolve never wavered. I wasn't going anywhere. I had shaped my whole world around her.

She was my world. On her good days, I loved the thrill of her escalations. The safety of our bond was perfect. Her autistic focus made her love so intensely, so unfiltered. I got up, pulled her close, kissing her deeply, whispering, “You're all I need, CeCe. This is us.” And she melted into it, her thick thighs wrapping around me, but the guilt lingered in her eyes, even as she came undone under my touch. “You are my world, Tasha.” She turned and looked me in my eyes and kissed me softly.

Eventually, her job at the tech firm stabilized and my cafe gig evolved into management. We didn't have degrees, but we had stability. We started our careers. We were happy with each other.

Slowly, CeCe started smiling again. She dressed like a baddie for work, but she started wearing her hoodies again on casual days. We started going out on dates again. CeCe started exposing herself in public again. As we approached our 2nd year living together, CeCe was almost back to her old self. I think CeCe was ready to meet my mom. One day I asked, “Do you want to meet my family?” Her eyes twinkled and she smiled. “Yes, I would love that very much.”

 

from Roscoe's Story

In Summary: * Listening to the feed from The Maryland Sports Network for the pregame show then the radio call of tonight's men's basketball game between the Iowa Hawkeyes and the Maryland Terrapins.

Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Health Metrics: * bw= 228.29 lbs. * bp= 145/83 (64)

Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet: * 06:15 – 1 peanut butter sandwich, fried rice, * 09:30 – garden salad * 10:30 – enchiladas * 14:00 – 1 fresh apple * 18:30 – 1 bowl of air-popped popcorn

Activities, Chores, etc.: * 05:00 – listen to local news talk radio * 06:10 – bank accounts activity monitored * 06:20 – read, pray, follow news reports from various sources, surf the socials * 16:00 – tune into Baltimore Sports Radio WJZ-FM – 105.7 FM The Fan for the pregame show then the call of tonight's men's college basketball game between the Iowa Hawkeyes and the Maryland Terrapins. * 19:24 – And Maryland Wins! They upset the #25 team in the country. Final score: 77 to 70.

Chess: * 12:15 – moved in all pending CC games

 

from wystswolf

It is 10:30 again.

Wolfinwool · Black Forest Dream

The village lights fall away behind me, and I step into the Black Forest, a wall-less cathedral. The rain is light—barely rain at all—just a mist that gathers on my lashes and darkens the shoulders of my coat. The air smells like pine sap and wet earth. Everything is black and alive. And I am invigorated.

I've no phone, and I am utterly lost.

I walk until the road gives way to a narrow trail. The hills rise, then dip into shadowed valleys. I can no longer see the hazy blue/black sky. The trees knit themselves overhead, and the world becomes close and breathing. I hear footsteps behind me.

Not echo.

Not memory.

Her.

I do not turn at first. I know the cadence of her presence the way I know my own pulse. She comes alongside me without ceremony, her shoulder brushing mine. Her hair is damp from the mist. There is warmth coming from her, subtle but undeniable.

“You went too far last night,” she says softly.

“I was looking for you,” I answer.

“I was already there.”

We climb a small rise together. At the top, the forest opens into a dark meadow, a hollow between hills. It is absolute blackness—no houses, no distant lamps. Just the quiet breathing of the earth.

I reach for her hand.

Not urgently. Not greedily.

Just to know that this is real.

Her fingers are cool from the rain, and she threads them through mine as if it is the most natural covenant in the world. We walk like that, saying little. The silence between us is not empty. It is thick and full and holy. I confess things I could not say in daylight.

I tell her about the fire in me. The way desire rises unbidden. The way my body betrays the careful architecture of my vows. I tell her I feel sometimes like a boy again—uncontained, startled by the intensity of wanting.

She does not pull away.

“I know,” she says.

There is no shame in her voice. No accusation. Only understanding. The rain deepens. It gathers at the ends of her hair. I can barely see her face, but I feel her watching me. Studying me as if I am something fragile and fierce at once.

“You don’t need to prove anything to me,” she whispers. “I already know what you feel.”

Her hand comes to my chest. Over my heart.

I feel it hammering against her palm.

I sink down into the wet grass—not in despair, not in defeat—but in reverence. The forest presses in around us. The dark feels protective, not threatening.

I rest my forehead against her stomach like a fawn seeking warmth. My hands find her waist, steady and trembling at once. I am not asking to take. I am asking to belong.

She places her fingers in my hair and holds me there.

“You must walk your life in the light,” she murmurs above me. “But here, in the dark, we are honest.”

The words undo me.

I rise slowly. I cup her face in my hands, rain-slick and luminous. When I kiss her, it is not fevered or frantic. It is slow. Intent. The kind of kiss that holds back as much as it gives. The kind that says: I want you. I choose restraint. I burn anyway.

The forest witnesses.

An owl calls somewhere down in the valley.

She steps backward then, not abruptly, not cruelly. Just gently, as mist moves off water. Her fingers slip from mine.

I reach for her—

But my hand closes on nothing but rain. And I am alone again in the Black Forest.

Lost, perhaps.

But steadier.

Because I know she was here.

Because I know the path back exists.

Because even in the deepest dark, I am not wandering without love.

Forevermore.


#poetry #wyst #romancingiberia #germany #blackforest

 

from Elias

I just revisited the website of the Qualia Research Institute https://qri.org, scrolled down a bit, and stumbled on:

Experience Our Scents
Discover our selection of scents inspired by our research. Each scent is a unique exploration of the state-space of consciousness.
View Scents

Of course, this caught my attention.

Now, before I dive deeper into the scent topic, I have to say that the mission of the QRI seems incredibly simple yet important and powerful, and very… relatable to me:

  1. Develop a precise mathematical language for describing subjective experience
  2. Understand the nature of emotional valence (happiness and suffering)
  3. Map out the full space of possible conscious experiences
  4. Build technologies to improve the lives of sentient beings

Now, let me give you a bit of what their scents are about:

The Magical Creatures line of scents from QRI is a collection designed to highlight the complex and irregular nature of the state-space of olfaction. We believe that the space of olfaction is not well-described by a simple Euclidean space, but is instead a complex ecosystem with hidden interstitial gems and exotic creatures. The scents in the line are designed to showcase “special effects” found in this space.

Andrés has a Master’s Degree in Psychology with an emphasis in computational models from Stanford and a professional background in graph theory, statistics, and affective science. Andrés was also the co-founder of the Stanford Transhumanist Association and first place winner of the Norway Math Olympiad. His work at QRI ranges from algorithm design, to psychedelic theory, to neurotechnology development, to mapping and studying the computational properties of consciousness. Andrés blogs at qualiacomputing.com.

 

from barelycompiles

Multistage Docker builds keep image sizes down by letting us discard the tools that are only needed for our build and test steps. We create multiple images as stages and can pass artifacts from preceding stages to the current one.

This works because of how Docker images are structured. Each image is a stack of read-only filesystem layers, where each instruction in the Dockerfile (RUN, COPY, ...) creates a new layer on top of the previous ones. Each FROM statement starts a new, independent layer stack. When you COPY --from=<stage>, Docker reaches into that other stage's layer stack and copies files into a new layer in the current stage. The other stage's layers themselves are discarded from the final image.

# ---- Build Stage ----
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# ---- Test Stage ----
FROM build AS test
RUN npm run test

# ---- Production Stage ----
FROM node:20-alpine AS production
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
EXPOSE 3000
CMD ["node", "dist/index.js"]

So this should run through when we do docker build --target production -t myapp ., right? Not quite: we are skipping the test stage, since test is not in the dependency chain leading to production. Docker only builds the stages that the target stage depends on, directly or transitively.

So instead of COPY --from=build /app/dist ./dist we need to write COPY --from=test /app/dist ./dist, which pulls the test stage into the dependency chain.
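With that one-line change, the production stage becomes:

# ---- Production Stage ----
FROM node:20-alpine AS production
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
# Copying from test forces Docker to build that stage first
COPY --from=test /app/dist ./dist
USER node
EXPOSE 3000
CMD ["node", "dist/index.js"]

Since test is built FROM build, /app/dist exists in its filesystem too, so the copied artifact is identical. The difference is that docker build --target production now has to materialize the test stage, which means RUN npm run test executes, and a failing test fails the whole build.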

 

from memorydial

I said I'd build it. Last post I wrote, “That's the next build.” The Eat Watch on my wrist worked, but it was deaf. I logged meals in MyFatnessPal, then tapped the same calories into the watch by hand. I was the middleware. Two systems, no bridge, me in the middle pressing buttons.

Then I built DogWatch.

DogWatch was supposed to be about counting dogs on the walk to daycare. It was. But it taught me plumbing. Data flowing from wrist to phone to server. A Garmin app that talked to Django. By the time the first walk synced, zero dogs and all, I had a pipeline.

If I could sync dog counts, I could sync calories.

The Build

The architecture is simple because the watch is stupid. On purpose.

Every five minutes, the Garmin sends one request to the server: give me today's numbers. The server checks what I've logged in MyFatnessPal, does the maths, and sends back three numbers. Goal. Consumed. Remaining.

The watch stores nothing. Calculates nothing. Decides nothing. It asks one question and displays the answer. Green means eat. Red means stop.
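To show the shape of that endpoint, here's a minimal sketch of the server side, assuming a Django model called Entry with calories and logged_at fields. All the names here are illustrative, not the actual FatWatch code:

from django.db.models import Sum
from django.http import JsonResponse
from django.utils import timezone

from .models import Entry  # hypothetical food-log model

DAILY_GOAL = 1800  # hypothetical fixed calorie goal


def watch_status(request):
    """Return today's three numbers for the watch to display."""
    today = timezone.localdate()
    # Sum every calorie entry logged today; default to 0 if none yet
    consumed = (
        Entry.objects.filter(logged_at__date=today)
        .aggregate(total=Sum("calories"))["total"]
        or 0
    )
    return JsonResponse({
        "goal": DAILY_GOAL,
        "consumed": consumed,
        "remaining": DAILY_GOAL - consumed,
    })

The watch just renders whatever comes back. That's what keeps it stupid on purpose: if the goal logic ever changes, only the server needs a deploy.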

When I log a burrito at lunch, the server knows within five minutes. I don't open anything. I glance at my wrist. The number moved.

Midnight comes, the count starts fresh, and the watch goes green again. The first morning it worked, I just stood there looking at it. A zero I hadn't typed.

Walker called it a fuel gauge. The gauge doesn't know how the engine works. It just reads the tank.

The Skin

Walker never built the Eat Watch. But he drew one. In The Hacker's Diet he mocked up a watch face: square, black, a red-bordered LCD screen with “Marinchip Eat Watch” in italic script across the top. It looked like a Casio from 1985. A “Turbo Digital” badge sat at the bottom like a maker's mark on a thing that never existed.

I wanted mine to look like that. The problem was shape. Walker drew a rectangle. Garmin makes circles. So I redrew it: same bezels, same script, same badge, bent around a round face. The LCD tan, the red border, the italic branding. All of it, just curved.

Now it sits on my wrist. Green text, “EAT,” the remaining calories underneath. A relic from a future that never shipped, finally running on real hardware.

The Arc

A calorie counter. Then a Garmin app. Then a system to connect them. Each build was the logical next step, each question a little harder than the last. Could I build something useful? Could I build for hardware? Could I wire it all together?

The answer kept being yes.

The calorie counter talks to the watch. Loop closed.

I look at my wrist. Green. I can eat.

Walker imagined this in 1991. He never had the watch. I do.

---

If you want to try this yourself:

FatWatch is a Garmin watch face that connects to MyFatnessPal, the calorie counter that started all of this; you can read about it in the first post in this series. If there's enough interest, I'll make both available.

 
Read more...

from Jall Barret

Gina Yashere as the Star Trek character Lura Thok. She wears a red Starfleet uniform with her collar partially unzipped.

I haven't seen the new Star Trek show, Star Trek: Starfleet Academy. It's not that I wouldn't. I don't have Paramount+ because I don't have money coming in. Even if I did, I wouldn't pay money to people eager to kiss the ass of fascists. I do, however, pal around a lot in Star Trek communities.

One of the things I saw coming out of certain ends of 'the fandom' is this “It's impossible to have Jem'Hadar women. How do they have Jem'Hadar women?!” thing. Academy has had like six episodes or something so it's likely this question has already been answered.

It didn't really need to be, though.

Reading the text

As a long-time Star Trek fan who has watched DS9 numerous times (and recently finished another DS9 watch-through), I saw Lura Thok (played by Gina Yashere) in some promotional graphics and thought "Oh, cool! Jem'Hadar!" I'm annoyed to say looking up her character's name gave me a slight spoiler, but I'm not going to replicate that issue for you here.

In DS9, we're given hints over the course of several seasons that ultimately amount to this: the Founders didn't create the Vorta or the Jem'Hadar from scratch. They took species that already existed and modified them to suit their purposes. The Jem'Hadar were made to be their soldiers and the Vorta to be their administrators. According to Vorta legend, the Vorta were once tiny little critters, probably not what we'd consider fully sentient, before the Founders did what old SciFi might call "uplifting" them. Given how much the Founders lie, I'm not sure I'd take that particularly seriously.

We don't have any information about where the Jem'Hadar originally came from, but we have solid reasons to believe they weren't created from scratch. One example is in Hippocratic Oath, where we discover that one Jem'Hadar was able to break free of the genetic addiction to ketracel-white. If the Founders had created the Jem'Hadar completely from scratch and completely controlled their breeding, it's unlikely that Goran'Agar would have been able to manage that. Other episodes reveal that the Jem'Hadar's genetic conditioning isn't as strong as the Founders believe (or purport to believe) it is.

A close up shot of Scott MacDonald in Jem'Hadar makeup playing Goran'Agar. We're looking over the shoulder of Julian Bashir, who is wearing his teal colored uniform.

The obvious answer

All of this speaks to the likelihood that the Jem'Hadar are another species that has been modified. It hints that Jem'Hadar women may still have been somewhat involved in the process of making new Jem'Hadar, at least in the Gamma Quadrant, even if the average Jem'Hadar warrior is completely unaware of that possibility.

That's stuff we can imagine while ignoring the end of DS9 but taking everything else in DS9 into account (without touching Picard or other properties).

Once we do take the end of DS9 into account, we have Odo leading the Founders and we have at least two species who have existed mainly to serve the Dominion which no longer exists. We see time and time again that Odo feels that wholesale slaughter is deeply evil. We also know how much he feels the Jem'Hadar should be able to choose their own destinies on an individual level.

Bumper Robinson in Jem'Hadar makeup playing the unnamed Jem'Hadar teenager in the DS9 episode The Abandoned. He looks at something off screen while Odo watches him.

Given that information, it's very likely that Odo would lead the Founders to restore the Jem'Hadar's independence and to address some of the dissatisfaction the Weyoun clones expressed about the Founders' tinkering with the Vorta's genetic makeup.

Now, that last paragraph is a simple guess about what might have happened, based on my knowledge of the previous paragraphs and of Odo as a character. There are certainly other ways the creators of Starfleet Academy could have solved any issues, if I were to agree there was an issue to address.

One of several long roads

It's a SciFi show. There's any number of ways they could have solved it.

Goran'Agar could have stolen an orb and a shuttle, ridden it into the Celestial Temple, and begged the Prophets to restore his people to what they were before the Founders messed with them. There could have been a big debate because What Was Before Can Never Be Again. Prophet Benjamin says that's true, but every sentient being deserves a chance at a new start. Then the Prophets come up with some brand-new way of being Jem'Hadar, inspired and directed by what Goran'Agar has managed to make of himself since we last saw him.

Julian Bashir, international spy, could have taken over Section 31 through subterfuge. He could then have repurposed their knowledge and capabilities to create a retrovirus that would spread through the Jem'Hadar like a plague. The plague could remove their genetic propensity toward believing the Founders are gods, give their bodies the ability to synthesize what they need from food, and restore their ability to mate regularly.

The imagination is the limit.

Avery Brooks, Alexander Siddig, and Nana Visitor play characters in a spy thriller holonovel. Avery looks at Alexander skeptically while Nana watches from behind holding a cigarette holder in her hand.

The adventure's only getting started

With all that, we've still got people complaining because a Jem'Hadar woman showed up in a trailer and some promotional material for a show that hadn't yet come out. Complaints that, to my eye, don't reasonably engage with the information from shows we've theoretically already seen.

Now, I've got my own theories about why someone might want to do that. I wouldn't call those fan theories since I'm not a fan of what I think is going on. I won't impugn them by suggesting that they aren't actually fans of Star Trek. I will suggest that looking at the shows while paying a little more attention to the themes and subtext might be in order.

Now, how's the show? I still don't know. I hope it's the best Star Trek ever, though. It probably won't be but that won't necessarily be because it sucks. There's only one spot for “best.”

I hope I get to see it one day. I think the first time I heard people talking about wanting a Starfleet Academy show was sometime in the 90s. That wasn't an idea that really appealed to me at the time but there's no reason it can't be great. That's what I hope for it. That it's great. (I do also still hope that it's the best ever but, again, that's unlikely because of the way best works.)

Support the author

I've got two books out in the Vay Ideal series. It's a science fiction adventure series built around an eclectic assortment of travelers who find themselves running an independent ship. I'd love it if you'd check them out. While you can buy them on Amazon, the cover links will take you to a landing page that lets you choose from several other stores as well.

A spaceship flying away from a fuchsia planet. This is Vay Ideal - Book 1, Death In Transit, by Jall Barret. Vay Ideal - Book 2, New Crimes, Old Names, by Jall Barret: a shiny, metal, red box flies over the sky outside a walled city built on a hill. The sky is dark but has stars and hints of an aurora.

#StarTrek #Essay #SciFi

 
Read more...

from Noisy Deadlines

  • Week notes in the middle of the week? Yes, why not!
  • ✏️ So, I was doing the 750 Words daily journaling, and even though it's a great exercise, I can't keep up the pace every day. And because the website tracks streaks, seeing that I broke my streak made me frustrated; some days I was just writing for the sake of keeping the streak going. So I decided to get back to journaling in Standard Notes, where I don't feel as much pressure. I still strive to write every day, but it doesn't need to be 750 words.
  • 🎭 I took some time to acknowledge that January was busy for me, and that I needed to rest. There was a lot going on, and I was setting my standards way too high. I had to slow down and remember my own lessons learned.
  • 🎿 I completed the Level 1 Cross Country Ski course! It was crazy to have classes so late at night, from 8pm to 9:30pm. That night routine affected my energy levels a lot! It made me realize how much I need my sleep and my daily routines to function well. I don't regret it, though. Cross country skiing is harder than I thought, and I will continue with a Level 2 class this weekend.
  • ☃️ It has been pretty cold around here the past few days, so I didn't have the courage to go ice skating or skiing outside much. I did, however, go skiing once on a beautiful trail called Mer Bleue. It was my first outing after finishing the ski class, and it was very challenging! First of all, it was cold (-17C), and even though the trail was mostly flat, skiing for 2 hours for the first time was too much. My partner was with me, and he got excited to go on the big loop; we didn't know how long it was. So, 2 hours later, I was dead tired. I took a long nap afterwards that day to recover.
  • ⛸️ I'm still doing my ice skating classes once a week. I'm making very slow progress skating backwards. I don't think I'll ever be able to do backwards crossovers.
  • 📕 I read “The Just City” by Jo Walton for my local book club, and it was a weird experience. I had heard very positive praise for this book, but it wasn't for me. It's basically a thought experiment about making Plato's Republic idea of a “just city” a reality, which is an interesting premise. But the book kept pointing out what a terrible idea this actually is, showing all the bad consequences, and I missed having more characters who actually questioned the status quo. Anyway, it was interesting to discuss it with my book club, since it's philosophy-adjacent, but I won't continue with the series.
  • 📖 I'm almost done with “Persepolis Rising” (The Expanse #7) for my other book club, and it's so amazing!
  • ❄️ Looking forward to the upcoming long weekend!


#weeknotes

 
Read more... Discuss...
