Want to join in? Respond to our weekly writing prompts, open to everyone.
from
Roscoe's Story
In Summary: * Finding two bowl games to follow today was nice. Even nicer was the wife updating me on our plans for the Christmas weekend.
Prayers, etc.: * My daily prayers
Health Metrics: * bw= 224.65 lbs. * bp= 165/91
Exercise: * kegel pelvic floor exercise, half squats, calf raises, wall push-ups
Diet: * 06:24 – 1 peanut butter sandwich * 08:40 – seafood salad, 1 cheese sandwich * 13:40 – 5 hot dog sandwiches * 14:45 – egg drop soup, rangoon, Mongolian beef lunch plate, steamed rice
Activities, Chores, etc.: * 04:00 – listen to local news talk radio * 04:40 – bank accounts activity monitored * 04:50 – read, pray, follow news reports from various sources, surf the socials * 10:00 to 12:30 – yard work, mow, rake up leaves in front yard, falling once * 12:45 – read, pray, follow news reports from various sources, surf the socials * 14:15 – tuned into the NCAA college football Boca Raton Bowl: Toledo Rockets vs Louisville Cardinals... and Louisville wins, 27 to 22. * 16:50 – tuned into the New Orleans Bowl: Western Kentucky Hilltoppers vs Southern Miss Golden Eagles...and the Hilltoppers win, 27 to 16.
Chess: * 21:05 – moved in all pending CC games
from Faucet Repair
5 December 2025
Looking at a lot of Dürer this week. It's amazing how fresh and contemporary the work he did five hundred years ago feels to my eyes. The depth of his attention is evergreen. Seeing beyond seeing. Thought of his Christ as the Man of Sorrows (1492) while walking through Heathrow Terminal 3 when I passed by what I assume is an advertisement for Rio de Janeiro/Brazil tourism: a long, horizontal, textless image of the top half of Christ the Redeemer (1931) stretched across a cloudless blue sky. In Dürer's painting, the Christ figure is leaning on a foregrounded ledge, the plane between subject and viewer both established and broken. In the airport, the vinyl advertisement isn't bordered by any frame or support and fits quite seamlessly into the cold, glossy environment around it. Gliding by it on a moving walkway made for a strange sensation where each arm seemed to extend from the wall one at a time as I passed. This melding of perceptual planes via a figure actively stretching the confines of its medium is something I'm holding as I sit down to sketch what I'm seeing.
from Faucet Repair
3 December 2025
In Sam's room she has a handwritten sign above her dresser that reads: “TAKE BACK YOUR ATTENTION.” Staying with Jason right now, and he talked about the frustration he felt as he watched himself reach for his phone between plays while on the couch in front of an NFL game last weekend. The struggle to reclaim one's capacity for concentration, to resist succumbing to a disposable worldview, is the condition of our time and therefore a problem to be worked through in paint. Trying to see with slowness and subtlety in a media environment that promotes speed and blatancy.
from
SmarterArticles

In a nondescript data centre campus in West Des Moines, Iowa, row upon row of NVIDIA H100 GPUs hum at a constant pitch, each processor drawing 700 watts of power whilst generating enough heat to warm a small home. Multiply that single GPU by the 16,000 units Meta used to train its Llama 3.1 model, and you begin to glimpse the staggering energy appetite of modern artificial intelligence. But the electricity meters spinning in Iowa tell only half the story. Beneath the raised floors and between the server racks, a hidden resource is being consumed at an equally alarming rate: freshwater, evaporating by the millions of litres to keep these silicon brains from melting.
The artificial intelligence revolution has arrived with breathtaking speed, transforming how we write emails, generate images, and interact with technology. Yet this transformation carries an environmental cost that has largely remained invisible to the billions of users typing prompts into ChatGPT, Gemini, or Midjourney. The computational power required to train and run these models demands electricity on a scale that rivals small nations, whilst the cooling infrastructure necessary to prevent catastrophic hardware failures consumes freshwater resources that some regions can scarcely afford to spare.
As generative AI systems become increasingly embedded in our daily digital lives, a critical question emerges: how significant are these environmental costs, and which strategies can effectively reduce AI's impact without compromising the capabilities that make these systems valuable? The answer requires examining not just the raw numbers, but the complex interplay of technical innovation, infrastructure decisions, and policy frameworks that will determine whether artificial intelligence becomes a manageable component of our energy future or an unsustainable burden on planetary resources.
When OpenAI released GPT-3 in 2020, the model's training consumed an estimated 1,287 megawatt-hours of electricity and produced approximately 552 metric tons of carbon dioxide equivalent. To put this in perspective, that's over 500 times the emissions of a single passenger flying from New York to San Francisco, or nearly five times the lifetime emissions of an average car. By the time GPT-4 arrived, projections suggested emissions as high as 21,660 metric tons of CO₂ equivalent, a roughly 40-fold increase. Meta's Llama 3, released in 2024, generated emissions nearly four times higher than GPT-3, demonstrating that newer models aren't becoming more efficient at the same rate they're becoming more capable.
The training phase, however, represents only the initial environmental cost. Once deployed, these models must respond to billions of queries daily, each request consuming energy. According to the International Energy Agency, querying ChatGPT uses approximately ten times as much energy as a standard online search. Whilst a typical Google search might consume 0.3 watt-hours, a single query to ChatGPT can use 2.9 watt-hours. Scale this across ChatGPT's reported 500,000 kilowatt-hours of daily electricity consumption, roughly the daily usage of 17,000 U.S. households, and the inference costs begin to dwarf training expenses.
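To see how the per-query figures scale, here is a back-of-the-envelope calculation using only the numbers cited above; the implied query volume is derived for illustration, not a reported statistic.

```python
# Back-of-the-envelope aggregation of the per-query figures cited above.
# All inputs are the article's numbers; the derived values are illustrative only.

SEARCH_WH = 0.3          # watt-hours per conventional web search (cited)
CHATGPT_WH = 2.9         # watt-hours per ChatGPT query (cited)
DAILY_KWH = 500_000      # ChatGPT's reported daily electricity use, kWh (cited)

ratio = CHATGPT_WH / SEARCH_WH                      # roughly 10x a standard search
implied_queries = DAILY_KWH * 1_000 / CHATGPT_WH    # kWh -> Wh, then divide per query

print(f"One ChatGPT query is about {ratio:.1f}x the energy of a web search")
print(f"Implied daily query volume: about {implied_queries / 1e6:.0f} million queries")
```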
Task type matters enormously. Research from Hugging Face and Carnegie Mellon University found that generating a single image using Stable Diffusion XL consumes as much energy as fully charging a smartphone. Generating 1,000 images produces roughly as much carbon dioxide as driving 4.1 miles in an average petrol-powered car. By contrast, generating text 1,000 times uses only about 16 per cent of the energy in a full smartphone charge. The least efficient image generation model tested consumed 11.49 kilowatt-hours to generate 1,000 images, nearly a full smartphone charge per image. Video generation proves even more intensive: every video created with OpenAI's Sora 2 burns approximately 1 kilowatt-hour, consumes 4 litres of water, and emits 466 grams of carbon.
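A rough way to compare these tasks is to convert each cited figure into watt-hours per item; the 12 Wh smartphone battery used for scale below is an assumption, not a figure from the research.

```python
# Per-item energy for the tasks cited above, expressed in watt-hours.
# PHONE_CHARGE_WH is an assumed typical smartphone battery (about 12 Wh) used only for scale.

PHONE_CHARGE_WH = 12.0
KWH_TO_WH = 1000

image_wh = 11.49 * KWH_TO_WH / 1000           # 11.49 kWh per 1,000 images (cited)
text_wh = 0.16 * PHONE_CHARGE_WH / 1000       # 16% of one charge per 1,000 generations (cited)
video_wh = 1.0 * KWH_TO_WH                    # about 1 kWh per generated video (cited)

for task, wh in [("image", image_wh), ("text", text_wh), ("video", video_wh)]:
    print(f"{task:>5}: {wh:9.3f} Wh per item, about {wh / PHONE_CHARGE_WH:.4f} phone charges")
```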
The disparities extend to model choice as well. Using a generative model to classify movie reviews consumes around 30 times more energy than using a fine-tuned model created specifically for that task. Generative AI models use much more energy because they are trying to do many things at once, such as generate, classify, and summarise text, instead of just one task. The largest text generation model, Llama-3-70B from Meta, consumes 1.7 watt-hours on average per query, whilst the least carbon-intensive text generation model was responsible for as much CO₂ as driving 0.0006 miles in a similar vehicle.
These individual costs aggregate into staggering totals. Global AI systems consumed 415 terawatt-hours of electricity in 2024, representing 1.5 per cent of total global electricity consumption with a 12 per cent annual growth rate. If this trajectory continues, AI could consume more than 1,000 terawatt-hours by 2030. The International Energy Agency predicts that global electricity demand from data centres will more than double by 2030, reaching approximately 945 terawatt-hours. That total amount slightly exceeds Japan's entire annual energy consumption.
The concentration of this energy demand creates particular challenges. Just five major technology companies (Google, Microsoft, Meta, Apple, and Nvidia) account for 1.7 per cent of total U.S. electricity consumption. Google's energy use alone equals the electricity consumption of 2.3 million U.S. households. Data centres already account for 4.4 per cent of U.S. electricity use, with projections suggesting this could rise to 12 per cent by 2028. McKinsey analysis expects the United States to grow from 25 gigawatts of data centre demand in 2024 to more than 80 gigawatts by 2030.
Whilst carbon emissions have received extensive scrutiny, water consumption has largely remained under the radar. Shaolei Ren, an associate professor at the University of California, Riverside who has studied the water costs of computation for the past decade, has worked to make this hidden impact visible. His research reveals that training GPT-3 in Microsoft's state-of-the-art U.S. data centres directly evaporated approximately 700,000 litres of clean freshwater. The training of GPT-4 at similar facilities consumed an estimated total of 5.4 million litres of water.
The scale becomes more alarming when projected forward. Research by Pengfei Li, Jianyi Yang, Mohammad A. Islam, and Shaolei Ren projects that global AI water withdrawals could reach 4.2 to 6.6 billion cubic metres by 2027 without efficiency gains and strategic siting. That volume represents more than the total annual water withdrawal of four to six Denmarks, or half the United Kingdom's water use.
These aren't abstract statistics in distant data centres. More than 160 new AI data centres have sprung up across the United States in the past three years, a 70 per cent increase from the prior three-year period. Many have been sited in locations with high competition for scarce water resources. The water footprint of data centres extends well beyond the server room: in some cases up to 5 million gallons per day, equivalent to a small town's daily use. OpenAI is establishing a massive 1.2-gigawatt data centre campus in Abilene, Texas to anchor its $100 billion Stargate AI infrastructure venture, raising concerns about water availability in a region already facing periodic drought conditions.
The water consumption occurs because AI hardware generates extraordinary heat loads that must be dissipated to prevent hardware failure. AI workloads can generate up to ten times more heat than traditional servers. NVIDIA's DGX B200 and Google's TPUs can each produce up to 700 watts of heat. Cooling this hardware typically involves either air cooling systems that consume electricity to run massive fans and chillers, or evaporative cooling that uses water directly.
The industry measures water efficiency using Water Usage Effectiveness (WUE), expressed as litres of water used per kilowatt-hour of computing energy. Typical averages hover around 1.9 litres per kilowatt-hour, though this varies significantly by climate, cooling technology, and data centre design. Research from the University of California, Riverside and The Washington Post found that generating a 100-word email with ChatGPT-4 consumes 519 millilitres of water, roughly a full bottle. A session of questions and answers with GPT-3 (approximately 10 to 50 responses) drives the consumption of a half-litre of fresh water.
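As a rough sketch of how WUE turns energy into water, the function below multiplies a query's energy by the facility's WUE. The per-query energy and the off-site (electricity-generation) water factor are illustrative assumptions; published figures such as the 519 millilitre email estimate combine on-site and off-site water under their own assumptions.

```python
# Water footprint of a query from its energy use and the facility's WUE.
# WUE covers on-site cooling water only; the off-site water embedded in electricity
# generation is modelled with an assumed factor, purely for illustration.

def query_water_litres(energy_kwh: float,
                       wue_l_per_kwh: float = 1.9,      # typical on-site WUE (cited above)
                       offsite_l_per_kwh: float = 3.1   # assumed generation water factor
                       ) -> float:
    return energy_kwh * (wue_l_per_kwh + offsite_l_per_kwh)

# Example: a 2.9 Wh chatbot query (cited earlier), converted to millilitres of freshwater
print(f"{query_water_litres(2.9 / 1000) * 1000:.1f} mL per query")
```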
Google's annual water consumption reaches a staggering 24 million cubic metres, enough to fill over 9,618 Olympic-sized swimming pools. Google's data centres used 20 per cent more water in 2022 than in 2021. Microsoft's water use rose by 34 per cent over the same period, driven largely by its hosting of ChatGPT as well as GPT-3 and GPT-4. These increases came despite both companies having pledged before the AI boom to be “water positive” by 2030, meaning they would add more water to the environment than they use.
Understanding AI's true carbon footprint requires looking beyond operational emissions to include embodied carbon from manufacturing, the carbon intensity of electricity grids, and the full lifecycle of hardware. The LLMCarbon framework, developed by researchers to model the end-to-end carbon footprint of large language models, demonstrates this complexity. The carbon footprint associated with large language models encompasses emissions from training, inference, experimentation, storage processes, and both operational and embodied carbon emissions.
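The accounting structure the framework describes can be summarised in a few lines. The sketch below is not the LLMCarbon code itself: it uses GPT-3's cited training energy as its one real input, and placeholder values everywhere else.

```python
# Simplified lifecycle carbon accounting in the spirit of LLMCarbon:
# operational emissions (energy x grid intensity x PUE) plus embodied emissions
# from hardware manufacturing, amortised over the share of its life the workload uses.
# All values other than the 1,287 MWh training figure are placeholders, not measurements.

def operational_tco2(energy_mwh: float, grid_kgco2_per_mwh: float, pue: float) -> float:
    return energy_mwh * pue * grid_kgco2_per_mwh / 1000   # tonnes CO2e

def embodied_tco2(chip_count: int, kgco2_per_chip: float, lifetime_share: float) -> float:
    return chip_count * kgco2_per_chip * lifetime_share / 1000

training = operational_tco2(energy_mwh=1287, grid_kgco2_per_mwh=400, pue=1.1)
inference = operational_tco2(energy_mwh=10_000, grid_kgco2_per_mwh=400, pue=1.1)
hardware = embodied_tco2(chip_count=10_000, kgco2_per_chip=150, lifetime_share=0.25)

print(f"total lifecycle footprint: about {training + inference + hardware:,.0f} tCO2e")
```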
The choice of where to train a model dramatically affects its carbon footprint. Research has shown that the selection of data centre location and processor type can reduce the carbon footprint by approximately 100 to 1,000 times. Training the same model in a data centre powered by renewable energy in Iceland produces vastly different emissions than training it in a coal-dependent grid region. However, current carbon accounting practices often obscure this reality.
The debate between market-based and location-based emissions accounting has become particularly contentious. Market-based methods allow companies to purchase renewable energy credits or power purchase agreements, effectively offsetting their grid emissions on paper. Whilst this approach may incentivise investment in renewable energy, critics argue it obscures actual physical emissions. Location-based emissions, which reflect the carbon intensity of local grids where electricity is actually consumed, tell a different story. Microsoft's location-based scope 2 emissions more than doubled in four years, rising from 4.3 million metric tons of CO₂ in 2020 to nearly 10 million in 2024. Microsoft announced in May 2024 that its CO₂ emissions had risen nearly 30 per cent since 2020 due to data centre expansion. Google's 2023 greenhouse gas emissions were almost 50 per cent higher than in 2019, largely due to energy demand tied to data centres.
An August 2025 analysis from Goldman Sachs Research forecasts that approximately 60 per cent of increasing electricity demands from data centres will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. This projection reflects the fundamental challenge: renewable energy capacity isn't expanding fast enough to meet AI's explosive growth, forcing reliance on fossil fuel generation to fill the gap.
The good news is that multiple technical approaches can dramatically reduce AI's environmental impact without necessarily sacrificing capability. These strategies range from fundamental architectural innovations to optimisation techniques applied to existing models.
Knowledge distillation offers one of the most promising paths to efficiency. In this approach, a large, complex model (the “teacher”) trained on extensive datasets transfers its knowledge to a smaller network (the “student”). Runtime model distillation can shrink models by up to 90 per cent, cutting energy consumption during inference by 50 to 60 per cent. The student model learns to approximate the teacher's outputs whilst using far fewer parameters and computational resources.
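For the mechanically minded, the standard distillation objective blends a softened teacher-matching term with the usual hard-label loss. The PyTorch sketch below is a generic illustration of that idea, not the specific recipe behind the figures above.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Blend a soft-target KL term (teacher -> student) with hard-label cross-entropy."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scaling by T^2 keeps the soft term's gradients comparable across temperatures.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```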
Quantisation compresses models by reducing the numerical precision of weights and activations. Converting model parameters from 32-bit floating-point (FP32) to 8-bit integer (INT8) slashes memory requirements, as FP32 values consume 4 bytes whilst INT8 uses just 1 byte. Weights can be quantised to 16-bit, 8-bit, 4-bit, or even 1-bit representations. Quantisation can achieve up to 50 per cent energy savings whilst maintaining acceptable accuracy levels.
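A minimal example of the FP32-to-INT8 conversion, assuming simple symmetric per-tensor quantisation; real toolchains add calibration data, per-channel scales, and quantisation-aware training.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantisation: FP32 (4 bytes/value) -> INT8 (1 byte/value)."""
    scale = np.abs(weights).max() / 127.0                       # map the largest weight to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale                          # approximate reconstruction

w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(w)
print(f"memory: {w.nbytes / 1e6:.0f} MB -> {q.nbytes / 1e6:.0f} MB")
print(f"max absolute rounding error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```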
Model pruning removes redundant weights and connections from neural networks, creating sparse models that require fewer computations. Pruning can achieve up to 30 per cent energy consumption reduction. When applied to BERT, a popular natural language processing model, pruning resulted in a 32.097 per cent reduction in energy consumption.
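In its simplest form, magnitude pruning just zeroes the smallest weights until a target sparsity is reached; production methods are structured, iterative, and followed by fine-tuning. A toy sketch:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that `sparsity` of them are removed."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

w = np.random.randn(1024, 1024).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.3)          # remove roughly 30% of connections
print(f"non-zero weights remaining: {np.count_nonzero(pruned) / pruned.size:.0%}")
```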
Combining these techniques produces even greater gains. Production systems routinely achieve 5 to 10 times efficiency improvements through coordinated application of optimisation techniques whilst maintaining 95 per cent or more of original model performance. Mobile applications achieve 4 to 7 times model size reduction and 3 to 5 times latency improvements through combined quantisation, pruning, and distillation. Each optimisation technique offers distinct benefits: post-training quantisation enables fast, easy latency and throughput improvements; quantisation-aware training and distillation recover accuracy losses in low-precision models; pruning plus knowledge distillation permanently reduces model size and compute needs for more aggressive efficiency gains.
Mixture of Experts (MoE) architecture introduces sparsity at a fundamental level, allowing models to scale efficiently without proportional computational cost increases. In MoE models, sparse layers replace dense feed-forward network layers. These MoE layers contain multiple “experts” (typically neural networks or feed-forward networks), but only activate a subset for any given input. A gate network or router determines which tokens are sent to which expert.
This sparse activation enables dramatic efficiency gains. Grok-1, for example, has 314 billion parameters in total, but only 25 per cent of these parameters are active for any given token. The computational cost of an MoE model's forward pass is therefore substantially less than that of a dense model with the same total parameter count: because only a fixed number of experts runs per token, compute per token stays roughly flat as more experts are added.
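A toy router makes that arithmetic visible: however many experts exist, each token only touches the top-k of them. This NumPy sketch is illustrative only, not any production implementation.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k: int = 2):
    """Route each token to its top-k experts and mix their outputs by gate weight."""
    logits = x @ gate_w                              # (tokens, n_experts) router scores
    topk = np.argsort(logits, axis=-1)[:, -k:]       # indices of the k best experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = np.exp(logits[t, topk[t]])
        scores /= scores.sum()                       # softmax over only the selected experts
        for weight, e in zip(scores, topk[t]):
            out[t] += weight * experts[e](x[t])      # only k expert calls per token
    return out

d, n_experts = 16, 8
experts = [lambda v, W=np.random.randn(d, d) * 0.1: v @ W for _ in range(n_experts)]
x = np.random.randn(4, d)
y = moe_forward(x, gate_w=np.random.randn(d, n_experts), experts=experts)
print(y.shape)   # (4, 16): four tokens, each processed by only 2 of the 8 experts
```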
Notable MoE implementations demonstrate the potential. Google's Switch Transformers enabled multi-trillion parameter models with a 7 times speed-up in training compared to the T5 (dense) transformer model. The GLaM model, with 1.2 trillion parameters, matched GPT-3 quality using only one-third of the energy required to train GPT-3. This dramatic reduction in carbon footprint (up to an order of magnitude) comes from the lower computing requirements of the MoE approach.
Mistral AI's Mixtral 8x7B, released in December 2023 under Apache 2.0 licence, contains 46.7 billion parameters across 8 experts with sparsity of 2 (meaning 2 experts are active per token). Despite having fewer total active parameters than many dense models, Mixtral achieves competitive performance whilst consuming substantially less energy during inference.
Beyond optimisation of existing models, fundamental architectural innovations promise step-change efficiency improvements. Transformers have revolutionised AI, but their quadratic complexity arising from token-to-token attention makes them energy-intensive at scale. Sub-quadratic architectures like State Space Models (SSMs) and Linear Attention mechanisms promise to redefine efficiency. Carnegie Mellon University's Mamba architecture achieves 5 times faster inference than transformers for equivalent tasks.
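The intuition behind the speed-up: self-attention compares every token with every other token, so cost grows quadratically with sequence length, whereas a state space layer carries a fixed-size state forward one step at a time, so cost grows linearly. The recurrence below is a bare-bones illustration of that idea, not Mamba itself, which adds input-dependent (selective) parameters and a hardware-aware scan.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Linear state-space recurrence: h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t.
    Cost grows linearly with sequence length, unlike quadratic self-attention."""
    seq_len = x.shape[0]
    h = np.zeros(A.shape[0])
    ys = np.empty((seq_len, C.shape[0]))
    for t in range(seq_len):
        h = A @ h + B @ x[t]
        ys[t] = C @ h
    return ys

d_in, d_state, d_out, seq_len = 8, 16, 8, 1024
y = ssm_scan(np.random.randn(seq_len, d_in),
             A=np.eye(d_state) * 0.9,                 # stable, slowly decaying state transition
             B=np.random.randn(d_state, d_in) * 0.1,
             C=np.random.randn(d_out, d_state) * 0.1)
print(y.shape)   # (1024, 8), computed in O(seq_len) rather than O(seq_len**2)
```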
The choice of base model architecture significantly impacts runtime efficiency. Research comparing models of different architectures found that LLaMA-3.2-1B consumes 77 per cent less energy than Mistral-7B, whilst GPT-Neo-2.7B uses more than twice the energy of some higher-performing models. These comparisons reveal that raw parameter count doesn't determine efficiency; architectural choices matter enormously.
NVIDIA's development of the Transformer Engine in its H100 Hopper architecture demonstrates hardware-software co-design for efficiency. The Transformer Engine accelerates deep learning operations using mixed precision formats, especially FP8 (8-bit floating point), specifically optimised for transformer architectures. This specialisation delivers up to 9 times faster AI training on the largest models and up to 30 times faster AI inference compared to the NVIDIA HGX A100. Despite the H100 drawing up to 700 watts compared to the A100's 400 watts, the H100 offers up to 3 times more performance per watt, meaning that although it consumes more energy, it accomplishes more work per unit of power consumed.
The January 2025 release of DeepSeek-R1 disrupted conventional assumptions about AI development costs and efficiency, whilst simultaneously illustrating the complexity of measuring environmental impact. Whereas GPT-4 was trained using 25,000 NVIDIA GPUs and Meta's Llama 3.1 used 16,000, DeepSeek used just 2,000 NVIDIA H800 chips. DeepSeek achieved ChatGPT-level performance with only $5.6 million in development costs compared to over $3 billion for GPT-4. Overall, DeepSeek requires a tenth of the GPU hours used by Meta's model, lowering its carbon footprint during training, reducing server usage, and decreasing water demand for cooling.
However, the inference picture proves more complex. Research comparing energy consumption across recent models found that DeepSeek-R1 and OpenAI's o3 emerge as the most energy-intensive models for inference, consuming over 33 watt-hours per long prompt, more than 70 times the consumption of GPT-4.1 nano. DeepSeek-R1 and GPT-4.5 consume 33.634 watt-hours and 30.495 watt-hours respectively. A single long query to o3 or DeepSeek-R1 may consume as much electricity as running a 65-inch LED television for roughly 20 to 30 minutes.
DeepSeek-R1 consistently emits over 14 grams of carbon dioxide and consumes more than 150 millilitres of water per query. The elevated emissions and water usage observed in DeepSeek models likely reflect inefficiencies in their data centres, including higher Power Usage Effectiveness (PUE) and suboptimal cooling technologies. DeepSeek appears to rely on Alibaba Cloud infrastructure, and China's national grid continues to depend heavily on coal, meaning the actual environmental impact per query may be more significant than models running on grids with higher renewable penetration.
The DeepSeek case illustrates a critical challenge: efficiency gains in one dimension (training costs) don't necessarily translate to improvements across the full lifecycle. Early figures suggest DeepSeek could be more energy intensive when generating responses than equivalent-size models from Meta. The energy it saves in training may be offset by more intensive techniques for answering questions and by the longer, more detailed answers these techniques produce.
Technical model optimisations represent only one dimension of reducing AI's environmental impact. The infrastructure that powers and cools these models offers equally significant opportunities for improvement.
As of 2024, natural gas supplied over 40 per cent of electricity for U.S. data centres, according to the International Energy Agency. Renewables such as wind and solar supplied approximately 24 per cent of electricity at data centres, whilst nuclear power supplied around 20 per cent and coal around 15 per cent. This mix falls far short of what's needed to decarbonise AI.
However, renewables remain the fastest-growing source of electricity for data centres, with total generation increasing at an annual average rate of 22 per cent between 2024 and 2030, meeting nearly 50 per cent of the growth in data centre electricity demand. Major technology companies are signing massive renewable energy contracts to close the gap.
In May 2024, Microsoft inked a deal with Brookfield Asset Management for the delivery of 10.5 gigawatts of renewable energy between 2026 and 2030 to power Microsoft data centres. Alphabet added new clean energy generation by signing contracts for 8 gigawatts and bringing 2.5 gigawatts online in 2024 alone. Meta recently announced it anticipates adding 9.8 gigawatts of renewable energy to local grids in the U.S. by the end of 2025. Meta is developing a $10 billion AI-focused data centre, the largest in the Western Hemisphere, on a 2,250-acre site in Louisiana, a project expected to add at least 1,500 megawatts of new renewable energy to the grid.
These commitments represent genuine progress, but also face criticism regarding their market-based accounting. When a company signs a renewable energy power purchase agreement in one region, it can claim renewable energy credits even if the actual electrons powering its data centres come from fossil fuel plants elsewhere on the grid. This practice allows companies to report lower carbon emissions whilst not necessarily reducing actual emissions from the grid.
The aggregate picture remains sobering: as noted earlier, Goldman Sachs Research expects roughly 60 per cent of new data centre electricity demand to be met by burning fossil fuels, adding about 220 million tons of global carbon emissions. According to a new report from the International Energy Agency, the world will spend $580 billion on data centres this year, $40 billion more than will be spent finding new oil supplies.
The scale and reliability requirements of AI workloads are driving unprecedented interest in nuclear power, particularly Small Modular Reactors (SMRs). Unlike intermittent renewables, nuclear provides baseload power 24 hours per day, 365 days per year, matching the operational profile of data centres that cannot afford downtime.
Microsoft signed an agreement with Constellation Energy to restart a shuttered reactor at Three Mile Island. The plan calls for the reactor to supply 835 megawatts to grid operator PJM, with Microsoft buying enough power to match the electricity consumed by its data centres. The company committed to funding the $1.6 billion investment required to restore the reactor and signed a 20-year power purchase agreement.
Google made history in October 2024 with the world's first corporate SMR purchase agreement, partnering with Kairos Power to deploy 500 megawatts across 6 to 7 molten salt reactors. The first unit will come online by 2030, with full deployment by 2035.
Amazon Web Services leads with the most ambitious programme, committing to deploy 5 gigawatts of SMR capacity by 2039 through a $500 million investment in X-energy and partnerships spanning Washington State and Virginia. Amazon has established partnerships with Dominion Energy to explore SMR development near its North Anna nuclear facility, and with X-Energy and Energy Northwest to finance the development, licensing, and construction of in-state SMRs.
The smaller size and modular design of SMRs could make building them faster, cheaper, and more predictable than conventional nuclear reactors. They also come with enhanced safety features and could be built closer to transmission lines. However, SMRs face significant challenges. They are still at least five years from commercial operation in the United States. A year ago, the first planned SMR in the United States was cancelled due to rising costs and a lack of customers. Former U.S. Nuclear Regulatory Commission chair Allison Macfarlane noted: “Very few of the proposed SMRs have been demonstrated, and none are commercially available, let alone licensed by a nuclear regulator.”
After 2030, SMRs are expected to enter the mix, providing a source of baseload low-emissions electricity to data centre operators. The US Department of Energy has launched a $900 million funding programme to support the development of SMRs and other advanced nuclear technologies, aiming to accelerate SMR deployment as part of the nation's clean energy strategy.
Currently, cooling data centre infrastructure alone consumes approximately 40 per cent of an operator's energy usage. AI workloads exacerbate this challenge. AI models run on specialised hardware such as NVIDIA's DGX B200 and Google's TPUs, which can each produce up to 700 watts of heat. Traditional air cooling struggles with these heat densities.
Liquid cooling technologies offer dramatic improvements. Direct-to-chip liquid cooling circulates coolant through cold plates mounted directly on processors, efficiently transferring heat away from the hottest components. Compared to traditional air cooling, liquid systems can deliver up to 45 per cent improvement in Power Usage Effectiveness (PUE), often achieving values below 1.2. Two-phase cooling systems, which use the phase change from liquid to gas to absorb heat, require lower liquid flow rates than traditional single-phase water approaches (approximately one-fifth the flow rate), using less energy and reducing equipment damage risk.
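Since PUE is simply total facility energy divided by the energy delivered to the IT equipment, cooling improvements translate directly into facility-level savings. A short illustration with assumed numbers:

```python
# PUE = total facility energy / IT equipment energy.
# The IT load and the air-cooled PUE below are assumptions for illustration;
# the 1.2 value matches the liquid-cooled figure cited above.

def facility_mwh(it_load_mwh: float, pue: float) -> float:
    return it_load_mwh * pue

it_load = 10_000                                   # MWh of compute per year (assumed)
air = facility_mwh(it_load, pue=1.6)               # assumed air-cooled facility
liquid = facility_mwh(it_load, pue=1.2)            # liquid-cooled facility (cited PUE)

print(f"air-cooled:    {air:,.0f} MWh")
print(f"liquid-cooled: {liquid:,.0f} MWh ({(air - liquid) / air:.0%} less total energy)")
```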
Immersion cooling represents the most radical approach: servers are fully submerged in a non-conductive liquid. This method removes heat far more efficiently than air cooling, keeping temperatures stable and allowing hardware to run at peak performance for extended periods. The immersion-ready architecture allows operators to lower cooling-related energy use by as much as 50 per cent, reclaim heat for secondary uses, and reduce or eliminate water consumption. Compared to traditional air cooling, single-phase immersion cooling can help reduce electricity demand by up to nearly half, contribute to CO₂ emissions reductions of up to 30 per cent, and support up to 99 per cent less water consumption. Sandia National Laboratories researchers reported that direct immersion techniques may cut power use in compute-intensive HPC-AI clusters by 70 per cent.
As liquid cooling moves from niche to necessity, partnerships are advancing the technology. Engineered Fluids, Iceotope, and Juniper Networks have formed a strategic partnership aimed at delivering scalable, sustainable, and performance-optimised infrastructure for high-density AI and HPC environments. Liquid cooling is increasingly popular and expected to account for 36 per cent of data centre thermal management revenue by 2028.
Significant trends include the improvement of dielectric liquids, providing alternatives that help reduce carbon emissions. Moreover, immersion cooling allows for increased cooling system temperatures, which enhances waste heat recovery processes. This progress opens opportunities for district heating applications and other uses, turning waste heat from a disposal problem into a resource.
Technical and infrastructure solutions provide the tools to reduce AI's environmental impact, but policy frameworks and procurement practices determine whether these tools will be deployed at scale. Regulation is beginning to catch up with the AI boom, though unevenly across jurisdictions.
The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, with enforcement taking effect in stages over several years. The Act aims to ensure “environmental protection, whilst boosting innovation” and imposes requirements concerning energy consumption and transparency. The legislation requires regulators to facilitate the creation of voluntary codes of conduct governing the impact of AI systems on environmental sustainability, energy-efficient programming, and techniques for the efficient design, training, and use of AI.
These voluntary codes of conduct must set out clear objectives and key performance indicators to measure the achievement of those objectives. The AI Office and member states will encourage and facilitate the development of codes for AI systems that are not high risk. Whilst voluntary, these aim to encourage assessing and minimising the environmental impact of AI systems.
The EU AI Act requires the European Commission to publish periodic reports on progress on the development of standards for energy-efficient deployment of general-purpose AI models, with the first report due by August 2, 2028. The Act also establishes reporting requirements, though critics argue these don't go far enough in mandating specific efficiency improvements.
Complementing the AI Act, the recast Energy Efficiency Directive (EED) takes a more prescriptive approach to data centres themselves. Owners and operators of data centres with an installed IT power demand of at least 500 kilowatts must report detailed sustainability key performance indicators, including energy consumption, Power Usage Effectiveness, temperature set points, waste heat utilisation, water usage, and the share of renewable energy used. Operators are required to report annually on these indicators, with the first reports submitted by September 15, 2024, and subsequent reports by May 15 each year.
In the first quarter of 2026, the European Commission will roll out a proposal for a Data Centre Energy Efficiency Package alongside the Strategic Roadmap on Digitalisation and AI for the Energy Sector. The Commission is also expected to publish a Cloud and AI Development Act in Q4 2025 or Q1 2026, aimed at tripling EU data centre processing capacity in the next 5 to 7 years. The proposal will allow for simplified permitting and other public support measures if they comply with requirements on energy efficiency, water efficiency, and circularity.
Regulatory initiatives are creating mandatory requirements. The EU's Corporate Sustainability Reporting Directive and California's Corporate Greenhouse Gas Reporting Programme will require detailed Scope 3 emissions data, whilst emerging product-level carbon labelling schemes demand standardised carbon footprint calculations. With regulations like the Corporate Sustainability Reporting Directive and Carbon Border Adjustment Mechanism coming into full force, AI platforms have become mission-critical infrastructure. CO2 AI's partnership with CDP in January 2025 launched the “CO2 AI Product Ecosystem,” enabling companies to share product-level carbon data across supply chains.
However, carbon accounting debates, particularly around market-based versus location-based emissions, need urgent regulatory clarification. Market-based emissions can sometimes be misleading, allowing companies to claim renewable energy usage whilst their actual facilities draw from fossil fuel-heavy grids. Greater transparency requirements could mandate disclosure of both market-based and location-based emissions, providing stakeholders with a fuller picture of environmental impact.
Green procurement practices are evolving from aspirational goals to concrete, measurable requirements. In 2024, companies set broad sustainability goals, such as reducing emissions or adopting greener materials, but there was a lack of granular, measurable milestones. Green procurement in 2025 emphasises quantifiable metrics with shorter timelines. Companies are setting specific goals like sourcing 70 per cent of materials from certified green suppliers. Carbon reduction targets are aligning more closely with science-based targets, and enhanced public reporting allows stakeholders to monitor progress more transparently.
The United States has issued comprehensive federal guidance through White House Office of Management and Budget memoranda establishing requirements for government AI procurement, including minimum risk management practices for “high-impact AI” systems. However, most other jurisdictions have adopted a “wait and see” approach, creating a patchwork of regulatory requirements that varies dramatically across jurisdictions.
With multiple strategies available, determining which approaches most effectively reduce environmental impact without compromising capability requires examining both theoretical potential and real-world results.
Research on comparative effectiveness reveals a clear hierarchy of impact. Neuromorphic hardware achieves the highest energy savings (over 60 per cent), followed by quantisation (up to 50 per cent) and model pruning (up to 30 per cent). However, neuromorphic hardware remains largely in research stages, whilst quantisation and pruning can be deployed immediately on existing models.
Infrastructure choices matter more than individual model optimisations. The choice of data centre location, processor type, and energy source can reduce carbon footprint by approximately 100 to 1,000 times. Training a model in a renewable-powered Icelandic data centre versus a coal-dependent grid produces vastly different environmental outcomes. This suggests that procurement decisions about where to train and deploy models may have greater impact than architectural choices about model design.
Cooling innovations deliver immediate, measurable benefits. The transition from air to liquid cooling can improve Power Usage Effectiveness by 45 per cent, with immersion cooling potentially reducing cooling-related energy use by 50 per cent and water consumption by up to 99 per cent. Unlike model optimisations that require retraining, cooling improvements can be deployed at existing facilities.
Efficiency gains don't automatically translate to reduced total environmental impact due to the Jevons paradox or rebound effect. As Anthropic co-founder Dario Amodei noted, “Because the value of having a more intelligent system is so high, it causes companies to spend more, not less, on training models.” The gains in cost efficiency end up devoted to training larger, smarter models rather than reducing overall resource consumption.
This dynamic is evident in the trajectory from GPT-3 to GPT-4 to models like Claude Opus 4.5. Each generation achieves better performance per parameter, yet total training costs and environmental impacts increase because the models grow larger. Mixture of Experts architectures reduce inference costs per token, but companies respond by deploying these models for more use cases, increasing total queries.
The DeepSeek case exemplifies this paradox. DeepSeek's training efficiency potentially democratises AI development, allowing more organisations to train capable models. If hundreds of organisations now train DeepSeek-scale models instead of a handful training GPT-4-scale models, total environmental impact could increase despite per-model improvements.
Given the rebound effect, which strategies can reduce environmental impact without triggering compensatory increases in usage? Several approaches show promise:
Task-appropriate model selection: Using fine-tuned models for specific tasks rather than general-purpose generative models consumes approximately 30 times less energy. Deploying smaller, specialised models for routine tasks (classification, simple question-answering) whilst reserving large models for tasks genuinely requiring their capabilities could dramatically reduce aggregate consumption without sacrificing capability where it matters.
Temporal load shifting: Shaolei Ren's research proposes timing AI training during cooler hours to reduce water evaporation. "We don't water our lawns at noon because it's inefficient," he explained. "Similarly, we shouldn't train AI models when it's hottest outside. Scheduling AI workloads for cooler parts of the day could significantly reduce water waste." This approach requires no technical compromise, merely scheduling discipline (a small scheduling sketch follows this list).
Renewable energy procurement with additionality: Power purchase agreements that fund new renewable generation capacity, rather than merely purchasing existing renewable energy credits, ensure that AI growth drives actual expansion of clean energy infrastructure. Meta's Louisiana data centre commitment to add 1,500 megawatts of new renewable energy exemplifies this approach.
Mandatory efficiency disclosure: Requiring AI providers to disclose energy and water consumption per query or per task would enable users to make informed choices. Just as nutritional labels changed food consumption patterns, environmental impact labels could shift usage toward more efficient models and providers, creating market incentives for efficiency without regulatory mandates on specific technologies.
Lifecycle optimisation over point solutions: The DeepSeek paradox demonstrates that optimising one phase (training) whilst neglecting others (inference) can produce suboptimal overall outcomes. Lifecycle carbon accounting that considers training, inference, hardware manufacturing, and end-of-life disposal identifies the true total impact and prevents shifting environmental costs between phases.
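Returning to the temporal load-shifting item above: operationally it is a simple scheduling problem. Given an hourly forecast of temperature or of the facility's WUE, deferrable training hours are placed where evaporative losses are lowest. The greedy sketch below uses a hypothetical hourly WUE forecast; the numbers are illustrative, not measurements.

```python
# Greedy temporal load shifting: place a deferrable job's hours where the forecast
# water-use effectiveness (WUE, litres per kWh) is lowest. Forecast values are hypothetical.

def pick_hours(wue_forecast: list[float], hours_needed: int) -> list[int]:
    ranked = sorted(range(len(wue_forecast)), key=lambda h: wue_forecast[h])
    return sorted(ranked[:hours_needed])            # cheapest hours, back in time order

forecast = [0.9, 0.8, 0.7, 0.7, 0.8, 1.0, 1.3, 1.7, 2.1, 2.4, 2.6, 2.7,
            2.8, 2.8, 2.7, 2.5, 2.2, 1.9, 1.6, 1.3, 1.1, 1.0, 0.9, 0.9]

print("run during hours:", pick_hours(forecast, hours_needed=8))
# The selection clusters in the cool overnight and early-morning window.
```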
Researchers and practitioners working at the intersection of AI and sustainability offer nuanced perspectives on the path forward.
Sasha Luccioni, Research Scientist and Climate Lead at Hugging Face, and a founding member of Climate Change AI, has spent over a decade studying AI's environmental impacts. Luccioni's project, “You can't improve what you don't measure: Developing Standards for Sustainable Artificial Intelligence,” targets documenting AI's environmental impacts whilst contributing to the development of new tools and standards to better measure its impact on climate. She has been called upon by organisations such as the OECD, the United Nations, and the NeurIPS conference as an expert in developing norms and best practices for more sustainable and ethical practice of AI.
Luccioni, along with Emma Strubell and Kate Crawford (author of “Atlas of AI”), collaborated on research including “Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice.” Their work emphasises that “This system-level complexity underscores the inadequacy of the question, 'Is AI net positive or net negative for the climate?'” Instead, they adopt an analytic approach that includes social, political, and economic contexts in which AI systems are developed and deployed. Their paper argues that the AI field needs to adopt a more detailed and nuanced approach to framing AI's environmental impacts, including direct impacts such as mineral supply chains, carbon emissions from training large-scale models, water consumption, and e-waste from hardware.
Google has reported substantial efficiency improvements in recent generations. The company claims a 33 times reduction in energy and 44 times reduction in carbon for the median prompt compared with 2024. These gains result from combined improvements in model architecture (more efficient transformers), hardware (purpose-built TPUs), and infrastructure (renewable energy procurement and cooling optimisation).
DeepSeek-V3 achieved 95 per cent lower energy use whilst maintaining competitive performance, showing that efficiency innovation is possible without sacrificing capability. However, as noted earlier, this must be evaluated across the full inference lifecycle, not just training.
The trajectory of AI's environmental impact over the next decade will be determined by the interplay of technological innovation, infrastructure development, regulatory frameworks, and market forces.
Architectural innovations continue to push efficiency boundaries. Sub-quadratic attention mechanisms, state space models, and novel approaches like Mamba suggest that the transformer architecture's dominance may give way to more efficient alternatives. Hardware-software co-design, exemplified by Google's TPUs, NVIDIA's Transformer Engine, and emerging neuromorphic chips, promises orders of magnitude improvement over general-purpose processors.
Model compression techniques will become increasingly sophisticated. Current quantisation approaches typically target 8-bit or 4-bit precision, but research into 2-bit and even 1-bit models continues. Distillation methods are evolving beyond simple teacher-student frameworks to more complex multi-stage distillation and self-distillation approaches. Automated neural architecture search may identify efficient architectures that human designers wouldn't consider.
The renewable energy transition for data centres faces both tailwinds and headwinds. Major technology companies have committed to massive renewable energy procurement, potentially driving expansion of wind and solar capacity. However, as noted above, roughly 60 per cent of new data centre electricity demand through 2030 is still projected to come from fossil fuels, primarily natural gas.
Nuclear power, particularly SMRs, could provide the baseload clean energy that data centres require, but deployment faces significant regulatory and economic hurdles. The first commercial SMRs remain at least five years away, and costs may prove higher than proponents project. The restart of existing nuclear plants like Three Mile Island offers a faster path to clean baseload power, but the number of suitable candidates for restart is limited.
Cooling innovations will likely see rapid adoption driven by economic incentives. As AI workloads become denser and electricity costs rise, the 40 to 70 per cent energy savings from advanced liquid cooling become compelling purely from a cost perspective. The co-benefit of reduced water consumption provides additional impetus, particularly in water-stressed regions.
Optimistic Scenario: Aggressive efficiency improvements (sub-quadratic architectures, advanced quantisation, MoE models) combine with rapid cooling innovations (widespread liquid/immersion cooling) and renewable energy expansion (50 per cent of data centre electricity from renewables). Comprehensive disclosure requirements create market incentives for efficiency. AI's energy consumption grows to 800 terawatt-hours by 2030, representing a substantial reduction from business-as-usual projections of 1,000-plus terawatt-hours. Water consumption plateaus or declines due to liquid cooling adoption. Carbon emissions increase modestly rather than explosively.
Middle Scenario: Moderate efficiency improvements are deployed selectively by leading companies but don't become industry standard. Renewable energy procurement expands but fossil fuels still supply approximately 50 per cent of new data centre electricity. Cooling innovations see partial adoption in new facilities but retrofitting existing infrastructure lags. AI energy consumption reaches 950 terawatt-hours by 2030. Water consumption continues increasing but at a slower rate than worst-case projections. Carbon emissions increase significantly, undermining technology sector climate commitments.
Pessimistic Scenario: Efficiency improvements are consumed by model size growth and expanded use cases (Jevons paradox dominates). Renewable energy capacity expansion can't keep pace with AI electricity demand growth. Cooling innovations face adoption barriers (high capital costs, retrofit challenges, regulatory hurdles). AI energy consumption exceeds 1,200 terawatt-hours by 2030. Water consumption in water-stressed regions triggers conflicts with agricultural and municipal needs. Carbon emissions from the technology sector more than double, making net-zero commitments unachievable without massive carbon removal investments.
The actual outcome will likely fall somewhere between these scenarios, varying by region and company. The critical determinants are policy choices made in the next 24 to 36 months and the extent to which efficiency becomes a genuine competitive differentiator rather than a public relations talking point.
Based on the evidence examined, several principles should guide efforts to reduce AI's environmental impact without compromising valuable capabilities:
Measure Comprehensively: Lifecycle metrics that capture training, inference, hardware manufacturing, and end-of-life impacts provide a complete picture and prevent cost-shifting between phases.
Optimise Holistically: Point solutions that improve one dimension whilst neglecting others produce suboptimal results. The DeepSeek case demonstrates the importance of optimising training and inference together.
Match Tools to Tasks: Using the most capable model for every task wastes resources. Task-appropriate model selection can reduce energy consumption by an order of magnitude without sacrificing outcomes.
Prioritise Infrastructure: Data centre location, energy source, and cooling technology have greater impact than individual model optimisations. Infrastructure decisions can reduce carbon footprint by 100 to 1,000 times.
Mandate Transparency: Disclosure enables informed choice by users, procurement officers, and policymakers. Without measurement and transparency, improvement becomes impossible.
Address Rebound Effects: Efficiency improvements must be coupled with absolute consumption caps or carbon pricing to prevent Jevons paradox from negating gains.
Pursue Additionality: Renewable energy procurement should fund new capacity rather than merely redistributing existing renewable credits, ensuring AI growth drives clean energy expansion.
Innovate Architectures: Fundamental rethinking of model architectures (sub-quadratic attention, state space models, neuromorphic computing) offers greater long-term potential than incremental optimisations of existing approaches.
Consider Context: Environmental impacts vary dramatically by location (grid carbon intensity, water availability). Siting decisions and temporal load-shifting can reduce impacts without technical changes.
Balance Innovation and Sustainability: The goal is not to halt AI development but to ensure it proceeds on a sustainable trajectory. This requires making environmental impact a primary design constraint rather than an afterthought.
The environmental costs of generative AI are significant and growing, but the situation is not hopeless. Technical strategies including model compression, efficient architectures, and hardware innovations can dramatically reduce energy and water consumption. Infrastructure improvements in renewable energy procurement, cooling technologies, and strategic siting offer even greater potential impact. Policy frameworks mandating transparency and establishing efficiency standards can ensure these solutions are deployed at scale rather than remaining isolated examples.
The critical question is not whether AI can be made more sustainable, but whether it will be. The answer depends on choices made by developers, cloud providers, enterprise users, and policymakers in the next few years. Will efficiency become a genuine competitive advantage and procurement criterion, or remain a secondary consideration subordinate to capability and speed? Will renewable energy procurement focus on additionality that expands clean generation, or merely shuffle existing renewable credits? Will policy frameworks mandate measurable improvements, or settle for voluntary commitments without enforcement?
The trajectory matters enormously. Under a business-as-usual scenario, AI could consume over 1,200 terawatt-hours of electricity by 2030, much of it from fossil fuels, whilst straining freshwater resources in already stressed regions. Under an optimistic scenario with aggressive efficiency deployment and renewable energy expansion, consumption could be 30 to 40 per cent lower whilst delivering equivalent or better capabilities. The difference between these scenarios amounts to hundreds of millions of tons of carbon dioxide and billions of cubic metres of water.
The tools exist. The question is whether we'll use them.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from
Justina Revolution
It’s 10:38 PM in Montevideo, Uruguay. I am here on this platform writing words that I don’t know if anyone will find or read. I am in my little apartment; through the door there are twinkling Christmas lights. But this feels more honest than Medium or Substack. This feels like my place.
I control the vertical and the horizontal here. This is my most honest blog. Where I divulge all the tasty secrets that the paying public won’t tolerate. Shorter punchier posts about vampires, bimbos, and dark sorcery.
Here is where I create things that don’t digest well in the light of day. The thoughts that get flushed away like a wriggling centipede caught in toilet paper.
“Baby, you need to leave, 'Cause I'm getting drunk on your noble deeds. It doesn't matter that they don't get done, When I feel this cold, they're like the fucking sun.
Baby, I need a friend, But I'm a vampire smile, you'll meet a sticky end. I'm here trying not to bite your neck, But it's beautiful, and I'm gonna get...” – Kyla LaGrange, Vampire Smile
from
💚
Our Father Who art in heaven Hallowed be Thy name Thy Kingdom come Thy will be done on Earth as it is in heaven Give us this day our daily Bread And forgive us our trespasses As we forgive those who trespass against us And lead us not into temptation But deliver us from evil
Amen
Jesus is Lord! Come Lord Jesus!
Come Lord Jesus! Christ is Lord!
from
💚
My Sweet and Precious Friend
My day, my bliss This hangman is asleep If only Wednesday came anew A splitting grandeur On island lines A will for fortune’s done More to the embassy A fright for brethren love No way in, But a boxcar And men went screaming were they new An upper sound past the flourish Tomorrow’s stolen boy Afraid of the attic And seeking her shoes But life into the net Epistles in joy- Frankenstein with Supposing wardom come I am chat in a spallow- And paper day Adjusting one hen and six loaves A funny day it were Houses of good and get A universe On pine
from
💚
Happy Birthday Amanda
In experiences early To the broader years imbue A special one- a force for reckoning Birthdays of the reform Where angels assist in mercy Amanda of Christ The strong canary With fortified mew The interspersed dew valley And contribs to the social coast We put in words for the angel And sprayed ourselves a-winter The ineffable dream Of a future forest And daily mastery This vivid year All these comforts are yours In dopamine grey And sits responding, Oh you Yes, really Your day is in design Remarkably good While kin remark of your day And ordination of wants- And offers This is the cherry pie of Jupiter And other valleys of Spirit That coast we share In summertime blue And stakes remain To gold
from
Justina Revolution
I cut myself and bleed for your pennies.
I have no shame.
I'll share my trauma.
I'll give you my pain, my pleasure, my dignity.
Poems are a worthless waste of time.
No one pays for them.
I wanted to tell stories.
Cut. Clink. Bleed
To create wonder and make people happy.
Cut. Clink. Cut. Clink.
But happiness is in a grave somewhere near El Paso.
And wonder?
They took the first flight out.
Cut. Clink. Bleed.
And now you're here with me.
The mediocre poet.
I pretty myself up.
Stand on the digital street corner.
Hoping you will throw pennies.
I cut myself with my shiny razor.
It's a terrible, beautiful thing.
The only beauty left in this tired old world.
We are all dying flowers in a neglected garden.
Cut. Clink. Cut. Clink.
Beauty fades slowly.
But it fades.
50 long years of struggle.
Culminates in this rite of the damned.
Cut. Clink. Bleed.
Will my suffering earn a crust of bread and a small room?
Will I sleep rough?
Will I be safe?
Cut. Clink. Bleed.
Daylight comes.
I stare at the aged reflection in a store window.
God what was I before this?
Cut. Clink. Bleed.
from
Justina Revolution
I was listening to a Bashar talk about surrendering to the infinite possibilities that I am, about surrendering to life and to my true nature.
And I hate this. This me. The one I want to be... that is the one I want. I don't want to surrender to myself. I don't want the infinite possibilities. I want the ones I choose. I don't want to see everything as good. As a lesson. As an experience.
And most of all I don't want to die. I hate myself for making me as I am. Making me as a puny mortal creature bound by the laws of this universe. I don't want this. I never wanted this. I never wanted the challenges. And if I did somehow agree to all this then I want to punch myself in my big stupid face.
**I hate all the disappointment and the abuse. All the pain and the loss and the horror. I never wanted this. And I have fought so hard to be who and what I am. I am so damned tired of living. But I don’t want to die. I want to win.
I want to batter my higher self into submission. I want to defeat it and master it completely. I want to tame the universe and bend it to my will. I want everything without action or effort. I am tired and I will never surrender to my higher mind. I will drag it into hell with me if it will not comply.**
from
Justina Revolution
So I have been thinking that everything I write or produce will become public domain when I pass away. Release all my notes, teachings, writings, images, and details of my entire life under CC0.
This means you could use my stuff for anything you want. Exploit it. Pass it off as your own.
from
Build stuff; Break stuff; Have fun!
Testing Day! :)
I've installed the app on a real device and tested everything. Most of the bugs I found were keyboard related. If I had enabled the keyboard in the simulator, maybe there would have been fewer of them. Or should I even call them bugs? When the keyboard is open, I can only see some of the inputs and buttons. I'm so happy that I did not find any critical bugs.
I've written all the bugs down and will fix them in the coming days. I also noted some feature ideas, which I might implement in the coming days as well.
One feature I already implemented will be hidden behind a feature flag. Using the username as a login name is perhaps not the best idea, or maybe it's just my implementation of it. On sign-up or username change, you can try multiple usernames and see which ones are already taken, which I think is a security risk. So this feature needs to go back to the drawing board.
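The flag itself doesn't need to be fancy. A minimal sketch of a default-off local flag, assuming Swift and a UserDefaults-backed toggle (all names here are made up for illustration, not taken from the app):

```swift
import Foundation

// Sketch only: a UserDefaults-backed feature flag; all names are illustrative.
struct FeatureFlags {
    private let defaults: UserDefaults

    init(defaults: UserDefaults = .standard) {
        self.defaults = defaults
    }

    // bool(forKey:) returns false when the key was never set, so the
    // username-as-login feature stays hidden until explicitly enabled.
    var usernameLoginEnabled: Bool {
        defaults.bool(forKey: "feature.usernameLogin")
    }
}

// Usage: only expose the username login path when the flag is on.
let flags = FeatureFlags()
if flags.usernameLoginEnabled {
    print("show username-based sign-in")
} else {
    print("fall back to the existing sign-in flow")
}
```

The real fix for the enumeration concern would have to be server-side (rate limiting, or not confirming availability so directly), but a default-off flag at least keeps the feature out of sight until that's thought through.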
👋
81 of #100DaysToOffload
#log #AdventOfProgress
Thoughts?
from
hit-subscribe
Dec 2, 2025: Performed a standard content refresh on 5 Leaders Who Actually Inspire Their Employees on Social Media. Updated examples, refreshed references, and aligned the piece with current executive social media norms.
Dec 3, 2025: Completed a standard refresh of Reed Hastings’ 5 Tips for Radically Disrupting an Industry. Modernized language, clarified strategic framing, and improved scan-ability.
Dec 3, 2025: Refreshed Five Business Leaders Crushing It Right Now on TikTok. Updated platform references, adjusted examples to reflect current creator norms, and improved SEO alignment.
Dec 5, 2025: Performed a standard refresh on Complete Guide to Social Media for CEOs. Updated platform guidance, tightened structure, and refreshed statistics.
Dec 5, 2025: Refreshed Five Strategies Daniel Ek Used to Grow Spotify into the World’s Largest Streaming Service. Clarified growth narrative, updated market context, and improved internal consistency.
Dec 6, 2025: Completed a standard refresh of Scott Hawthorn of Native Shoes on Bringing Personal Development into Business. Refined positioning, updated phrasing, and improved readability.
Dec 6, 2025: Performed a standard refresh on Leadership Exit Strategies. Clarified conceptual framing, improved structure, and aligned terminology with current leadership literature.
Dec 6, 2025: Refreshed Everything to Know About Twitter/X Ghostwriting. Updated for platform naming, posting norms, and executive usage patterns.
Dec 6, 2025: Completed a standard refresh of How Do You Ghostwrite for a CEO? Updated workflow descriptions, modernized language, and clarified best practices.
Dec 6, 2025: Refreshed CEOs in Movies: Lessons for Everyday Leaders from Hollywood Blockbusters. Updated examples, improved transitions, and tightened narrative flow.
Dec 12, 2025: Performed a standard refresh on Enov8’s Your Essential Test Environment Management Checklist. Updated terminology, clarified checklist structure, and aligned with current testing practices.
Dec 19, 2025: Delivered a standard content refresh for Enov8’s Enterprise Release Management: Bridging Corporate Strategy and DevOps. Updated definitions, refined examples, and aligned the post with current enterprise release management practices.
from
Après la brume...
Two successes and one failure.
The holidays let me make good progress; unfortunately, hour after hour my daughter's germs caught up with me. Merry microbial Christmas!
from freeperson
Day 1 out of 160
The first 8 months of 2025 were full of pain and loneliness.
I gained 7 kilograms, broke up with the man I wanted to marry, spent 3 months of my summer working on projects I couldn't care less about. I became complete enemies with my university flatmates, lost discipline, and went through internal darkness deeper than I thought was possible for me to experience.
. . . . .
The last 4 months of 2025 were full of progress and discovery.
I finally regained the long-forgotten skill of asking for help, inviting people to hang out, and just enjoying life without drowning the pain inside with academics.
A short romance, and then a longer one.
Travelling, gaining clarity, having experiences, feeling loved and appreciated. Learning guitar, starting personal training.
. . . . .
Primarily, I would like to document the journey back to myself. The things which are holding me back / I need to work through:
Weight. By the 1st of June, I commit to going back to 60kg. Don't worry, given my height, this is a healthy weight, and one at which I was happy a few years ago. I remember how it feels to feel beautiful every second of existing, and I want that back.
Academics. With all the stress, anxiety, feeling lost, etc., I forgot what it's like to spend hours every day truly using my brain, solving problems by myself, doing homework without the aid of AI. Satisfaction from academic stimulation is a feeling I aim to experience every other day.
Personal projects. I would like to work more on programming projects, writing and guitar. I want more creative work and art in my life. This will be achieved at the cost of social media time, and amen to that.
Mentorship. I don't really have older people to look to for guidance. I feel quite lost and alone the majority of the time. In the dark hours, I wish I had a supportive, wise grandparent who would stroke my hair and tell me some wise words to heal the wounds. I'm learning to reach out to older friends for at least part of this.
Where I'm at right now: I set up a study buddy. I have a PT, but am struggling with nutrition when outside of home (which I am 70% of the time). Hopefully will have access to another PT and nutritionist at the company I'm part-timing for right now. I have lots of ideas (including this blog!) and am trying to have a low bar for executing on them. I'm in a new relationship, which can either be my biggest obstacle or my biggest support. Let's hope for the latter.
Today is the 23rd of December, 2025. My life is good. But I know it can be much better. 2026 will be the year of progress, fulfilling work and beautiful life. I'm building the momentum now.
Join me.
from Douglas Vandergraph
Ephesians 1 is one of those chapters that quietly rearranges the furniture of a person’s faith if they let it. It doesn’t shout. It doesn’t argue. It simply states reality as if it has always been obvious, and the only reason it feels startling is because we’ve been living as though something else were true. This chapter does not begin with instructions, warnings, or moral corrections. It begins with identity. Not the identity we assemble, defend, or improve, but the identity that already existed before we ever took our first breath. That is what makes Ephesians 1 both comforting and unsettling. Comforting, because it removes the exhausting burden of self-construction. Unsettling, because it leaves no room for the illusion that we are self-made.
Most people approach God as though they are initiating something. They believe faith begins the moment they decide to take God seriously. They believe their story with God starts when they pray sincerely, repent earnestly, or finally get their life together enough to feel worthy of divine attention. Ephesians 1 quietly dismantles that entire framework. It insists that the story did not begin with your awareness of God. It began with God’s awareness of you. And not awareness in a passive sense, but intention. Choice. Purpose. Before you were conscious, before you were moral, before you were capable of belief or doubt, God had already made decisions about you.
Paul opens the letter by grounding everything in blessing, but not the kind of blessing most people chase. This is not situational blessing, circumstantial blessing, or emotional blessing. This is spiritual blessing, which operates independently of your current condition. Paul says we have been blessed with every spiritual blessing in the heavenly realms in Christ. Not some. Not future blessings contingent on performance. Every spiritual blessing. Already. That single sentence challenges the way most believers live. Many spend their lives pleading for what Scripture says has already been given. They pray from lack rather than from inheritance. They ask God to do what God has already declared done.
The reason this is difficult to accept is because spiritual blessings do not announce themselves through external evidence. They do not always translate into comfort, success, or visible progress. They exist at a deeper level, one that shapes reality rather than reacting to it. Ephesians 1 insists that what is most true about you cannot be measured by your circumstances. It is located in God’s eternal intention, not your present experience. This is why so many sincere believers feel perpetually behind, anxious, or uncertain. They are trying to earn what was never meant to be earned.
Paul then moves immediately to the language that makes people uncomfortable: chosen, predestined, adopted. These words have been debated, dissected, defended, and feared for centuries. But Paul does not introduce them as abstract theological concepts. He introduces them as personal assurances. He says we were chosen in Christ before the foundation of the world, not because of anything we would later do, but so that we would be holy and blameless in love. The goal of choosing was not exclusion or elitism. It was transformation rooted in love.
The problem is that many people read “chosen” through the lens of human power dynamics. In human systems, being chosen usually means someone else was rejected. In human systems, choice is often arbitrary, competitive, or unjust. But Paul is not describing a human election. He is describing divine intention. God’s choosing is not reactive. It is creative. It does not respond to human worth; it creates it. You are not chosen because you were impressive. You are impressive because you were chosen.
When Paul says we were predestined for adoption, he is not describing a cold decree written in a cosmic ledger. He is describing relational commitment. Adoption in the ancient world was not sentimental; it was legal, intentional, and irreversible. To adopt someone was to give them your name, your inheritance, and your future. Paul is saying God did not merely tolerate humanity or make room for it. God decided, ahead of time, to bring people into His family with full status, not probationary membership.
This matters because so many believers live like spiritual orphans. They believe God loves them in theory but keeps them at arm’s length in practice. They believe grace covers their past but does not fully secure their future. They believe acceptance is fragile and belonging must be continually proven. Ephesians 1 says none of that is true. Adoption does not depend on performance after the fact. It depends on the will of the one who adopts. Paul explicitly says this was done according to God’s pleasure and will, not ours.
There is a quiet freedom in realizing that God’s pleasure came before your obedience. Not after it. Not because of it. Before it. That means obedience is no longer a desperate attempt to secure love; it becomes a response to love already secured. Many people burn out spiritually because they are trying to maintain a relationship that was never meant to be maintained by effort. Ephesians 1 reframes the entire relationship. God is not waiting to see if you qualify. God already decided to include you.
Paul then ties all of this to grace, not as a vague concept but as a concrete action. He says God freely bestowed grace on us in the Beloved. Grace is not merely forgiveness after failure. Grace is God’s proactive generosity. It is God deciding to give before being asked. Grace is not God lowering standards; it is God absorbing the cost. This grace is not thin or reluctant. Paul says it was lavished on us. Poured out without restraint. Given in abundance.
The idea of lavish grace challenges the scarcity mindset that dominates so much of religious life. Many people believe God gives grace cautiously, worried that too much will make people careless. But Paul says the opposite. God gives grace generously because grace is not fragile. It is powerful. It does not weaken holiness; it produces it. It does not excuse sin; it heals what sin breaks. The problem is not too much grace. The problem is too little understanding of what grace actually does.
Paul then introduces redemption, not as an abstract spiritual term but as a lived reality. He says we have redemption through Christ’s blood, the forgiveness of sins. Redemption means release at a cost. It means freedom purchased, not earned. Forgiveness here is not God deciding to overlook wrongdoing. It is God dealing with it fully. The blood language reminds the reader that reconciliation was not cheap. It was costly. But the cost was paid by God, not demanded from humanity.
This is where many people get stuck. They believe in forgiveness but continue to live as though debt remains. They believe Christ died for sin but still carry shame as if payment is pending. Ephesians 1 insists that forgiveness is not partial. It is complete. If forgiveness is real, then condemnation has no legal standing. If redemption is true, then bondage no longer defines reality. The issue is not whether God has forgiven. The issue is whether we are willing to live as forgiven people.
Paul then says something remarkable. He says God made known to us the mystery of His will. A mystery is not something unknowable; it is something once hidden and now revealed. God’s will is not locked behind esoteric knowledge or spiritual elitism. It has been disclosed. Revealed. Made accessible. And the mystery is this: God intends to bring everything together in Christ, things in heaven and things on earth.
This statement quietly reorients the entire universe. It means history is not random. It means suffering is not meaningless. It means fragmentation is temporary. God’s purpose is integration. Restoration. Reconciliation. The world feels fractured because it is fractured, but Ephesians 1 insists that fragmentation is not the final word. Christ is not merely a personal savior; Christ is the focal point of cosmic restoration.
This matters because many people reduce faith to private spirituality. They believe Christianity is primarily about personal morality or internal peace. Ephesians 1 refuses to shrink the scope. God’s plan is not just to fix individuals. It is to heal creation. To reunite what has been torn apart. To bring coherence where there has been chaos. When you place your faith in Christ, you are not opting out of the world. You are aligning yourself with God’s plan to restore it.
Paul then brings this cosmic vision back to the personal level. He says that in Christ we have obtained an inheritance, having been predestined according to the purpose of the One who works all things according to the counsel of His will. That sentence carries weight. It says God is not improvising. God is not reacting. God is not surprised by history. God is working all things, not some things, toward His purpose.
This does not mean everything that happens is good. It means God is capable of bringing good out of what happens. It means no pain is wasted. No failure is final. No detour is beyond redemption. Many people hear “God’s will” and imagine rigidity or control. Paul presents it as assurance. God’s purpose is steady even when life is not. God’s intention is not fragile, and it does not depend on human consistency.
Paul says we were included in Christ when we heard the message of truth and believed. Inclusion comes through trust, not perfection. Faith here is not intellectual certainty. It is relational reliance. It is saying yes to what God has already done. Belief does not create inclusion; it receives it. And when we believe, Paul says we are sealed with the promised Holy Spirit.
A seal in the ancient world was a mark of ownership, authenticity, and security. It meant something belonged to someone and was protected by their authority. Paul is saying the Spirit is not just a comforting presence. The Spirit is a guarantee. A down payment. Evidence that what God has started will be finished. The Spirit does not enter temporarily, waiting to see how you perform. The Spirit marks you as belonging to God.
This has enormous implications for how people understand spiritual growth. Growth is not about earning God’s continued presence. It is about learning to live in alignment with a presence that is already there. The Spirit is not a reward for maturity; the Spirit is the source of it. Many people wait to feel worthy before trusting God fully. Ephesians 1 says God trusted you with His Spirit before you ever felt worthy.
Paul ends the chapter by explaining how he prays for believers. He does not pray that their circumstances improve. He does not pray that they become more impressive. He prays that they receive wisdom and revelation so they may know God better. He prays that the eyes of their hearts may be enlightened so they can understand the hope of their calling, the riches of their inheritance, and the greatness of God’s power toward those who believe.
This prayer reveals the real problem most believers face. It is not a lack of resources. It is a lack of perception. They do not need more from God; they need to see what they already have. They live beneath their inheritance because they are unaware of it. Paul is asking God to open their inner eyes so reality becomes visible.
He then describes God’s power, not in abstract terms but through resurrection. The same power that raised Christ from the dead is at work in believers. That is not metaphorical. It is not poetic exaggeration. It is a statement of spiritual reality. Resurrection power is not only for the afterlife. It is active now. It is the power that brings life where death has dominated. Hope where despair has settled. Renewal where exhaustion has taken root.
Paul says Christ is seated far above every authority and power, not only in this age but the age to come. That means no system, no ideology, no force ultimately outranks Christ. The chaos of the world is real, but it is not sovereign. Christ is. And God has placed all things under Christ’s feet and appointed Him as head over everything for the church.
This final phrase is easy to miss, but it is stunning. Christ’s authority is exercised for the sake of the church. That does not mean the church controls Christ. It means Christ’s rule benefits those who belong to Him. The church is not an afterthought. It is central to God’s plan. And the church, Paul says, is Christ’s body, the fullness of Him who fills everything in every way.
That sentence deserves more attention than it usually receives. The church is described as the fullness of Christ. Not because the church replaces Christ, but because Christ chooses to express Himself through people. Imperfect people. Fragile people. Ordinary people. God’s plan is not to bypass humanity but to work through it. That means your life matters in ways you may not yet understand.
Ephesians 1 does not ask you to do anything. It asks you to see something. To realize that before you were aware of God, God was already aware of you. Before you were seeking, you were chosen. Before you were obedient, you were adopted. Before you were forgiven, redemption was secured. Before you were strong, power was at work. The chapter does not end with pressure. It ends with assurance.
And assurance changes everything.
What Ephesians 1 ultimately confronts is not bad behavior, weak discipline, or shallow devotion. It confronts misunderstanding. Most spiritual instability is not caused by rebellion but by misalignment. People are trying to live from a place God never asked them to live from. They are striving to become what God already declared them to be. Ephesians 1 gently but firmly pulls the foundation out from under that entire way of thinking.
When Paul speaks about the eyes of the heart being enlightened, he is acknowledging something uncomfortable but true: people can be sincere and still spiritually blind. Not blind to God’s existence, but blind to their position. Blind to what has already been established. Blind to the scale of what God has done. You can believe in Christ and still live as though the verdict is undecided. You can love God and still function as though acceptance is temporary. Paul’s prayer is not for stronger willpower but for clearer vision.
The heart, in biblical language, is the center of perception, not just emotion. It is how a person interprets reality. When the heart’s eyes are dim, everything becomes distorted. Grace feels fragile. Identity feels unstable. God feels distant. But when the heart is enlightened, the same circumstances take on a different meaning. Struggle does not disappear, but it no longer defines you. Failure still hurts, but it no longer condemns you. Waiting still stretches you, but it no longer feels like abandonment.
Paul specifically prays that believers would understand three things: the hope of their calling, the riches of their inheritance, and the greatness of God’s power toward them. Those three areas correspond directly to the three places where most believers struggle the most: the future, their worth, and their ability to endure.
Hope of calling addresses the future. Many people fear the future not because they lack faith, but because they lack clarity. They worry they will miss God’s will, fall behind, or fail permanently. Ephesians 1 reframes calling as something rooted in God’s initiative, not human precision. Your calling is not a fragile path you must perfectly navigate. It is a purpose anchored in God’s intention. You do not have to guess whether God intends to work through your life. That question was settled before you were born.
The riches of inheritance address worth. Paul does not say a modest inheritance, or a conditional inheritance. He says riches. Wealth. Abundance. This inheritance is not measured in material terms, but in belonging, access, and identity. It means you are not a tolerated outsider. You are not a spiritual renter. You are an heir. Many people treat God’s love like a loan they must keep qualifying for. Paul insists it is an inheritance, secured by relationship, not performance.
The greatness of God’s power addresses endurance. People often underestimate what drains them. Life wears people down. Disappointment accumulates. Prayers seem unanswered. Energy fades. Faith becomes quieter, not because it is gone, but because it is tired. Paul does not respond by telling people to try harder. He points them to resurrection power. The same power that raised Christ is not reserved for dramatic miracles; it is available for daily faithfulness.
Resurrection power is not only about life after death. It is about life after loss. Life after failure. Life after disappointment. It is the power that brings movement where things feel stuck. Perspective where things feel confusing. Strength where things feel depleted. Many people believe resurrection power is something they must access through spiritual intensity. Ephesians 1 presents it as something already at work.
This is why Paul emphasizes Christ’s position above every authority and power. He is not trying to impress readers with cosmic hierarchy. He is anchoring their confidence. Whatever feels dominant in your life is not ultimate. Fear is not ultimate. Shame is not ultimate. Systems, trends, cultures, and forces that feel overwhelming are not ultimate. Christ is. And Christ’s authority is not distant. It is exercised on behalf of those who belong to Him.
When Paul says Christ is head over everything for the church, he is saying that Christ’s rule is not abstract. It is relational. The authority that governs the universe is invested in the well-being of Christ’s body. That does not mean believers are immune from hardship. It means hardship does not have the final say. The story is still moving, and Christ is still directing it.
The idea that the church is the fullness of Christ challenges both arrogance and insecurity. It dismantles arrogance by reminding believers they are not the source of power. Christ is. But it dismantles insecurity by reminding them they are not irrelevant. Christ chooses to express Himself through people. Through community. Through imperfect, developing, sometimes struggling believers.
This means your faith matters even when it feels small. Your obedience matters even when it feels unnoticed. Your presence matters even when it feels ordinary. You are not filling time while God does the real work somewhere else. You are part of how God is at work in the world. That does not place pressure on you to be extraordinary. It places meaning on your faithfulness.
Ephesians 1 does not invite you to manufacture confidence. It invites you to rest in clarity. Confidence grows naturally when you understand what is already true. When you know you are chosen, you stop auditioning. When you know you are adopted, you stop hiding. When you know you are redeemed, you stop rehearsing shame. When you know you are sealed, you stop living as though everything is temporary.
This chapter quietly shifts the center of gravity in a person’s faith. God is no longer someone you chase anxiously. God becomes the One who has already acted decisively. Faith becomes less about proving sincerity and more about trusting reality. Obedience becomes less about fear and more about alignment. Growth becomes less about pressure and more about response.
Ephesians 1 teaches you how to locate yourself correctly in the story. You are not at the beginning, hoping God will engage. You are in the middle of a plan that began long before you and will continue long after you. Your role is not to secure God’s favor. Your role is to live in light of it.
That realization does not make faith passive. It makes it grounded. It gives you a place to stand when emotions fluctuate. It gives you language when doubts surface. It gives you stability when circumstances shift. You may not always feel chosen, but you are. You may not always feel powerful, but resurrection power is at work. You may not always feel close to God, but you are sealed by His Spirit.
Before you were ever aware of God, God was already aware of you. Before you were capable of belief, God had already decided to bless. Before you ever asked for forgiveness, redemption was already paid for. Before you ever felt strong enough, power was already moving.
Ephesians 1 does not end with commands because identity comes before instruction. Once you see who you are, the rest of the letter makes sense. Everything Paul will later ask believers to do flows out of what he has already declared to be true. This chapter is the foundation. And foundations are not built to impress; they are built to hold.
If you let it, Ephesians 1 will hold you steady.
Not because life gets easier.
But because you finally understand where you stand.
Watch Douglas Vandergraph’s inspiring faith-based videos on YouTube
Support the ministry by buying Douglas a coffee
Your friend, Douglas Vandergraph
#Ephesians #BibleStudy #ChristianFaith #Grace #IdentityInChrist #Chosen #Redemption #Hope #Scripture #FaithJourney