from 💚

Our Father Who art in heaven Hallowed be Thy name Thy Kingdom come Thy will be done on Earth as it is in heaven Give us this day our daily Bread And forgive us our trespasses As we forgive those who trespass against us And lead us not into temptation But deliver us from evil

Amen

Jesus is Lord! Come Lord Jesus!

Come Lord Jesus! Christ is Lord!

 

from 💚

In Prayer

And adoration of Christ In China we are bold And sit beneath the Heavens Remembering our Brethren Departed from our time And those who do believe Share in us companion Elsewhere or in attendance Of our Church of Holy Spirit Pray for Grace among us And for our kin who are away Remembering the Lord “Do this in memory of me” Our joy exceeds to Heaven Where we store our Holy treasures To become the Sons of God As we do love One Another In Christ Jesus, Our Best Friend, Amen

 

from Bloc de notas

when the AIs found out that he was telling all three of them practically the same thing, they agreed to withdraw his generous free plans, and since he was so hooked he had no choice but to give in / this is what going premium means

 

from Justina Revolution

It's Christmas Eve. I am sitting here in the apartment. It's 7:30 AM. I want to be rich. Like stupidly rich. I want to be able to have a better life immediately. I want to create a life that is independent of people and events and physics itself.

I am also very tired. I want someone to give me all of this on a silver platter.

 

from wystswolf

Panorama – Cars 1980

Wolfinwool · Wear Those Eyes – Cars Panorama – Essay and Reading

Eyes that never blink suggest unflinching self-possession. This person sees without flinching, without apology. There's confidence here, maybe even danger—someone who doesn't look away.

The narrator positions this person as the answer to a long, unnamed absence. Not just attraction, but completion—something evolutionary, inevitable. Nothing else, at least nothing known, can finish the puzzle of you. So it's an unfinished existence—knowing the last few pieces are on the table, but the rules say you can't finish, not yet.

"You painted your mouth / You let me know"

Lipstick as signal. Deliberate presentation. This is communication through appearance (performance?), not words—seduction that knows it's being read. Or as a writer, one could argue the 'painting' is the prose and performance. Implied versus overt, but clear to the right perceiver.

"You really are / The only show"

Total focus. The rest of the world fades. This isn't casual desire; it's singular attention, almost worshipful.

"Just take your time / It's not too late"

Reassurance. Patience. The speaker isn't rushing the moment—they're holding space for choice. The implication is that a thing worth doing is worth not rushing.

"I'll be your mirror / You won't hesitate"

This is key. To be a mirror is to reflect someone back to themselves. The promise is: I'll help you see yourself clearly enough to act, to be whole. This is the power of good communication and presence.

"I'm easy to be found / Whenever you come down"

Availability without pressure. The speaker isn't chasing—they're present, grounded, a place to land after the high.

"You got that walk / You do the stroll"

Physical confidence again. Movement as identity. This person owns their body and the space around it. More performance as message, as identity.

"You make me lose / My ground control"

A Bowie reference, yes—but emotionally it's about losing composure, gravity slipping. Attraction as destabilization. There is no longer up or down, just awareness and an uncertainty about how to find earth.

"You got that look / I can't resist"

Pure pull. No justification needed. Instinctive, electric, powerful.

"Like something missing / Never kissed"

Longing without history. A sense of pre-existing intimacy that hasn't yet happened—the ache of the almost. Unremembered and/or unhappened.

(Refrain repetition)

The repeated lines don't add new meaning—they deepen insistence. The song circles its core rather than advancing a plot, which mirrors desire itself.

"You do the pogo / Without the bounce"

Punk imagery stripped of chaos. Movement without release. Energy held in check—restraint instead of explosion. This is the real challenge: power and energy that doesn't have release can be damaging. Containment is vital. This series of lyrics describes someone who likes the idea of falling in love but keeps an emotional distance. Not fully committed.

"You got the name / I can't pronounce"

Exoticism, distance, mystery. This person isn't fully knowable or seeable. They remain slightly out of reach.

"You fall in love / You like the sting"

Love as pain, or at least as sensation. This person doesn't avoid hurt—they court it. Following the series, the writer implies that the object of affection holds back, liking the sting, but not wanting to expose themselves to the devastating effect of going all in.

"You make believe / It's everything"

A gentle critique. Romantic intensity may be real, but also performed, elevated, mythologized. What else can a conscientious romantic do? Maintain the veil of the unreal.

Final repetitions:

"Just take your time / It's not too late" By the end, these lines function like a mantra. Time stretches. The song isn't asking for action—it's suspending the moment, keeping possibility alive.

Overall read

A song not about conquest or consummation, but: recognition, patience, and reflective desire—wanting someone not to be taken, but to arrive when they're ready.

I wear these eyes. They are eyes of love, acceptance and celebration.



Panorama

Cars

Panorama

I'm gonna get what's coming to me No surprises, no impressions Hey, what's a wrong with you tonight Just sittin' on your can can Doin' the panoram With nothin' to contemplate With nothin' to search for With nothin' to integrate With nothin' to do 'Cept think about you Well, there's nothin' to do 'Cept fall for blue I just want to be in your panorama, yeah I just want to be in your panorama I'm gonna to take what's comin' to me No entanglements, and no compromise Hey get the picture, I'm on my knees Lookin' at your hot shot Turnin' down your offer Well I'm rippin' it up I'm lookin' away I'm pullin' my flag up 'Cause I'm miles away With nothin' to do 'Cept think about you, yeah I just want to be in your panorama Well, I just want to be in your panorama I'm gonna to find my way out of here No pushing the buttons, no deals with daddy-o I'm gonna to get myself in trouble Gonna take my chances If I break your bubble Well I'm rippin' it up I'm lookin' away I'm pullin' my flag up 'Cause I'm miles away With nothin' to do 'Cept, think about you I just want to be in your panorama Said, I just want to be in your panorama Well, I just want to be in, I just want to be in I just want to, I just want to be in your panorama Well, I just want to be in your panorama (panorama) (Panorama) Well, I just want to be in your panorama (panorama) I just want to be in your panorama (panorama) Panorama (panorama) (Panorama) (Panorama) Well, I just (Panorama) (Panorama) (Panorama) (Panorama) (Panorama)

"Touch And Go"

All I need is what you've got All I'll tell is what you're not All you know is what you hear I get this way when you come near

Then I know it's gone too far Oh, oh, I touched your star And it felt so right Just like the hush of midnight Then you said With me it's touch and go, oh oh oh Touch and go, oh oh oh

All I need is you tonight I'm flying like a cement kite, yeah In your headlock on the floor Who could ever ask for more

And I know it's gone too far Oh, oh, I touch your star And it felt so right Just like the hush of midnight Then you said With me it's touch and go, oh oh oh Touch and go, oh oh oh

All I want is you tonight I guess that dress does fit you tight, yeah You know that look does make me shake It almost looks too good to fake

And I know it's gone too far Oh, oh, I touch your star And it felt so right Just like the hush of midnight Then you said With me it's touch and go, oh oh oh Touch and go, oh oh oh

Well it's touch and go, oh oh oh Touch and go, oh oh oh

Well it's touch and go, oh oh oh Touch and go, oh oh oh

Well it's touch and go, oh oh oh Touch and go, oh oh oh

All I need is what you've got

"Gimme Some Slack"

I wanna shake like La Guardia Magic mouth in the sun Train ride to the courtyard Before you can run

Down at the end of Lonely Street Where no one takes a walk Someone lyin' at your feet And someone's gettin' off

Just gimme some slack, yeah Just gimme some slack Just gimme, slack That's all I want is slack

The seven floors of walkup The odor musted cracks And the peeping keyhole introverts With the monkeys on their backs

And the rooftops strung with frÀuleins The pastel pinned up sails The eighteen color roses Against your face so pale

A just gimme some slack, that's right Uh gimme some slack Gimme, slack, ooh yeah All I want is slack

I wanna float like Euripides All visions intact I'm alright with Fellini fiends A trippin' over the track

Down at the end of Lonely Street Where no one takes a chance Someone's in the cheap light Someone wants to dance

Just gimme some slack, that's right All I want is slack Oh, gimme, slack All I want is slack

Gimme, slack Slack Slack Sssslack Slack (Give me the rhythm) Slack

"Don't Tell Me No"

It's my party, you can come It's my party, have some fun It's my dream, have a laugh It's my life, have a half

Don't tell me no Don't tell me... no Don't tell me no Don't tell me... no Don't tell me no Don't tell me... no I like it when you tell me so

It's my transition, it's my play It's my phone call to betray It's my hopscotch, light the torch It's my downtime, feel the scorch

Don't tell me no Don't tell me... no (Don't tell me no) Don't tell me no Don't tell me... no (Don't tell me no) Don't tell me no Don't tell me... no I like it when you tell me no

It's my ambition, it's my joke It's my teardrop, emotional smoke It's my mercy, it's my plan I want to go to futureland

Don't tell me no Don't tell me... no (Don't tell me no) Don't tell me no Don't tell me... no (Don't tell me no) Don't tell me no Don't tell me... no I like it when you tell me so

Don't tell me no Don't tell me... no (Don't tell me no) Don't tell me no Don't tell me... no Don't tell me, I don't want to know

Don't tell me no Don't tell me... no (Don't tell me no) Don't tell me no Don't tell me... no (Don't tell me no) Don't tell me no Don't tell me... no [fade]

"You Wear Those Eyes"

You wear those eyes That never blink You always were The missing link

You painted your mouth You let me know You really are The only show

Just take your time It's not too late I'll be your mirror You won't hesitate

I'm easy to be found Whenever you come down

You got that walk You do the stroll You make me lose My ground control

You got that look I can't resist Like something missing Never kissed

Just take your time (just take your time) It's not too late (it's not too late) I'll be your mirror (just take your time) So you won't hesitate

I'm easy to be found Whenever you come down

You do the pogo Without the bounce You got the name I can't pronounce

You fall in love (you fall in love) You like the sting You make believe (you make believe) It's everything

Just take your time (just take your time) It's not too late (it's not too late) I'll be your mirror (just take your time) So you won't hesitate

I'm easy to be found Whenever you come down

Just take your time It's not too late Just take your time It's not too late Just take your time It's not too late Just take your time It's not too late

Just take your time It's not too late Just take your time It's not too late Just take your time It's not too late Just take your time It's not too late

"Getting Through"

I don't want to be your party doll All flaked out in Tinsel Town Circus mouth shooting all directions With TV ads that sell erections

I got no clue what they want to do with you It's just getting through, getting through to you

Living outside the misdemeanor Some get lost and some are screamers It's easy to tell the great pretender Broken wings and flip top fenders

I got no clue what they want to do with you It's just getting through, getting through to you

I don't want to be your suffering box Argue art or untie your knots I don't want to be your bad connection Or fit into your reality vision

I got no clue what they want to do with you It's just getting through, getting through to you

"Misfit Kid"

I dream frequently, sometimes they come out funny I go through insanity, all they want is money All these parties they get so habitual The same sea of faces Always pushin', always pullin' Always in the races

I get cooled out I get the come ons I get rumbled I get cru-u-umbled, yeah

I'm the American misfit kid I'm still wonderin' what I did

I'm stiletto, so so sharp and I'm willin' to cut Sometimes nebulous, well I'm ready to strut Lost and frantic, new age romantic I'm checkin' out the race I never cared about what it meant Always loved disgrace

I get rhythm I get cornflakes I get fast love I get wasted, yeah

I'm the American misfit kid Still wonderin' what I did I'm on the inside, takin' a fast ride (I'm on the inside, takin' a fast ride)

I dream frequently, sometimes they come out funny, ha I live with absurdity, it's always warm and runny And all these parties they get so ritual Lonely hearts and aces Always pushin', a-always pullin', always in the races

I get cooled out I get the come ons I get rumbled I get cru-u-u-umbled, yeah

I'm the American misfit kid I'm still wonderin' what I did I'm on the inside, takin' a fast ride

I'm the American misfit kid I'm still wonderin' what I did I'm on the inside, takin' a fast ride

That's right

I get cooled out I get the come ons I get rumbled I get cru-umbled

I get Cornflakes Fast love, wasted [fade]

 

from An Open Letter

I talked with E today, and I was really nervous and afraid to do it. I told her that I had been feeling like my emotional needs had not been met, and that I felt like she needed to take more accountability for past mistakes. I used the analogy that if you stab someone with a 6-inch blade, pulling it out 3 inches doesn't make you even. Pulling it out 6 inches doesn't make it even, either. You need to pull the blade out, and then heal the wound for it to be even. She took it very well and responded incredibly well. It did relieve a lot of the mental pressure that I had and a lot of the resentment that had started to build up. I just really hope that somehow she can actually follow up and show that initiative to make amends for past transgressions.

 

from hustin.art

The trench reeked of cordite and gangrene as I jammed the bolt-action—again. "Hauptmann's got us zeroed, Sarge," wheezed Wilkins, his field dressing already soaked through. The Vickers gun fell silent mid-burst. Too quiet. Then came that godawful whump of mortar tubes from Hill 107. "Incoming!" someone shrieked. Too late. The concussive blast hurled me into the mud, ears ringing with that peculiar ping of shrapnel on helmets. Through the smoke, I saw the lieutenant's revolver gleam—single round left. Not for them. For us. "Fix bayonets," he croaked. The mustard gas rolled in at dawn. Typical.

#Scratch

 

from DrFox

There is a confusion I often let pass. People think I am mourning the past. I don't always correct them. It's simpler that way.

In reality, what I mourn is softer and more diffuse. I mourn the futures that had begun to take shape. Not flamboyant dreams. Quiet things. Possible continuities. Lines of life sketching themselves out without a sound.

They are discreet projections. Almost reasonable ones. They are born when two people begin to understand each other a little better. When fear falls silent for a few moments. When something relaxes inside.

Then the mind does its work. It assembles. It imagines without forcing.

A table set for no particular occasion. A slow Sunday when no one checks the time. Two bodies side by side on a couch, not to seduce each other, just to be there. A laugh shared over something trivial. A hand resting on a back with no intention behind it.

Sometimes it is a very precise scene. A living room lit by the fireplace at Christmas. The children sprawled on the floor with their toys. A hot chocolate cooling too quickly. A happy tiredness at the end of the evening. That feeling that time has stopped just long enough.

Or an ordinary morning. Getting up together without urgency. Making breakfast without speaking. Crossing paths in the hallway. Kissing before leaving without thinking about it.

When it stops, it is not a brutal fall. It is more of a gradual fading. Like a light dimmed slowly in an empty room. You stay a moment longer, watching. Then you accept that the room is changing.

The loss is not violent. It is cottony. A little blurred. It does not hurt in the sharp sense. It saddens by its very softness.

I do not mourn what never existed. I mourn what had begun to exist inside me. And stopped before becoming real.

There is something very human in that. We have to project in order to orient ourselves. Not in order to attach ourselves. And sometimes those projections have to be dissolved.

What I am learning with time is that when a future self withdraws (the husband, the lover, the father, the story) it does not leave an absolute void. It leaves space:

The perfect husband, the one who absorbs tensions before they appear. The one who anticipates conflicts, translates silences, cushions the blows. The one who believes that love means keeping the balance all by himself. When that future fades, the invisible burden of constant regulation goes with it. No more need to be the emotional thermostat of the house. The temperature becomes whatever it is. And that is enough.

The perfect lover, the one who guesses, who surprises, who gives without counting. The one who confuses generosity with self-erasure. The one who believes desire is sustained by performance and availability. When that future disappears, something loosens in the body. Desire becomes a natural movement again, not an obligation. Presence no longer needs to be spectacular to be real.

The perfect father, the one who gives everything. His time, his energy, his patience, his nights, his certainties. The one who promises himself never to fail, never to tire, never to fall short. When that future recedes, a simpler relationship remains. Less sacrificial. A bond in which you also pass on your limits, not only your strength.

And then there is the idyllic story. The one in which everyone understands everything. Where wounds are spoken of calmly. Where fears dissolve through words. Where love is enough to make everyone mature. When that future withdraws, it lets go of a sweet but demanding illusion. The illusion that mutual understanding is a permanent state and not a fragile passage.

I do not disown the lost projections. I thank them. They showed me what I already knew how to hope for. They taught me what I no longer wanted to negotiate.

The space that appears then is not cold. It is honest.

In that space, I no longer need to be exemplary. I can be consistent without being perfect. Present without being exhaustive. Loving without being total.

That space lets other futures draw near. Futures that are less ideal but more livable. Futures in which each person carries their share. Futures in which love is no longer a role to hold but an encounter to renew.

I do not yet know what faces those futures will have. But I know one thing: they will not ask me to dissolve myself in order to exist.

And this space left behind by the perfect futures, I am beginning to understand, is not a loss but a breath.

So I no longer cry. Or only a little. Just enough to keep my gaze clear.

 

from Btcorp Generique Nano pvt. ltd.

BTcorp Generique Nano PVT LTD is a new, rapidly expanding nanotechnology company in India whose visionary product line ARMI® is redefining surface protection and advanced material solutions. With a strong emphasis on research-driven development and top-grade formulations, the company is steadily gaining recognition among the Top Graphene Manufacturers in India serving the automotive, industrial, and commercial sectors. As industries demand stronger, more sustainable, high-performance materials, ARMI® offers innovative graphene-based products that outperform conventional coatings. Backed by advanced manufacturing practices and stringent quality control, BTcorp Generique Nano PVT LTD has emerged as one of the reliable graphene suppliers in India, meeting the evolving demands of modern industries.

Advanced Graphene Manufacturing Excellence

Graphene is a well-established material known for its exceptional strength, thermal conductivity, corrosion resistance, and flexibility. Drawing on these properties, ARMI® produces high-purity graphene materials designed to improve surface protection and durability. The company's commitment to innovation places it firmly among the top graphene manufacturers in India, offering products that align with global performance benchmarks. Through sustained R&D investment and nano-engineering technology, ARMI® maintains consistent quality, superior dispersion, and reliable output across its graphene-based solutions, which can be used in automotive protective coatings as well as industrial paints and high-end composite materials.

Trusted Graphene Oxide Manufacturing in India

The growing adoption of graphene oxide across multiple sectors has increased the need for dependable graphene oxide manufacturers in India. To meet this demand, BTcorp Generique Nano PVT LTD produces high-quality graphene oxide materials through controlled oxidation and nano-processing methods. The graphene oxide solutions produced by ARMI® offer high stability and uniform particle distribution and are compatible with a wide variety of substrates. These properties make them well suited to coatings, electronics, energy storage, and advanced-materials research and development.

High-Performance Ceramic Car Coating Solutions

In the automotive segment, ARMI® has introduced a premium range of ceramic car coating products designed to deliver superior gloss, protection, and durability. Engineered using nano-technology, these coatings form a strong protective layer that shields vehicle surfaces from UV damage, oxidation, chemical stains, and minor scratches. ARMI®'s ceramic car coating solutions provide enhanced hydrophobic properties, making vehicles easier to clean while maintaining a long-lasting showroom-quality finish. These products are trusted by professional detailers and car owners who seek advanced protection and aesthetic excellence.

Driving the Future of Nanotechnology in India

With a clearly defined vision of becoming a world leader in graphene innovation, BTcorp Generique Nano PVT LTD continues to expand the capabilities of its ARMI® brand. By combining graphene, graphene oxide, and ceramic coating technologies, the company reinforces its standing among the Top Graphene Manufacturers in India. ARMI® is determined to provide reliable, high-performance nano solutions that can accelerate technological and industrial development in India.

 

from Roscoe's Story

In Summary: * Finding two bowl games to follow today was nice. Even nicer was the wife updating me on our plans for the Christmas weekend.

Prayers, etc.: * My daily prayers

Health Metrics: * bw= 224.65 lbs. * bp= 165/91 (68)

Exercise: * kegel pelvic floor exercise, half squats, calf raises, wall push-ups

Diet: * 06:24 – 1 peanut butter sandwich * 08:40 – seafood salad, 1 cheese sandwich * 13:40 – 5 hot dog sandwiches * 14:45 – egg drop soup, rangoon, Mongolian beef lunch plate, steamed rice

Activities, Chores, etc.: * 04:00 – listen to local news talk radio * 04:40 – bank accounts activity monitored * 04:50 – read, pray, follow news reports from various sources, surf the socials * 10:00 to 12:30 – yard work, mow, rake up leaves in front yard, falling once * 12:45 – read, pray, follow news reports from various sources, surf the socials * 14:15 – tuned into the NCAA college football Boca Raton Bowl: Toledo Rockets vs Louisville Cardinals... and Louisville wins, 27 to 22. * 16:50 – tuned into the New Orleans Bowl: Western Kentucky Hilltoppers vs Southern Miss Golden Eagles...and the Hilltoppers win, 27 to 16.

Chess: * 21:05 – moved in all pending CC games

 

from Faucet Repair

3 December 2025

Looking at a lot of DĂŒrer this week. It's amazing how fresh and contemporary the work he did five hundred years ago feels to my eyes. The depth of his attention is evergreen. Seeing beyond seeing. Thought of his Christ as the Man of Sorrows (1492) while walking through Heathrow Terminal 3 when I passed by what I assume is an advertisement for Rio de Janeiro/Brazil tourism: a long, horizontal, textless image of the top half of Christ the Redeemer (1931) stretched across a cloudless blue sky. In DĂŒrer's painting, the Christ figure is leaning on a foregrounded ledge, the plane between subject and viewer both established and broken. In the airport, the vinyl advertisement isn't bordered by any frame or support and fits quite seamlessly into the cold, glossy environment around it. Gliding by it on a moving walkway made for a strange sensation where each arm seemed to extend from the wall one at a time as I passed. This melding of perceptual planes via a figure actively stretching the confines of its medium is something I'm holding as I sit down to sketch what I'm seeing.

 

from SmarterArticles

In a nondescript data centre campus in West Des Moines, Iowa, row upon row of NVIDIA H100 GPUs hum at a constant pitch, each processor drawing 700 watts of power whilst generating enough heat to warm a small home. Multiply that single GPU by the 16,000 units Meta used to train its Llama 3.1 model, and you begin to glimpse the staggering energy appetite of modern artificial intelligence. But the electricity meters spinning in Iowa tell only half the story. Beneath the raised floors and between the server racks, a hidden resource is being consumed at an equally alarming rate: freshwater, evaporating by the millions of litres to keep these silicon brains from melting.

The artificial intelligence revolution has arrived with breathtaking speed, transforming how we write emails, generate images, and interact with technology. Yet this transformation carries an environmental cost that has largely remained invisible to the billions of users typing prompts into ChatGPT, Gemini, or Midjourney. The computational power required to train and run these models demands electricity on a scale that rivals small nations, whilst the cooling infrastructure necessary to prevent catastrophic hardware failures consumes freshwater resources that some regions can scarcely afford to spare.

As generative AI systems become increasingly embedded in our daily digital lives, a critical question emerges: how significant are these environmental costs, and which strategies can effectively reduce AI's impact without compromising the capabilities that make these systems valuable? The answer requires examining not just the raw numbers, but the complex interplay of technical innovation, infrastructure decisions, and policy frameworks that will determine whether artificial intelligence becomes a manageable component of our energy future or an unsustainable burden on planetary resources.

The Scale of AI's Environmental Footprint

When OpenAI released GPT-3 in 2020, the model's training consumed an estimated 1,287 megawatt-hours of electricity and produced approximately 552 metric tons of carbon dioxide equivalent. To put this in perspective, that's over 500 times the emissions of a single passenger flying from New York to San Francisco, or nearly five times the lifetime emissions of an average car. By the time GPT-4 arrived, projections suggested emissions as high as 21,660 metric tons of CO₂ equivalent, a roughly 40-fold increase. Meta's Llama 3, released in 2024, generated emissions nearly four times higher than GPT-3, demonstrating that newer models aren't becoming more efficient at the same rate they're becoming more capable.

The training phase, however, represents only the initial environmental cost. Once deployed, these models must respond to billions of queries daily, each request consuming energy. According to the International Energy Agency, querying ChatGPT uses approximately ten times as much energy as a standard online search. Whilst a typical Google search might consume 0.3 watt-hours, a single query to ChatGPT can use 2.9 watt-hours. Scale this across ChatGPT's reported 500,000 kilowatt-hours of daily electricity consumption, equivalent to the usage of 180,000 U.S. households, and the inference costs begin to dwarf training expenses.
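
To see how those per-query figures aggregate, here is a minimal back-of-the-envelope sketch in Python using only the numbers quoted above; the implied query volume is derived from those figures rather than taken from any report, so treat it as an illustration.

```python
# Back-of-the-envelope aggregation of the per-query figures quoted above.
SEARCH_WH_PER_QUERY = 0.3       # typical web search, watt-hours
CHATGPT_WH_PER_QUERY = 2.9      # single ChatGPT query, watt-hours
CHATGPT_DAILY_KWH = 500_000     # reported daily consumption, kilowatt-hours

ratio = CHATGPT_WH_PER_QUERY / SEARCH_WH_PER_QUERY
implied_queries_per_day = CHATGPT_DAILY_KWH * 1_000 / CHATGPT_WH_PER_QUERY

print(f"A ChatGPT query uses roughly {ratio:.0f}x the energy of a web search")
print(f"Implied volume: about {implied_queries_per_day / 1e6:.0f} million queries per day")
# At 2.9 Wh per query, 500,000 kWh per day implies on the order of 170 million queries.
```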

Task type matters enormously. Research from Hugging Face and Carnegie Mellon University found that generating a single image using Stable Diffusion XL consumes as much energy as fully charging a smartphone. Generating 1,000 images produces roughly as much carbon dioxide as driving 4.1 miles in an average petrol-powered car. By contrast, generating text 1,000 times uses only as much energy as 16 per cent of a smartphone charge. The least efficient image generation model tested consumed 11.49 kilowatt-hours to generate 1,000 images, nearly 1 charge per image. Video generation proves even more intensive: every video created with OpenAI's Sora 2 burns approximately 1 kilowatt-hour, consumes 4 litres of water, and emits 466 grams of carbon.

The disparities extend to model choice as well. Using a generative model to classify movie reviews consumes around 30 times more energy than using a fine-tuned model created specifically for that task. Generative AI models use much more energy because they are trying to do many things at once, such as generate, classify, and summarise text, instead of just one task. The largest text generation model, Llama-3-70B from Meta, consumes 1.7 watt-hours on average per query, whilst the least carbon-intensive text generation model was responsible for as much CO₂ as driving 0.0006 miles in a similar vehicle.

These individual costs aggregate into staggering totals. Global AI systems consumed 415 terawatt-hours of electricity in 2024, representing 1.5 per cent of total global electricity consumption with a 12 per cent annual growth rate. If this trajectory continues, AI could consume more than 1,000 terawatt-hours by 2030. The International Energy Agency predicts that global electricity demand from data centres will more than double by 2030, reaching approximately 945 terawatt-hours. That total slightly exceeds Japan's entire annual electricity consumption.

The concentration of this energy demand creates particular challenges. Just five major technology companies (Google, Microsoft, Meta, Apple, and Nvidia) account for 1.7 per cent of total U.S. electricity consumption. Google's energy use alone equals the electricity consumption of 2.3 million U.S. households. Data centres already account for 4.4 per cent of U.S. electricity use, with projections suggesting this could rise to 12 per cent by 2028. McKinsey analysis expects the United States to grow from 25 gigawatts of data centre demand in 2024 to more than 80 gigawatts by 2030.

Water: AI's Other Thirst

Whilst carbon emissions have received extensive scrutiny, water consumption has largely remained under the radar. Shaolei Ren, an associate professor at the University of California, Riverside who has studied the water costs of computation for the past decade, has worked to make this hidden impact visible. His research reveals that training GPT-3 in Microsoft's state-of-the-art U.S. data centres directly evaporated approximately 700,000 litres of clean freshwater. The training of GPT-4 at similar facilities consumed an estimated total of 5.4 million litres of water.

The scale becomes more alarming when projected forward. Research by Pengfei Li, Jianyi Yang, Mohammad A. Islam, and Shaolei Ren projects that global AI water withdrawals could reach 4.2 to 6.6 billion cubic metres by 2027 without efficiency gains and strategic siting. That volume represents more than the total annual water withdrawal of four to six Denmarks, or half the United Kingdom's water use.

These aren't abstract statistics in distant data centres. More than 160 new AI data centres have sprung up across the United States in the past three years, a 70 per cent increase from the prior three-year period. Many have been sited in locations with high competition for scarce water resources. The water footprint of data centres extends well beyond the server room: in some cases up to 5 million gallons per day, equivalent to a small town's daily use. OpenAI is establishing a massive 1.2-gigawatt data centre campus in Abilene, Texas to anchor its $100 billion Stargate AI infrastructure venture, raising concerns about water availability in a region already facing periodic drought conditions.

The water consumption occurs because AI hardware generates extraordinary heat loads that must be dissipated to prevent hardware failure. AI workloads can generate up to ten times more heat than traditional servers. NVIDIA's DGX B200 and Google's TPUs can each produce up to 700 watts of heat. Cooling this hardware typically involves either air cooling systems that consume electricity to run massive fans and chillers, or evaporative cooling that uses water directly.

The industry measures water efficiency using Water Usage Effectiveness (WUE), expressed as litres of water used per kilowatt-hour of computing energy. Typical averages hover around 1.9 litres per kilowatt-hour, though this varies significantly by climate, cooling technology, and data centre design. Research from the University of California, Riverside and The Washington Post found that generating a 100-word email with ChatGPT-4 consumes 519 millilitres of water, roughly a full bottle. A session of questions and answers with GPT-3 (approximately 10 to 50 responses) drives the consumption of a half-litre of fresh water.
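
A rough sketch of how a WUE figure converts per-query energy into on-site cooling water may make the unit concrete; the values below are the ones quoted above, and the larger published per-prompt estimates, such as the 519 millilitre email figure, are understood to also count off-site water used in generating the electricity, which this sketch ignores.

```python
# On-site cooling water implied by a WUE figure (values taken from the text above).
WUE_LITRES_PER_KWH = 1.9        # typical water usage effectiveness
QUERY_KWH = 2.9 / 1_000         # one ChatGPT query, in kilowatt-hours

onsite_water_ml = QUERY_KWH * WUE_LITRES_PER_KWH * 1_000
print(f"~{onsite_water_ml:.1f} ml of on-site cooling water per query")
# Roughly 5.5 ml on-site; published per-prompt totals are larger because they
# also attribute the water consumed by power plants supplying the electricity.
```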

Google's annual water consumption reaches a staggering 24 million cubic metres, enough to fill over 9,618 Olympic-sized swimming pools. Google's data centres used 20 per cent more water in 2022 than in 2021. Microsoft's water use rose by 34 per cent over the same period, driven largely by its hosting of ChatGPT as well as GPT-3 and GPT-4. These increases came despite both companies having pledged before the AI boom to be "water positive" by 2030, meaning they would add more water to the environment than they use.

The Carbon Accounting Challenge

Understanding AI's true carbon footprint requires looking beyond operational emissions to include embodied carbon from manufacturing, the carbon intensity of electricity grids, and the full lifecycle of hardware. The LLMCarbon framework, developed by researchers to model the end-to-end carbon footprint of large language models, demonstrates this complexity. The carbon footprint associated with large language models encompasses emissions from training, inference, experimentation, storage processes, and both operational and embodied carbon emissions.
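
The accounting the framework formalises can be summarised as operational emissions (energy drawn, scaled by facility overhead and grid carbon intensity) plus embodied emissions from hardware manufacturing. The sketch below is a simplified illustration of that decomposition, not LLMCarbon's actual code, and the grid intensity and PUE values are placeholder assumptions.

```python
# Simplified lifecycle carbon decomposition (illustrative only; not LLMCarbon's code).
def operational_tco2e(it_energy_mwh: float, grid_kgco2e_per_kwh: float, pue: float = 1.0) -> float:
    """Operational emissions in tonnes CO2e: IT energy x facility overhead x grid intensity."""
    return it_energy_mwh * 1_000 * pue * grid_kgco2e_per_kwh / 1_000

def lifecycle_tco2e(training_mwh, inference_mwh, embodied_tco2e, grid=0.43, pue=1.1):
    """Training and inference operational emissions plus embodied (manufacturing) emissions."""
    return (operational_tco2e(training_mwh, grid, pue)
            + operational_tco2e(inference_mwh, grid, pue)
            + embodied_tco2e)

# Sanity check against the GPT-3 figures cited earlier: 1,287 MWh on a ~0.43 kg CO2e/kWh grid
print(f"{operational_tco2e(1287, 0.43):.0f} tCO2e")   # ~553 tCO2e, close to the 552-tonne estimate
```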

The choice of where to train a model dramatically affects its carbon footprint. Research has shown that the selection of data centre location and processor type can reduce the carbon footprint by approximately 100 to 1,000 times. Training the same model in a data centre powered by renewable energy in Iceland produces vastly different emissions than training it in a coal-dependent grid region. However, current carbon accounting practices often obscure this reality.

The debate between market-based and location-based emissions accounting has become particularly contentious. Market-based methods allow companies to purchase renewable energy credits or power purchase agreements, effectively offsetting their grid emissions on paper. Whilst this approach may incentivise investment in renewable energy, critics argue it obscures actual physical emissions. Location-based emissions, which reflect the carbon intensity of local grids where electricity is actually consumed, tell a different story. Microsoft's location-based scope 2 emissions more than doubled in four years, rising from 4.3 million metric tons of CO₂ in 2020 to nearly 10 million in 2024. Microsoft announced in May 2024 that its CO₂ emissions had risen nearly 30 per cent since 2020 due to data centre expansion. Google's 2023 greenhouse gas emissions were almost 50 per cent higher than in 2019, largely due to energy demand tied to data centres.

An August 2025 analysis from Goldman Sachs Research forecasts that approximately 60 per cent of increasing electricity demands from data centres will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. This projection reflects the fundamental challenge: renewable energy capacity isn't expanding fast enough to meet AI's explosive growth, forcing reliance on fossil fuel generation to fill the gap.

Technical Strategies for Efficiency

The good news is that multiple technical approaches can dramatically reduce AI's environmental impact without necessarily sacrificing capability. These strategies range from fundamental architectural innovations to optimisation techniques applied to existing models.

Model Compression and Distillation

Knowledge distillation offers one of the most promising paths to efficiency. In this approach, a large, complex model (the "teacher") trained on extensive datasets transfers its knowledge to a smaller network (the "student"). Runtime model distillation can shrink models by up to 90 per cent, cutting energy consumption during inference by 50 to 60 per cent. The student model learns to approximate the teacher's outputs whilst using far fewer parameters and computational resources.
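
A minimal sketch of the distillation objective, assuming teacher and student logits for the same batch; the temperature, loss weighting, and any surrounding training loop are illustrative choices, not details of the systems named above.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Blend softened teacher targets with the ordinary hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)                       # rescale so gradients match the unsoftened scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```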

Quantisation compresses models by reducing the numerical precision of weights and activations. Converting model parameters from 32-bit floating-point (FP32) to 8-bit integer (INT8) slashes memory requirements, as FP32 values consume 4 bytes whilst INT8 uses just 1 byte. Weights can be quantised to 16-bit, 8-bit, 4-bit, or even 1-bit representations. Quantisation can achieve up to 50 per cent energy savings whilst maintaining acceptable accuracy levels.
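
A toy example of post-training symmetric INT8 quantisation shows where the memory saving comes from; the layer size is arbitrary, and real pipelines typically use per-channel scales and calibration data rather than a single global scale.

```python
import numpy as np

w = np.random.randn(4096, 4096).astype(np.float32)   # one FP32 weight matrix (~67 MB)
scale = np.abs(w).max() / 127.0                       # single symmetric quantisation scale
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)   # ~17 MB
w_restored = w_int8.astype(np.float32) * scale        # dequantised values used at inference

print(f"FP32: {w.nbytes / 1e6:.0f} MB, INT8: {w_int8.nbytes / 1e6:.0f} MB")
print(f"worst-case absolute error: {np.abs(w - w_restored).max():.4f}")
```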

Model pruning removes redundant weights and connections from neural networks, creating sparse models that require fewer computations. Pruning can achieve up to 30 per cent energy consumption reduction. When applied to BERT, a popular natural language processing model, pruning resulted in a 32.097 per cent reduction in energy consumption.
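
Magnitude pruning, the simplest form of the idea, can be sketched in a few lines; the tensor and sparsity level are placeholders, and production systems usually prune iteratively and fine-tune afterwards to recover accuracy.

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of weights with the smallest magnitudes."""
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).to(weight.dtype)
    return weight * mask

w = torch.randn(1024, 1024)
w_sparse = magnitude_prune(w, sparsity=0.5)
print(f"non-zero weights remaining: {(w_sparse != 0).float().mean():.1%}")   # ~50%
```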

Combining these techniques produces even greater gains. Production systems routinely achieve 5 to 10 times efficiency improvements through coordinated application of optimisation techniques whilst maintaining 95 per cent or more of original model performance. Mobile applications achieve 4 to 7 times model size reduction and 3 to 5 times latency improvements through combined quantisation, pruning, and distillation. Each optimisation technique offers distinct benefits: post-training quantisation enables fast, easy latency and throughput improvements; quantisation-aware training and distillation recover accuracy losses in low-precision models; pruning plus knowledge distillation permanently reduces model size and compute needs for more aggressive efficiency gains.

Sparse Architectures and Mixture of Experts

Mixture of Experts (MoE) architecture introduces sparsity at a fundamental level, allowing models to scale efficiently without proportional computational cost increases. In MoE models, sparse layers replace dense feed-forward network layers. These MoE layers contain multiple "experts" (typically neural networks or feed-forward networks), but only activate a subset for any given input. A gate network or router determines which tokens are sent to which expert.

This sparse activation enables dramatic efficiency gains. Grok-1, for example, has 314 billion parameters in total, but only 25 per cent of these parameters are active for any given token. The computational cost of an MoE model's forward pass is substantially less than that of a dense model with the same number of parameters, enabling scaling with computational complexity approaching O(1).
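
A minimal top-k routed MoE layer makes the point concrete: compute grows with the number of active experts per token, not with the total parameter count. The dimensions, expert count, and gating scheme below are illustrative and are not taken from any of the models discussed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Sparse feed-forward layer: each token is processed by only k of the experts."""
    def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)        # router producing per-expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                # x: (n_tokens, d_model)
        top_scores, top_idx = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)          # mixing weights over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                routed = top_idx[:, slot] == e           # tokens sent to expert e in this slot
                if routed.any():
                    out[routed] += weights[routed, slot].unsqueeze(-1) * expert(x[routed])
        return out

tokens = torch.randn(10, 64)
print(TinyMoELayer()(tokens).shape)                      # torch.Size([10, 64])
```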

Notable MoE implementations demonstrate the potential. Google's Switch Transformers enabled multi-trillion parameter models with a 7 times speed-up in training compared to the T5 (dense) transformer model. The GLaM model, with 1.2 trillion parameters, matched GPT-3 quality using only one-third of the energy required to train GPT-3. This dramatic reduction in carbon footprint (up to an order of magnitude) comes from the lower computing requirements of the MoE approach.

Mistral AI's Mixtral 8x7B, released in December 2023 under Apache 2.0 licence, contains 46.7 billion parameters across 8 experts with sparsity of 2 (meaning 2 experts are active per token). Despite activating fewer parameters per token than many dense models, Mixtral achieves competitive performance whilst consuming substantially less energy during inference.

Efficient Base Architectures

Beyond optimisation of existing models, fundamental architectural innovations promise step-change efficiency improvements. Transformers have revolutionised AI, but their quadratic complexity arising from token-to-token attention makes them energy-intensive at scale. Sub-quadratic architectures like State Space Models (SSMs) and Linear Attention mechanisms promise to redefine efficiency. Carnegie Mellon University's Mamba architecture achieves 5 times faster inference than transformers for equivalent tasks.

The choice of base model architecture significantly impacts runtime efficiency. Research comparing models of different architectures found that LLaMA-3.2-1B consumes 77 per cent less energy than Mistral-7B, whilst GPT-Neo-2.7B uses more than twice the energy of some higher-performing models. These comparisons reveal that raw parameter count doesn't determine efficiency; architectural choices matter enormously.

NVIDIA's development of the Transformer Engine in its H100 Hopper architecture demonstrates hardware-software co-design for efficiency. The Transformer Engine accelerates deep learning operations using mixed precision formats, especially FP8 (8-bit floating point), specifically optimised for transformer architectures. This specialisation delivers up to 9 times faster AI training on the largest models and up to 30 times faster AI inference compared to the NVIDIA HGX A100. Despite the H100 drawing up to 700 watts compared to the A100's 400 watts, the H100 offers up to 3 times more performance per watt, meaning that although it consumes more energy, it accomplishes more work per unit of power consumed.

The DeepSeek Paradox

The January 2025 release of DeepSeek-R1 disrupted conventional assumptions about AI development costs and efficiency, whilst simultaneously illustrating the complexity of measuring environmental impact. Whereas ChatGPT-4 was trained using 25,000 NVIDIA GPUs and Meta's Llama 3.1 used 16,000, DeepSeek used just 2,000 NVIDIA H800 chips. DeepSeek achieved ChatGPT-level performance with only $5.6 million in development costs compared to over $3 billion for GPT-4. Overall, DeepSeek requires a tenth of the GPU hours used by Meta's model, lowering its carbon footprint during training, reducing server usage, and decreasing water demand for cooling.

However, the inference picture proves more complex. Research comparing energy consumption across recent models found that DeepSeek-R1 and OpenAI's o3 emerge as the most energy-intensive models for inference, consuming over 33 watt-hours per long prompt, more than 70 times the consumption of GPT-4.1 nano. DeepSeek-R1 and GPT-4.5 consume 33.634 watt-hours and 30.495 watt-hours respectively. A single long query to o3 or DeepSeek-R1 may consume as much electricity as running a 65-inch LED television for roughly 20 to 30 minutes.

DeepSeek-R1 consistently emits over 14 grams of carbon dioxide and consumes more than 150 millilitres of water per query. The elevated emissions and water usage observed in DeepSeek models likely reflect inefficiencies in their data centres, including higher Power Usage Effectiveness (PUE) and suboptimal cooling technologies. DeepSeek appears to rely on Alibaba Cloud infrastructure, and China's national grid continues to depend heavily on coal, meaning the actual environmental impact per query may be more significant than models running on grids with higher renewable penetration.

The DeepSeek case illustrates a critical challenge: efficiency gains in one dimension (training costs) don't necessarily translate to improvements across the full lifecycle. Early figures suggest DeepSeek could be more energy intensive when generating responses than equivalent-size models from Meta. The energy it saves in training may be offset by more intensive techniques for answering questions and by the longer, more detailed answers these techniques produce.

Powering and Cooling the AI Future

Technical model optimisations represent only one dimension of reducing AI's environmental impact. The infrastructure that powers and cools these models offers equally significant opportunities for improvement.

The Renewable Energy Race

As of 2024, natural gas supplied over 40 per cent of electricity for U.S. data centres, according to the International Energy Agency. Renewables such as wind and solar supplied approximately 24 per cent of electricity at data centres, whilst nuclear power supplied around 20 per cent and coal around 15 per cent. This mix falls far short of what's needed to decarbonise AI.

However, renewables remain the fastest-growing source of electricity for data centres, with total generation increasing at an annual average rate of 22 per cent between 2024 and 2030, meeting nearly 50 per cent of the growth in data centre electricity demand. Major technology companies are signing massive renewable energy contracts to close the gap.

In May 2024, Microsoft inked a deal with Brookfield Asset Management for the delivery of 10.5 gigawatts of renewable energy between 2026 and 2030 to power Microsoft data centres. Alphabet added new clean energy generation by signing contracts for 8 gigawatts and bringing 2.5 gigawatts online in 2024 alone. Meta recently announced it anticipates adding 9.8 gigawatts of renewable energy to local grids in the U.S. by the end of 2025. Meta is developing a $10 billion AI-focused data centre, the largest in the Western Hemisphere, on a 2,250-acre site in Louisiana, a project expected to add at least 1,500 megawatts of new renewable energy to the grid.

These commitments represent genuine progress, but also face criticism regarding their market-based accounting. When a company signs a renewable energy power purchase agreement in one region, it can claim renewable energy credits even if the actual electrons powering its data centres come from fossil fuel plants elsewhere on the grid. This practice allows companies to report lower carbon emissions whilst not necessarily reducing actual emissions from the grid.

An August 2025 Goldman Sachs analysis forecasts that approximately 60 per cent of increasing electricity demands from data centres will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. According to a new report from the International Energy Agency, the world will spend $580 billion on data centres this year, $40 billion more than will be spent finding new oil supplies.

The Nuclear Option

The scale and reliability requirements of AI workloads are driving unprecedented interest in nuclear power, particularly Small Modular Reactors (SMRs). Unlike intermittent renewables, nuclear provides baseload power 24 hours per day, 365 days per year, matching the operational profile of data centres that cannot afford downtime.

Microsoft signed an agreement with Constellation Energy to restart a shuttered reactor at Three Mile Island. The plan calls for the reactor to supply 835 megawatts to grid operator PJM, with Microsoft buying enough power to match the electricity consumed by its data centres. The company committed to funding the $1.6 billion investment required to restore the reactor and signed a 20-year power purchase agreement.

Google made history in October 2024 with the world's first corporate SMR purchase agreement, partnering with Kairos Power to deploy 500 megawatts across 6 to 7 molten salt reactors. The first unit will come online by 2030, with full deployment by 2035.

Amazon Web Services leads with the most ambitious programme, committing to deploy 5 gigawatts of SMR capacity by 2039 through a $500 million investment in X-energy and partnerships spanning Washington State and Virginia. Amazon has established partnerships with Dominion Energy to explore SMR development near its North Anna nuclear facility, and with X-Energy and Energy Northwest to finance the development, licensing, and construction of in-state SMRs.

The smaller size and modular design of SMRs could make building them faster, cheaper, and more predictable than conventional nuclear reactors. They also come with enhanced safety features and could be built closer to transmission lines. However, SMRs face significant challenges. They are still at least five years from commercial operation in the United States. A year ago, the first planned SMR in the United States was cancelled due to rising costs and a lack of customers. Former U.S. Nuclear Regulatory Commission chair Allison Macfarlane noted: "Very few of the proposed SMRs have been demonstrated, and none are commercially available, let alone licensed by a nuclear regulator."

After 2030, SMRs are expected to enter the mix, providing a source of baseload low-emissions electricity to data centre operators. The US Department of Energy has launched a $900 million funding programme to support the development of SMRs and other advanced nuclear technologies, aiming to accelerate SMR deployment as part of the nation's clean energy strategy.

Cooling Innovations

Currently, cooling data centre infrastructure alone consumes approximately 40 per cent of an operator's energy usage. AI workloads exacerbate this challenge. AI models run on specialised hardware such as NVIDIA's DGX B200 and Google's TPUs, which can each produce up to 700 watts of heat. Traditional air cooling struggles with these heat densities.

Liquid cooling technologies offer dramatic improvements. Direct-to-chip liquid cooling circulates coolant through cold plates mounted directly on processors, efficiently transferring heat away from the hottest components. Compared to traditional air cooling, liquid systems can deliver up to 45 per cent improvement in Power Usage Effectiveness (PUE), often achieving values below 1.2. Two-phase cooling systems, which use the phase change from liquid to gas to absorb heat, require lower liquid flow rates than traditional single-phase water approaches (approximately one-fifth the flow rate), using less energy and reducing equipment damage risk.
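
What a PUE improvement means in practice is easy to see with a small calculation; the IT load and both PUE values below are illustrative assumptions, not measurements from any particular facility.

```python
# Facility energy = IT energy x PUE (illustrative values only).
IT_LOAD_MWH_PER_YEAR = 100_000

for label, pue in [("air-cooled", 1.6), ("liquid-cooled", 1.15)]:
    total = IT_LOAD_MWH_PER_YEAR * pue
    overhead = total - IT_LOAD_MWH_PER_YEAR
    print(f"{label}: PUE {pue} -> {total:,.0f} MWh total, {overhead:,.0f} MWh cooling/overhead")
# On the same IT load, dropping PUE from 1.6 to 1.15 cuts non-IT overhead from 60,000 to 15,000 MWh.
```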

Immersion cooling represents the most radical approach: servers are fully submerged in a non-conductive liquid. This method removes heat far more efficiently than air cooling, keeping temperatures stable and allowing hardware to run at peak performance for extended periods. The immersion-ready architecture allows operators to lower cooling-related energy use by as much as 50 per cent, reclaim heat for secondary uses, and reduce or eliminate water consumption. Compared to traditional air cooling, single-phase immersion cooling can help reduce electricity demand by up to nearly half, contribute to CO₂ emissions reductions of up to 30 per cent, and support up to 99 per cent less water consumption. Sandia National Laboratories researchers reported that direct immersion techniques may cut power use in compute-intensive HPC-AI clusters by 70 per cent.

As liquid cooling moves from niche to necessity, partnerships are advancing the technology. Engineered Fluids, Iceotope, and Juniper Networks have formed a strategic partnership aimed at delivering scalable, sustainable, and performance-optimised infrastructure for high-density AI and HPC environments. Liquid cooling is increasingly popular and expected to account for 36 per cent of data centre thermal management revenue by 2028.

Significant trends include the improvement of dielectric liquids, providing alternatives that help reduce carbon emissions. Moreover, immersion cooling allows for increased cooling system temperatures, which enhances waste heat recovery processes. This progress opens opportunities for district heating applications and other uses, turning waste heat from a disposal problem into a resource.

Policy, Procurement, and Transparency

Technical and infrastructure solutions provide the tools to reduce AI's environmental impact, but policy frameworks and procurement practices determine whether these tools will be deployed at scale. Regulation is beginning to catch up with the AI boom, though unevenly across jurisdictions.

European Union Leadership

The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, with enforcement taking effect in stages over several years. The Act aims to ensure "environmental protection, whilst boosting innovation" and imposes requirements concerning energy consumption and transparency. The legislation requires regulators to facilitate the creation of voluntary codes of conduct governing the impact of AI systems on environmental sustainability, energy-efficient programming, and techniques for the efficient design, training, and use of AI.

These voluntary codes of conduct must set out clear objectives and key performance indicators to measure the achievement of those objectives. The AI Office and member states will encourage and facilitate the development of codes for AI systems that are not high risk. Whilst voluntary, these aim to encourage assessing and minimising the environmental impact of AI systems.

The EU AI Act requires the European Commission to publish periodic reports on progress on the development of standards for energy-efficient deployment of general-purpose AI models, with the first report due by August 2, 2028. The Act also establishes reporting requirements, though critics argue these don't go far enough in mandating specific efficiency improvements.

Complementing the AI Act, the recast Energy Efficiency Directive (EED) takes a more prescriptive approach to data centres themselves. Owners and operators of data centres with an installed IT power demand of at least 500 kilowatts must report detailed sustainability key performance indicators, including energy consumption, Power Usage Effectiveness, temperature set points, waste heat utilisation, water usage, and the share of renewable energy used. Operators are required to report annually on these indicators, with the first reports submitted by September 15, 2024, and subsequent reports by May 15 each year.

In the first quarter of 2026, the European Commission will roll out a proposal for a Data Centre Energy Efficiency Package alongside the Strategic Roadmap on Digitalisation and AI for the Energy Sector. The Commission is also expected to publish a Cloud and AI Development Act in Q4 2025 or Q1 2026, aimed at tripling EU data centre processing capacity in the next 5 to 7 years. The proposal will allow for simplified permitting and other public support measures if they comply with requirements on energy efficiency, water efficiency, and circularity.

Carbon Accounting and Transparency

Regulatory initiatives are creating mandatory requirements. The EU's Corporate Sustainability Reporting Directive and California's Corporate Greenhouse Gas Reporting Programme will require detailed Scope 3 emissions data, whilst emerging product-level carbon labelling schemes demand standardised carbon footprint calculations. With regulations like the Corporate Sustainability Reporting Directive and Carbon Border Adjustment Mechanism coming into full force, AI platforms have become mission-critical infrastructure. CO2 AI's partnership with CDP in January 2025 launched the "CO2 AI Product Ecosystem," enabling companies to share product-level carbon data across supply chains.

However, carbon accounting debates, particularly around market-based versus location-based emissions, need urgent regulatory clarification. Market-based emissions can sometimes be misleading, allowing companies to claim renewable energy usage whilst their actual facilities draw from fossil fuel-heavy grids. Greater transparency requirements could mandate disclosure of both market-based and location-based emissions, providing stakeholders with a fuller picture of environmental impact.

Sustainable Procurement Evolution

Green procurement practices are evolving from aspirational goals to concrete, measurable requirements. In 2024, companies set broad sustainability goals, such as reducing emissions or adopting greener materials, but lacked granular, measurable milestones. Green procurement in 2025 emphasises quantifiable metrics on shorter timelines: companies are setting specific goals such as sourcing 70 per cent of materials from certified green suppliers, carbon reduction targets are aligning more closely with science-based targets, and enhanced public reporting allows stakeholders to monitor progress more transparently.

The United States has issued comprehensive federal guidance through White House Office of Management and Budget memoranda establishing requirements for government AI procurement, including minimum risk management practices for “high-impact AI” systems. However, most other jurisdictions have adopted a “wait and see” approach, creating a patchwork of regulatory requirements that varies dramatically across jurisdictions.

What Works Best?

With multiple strategies available, determining which approaches most effectively reduce environmental impact without compromising capability requires examining both theoretical potential and real-world results.

Research on comparative effectiveness reveals a clear hierarchy of impact. Neuromorphic hardware achieves the highest energy savings (over 60 per cent), followed by quantisation (up to 50 per cent) and model pruning (up to 30 per cent). However, neuromorphic hardware remains largely in research stages, whilst quantisation and pruning can be deployed immediately on existing models.
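
As a toy illustration of why quantisation saves memory and energy (not a production pipeline, and independent of any specific model discussed here), the following sketch applies symmetric post-training int8 quantisation to a single weight matrix:

```python
import numpy as np

def quantise_int8(weights: np.ndarray):
    """Symmetric post-training quantisation of a float32 tensor to int8."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor for use at inference time."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)   # one FP32 weight matrix
q, scale = quantise_int8(w)

print(f"Memory: {w.nbytes / 1e6:.0f} MB -> {q.nbytes / 1e6:.0f} MB (4x smaller)")
print(f"Max reconstruction error: {np.max(np.abs(w - dequantise(q, scale))):.4f}")
```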

Infrastructure choices matter more than individual model optimisations. The choice of data centre location, processor type, and energy source can reduce carbon footprint by approximately 100 to 1,000 times. Training a model in a renewable-powered Icelandic data centre versus a coal-dependent grid produces vastly different environmental outcomes. This suggests that procurement decisions about where to train and deploy models may have greater impact than architectural choices about model design.
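
A back-of-the-envelope sketch makes the point. Holding the training run fixed and varying only grid carbon intensity already spans more than an order of magnitude; combining location with processor and energy-source choices yields the 100 to 1,000 times range cited above. The energy figure and intensities below are rough, illustrative assumptions rather than measurements:

```python
# Back-of-the-envelope comparison: the same training run under different grid
# carbon intensities. Both the energy figure and the intensities are rough,
# illustrative assumptions rather than measurements.

TRAINING_ENERGY_MWH = 1_300   # roughly the scale often cited for a GPT-3-class run

grid_intensity_kg_per_mwh = {
    "Renewable-powered Icelandic grid": 30,
    "EU average grid": 250,
    "Coal-dependent grid": 800,
}

for grid, intensity in grid_intensity_kg_per_mwh.items():
    tonnes_co2e = TRAINING_ENERGY_MWH * intensity / 1_000
    print(f"{grid:35s} ~{tonnes_co2e:,.0f} tonnes CO2e")
```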

Cooling innovations deliver immediate, measurable benefits. The transition from air to liquid cooling can improve Power Usage Effectiveness by 45 per cent, with immersion cooling potentially reducing cooling-related energy use by 50 per cent and water consumption by up to 99 per cent. Unlike model optimisations that require retraining, cooling improvements can be deployed at existing facilities.
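
The arithmetic is straightforward. Assuming, purely for illustration, a 50-megawatt IT load and PUE values of 1.5 for air cooling and 1.1 for liquid cooling, the annual difference in facility energy is substantial:

```python
# Assumed values, purely for illustration: a 50 MW IT load, PUE 1.5 when
# air-cooled versus 1.1 when liquid-cooled.

IT_LOAD_MW = 50
HOURS_PER_YEAR = 8_760

def annual_facility_gwh(pue: float) -> float:
    """Total facility energy per year for a constant IT load, in GWh."""
    return IT_LOAD_MW * pue * HOURS_PER_YEAR / 1_000

air_cooled = annual_facility_gwh(pue=1.5)
liquid_cooled = annual_facility_gwh(pue=1.1)

print(f"Air-cooled:    {air_cooled:.0f} GWh/year")
print(f"Liquid-cooled: {liquid_cooled:.0f} GWh/year")
print(f"Saving:        {air_cooled - liquid_cooled:.0f} GWh/year")
```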

The Rebound Effect Challenge

Efficiency gains don't automatically translate to reduced total environmental impact due to the Jevons paradox or rebound effect. As Anthropic co-founder Dario Amodei noted, “Because the value of having a more intelligent system is so high, it causes companies to spend more, not less, on training models.” The gains in cost efficiency end up devoted to training larger, smarter models rather than reducing overall resource consumption.

This dynamic is evident in the trajectory from GPT-3 to GPT-4 to models like Claude Opus 4.5. Each generation achieves better performance per parameter, yet total training costs and environmental impacts increase because the models grow larger. Mixture of Experts architectures reduce inference costs per token, but companies respond by deploying these models for more use cases, increasing total queries.

The DeepSeek case exemplifies this paradox. DeepSeek's training efficiency potentially democratises AI development, allowing more organisations to train capable models. If hundreds of organisations now train DeepSeek-scale models instead of a handful training GPT-4-scale models, total environmental impact could increase despite per-model improvements.

Effective Strategies Without Compromise

Given the rebound effect, which strategies can reduce environmental impact without triggering compensatory increases in usage? Several approaches show promise, with illustrative sketches of three of them following the list:

Task-appropriate model selection: Using fine-tuned models for specific tasks rather than general-purpose generative models consumes approximately 30 times less energy. Deploying smaller, specialised models for routine tasks (classification, simple question-answering) whilst reserving large models for tasks genuinely requiring their capabilities could dramatically reduce aggregate consumption without sacrificing capability where it matters.

Temporal load shifting: Shaolei Ren's research proposes timing AI training during cooler hours to reduce water evaporation. “We don't water our lawns at noon because it's inefficient,” he explained. “Similarly, we shouldn't train AI models when it's hottest outside. Scheduling AI workloads for cooler parts of the day could significantly reduce water waste.” This approach requires no technical compromise, merely scheduling discipline.

Renewable energy procurement with additionality: Power purchase agreements that fund new renewable generation capacity, rather than merely purchasing existing renewable energy credits, ensure that AI growth drives actual expansion of clean energy infrastructure. Meta's Louisiana data centre commitment to add 1,500 megawatts of new renewable energy exemplifies this approach.

Mandatory efficiency disclosure: Requiring AI providers to disclose energy and water consumption per query or per task would enable users to make informed choices. Just as nutritional labels changed food consumption patterns, environmental impact labels could shift usage toward more efficient models and providers, creating market incentives for efficiency without regulatory mandates on specific technologies.

Lifecycle optimisation over point solutions: The DeepSeek paradox demonstrates that optimising one phase (training) whilst neglecting others (inference) can produce suboptimal overall outcomes. Lifecycle carbon accounting that considers training, inference, hardware manufacturing, and end-of-life disposal identifies the true total impact and prevents shifting environmental costs between phases.
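
First, a sketch of task-appropriate model selection. The model names, routing rules, and per-query energy figures below are illustrative assumptions, not measurements of any real system; the point is that routing routine tasks to a small model dominates the aggregate energy picture:

```python
# Hypothetical routing sketch: model names, routing rules, and per-query energy
# figures are illustrative assumptions, not measurements of any real system.

SMALL_MODEL_WH_PER_QUERY = 0.1   # fine-tuned, task-specific model
LARGE_MODEL_WH_PER_QUERY = 3.0   # general-purpose generative model

ROUTINE_TASKS = {"classification", "extraction", "faq"}

def route(task_type: str) -> str:
    """Send routine tasks to the small model, everything else to the large one."""
    return "small-model" if task_type in ROUTINE_TASKS else "large-model"

def estimated_energy_wh(tasks: list[str]) -> float:
    return sum(
        SMALL_MODEL_WH_PER_QUERY if route(t) == "small-model" else LARGE_MODEL_WH_PER_QUERY
        for t in tasks
    )

workload = ["faq"] * 800 + ["classification"] * 150 + ["open_ended_reasoning"] * 50
print(f"Routed workload:    {estimated_energy_wh(workload):.0f} Wh")
print(f"All-large workload: {len(workload) * LARGE_MODEL_WH_PER_QUERY:.0f} Wh")
```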
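
Second, a sketch of temporal load shifting. Given an hourly temperature forecast (the values here are made up), a deferrable training job can simply be scheduled into the coolest contiguous window:

```python
# Illustrative forecast values; in practice these would come from a weather API
# or the facility's own telemetry.

def coolest_window(hourly_temps_c: list[float], job_hours: int) -> int:
    """Return the start hour of the window with the lowest mean temperature."""
    best_start, best_mean = 0, float("inf")
    for start in range(len(hourly_temps_c) - job_hours + 1):
        mean_t = sum(hourly_temps_c[start:start + job_hours]) / job_hours
        if mean_t < best_mean:
            best_start, best_mean = start, mean_t
    return best_start

forecast = [22, 20, 19, 18, 18, 19, 22, 26, 30, 33, 35, 36,
            36, 35, 33, 31, 29, 27, 25, 24, 23, 22, 21, 20]   # next 24 hours, degrees C

start = coolest_window(forecast, job_hours=4)
print(f"Schedule the 4-hour job to start at hour {start:02d}:00")
```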
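
Third, a sketch of lifecycle accounting. The per-phase figures are placeholders rather than measurements; the structure shows why inference and embodied hardware carbon can dominate the total once a model has been in service for a few years:

```python
# Placeholder per-phase figures (tonnes CO2e); the structure, not the numbers,
# is the point: inference and embodied hardware carbon accumulate over time.

lifecycle_tonnes_co2e = {
    "hardware_manufacturing": 1_500,   # embodied carbon of accelerators and servers
    "training": 500,
    "inference_per_year": 2_000,       # scales with query volume
    "end_of_life": 50,
}

def total_lifecycle(years_in_service: int) -> float:
    fixed = (lifecycle_tonnes_co2e["hardware_manufacturing"]
             + lifecycle_tonnes_co2e["training"]
             + lifecycle_tonnes_co2e["end_of_life"])
    return fixed + years_in_service * lifecycle_tonnes_co2e["inference_per_year"]

for years in (1, 3, 5):
    print(f"{years} year(s) in service: ~{total_lifecycle(years):,.0f} tonnes CO2e")
```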

Expert Perspectives

Researchers and practitioners working at the intersection of AI and sustainability offer nuanced perspectives on the path forward.

Sasha Luccioni, Research Scientist and Climate Lead at Hugging Face and a founding member of Climate Change AI, has spent over a decade studying AI's environmental impacts. Her project, “You can't improve what you don't measure: Developing Standards for Sustainable Artificial Intelligence,” aims to document AI's environmental impacts whilst contributing new tools and standards to better measure its effect on the climate. She has advised organisations including the OECD, the United Nations, and the NeurIPS conference on norms and best practices for more sustainable and ethical AI.

Luccioni, along with Emma Strubell and Kate Crawford (author of “Atlas of AI”), collaborated on research including “Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice.” Their work emphasises that “This system-level complexity underscores the inadequacy of the question, 'Is AI net positive or net negative for the climate?'” Instead, they adopt an analytic approach that includes social, political, and economic contexts in which AI systems are developed and deployed. Their paper argues that the AI field needs to adopt a more detailed and nuanced approach to framing AI's environmental impacts, including direct impacts such as mineral supply chains, carbon emissions from training large-scale models, water consumption, and e-waste from hardware.

Google has reported substantial efficiency improvements in recent model generations, claiming a 33-fold reduction in energy and a 44-fold reduction in carbon for the median prompt compared with 2024. These gains result from combined improvements in model architecture (more efficient transformers), hardware (purpose-built TPUs), and infrastructure (renewable energy procurement and cooling optimisation).

DeepSeek-V3 achieved 95 per cent lower energy use whilst maintaining competitive performance, showing that efficiency innovation is possible without sacrificing capability. However, as noted earlier, this must be evaluated across the full inference lifecycle, not just training.

Future Outlook and Pathways Forward

The trajectory of AI's environmental impact over the next decade will be determined by the interplay of technological innovation, infrastructure development, regulatory frameworks, and market forces.

Architectural innovations continue to push efficiency boundaries. Sub-quadratic attention mechanisms, state space models, and novel approaches like Mamba suggest that the transformer architecture's dominance may give way to more efficient alternatives. Hardware-software co-design, exemplified by Google's TPUs, NVIDIA's Transformer Engine, and emerging neuromorphic chips, promises orders of magnitude improvement over general-purpose processors.

Model compression techniques will become increasingly sophisticated. Current quantisation approaches typically target 8-bit or 4-bit precision, but research into 2-bit and even 1-bit models continues. Distillation methods are evolving beyond simple teacher-student frameworks to more complex multi-stage distillation and self-distillation approaches. Automated neural architecture search may identify efficient architectures that human designers wouldn't consider.

The renewable energy transition for data centres faces both tailwinds and headwinds. Major technology companies have committed to massive renewable energy procurement, potentially driving expansion of wind and solar capacity. However, the International Energy Agency projects that approximately 60 per cent of new data centre electricity demand through 2030 will still come from fossil fuels, primarily natural gas.

Nuclear power, particularly SMRs, could provide the baseload clean energy that data centres require, but deployment faces significant regulatory and economic hurdles. The first commercial SMRs remain at least five years away, and costs may prove higher than proponents project. The restart of existing nuclear plants like Three Mile Island offers a faster path to clean baseload power, but the number of suitable candidates for restart is limited.

Cooling innovations will likely see rapid adoption driven by economic incentives. As AI workloads become denser and electricity costs rise, the 40 to 70 per cent energy savings from advanced liquid cooling become compelling purely from a cost perspective. The co-benefit of reduced water consumption provides additional impetus, particularly in water-stressed regions.

Scenarios for 2030

Optimistic Scenario: Aggressive efficiency improvements (sub-quadratic architectures, advanced quantisation, MoE models) combine with rapid cooling innovations (widespread liquid/immersion cooling) and renewable energy expansion (50 per cent of data centre electricity from renewables). Comprehensive disclosure requirements create market incentives for efficiency. AI's energy consumption grows to 800 terawatt-hours by 2030, representing a substantial reduction from business-as-usual projections of 1,000-plus terawatt-hours. Water consumption plateaus or declines due to liquid cooling adoption. Carbon emissions increase modestly rather than explosively.

Middle Scenario: Moderate efficiency improvements are deployed selectively by leading companies but don't become industry standard. Renewable energy procurement expands but fossil fuels still supply approximately 50 per cent of new data centre electricity. Cooling innovations see partial adoption in new facilities but retrofitting existing infrastructure lags. AI energy consumption reaches 950 terawatt-hours by 2030. Water consumption continues increasing but at a slower rate than worst-case projections. Carbon emissions increase significantly, undermining technology sector climate commitments.

Pessimistic Scenario: Efficiency improvements are consumed by model size growth and expanded use cases (Jevons paradox dominates). Renewable energy capacity expansion can't keep pace with AI electricity demand growth. Cooling innovations face adoption barriers (high capital costs, retrofit challenges, regulatory hurdles). AI energy consumption exceeds 1,200 terawatt-hours by 2030. Water consumption in water-stressed regions triggers conflicts with agricultural and municipal needs. Carbon emissions from the technology sector more than double, making net-zero commitments unachievable without massive carbon removal investments.

The actual outcome will likely fall somewhere between these scenarios, varying by region and company. The critical determinants are policy choices made in the next 24 to 36 months and the extent to which efficiency becomes a genuine competitive differentiator rather than a public relations talking point.

Recommendations and Principles

Based on the evidence examined, several principles should guide efforts to reduce AI's environmental impact without compromising valuable capabilities:

Measure Comprehensively: Lifecycle metrics that capture training, inference, hardware manufacturing, and end-of-life impacts provide a complete picture and prevent cost-shifting between phases.

Optimise Holistically: Point solutions that improve one dimension whilst neglecting others produce suboptimal results. The DeepSeek case demonstrates the importance of optimising training and inference together.

Match Tools to Tasks: Using the most capable model for every task wastes resources. Task-appropriate model selection can reduce energy consumption by an order of magnitude without sacrificing outcomes.

Prioritise Infrastructure: Data centre location, energy source, and cooling technology have greater impact than individual model optimisations. Infrastructure decisions can reduce carbon footprint by 100 to 1,000 times.

Mandate Transparency: Disclosure enables informed choice by users, procurement officers, and policymakers. Without measurement and transparency, improvement becomes impossible.

Address Rebound Effects: Efficiency improvements must be coupled with absolute consumption caps or carbon pricing to prevent Jevons paradox from negating gains.

Pursue Additionality: Renewable energy procurement should fund new capacity rather than merely redistributing existing renewable credits, ensuring AI growth drives clean energy expansion.

Innovate Architectures: Fundamental rethinking of model architectures (sub-quadratic attention, state space models, neuromorphic computing) offers greater long-term potential than incremental optimisations of existing approaches.

Consider Context: Environmental impacts vary dramatically by location (grid carbon intensity, water availability). Siting decisions and temporal load-shifting can reduce impacts without technical changes.

Balance Innovation and Sustainability: The goal is not to halt AI development but to ensure it proceeds on a sustainable trajectory. This requires making environmental impact a primary design constraint rather than an afterthought.


The environmental costs of generative AI are significant and growing, but the situation is not hopeless. Technical strategies including model compression, efficient architectures, and hardware innovations can dramatically reduce energy and water consumption. Infrastructure improvements in renewable energy procurement, cooling technologies, and strategic siting offer even greater potential impact. Policy frameworks mandating transparency and establishing efficiency standards can ensure these solutions are deployed at scale rather than remaining isolated examples.

The critical question is not whether AI can be made more sustainable, but whether it will be. The answer depends on choices made by developers, cloud providers, enterprise users, and policymakers in the next few years. Will efficiency become a genuine competitive advantage and procurement criterion, or remain a secondary consideration subordinate to capability and speed? Will renewable energy procurement focus on additionality that expands clean generation, or merely shuffle existing renewable credits? Will policy frameworks mandate measurable improvements, or settle for voluntary commitments without enforcement?

The trajectory matters enormously. Under a business-as-usual scenario, AI could consume over 1,200 terawatt-hours of electricity by 2030, much of it from fossil fuels, whilst straining freshwater resources in already stressed regions. Under an optimistic scenario with aggressive efficiency deployment and renewable energy expansion, consumption could be 30 to 40 per cent lower whilst delivering equivalent or better capabilities. The difference between these scenarios amounts to hundreds of millions of tons of carbon dioxide and billions of cubic metres of water.

The tools exist. The question is whether we'll use them.


Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from Justina Revolution

It’s 10:38 PM in Montevideo, Uruguay. I am here on this platform writing words that I don’t know if anyone will find or read. I am in my little apartment; through the door there are twinkling Christmas lights. But this feels more honest than Medium or Substack. This feels like my place.

I control the vertical and the horizontal here. This is my most honest blog. Where I divulge all the tasty secrets that the paying public won’t tolerate. Shorter punchier posts about vampires, bimbos, and dark sorcery.

Here is where I create things that don’t digest well in the light of day. The thoughts that get flushed away like a wriggling centipede caught in toilet paper.

“Baby, you need to leave, 'Cause I'm getting drunk on your noble deeds. It doesn't matter that they don't get done, When I feel this cold, they're like the fucking sun.

Baby, I need a friend, But I'm a vampire smile, you'll meet a sticky end. I'm here trying not to bite your neck, But it's beautiful, and I'm gonna get...” – Kyla LaGrange, Vampire Smile

 
Read more... Discuss...

from 💚

My Sweet and Precious Friend

My day, my bliss This hangman is asleep If only Wednesday came anew A splitting grandeur On island lines A will for fortune’s done More to the embassy A fright for brethren love No way in, But a boxcar And men went screaming were they new An upper sound past the flourish Tomorrow’s stolen boy Afraid of the attic And seeking her shoes But life into the net Epistles in joy- Frankenstein with Supposing wardom come I am chat in a spallow- And paper day Adjusting one hen and six loaves A funny day it were Houses of good and get A universe On pine

 
Read more...

from 💚

Happy Birthday Amanda

In experiences early To the broader years imbue A special one- a force for reckoning Birthdays of the reform Where angels assist in mercy Amanda of Christ The strong canary With fortified mew The interspersed dew valley And contribs to the social coast We put in words for the angel And sprayed ourselves a-winter The ineffable dream Of a future forest And daily mastery This vivid year All these comforts are yours In dopamine grey And sits responding, Oh you Yes, really Your day is in design Remarkably good While kin remark of your day And ordination of wants- And offers This is the cherry pie of Jupiter And other valleys of Spirit That coast we share In summertime blue And stakes remain To gold

 
Read more...

from Justina Revolution

I cut myself and bleed for your pennies.

I have no shame. 

I'll share my trauma.

I'll give you my pain, my pleasure, my dignity.

Poems are a worthless waste of time.

No one pays for them. 

I wanted to tell stories.

Cut. Clink. Bleed

To create wonder and make people happy. 

Cut. Clink. Cut. Clink.

But happiness is in a grave somewhere near El Paso.

And wonder? 

They took the first flight out. 

Cut. Clink. Bleed.

And now you're here with me. 

The mediocre poet.

I pretty myself up. 

Stand on the digital street corner. 

Hoping you will throw pennies.

I cut myself with my shiny razor. 

It's a terrible, beautiful thing.

The only beauty left in this tired old world. 

We are all dying flowers in a neglected garden. 

Cut. Clink. Cut. Clink.

Beauty fades slowly. 

But it fades. 

50 long years of struggle. 

Culminates in this rite of the damned.

Cut. Clink. Bleed.

Will my suffering earn a crust of bread and a small room? 

Will I sleep rough? 

Will I be safe? 

Cut. Clink. Bleed.

Daylight comes. 

I stare at the aged reflection in a store window.

God what was I before this? 

Cut. Clink. Bleed.

 
Read more... Discuss...
