Want to join in? Respond to our weekly writing prompts, open to everyone.
from Mitchell Report
⚠️ SPOILER WARNING: MAJOR SPOILERS

In the desolate wasteland of New Vegas, three survivors and their loyal dog embark on a perilous journey through a post-apocalyptic world where every step could be their last.
My Rating: ⭐⭐⭐⭐ (4/5 stars)
Episodes: 8 | Aired: 12-16-2025
This season answered a lot of questions and introduced a few new ones, and I actually liked it better than the first season. The flashbacks were handled much better this time, and it's thoroughly engaging and enjoyable entertainment. My one negative is that this isn't the first TV show or movie to build futuristic technology onto the post-World War 2 50s and 60s, and I'm always astonished at the technology they've built while simpler things get missed; I keep asking why the technology they have doesn't work here or there. Oh, and I didn't miss the few references to modern-day issues and Trump. I find it amazing how the politics of today is meshing into TV shows. Some do it well (like here) and some don't (like the recent Superman of 2025). But one thing on the real-life side is the old axiom: history seems to always repeat itself. Looking forward to Season 3.
#review #tv #streaming
from folgepaula
as sweet as possible as spontaneous as possible as sincere as possible as serene as possible as strong as possible as symbolic as possible as soothing as possible as soulful as possible
/feb26
from Sinnorientierung
A Message of Hope
Each of you is unique, unrepeatable, irreplaceable, incomparable, separate, and distinct. You have been given a body and a psyche which are sometimes similar in character type and/or traits to others, but beyond that you are a spirit person with a limited degree of freedom and a capacity to respond to life and its demands. There never was, there never is, there never will be an absolute twin, a clone, one who can replace you. You are one of a kind, and life is calling, inviting, and challenging you to become the authentic you by transcending yourself and at the same time forgetting yourself.
If you simply search for pleasure or power, you will experience something missing. You will at some moment feel empty, a void, a vacuum. You will wonder, “What's it all about?”
When the need for meaning finally occurs to you, you will begin to search for meaning every day.
...
McKilopp, T. (1993) A MESSAGE OF HOPE, The International Forum of Logotherapy, p. 4
#LogoTherapy #FranklViktor #McKillopp #hope #UniquePerson #meaning
from Reflections
This fairly recent obsession with metrics in the workplace is driving companies insane.
A while back, I watched a video about all the ways hotels are trying to save money by, among other things, eliminating storage space, making the bathroom less private, removing desks, and pressuring guests to work at the bar, where they can spend more money. (By the way, that bartender? They're also the receptionist.) These changes are, of course, driven by metrics like “GSS” and “ITR,” whatever the f@*k those are.
Is there a kernel of truth to all of this? Sure. Aloft Hotels are cozy, and they seem to follow this playbook. I didn't mind staying in one when I was stuck in San Francisco for one night more than ten years ago. Would I want to stay in one of their rooms during a business trip or anything else lasting more than a couple of days? Hell no. I'd like a desk and somewhere to put clothes. (I know, I'm so needy. I travel with clothes.)
Metrics are fine, sometimes, when their use is limited and their shortcomings are genuinely appreciated. Taking them too seriously and letting them make the decisions, however, is a recipe for disaster. Hard questions demand more thoughtfulness than that. “GSS” and “ITR” are meaningful until they aren't, and nobody is going to find solace in those abbreviations when generations of potential customers steer clear of your business because they actually want something good.
Sadly, I don't think most businesses think that far ahead.
Show me the metric which proves that your business isn't incurring massive risk by ignoring common sense. Until then, I don't care about “the numbers.”
#Life #SoftwareDevelopment #Tech
from Healthier
Lydia Joly, middle, on her parents’ farm circa 1967 — son, Loran, left; sister, right; my great-grandmother, back row. When great-grandmother was not visiting, I would sometimes sleep in the bed she had slept in when at the farm… “The apple doesn’t fall far from the tree?”
“Becoming Home – full film”:
https://youtu.be/NtPbAuFMI0c?si=bcCTE2fZH3PVN7vy
The documentary “Becoming Home” touched my heart a few years ago. Made by filmmaker Michael DuBois, it chronicles the “first year after the death of his mother. He set out to discover why she had the astounding impact on others that she did…”
Michael was living on Cape Cod when he created this documentary…
“Becoming Home” is his finished story. It is the story of his mother, and her grace through life. It is the story of his childhood. And it is the story of learning to move forward after those losses, without moving away from them.
Directed by Michael F. DuBois
Produced by Bert Mayer and Larissa Farrell
Director of Photography Mark Kammel
Original Music by Derek Hamilton
Featuring Music by Sky Flying By and Pete Miller”
My mother, Lydia Joly, age 87, war refugee from Piaski, Poland, with time in a relocation camp in northern Germany after World War II also — arrived Ellis Island 1950 — image by son Loran
Christmas card 2024 with Lydia’s self-made Gingerbread house
Lydia — my mother — was born in Lubelskie County, Poland.
We see her village, Piaski, here, with beautiful music…
https://youtu.be/XF04EznukOY?si=E2qJLDS5jNsJxzaI
No wonder she loves gardening and flowers…
Lydia, gardening, 2025, age 87
from Iain Harper's Blog
Caveat: this article contains a detailed examination of the state of open-source / open-weight AI technology that is accurate as of February 2026. Things move fast.
I don’t make a habit of writing about wonky AI takes on social media, for obvious reasons. However, a post from an AI startup founder (there are seemingly one or two out there at the moment) caught my attention.
His complaint was that he was spending $1,000 a week on API calls for his AI agents, realised the real bottleneck was infrastructure rather than intelligence, and dropped $10,000 on a Mac Studio with an M3 Ultra and 512GB of unified memory. His argument was essentially every model is smart enough, the ceiling is infrastructure, and the future belongs to whoever removes the constraints first.
It’s a beguiling pitch and it hit a nerve because the underlying frustration is accurate. Rate limits, per-token costs, and context window restrictions do shape how people build with these models, and the desire to break free of those constraints is understandable. But the argument collapses once you look at what local models can actually do today compared to what frontier APIs deliver, and why the gap between the two is likely to persist for the foreseeable future.
To understand why, you need to look at the current open-source model ecosystem in some detail, examine what’s actually happening on the frontier, and think carefully about the conditions that would need to hold for convergence to happen.
The open-source model ecosystem has matured considerably over the past eighteen months, to the point where dismissing it as a toy would be genuinely unfair. The major families that matter right now are Meta’s Llama series, Alibaba’s Qwen line, and DeepSeek’s V3 and R1 models, with Mistral, Google’s Gemma, and Microsoft’s Phi occupying important niches for specific use cases.
DeepSeek’s R1 release in January 2025 was probably the single most consequential open-source event in the past two years. Built on a Mixture of Experts architecture with 671 billion total parameters but only 37 billion activated per forward pass, R1 achieved performance comparable to OpenAI’s o1 on reasoning benchmarks including GPQA, AIME, and Codeforces. What made it seismic was the claimed training cost: approximately $5.6 million, compared to the hundred-million-dollar-plus budgets associated with frontier models from the major Western labs. NVIDIA lost roughly $600 billion in market capitalisation in a single day when the implications sank in.
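To make the sparse-activation idea concrete, here is a minimal, toy sketch of top-k expert routing. The dimensions, the 16 experts, and the top-2 routing are illustrative assumptions, nothing like R1's actual configuration, but the mechanism is the same: only the chosen experts' weights participate in each forward pass, which is how a model with a very large total parameter count can keep its per-token compute comparatively small.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff = 64, 256          # toy dimensions, nothing like a frontier model
n_experts, top_k = 16, 2         # route each token to 2 of 16 experts

# Each "expert" is a small feed-forward block: (W_in, W_out)
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.02,
     rng.standard_normal((d_ff, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02  # gating weights

def moe_forward(x):
    """Route a single token vector through only its top-k experts."""
    logits = x @ router
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    chosen = np.argsort(probs)[-top_k:]          # indices of the k best experts
    gate = probs[chosen] / probs[chosen].sum()   # renormalised gate weights
    out = np.zeros_like(x)
    for g, idx in zip(gate, chosen):
        w_in, w_out = experts[idx]
        out += g * (np.maximum(x @ w_in, 0.0) @ w_out)  # ReLU FFN per expert
    return out, chosen

token = rng.standard_normal(d_model)
_, used = moe_forward(token)
total = n_experts * (d_model * d_ff * 2)
active = top_k * (d_model * d_ff * 2)
print(f"experts used: {used}, active params: {active}/{total} ({100 * active / total:.0f}%)")
```

In this sketch roughly an eighth of the expert parameters are touched per token, which is the same ratio-of-active-to-total logic behind 37 billion active out of 671 billion.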
The Lawfare Institute’s analysis of DeepSeek’s achievement noted an important caveat that often gets lost in the retelling: the $5.6 million figure represents marginal training cost for the final R1 phase, and does not account for DeepSeek’s prior investment in the V3 base model, their GPU purchases (which some estimates put at 50,000 H100-class chips), or the human capital expended across years of development. The true all-in cost was substantially higher. But even with those qualifications, the efficiency gains were highly impressive, and they forced the entire industry to take algorithmic innovation as seriously as raw compute scaling.
Alibaba’s Qwen3 family, released in April 2025, pushed things further. The 235B-A22B variant uses a similar MoE approach, activating 22 billion parameters out of 235 billion, and it introduced hybrid reasoning modes that can switch between extended chain-of-thought and direct response depending on task complexity. The newer Qwen3-Coder-480B-A35B, released later in 2025, achieves 61.8% on the Aider Polyglot benchmark under full precision, which puts it in the same neighbourhood as Claude Sonnet 4 and GPT-4.1 for code generation specifically.
Meta’s Llama 4, released in early 2025, moved to natively multimodal MoE with the Scout and Maverick variants processing vision, video, and text in the same forward pass. Mistral continued to punch above its weight with the Large 3 release at 675 billion parameters, and their claim of delivering 92% of GPT-5.2’s performance at roughly 15% of the price represents the kind of value proposition that makes enterprise buyers think twice about their API contracts.
According to Menlo Ventures’ mid-2025 survey of over 150 technical leaders, open-source models now account for approximately 13% of production AI workloads, with the market increasingly structured around a durable equilibrium. Proprietary systems define the upper bound of reliability and performance for regulated or enterprise workloads, while open-source models offer cost efficiency, transparency, and customisation for specific use cases.
By any measure, this is a serious and capable ecosystem. The question is whether it’s capable enough to replace frontier APIs for agentic, high-reasoning work.
The Mac Studio with an M3 Ultra and 512GB of unified memory is genuinely impressive hardware for local inference. Apple’s unified memory architecture means the GPU, CPU, and Neural Engine all share the same memory pool without the traditional separation between system RAM and VRAM, which makes it uniquely suited to running large models that would otherwise require expensive multi-GPU setups. Real-world benchmarks show the M3 Ultra achieving approximately 2,320 tokens per second on a Qwen3-30B 4-bit model, which is competitive with an NVIDIA RTX 3090 while consuming a fraction of the power.
But the performance picture changes dramatically as model size increases. Running the larger Qwen3-235B-A22B at Q5 quantisation on the M3 Ultra yields generation speeds of approximately 5.2 tokens per second, with first-token latency of around 3.8 seconds. At Q4KM quantisation, users on the MacRumors forums report around 30 tokens per second, which is usable for interactive work but a long way from the responsiveness of cloud APIs processing multiple parallel requests on clusters of H100s or B200s. And those numbers are for the quantised versions, which brings us to the core technical problem.
Quantisation is the process of reducing the numerical precision of a model’s weights, typically from 16-bit floating point down to 8-bit or 4-bit integers, in order to shrink the model enough to fit in available memory. The trade-off is information loss, and research published at EMNLP 2025 by Mekala et al. makes the extent of that loss uncomfortably clear. Their systematic evaluation across five quantisation methods and five models found that while 8-bit quantisation preserved accuracy with only about a 0.8% drop, 4-bit methods led to substantial losses, with performance degradation of up to 59% on tasks involving long-context inputs. The degradation worsened for non-English languages and varied dramatically between models and tasks, with Llama-3.1 70B experiencing a 32% performance drop on BNB-nf4 quantisation while Qwen-2.5 72B remained relatively robust under the same conditions.
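As a rough illustration of where that information loss comes from, the sketch below quantises a random weight vector to 8-bit and 4-bit integers and measures the round-trip error. Real schemes (GPTQ, AWQ, NF4 and friends) are far more sophisticated, with per-group scales and outlier handling, but the underlying trade-off between bit-width and fidelity is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal(100_000).astype(np.float32)  # stand-in for fp16 weights

def quantise_roundtrip(w, bits):
    """Symmetric uniform quantisation: map to signed integers, then back to float."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for int8, 7 for int4
    scale = np.abs(w).max() / levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale                      # dequantised weights

for bits in (8, 4):
    restored = quantise_roundtrip(weights, bits)
    err = np.mean((weights - restored) ** 2)
    print(f"{bits}-bit round-trip mean squared error: {err:.6f}")
```

Even this naive version shows the 4-bit error sitting orders of magnitude above the 8-bit error, which lines up with the pattern the EMNLP results describe: 8-bit is close to lossless, 4-bit is where things start to hurt.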
Separate research from ACL 2025 introduces an even more concerning finding for the long-term trajectory of local models. As models become better trained on more data, they actually become more sensitive to quantisation degradation. The study’s scaling laws predict that quantisation-induced degradation will worsen as training datasets grow toward 100 trillion tokens, a milestone likely to be reached within the next few years. In practical terms, this means that the models most worth running locally are precisely the ones that lose the most from being compressed to fit.

When someone says they're using a local model, they're usually running a quantised version of a model that is already smaller than what the frontier labs deploy. The experience might feel good in interactive use, but the gap becomes apparent on exactly the tasks that matter most for production agentic work: multi-step reasoning over long contexts, complex tool-use orchestration, and domain-specific accuracy where “pretty good” is materially different from “correct.”
The most persistent advantage that frontier models hold over open-source alternatives has less to do with architecture and more to do with what happens after pre-training. Reinforcement Learning from Human Feedback and its variants form a substantial part of this gap, and the economics of closing it are unfavourable for the open-source community.
RLHF works by having human annotators evaluate pairs of model outputs and indicate which response better satisfies criteria like helpfulness, accuracy, and safety. Those preferences train a reward model, which then guides further optimisation of the language model through reinforcement learning. The process turns a base model that just predicts the next token into something that follows instructions well, pushes back when appropriate, handles edge cases gracefully, and avoids the confident-but-wrong failure mode that plagues undertrained systems.
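A minimal sketch of the pairwise objective behind that reward model is below, assuming the standard Bradley-Terry-style formulation. The embeddings, dimensions, and the simple linear head are placeholders; in a real pipeline the scoring head sits on top of a full language model and the "chosen"/"rejected" inputs come from annotator-labelled response pairs.

```python
import torch
import torch.nn as nn

# Toy reward model: scores a pooled embedding of a (prompt, response) pair.
class RewardModel(nn.Module):
    def __init__(self, d_model=768):
        super().__init__()
        self.score = nn.Linear(d_model, 1)

    def forward(self, pooled_embedding):
        return self.score(pooled_embedding).squeeze(-1)

model = RewardModel()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for a batch of annotator preferences: embeddings of the preferred
# ("chosen") and dispreferred ("rejected") responses to the same prompts.
chosen = torch.randn(32, 768)
rejected = torch.randn(32, 768)

# Pairwise preference loss: push the chosen score above the rejected score.
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
optimiser.step()
print(f"pairwise preference loss: {loss.item():.3f}")
```

The code is trivial; the expensive part is everything it assumes already exists, namely tens of thousands of high-quality human judgments feeding those "chosen" and "rejected" labels.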
The cost of doing this well at scale is staggering. Research from Daniel Kang at Stanford estimates that high-quality human data annotation now exceeds compute costs by up to 28 times for frontier models, with the data labelling market growing at a factor of 88 between 2023 and 2024 while compute costs increased by only 1.3 times. Producing just 600 high-quality RLHF annotations can cost approximately $60,000, which is roughly 167 times more than the compute expense for the same training iteration. Meta’s post-training alignment for Llama 3.1 alone required more than $50 million and approximately 200 people.
The frontier labs have also increasingly moved beyond basic RLHF toward more sophisticated approaches. Anthropic’s Constitutional AI has the model critique its own outputs against principles derived from human values, while the broader shift toward expert annotation, particularly for code, legal reasoning, and scientific analysis, means the humans providing feedback need to be domain practitioners rather than general-purpose annotators. This is expensive, slow, and extremely difficult to replicate through the synthetic and distilled preference data that open-source projects typically rely on.
The 2025 introduction of RLTHF (Targeted Human Feedback) from research surveyed in Preprints.org offers some hope, achieving full-human-annotation-level alignment with only 6-7% of the human annotation effort by combining LLM-based initial alignment with selective human corrections. But even these efficiency gains don’t close the fundamental gap: frontier labs can afford to spend tens of millions on annotation because they recoup it through API revenue, while open-source projects face a collective action problem where the cost of annotation is concentrated but the benefits are distributed.
The picture is not uniformly bleak for open-source, and understanding where the gap has closed is as important as understanding where it hasn’t.
Code generation is the domain where convergence has happened fastest. Qwen3-Coder’s 61.8% on Aider Polyglot at full precision puts it within striking distance of frontier coding models, and the Unsloth project’s dynamic quantisation of the same model achieves 60.9% at a quarter of the memory footprint, which represents remarkably small degradation. For writing, editing, and iterating on code, a well-configured local model running on capable hardware is now a genuinely viable alternative to an API, provided you’re not relying on long-context reasoning across an entire codebase.
Classification, summarisation, and embedding tasks have been viable on local models for some time, and the performance gap for these workloads is now negligible for most practical purposes. Document processing, data extraction, and content drafting all fall into the category where open-source models deliver sufficient quality at dramatically lower cost.
The OpenRouter State of AI report’s analysis of over 100 trillion tokens of real-world usage data shows that Chinese open-source models, particularly from Alibaba and DeepSeek, have captured approximately 13% of weekly token volume with strong growth in the second half of 2025, driven by competitive quality combined with rapid iteration and dense release cycles. This adoption is concentrated in exactly the workloads described above: high-volume, well-defined tasks where cost efficiency matters more than peak reasoning capability.
Privacy-sensitive applications represent another area where local models have an intrinsic advantage that no amount of frontier improvement can overcome. MacStories’ Federico Viticci noted that running vision-language models locally on a Mac Studio for OCR and document analysis bypasses the image compression problems that plague cloud-hosted models, while keeping sensitive documents entirely on-device. For regulated industries where data sovereignty matters, local inference is a feature that frontier APIs cannot match.
If the question is whether open-source models running on consumer hardware will eventually match frontier models across all tasks, the honest answer requires examining several conditions that would need to hold simultaneously.
The first is that Mixture of Experts architectures and similar efficiency innovations would need to continue improving at their current rate, allowing models with hundreds of billions of total parameters to activate only the relevant subset for each task while maintaining quality. The early evidence from DeepSeek’s MoE approach and Qwen3’s hybrid reasoning is encouraging, but there appear to be theoretical limits to how sparse activation can get before coherence suffers on complex multi-step problems.
The second condition is that the quantisation problem would need a genuine breakthrough rather than incremental improvement. The ACL 2025 finding that better-trained models are more sensitive to quantisation is a structural headwind that current techniques are not on track to solve. Red Hat’s evaluation of over 500,000 quantised model runs found that larger models at 8-bit quantisation show negligible degradation, but the story at 4-bit, where you need to be for consumer hardware, is considerably less encouraging for anything beyond straightforward tasks.
The third and most fundamental condition is that the post-training gap would need to close, which requires either a dramatic reduction in the cost of expert human annotation or a breakthrough in synthetic preference data that produces equivalent alignment quality. The emergence of techniques like RLTHF and Online Iterative RLHF suggests the field is working on this, but the frontier labs are investing in these same efficiency gains while simultaneously scaling their annotation budgets. It’s a race where both sides are accelerating, and the side with revenue-funded annotation budgets has a structural advantage.
The fourth condition is that inference hardware would need to improve enough to make unquantised or lightly quantised large models viable on consumer devices. Apple’s unified memory architecture is the most promising path here, and the progression from M1 to M4 chips has been impressive, but even the top-spec M3 Ultra at 512GB can only run the largest MoE models at aggressive quantisation levels. The next generation of Apple Silicon with 1TB+ unified memory would change the calculus significantly, but that’s likely several years away, and memory costs just shot through the ceiling.
Given all of these dependencies, a realistic timeline for broad convergence across most production tasks is probably three to five years, with coding and structured data tasks converging first, creative and analytical tasks following, and complex multi-step reasoning with tool use remaining a frontier advantage for the longest.
The most pragmatic position right now (which is also the least satisfying one to post about), is that the future is hybrid rather than either-or. The smart deployment pattern routes high-volume, lower-stakes tasks to local models where the cost savings compound quickly and the quality gap is negligible, while reserving frontier API calls for the work that demands peak reasoning: complex multi-step planning, high-stakes domain-specific analysis, nuanced tool orchestration, and anything where being confidently wrong carries real cost.
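As a sketch of what that routing might look like in practice, the snippet below splits work between a local endpoint and a frontier API. The model names, task categories, and thresholds are hypothetical, chosen only to illustrate the decision points, not a recommendation of specific values.

```python
from dataclasses import dataclass

# Hypothetical endpoints: a local server (e.g. something like Qwen3-Coder behind
# an OpenAI-compatible API) and a frontier hosted model. Illustrative names only.
LOCAL_MODEL = "local/qwen3-coder"
FRONTIER_MODEL = "frontier/large-reasoning-model"

@dataclass
class Task:
    prompt: str
    category: str          # e.g. "summarise", "classify", "plan", "analyse"
    high_stakes: bool      # is being confidently wrong expensive?
    context_tokens: int

def route(task: Task) -> str:
    """Send cheap, well-defined work to the local model; reserve the frontier
    API for high-stakes, long-context, or multi-step reasoning work."""
    if task.high_stakes or task.category in {"plan", "analyse"}:
        return FRONTIER_MODEL
    if task.context_tokens > 32_000:   # long contexts suffer most under 4-bit quantisation
        return FRONTIER_MODEL
    return LOCAL_MODEL

if __name__ == "__main__":
    jobs = [
        Task("Summarise this support ticket", "summarise", False, 1_200),
        Task("Plan a refactor across the codebase", "plan", True, 150_000),
    ]
    for job in jobs:
        print(f"{job.category:10s} -> {route(job)}")
```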
This is approximately what the Menlo Ventures survey data suggests enterprise buyers are doing already, with model API spending more than doubling to $8.4 billion while open-source adoption stabilises around 13% of production workloads. The enterprises that are getting value from local models are not using them as wholesale API replacements; they’re using them as a complementary layer that handles the grunt work while the expensive models handle the hard problems.
There’s also the operational burden that is rarely mentioned in relation to model use. When you run models locally, you effectively become your own ML ops team. Model updates, quantisation format compatibility, prompt template differences across architectures, memory management under load, and testing when new versions drop, all of that falls on you. The API providers handle model improvements, scaling, and infrastructure, and you get a better model every few months without changing a line of code. For a small team that should be spending its time on product rather than infrastructure, that operational overhead has real cost even if it doesn’t show up on an invoice.
The future of AI probably does involve substantially more local compute than we have today. Costs will come down, architectures will improve, hardware will get more capable, and the hybrid model will become standard practice. The question is not who removes the constraints first, it’s who understands which constraints actually matter.
from audiobook-reviews

Like the last two, this is a book I discovered in Tiny Bookshop. I like a good love story, and the game's blurb sounded pretty good.
Actually listening to the book, I found it has a bit too much drama for my taste. Why are all the protagonists together with these absolute garbage people?
The love story itself, though, is well told and charming. Even if some of the thoughts Harriet has toward her crush gave me «Good Intentions» vibes in a way that did not feel appropriate for the book.
I'm also not sure we needed to hear the outcome in quite so many words. The book comes to an epic climax. Stopping there and leaving the rest to the listener's imagination would have been fine, too. It's something we do not get nearly enough of these days. At least in the books I listen to.
I want to take a moment to talk about the letter Harriet writes. In the story she sits down, in the middle of the night no less, and writes the letter in one go; she even refuses to proofread it. Remember, too, that she is a wedding photographer, not an author or a journalist with lots of practice. I am sorry, but that is bullshit. That letter is far too well written. Clearly, these are the carefully crafted words of Mhairi McFarlane and not those of Harriet. Now, I am sure this was necessary: the letter is pretty long, we get to hear all of it, and were it written in a more realistic fashion, that part of the story might be hard to get through. Nonetheless, it shattered my suspension of disbelief.
But it is also an interesting way of doing exposition. I've not come across it in many books before, so fair enough, I guess.
Social media plays a big role in this book. It gets mentioned from the beginning, reminding us that this is a contemporary piece of work.
Anyone can make up a story, paint themselves as a victim and their adversary as the abuser. Online mobs are quick to judge and ruthless in their damnation. They don't wait around to ask if there might be another side to the story.
By making this a central plot point, the book serves as a warning not to believe everything you see online just because it sounds sincere and plausible. A warning that can't be made often enough in these times.
The audio quality is good, as you'd expect from any modern recording. I am, however, not too happy with the performance of Chloe Massey in reading the book.
Yes, the different characters do all get their own voices. But they are not very pronounced and, worse, not very consistent either. It is especially hard to distinguish what Harriet is saying out loud from what she is merely thinking in her head, which sometimes makes conversations hard to follow.
The book might be partly to blame for this — some books are better suited to being made into an audiobook than others.
Overall it's still an enjoyable listen, but it could definitely be better. If you're going to listen to the book, maybe check out this other recording here. It doesn't have as many reviews as the one I listened to, but they are better, particularly concerning the recording.
If you're looking for a romantic story and are not turned off by a bit of drama, then this is definitely for you!
from Decent Project
Quick Look
• Barlow penned his bold and controversial take on the future of the Internet 30 years ago.
• Many of Barlow's predictions about the capability of the Internet to remain free have not panned out, and unforeseen threats have cropped up.
• But it's hard not to feel his central thesis can still be achieved—though perhaps only if you choose it.
Today marks 30 years since John Perry Barlow—co-founder of the Electronic Frontier Foundation—opened his laptop at the 1996 World Economic Forum in Davos and penned an unforgiving declaration of independence for the Internet.
Now, three decades later, as the Internet has come to dominate our lives and is in a crucial period of transition, it's a good time to reflect on Barlow's early vision of the Internet and how it might guide the Internet's future.
There are, no doubt, many parts of his declaration that simply haven't come true, but I can't help thinking that the spirit of Barlow's message can be achieved—at least for those who might be willing to seize it.
A Declaration of the Independence of Cyberspace by John Perry Barlow, February 8, 1996
There are a lot of ways to describe Barlow's A Declaration of the Independence of Cyberspace.
Lofty, inspiring, and courageous might come to mind. But so might pompous, blow-hard, specious and grandiloquent.
When Barlow wrote his declaration, the Internet was a very different place. Most homes, if they had computers at all, put them in “computer rooms.” The idea that every individual would have their own computer—let alone multiple—connected to the Internet at all times wasn't yet a thing. Total Internet users were counted in the millions, rather than billions.
Google had not yet been founded, and social media, cryptocurrency, and generative AI were many years away.
Yet, Barlow made bold declarations about the capabilities of cyberspace to flourish, and governments' inability to contain it:
“You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”
He claims that cyberspace exists without borders and outside jurisdictions:
“Our identities may be distributed across many of your jurisdictions. The only law that all our constituent cultures would generally recognize is the Golden Rule.”
While some of these declarations may be true in spirit, I think most would disagree with Barlow's claim that governments do not possess “any” methods of enforcing their laws on the Internet, or that jurisdictions are rendered meaningless because users are distributed around the globe.
Governments around the world have spent the last three decades working to rein in the Internet and enforce laws on its users. In China, the Great Firewall heavily regulates the country's domestic Internet. Iran recently disconnected itself from the global Internet amid widespread protests.
And in the West, the United States and Europe have been on foolhardy campaigns to institute age verification laws, crack down on VPNs, and backdoor end-to-end encryption on popular messaging services.
The United States has incredible influence to enforce its criminal laws concerning computer crimes well beyond its geographic borders.
Alexandre Cazes of notorious AlphaBay fame was arrested in Thailand, while Kim Dotcom of Megaupload was arrested in New Zealand—both on U.S. warrants. Julian Assange spent seven years in the Ecuadorian Embassy in London to avoid arrest by the U.S. and other western countries, and Edward Snowden fled to Russia in 2013 to avoid prosecution.
Additionally, Barlow's declaration did not predict the consolidation of the Internet and how that has impacted online freedom.
Over the past 30 years we have seen the giants of the Internet rise and take hold: Google, Meta, Apple, Microsoft, X/Twitter, and Amazon.
They operate as proxy governments online, enforcing terms of service and privacy policies that limit expression and pry into every aspect of our lives.
They have disrupted industries in the real world and online, baited our society and culture with “free” services, and sunk their tentacles into our communications, our jobs, our banking, our health, our entertainment, and even our front doors and living rooms.
Digital town squares and convenience have given way to hive-minds and dependence.
In re-reading Barlow's declaration today, it's hard not to question whether there remains the kind of freedom online that Barlow describes.
There are threats to Internet freedom on all sides as society moves into this post-modern era we find ourselves in.
Yet, I can't help but agree with Barlow's claim that the Internet is “an act of nature and it grows itself through our collective actions.”
In Iran, as the government shut down access to the Internet, people started setting up Starlink connections to reach the global Internet; people in China have used VPNs and international SIM cards to circumvent the country's firewall for years; and age restriction laws in the United States are easily avoided with a VPN or TOR browser.
The more restrictions that governments attempt, the more they push people toward alternatives.
The same goes for corporations.
Some have sought refuge from Microsoft's unwanted and invasive AI features by moving to Linux. Others have ditched Google for Proton or Tuta. Others still, have left behind centralized social media for the Fediverse.
Linux desktop usage topped 5% globally in 2025
We have to ask whether Barlow's declaration is really a declaration, or is it more of a plea?
The second sentence of his declaration is a request: “I ask you of the past to leave us alone.”
Unfortunately, I think it's safe to say this plea has not been—and will not be—respected. Governments are not going to step away from the Internet, and corporations are not going to stop maximizing their control and revenue.
Yet, for those who are willing to break free from governments and the centralized control of corporations online, there are still ways to make the best of Barlow's vision a reality:
“We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”
~ Torman
Verify this post: Source | Signature | PGP Key
#privacy #Internet #policy #InternetFreedom
If you enjoyed reading this or found it informative, please consider subscribing in order to receive posts directly to your inbox:
Also feel free to leave a comment here:
from Roscoe's Quick Notes

This afternoon the Indiana University Women's Basketball Team will play their annual Pink Game. Their opponent will be the Purdue Boilermakers who will be traveling down to meet them on the floor of IU's Assembly Hall.
If the Internet doesn't crap out on me, I'll be listening to the radio call of the game streaming from B97 – The Home for IU Women's Basketball.
And the adventure continues.
from ChadNasci.com
Just testing out this platform. Here we go.
from 下川友
I woke up at 6:30 today. Looking outside, snow had piled up, and I felt lucky to see that scene in the clear early-morning air.
I stood on the balcony for about ten minutes, gazing absent-mindedly and taking photos. Lately I've kept up a habit of training my inner abdominal muscles to prevent back pain, and maybe because that has given me some confidence, I feel like I've become more resistant to the cold than before. I didn't let myself shiver, acted a little tough, and went back inside around the point I actually started feeling cold.
Lately I've had slightly more occasions to meet people than usual. I'm the type who arrives about fifteen minutes early, so I buy a drink at a convenience store while I wait. Thinking "I only ever drink this in winter," I always pick Hot Lemon in winter, and every time I drink it I think, "This is sweeter than I expected."
There was a cat on top of a wall, so I enjoyed watching it for a while. After that, by shifting my focus closer and seeing the cat as a blurred outline, I could enjoy it all over again. It's a cat, after all; even without seeing its expression clearly, it's surely making exactly the face I imagine.
Since everyone wears thick clothes in winter, I have a feeling that fewer people can fit on a packed train than in summer, but how much does it actually change? The morning commuter train feels like it's at about 200% capacity, and it seems like some people always fail to get on, yet I never hear of the number of trains changing with the seasons. Which means that even with everyone bundled up, it probably doesn't affect the passenger count all that much.
When you go into a café, there's usually a container of sugar cubes on the table. I take both coffee and tea black, but each shop's sugar-cube container shows its own particular taste, and I can't help observing them. I think, "Maybe I should put one on my own table," but I'd never have occasion to use it. I'd fuss over where to put it, trying it across from me at first, then in the middle, but imagining the future where it gathers dust almost unopened and eventually gets put away at the back of the cupboard made me a little sad.
While I was relaxing at the café, the woman across from me was talking rapidly about a game she's playing at the moment. From the way she used her whole body and waved her hands around as she explained, you could tell she really loves it. When she swung her arm wide, its shadow covered the whole table for an instant.
At another table, a man in his forties was on the phone. After saying, "I see, I see, so you're doing well," he asked, as if to say "now for the real business," "You said you'd enter the contest, and you haven't entered since, have you?" Then he said, "No, I don't mind, but the teacher keeps pressing me, asking when that guy is finally going to enter," which left me wondering what position this go-between uncle actually holds.
When I got home, the size-S knit sweater I'd ordered online had arrived. I usually wear M, but this time I took on the challenge of pulling off an S. When I tried it on, it fit perfectly and suited my build. I thought, "I won the bet," but my everyday posture is so bad that in the mirror my right and left shoulders were clearly at different heights, and my crooked body line stood out. I really like the sweater itself, so I'll wear it for a while. Starting tomorrow, I'll pay attention to my posture for a bit.
from Jujupiter
I usually have six nominees instead of five for this category. It's because movie posters are rectangular rather than square, so to fit them in an Instagram post I needed six 😅 But screw that: the year in movies was just too good, so I have seven entries! Melbourne International Film Festival was amazing, especially when it came to the movies coming from the Cannes selection.

And now, the nominees.
It's What's Inside by Greg Jardin

A sci-fi comedy in which a bunch of friends are given a machine that allows them to swap bodies. It’s funny at first but questions about attraction and social status show up and it becomes hilarious. A great first movie for Greg Jardin.
Red Rooms by Pascal Plante

A young woman is obsessed with a serial killer and attends his trial. This Canadian movie is highly confronting even though no violence is shown, especially because it remains ambivalent about its main character all along. It takes you on a ride but finds a strange way to redeem itself at the end.
Mars Express by Jérémie Périn

In this French animated movie, set in the future on Mars, two agents investigate the disappearance of two students. The animation is great, the world-building is impressive, and the story works really well. It's such a shame that it bombed, because it's a real gem.
Sirāt by Oliver Laxe

In Morocco, a man and his son, searching for his daughter at free parties, decide to follow some partygoers deeper into the desert. This movie doesn't really follow conventions and punches you right in the gut to remind you of some hard truths in life. Strong and beautiful.
It Was Just An Accident by Jafar Panahi

In Iran, a man kidnaps someone he thinks was his torturer in jail, but before he kills him, he decides to check with other victims first. This year's Palme d'Or winner is a drama, but it also has a strong dark sense of humour. Definitely worth a watch.
The Secret Agent by Kleber Mendonça Filho

During the Brazilian military dictatorship, a man tries to leave the country to escape a hit that has been put out on him. It's impossible to pin down the genre of this movie: is it a political thriller, magical realism, or even a horny period drama?! It's all of these at the same time.
A Useful Ghost by Ratchapoom Boonbunchachoke

In Thailand, a woman haunts a vacuum cleaner to reunite with her husband. This is the craziest movie I have seen this year, if not ever. I had high expectations and it did not disappoint, I laughed a lot. Let’s not forget the strong social and political commentary as well.
And the winner is… Well, I was unable to choose between those two very different movies so it’s a tie! The winners are Red Rooms by Pascal Plante and A Useful Ghost by Ratchapoom Boonbunchachoke!
#JujuAwards #MovieOfTheYear #JujuAwards2025 #BestOf2025
from Rippple's Blog

Stay entertained thanks to our Weekly Tracker giving you next week's Anticipated Movies & Shows, Most Watched & Returning Favorites, and Shows Changes & Popular Trailers.
new The Housemaid, +3 The Wrecking Crew, -2 Anaconda, -2 Greenland 2: Migration, -1 Zootopia 2, -3 The Rip, new We Bury the Dead, -2 Predator: Badlands, new Hamnet, -3 Sinners, = Fallout, = A Knight of the Seven Kingdoms, = The Pitt, +1 High Potential, -1 The Rookie, +1 The Night Manager, new Wonder Man, new Bridgerton, new Shrinking, -4 Hijack
Hi, I'm Kevin 👋. I make apps and I love watching movies and TV shows. If you like what I'm doing, you can buy one of my apps, download and subscribe to Rippple for Trakt or just buy me a ko-fi ☕️.
from An Open Letter
I’m so exhausted from all the moving and headaches. We didn’t even have hot water, let alone Internet.
from Robert Galpin
charcoal sketch of dawn cyanotype sky two jackets, boots— broccoli for the chickens
from Talk to Fa
If I have to ask for it, it’s not for me.