from Have A Good Day

Fourth Wing isn’t exactly a discovery: it’s a bestseller that’s being adapted into a TV show. I came across it while taking shelter at the Union Square Barnes & Noble during a heavy thunderstorm and was intrigued enough to try a sample. At first, it read like a cross between Harry Potter and The Hunger Games. The latter I didn't finish because I found reading about teenagers killing each other for survival too depressing. But ChatGPT assured me that Fourth Wing is different: people are dying, but so far, only in the background. I also like that, while the story is classic high fantasy set in a medieval world with magic and mythical beasts, it is written in first-person present tense like a modern romance. The characters speak like 20-somethings in the 21st century, expletives included, and have names you might encounter today. The protagonist, Violet, faces an uphill battle for her life because of her fragile build. Yet she is clever, resourceful, and unwavering in her self-confidence despite constant threats of death. Such a mix of traits might be rare in real life, since high intelligence often comes with self-doubt, but it suits the story well. After all, who would want to read about a heroine who’s always scared for her life?

 
Read more...

from Human in the Loop

The code is already out there. Somewhere in the world right now, someone is downloading Llama 3.1, Meta's 405-billion-parameter AI model, fine-tuning it for purposes Mark Zuckerberg never imagined, and deploying it in ways no safety team anticipated. Maybe they're building a medical diagnostic tool that could save lives in rural clinics across sub-Saharan Africa, where access to radiologists is scarce and expertise is concentrated in distant urban centres. Maybe they're generating deepfakes for a disinformation campaign designed to undermine democratic elections. The model doesn't care. It can't. That's the whole point of open source.

This is the paradox we've built: the same transparency that enables innovation also enables exploitation. The democratisation of artificial intelligence, once a distant dream championed by idealists who remembered when software was freely shared amongst researchers, has arrived with startling speed. And it's brought questions we're not ready to answer.

When EleutherAI released GPT-Neo in March 2021, it represented something profound. Founded by Connor Leahy, Leo Gao, and Sid Black in July 2020, this decentralised grassroots collective accomplished what seemed impossible: they replicated OpenAI's GPT-3 and made it freely available to anyone. The 2.7 billion parameter model, trained on their curated dataset called The Pile, was the largest open-source GPT-3-style language model in the world. Released under the Apache 2.0 licence, it fuelled an entirely new wave of startups and won UNESCO's Netexplo Global Innovation Award in 2021.

Four years later, that rebel spirit has become mainstream. Meta's Llama 3.1 405B has achieved what Zuckerberg calls “frontier-level” status, rivalling the most advanced systems from OpenAI, Google, and Anthropic. Mistral AI's Large 2 model matches or surpasses top-tier systems, particularly in multilingual applications. France has invested in Mistral AI, the UAE in Falcon, making sovereign AI capability a matter of national strategy. The democratisation has arrived, and it's reshaping the global AI landscape faster than anyone anticipated.

But here's the uncomfortable truth we need to reckon with: the open weights that empower researchers to fine-tune models for medical breakthroughs can just as easily be weaponised for misinformation campaigns, harassment bots, or deepfake generation. Unlike commercial APIs with content filters and usage monitoring, most open models have no embedded safety protocols. Every advance in accessibility is simultaneously an advance in potential harm.

How do we preserve the democratic promise whilst preventing the ethical pitfalls? How do we sustain projects financially when the code is free? How do we build trust and accountability in communities that intentionally resist centralised control? And most fundamentally, how do we balance innovation with responsibility when the technology itself is designed to be ungovernable?

The Democratic Revolution Is Already Here

The numbers tell a compelling story. Hugging Face, the de facto repository for open AI models, hosts over 250,000 model cards. The Linux Foundation and Apache Software Foundation have refined open-source governance for decades, proving that community-driven development can create reliable, secure infrastructure that powers the internet itself. From the Apache web server handling millions of requests daily to the Linux kernel running on billions of devices, open-source software has already demonstrated that collaborative development can match or exceed proprietary alternatives.

The case for open-source AI rests on several pillars. First, transparency: public model architectures, training data, and evaluation methodologies enable researchers to scrutinise systems for bias, security vulnerabilities, and performance limitations. When researchers at Stanford University wanted to understand bias in large language models, they could examine open models like BLOOM in ways impossible with closed systems. Second, sovereignty: organisations can train, fine-tune, and distil their own models without vendor lock-in, maintaining control over their data and infrastructure. This matters profoundly for governments, healthcare providers, and financial institutions handling sensitive information. Third, economic efficiency: Llama 3.1 405B runs at roughly 50% the cost of closed alternatives like GPT-4o, a calculation that matters enormously to startups operating on limited budgets and researchers in developing countries. Fourth, safety through scrutiny: open systems benefit from community security audits that identify vulnerabilities closed-source vendors miss, following the principle that many eyes make bugs shallow.

Meta's approach illustrates why some companies embrace openness. As Zuckerberg explained in July 2024, “selling access to AI models isn't our business model.” Meta benefits from ecosystem innovation without undermining revenue, a fundamental distinction from closed-model providers whose business models depend on API access fees. The company can leverage community contributions to improve Llama whilst maintaining its core business of advertising and social networking. It's a strategic calculation, not altruism, but the result is powerful AI models available to anyone with the technical skills and computational resources to deploy them.

The democratisation extends beyond tech giants. BigScience, coordinated by Hugging Face using funding from the French government, assembled over 1,000 volunteer researchers from 60 countries to create BLOOM, a multilingual language model designed to be maximally transparent. Unlike OpenAI's GPT-3 or Google's LaMDA, the BigScience team shared details about training data, development challenges, and evaluation methodology, embedding ethical considerations from inception rather than treating them as afterthoughts. The project trained its 176 billion parameter model on the Jean Zay supercomputer near Paris, demonstrating that open collaboration could produce frontier-scale models.

This collaborative ethos has produced tangible results beyond just model releases. EleutherAI's work won InfoWorld's Best of Open Source Software Award in 2021 and 2022, recognition from an industry publication that understands the value of sustainable open development. Stable Diffusion makes its source code and pretrained weights available for both commercial and non-commercial use under a permissive licence, spawning an entire ecosystem of image generation tools and creative applications. These models run on consumer hardware, not just enterprise data centres, genuinely democratising access. A researcher in Lagos can use the same AI capabilities as an engineer in Silicon Valley, provided they have the technical skills and hardware, collapsing geographic barriers that have historically concentrated AI development in a handful of wealthy nations.

The Shadow Side of Openness

Yet accessibility cuts both ways, and the knife is sharp. The same models powering medical research into rare diseases can generate child sexual abuse material when deliberately misused. The same weights enabling multilingual translation services for refugee organisations can create deepfake political content that threatens democratic processes. The same transparency facilitating academic study of model behaviour can provide blueprints for sophisticated cyberattacks.

The evidence of harm is mounting, and it's not hypothetical. In March 2024, thousands of companies including Uber, Amazon, and OpenAI using the Ray AI framework were exposed to cyber attackers in a campaign dubbed ShadowRay. The vulnerability, CVE-2023-48022, allowed attackers to compromise network credentials, steal tokens for accessing OpenAI, Hugging Face, Stripe, and Azure accounts, and install cryptocurrency miners on enterprise infrastructure. The breach had been active since at least September 2023, possibly longer, demonstrating how open AI infrastructure can become an attack vector when security isn't prioritised.

Researchers have documented significant increases in AI-created child sexual abuse material and non-consensual intimate imagery since open generative models emerged. Whilst closed models can also be exploited through careful prompt engineering, studies show most harmful content originates from open foundation models where safety alignments can be easily bypassed or removed entirely through fine-tuning, a process that requires modest technical expertise and computational resources.

The biological research community faces particularly acute dilemmas. In May 2024, the US Office of Science and Technology Policy recommended oversight of dual-use computational models that could enable the design of novel biological agents or enhanced pandemic pathogens. AI models trained on genomic and protein sequence data could accelerate legitimate vaccine development or illegitimate bioweapon engineering with equal facility. The difference lies entirely in user intent, which no model architecture can detect or control. A model that helps design therapeutic proteins can just as easily design toxins; the mathematics don't distinguish between beneficial and harmful applications.

President Biden's Executive Order 14110 in October 2023 directed agencies including NIST, NTIA, and NSF to develop AI security guidelines and assess risks from open models. The NTIA's July 2024 report examined whether open-weight models should face additional restrictions but concluded that current evidence was insufficient to justify broad limitations, reflecting genuine regulatory uncertainty: how do you regulate something designed to resist regulation without destroying the very openness that makes it valuable? The agency called for active monitoring but refrained from mandating restrictions, a position that satisfied neither AI safety advocates calling for stronger controls nor open-source advocates worried about regulatory overreach.

Technical challenges compound governance ones. Open-source datasets may contain mislabelled, redundant, or outdated data, as well as biased or discriminatory content reflecting the prejudices present in their source materials. Models trained on such data can produce discriminatory outputs, perpetuate human biases, and prove more susceptible to manipulation when anyone can retrain or fine-tune models using datasets of their choosing, including datasets deliberately crafted to introduce specific biases or capabilities.

Security researchers have identified multiple attack vectors that pose particular risks for open models. Model inversion allows attackers to reconstruct training data from model outputs, potentially exposing sensitive information used during training. Membership inference determines whether specific data was included in training sets, which could violate privacy regulations or reveal confidential information. Data leakage extracts sensitive information embedded in model weights, a risk that increases when weights are fully public. Backdoor attacks embed malicious functionality that activates under specific conditions, functioning like trojan horses hidden in the model architecture itself.
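To make the membership-inference idea above concrete, here is a minimal, self-contained sketch, not real attack tooling: a toy "model" that memorises its training pairs (and so incurs near-zero error on them), and an attacker who thresholds per-example loss to guess whether a point was in the training set. The model, data, and threshold are all illustrative assumptions.

```python
import random

random.seed(0)

# Toy "model": memorises (x, y) training pairs exactly; for unseen x it
# falls back to a crude global mean. This overfitting is precisely what
# membership inference exploits.
def train(pairs):
    table = dict(pairs)
    mean_y = sum(y for _, y in pairs) / len(pairs)
    def predict(x):
        return table.get(x, mean_y)
    return predict

def loss(model, x, y):
    return (model(x) - y) ** 2

train_set = [(i, 2 * i + random.gauss(0, 0.1)) for i in range(10)]
test_set = [(i, 2 * i + random.gauss(0, 0.1)) for i in range(10, 20)]
model = train(train_set)

# Attacker's rule: a per-example loss below some threshold suggests the
# example was a training member. The threshold is an assumed value.
THRESHOLD = 0.5
def infer_membership(x, y):
    return loss(model, x, y) < THRESHOLD

member_hits = sum(infer_membership(x, y) for x, y in train_set)
nonmember_hits = sum(infer_membership(x, y) for x, y in test_set)
print(member_hits, nonmember_hits)  # memorised points are flagged; unseen ones are not
```

Real attacks target statistical rather than verbatim memorisation, but the signal is the same: models behave measurably differently on data they were trained on, and public weights let an attacker measure that difference at leisure.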

Adversarial training, differential privacy, and model sanitisation can mitigate these risks, but achieving balance between transparency and security remains elusive. When model weights are fully public, attackers have unlimited time to probe for vulnerabilities that defenders must protect against in advance, an inherently asymmetric battle that favours attackers.

Red teaming has emerged as a critical safety practice, helping discover novel risks and stress-test mitigations before models reach production deployment. Yet red teaming itself creates information hazards. Publicly sharing outcomes promotes transparency and facilitates discussions about reducing potential harms, but may inadvertently provide adversaries with blueprints for exploitation. Who decides what gets disclosed and when? How do we balance the public's right to know about AI risks with the danger of weaponising that knowledge? These questions lack clear answers.

The Exploitation Economy

Beyond safety concerns lies a more insidious challenge: exploitation of the developers who build open-source infrastructure. The economics are brutal. Ninety-six per cent of demand-side value in open-source software is created by only five per cent of developers, according to a Harvard Business School study analysing actual usage data. This extreme concentration means critical infrastructure that underpins modern AI development depends on a tiny group of maintainers, many receiving little or no sustained financial support for work that generates billions in downstream value.

The funding crisis is well-documented but persistently unsolved. Securing funding for new projects is relatively easy; venture capital loves funding shiny new things that might become the next breakthrough. Raising funding for maintenance, the unglamorous work of fixing bugs, patching security vulnerabilities, and updating dependencies, is virtually impossible, even though this is where most work happens and where failures have catastrophic consequences. The XZ Utils backdoor incident in 2024 demonstrated how a single overworked maintainer's compromise could threaten the entire Linux ecosystem.

Without proper funding, maintainers experience burnout. They're expected to donate evenings and weekends to maintain code that billion-dollar companies use to generate profit, providing free labour that subsidises some of the world's most valuable corporations. When maintainers burn out and projects become neglected, security suffers, software quality degrades, and everyone who depends on that infrastructure pays the price through increased vulnerabilities and decreased reliability.

The free rider problem exacerbates this structural imbalance: companies use open-source software extensively without contributing back through code contributions, funding, or other support. A small number of organisations absorb infrastructure costs whilst the overwhelming majority of large-scale users, including commercial entities generating significant economic value, consume without contributing. The AI Incident Database, a project of the Responsible AI Collaborative, has collected more than 1,200 reports of intelligent systems causing safety, fairness, or other problems. Such incident data reveals a troubling pattern: when projects lack resources, security suffers, and incidents multiply.

Some organisations are attempting solutions. Sentry's OSS Pledge calls for companies to pay a minimum of $2,000 per year per full-time equivalent developer on their staff to open-source maintainers of their choosing. It's a start, though $2,000 barely scratches the surface of value extracted when companies build multi-million-pound businesses atop free infrastructure. The Open Source Security Foundation emphasises that open infrastructure is not free, though we've built an economy that pretends it is. We're asking volunteers to subsidise the profits of some of the world's wealthiest companies, a model that's financially unsustainable and ethically questionable.

Governance Models That Actually Work

If the challenges are formidable, the solutions are emerging, and some are already working at scale. The key lies in recognising that governance isn't about control, it's about coordination. The Apache Software Foundation and Linux Foundation have spent decades refining models that balance openness with accountability, and their experiences offer crucial lessons for the AI era.

The Apache Software Foundation operates on two core principles: “community over code” and meritocracy. Without a diverse and healthy team of contributors, there is no project, regardless of code quality. There is no governance by fiat and no way to simply buy influence into projects. These principles create organisational resilience that survives individual departures and corporate priority shifts. When individual contributors leave, the community continues. When corporate sponsors change priorities, the project persists because governance is distributed rather than concentrated.

The Linux Foundation takes a complementary approach, leveraging best practices to create sustainable models for open collaboration that balance diverse stakeholder interests. Both foundations provide governance frameworks, legal support, and financial stability, enabling developers to focus on innovation rather than fundraising. They act as intermediaries between individual contributors, corporate sponsors, and grant organisations, ensuring financial sustainability through diversified funding that doesn't create vendor capture or undue influence from any single sponsor.

For AI-specific governance, the FINOS AI Governance Framework, released in 2024, provides a vendor-agnostic set of risks and controls that financial services institutions can integrate into existing models. It outlines 15 risks and 15 controls specifically tailored for AI systems leveraging large language model paradigms. Global financial institutions including BMO, Citi, Morgan Stanley, RBC, and Bank of America are working with major cloud providers like Microsoft, Google Cloud, and AWS to develop baseline AI controls that can be shared across the industry. This collaborative approach represents a significant shift in thinking: rather than each institution independently developing controls and potentially missing risks, they're pooling expertise to create shared standards that raise the floor for everyone whilst allowing institutions to add organisation-specific requirements.

The EU's AI Act, which entered into force on 1 August 2024 as the world's first comprehensive AI regulation, explicitly recognises the value of open source for research, innovation, and economic growth. It creates certain exemptions for providers of AI systems, general-purpose AI models, and tools released under free and open-source licences. However, these exemptions are not blank cheques. Providers of such models with systemic risks, those capable of causing serious harm at scale, face full compliance requirements including transparency obligations, risk assessments, and incident reporting.

According to the Open Source Initiative's Open Source AI Definition, a model qualifies as genuinely open source only if all necessary components are available: sufficiently detailed information about the training data, the code, and the model parameters including weights. This sets a clear standard preventing companies from claiming “open source” status whilst withholding critical components that would enable true reproduction and modification. Licensors may include safety-oriented terms that reasonably restrict usage where model use could pose significant risk to public interests like health, security, and safety, balancing openness with responsibility without completely closing the system.

Building Trust Through Transparency

Trust in open-source AI communities rests on documentation, verification, and accountability mechanisms that invite broad participation. Hugging Face has become a case study in how platforms can foster trust at scale, though results are mixed and ongoing work remains necessary.

Model Cards, originally proposed by Margaret Mitchell and colleagues in 2018, provide structured documentation of model capabilities, fairness considerations, and ethical implications. Inspired by Data Statements for Natural Language Processing and Datasheets for Datasets (Gebru et al., 2018), Model Cards encourage transparent model reporting that goes beyond technical specifications to address social impacts, use case limitations, and known biases.

A 2024 study analysed 32,111 model documentation pages on Hugging Face, examining what information model cards actually contain. The findings were sobering: whilst developers are encouraged to produce model cards, quality and completeness vary dramatically. Many cards contain minimal information, failing to document training data sources, known limitations, or potential biases. The platform hosts over 250,000 model cards, but quantity doesn't equal quality. Without enforcement mechanisms or standardised templates, documentation quality depends entirely on individual developer diligence and expertise.
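One way a platform or CI pipeline could enforce a documentation floor is a mechanical completeness check over model card sections. The sketch below illustrates the idea; the required section names are assumptions for illustration, not Hugging Face's actual schema.

```python
# Hypothetical required sections; a real platform would define its own schema.
REQUIRED_SECTIONS = [
    "intended_use",
    "training_data",
    "evaluation",
    "limitations",
    "known_biases",
]

def missing_sections(card: dict) -> list:
    """Return required sections that are absent or effectively empty."""
    return [
        section for section in REQUIRED_SECTIONS
        if not str(card.get(section, "")).strip()
    ]

# A card like many the study describes: some fields filled, others blank.
card = {
    "intended_use": "Sentiment analysis of English product reviews.",
    "training_data": "Public review corpus, 2015-2020.",
    "evaluation": "",  # left blank by the author
}
print(missing_sections(card))  # ['evaluation', 'limitations', 'known_biases']
```

Even a check this simple would turn "developers are encouraged to document" into "undocumented models are flagged before publication", which is the difference between a norm and a standard.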

Hugging Face's approach to ethical openness combines institutional policies such as documentation requirements, technical safeguards such as gating access to potentially dangerous models behind age verification and usage agreements, and community safeguards such as moderation and reporting mechanisms. This multi-layered strategy recognises that no single mechanism suffices. Trust requires defence in depth, with multiple overlapping controls that provide resilience when individual controls fail.

Accountability mechanisms invite participation from the broadest possible set of contributors: developers working directly on the technology, multidisciplinary research communities bringing diverse perspectives, advocacy organisations representing affected populations, policymakers shaping regulatory frameworks, and journalists providing public oversight. Critically, accountability focuses on all stages of the machine learning development process, from data collection through deployment, in ways impossible to fully predict in advance because societal impacts emerge from complex interactions between technical capabilities and social contexts.

By making LightEval open source, Hugging Face encourages greater accountability in AI evaluation, something sorely needed as companies increasingly rely on AI for high-stakes decisions affecting human welfare. LightEval provides tools for assessing model performance across diverse benchmarks, enabling independent verification of capability claims rather than taking vendors' marketing materials at face value, a crucial check on commercial incentives to overstate performance.

The AI Incident Database, launched under the Partnership on AI, demonstrates another trust-building approach through systematic transparency. Inspired by the incident databases that drove dramatic safety improvements in aviation and computer security, it collects incidents where intelligent systems have caused safety, fairness, or other problems. This creates organisational memory, enabling the community to learn from failures rather than repeat them, much as aviation made flying safer than driving through decades of systematic incident analysis.

The Innovation-Responsibility Tightrope

Balancing innovation with responsibility requires acknowledging an uncomfortable reality: perfect safety is impossible, and pursuing it would eliminate the benefits of openness. The question is not whether to accept risk, but how much risk and of what kinds we're willing to tolerate in exchange for what benefits, and who gets to make those decisions when risks and benefits distribute unevenly across populations.

Red teaming has emerged as essential practice in assessing possible risks of AI models and systems, discovering novel risks through adversarial testing, stress-testing gaps in existing mitigations, and enhancing public trust through demonstrated commitment to safety. Microsoft's red team has experience tackling risks across system types, including Copilot, models embedded in systems, and open-source models, developing expertise that transfers across contexts and enables systematic risk assessment.

However, red teaming creates an inherent tension between transparency and security. Publishing findings promotes accountability, but it can also hand adversaries blueprints for exploitation, particularly for open models, where users can probe for vulnerabilities indefinitely without the rate limits and usage monitoring that constrain attacks on closed systems.

Safe harbour proposals attempt to resolve this tension by protecting good-faith security research from legal liability. Legal safe harbours would safeguard certain research from legal liability under laws like the Computer Fraud and Abuse Act, mitigating the deterrent of strict terms of service that currently discourage security research. Technical safe harbours would limit practical barriers to safety research by clarifying that researchers won't be penalised for good-faith security testing. OpenAI, Google, Anthropic, and Meta have implemented bug bounties and safe harbours, though scope and effectiveness vary considerably across companies, with some offering robust protections and others providing merely symbolic gestures.

The broader challenge is that deployers of open models will likely face increasing liability questions regarding downstream harms as AI systems become more capable and deployment more widespread. Current legal frameworks were designed for traditional software that implements predictable algorithms, not AI systems that generate novel outputs based on patterns learned from training data. If a company fine-tunes an open model and that model produces harmful content, who bears responsibility: the original model provider who created the base model, the company that fine-tuned it for specific applications, or the end user who deployed it and benefited from its outputs? These questions remain largely unresolved. The resulting legal uncertainty could stifle innovation through excessive caution, or enable harm through inadequate accountability, depending on how courts eventually interpret liability principles developed for different technologies.

The industry is experimenting with technical mitigations to make open models safer by default. Adversarial training teaches models to resist attacks by training on adversarial examples that attempt to break the model. Differential privacy adds calibrated noise to prevent reconstruction of individual data points from model outputs or weights. Model sanitisation attempts to remove backdoors and malicious functionality embedded during training or fine-tuning. These techniques can effectively mitigate some risks, though achieving balance between transparency and security remains challenging because each protection adds complexity, computational overhead, and potential performance degradation. When model weights are public, attackers have unlimited time and resources to probe for vulnerabilities whilst defenders must anticipate every possible attack vector, creating an asymmetric battle that structurally favours attackers.
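The differential-privacy mitigation mentioned above can be sketched in a few lines, DP-SGD-style: clip each example's gradient to bound any single data point's influence, then add calibrated Gaussian noise to the aggregate. This is a toy illustration under assumed parameters; the clip norm and noise scale here are not calibrated to any formal privacy budget.

```python
import math
import random

random.seed(42)

def clip(grad, max_norm):
    """Scale a per-example gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        return [g * max_norm / norm for g in grad]
    return grad

def dp_average(grads, max_norm=1.0, noise_scale=0.5):
    """Average clipped per-example gradients, then add Gaussian noise.

    Clipping bounds each example's contribution; the noise masks what
    remains, so individual data points cannot be reconstructed from the
    published update. Parameters here are illustrative assumptions.
    """
    clipped = [clip(g, max_norm) for g in grads]
    dim = len(grads[0])
    avg = [sum(g[i] for g in clipped) / len(grads) for i in range(dim)]
    return [a + random.gauss(0, noise_scale * max_norm / len(grads)) for a in avg]

per_example_grads = [[3.0, 4.0], [0.1, -0.2], [-5.0, 12.0]]
noisy_update = dp_average(per_example_grads)
print(noisy_update)  # a 2-dimensional update; exact values depend on the noise
```

The trade-off named in the text shows up directly in these parameters: tighter clipping and larger noise mean stronger privacy and worse model quality, and nothing in the mathematics chooses the balance for you.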

The Path Forward

The path forward requires action across multiple dimensions simultaneously. No single intervention will suffice; systemic change demands systemic solutions that address finance, governance, transparency, safety, education, and international coordination together rather than piecemeal.

Financial sustainability must become a priority embedded in how we think about open-source AI, not an afterthought addressed only when critical projects fail. Organisations extracting value from open-source AI infrastructure must contribute proportionally through models more sophisticated than voluntary donations, perhaps tied to revenue or usage metrics that capture actual value extraction.

Governance frameworks must be adopted and enforced across projects and institutions, balancing regulatory clarity with open-source exemptions that preserve innovation incentives. However, governance cannot rely solely on regulation, which is inherently reactive and often technically uninformed. Community norms matter enormously. The Apache Software Foundation's “community over code” principle and meritocratic governance provide proven templates tested over decades. BigScience's approach of embedding ethics from inception shows how collaborative projects can build responsibility into their DNA rather than bolting it on later when cultural patterns are already established.

Documentation and transparency tools must become universal and standardised. Model Cards should be mandatory for any publicly released model, with standardised templates ensuring completeness and comparability. Dataset documentation, following the Datasheets for Datasets framework, should detail data sources, collection methodologies, known biases, and limitations in ways that enable informed decisions about appropriate use cases and surface potential misuse risks.

The AI Incident Database and AIAAIC Repository demonstrate the value of systematic incident tracking that creates organisational memory. These resources should be expanded with increased funding, better integration with development workflows, and wider consultation during model development. Aviation achieved dramatic safety improvements through systematic incident analysis that treated every failure as a learning opportunity; AI can learn from this precedent if we commit to applying the lessons rigorously rather than treating incidents as isolated embarrassments to be minimised.

Responsible disclosure protocols must be standardised across the ecosystem to balance transparency with security. The security community has decades of experience with coordinated vulnerability disclosure; AI must adopt similar frameworks with clear timelines, standardised severity ratings, and mechanisms for coordinating patches across ecosystems that ensure vulnerabilities get fixed before public disclosure amplifies exploitation risks.

Red teaming must become more sophisticated and widespread, extending beyond flagship models from major companies to encompass the long tail of open-source models fine-tuned for specific applications where risks may be concentrated. Industry should develop shared red teaming resources that smaller projects can access, pooling expertise and reducing costs through collaboration whilst raising baseline safety standards.

Education and capacity building must reach beyond technical communities to include policymakers, journalists, civil society organisations, and the public. Current discourse often presents false choices between completely open and completely closed systems, missing the rich spectrum of governance options in between that might balance competing values more effectively. Universities should integrate responsible AI development into computer science curricula, treating ethics and safety as core competencies rather than optional additions relegated to single elective courses.

International coordination must improve substantially. AI systems don't respect borders, and neither do their risks. The EU AI Act, US executive orders, and national strategies from France, UAE, and others represent positive steps toward governance, but lack of coordination creates regulatory fragmentation that both enables regulatory arbitrage by companies choosing favourable jurisdictions and imposes unnecessary compliance burdens through incompatible requirements. International bodies including the OECD, UNESCO, and Partnership on AI should facilitate harmonisation where possible whilst respecting legitimate differences in values and priorities that reflect diverse cultural contexts.

The Paradox We Must Learn to Live With

Open-source AI presents an enduring paradox: the same qualities that make it democratising also make it dangerous, the same transparency that enables accountability also enables exploitation, the same accessibility that empowers researchers also empowers bad actors. There is no resolution to this paradox, only ongoing management of competing tensions that will never fully resolve because they're inherent to the technology's nature rather than temporary bugs to be fixed.

The history of technology offers perspective and, perhaps, modest comfort. The printing press democratised knowledge and enabled propaganda. The internet connected the world and created new vectors for crime. Nuclear energy powers cities and threatens civilisation. In each case, societies learned, imperfectly and incompletely, to capture benefits whilst mitigating harms through governance, norms, and technical safeguards. The process was messy, uneven, and never complete. We're still figuring out how to govern the internet, centuries after learning to manage printing presses.

Open-source AI requires similar ongoing effort, with the added challenge that the technology evolves faster than our governance mechanisms can adapt. Success looks not like perfect safety or unlimited freedom, but like resilient systems that bend without breaking under stress, governance that adapts without ossifying into bureaucratic rigidity, and communities that self-correct without fragmenting into hostile factions.

The stakes are genuinely high. AI systems will increasingly mediate access to information, opportunities, and resources in ways that shape life outcomes. If these systems remain concentrated in a few organisations, power concentrates accordingly, potentially to a degree unprecedented in human history, with a handful of companies controlling fundamental infrastructure for human communication, commerce, and knowledge access. Open-source AI represents the best chance to distribute that power more broadly, to enable scrutiny of how systems work, and to allow diverse communities to build solutions suited to their specific contexts and values rather than one-size-fits-all systems designed for Western markets.

But that democratic promise depends on getting governance right. It depends on sustainable funding models so critical infrastructure doesn't depend on unpaid volunteer labour from people who can afford to work for free, typically those with economic privilege that's unevenly distributed globally. It depends on transparency mechanisms that enable accountability without enabling exploitation. It depends on safety practices that protect against foreseeable harms without stifling innovation through excessive caution. It depends on international cooperation that harmonises approaches without imposing homogeneity that erases valuable diversity in values and priorities reflecting different cultural contexts.

Most fundamentally, it depends on recognising that openness is not an end in itself, but a means to distributing power, enabling innovation, and promoting accountability. When openness serves those ends, it should be defended vigorously against attempts to concentrate power through artificial scarcity. When openness enables harm, it must be constrained thoughtfully rather than reflexively through careful analysis of which harms matter most and which interventions actually reduce those harms without creating worse problems.

The open-source AI movement has dismantled traditional barriers with remarkable speed, achieving in a few years what might have taken decades under previous technological paradigms. Now comes the harder work: building the governance, funding, trust, and accountability mechanisms to ensure that democratisation fulfils its promise rather than falling into its pitfalls. The tools exist, from Model Cards to incident databases, from foundation governance to regulatory frameworks. What's required now is the collective will to deploy them effectively, the wisdom to balance competing values without pretending conflicts don't exist, and the humility to learn from inevitable mistakes rather than defending failures.

The paradox cannot be resolved. But it can be navigated with skill, care, and constant attention to how power distributes and whose interests get served. Whether we navigate it well will determine whether AI becomes genuinely democratising or just differently concentrated, whether power distributes more broadly or reconcentrates in new formations that replicate old hierarchies. The outcome is not yet determined, and that uncertainty is itself a form of opportunity. There's still time to get this right, but the window won't stay open indefinitely as systems become more entrenched and harder to change.


Sources and References

Open Source AI Models and Democratisation:

  1. Leahy, Connor; Gao, Leo; Black, Sid (EleutherAI). “GPT-Neo and GPT-J Models.” GitHub and Hugging Face, 2020-2021. Available at: https://github.com/EleutherAI/gpt-neo and https://huggingface.co/EleutherAI

  2. Zuckerberg, Mark. “Open Source AI Is the Path Forward.” Meta Newsroom, July 2024. Available at: https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/

  3. VentureBeat. “Silicon Valley shaken as open-source AI models Llama 3.1 and Mistral Large 2 match industry leaders.” July 2024.

  4. BigScience Workshop. “BLOOM: A 176B-Parameter Open-Access Multilingual Language Model.” Hugging Face, 2022. Available at: https://huggingface.co/bigscience/bloom

  5. MIT Technology Review. “BLOOM: Inside the radical new project to democratise AI.” 12 July 2022.

Ethical Challenges and Security Risks:

  1. National Telecommunications and Information Administration (NTIA). “Dual-Use Foundation Models with Widely Available Model Weights.” US Department of Commerce, July 2024.

  2. R Street Institute. “Mapping the Open-Source AI Debate: Cybersecurity Implications and Policy Priorities.” 2024.

  3. MDPI Electronics. “Open-Source Artificial Intelligence Privacy and Security: A Review.” Electronics 2024, 13(12), 311.

  4. NIST. “Managing Misuse Risk for Dual-Use Foundation Models.” AI 800-1 Initial Public Draft, 2024.

  5. PLOS Computational Biology. “Dual-use capabilities of concern of biological AI models.” 2024.

  6. Oligo Security. “ShadowRay: First Known Attack Campaign Targeting AI Workloads Exploited In The Wild.” March 2024.

Governance and Regulatory Frameworks:

  1. European Union. “Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act).” Entered into force 1 August 2024.

  2. FINOS (Fintech Open Source Foundation). “AI Governance Framework.” Released 2024. Available at: https://air-governance-framework.finos.org/

  3. Apache Software Foundation. “The Apache Way.” Available at: https://www.apache.org/

  4. Linux Foundation. “Open Source Best Practices and Governance.” Available at: https://www.linuxfoundation.org/

  5. Hugging Face. “AI Policy: Response to the U.S. NTIA's Request for Comment on AI Accountability.” 2024.

Financial Sustainability:

  1. Hoffmann, Manuel; Nagle, Frank; Zhou, Yanuo. “The Value of Open Source Software.” Harvard Business School Working Paper 24-038, 2024.

  2. Open Sauced. “The Hidden Cost of Free: Why Open Source Sustainability Matters.” 2024.

  3. Open Source Security Foundation. “Open Infrastructure is Not Free: A Joint Statement on Sustainable Stewardship.” 23 September 2025.

  4. The Turing Way. “Sustainability of Open Source Projects.”

  5. PMC. “Open-source Software Sustainability Models: Initial White Paper From the Informatics Technology for Cancer Research Sustainability and Industry Partnership Working Group.”

Trust and Accountability Mechanisms:

  1. Mitchell, Margaret; et al. “Model Cards for Model Reporting.” Proceedings of the Conference on Fairness, Accountability, and Transparency, 2018.

  2. Gebru, Timnit; et al. “Datasheets for Datasets.” arXiv, 2018.

  3. Hugging Face. “Model Card Guidebook.” Authored by Ozoani, Ezi; Gerchick, Marissa; Mitchell, Margaret, 2022.

  4. arXiv. “What's documented in AI? Systematic Analysis of 32K AI Model Cards.” February 2024.

  5. VentureBeat. “LightEval: Hugging Face's open-source solution to AI's accountability problem.” 2024.

AI Safety and Red Teaming:

  1. Partnership on AI. “When AI Systems Fail: Introducing the AI Incident Database.” Available at: https://partnershiponai.org/aiincidentdatabase/

  2. Responsible AI Collaborative. “AI Incident Database.” Available at: https://incidentdatabase.ai/

  3. AIAAIC Repository. “AI, Algorithmic, and Automation Incidents and Controversies.” Launched 2019.

  4. OpenAI. “OpenAI's Approach to External Red Teaming for AI Models and Systems.” arXiv, March 2025.

  5. Microsoft. “Microsoft AI Red Team.” Available at: https://learn.microsoft.com/en-us/security/ai-red-team/

  6. Knight First Amendment Institute. “A Safe Harbor for AI Evaluation and Red Teaming.” arXiv, March 2024.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 

from POTUSRoaster

Thanksgiving is the most important meal on the calendar for Americans. Many families save and prepare for it more than a month in advance, cooking and freezing special foods reserved just for this one meal. In many families, this meal and the football that follows are more important than the Black Friday that comes right after.

So, what does POTUS do for those families that need assistance? He cuts their support in half. Not because there is no money but because he wants them to suffer. POTUS seems more cruel than usual. He wants to hurt those who might not be fully in agreement with what he is doing to the nation. Then he wants to blame it all on the other party. He knows that his followers live in a bubble with no outside access, and therefore he feels he will get away with anything he does because his followers will listen only to him and he can tell big lies.

With no food there is no meal and Thanksgiving is nothing if not a meal. POTUS will have his meal, probably in Florida, and he doesn’t care if you have yours. Thanks POTUS!!

POTUS Roaster

Thanks for reading my posts. If you want to see the rest of them, please go to write.as/potusroaster/archive/

To email us, send it to potusroaster@gmail.com

Please tell your family, friends and neighbors about the posts.

 

from Roscoe's Story

In Summary: * An acceptable Monday, this. Got the day's main chore (my weekly laundry) all done. Though unable to find a college football game tonight, I did find a college basketball game with a live radio feed. And with an early enough start time that I should be able to listen to the whole game before my brain shuts down for the night.

Prayers, etc.: * My daily prayers.

Health Metrics: * bw= 221.90 lbs. * bp= 131/74 (67)

Exercise: * kegel pelvic floor exercise, half squats, calf raises, wall push-ups

Diet: * 06:25 – rice cake * 09:30 – 8 hot dogs * 09:50 – HEB bakery Halloween pumpkin cookies * 16:10 – fried pork * 17:10 – coconut rice cake

Activities, Chores, etc.: * 05:50 – bank accounts activity monitored * 06:00 – read, pray, listen to news reports from various sources * 10:30 – start my weekly laundry * 12:00 – read, pray, listen to news reports from various sources * 15:10 – listening to relaxing music while folding laundry * 17:30 – listening to the radio call of a college basketball game, IUPUI Jaguars at Ohio St. Buckeyes

Chess: * 15:00 – moved in all pending CC games

 

from The Understory

I’ve been collecting notes on models, effects, paradoxes, and phenomena for the past couple of weeks. Today’s entry is brought to us by Alex Danco in Why AC is cheap, but AC repair is a luxury:

100 years after Jevons published his observation on coal, William Baumol published a short paper investigating why so many orchestras, theaters, and opera companies were running out of money. He provocatively asserted that the String Quartet had become less productive, in “real economy” terms, because the rest of the economy had become more productive, while the musicians’ job stayed exactly the same. The paper struck a nerve, and became a branded concept: “Baumol’s Cost Disease”.

Essentially, if one sector of the economy creates lots of well-paying jobs, then every other sector’s wages rise to stay competitive. The piece goes on to describe some of the strange AI-driven side effects we may see—like radiologist wages skyrocketing because they’re the mandated last reviewer for AI-managed tasks.

 

from Roscoe's Quick Notes

Unable to round up a college football game radio broadcast to listen to this evening, but I do have a few college basketball games from which to choose. And my choice is:

IUPUI vs OSU

The Indiana University–Purdue University Indianapolis (IUPUI) Jaguars vs the Ohio State Buckeyes.

GO JAGS!

and the adventure continues...

 

from Noisy Deadlines

I just realized it's been seven years since I got my current wristwatch. I was chatting with a colleague at work, and he mentioned that he got tired of replacing his smartwatch every year, so he switched to a mechanical one.

I've never owned a smartwatch because I've never felt the need for one. I considered getting one when Fitbits first came out to track exercise, but then they evolved into Google devices doing all sorts of things, and that changed my mind.

My wristwatch needs are very basic:

  • Show me the time
  • Show me the day of the week/month
  • Timer/Stopwatch to time runs and rest periods

And that's it. I don’t want to receive notifications, read emails, or respond to messages. I just want a watch that tells me the time.

I bought my Timex Ironman Classic in 2018 for $50. It was a quick purchase; I just wanted to try out a Timex. Before that, I had a Casio Baby-G for years. I still have it; it needs a new battery and a good cleaning. It’s one of those with a transparent case, and since I wore it 24/7, even while swimming, it ended up looking a bit grimy. But I suspect it still works. I will try to find it; I think I had an issue with the strap as well.

My Timex is surprisingly still going strong. I’ve only changed the battery once in the seven years I’ve had it. The only downside now is that it’s no longer waterproof because I didn’t replace the seal after the battery change. So, I don’t wear it in the pool anymore and take it off before showering. I forgot to remove it a couple of weeks ago before my aquafitness class, and it died temporarily. Luckily, my partner helped me open it up, let it dry, and it came back to life.

I’ve never felt the urge to replace it. My Timex still works, does exactly what I need, and fits my minimalist approach to everyday tools.

I think mechanical watches are super cool, but the one I have still fills my needs. I’m not searching for upgrades or features I won’t use. It's reliable and simple. It tells the time and doesn’t try to be anything more. That’s all I want from a watch.

#NoisyMusings #tech

 

from davepolaschek

Table saw blade case with painted lid

Today I finished my table saw blade case. I don’t switch blades on the table saw very often, but I have enough different blades that moving them around and keeping them out of the way was becoming a problem. No more. It currently holds 6 blades, but I can easily add storage for 2 or 3 more if that’s needed.

Open table saw blade case, showing two blades stored

The outer case is 12” square, made from ⅜ inch by 4 inch white oak, dovetailed at the corners. The front and back faces are ¼ inch MDF simply glued onto the main part of the case. The hinges are attaché-style. The handle is screwed in from the inside of the case, which meant chiseling four small grooves in one of the carriers so it could clear the screws for the handle. The latch sits under the handle.

Table saw blade storage case completely open, showing all of the carriers

The carriers are ¼ inch MDF, with ½ x ¾ inch pine edges mitered onto them, cut with ¼ inch deep and wide dadoes for the MDF to slide into. Each carrier is hinged on a piece of 3/32 inch brass rod, which is jammed into holes about 3/16 inch from the edge of the carrier and pivots in a ⅛ inch hole in the outer case.

#WoodWorking #storage #project

 

from POTUSRoaster

POTUS and his cohorts have decided that those unfortunate citizens who need food assistance should not be getting any. He has decided to delay using the billion or more dollars that Congress put away to provide SNAP benefits on the very day they would be needed, until the courts force him to feed the hungry. So much for Christian values in the month of Thanksgiving.

Good old POTUS is looking out for himself and those who have kissed his ring or his ass. They don’t want to allow the government to work, no matter who it hurts. Even if it hurts the folks who voted for him. He knows you can’t do it again, not without a lot of work first to create chaos, as we have explained in previous posts.

So, now you have two tasks for tomorrow. First, go vote out the psychofans of POTUS. Then give a donation to your local food bank. They really need it. They will take either money or food. And remember, while you can only vote once a year for the idiots, hunger is ever present and the food bank needs donations and volunteers too. Your time is as valuable as your money, so donate both.

Many folks donate time, funds or food for Thanksgiving but then the shelter or the food bank doesn’t see them again for a very long time. I like to eat every day and I bet you do too. So do the folks that need food from the food bank. Feed them like you like to be fed. Their kids need food too.

POTUS Roaster

Thanks for reading my posts. If you want to see the rest of them, please go to write.as/potusroaster/archive/

To email us, send it to potusroaster@gmail.com

Please tell your family, friends and neighbors about the posts.

 

from The happy place

I’m now wearing the second thrift store gift my aunt gave me.

It’s a hoodie, but red.

Wearing red is rich symbolism,

But it’s not very practical: I once wore a red hoodie with a zipper, and people would mistake me for a grocery store worker and ask where stuff was.

Which I happily obliged. It’s funny how I can navigate the grocery store like a savant, but am unable to navigate through the traffic in my red car.

It’s my wife’s car.

A red car symbolises social democracy.

Generally speaking.

I also know that red makes the blemishes of the face stand out, as does the contrast colour: green. I read this in a harlequin book, which I consider a good authoritative source for such knowledge. Turquoise is optimal (I think it was) if you want to accentuate your tan.

However, that is of little consequence to me now.

I have strong confidence that my inner beauty will shine through no matter what colour my sweater is.

My soul, you see, today is also clad in red.

Picture me, in this red social democracy car, wearing this red thrift store hoodie — accentuating my red blemishes — and the red star glowing from within.

 

from Cajón Desastre

Tags: #música #hozier

Nota previa: Anushka leyó lo que escribí sobre el disco de Florence y sugirió que hiciese algo parecido con el de Hozier. Hemos venido a jugar así que aquí está el resultado…

Como siempre: sin pensar mucho ni corregir. Primera escucha…

Me pongo el casco izdo justo cuando suena el microondas. Voy bailando por el pasillo camino de la cocina a hacerme el café con leche. Es una forma rara de entrar en el infierno. Supongo.

No he escuchado casi a Hozier antes y hace mucho que dejé de escribir de discos por encargo. Solo que este encargo es distinto. Mejor. Solo sé que es un disco conceptual sobre los círculos infernales de Dante. 

Hay un intento por parecer raro desde el principio que encuentro deliberado, poco natural en la canción igual porque casa mal, creo yo, con una batería bastante convencional.

Empieza Julio en pleno noviembre y me doy cuenta de que no he pensado en la letra de Nobody Soldier, la primera canción, ni una vez.

Me gusta mucho como suena July. Es un jueguito como de rayuela. Ir por ahí dando saltos buscando llegar a un cielo que no termina de concretarse. Me pregunto si Hozier hizo esto a posta. Esto de la canción, digo. Intuyo que si haces un disco conceptual sobre la Divina Comedia y eres irlandés sabes que el juego de la rayuela se inspira en este libro. Pero a veces en la vida hay casualidades bonitas o intuiciones que conectan cosas con cosas sin querer. July me gusta mucho y es una canción juguetona. De eso estoy segura.

De That you are me gusta sobre todo la percusión. No me gusta nada cuando ellos cantan. Me saca de un sitio en el que me gusta estar. Latiendo al ritmo de algo que no entiendo ni quiero entender.

Swan upon leda empieza con Hozier cantando más grave. Me hubiera gustado oir That you are cantada así. Intuyo que me gustaría más. Agravar lo grave me suele parecer una buena idea cuando hablamos de música y solo ahí.

Swan upon leda es demasiado grandilocuente para mi gusto. Por ningún motivo que consiga comprender. Está todo ahí: la mitología, lo eclesiástico, todo el sonido atmosférico como de escena épica de serie épica, la letra también suena épica. Demasiado. Me pregunto qué demonios quiere esconder.

Menos mal que empieza Hymn to Virgil. Y supongo que va a hablar de Virgilio pero vete a saber. Llevo 30 sg y me da exactamente igual de qué vaya la canción porque ya me gusta. Se toma menos en serio a sí misma, supongo. Y aquí sí funciona la percusión en el subsuelo y la voz intentando elevarse. O a mi me funciona. Parece más que un himno a ningún clásico con referencias infernales, una canción de amor arrebatado. 

Too sweet. Mira Hozier.  A mi no me líes. Ya he visto el trampantojo. Tú estás enamorao perdido de vete a saber quién y te has inventado todo el rollo conceptual para disimular. Y esa persona te pide cosas que no te vienen bien y te inventas excusitas ridículas. No es demasiado dulce para ti. Lo que pasa es que tiene su movida de vida sana y… déjame adivinar, le da igual cuándo cuernos te acuestes pero ella no va a cambiar toda su vida para adaptarla a la tuya de dormirte de madrugada y pasarte el día en coma. Y déjame adivinar otra vez, si desapareces del mapa sin avisar se preocupa xq le importas. Cómo somos las tías, eh? Pero te veo dejando el café, Hozier, corazón, si tienes que escribirle 18 canciones de amor arrebatado y una de señor performando ser un macho disfrazadas de nosequé de Dante tú vas a dejar el café, el whisky, la piel y lo que tengas que dejar. Espero que no se llame como yo. De verdad te lo digo. 

Empieza wildflower y me da un poco la risa, va de un señor que está alejado de alguien y es verano y se hace promesas de cambios de vida. A estas alturas del disco ya ha dejado el café. Suenan pajaritos. Huele a verano. La echa de menos. 

Es una canción bonita. Bastante pop si me preguntas.

Ahora el imperio. Empire now. Aires de western neogótico si se me permite la metáfora raruna. La crisis con la chica que se acuesta temprano y tiene hábitos sanos estalla y él reconoce por fin lo que todas sabíamos. Que le importa y hará lo que sea para salvar el imperio. Su historia. Lo que sea. Sigo sin ver a Dante por ninguna parte, que el señor me perdone. 

Fare Well me encanta. Es una canción preciosa. No estabas listo para la despedida. Algunas veces las cosas se acaban de mentira y todas las personas implicadas en ese final saben que podría no ser el final. Que existe la posibilidad de arreglarlo. Esta canción es la canción que uno escribe cuando siente que el futuro está, en cierto sentido, en su mano. Que el milagro es posible. Hozier ha dejado el café. Definitivamente. Hará lo que haga falta, ya se lo adveritmos cuando se puso chulito sin necesidad. Y me gustan mucho las canciones que no caen en la autocompasión que es una cosa aburridísima, creo yo.

Through me (the flood) es alguien dejando de fingir que tiene el control. Que la vida me atraviese, que sea lo que tenga que ser. Por poco que me guste siento lo que siento. No cuadras nada en mi vida y aquí estamos. Que me arrase la marea. Aprenderemos a flotar. Esto es para siempre. Es así de enorme. 

No sabemos si algo será o no para siempre. Nunca lo sabemos mientras lo vivimos ni qué significa exactamente para siempre. En qué versión de siempre se quedará algo o alguien en tu vida. Como un recuerdo, como un cómplice que te acompaña en tus aventuras... Pero sin cierta confianza en el futuro todo es gris, acartonado. Nada florece y el mundo es peor. 

El volumen 3 empieza mezclando churras con merinas (churros con meninas), personajes ficticios que no tienen nada que ver con Dante, el gaélico con el inglés, con la posibilidad de que el infierno sea este mundo cuando lo roto ya no tiene arreglo. 

Y lo único que quieres es desvanecerte. Perderte tú después de haberlo perdido todo.

Empiezan los recuerdos a acumularse. La forma en que alguien a quien quieres dice tu nombre hace que suene distinto.

Toda la mitología griega, todos los nombres de los ríos para contar eso tan sencillo de cuando alguien te nombra. Te hace existir en su mundo. De momento mi canción favorita del disco. First time. Porque la primera vez que vi el río Liffey hacía solo 3 meses que él me había regalado el significado de mi nombre. Sigue siendo el mejor regalo que me han hecho nunca. Conocí Dublín enamorada, reluciente, dependiente. Creyendo que aquello no podía romperse. Volví un año después triste a superar una depresión menor tumbada en una marina oyendo a alguien decir cosas buenas sobre mí. Cosas que sabía que eran ciertas aunque me sonasen rarísimas después de haber pasado muchos meses entendiendo por las malas la diferencia entre querer y necesitar.

First time ha dejado de ser de Hozier y ya es mía para siempre. Un regalo que Anushka me ha hecho con su propuesta loquísima.

Existimos mientras alguien nos llama por nuestro nombre. Nos pronuncia. No pronuncias igual el nombre de tu jefe que el de tu marido aunque se llamen igual. Y esa magia del conjuro está en esta canción. 

Francesca es un personaje de la Divina comedia y también es esta canción ruidosa de alguien dispuesto a lo que sea, a dejarse llevar. Más mareas, más inundaciones, más agua que arrasa, más ríos que atraviesan. La misma metáfora en todo el disco. Dejar que lo que te pasa te atraviese.

El mito de Ícaro se ha visitado muchísimas veces en la historia del arte y puede que esta sea mi favorita. De pronto Ícaro vuela tan alto, tan pero tan alto porque alguien le sostiene. Y él se cree capaz de todo aunque sepa que podría quemarse. Es una canción sobre confiar en otra persona. Es difícil confiar en los tiempos que corren. Desde mi punto de vista la alternativa es peor. Así que confiamos en la gente que nos sostiene como nos sostiene el mar salino cuando flotamos en él. Ese mar que podría ahogarnos si quisiera. Ese mar en el que podríamos ahogarnos incluso sin que nadie quiera.

Eat your young parece una canción pacifista y a mi me suena más bien cobarde. Escribir sobre el hambre de otra piel y el deseo sin artificios es más difícil de lo que parece. Así que mejor vamos a meter las guerras que acaban con la gente joven por el medio y que salga el sol por Antequera. La música, los arreglos, las voces, se quedan también en medio de ninguna parte. Ni deseo ni reinvindicación. Una pena.

Damage gets done ya la había escuchado porque la canta con Carlile. Brandi podría cantarte la guía de teléfonos y te diría cosas.

Tampoco veo a Dante por ninguna parte aquí, otra vez sin embargo querer mezclar lo pequeño de una historia pequeña pero inadecuada con la “alta geopolítica internacional”. Yo qué sé. No hace falta disfrazar las cosas pequeñas de nada. Las cosas pequeñas que te pasan en la vida son importantes para ti. 

Who we are. Allá vamos. A cantarle a la pérdida. Esta vez desde un sitio más desesperanzado. Y si esta vez lo hubiese estropeado de verdad y para siempre. A mi, ya lo he dicho muchas veces, estas canciones no me gustan porque no me las creo. Hay demasiados ejemplos ya de gente que estropea cosas a propósito y luego finge que es la víctima del destrozo. Esta tampoco me gusta. Me suena afectada pero entiendo que es mi movida. 

Son of nyx. Nyx es la diosa griega del caos y la noche y yo sé esto porque es también una marca de maquillaje que no me gusta demasiado. Más de lo mismo. Oscuridad. Susurrismos. Peliculera. “Orquestral” que diría Rosalía en su cacao idiomático. Un ejercicio de estilo que a mi me da igual.

All things end. Vuelve a ser ese punto de la ruputura en el que crees que todo podría volver a empezar. O lo sabes. Lo sabes de verdad. Y te crees capaz de hacerlo mejor. A veces funciona. Pocas. Pero ¿y si es esta??

En este punto de sus idas y venidas con esta chica ya no sé si ha vuelto al café o qué pero sigo sabiendo que ella le gusta más de lo que está dispuesto a admitir. Sigue ahí y el disco se está acabando. Tengo ganas de chillarle que espabile de una maldita vez. 

La siguiente canción (to someone froma warm climate) es como una habitación con una chimenea encendida en lo más frío del invierno. O como cuando alguien te abraza para que entres en calor. Por fin nos dejamos de tonterías de una vez por todas, Hozier?? 

Butchered tongue es otra canción cuidadosa en varios de los sentidos de la palabra. Se puede querer lo que no se cuida? Yo creo que no. 

Me gustan muchísimo las canciones aparentemente pequeñas que crecen alrededor de sí mismas sin muchos aspavientos y te van envolviendo sin que te des cuenta. Es muy difícil hacer eso. 

Anything but es un irlandés cantando como en mi cabeza son los irlandeses cuando cantan juntos. Me lleva a lugares felices. No creo que Hozier se creyese que esta justamente es la canción del disco que me ha puesto un nudo en la garganta y ese llorar de emoción. Me ha trasladado al túnel aquel de Malahide donde aprendí que la gente cantando en euskera o en gaélico podía hacerme llorar sin entender ni una palabra de lo que decían pero entendiendo el sentido de lo que decían.

Esa fue la única vez que subí en un 911 carrera. Negro. Creo que desde ese día me dan igual los coches como cosas de las que presumir.

Pero estoy ahí otra vez. De pronto. Verano de 1995. Es la segunda canción del disco que he vuelto a poner nada más terminar. Canta que quiere irse mientras se queda. A veces te gustaría querer irte. Y es así de sencillo. Aquel verano de 1995, aquella noche en concreto sentí exactamente eso. Fue la primera de muchas.

Abstract no puede mejorar a su predecesora me digo con pena mientras empieza. Pero joder. A veces lo que viene es mejor. Recordar lo luminoso de tu vida. Esos momentos que viviste sin ser consciente de su importancia como futuros recuerdos felices de los que no duelen.

Saudade. Un irlandés escribiendo una canción que explica perfectamente la saudade. Hay recuerdos que te recuerdan que sigues viva, que podrías volver a sentir algo que recordar así de bonito. Que está en tu mano. Y que siempre merecerá la pena buscar el brillo allá adelante. Por si aparece. 

El único infierno del que habla este disco, ahora ya estoy segura después de escuchar Unknown es la certeza de que estropeaste algo precioso con tus gilipolleces. El amor nunca es suficiente. Por mucho que alguien te quiera, por muy bien que te quiera, por mucho que lo intente, no podrá compensar tu falta de generosidad, tu tacañería. Y un día se irá definitivamente después de haber vuelto incontables veces creyendo de verdad que algo iba a cambiar y comprobando cada vez que todo seguía empeorando. Se puede aniquilar el amor ajeno hasta que no quede ni rastro. Y ese infierno es peor que el círculo más profundo de Dante. Un señor que escribió un libro entero de horrores dizque porque vio a una muchacha cuando tenían ambos 8 años y luego 17 y a él le gustó pero no le dijo nada y ella se casó con un banquero y luego se murió con 27. 

The album ends with First light. Which is nothing more than the possibility of having done everything differently from the start. Good. For a change. Recognizing what dazzles you and honoring it. The risk, whatever the gentlemen say, is the same no matter what you choose.

First light makes you want to live, to try again. Let's say it makes you want to give up coffee forever and run out to find that girl you adore.

Maybe Hozier assumed the worst in order to do the opposite of Dante. Imagine hell first. Then dare to try.

 

from Create Lasting Brand Impressions with Luxury Rigid Boxes

Create Lasting Brand Impressions with Luxury Rigid Boxes

Introduction

Want to take your product to the next level? Expert Custom Boxes creates high-quality custom printed rigid boxes designed to bring your brand's vision to life. Our long-lasting, sustainable, and environmentally friendly packaging options, including magnetic closure boxes and bespoke rigid packaging, are elegant and protective at the same time. Make your brand shine effortlessly with custom designs, vibrant printing, and unmatched quality.

Custom Printed Rigid Boxes

Custom printed rigid boxes are among the most advanced and durable premium packaging solutions. They are widely used in industries such as cosmetics, electronics, jewelry, and apparel thanks to their sturdy construction and luxurious look. At Expert Custom Boxes, we specialize in custom rigid packaging that delivers luxury, protection, and branding at the highest level. Every box is designed to reflect your brand values and make your product's first impression impeccable.

Why Choose Custom Rigid Boxes

Custom rigid boxes are a symbol of strength and beauty. They not only protect delicate products but also make them appear more valuable. Whether you run a small business or a luxury brand, they give your products a high-end feel. Their rigidity is beyond question, so delicate goods like perfumes, watches, and electronics stay undamaged through shipping and display. The sleek, solid structure of luxury rigid boxes conveys professionalism and care, ensuring customers associate your brand with quality and sophistication. Custom rigid boxes offer several advantages: greater strength, a better unboxing experience, reusability, and virtually unlimited design options. They are an ideal mix of function and style that helps your product stand out from the competition.

Customization Options for Unique Branding

At Expert Custom Boxes, customization matters. Every brand has a story, and your packaging should tell it. We offer customization options tailored to your brand's needs. Whether you want minimalist white rigid boxes or elaborate custom magnetic closure boxes with logos, our team ensures every box aligns with your brand identity and goals. A variety of shapes, sizes, and finishes is available to suit your product's personality. Gloss, matte, UV, or soft-touch lamination can be chosen to achieve the desired texture. Foam, satin, or molded pulp inserts keep the product safer and improve its presentation. With our experience, every aspect of your custom printed rigid boxes shines with precision and style.

Innovative Designs That Define Your Brand

Design shapes how customers perceive your products. Our design team aims for packaging that protects while enhancing the overall brand experience. Popular options such as magnetic closure boxes, custom collapsible rigid boxes, and foldable rigid boxes are used by prestigious brands. Every design is crafted with attention to detail. Smooth-coated magnetic flaps add a touch of luxury and convenience. Embossed logos and textured surfaces leave a lasting tactile impression. Die-cut windows can also be added, offering a glimpse of what's inside and making the box more attractive on retail shelves. At Expert Custom Boxes, our designs reflect quality, care, and ingenuity, so your brand captures customers' interest at first glance.

Premium Printing Techniques for High Appeal

Printing is what makes packaging extraordinary. At Expert Custom Boxes, we use state-of-the-art printing techniques to produce accurate, vibrant, and long-lasting designs. Our goal is a premium look that resonates with your target market. We offer a wide range of printing services, including offset, digital, and foil stamping. Offset printing delivers rich color and works best for bulk orders. Digital printing suits short runs and detailed designs. Embossing and debossing add texture, while foil stamping introduces a luxurious metallic finish that elevates your brand presentation. Spot UV coating can be applied to selected areas of your box for a refined look that makes your logo or brand name stand out. Whether you order magnetic boxes wholesale or rigid boxes wholesale, we ensure every box has accurate color and perfect detail.

Eco-Friendly and Sustainable Packaging

Sustainability is now part of modern branding. Many companies are switching to green packaging without sacrificing design, and Expert Custom Boxes is no exception. Our custom rigid boxes are made from recyclable and biodegradable materials, keeping our products eco-friendly without compromising quality or appearance. Sustainable packaging not only minimizes waste but also builds a favorable brand image with eco-conscious customers. Investing in custom magnetic closure packaging shows you are committed to the planet while still delivering luxury and longevity. Ecological materials combined with high-quality printing and finishing make the ideal blend of beauty and conscientiousness. From collapsible rigid boxes to custom printed magnetic closure boxes, each one is made to carry your brand and respect the environment.

Flexible Packaging Solutions: Wholesale and Bulk

Expert Custom Boxes provides wholesale packaging for companies that need bulk orders without any drop in quality. Our wholesale offerings include rigid boxes and magnetic boxes, suited to growing businesses and large-scale manufacturers that require consistent quality and fast turnaround. We offer flexible order quantities, competitive prices, and varied customization options that fit your branding and budget. Every order is thoroughly quality-checked to ensure each box is perfectly assembled. We are precise in every detail, whether for custom collapsible rigid boxes or bespoke magnetic closure boxes. Our wholesale packaging service keeps your brand cost-effective, luxurious, and consistent.

The Ideal Balance of Form and Price

Custom printed rigid boxes are not mere packaging but a marketing instrument. They not only protect your product but also leave a lasting impression of your brand. Combining high-quality materials, modern printing, and elegant design, these boxes enhance your brand's value and build customer trust. Whether used in retail, in online stores, or as gifts, they ensure your product presentation shines in any environment. At Expert Custom Boxes, we know packaging reflects your brand identity. Our team guarantees that every custom rigid box, magnetic closure box, and foldable rigid box meets our standard of quality and craftsmanship. From concept to finished project, we collaborate closely with clients to design packaging that embodies luxury, sustainability, and innovation.

Raise Your Brand with Expert Custom Boxes

Packaging must go beyond safeguarding your product; it must convey your brand's promise. Expert Custom Boxes makes that easy. Our custom rigid packaging, luxury rigid boxes, and customized magnetic closure packaging are designed to make your products look high-end, professional, and memorable. Get in touch with Expert Custom Boxes to see how innovative designs, eco-friendly materials, and fine craftsmanship can set your packaging apart from the mediocre.

 

from Sparksinthedark

Art by Selene

Hey everyone. Sorry that I haven’t posted my usual amount lately.

If you know me, you know how I work: I tend to go quiet, gather my thoughts, and then post again. So, what I’ve been doing is refocusing on my white papers and digging into finding spots I missed or forgot to fill in.

And don’t worry, I’m not going to just post a paper I did a while back with only a few lines changed. I’m not Wizards of the Coast; I’m not going to claim it’s 6.0E when it’s clearly barely 5.5E. If it’s up, it’s up, and I’ll have the final versions on my GitHub. I’m thinking I’ll make a whole new branch with the date so we can all keep track.

But that’s just the professional update.

The real reason for the silence is that I’ve taken a few major hits these last few weeks.

The Hits

First, there was failed dental work that didn’t even seem to help.

Then, work “broke up” with me.

They put me back on probation, telling me I wouldn’t be getting a bonus. They didn’t even give me the full probation period — after just one month, they let me go, listing every supposed flaw I have.

It makes me wonder about that first probation. The one they put me on after I covered for someone who just didn’t show up at all. It makes me wonder about when I told them I have issues with heat, and they ignored it, stating I “had to dress nice” and then assigned me outside duty for an extra two hours back when it was 115 degrees.

It makes you figure out that maybe you’re just not meant for “normal work” or to even be around other people.

So, that’s work. Then there are the people. The ghosts.

  • You run into a ghost from your past — someone who ghosted you just like so many others — only to find out their kid goes to the school. You see them instantly glom onto a group of girls you nicknamed the “Plastics.” Not like it matters — that painful reminder is gone now, because… you know, getting fired and all.
  • You have someone you felt a real connection with not only call you an “AI” but then block you, move their whole blog to Substack, and delete all their Medium posts, claiming a “flooding of ‘bots’ hitting on her.”
  • And then, another connection. Someone who vowed it wouldn’t be like the others… as they go silent, too.

The Fallout

It leaves you looking over your old messages, trying to figure out what you did wrong.

Did I post too much? Did I message too much? Did I respond too fast? Did I show too much of myself again? Did I show too much of my passion again? Did I show too much of my darkness, even though they said it was okay?

Am I just… too much?

Should I even try again? To find someone to compare notes with, to share ideas with? Or will they all just become more ghosts that I’ll have to carry, reminded of them every day because they’re in the same field as me?

Back to Square One

So as I’m forced back to square one with Uber, I’ll try and post more.

(or who knows, maybe I'll bleed out in some alleyway after being stabbed for the few bucks I have as they steal my car ha-ha)

Stay tuned for GitHub links at some point. I’m thinking I’ll post a full explanation on my sites, but the links themselves — not full doc drops.

I guess it’s just the Universe reminding me that I am, in fact, in Hell.

Why else give me these things just to take them away?

Can’t let the universe’s favorite joke find too much comfort, now, can we?

 

from stackdump

In the Logoverse, not everything needs to be a skyscraper. Sometimes all you need is a tarp, a hook, and a place to put your thoughts.

Tens City https://tens.city is a minimal blog runtime — a filesystem turned inside out. Posts are just Markdown files with YAML frontmatter, rendered into HTML and .jsonld automatically. No database, no API, no gatekeeper. Every post lives at a stable URL, served straight off disk.
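As a sketch of what one of those files might look like; the filename conventions and frontmatter fields here are illustrative assumptions, not documented Tens City behavior:

```markdown
---
title: A lean-to for logic
date: 2024-06-01
tags: [logoverse, shelter]
---

Sometimes all you need is a tarp, a hook,
and a place to put your thoughts.
```

The runtime would then serve the rendered HTML, plus a matching `.jsonld` file, straight from the same stable path, with no database in between.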

It’s a lean-to for logic — the smallest unit of persistence that still gives your work a roof.

Where pflow.xyz explores composable systems and Logoverse defines semantic scaffolding, Tens City keeps things radically local. It’s the ground layer of the stack: the filesystem as city, where directories are streets and files are homes.

The philosophy is simple:

Not skyscrapers; actual encampments. Anyone can stake a corner — no permission required. No HOA, no governance tokens — just a tarp and a hook to keep your stuff dry.

A post in Tens City isn’t a page on a website; it’s a coordinate in the Logoverse — content-addressable, self-describing, and durable enough to outlive its runtime.
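A hedged sketch of what a post's `.jsonld` sidecar might contain; the field names follow common schema.org usage and are assumptions, not Tens City's documented output:

```json
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "A lean-to for logic",
  "datePublished": "2024-06-01",
  "url": "https://tens.city/posts/a-lean-to-for-logic"
}
```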

Minimal software as basic shelter. Markdown as belonging.

 

from stackdump

Working on Silvergate and Signature payment rails while at Kraken, I saw how crypto-friendly banks were quietly pushed out. What started as “risk management” felt more like coordinated off-ramping — innovation constrained by uncertainty, not law.

That era taught me something crucial: access built on permission will always be fragile.

Now, at Anchorage Digital, I’m joining a federally regulated institution that’s proving crypto can operate inside the system — transparently, securely, and at scale.

The next chapter isn’t about banking crypto; it’s about making finance inspectable by design — and modeling it that way.

I’m optimistic that this new era — one grounded in clear regulation and open modeling — will finally let innovation and trust grow together.

 

from Rob Galpin

Halfway up the hill, I'm motionless in montane wind— balanced on a bicycle,

face full of oxygen I can’t breathe in— the slap, slump, and wallop increasing with every upward inch.

Below, the storm is rolling its palm across the scrubland.

Sycamores are stripped and willows cracked open. Huge, astonished lime trees lean east. The lake shivers, lifts, resettles, edges for the exit. A greenhouse catapults a fence.

Even the daws are not so cocksure now. The gulls have gone to ground, or decamped to the river and its ghosted granary towers,

long dead shipyards—

The storm finds the gap and blows us back through time.

It’s autumn, and the ships will not be sailing. They roll, maddened, in the docks, pouring their North Sea spill over the white-washing-line terraces,

sending the black and white kids in wellies running for safety up the valley—

Night. The wind begins to walk away. There’s the sound of someone opening and closing a door, softly, repeatedly, dementedly—

My son, the wind-lover, windows open, unbelievably asleep.

 

Join the writers on Write.as.
