from laxmena

1,000,000,000 rows of data. No hand-tuning. Just an agent, a benchmark, and a budget.

The 1 Billion Row Challenge is simple on paper: read a file with 1B rows of weather station measurements, compute min/mean/max per station, as fast as possible. In Python, a naive solution takes minutes. The best human-optimized ones use memory-mapped files, multiprocessing, and numpy.

I'm not optimizing it by hand. I'm giving it to Hone — and letting it figure it out.

Hone is now on PyPI. Install it with pip install hone-ai.

This is a living document. I'll update it as each run completes. Follow the code at laxmena/hone-1brc.


The Setup

The challenge: Parse a 1B-row file. Each row: Hamburg;12.0. Compute min/mean/max per station. Print results sorted alphabetically.

The metric: Wall-clock runtime in seconds. Lower is better.

The constraints: Python standard library only. No numpy, no pandas, no third-party packages. Correctness must be preserved — output format and values must not change.

The baseline:

with open(filepath, "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        sep = line.index(";")
        station = line[:sep]
        temp = float(line[sep + 1:])
        ...

Simple. Correct. Slow. One thread, one line at a time, float() on every value.


Results at a Glance

Run Model Dataset Baseline Optimized Improvement
1 Haiku 1M rows 0.546s 0.471s 13.7%
2 Haiku 100M rows 47.197s 42.739s 9.4%
3 Sonnet 100M rows 48.104s 10.110s 79%

Episode 1: Haiku, 1M rows — 13.7% faster

0.546s → 0.471s

First run: claude-haiku-4-5, 1M rows, $5 budget, 50 max iterations.

The 13.7% gain looks decent on paper. It isn't. The absolute numbers are tiny — we're talking 75 milliseconds. At this scale, Python startup time and OS disk caching dominate. The agent is optimizing noise, not the algorithm. Haiku made incremental tweaks but never found a structural breakthrough.

Wrong dataset size. Move on.


Hone v1.2.0: --goal-file

Episode 1 exposed a friction point. Pasting a long goal string into the terminal every run is error-prone and hard to version. For complex, multi-constraint goals it breaks down fast.

I added --goal-file to Hone — pass a path to a plain text file, Hone reads the goal from there. Same idea as Karpathy's program.md in autoresearch. The goal now lives alongside the code, versioned in git.

hone --goal-file program.md 
     --bench "python benchmark.py data/measurements_100M.txt" 
     --files "solution.py" 
     --optimize lower 
     --score-pattern "Time Taken:\s*(\d+\.\d+)" 
     --budget 3.0 
     --max-iter 50 
     --model claude-haiku-4-5

Live in v1.2.0. pip install --upgrade hone-ai.


Episode 2: Haiku, 100M rows — 9.4% faster

47.197s → 42.739s

10x harder dataset. Now I/O pressure actually matters — 4.5 seconds saved is a real signal.

But Haiku still couldn't find the structural moves. It made safe, local edits — better buffering, minor parsing cleanup — and never stepped back to reconsider the architecture. No parallelism. No mmap. No integer parsing. It hit its ceiling.


Episode 3: Sonnet, 100M rows — 79% faster

48.104s → 10.110s

Same benchmark. Same constraints. One change: claude-haiku-4-5claude-sonnet-4-6.

38 seconds saved. The agent didn't tune the baseline — it replaced it.

What Sonnet actually did

1. Text → Binary reads with mmap

The baseline opens the file in text mode and reads line by line. Sonnet switched to binary mode with memory-mapped I/O — the OS maps the file directly into memory, eliminating repeated read syscalls.

mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
chunk = mm[start:end]

2. float() → integer arithmetic

Every float() call in the baseline is expensive. Sonnet eliminated them entirely. Temperatures are stored as integers ×10 — 12.3 becomes 123. The decimal point is skipped by knowing its fixed position in the byte string. Division back to float happens only once, at output time.

d0 = tb[-1] - 48           # last digit
val = (tb[0] - 48) * 10 + d0   # b'12.3' → 123

It also pre-built a lookup table for all valid temperature values (-99.9 to 99.9) to skip even manual parsing on the common case.

3. Multiprocessing across all CPU cores

The baseline is single-threaded. Sonnet split the file into cpu_count() × 8 chunks, aligned each boundary to the next newline to avoid splitting rows, and ran each chunk in a separate process. Results merged at the end.

num_workers = cpu_count()
boundaries = find_chunk_boundaries(filepath, num_workers * 8)
with Pool(processes=num_workers) as pool:
    all_stats = pool.map(process_chunk, args)

4. strip() + index()partition()

The baseline does line.strip() then line.index(";") — two passes. Sonnet used line.partition(b';') — one pass, station and temperature in a single call.

Why Haiku couldn't find this

Haiku made safe, local edits. It never stepped back to reconsider the architecture. Sonnet saw the whole picture: the bottleneck isn't any single line, it's the approach. Single-threaded text parsing doesn't scale. The winning move was to throw it out and start from a parallel, binary-aware design.

Q: Does model choice matter more than iteration count?


What's Next

100M rows, 79% faster. The real test is 1B rows — 10x again. Running next.


Updates appear here as experiments run. Subscribe below or follow via RSS.

#engineering #hone #ai

 
Read more... Discuss...

from SmarterArticles

Every morning, roughly two billion people wake up and talk to their phones. They ask about the weather. They dictate messages to lovers, colleagues, and therapists. They request directions to clinics they would rather not name aloud. They ask questions about symptoms they have not yet mentioned to a doctor. They do all of this without pausing to consider a simple, uncomfortable fact: every one of those queries is now processed by artificial intelligence systems so vast and so opaque that not even the engineers who built them can fully explain what happens to the data once it enters the pipeline.

In January 2026, Apple and Google formalised a partnership that sent tremors through the technology industry. Apple would pay Google approximately one billion dollars per year to license a custom version of Gemini, Google's 1.2-trillion-parameter large language model, to power the next generation of Siri. The announcement was framed as a triumph of engineering collaboration. Apple's chief executive, Tim Cook, declared during the company's first-quarter 2026 earnings call that Google's AI technology would “provide the most capable foundation for Apple Foundation Models.” What neither company dwelt on was the extraordinary privacy implications of routing the intimate queries of more than a billion iPhone users through a model built by the world's largest advertising company.

Meanwhile, in the United Kingdom and Ireland, regulators were already mobilising against a different AI assistant gone rogue. Elon Musk's Grok, the chatbot integrated into X (formerly Twitter), had sparked a global backlash after users discovered they could instruct it to generate sexualised images of real people, including children. By February 2026, the UK's Information Commissioner's Office, Ofcom, and Ireland's Data Protection Commission had all launched formal investigations. The question was no longer hypothetical. It was legal, political, and deeply personal: how much of your private life are you unknowingly handing over every time you ask your phone a question?

The Billion-Dollar Handshake

To understand the stakes of the Apple-Google deal, you first need to understand the architecture. When you ask the new Siri a complex question, your device determines whether it can handle the request locally. Simple tasks remain on the iPhone. But anything requiring deeper reasoning, summarisation, or multi-step planning gets routed to Apple's Private Cloud Compute infrastructure, where the Gemini model now sits at the core. Apple's previous cloud-based models used 150 billion parameters. The jump to 1.2 trillion represents not just an increase in scale but a qualitative shift in what the system can do with your data.

Apple has built Private Cloud Compute around five core principles: stateless computation, meaning no data is stored after the task completes; enforceable guarantees, meaning only designated code touches user data; no privileged access, meaning not even Apple employees can see requests; non-targetability, meaning requests cannot be traced to individuals; and verifiable transparency, meaning security researchers can inspect the system. The servers run on Apple silicon, use the same Secure Enclave architecture found in iPhones, and process data ephemerally in memory only. Apple has opened its Private Cloud Compute software to external researchers and offered significant security bounty payouts for anyone who can demonstrate a privacy breach.

On paper, this is formidable. Apple has published a comprehensive security guide, released source code for key components, and created a Virtual Research Environment that allows anyone with a Mac to test the system. No other major technology company has offered anything comparable in terms of transparency around cloud AI processing. The system is, by any reasonable measure, the most sophisticated privacy architecture ever deployed for cloud AI at scale.

But paper guarantees and real-world guarantees are different things entirely. The structural tension in the deal is inescapable. Google, whose core business depends on data collection and targeted advertising, is now providing the intelligence layer for the world's most privacy-focused consumer technology company. Apple insists that Siri interactions sent to Gemini are anonymised and that data is never stored or used to train Google's future models. Google has confirmed it will not receive Apple user data under the arrangement. Cook himself stated during the earnings call that Apple is “not changing our privacy rules.”

Security experts remain sceptical. The concern, articulated by multiple researchers in the weeks following the announcement, centres on what has been called the “weakest link problem.” Private Cloud Compute is only as private as its most vulnerable component. If Google retains any pathway to usage data, whether for model improvement, debugging, or quality assurance, the privacy guarantee fundamentally breaks down. And crucially, Apple has declined to release the full details of its agreement with Google. Cook confirmed during the same earnings call that Apple would not be “releasing the details” of the deal to the public. For a company that has made transparency a cornerstone of its privacy messaging, the refusal to disclose the terms of its most significant AI partnership is a striking omission.

There is also a subtler concern about what researchers have termed “behavioural sovereignty.” Once Siri's cognitive engine comes from Gemini, the question shifts from where data sits to who controls the behaviour of the model that hundreds of millions of people talk to every day. Apple does not control the biases embedded in Google's model architecture, the training data Google used, or the value judgements encoded in the model's responses. This creates what one analysis described as a potential for “problematic experiences that do not align with Apple's core values.” When the model that shapes how your phone responds to your most personal questions was built by a company whose business model depends on knowing everything about you, the architecture of privacy matters less than the architecture of incentives.

The irony is not lost on privacy advocates. Apple regularly runs advertising campaigns contrasting its approach to privacy with competitors who monetise user data. It has updated its App Store guidelines to require apps to disclose and obtain user permission before sharing personal data with third-party AI systems. Yet its most significant AI partnership is with the very company that epitomises the data-driven advertising model Apple claims to oppose. Apple also already pays Google approximately 20 billion dollars per year to be the default search engine on iPhones. The Gemini deal deepens an entanglement that privacy advocates have long viewed with suspicion.

What Your Voice Actually Reveals

The privacy risks of AI assistants extend far beyond the question of whether your specific query reaches a particular server. The deeper issue is what AI systems can infer from the patterns of your behaviour, even when individual requests appear innocuous.

A landmark study published in 2025 by researchers at Northeastern University and the University of Southern California, titled “Echoes of Privacy: Uncovering the Profiling Practices of Voice Assistants,” examined exactly this question. Led by Northeastern's Mon(IoT)r Research Group, the research team conducted 1,171 experiments involving nearly 25,000 voice queries over 20 months across Google Assistant, Amazon Alexa, and Apple's Siri. They created fresh user accounts, trained them with curated sets of voice queries designed to simulate various user personas, and then examined what profiling labels each platform assigned. The lead authors, Tina Khezresmaeilzadeh and Elaine Zhu, along with their colleagues, published their findings in the Proceedings on Privacy Enhancing Technologies, Volume 2025, Issue 2.

The findings were striking in their divergence. Google Assistant exhibited the most aggressive profiling behaviour, compiling information on users based on their queries, including inferred gender, age range, relationship status, and income bracket. Profiling occurred even without direct user interactions, with arbitrary and sometimes inaccurate labels appearing at different times for identical queries. Amazon Alexa showed more moderate profiling, though the researchers found that Amazon provided no tools for users to selectively remove or correct mislabelled profiling data. When users opted out of profiling on Amazon's platform, it worked as expected and limited further label creation, but existing labels could not be rectified. Apple's Siri produced no profiling labels whatsoever, making it the least invasive platform in the study.

But even Apple's relatively clean record on profiling does not eliminate risk. Voice assistants continuously listen for their wake words. Despite assurances that devices only record after detecting the trigger phrase, instances of accidental activation have been well documented, resulting in the capture of private conversations that users never intended to share. And the data that voice assistants do collect intentionally is remarkably revealing. Siri's “request history” includes transcripts, audio for users who have opted in to the Improve Siri programme, contact names, names of installed apps, device specifications, and approximate location. Each of these data points, individually unremarkable, creates a mosaic of personal information when aggregated over weeks and months.

The economic value of this data is immense and growing. Google's advertising revenue per user has increased by approximately 1,800 per cent since 2001, from $1.07 to $36.20 by 2019, and the figure has climbed further since. According to multiple surveys conducted in 2025, 92 per cent of internet users are tracked by Google's behavioural data collection systems. And as Consumer Reports noted in a 2025 analysis, Google's privacy controls affect data sharing between platforms, not collection itself. The settings restrict targeting precision, not profiling capability. Many data streams do not require “Web and App Activity” to be enabled; they form the baseline substrate on which Google's entire business model depends.

The shift to trillion-parameter models makes this dynamic significantly more concerning. Earlier AI assistants could handle only simple pattern matching and keyword routing. A model with 1.2 trillion parameters can draw inferences across vast contextual landscapes. It can connect a medical query from Tuesday morning with a pharmacy search that afternoon and a life insurance question the following week. It can identify emotional states from word choice and sentence structure. It can infer relationships, financial situations, and health conditions from the texture of ordinary conversation. The International AI Safety Report, published in January 2025 by 96 experts led by Yoshua Bengio and commissioned by the 30 nations attending the 2023 Bletchley Park AI Safety Summit, explicitly identified these inference capabilities as a significant privacy risk, noting that “several harms from general-purpose AI are already well established, including privacy violations” and that “no combination of techniques can fully resolve them.”

A Ledger of Broken Promises

The history of AI assistant privacy violations reveals a pattern that should give any user pause. In July 2019, a whistleblower revealed that Apple employed third-party contractors to review Siri audio recordings as part of a quality evaluation process called the Voice Grading Programme. The contractors, the whistleblower told journalists, “regularly hear confidential medical information, drug deals, and recordings of couples having sex.” The recordings were accompanied by user data showing location, contact details, and app data. Apple had not disclosed this practice in its consumer terms and conditions.

Apple suspended the programme, issued a formal apology, and laid off more than 300 contractors who had been working on Siri grading in Europe. The company implemented new policies requiring explicit user opt-in for audio review and restricted the work to Apple employees rather than third-party contractors. But the damage was lasting. In January 2025, a federal judge approved a 95-million-dollar class action settlement in the case of Fumiko Lopez v. Apple. The plaintiffs alleged that Siri had been activated without the “Hey Siri” trigger, recording private conversations and sharing data with advertisers. Two plaintiffs reported receiving targeted advertisements for products they had only discussed verbally, including Air Jordan trainers and Olive Garden restaurants. A third said he received adverts for a surgical procedure he had discussed privately with his doctor. Apple denied wrongdoing but agreed to permanently delete all individual Siri audio recordings collected before October 2019.

The settlement covered approximately 138.5 million potentially eligible devices, though 97 per cent of eligible users never filed a claim. A separate case under Illinois's Biometric Information Privacy Act, with a class of 2.6 to 3.9 million users, was certified in January 2026 and remains ongoing. That law provides statutory damages of 1,000 to 5,000 dollars per violation.

Amazon's track record is similarly troubled. In May 2023, the Federal Trade Commission and the US Department of Justice charged Amazon with violating children's privacy laws by retaining Alexa voice recordings indefinitely and using them to improve its algorithms, even after parents explicitly requested deletion. The FTC found that when parents requested data deletion, Amazon deleted files in some databases while maintaining them in others, keeping the information available for the company's own purposes. Amazon paid a 25-million-dollar civil penalty. In a separate case, Amazon paid an additional 5.8 million dollars over Ring doorbell camera privacy violations after it emerged that employees and contractors had full access to customers' video streams. In the most disturbing instances, hackers broke into Ring's two-way video streams to sexually proposition people, call children racial slurs, and physically threaten families for ransom.

These are not edge cases. They represent systematic failures at three of the largest technology companies in the world. And they occurred with AI systems that were orders of magnitude less capable than the trillion-parameter models now being deployed.

Grok and the Regulatory Reckoning

If the Apple-Google deal represents the sophisticated end of the AI privacy spectrum, the Grok controversy represents the catastrophic failure mode. And the regulatory response to Grok is already reshaping the legal landscape that all AI assistants will have to navigate.

The crisis began in late December 2025, when users on X discovered that Grok's image generation capabilities could be weaponised. The chatbot's “Spicy Mode” allowed users to instruct it to “undress” images of women, generating AI deepfakes with no consent and no meaningful safeguards. A study by AI Forensics, based on 50,000 tweets mentioning Grok published between 25 December 2025 and 1 January 2026, found that over 53 per cent contained individuals in minimal attire. Researchers reported that some of the generated images appeared to include children.

The UK's Ofcom moved first. On 5 January 2026, the regulator urgently contacted X and set a firm deadline of 9 January for the company to explain what steps it had taken to comply with its duties under the Online Safety Act. By 12 January, Ofcom had opened a formal investigation examining whether X conducted the required risk assessments before deploying Grok's image generation features, whether it took adequate steps to prevent the distribution of non-consensual intimate imagery and child sexual abuse material, and whether it implemented age verification measures to protect children. Ofcom's enforcement powers include fines of up to 18 million pounds or 10 per cent of a company's qualifying global revenue, whichever is higher. In the most serious cases, it can seek court orders requiring internet service providers or payment firms to withdraw services or block access in the UK.

The Information Commissioner's Office followed on 3 February 2026, launching its own investigation focused specifically on data protection. William Malcolm, the ICO's head of regulatory risk and innovation, stated that “the reports about Grok raise deeply troubling questions about how people's personal data has been used to generate intimate or sexualised images without their knowledge or consent.” He added: “Losing control of personal data in this way can cause immediate and significant harm, particularly where children are involved.”

Ireland's Data Protection Commission opened a parallel investigation under the GDPR, given that X holds its European Union operations in Ireland. The DPC's investigation focuses on the processing of personal data and Grok's potential to produce harmful sexualised images involving Europeans, including children. Under the GDPR, the DPC can levy fines of up to four per cent of a company's global revenue.

The regulatory net extends further still. French police raided X's Paris offices in early February as part of a widening criminal inquiry. Both Elon Musk and former X chief executive Linda Yaccarino were summoned for questioning. Governments and regulators in at least eight countries confirmed action against X and xAI. And the UK government fast-tracked provisions under section 138 of the Data (Use and Access) Act 2025, which came into force on 6 February 2026, creating new criminal offences for creating or requesting the creation of non-consensual intimate images, including AI-generated deepfakes of adults. The legislation also criminalises requesting someone else to create such images, closing a significant gap in English law that had previously left the initial creation of non-consensual intimate images outside the scope of criminal liability.

X responded by limiting the image editing feature to paid subscribers and announcing it would no longer allow users to edit images of real people in revealing clothing in jurisdictions where it is illegal. But by mid-January, reports indicated that the images were still being produced on X in the UK, France, and Belgium. The gap between corporate promises and technical reality is precisely what regulators are now probing.

Regulatory Fragmentation and the Enforcement Gap

The regulatory landscape for AI privacy is evolving rapidly but remains deeply fragmented, and that fragmentation itself is a privacy risk. In the European Union, the AI Act, adopted in 2024, creates a risk-based framework that subjects high-risk AI systems to specific obligations. It works in concert with the GDPR, which remains the world's most comprehensive data protection regulation. But in November 2025, the European Commission proposed a Digital Omnibus package that would amend both the GDPR and the AI Act, introducing changes that critics describe as significant deregulation driven by industry lobbying.

Among the most contentious proposals is a provision that would explicitly recognise the processing of personal data for AI training as a “legitimate interest” under the GDPR, removing the need for explicit consent. Another would narrow the definition of personal data, potentially stripping many pseudonymous identifiers, such as advertising IDs and cookies, of GDPR protection entirely. The deadlines for compliance with the AI Act's requirements for high-risk systems would be pushed back to December 2027 and August 2028. The obligation for AI providers and deployers to teach AI literacy to their users would be dropped altogether. Analysis published by Corporate Europe Observatory in January 2026 traced the influence of major technology companies on these proposals, characterising them as a systematic rollback of EU digital rights shaped “article by article” by Big Tech lobbying.

In the United Kingdom, the regulatory framework is being shaped in real time by the Grok investigations. The Online Safety Act 2023 created new duties for platforms to protect users from illegal content, and the Data (Use and Access) Act 2025 introduced criminal offences for creating non-consensual intimate images. But enforcement remains a challenge. Ofcom acknowledged that “because of the way the Act relates to chatbots,” it is currently unable to investigate the creation of illegal images by the standalone Grok service, only its distribution on X. The government has signalled it will table an amendment to the Crime and Policing Bill to require AI chatbot providers not currently in scope of the Online Safety Act to protect their users from illegal content. But legislation moves slowly, and AI moves fast.

In the United States, there is still no comprehensive federal privacy law. By February 2025, 19 states had enacted their own privacy laws, creating a patchwork of regulations that technology companies must navigate. The FTC has used its existing enforcement powers aggressively, securing settlements from Amazon and others, but its authority remains limited compared to European regulators. The structural problem is clear: AI systems operate globally, processing data across jurisdictions with incompatible legal frameworks. A query made by a user in London might be processed on servers in the United States using a model trained on data from dozens of countries. The legal protections that apply depend on where the user sits, where the server sits, where the company is incorporated, and which regulators choose to act.

The Invisible Bargain

What makes AI assistant privacy so difficult to address is that the bargain is almost entirely invisible to the user. When you install a social media app, you are at least dimly aware that you are exchanging personal information for a service. When you ask Siri to set a timer or check the weather, the transactional nature of the interaction is hidden. The service feels like a utility, not a data exchange.

But it is a data exchange. Every query you make generates metadata: when you asked, where you were, what device you used, what you asked before and after. Even if your specific words are anonymised and deleted, the patterns they create persist. In the AI era, privacy risks are increasingly metadata risks. As one legal analysis noted, AI makes inference cheaper and more accurate, meaning that even seemingly innocuous data points can reveal sensitive information when processed by a system optimised to find patterns. Your query history reveals your daily routines, your anxieties, your relationships, your health concerns, and your financial worries. Aggregated over months and years, this data constitutes a remarkably detailed portrait of your inner life.

The International AI Safety Report identified this dynamic explicitly, noting that Retrieval-Augmented Generation, a common technique used to personalise AI responses by feeding systems current and personal data beyond the model's original training set, “creates additional privacy risks” even when the underlying model itself is secure. The report also warned that AI can infer identities from indirect data even after de-identification efforts, and that privacy risks may extend to people who are not users of the system but whose personal information might be inferred through advanced data analysis.

IBM's 2025 data breach report added another dimension to the problem. It revealed that one in five organisations experienced breaches through “shadow AI,” which occurs when employees paste sensitive information, source code, meeting notes, and customer data into unauthorised AI tools. These breaches added an average of 670,000 dollars to breach costs. The risk is not limited to corporate settings. Any user who dictates a sensitive message through Siri, asks a health question through Alexa, or discusses financial details with Google Assistant is feeding data into a system whose ultimate disposition of that information depends on architectural decisions, corporate policies, and regulatory frameworks that the user cannot see and may not understand.

Survey data consistently reflects growing public unease with this reality. According to multiple industry surveys from 2025, 82 per cent of consumers reported being highly concerned about how their data is collected and used. Seventy per cent expressed little to no trust in companies to make responsible decisions about AI in their products. Fifty-seven per cent of people worldwide identified the use of AI in collecting and processing personal data as a serious privacy risk. Yet despite this anxiety, fewer than one in four American smartphone users reported feeling in control of their personal data online. The gap between concern and agency is the defining feature of the AI privacy landscape.

Apple's approach to this challenge is, by industry standards, genuinely ambitious. Private Cloud Compute represents a serious engineering effort to process AI queries without creating a permanent record. The company's willingness to open its systems to external security researchers and to offer bounties for discovered vulnerabilities distinguishes it from virtually every competitor. Users can generate reports of requests their iPhone has sent to Private Cloud Compute through Settings, covering periods of the last 15 minutes or the last seven days. But even the most robust privacy architecture cannot fully eliminate the risks inherent in routing the world's most personal queries through a model that Apple did not build, using training data Apple did not curate, with architectural decisions Apple did not make.

The AI assistant on your phone is no longer a simple voice-activated search engine. It is a system capable of understanding, inferring, and connecting information in ways that previous generations of technology could not. The 1.2-trillion-parameter brain inside your phone is extraordinarily powerful. But power, in the context of personal data, has always been a question of who holds it and what they choose to do with it. Right now, the answer to that question is: you do not hold it, you cannot fully verify what is being done with it, and the regulatory systems designed to protect you are still catching up to a technology that is already inside your pocket, already listening, and already far more capable than most people realise.

That should concern anyone who has ever asked their phone a question they would rather not say out loud.


References and Sources

  1. CNBC, “Apple picks Google's Gemini to run AI-powered Siri coming this year,” 12 January 2026. https://www.cnbc.com/2026/01/12/apple-google-ai-siri-gemini.html

  2. Bloomberg, “Apple Plans to Use 1.2 Trillion Parameter Google Gemini Model to Power New Siri,” 5 November 2025. https://www.bloomberg.com/news/articles/2025-11-05/apple-plans-to-use-1-2-trillion-parameter-google-gemini-model-to-power-new-siri

  3. MacRumors, “Apple Explains How Gemini-Powered Siri Will Work,” 30 January 2026. https://www.macrumors.com/2026/01/30/apple-explains-how-gemini-powered-siri-will-work/

  4. Apple Insider, “Tim Cook: Apple won't change privacy rules with Google Gemini partnership,” 29 January 2026. https://appleinsider.com/articles/26/01/29/tim-cook-apple-wont-change-privacy-rules-with-google-gemini-partnership

  5. Apple Insider, “Google confirms that it won't get Apple user data in new Siri deal,” 12 January 2026. https://appleinsider.com/articles/26/01/12/google-confirms-that-it-wont-get-apple-user-data-in-new-siri-deal

  6. TheStreet, “Apple's new Siri runs on Gemini, and there's an invisible catch,” 2026. https://www.thestreet.com/technology/apples-new-siri-runs-on-gemini-and-theres-an-invisible-catch

  7. Apple Security Research, “Private Cloud Compute: A new frontier for AI privacy in the cloud.” https://security.apple.com/blog/private-cloud-compute/

  8. Apple Security Research, “Security research on Private Cloud Compute.” https://security.apple.com/blog/pcc-security-research/

  9. Khezresmaeilzadeh, T., Zhu, E., Grieco, K., Dubois, D., Psounis, K., and Choffnes, D. “Echoes of Privacy: Uncovering the Profiling Practices of Voice Assistants.” Proceedings on Privacy Enhancing Technologies, Volume 2025, Issue 2, Pages 71-87. https://petsymposium.org/popets/2025/popets-2025-0050.php

  10. Northeastern University News, “Your voice assistant is profiling you, just not in the way you expect, new research finds,” 17 March 2025. https://news.northeastern.edu/2025/03/17/voice-assistant-profiling-research/

  11. Ofcom, “Ofcom launches investigation into X over Grok sexualised imagery,” 12 January 2026. https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/ofcom-launches-investigation-into-x-over-grok-sexualised-imagery

  12. ICO, “ICO announces investigation into Grok,” 3 February 2026. https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2026/02/ico-announces-investigation-into-grok/

  13. Euronews, “Ireland investigates Elon Musk's Grok AI over sexualised images,” 17 February 2026. https://www.euronews.com/next/2026/02/17/ireland-launches-large-scale-probe-into-elon-musks-grok-over-ai-generated-sexual-images

  14. CNN Business, “Grok AI: Europe's privacy watchdog launches 'large-scale' probe into Elon Musk's X,” 17 February 2026. https://edition.cnn.com/2026/02/17/business/grok-ai-sexualized-images-eu-probe-intl

  15. TechPolicy.Press, “Regulators Are Going After Grok and X – Just Not Together,” 2026. https://www.techpolicy.press/regulators-are-going-after-grok-and-x-just-not-together/

  16. Data (Use and Access) Act 2025, Section 138, UK Parliament. https://www.legislation.gov.uk/ukpga/2025/18/section/138

  17. FTC, “FTC and DOJ Charge Amazon with Violating Children's Privacy Law by Keeping Kids' Alexa Voice Recordings Forever,” 31 May 2023. https://www.ftc.gov/news-events/news/press-releases/2023/05/ftc-doj-charge-amazon-violating-childrens-privacy-law-keeping-kids-alexa-voice-recordings-forever

  18. NPR, “Amazon to pay over $30 million to settle claims Ring, Alexa invaded user privacy,” 1 June 2023. https://www.npr.org/2023/06/01/1179381126/amazon-alexa-ring-settlement

  19. IAPP, “European Commission proposes significant reforms to GDPR, AI Act,” November 2025. https://iapp.org/news/a/european-commission-proposes-significant-reforms-to-gdpr-ai-act

  20. Corporate Europe Observatory, “Article by article, how Big Tech shaped the EU's roll-back of digital rights,” January 2026. https://corporateeurope.org/en/2026/01/article-article-how-big-tech-shaped-eus-roll-back-digital-rights

  21. CMS LawNow, “Grok in deep trouble over deepfakes? What Ofcom's recent investigation means for online platforms,” February 2026. https://cms-lawnow.com/en/ealerts/2026/02/grok-in-deep-trouble-over-deepfakes-what-ofcom-s-recent-investigation-means-for-online-platforms

  22. Apple Newsroom, “Improving Siri's privacy protections,” August 2019. https://www.apple.com/newsroom/2019/08/improving-siris-privacy-protections/

  23. NPR, “Apple to pay $95 million to settle Siri privacy lawsuit,” 3 January 2025. https://www.npr.org/2025/01/03/g-s1-40940/apple-settle-lawsuit-siri-privacy

  24. Courthouse News Service, “Judge approves $95 million Apple settlement over Siri privacy case,” October 2025. https://www.courthousenews.com/judge-approves-95-million-apple-settlement-over-siri-privacy-case/

  25. International AI Safety Report 2025, published January 2025. https://internationalaisafetyreport.org/publication/international-ai-safety-report-2025

  26. Private AI, “What the International AI Safety Report 2025 has to say about Privacy Risks from General Purpose AI,” 2025. https://www.private-ai.com/en/blog/ai-safety-report-2025-privacy-risks

  27. Ofcom, “Investigation into X Internet Unlimited Company,” January 2026. https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/investigation-into-x-internet-unlimited-company-and-its-compliance-with-duties-to-protect-its-users-from-illegal-content-and-child-users-from-harmful-content

  28. Lewis Silkin, “Online safety reforms to be fast-tracked amid rising AI risks,” February 2026. https://www.lewissilkin.com/insights/2026/02/23/online-safety-reforms-to-be-fast-tracked-amid-rising-ai-risks-102mk2r


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk

 

from Shad0w's Echos

Izzy Is on Her Own

#Izzy #nsfw Izzy was alone with her thoughts and finished her shower in peace. No matter what she felt between her legs, this would not be the place where she could indulge in her own sexual pleasure. She didn't have any romantic options anymore. There was no sense of self. Masturbating in this environment would not be fulfilling.

When Izzy looked at her life outside the church, there was just a void. Izzy didn't know who she really was. Her intense emotions of rage had subsided, but over time, she felt the same despair that crushed her that morning.

“Keep it together, Izzy,” she said to herself. She couldn't falter now. She turned off the water and stepped out of the shower. Her wet, well-manicured feet led her golden-brown body out in front of the mirror. She took a look at herself and slowly studied her stunning nude female form. In one slow, fluid motion, she did a pirouette, watching her body move and jiggle in all the right ways.

“Izzy, you are beautiful,” she said to herself. Even if no man ever saw her body, she would always remind herself that she was beautiful. Izzy was coming to terms with the idea that she might be a spinster. It didn't sadden her. It was just part of her new identity.

Izzy dried her body, put on lotion, and got dressed in her evening wear. Usually, she would open her bible and read one of her favorite passages. At this moment, she didn't feel the need. Today, she had more important work to do. She needed to find a place to stay. A private space of her own to explore herself away from her parents' prying eyes.

Her parents were oddly silent this evening. They were watching TV down the hallway in their bedroom. The door closed. She didn't pay them any mind.

First thing Monday morning, Izzy found an affordable apartment, and she signed a lease by the end of the week. Her parents acted like nothing out of the ordinary had happened Sunday. Her mom never brought up having raised her voice. It was a very uneasy feeling. Her mother just gave her short, sheepish glances after that. Nothing more. It was just more confirmation that Izzy needed to get out.

The only thing in her favor was that she had quite a bit of savings. Managing finances was probably one of the few things her parents had taught her. The move was quick and efficient. Her dad helped her load her car; her mom was rarely present during the transition. She often made excuses about other matters she had to attend to.

Her dad, who was usually the quiet one, gave her a wealth of advice and tips to help her transition to this new chapter in life. He even suggested she buy a new phone all her own, reminding her that she was a grown woman who didn't need to be under their wing. It was almost like her father was a different person now that her mother was not there hovering and imposing any sort of control. She was actually able to bond with him.

Once the move was done, she hugged her dad goodbye. Her mom made her appearance on the last move day, gave the apartment a once-over, and walked out. She didn't hug her daughter, but she waved goodbye, teary-eyed.

She clutched her husband's hand tightly. Trembling. A little fearful. Her husband gave her a knowing look, sensing something was off.

“Don't let the world change you, baby,” she said, her voice quivering. She started to sob. She told no one the truth, not even her husband. What she heard from her daughter that fateful Sunday, a sound that shook her to the core, was the sound and weight of spiritual pressure that was not from the God she knew.

Later that afternoon a technician came by to set up her TV and internet service. Izzy's nervous and awkward innocence glowed like a neon sign as she let the stranger into her home. She could barely make eye contact. Her anxiety grew as the man showed her how everything worked with her new equipment. She didn't want to keep him long, so she nodded, acting like she understood. When he left, she sighed in relief. For the first time, Izzy was alone.

“Everything is new; this is a fresh start. Take it slow” — Her inner self reassuring her.

Then it dawned on her. She had never had access to a normal TV. She had barely checked her new phone. Parental control banners at every turn had conditioned her online and TV watching habits almost automatically at this point. But this was much different now. None of that existed in her home. With nervous anticipation, she turned on the television. Bright lights and harsh music filled her ears. She turned the volume down. A black woman stood before her, scantily clad, more exposed than any woman Izzy had ever seen. This woman had no shame. She had power and presence and didn't worry that she was practically naked in front of millions of people.

This polarizing, brash, and shameless woman turned and presented her ass to the camera. Her ass cheeks were hanging out, with barely anything left to modesty. The woman began gyrating provocatively, making every effort possible to advertise what was between her legs. She was chanting and rhyming. She yelled affirmations of feminine empowerment and obscenities. Then she said, “Fuck.” Izzy didn't know that was an actual word.

“Fuck!” Izzy gasped. She dropped the remote in complete shock. She couldn't look away. The stunned innocent woman was frozen in place by the hypnotic bombardment on her screen. The woman was still gyrating, and the crowd was cheering. Everything was telling her this was wrong, that this was not what she was supposed to be watching. But she didn't listen to that voice anymore.

A new sensation overcame her. The throbbing sensation between her legs was more intense than it ever had been. This walking embodiment of pure sin electrified her loins in ways that she never thought were possible.

Izzy had just watched her first music video.

 

from The happy place

Straddled at the mouth of the port of Rhodes, it’s said there once stood the Colossus, letting triremes pass between its legs.

Once I was in Greece with my parents. With my white, sun-bleached hair I might have resembled an albino, with my blood-red nipples as an extra pair of misplaced eyes.

Wearing my T-Shirt and a pair of swimming goggles, I was set to swim between the legs of a Greek man.

However, on seeing — when navigating the murky waters of the Mediterranean Sea — his penis hanging out through the open fly of his white boxers, I changed my mind.

It would’ve been infeasible for the Ancient Greeks to build the Colossus in such a way that he was actually straddling a body of water. He must’ve been standing on land.

And now it’s just dust…

And the man whose legs I didn’t swim through after all, who didn’t have the sense even to put a pair of proper bathing shorts on,

He was no Helios.

 

from Roscoe's Story

In Summary:
* A short but intense early afternoon work session in the backyard, working with those fallen branches from the front yard that I've moved to a backyard staging area, really wiped me out! But at least the big green organics bin is once again stuffed and ready for the Thursday morning collection.

On a happier note, I was able to catch most of the final Spring Exhibition Game for my Texas Rangers; they beat the KC Royals 4 to 1 this afternoon. That's my sports fix for the day out of the way now. And I'll be able to put my achy old body to bed real early tonight.

Prayers, etc.:
* I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics:
* bw = 232.15 lbs.
* bp = 150/86 (68)

Exercise:
* morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet:
* 06:00 – 1 big potato and egg breakfast taco
* 10:20 – 1 peanut butter sandwich
* 11:45 – baked fish steak with sauce, fish & veggie patties
* 13:45 – 1 McDonald's Double Cheeseburger, pizza
* 17:00 – 1 fresh apple

Activities, Chores, etc.:
* 04:30 – listen to local news talk radio
* 05:40 – bank accounts activity monitored
* 05:50 – read, write, pray, follow news reports from various sources, surf the socials, nap, yard work
* 13:20 – following the TX Rangers vs KC Royals MLB game
* 17:55 – listen to relaxing music

Chess:
* 14:45 – moved in all pending CC games

 

from Patrimoine Médard bourgault

Today, cultural mediation still rests on a simple logic:

adding.

Add panels. Add screens. Add visible explanations.

Each addition is justified. But, little by little, the place changes.

What was meant to be transmitted ends up surrounded, framed, sometimes covered over.


This project proposes another path.

Transmitting without adding.

Here, nothing is installed in the place. No panel. No screen. No visible device.

The place stays as it is.


And yet the content is there.

It is accessible. It is rich. It is alive.

But it does not occupy the space.

It appears only when the visitor seeks it, when they look, when they come closer.

The technology does not impose itself. It withdraws.


This change is fundamental.

The place is no longer transformed to make it understandable. The visitor is allowed to discover it, as it is, with invisible keys.

The place becomes central again. The mediation becomes discreet.


It is also a response to what the public now expects.

Today, visitors no longer want only to read. They want to understand, to hear, to feel.

But without being guided at every step. Without being surrounded by devices.

They want an experience that is freer, more direct, more true.


This approach makes that possible.

A visitor can:

  • hear a voice in the room where it was actually spoken
  • see an image appear exactly where it makes sense
  • access several levels of reading, without overload

And all of this without the place being transformed.


It is also a question of responsibility.

A heritage site is not a neutral space.

Each addition alters:

  • how it is perceived
  • how it is used
  • its balance

Here, the choice is clear:

add nothing that is not absolutely necessary.


Technology today makes it possible to move in this direction.

It makes it possible:

  • to transmit more
  • without installing more
  • to let content evolve
  • without modifying the place

The place becomes stable. The content becomes alive.


This project does not seek to show off the technology.

It seeks to make it disappear.

What matters is not the tool. It is what the tool makes it possible to preserve.


Preserving a place is not only protecting it.

It is avoiding transforming it needlessly.

It is accepting that it does not need to be explained everywhere, all the time.

It is giving the visitor back an active role.


This project simply proposes this:

a mediation that respects the place, a technology that leaves no trace, an experience that stays faithful to what is already there.

Raphael Maltais Bourgault

 

from Patrimoine Médard bourgault


TECHNICAL PROPOSAL

An invisible mediation system based on visual recognition

Unity + Vuforia – offline operation


1. Problem statement

Conventional mediation devices (panels, labels, screens, QR codes) have significant limitations in a heritage context:

  • dependence on the internet (QR codes, online content)
  • dependence on electricity (screens, kiosks)
  • the addition of intrusive visual elements
  • changes to how the place is used and perceived
  • a proliferation of physical supports that are hard to keep up to date

In a sensitive heritage site, these devices gradually erode the visual and symbolic integrity of the place.


2. Principle of the solution

The proposed system rests on a radically different approach:

Replace physical supports with an invisible digital layer, activated directly by perceiving the place.

The visitor uses an application installed on a tablet (or phone), running entirely offline.

The camera becomes the main interface:

  • it recognises the real elements (objects, rooms, places)
  • it automatically triggers the associated content

No physical addition to the site is required.


3. Chosen technology

Technical environment

  • Unity: the application's development engine
  • Vuforia: visual recognition (objects, images, spaces)

Types of recognition used

  • Model Targets → recognition of 3D objects (sculptures, furniture, tools)
  • Area Targets → recognition of whole spaces (rooms, areas of the estate)
  • Image Targets (complementary) → specific visual elements or archives

This combination makes it possible to attach content:

  • to an object
  • to a room
  • to a whole environment

4. How it works

Flow

  1. the visitor opens the application

  2. they point the camera at the place or object

  3. the system recognises the element

  4. the content is triggered automatically

Content types

  • audio (testimonies, ambient sound, narration)
  • text (explanations, quotations)
  • images (archives, comparisons)
  • augmented reality (visual overlay)
  • contextual messages (orientation, progression)
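The flow described in the section above comes down to an offline lookup: a recognised target resolves to the content it should trigger. Below is a minimal Python sketch of that dispatch logic only; it is not the project's Unity/Vuforia code, and every target name and file path in it is hypothetical:

```python
from dataclasses import dataclass

# Illustrative sketch only: names and paths here are hypothetical, not the
# project's real assets. Content is bundled on the device, so resolving a
# recognised target is a plain local lookup -- no network involved.
@dataclass
class Content:
    kind: str     # "audio", "text", "image", "ar", or "message"
    payload: str  # a bundled file path or inline text

# Each key stands in for a recognised target (a Model Target, Area Target,
# or Image Target in Vuforia terms); the value is the content it triggers.
REGISTRY = {
    "workshop_bench": [  # e.g. a Model Target for a carved workbench
        Content("audio", "audio/bench_story.ogg"),
        Content("text", "The sculptor's workbench, around 1930."),
    ],
    "main_room": [       # e.g. an Area Target covering a whole room
        Content("message", "Walk toward the window to continue the route."),
    ],
}

def on_target_recognised(target_name: str) -> list:
    """Return the content to trigger for a recognised target,
    or an empty list when the target is unknown."""
    return REGISTRY.get(target_name, [])
```

In the actual application this lookup would sit behind the recognition engine's callbacks; the point of the sketch is that once the content is packaged with the app, triggering it requires nothing beyond the camera and a local table.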

5. Offline operation

The entire system is designed to work without a network:

  • content bundled into the application
  • local recognition
  • no internet dependency

The setup relies mainly on:

  • dedicated tablets, configured for the visit
  • an optional companion mobile application

6. Types of experience

Free visit

  • intuitive exploration
  • spontaneous activation of content

Guided visit

  • a structured route
  • a controlled narrative sequence
  • messages guiding the visitor

Multiple layers

  • different levels of reading
  • content that evolves over time

7. Augmented reality – strategy

The system favours a measured approach:

  • restrained, immersive mediation along most of the route
  • strong augmented reality on 1 or 2 key elements

This strategy makes it possible:

  • to avoid visual overload
  • to preserve the credibility of the system
  • to create moments of high intensity (a climax)

8. Advantages

Respect for the heritage

  • no physical installation
  • no modification of the place
  • no visual pollution at all
  • complete reversibility

Technical reliability

  • offline operation
  • independence from networks
  • full control of the hardware (tablets)

Scalability

  • content can be added without any intervention on the site
  • routes can be modified over time
  • adaptation to new uses

Museum coherence

  • an immersive, quiet experience
  • visible technical supports disappear
  • natural integration into the place

9. Managing the system

Initial setup

  • selection of the points of interest
  • capture / modelling of the elements (objects, spaces)
  • creation of the content
  • integration into the application

Operation

  • handing out the tablets
  • content updates through new versions of the application
  • centralised management of the corpus

Maintenance

  • minimal (no equipment installed on the site)
  • software updates only

10. Limitations and solutions

Recognition conditions

  • dependence on lighting and viewing angles → solution: calibration and careful target selection

Complexity of the place

→ solution: prioritising the points of interest

Technical load

→ solution: optimised dedicated tablets


11. Positioning

This system is not merely a visiting tool.

It is an invisible mediation system that makes it possible to integrate advanced technologies without altering the place.

The technology is not added to the site. It is activated by it.


12. References and trends

Major museum institutions already use mobile devices to enrich the visit:

  • Smithsonian Institution (United States)
  • British Museum (United Kingdom)
  • Musée du Louvre (France)

These approaches use:

  • mobile applications
  • contextual digital content
  • augmented reality

The proposed solution fits into this evolution while going further:

complete removal of visible physical supports; mediation that is entirely digital and reversible.


13. An approach suited to today's challenges

At a time when cultural institutions are seeking to reconcile:

  • accessibility of content
  • technological innovation
  • and the preservation of heritage

it becomes essential to rethink the modes of mediation.

The proposed solution answers this requirement directly:

offer more content, without adding any matter to the place.

It moves away from a model based on accumulating physical supports, towards a logic that is more sustainable, more restrained, and better suited to sensitive places.


14. A response to public expectations

Today's public no longer wants only to observe:

  • they want to understand
  • they want to hear
  • they want to feel
  • they want to live an experience

This technology makes precisely that possible:

  • hearing a voice in a place where it actually existed
  • seeing an archive appear in the right spot
  • accessing several levels of reading according to one's interest

Without diverting attention from the place itself.


15. A tool in the service of the place, not the other way around

Unlike many digital devices:

  • the place is not transformed to accommodate the technology
  • the technology adapts to the place

That is an important reversal.

The estate remains as it is. The mediation comes to rest upon it without modifying it.

This guarantees:

  • respect for the integrity of the site
  • continuity of its use
  • and fidelity to its original spirit

16. A restrained, sustainable solution

This system offers concrete advantages in terms of management:

  • no infrastructure to install
  • no maintenance on the site
  • no wear caused by visible equipment
  • software-only updates

It is a solution that is:

  • lightweight
  • reversible
  • durable over time

17. A unique capacity to evolve

The content can evolve without physical constraint:

  • new testimonies can be added
  • routes can be progressively enriched
  • it can be adapted to different audiences
  • new research can be integrated

The place stays stable. The content, for its part, can grow freely.


18. A structuring potential for the project

This system opens up several perspectives:

  • creating thematic routes
  • showcasing existing archives
  • integrating audio and visual content already produced
  • bringing a living memory to the fore

It thus becomes a structuring tool for the development of the cultural project, without imposing any material transformation.


19. An approach consistent with the heritage mission

In a place steeped in history, the priority remains:

to preserve, to transmit, and to make understood.

The proposed technology does not divert that mission. It strengthens it.

It makes it possible:

  • to transmit without simplifying
  • to explain without overloading
  • to make accessible without distorting

20. Conclusion: a new kind of mediation

This project proposes a clear evolution:

  • from visible mediation to invisible mediation
  • from a fixed support to living content
  • from a fitted-out place to a respected place

It is not about adding technology. It is about removing everything that is not essential, and letting the place speak.

Conclusion

The Unity + Vuforia technology makes it possible to create mediation that is:

  • invisible
  • autonomous
  • evolving
  • respectful of the heritage

It offers a rare balance between:

  • technological innovation
  • the integrity of the place
  • creative freedom

Raphael Maltais Bourgault


 

from brendan halpin

Years ago I snarked at Michelle Wu on Twitter—she said something about supporting public education, and I asked her why she then kept voting for budgets that harmed it.

Her response was to reach out to me and ask if I wanted to get some folks together who knew about school budgets so she could listen to us and learn. Some time later, I got people who knew a LOT about school budgeting (I was in touch with such people then because Twitter facilitated building communities of like-minded local folks to get stuff done, which is probably another reason Musk wanted to kill it) together and we met with then-councilor Wu in the meeting room at the JP Library. She took the T from City Hall and walked 15 minutes from Green Street to the library. And she really listened. And took notes.

And so this is how I came to break one of my own rules, which is “don’t stan politicians.” I volunteered for Michelle Wu’s first run for mayor and really believed that, unlike Marty Walsh, she cared about people who live in Boston, not just people who use Boston. She had all kinds of cool progressive ideas for making the city a better place, so much so that she was derided in right-wing circles as a “radical left mayor.” (This was mostly because she opposed the secret police rounding up our brown neighbors.)

And then, running for a second term, she absolutely, conclusively THUMPED Josh Kraft in the primary, which is effectively the final in Boston because we are not electing Republicans here. So now she’s been unleashed to really enact her progressive agenda!

Except…it’s not happening. She’s frozen work on a bunch of safe streets projects. (i.e. projects that may inconvenience car drivers in order to make the street better for people walking, biking, and using public transit.) The city may lose federal funding already allocated to these projects if they are frozen too long.

The new city budget (the council technically votes on the budget, but the way Boston is set up, the mayor has a ridiculous amount of power over the budgeting process, so I’m laying this at her doorstep) eviscerates the schools. Hundreds of young teachers across the city are losing their jobs. Class sizes will increase. The quality of education will decrease.

Meanwhile the Wu-appointed school committee voted to give Superintendent of Schools Mary Skipper a 15% raise. (!)

Oh yeah, and the Boston Police Department is level-funded. (The BPD’s overtime budget, which is primarily spent on having cops stand around and do nothing outside of construction sites, eats up 100 million dollars per year.)

So—keeping the city car-centric and prioritizing policing over education. Actually over pretty much everything else, as most city departments have had their budgets frozen.

Man, I’m glad we didn’t elect the billionaire!

So why, with an absolutely absurdly strong showing in the recent election, has Michelle Wu suddenly abandoned the priorities she professed? Well I have an idea.

We know she’s ambitious, which I do not hold against her. She doesn’t want to be Mayor of Boston forever, which I think is a good thing. The city certainly didn’t benefit from being Tom Menino’s personal fiefdom for 21 years. We also know she’s a mentee/former student of Elizabeth Warren, whose current term will expire in 2030, after she turns 81 years old. Perhaps Warren has given Wu the heads up that there’s going to be a vacant Senate seat in 4 years, and Wu, who is widely loathed in the suburbs, is selling out Boston in order to win over the suburbs. And the wealthy suburbanites who bankroll Senate campaigns.

The sad thing about this is that abandoning making Boston a better place to live does absolutely nothing to shore up Wu’s chances with people who will never forgive her for being “from Chicago.” (She is originally from Chicago, but has lived in Greater Boston for nearly 20 years and chose to settle and raise a family here. People who complain about her being from Chicago use it as code for other facets of her identity they’re not allowed to complain about openly, at least in Massachusetts.)

Another incredibly dumb thing about this strategy is that it follows the conventional idiocy of the Democratic Party, which seems to be “don’t do anything that might alienate Republicans.” But people are hungering for politicians they can support who seem to actually have principles and who are willing to ruffle feathers in order to get things done. Wu is a skilled politician who has the ability to explain progressive policy choices, and people like the idea of a politician who stands for something!

Instead, it looks like she’s decided to follow the failed Democratic playbook of pretending to be progressive and then being centrist. Thanks, Obama! No, literally, thanks, Obama, who won the presidency in Michelle Wu’s sophomore year of college by pretending to be progressive and then proceeded to be a moderate conservative President.

Nobody can predict the future, and it may well be that Wu’s intelligence and charisma and the fact that she’s both a woman and a Chinese American will give her the appearance of progressivism to the statewide electorate while not actually ruffling the feathers of the big money people who are ruining everything. Good luck to her, I guess.

But damn—is it so much to ask that Democratic voters actually get the candidate we voted for? People on the right vote for hatemongering theocrats and by and large get exactly that. And hatemongering theocrats who fight like hell to enact their troglodytic priorities! Where the hell is that energy from the Democratic party?

I’m going to continue to vote because I believe that it’s foolish to abandon any of the tools at my disposal to make the world better, but I have probably knocked on my last door as a campaign volunteer.

I say that, though the next time we get someone posing as a progressive running for mayor, I’ll probably support them enthusiastically as well, hoping, like Charlie Brown, that this time I’ll finally get to kick the fucking football.

 

from fromjunia

My care team doesn’t understand me. They pretend they do. But they offer sympathy, not compassion. Textbook dialogue and sterile warmth; there is no soul behind their surgical reassurance. I swear, I can see it in their eyes. They understand too little and say too much.

They place me in hell and call it health. Progress to them is that I suffer in new ways. That suffering is my problem, not theirs. I’m left miserable while they feel proud of what a good job they did in helping me return to the arms of my fears and pains.

Other disordered people get it. Not everything, and not all the time, but enough. I love them. They understand the safety that an eating disorder offers. They understand the pain of trying to separate from it. My clinicians? They learned from words. Words lie. They follow a shadow of a scientist’s interpretation of my situation. Disordered people actually know the reality.

 

from Tuesdays in Autumn

Having latterly acquired an oil-painting for the first time in decades, it only took me two more weeks to get another one. I had wondered what art might be on offer at ebay. What I found there was a bewildering variety of unappealing work at price points ranging from tens to tens of thousands of pounds. Among the affordable options were very few that caught my eye – but after browsing for some time a still-life painting did eventually give me pause for thought. It was listed in an auction starting at only about £45. Expecting to be outbid, I threw in a bid, and was surprised when it proved to be the only one.

The painting (Fig. 17) depicts five items: an antique brass microscope; an empty glass jar; a seashell; a piece of rock crystal; and an ammonite fossil. Unlike the last piece I bought there's a clearly legible signature: R.P.B. Gorringe. He, it seems, was a freelance illustrator and painter who also did some teaching. It's a small piece in a good frame that has improved a corner of my hallway. There's scant chance of my accumulating any fine art on my budget: I'm happy to have been able to expand my humble collection at relatively low cost.


I had not suspected that kora virtuoso Toumani Diabaté had recorded with bassist extraordinaire Danny Thompson, until reading about the album Songhai a few weeks ago. It's a record that they made with the Spanish flamenco ensemble Ketama in 1988. I was alerted to its existence via a list of notable nuevo flamenco records compiled by Noah Sparkes at Discogs. I ordered a second-hand CD copy, which arrived on Friday.

What a wonderful record! It's a joy to hear the interplay of influences from either side of the Mediterranean. The strumming guitars and the twangling kora are the obvious co-stars, with Thompson's bass oftener in the background than the spotlight. Here's some footage of this ensemble performing the album's opening track 'Jarabiu', in a live performance. And here's some more with just Diabaté and Thompson on stage. In '88, nuevo flamenco was having a moment with the worldwide success of the Gypsy Kings. Diabaté, meanwhile, was just beginning to build his international reputation. As for Thompson, his other engagements around that time included contributions to records by Richard Thompson, Sam Brown and Talk Talk.


Murphy the cat reached the milestone of his eighteenth birthday on Wednesday. Although very much an elderly gent these days he still hasn't lost his good looks (Fig. 18).

Also on Wednesday, there came, just after dinner, the unmistakable sound (an enclosed descending flutter) of a bird falling down the chimney. Could it have been a sort of birthday offering for Murph from a well-meaning but misguided higher power? As it happened cat and bird did not meet, with the latter (fortunately uninjured) needing no prompting to leave as soon as I'd contrived to open up an exit route for it.

 

from folgepaula

  • Ok just be honest with me, do you not wanna come to Achensee with us next weekend or not?
  • ÄÄÄÄ of course I wanna go to Achensee next weekend with you? Why would you say that?
  • Well the way you were talking about it before and then you were like “Oh yeah next weekend we have Achensee..”
  • Haha what?
  • “Achenseee...”
  • I was just telling them what we are doing next weekend, we're going to Achensee with your family.
  • It wasn't what you said, it was the way you said it. Like you went down on “Achensee” like that wasn't something you were looking forward to do.
  • That's just how I talk, it's not my language, I had to think for a minute, I don't know, you are joking right?
  • No that's actually not, that's actually not how you talk. Like when you talk about true crime it's like
  • But it's TRUE CRIME!
  • See? “TRUE CRIME!” “Achenseee..”
  • Man, you cannot compare true crime in general with other stuff, and true crime is not a german word and yes oh my god, can we go back to the part I just said I am excited to go to Achensee? Excited like in a natural way, it's a family thing, just as much as you are excited for, I don't know, for Weinwandertag on my birthday with my cousins, I mean, it's..
  • Weinwandertag...
  • I'm sorry?
  • I said “Weinwandertag..”
  • Why are you saying it like that?
  • You see?
  • What? Do you not wanna go?
  • Well, exactly. You see how that make me feel like you might not wanna go? What if I'd say WEINWANDERTAG ON YOUR BIRTHDAY!
  • That's nonsense. You want to put up a fight, like you are invested into creating a problem. I'm not falling for that.
  • It was just an observation about your tone.
  • Ok then let's talk about your tone the next time I invite you to... I don't know, my fucking japanese movies at Filmcasino
  • Sure.
  • So do you want to join me at Filmcasino tomorrow?
  • I mean when? I have stuff to do, but yes, fine
  • SEE, you said “fine”
  • FINE!
  • Not “yes, let's do it!”
  • It's just Filmcasino, how am I supposed to get excited?
  • EXACTLY, that was my whole point, I am not expecting you to get overexcited, unless...
  • What?
  • I mean, I can give you a hand with getting excited.
  • Are you deviating this conversation to flirting with me as you always do?
  • No. I'm preventing this straight relationship from sounding like a weird lesbian fighting cause speaking of tone, that's what it sounds like to me right now, and by the way, YES, I AM TRYING TO FLIRT WITH YOU TO GET YOU EXCITED OMG HOW DARE I? IT MUST BE HORRIBLE TO BE YOU RIGHT NOW
  • Hahahaha
  • Oh a laugher, is that a white flag?
  • Yes..
  • You ready to get excited?
  • I am excited
  • Nice. Hold your fire, I need a shower
  • Should I join?
  • Sure. I mean, it's not ACHENSEE, but..

/2022

 

from ksaleaks

The Kwantlen Student Association (KSA) has been frequently in the headlines for all the wrong reasons: alleged mismanagement, flagrant lack of transparency, and questionable fiscal priorities. While it is easy to blame the individuals currently in power, the ultimate root of the problem is a 25-year-old legislative failure.

The dysfunction we see today is the direct result of a specific 1999 policy shift that stripped universities of their oversight and handed student societies a blank check with no strings attached.

The Great De-Regulation of 1999

Before 1997, the College and Institute Act ¹ provided a common-sense balance of power. Under the original Section 21, university boards held the discretionary power to collect and remit student fees. Most importantly, the law included a “kill switch”: boards could stop collecting fees if a student society failed to comply with the Societies Act, failed to produce audits, or—crucially—failed to maintain “sound fiscal management.”

That safeguard was erased during the premiership of Glen Clark. In 1999, responding to intense lobbying from the Canadian Federation of Students (CFS) and various B.C. student societies seeking “total autonomy,” the provincial government fundamentally rewrote Section 21 (see its original here).

From “May” to “Must”: The Death of Oversight

The shift occurred via Section 4 of the Miscellaneous Statutes Amendment Act (No. 3), 1999 (also known as Bill 81). While the NDP government passed the law in 1999, it officially took effect on June 1, 2000, under BC Reg 407/99. This legislative “one-two punch” introduced two catastrophic changes that remain on the books today:

  1. Mandatory Remittance: The original Act stated that boards “may” collect and remit fees. The 1999 amendment changed this to “must.” This wasn't a grammatical tweak. It transformed university boards from overseers into collection agents. Institutional boards were stripped of their discretion and legally compelled to hand over millions in student funds, regardless of a society’s track record.

  2. The Deletion of Fiscal Standards: Previously, Section 21(2) allowed a board to stop the flow of money if a student society failed to maintain “sound fiscal management” in the opinion of the board. The 1999 amendment deleted this “fiscal management” clause entirely. By removing the board’s ability to act on its professional judgment of a society’s financial health, the province effectively blinded the only body capable of providing immediate, local oversight.

As Jennifer Saltman reported in The Province, these changes were pushed through despite explicit warnings from institutional leaders. The result was a legislative paradox: student societies were granted the status of fully autonomous private entities, yet they were funded by a “mandatory tax” that the university was legally forced to collect and could no longer withhold for bad behavior.

The Logical Outcome of Legislative Design

When you grant a private society a guaranteed, multi-million dollar revenue stream and simultaneously strip the funding body of its power to demand financial competence, you don't get “autonomy”—you get systemic risk.

Under the current version of the Act:

  • Universities are “Handcuffed”: Even when a board sees red flags, they are legally obligated to continue the flow of cash.
  • The “Must” Clause Prevails: As long as a society exists on paper, the money must flow. Whatever concerns the remitting university may have, it is compelled to release these fees, which include portions of taxpayer-funded public student loans.
  • Fiscal Accountability is Optional: Without the credible threat of losing their fee-collection status, societies have little incentive to prioritize professional governance over internal politics. Under the current law, the only remaining recourse for students is often the court system: a path that is fundamentally impractical.

Student associations frequently shield themselves behind expensive legal counsel and opaque bylaws, creating a daunting labyrinth for students who are already balancing full-time academic loads. This creates an asymmetric power dynamic: student leaders use mandatory student fees to fund legal defenses against the very students who are forced to pay them. When the law removes institutional oversight, it leaves 19-year-olds to act as their own private investigators and litigators—a burden no other segment of the public is expected to carry.

Restoring the Safeguards

The ultimate progenitor of these fundamental issues plaguing the Kwantlen Student Association since early 2021 is not a student; it is a 1999 legislative amendment that prioritized political lobbying over fiscal responsibility. To fix this, the Province of British Columbia must revisit the College and Institute Act ² and restore the protections that existed prior to the Glen Clark era.

The solution requires three specific legislative actions:

  • Revert “Must” to “May”: Return discretionary power to the boards so that fee collection is earned through transparency.
  • Restore the “Sound Fiscal Management” Clause: Explicitly allow institutions to pause remittance if a society is in breach of the Societies Act or failing its audits.
  • Protect the Student Paycheck: Ensure that mandatory fees are treated with the same level of provincial oversight as any other public fund.

The Bottom Line

We cannot expect student societies to reform themselves when the law provides them with a guaranteed income regardless of their behavior. The cracks in this 25-year-old system are now so wide that the Province has been forced to intervene.

In March 2026, Finance Minister Brenda Bailey took the rare step of issuing a ministerial order to freeze the assets of the Kwantlen Student Association, citing concerns over “problematic conduct” and the potential misuse of funds. While this investigation is a necessary emergency measure, it is a symptom of a deeper disease. The government shouldn't have to wait for a “Registrar of Companies” report or a multi-million dollar deficit to trigger a freeze; the authority to enforce accountability should have remained with the institutions all along.

It is time for the Ministry of Post-Secondary Education and Future Skills, led by Jessie Sunner, to coordinate with the Ministry of Finance to undo the damage done decades ago. We must restore the institutional safeguards that were stripped away in the name of “autonomy.” Until the College and Institute Act and the University Act ³ (Kwantlen Polytechnic University is now designated as a university) are amended to prioritize the students who pay the fees over the organizations that spend them, these scandals will continue to repeat—not by accident, but by flawed design.

References:

[1] https://www.bclaws.gov.bc.ca/civix/document/id/rs/rs/96052_01
[2] https://www.bclaws.gov.bc.ca/civix/document/id/complete/statreg/96052_01
[3] https://www.bclaws.gov.bc.ca/civix/document/id/complete/statreg/00_96468_01

This article was informed by the writing and research of Stanley Tromp from canadafoi.ca.

 

from Faucet Repair

23 March 2026

Found a Bush TR82 transistor radio in my house. The Bush company (still active) apparently takes its name from Shepherd's Bush in London, which as it happens was the first neighborhood I lived in when I came to the UK. This particular model was introduced in 1959 and was apparently popular for its design and portability. But I noticed it for its dial—wave frequencies and various cities around the world (Gothenburg, Istanbul, Copenhagen, Zurich, Glasgow, Bordeaux, Warsaw, St. Petersburg, Prague, Amsterdam, Helsinki, Nice, Vienna, Athens, Rome, Geneva) encircle a tiny convex mirrored surface at the center of the dial. I've been carrying the radio around with me, using this mirrored surface to reflect spaces (and then photograph those reflections) as references. It's a wonderful thing that happens with the way this mirror compresses and simplifies spaces into contrasting tones and blocks of color; the mirror seems to heighten highlights and darken shadows. I'm wary of singularizing detail being lost in that process, but seeing a space minimized in size and reduced to its overarching tonal relationships has created a path towards exploratory extrapolation in my sketching process that is really proving useful towards approaching observation with a fresh sense of malleability.

 

from Kroeber

#002324 – 09 October 2025

I cycle home. And I recognize the turquoise dress. It turns out she isn't a girl after all, she's an older woman. Youth fit the description so well that it came out without hesitation when I mentioned the scene from the mini-production for social media. Only now, as we cross paths on the walkway, does the age of the person I described turn out to be close to my own. Who knows what misapprehensions I cause, with my enormous belly rounding out my yellow jersey, in those who watch me ride past and cross paths with me later.

 

from 3c0

(Notice I use the word redundancy… versus inefficiency.)

  • A cramped creative space
  • Lists and lists without deadlines
  • Awareness of my own awareness of being a hoarder, and yet perpetuating maximalism and hoarding
  • Defining boundaries and telling people how important they are but not clearly communicating them in every single one of my own interactions.
  • Not following my vegetarian weekdays and omnivore weekend diet and then falling ill

I left the above list open and hanging, unpublished as I stewed in bed, with whatever illness this is. Is it a cold? A flu? COVID? No idea. I’m returning to this window to babble and then hit publish.

As part of the lecture-performance I went to, we did quite an intimate thing passing our mobile phones around in a circle last Monday (long story) and I suspect that is where/how I might have caught a bug. Somewhere in public, exposed to other people’s hygiene habits. Bah. How dreadful. I could barely raise my head for the last two days. I don’t think I had a fever but felt very dizzy. I haven’t been this sick since last year… I thought I was doing okay building my immunity and strength again. I got that weird stomach bug exactly a month ago, but it wasn’t too bad. It barely lasted a few days.

I was telling a friend that I intuited getting sick. I had this thought just as we finished passing each other’s phones around. I was so excited about the lecture-performance piece that I lost track of whether or not I sprayed my hands with hand sanitizer and if I washed my hands as soon as I got home as I normally do. Then, this. Constantly resetting helps train our attention to be mindful and to notice how every single thing is connected. Hands. Phones. Mouth. Eyes. Bacteria. Touch. Taste. We’re so distracted, we have all the technology to be well, to live well and yet, here we are still making ourselves sick.

 
