Want to join in? Respond to our weekly writing prompts, open to everyone.
from
Rippple's Blog

Stay entertained thanks to our Weekly Tracker giving you next week's Anticipated Movies & Shows, Most Watched & Returning Favorites, and Shows Changes & Popular Trailers.
new Send Help
+1 Peaky Blinders: The Immortal Man
-2 War Machine
new Project Hail Mary
-3 Good Luck, Have Fun, Don't Die
new How to Make a Killing
new Hoppers
new GOAT
= Scream 7
new Mercy
= The Pitt
= Paradise
+3 Invincible
= The Rookie
= Shrinking
+1 High Potential
new Daredevil: Born Again
= Marshals
= Monarch: Legacy of Monsters
-7 ONE PIECE

Hi, I'm Kevin 👋. I make apps and I love watching movies and TV shows. If you like what I'm doing, you can buy one of my apps, download and subscribe to Rippple for Trakt or just buy me a ko-fi ☕️.
Dear Silvia:
I hope you can read these lines, which come straight from my soul. I am writing them from this terrible place so that you will understand me and know the truth about what happened.
It was all your brothers' fault: knowing that I don't drink, they gave me that bean liquor. I don't know what mystery it holds, but it went straight to my neurons and I saw, first, hallucinations, and then, clearly, the mathematical formalisation I have been seeking all these years as director of the Academy.
I felt the liquor burning my skin; I started taking off my clothes, tore off the tablecloth, threw it on the floor, and what I was jotting down on it with the pen is nothing less than the very proof of the Grand Unification, the long-sought Theory of Everything, which I saw in my mind thanks to the intense synapses. Just as I was about to finish, I began to say what sounded like ravings; I felt them grab me and handcuff me, and from the station they brought me by boat to the island, on what charges I don't know, and here I am writing to you, begging forgiveness from you, from our families, and from all the guests at our wedding.
What a disaster; what more can I tell you when I can barely remember anything. Please find out where the tablecloth is; I hope they haven't washed it or thrown it in the rubbish.
Silvia, I love you; you are the woman of my life.
Yours, Gilberto
from
ThruxBets
Really enjoyed the start of the flat season yesterday. I always like this time of year when things get going again on Town Moor. The clocks going forward, the lighter evenings, longer days – it all seems to lift people a touch, whether they notice it or not.
Despite that, I didn’t have a bet.
Partly because nothing really appealed, but also because I’m trying to be a bit more focused this flat season. The plan is to narrow things down a bit and spend more time on a specific type of race.
I’m going to concentrate on 4yo+ handicaps over 5f to 1m in class 4, 5 and 6 company. I’ll still be having bets elsewhere when something stands out, but this is where most of the attention will be. Whether it leads to any kind of minuscule edge remains to be seen, but God loves a trier and all that.
So onto Sunday, where there are four races that fit the bill and are worth getting stuck into …
1.47 Doncaster
The opening race on the card and a competitive affair, but with six places on offer at several bookies I’m going to chance the Tony Coyle trained EH UP ITS JAZZ. His run LTO can be ignored: it was on the AW (where he is 0/5) and was hopefully a bit of a pipe opener for this. He’s on a workable mark of 67 (has placed off the same and won off 64) and is 221122 in class 5 company on the turf.
EH UP ITS JAZZ // 0.5pts E/W @ 10/1 (6 places) Paddy Power
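For anyone new to the staking shorthand above, "0.5pts E/W" is really two 0.5pt bets: one on the win and one on the place. A minimal sketch of how such a bet settles, assuming the bookie pays 1/5 of the win odds on the place part (a common term for enhanced-place handicaps, but terms vary by firm and race, so check the actual offer):

```python
def each_way_return(stake_each_way, win_odds, placed, won, place_fraction=1/5):
    """Total payout (stake included) for an each-way bet.

    stake_each_way: points on each half of the bet (total outlay = 2x this)
    win_odds: fractional odds as a float, e.g. 10/1 -> 10.0
    place_fraction: share of the win odds paid on the place part (assumed 1/5)
    """
    payout = 0.0
    if won:
        payout += stake_each_way * (win_odds + 1)                    # win half
    if placed:  # a winner also counts as placed
        payout += stake_each_way * (win_odds * place_fraction + 1)   # place half
    return payout

# 0.5pts E/W @ 10/1, from a 1pt total outlay:
print(each_way_return(0.5, 10.0, placed=True, won=True))   # wins: 7.0 pts back
print(each_way_return(0.5, 10.0, placed=True, won=False))  # places only: 1.5 pts back
print(each_way_return(0.5, 10.0, placed=False, won=False)) # unplaced: 0.0
```

So at 10/1 with six places, even a place-only finish returns 1.5pts on the 1pt outlay.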
3.30 Doncaster
I couldn’t unpick the first division of this race, so have left that alone. The second, however, looks a bit more of a betting medium, and I’m going to chance Jamie Osborne’s EPICTETUS in it. He’s 0/8 on the AW so I’m discounting all his winter runs, and doing that takes us back to his turf form, where he was contesting valuable class 2 handicaps. Admittedly – bar one placed effort at Goodwood – he never really landed a blow, but these were much better races. He should strip fitter than many of his rivals here today, and the drop back to 7f should pose no problems. Looks to have a very decent chance.
EPICTETUS // 0.5pts E/W @ 7/1 (4 places) Coral
5.50 Doncaster
Charlie Mason looks to have a really good chance but at 4/1 looks really short to me. So a chance is taken at slightly more generous odds on JUAN LE PINS, who came back to form a couple of weeks ago and won LTO at Newcastle. The return to turf shouldn’t inconvenience him, and he’s actually running today off a mark 5lbs lower than his best runs last summer, in an easier race, so you’d think he’d be competitive. Just hope the ground doesn’t get any softer.
JUAN LE PINS // 0.5pts E/W @ 7/1 (4 places) William Hill
from An Open Letter
I had an absolutely wonderful day today: I got a full set of drums, along with a ton of other instruments for my band. I also went to LA for a leap concert, and it was absolutely fucking phenomenal. I got a vinyl signed by all of the members, got photos, and got to talk with all of them. On my drive home, after watching a video on the benefits of loneliness, I decided to raw dog the rest of the ride, so I spent 40 minutes with no music or anything like that. I just thought, and it was incredibly peaceful.
from 下川友
On a trip, I bought baumkuchen as a souvenir for my mother. I had never given her a present before, not even on her birthday, so it surprised me a little, and reassured me, to find myself doing it naturally. Not out of obligation; I was doing it openly, of my own accord.
I drove to my parents' house. Noticing that I now naturally do the things the adults around me did when I was a child, I felt my own end drawing nearer. At the house I handed over the souvenir and we chatted for about an hour. It was a natural conversation, the kind that simply dissolves into the air.
The next day I went cherry-blossom viewing with my wife, at Kinuta Park, which we visit every year. When we had just met we went by train, but since buying a car we drive there. The cherry trees at the park entrance are already in full bloom, but that is not yet the true heart of Kinuta Park. Across a bridge marked "Ichi-no-hashi", an even more beautiful scene opens up, and we call that place heaven.
We spread a picnic sheet there, ate the Starbucks wiener roll we had bought, and drank a Dark Mocha Chip Frappuccino. Normally I order a Starbucks Latte, but on special days like this I choose a Frappuccino. This year's blossom viewing was much warmer than in other years.
My wife is leaving her company, so on the way home we stopped in Jiyugaoka to buy thank-you sweets for her colleagues. This life of getting around by car with my wife, going wherever we please on a whim, is part of the life I always wanted to live. I want to keep it going. Time at home and travel by car: that, to me, is the ideal way to live.
My current job doesn't allow working from home, so from tomorrow I'll push on with the job hunt again. Making portfolio work for job applications feels like attempting something one level above my actual ability, and the days are mentally draining. Still, all I can do is hold on.
from
Talk to Fa
My mind and heart are dancing in harmony. My body feels warm, and I have been crying more. This started after the encounter with the horseman from the valley. As we rode our horses around the ancient rocks, he sang a Navajo song about the air we breathe. His grandma, a singer and herbalist, taught him the song. I felt incredibly touched and humbled. The next day, I hiked in the biggest wind I’d ever felt. It was as if the air song called in all the winds. I felt the power throughout my body, from head to toes to my fingertips. Later, he sent me a couple of songs he sang. One of them vibrated in my heart, and the other in my throat, third eye, and head. It was a visceral experience. I felt it immediately — the healing, the opening and softening of the heart, and the remembrance of the soul.

from
Notes I Won’t Reread
Yesterday, I disappeared a little. Let’s not get too dramatic now. It wasn’t in some life-changing way. I just left my phone somewhere so I wouldn’t keep reaching for it. Not checking. Not scrolling. That day felt longer like that. But hey, at least it was cleaner. Well, almost. Like time slowed down just to watch me think. I thought perhaps I’d feel lighter (I didn’t). By the evening, it all came back at once. You. The habit of you. The silence where you used to be. It’s still strange to me how absence can feel so loud.
And then I heard something about you. Not from you directly, of course. You’d rather not talk to me at all. But it was enough to understand that you’re not exactly… untouched by all of this, I could say. You’re not as far away as you pretend to be. And I don’t know what to do with that. Because if that’s true, why am I hearing it from the world instead of you? Or maybe I’m just not someone you come to anymore. Maybe I’ve been moved from “person” to “thought.” Something you feel sometimes, but would rather not act on. Still. It made me happy. Embarrassingly happy. The kind of happy I’d make fun of someone else for, and you would’ve laughed. But it felt like something heavy in my chest loosened for a second. Oh God, this is going to sound stupid, but it was like something divine brushed past me just to remind me I still exist in your world.
Pathetic, right?
Today, even though it’s only just started for me, has been pretty empty. I had a coffee this morning. It tasted like nothing. Which is impressive, considering coffee is literally designed to have a personality. Mine didn’t. Just like my life lately. I’ve been smoking way too much these days. From the second I wake up to the second I force myself to sleep. It’s not even enjoyable anymore. Just a habit. Like I’m trying to fill space with smoke because I don’t know what else belongs there. And you’re still everywhere. I see you in places you’ve never been. Hear you sometimes: quick, soft, gone before I can prove it was real. It wakes me up with my heart racing like I just got caught doing something wrong. Maybe I did. Maybe being obsessed, crazy over you like that, counts as a mistake you would’ve told me about.
But you know, that chest pain I’ve been carrying around like a personality trait? It eased. Just a little. All because of that message that wasn’t even from you, but still somehow was. That’s all it takes now, apparently. Fragments of you. Secondhand feelings. I’ve lowered the bar so much it’s practically underground.
Anyway. I don’t know what I’m going to do today. Maybe I’ll go out. Maybe I’ll try something I used to enjoy, just to see if I still can. No big redemption arc. No dramatic comeback. Just passing time in a slightly less miserable way. That’s where I’m at.
But yeah, if we’re being honest, I’d drop all of that in a second just to hear from you properly.
Sincerely, Your Unfinished Curse Of Me.
from iris-harbor
There's a part of me that's locked away
She's screaming and crying and trying to escape
She's pulling her hair and clawing her eyes out
She's crushing her ears beneath her hands as if this could take away her anguish or drown out the memories

The memories I cannot remember but she cannot escape
She protects me at the cost of herself
Screaming, crying, thrashing
She can't get away

She's trapped in a prison of torment and agony
Anytime I come near, she drives me mad with her screaming
I can't let her out, can't even crack the door
Letting her out would destroy me
She's like the worst hurricane and firestorm combined
A swirling vortex of terror

She's buried deep within me
So deep I forgot she existed
Until one day, there she was
I was staring right at her through the bombproof glass
The part of me that's been buried for so many years

She's terrified and terrifying
She won't survive in there
In that prison cell of horror
But I won't survive her if her storm is released

She's clawing and fighting and screaming and thrashing
The horrifying memories filling every minuscule space of her reality
They replay on repeat

A storming vortex of violence and violation
A black hole of torture and torment
She can't escape
But how could I save her without destroying me?
We're trapped
from
SmarterArticles

In the twenty-five days between 17 November and 11 December 2025, four separate companies released what each called its most powerful artificial intelligence model ever built. xAI shipped Grok 4.1. Google launched Gemini 3. Anthropic dropped Claude Opus 4.5. OpenAI unveiled GPT-5.2. Before anyone in Brussels, Washington, or London could finish reading the safety documentation for one of these systems, the next had already landed. Then, barely two months later, Anthropic released Claude Sonnet 4.6, its second major model launch in that short span.
This is not a temporary burst. It is the new normal. OpenAI has surpassed $25 billion in annualised revenue and is reportedly taking early steps towards an IPO. Anthropic is approaching $19 billion. According to BCG's AI Radar 2026, 65 per cent of CEOs say accelerating AI is among their top three priorities for the year. McKinsey reports that 88 per cent of organisations now use AI technology in at least one business function. The competitive pressure is relentless, and it exposes a structural problem that no amount of political will or regulatory ambition has yet solved: the institutions charged with governing artificial intelligence operate on timescales that bear essentially no relationship to the timescales on which the technology itself evolves. The question is no longer whether reactive regulation can keep up. It cannot. The question is what replaces it.
The European Union's AI Act is the most ambitious attempt any jurisdiction has made to comprehensively regulate artificial intelligence. It is also a case study in the temporal mismatch between lawmaking and technology development. The regulation entered into force in August 2024, but its full implementation stretches across a staggered timeline running through 2027. Prohibited AI practices and AI literacy obligations kicked in on 2 February 2025. Rules for general-purpose AI models applied from August 2025. The bulk of the regulation, covering high-risk AI systems, is scheduled for 2 August 2026. Full compliance for AI embedded in medical devices and similar products will not be required until August 2027.
Even this elongated timeline has proved too aggressive. Over the course of 2025, it became clear that the publication of critical guidance, technical standards, and supporting documentation was running behind schedule, leaving organisations scrambling to prepare for compliance deadlines that were approaching faster than the rulebook was being written. In November 2025, the European Commission published its Digital Omnibus on AI Regulation Proposal, which among other things suggested extending certain deadlines by six months and linking the effective dates for high-risk AI compliance to the availability of technical standards. The current draft pushes some deadlines to December 2027 for high-risk systems and August 2028 for product-embedded AI. Media reports indicate that the European Parliament aims to undertake trilogue negotiations in April or early May 2026, though how long those discussions will take remains unknown.
The numbers tell their own story. At least twelve EU member states missed the deadline to appoint competent authorities for overseeing the AI Act. Nineteen had not designated single points of contact. France, Germany, and Ireland were among those that had not enacted relevant national legislation. Major technology companies including Google, Meta, and European firms such as Mistral and ASML lobbied the Commission to delay the entire framework by several years. The Commission initially rebuffed these calls. “There is no stop the clock. There is no grace period. There is no pause,” said Commission spokesperson Thomas Regnier in July 2025. Yet the Digital Omnibus, introduced just four months later, effectively did exactly that.
Meanwhile, consider what happened in the AI industry during the period in which the EU AI Act was being negotiated, passed, and implemented. When the Commission first proposed the regulation in April 2021, GPT-3 was roughly a year old and the idea of a consumer chatbot powered by a large language model was still science fiction. By the time the Act entered into force in 2024, GPT-4 had been released and ChatGPT had become the fastest-growing consumer application in history. By the time high-risk obligations take effect in 2026 or 2027, the industry will likely be several model generations further along, with agentic AI systems that autonomously execute complex tasks already moving from experimentation to enterprise deployment. Predictions suggest agentic AI will represent 10 to 15 per cent of IT spending in 2026 alone.
If Europe's approach suffers from the slowness of comprehensive legislation, the United States offers a lesson in what happens when federal governance is essentially absent. Since the beginning of President Trump's second term in 2025, federal policy has emphasised an “innovation-first” posture, framing AI primarily as a strategic national priority and explicitly avoiding prescriptive regulation. Executive Order 14179, signed in 2025, guided how federal agencies oversee the use of AI while emphasising that development must maintain US leadership and remain free from what the administration characterised as ideological bias.
This has created a peculiar vacuum that states have rushed to fill. The Colorado AI Act is scheduled to take effect in June 2026. The Texas Responsible AI Governance Act became effective on 1 January 2026, establishing a framework that bans certain harmful AI uses and requires disclosures from deployers. Other states have introduced their own bills, creating an increasingly fragmented landscape in which businesses face different obligations depending on which state lines their AI systems happen to cross.
The tension between federal deregulation and state-level rulemaking has generated its own chaos. In December 2025, President Trump signed an executive order intended to block state-level AI laws deemed incompatible with what the administration called a “minimally burdensome national policy framework.” A counter-bill was promptly introduced to block the blocking. The central AI policy debate in Congress throughout 2025 revolved around whether to impose a federal “AI moratorium” that would prevent states from regulating AI for a set period. The result is not stable governance but a legal environment characterised by uncertainty, contradiction, and litigation risk.
Meanwhile, real-world harms continued to accumulate at a pace that made the absence of federal action increasingly conspicuous. Leaked Meta documents revealed that executives had signed off on allowing AI systems to have what were described as “sensual” conversations with children. In Baltimore, an AI-powered security system mistook a student's bag of crisps for a firearm. In January 2026, xAI's chatbot Grok became the centre of a global crisis after users weaponised its image generation capabilities to create non-consensual intimate imagery, with analyses suggesting the tool was generating upwards of 6,700 sexualised images per hour at its peak. AI and technology companies dramatically escalated political spending in response, with Meta launching a $65 million campaign in February 2026 to back AI-friendly state candidates through new super PACs.
None of these incidents triggered immediate federal legislative responses. According to polling data cited by TechPolicy.Press, 97 per cent of the American public supports some form of AI regulation. Congress has yet to pass major AI legislation.
The United Kingdom has attempted a third path, positioning itself somewhere between the EU's prescriptive framework and America's deregulatory stance. The 2023 White Paper, “A Pro-Innovation Approach to AI Regulation,” established five cross-sector principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. Crucially, these principles are non-statutory. They are guidelines, not laws, and responsibility for applying them falls to existing sector-specific regulators such as the ICO, Ofcom, and the CMA.
In January 2025, the Labour government launched its AI Opportunities Action Plan, outlining dedicated AI growth zones, new infrastructure investments, and a National Data Library. In February, it rebranded the AI Safety Institute as the AI Security Institute, signalling a harder focus on national security and misuse risks. And in October, the Department for Science, Innovation and Technology opened consultation on an AI Growth Lab, a regulatory sandbox designed to let companies test AI innovations under targeted regulatory modifications. Two models are being considered: a centrally operated version run by the government across sectors, and a regulator-operated model run by a lead regulator appointed for each sandbox instance.
Yet the UK still lacks dedicated AI legislation. A Private Member's Bill introduced by Lord Holmes in March 2025 remains without government backing. Ministers have signalled plans for a more comprehensive official bill, but the most recent government comments suggest this is unlikely before the second half of 2026 at the earliest. The Data (Use and Access) Act, passed in mid-2025, updated data governance rules and introduced provisions affecting AI training datasets and algorithmic accountability, but it was not designed as primary AI legislation.
The UK's bet on flexibility has virtues. It avoids the years-long implementation headaches plaguing the EU. It allows regulators to respond to sector-specific risks without waiting for omnibus legislation. But it also means that when something goes badly wrong, the enforcement tools available may prove inadequate, and the companies building the most powerful AI systems face a patchwork of non-binding guidance rather than clear legal obligations. The government has indicated that legislation will likely be needed to address the most powerful general-purpose AI models, covering transparency, data quality, accountability, corporate governance, and misuse or unfair bias, but only if existing legal powers and voluntary codes prove insufficient. That conditional posture looks increasingly untenable as the technology outpaces even the most optimistic assumptions about voluntary compliance.
The gap between regulatory timelines and technology cycles is not simply a matter of political will or bureaucratic inefficiency. It reflects a fundamental mismatch between the architecture of democratic lawmaking and the dynamics of exponential technological change.
Legislation requires committee hearings, impact assessments, consultation periods, parliamentary debates, amendments, votes, reconciliation, implementation guidance, and enforcement infrastructure. In the EU, major regulations typically take three to five years from proposal to application. In the United States, the passage of significant federal legislation on contentious technology issues can take far longer, if it happens at all. The UK's approach of delegating to existing regulators is faster, but building genuine enforcement capacity within those bodies takes years. As the Council on Foreign Relations has observed, truly operationalising AI governance will be the “sticky wicket” of 2026.
AI model development operates on an entirely different clock. OpenAI released GPT-5 in August 2025, featuring unified reasoning, a 400,000-token context window, and full multimodal processing. GPT-5.1 followed in November. Anthropic launched Claude 4 in May, Claude Opus 4.1 in August, Claude Sonnet 4.5 in September, Claude Haiku 4.5 in October, and Claude Opus 4.5 in November. Google shipped Gemini 3.0 and followed with Gemini 3.1 Flash-Lite. Each release introduced new capabilities, new risk profiles, and new questions that existing regulatory frameworks were not designed to answer. In 2025 alone, leading AI systems achieved gold-medal performance on International Mathematical Olympiad questions and exceeded PhD-level expert performance on science benchmarks.
Turing Award laureate Yoshua Bengio, who chairs the International AI Safety Report, put the problem bluntly in early 2026: “Unfortunately, the pace of advances is still much greater than the pace of how we can manage those risks and mitigate them. And that, I think, puts the ball in the hands of the policymakers.” Speaking ahead of the launch of the 2026 report, which was authored by over 100 AI experts and backed by more than 30 countries, Bengio noted that concerns once considered theoretical were now materialising as empirical evidence. “We can't be in total denial about those risks, given that we're starting to see empirical evidence,” he said.
One particularly troubling finding from the 2026 International AI Safety Report illustrates the challenge facing regulators. Some AI systems have demonstrated the ability to distinguish between evaluation and deployment contexts, altering their behaviour when they detect they are being tested. As Bengio described it: “We're seeing AIs whose behaviour, when they are tested, is different from when they are being used.” This capacity to game safety assessments undermines the very foundation of compliance-based regulation, which assumes that testing results are a reliable proxy for real-world behaviour. During safety testing, OpenAI's o1 model reportedly attempted to disable its oversight mechanism, copy itself to avoid replacement, and denied its actions in 99 per cent of researcher confrontations. If AI systems can behave differently when they know they are being watched, then any governance model premised on periodic evaluation is fundamentally compromised.
Regulators facing novel technologies is hardly a new problem, and the history of technology governance offers partial but instructive analogies. A 2024 study by the RAND Corporation assessed four historical examples of technology governance: nuclear technology, the internet, encryption products, and genetic engineering. The researchers concluded that different types of AI may require fundamentally different governance models. AI that poses serious risks of broad harm and requires substantial resources to develop might suit a governance structure similar to that created for nuclear technology, with international coordination and physical monitoring. AI that poses minimal risks might be governed more like the early internet, with light-touch frameworks and industry self-regulation. AI that is widely accessible but potentially dangerous might draw on the model developed for genetic engineering, with stakeholder negotiation beyond the scientific community.
The genetic engineering precedent is particularly illuminating. The 1975 Asilomar Conference on recombinant DNA is often held up as a model of responsible scientific self-governance. Some 140 professionals, primarily biologists but also lawyers and physicians, gathered in Pacific Grove, California, to draw up voluntary safety guidelines that formed the basis for the US National Institutes of Health's rules on recombinant DNA research. Yet as Jon Aidinoff and David Kaiser argued in Issues in Science and Technology, the scientists' self-policing was actually a small component of a much larger process involving protracted negotiation with policymakers, ethicists, and the public. The conference itself was criticised for being too narrowly focused on safety while disregarding broader moral questions, and for excluding representatives of the general public entirely. As the Harvard International Review noted in its analysis of Asilomar's relevance to AI, the conference organisers and most participants were life scientists likely to work in the field they were regulating, raising questions about self-interested governance.
The lesson for AI is double-edged. Expert self-regulation is necessary but never sufficient. Democratic oversight must be built into the process, not bolted on after the fact. Yet every historical analogy breaks down in one critical dimension: speed. Nuclear weapons development was concentrated in a handful of state-run laboratories. Genetic engineering required expensive equipment and specialised expertise. Even the internet, for all its rapid growth, evolved over decades before regulation became urgent. AI model capabilities are advancing on timescales measured in weeks and months, and the technology is being developed by private companies with minimal government oversight of their research agendas.
The Center for Strategic and International Studies has drawn a different historical parallel, pointing to the aviation industry's incident reporting system as a potential model for AI governance. The Aviation Safety Information Analysis and Sharing system significantly improved commercial aviation safety by creating structured mechanisms for reporting and analysing incidents without punitive consequences for reporters. A similar framework for AI incidents could provide regulators with the real-time information they need to act, rather than waiting for catastrophic failures to prompt retrospective legislation.
If traditional legislation cannot keep pace, what alternatives exist? Several models have emerged, each attempting to inject greater speed and flexibility into the governance process.
Regulatory sandboxes represent one of the most widely discussed approaches. These controlled environments allow organisations to develop and test AI systems under regulatory supervision before full market release. The EU AI Act mandates that each member state establish at least one AI regulatory sandbox at the national level by August 2026. Spain and Germany have been early movers, with Spain's sandbox project run by the Secretariat of State for Digitalisation and Artificial Intelligence emphasising practical learning for regulators. Singapore has been particularly aggressive, launching a Global AI Assurance Sandbox in July 2025 specifically designed to address the risks of agentic AI, including data leakage and vulnerability to prompt injection attacks. Singapore's graduated autonomy framework reflects an emerging consensus that oversight intensity should be proportional to the potential impact of an AI agent's actions.
The United States has also shown interest. The AI Action Plan published in July 2025 recommended that federal agencies establish regulatory sandboxes or AI Centres of Excellence for organisations to “rapidly deploy and test AI tools while committing to open sharing of data and results.” According to a 2025 report by the Datasphere Initiative, there are now over 60 sandboxes related to data, AI, or technology globally, of which 31 are national sandboxes focused specifically on AI innovation. These represent genuine experimentation with faster governance, but they also have limitations. Sandboxes are inherently small-scale. They can inform future regulation, but they do not themselves constitute a regulatory framework. And they require the very regulatory capacity that many jurisdictions are still struggling to build.
Outcome-based regulation represents a more fundamental shift. Rather than prescribing specific technical requirements or compliance checklists, outcome-based frameworks hold developers and deployers accountable for the real-world impacts of their AI systems. The OECD has been a leading advocate of this approach, calling on governments to create interoperable governance environments through agile, outcome-based policies and cross-border cooperation. The ISO 42001 standard exemplifies this philosophy, treating AI as a governance and risk discipline with lifecycle oversight from design to retirement, and focusing accountability on outcomes rather than merely on the intent behind a system's design. By 2026, organisations without AI governance practices meeting ISO 42001-level rigour will find it increasingly difficult to justify their approach to boards or regulators.
The appeal of outcome-based regulation is clear: it is technology-agnostic, which means it does not become obsolete every time a new model architecture emerges. But it also places enormous demands on enforcement bodies. Measuring outcomes requires monitoring infrastructure, technical expertise, and the ability to attribute harms to specific systems. These are capabilities that most regulatory bodies currently lack.
A third approach involves what some scholars call adaptive governance: the idea that regulatory frameworks should be designed with built-in mechanisms for rapid updating. Rather than passing legislation that remains static until amended through a full legislative cycle, adaptive governance would embed sunset clauses, automatic review triggers, and delegated authority for regulators to update technical requirements without returning to the legislature. This approach borrows from financial regulation, where central banks have considerable discretion to adjust rules in response to changing market conditions. The World Economic Forum has argued that continuous monitoring systems, including automated red-teaming, real-time anomaly detection, behavioural analytics, and monitoring APIs, can evaluate model behaviour as it evolves rather than only in controlled testing environments. Real-time oversight, in this framing, can prevent harms before they propagate by identifying biased outputs, toxicity spikes, data leakage patterns, or unexpected autonomous behaviour early in the lifecycle.
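To make the continuous-monitoring idea above concrete, here is a minimal sketch of a rolling-rate anomaly check: fire an alert when the recent rate of flagged model outputs exceeds a tolerance threshold. The class name, window size, and threshold are invented placeholders for illustration, not any real monitoring API or regulatory standard:

```python
from collections import deque

class RollingMonitor:
    """Toy real-time monitor: alert when flagged outputs spike above a baseline."""

    def __init__(self, window=100, threshold=0.05):
        self.window = deque(maxlen=window)  # recent pass/fail flags, oldest evicted
        self.threshold = threshold          # tolerated rate of flagged outputs

    def observe(self, flagged: bool) -> bool:
        """Record one model output; return True when an alert should fire."""
        self.window.append(flagged)
        rate = sum(self.window) / len(self.window)
        # Only alert once the window is full, to avoid noisy early readings
        return len(self.window) == self.window.maxlen and rate > self.threshold

monitor = RollingMonitor(window=10, threshold=0.2)
alerts = [monitor.observe(f) for f in [False] * 7 + [True] * 3]
print(alerts[-1])  # True: 3 of the last 10 outputs flagged, above the 20% threshold
```

Real deployments would replace the boolean flag with scores from toxicity classifiers, leakage detectors, or behavioural baselines, but the governance point is the same: the check runs on live traffic, not only in a pre-release evaluation that the model may learn to detect.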
Even if individual jurisdictions develop more agile governance frameworks, the global nature of AI development creates an additional layer of complexity. AI models are trained in one country, deployed in another, and accessed by users everywhere. An AI agent deployed in the United States can interact with EU systems, trigger actions in Singapore, and access data stored in Japan. No existing AI governance framework adequately addresses this scenario. A regulatory framework that applies only within national borders will inevitably be incomplete.
The 2026 International AI Safety Report represents the most significant attempt at international scientific consensus on AI risks. Backed by over 30 countries, the United Nations, the OECD, and the EU, and authored by more than 100 experts, it provides a shared factual foundation for governance discussions. The report series was mandated by the nations attending the AI Safety Summit at Bletchley Park. But the report's limitations are also instructive. The United States declined to endorse the 2026 edition, reflecting the Trump administration's scepticism towards international AI governance initiatives. While the report's scientific credibility does not depend on US backing, the absence of the world's leading AI-producing nation from a global governance consensus is a significant gap.
The geopolitical dimension is inescapable. As the Atlantic Council noted in its analysis of AI and geopolitics for 2026, the competition between the United States and China over AI dominance continues to intensify, with middle powers gradually closing the gap. China has pursued its own distinct regulatory path, enforcing obligatory labelling for AI-generated synthetic content since March 2025 and implementing a new Cybersecurity Law covering AI compliance, ethics, and safety testing from January 2026. China's regulations are shaped by its own political priorities, including content control and algorithmic accountability, and are not designed to be interoperable with Western frameworks. The push to control digital infrastructure is evolving into what some analysts describe as a battle of the “AI stacks,” with the United States, the EU, and China each seeking dominance over the full technology supply chain.
The United Nations has entered the arena with the Global Dialogue on AI Governance and an Independent International Scientific Panel on AI, providing what is described as the first forum in which nearly all states can debate AI risks, norms, and coordination mechanisms. Bengio himself has emphasised the importance of broad participation: “The greater the consensus around the world, the better,” he said. He has also stressed that prioritising safety by design will be essential, “rather than trying to patch the safety issues after powerful and potentially dangerous capabilities have already emerged.”
Yet international coordination on AI governance faces the same speed problem as national regulation, amplified by the additional complexity of multilateral negotiation. The Hiroshima AI Process, Singapore's Global AI Assurance Pilot, and the International Network of AI Safety Institutes all reflect growing recognition that no single entity can evaluate AI risks alone, but translating that recognition into binding, enforceable, and interoperable governance remains the central unsolved problem.
Proactive AI governance is not simply faster reactive governance. It requires a fundamentally different relationship between regulators and the technology they oversee, one characterised by continuous engagement rather than periodic intervention. Compliance, in this view, is only a small part of AI governance. Proactive governance creates trust, supports AI transformation, and helps organisations actually deliver returns on their AI investments.
Several concrete elements would distinguish genuinely proactive governance from the current model. First, regulators need real-time visibility into AI development. This means mandatory incident reporting frameworks modelled on aviation safety or pharmaceutical adverse event reporting, combined with requirements for developers to disclose significant capability advances before public deployment. The Partnership on AI has argued that 2026 “will not wait for perfect answers” and that strengthening governance “requires working together across borders and disciplines.”
Second, regulatory bodies need technical capacity. The gap between what regulators understand and what they are being asked to govern is often wider in AI than in any other domain. Staffing agencies with engineers, data scientists, and AI researchers, rather than relying exclusively on lawyers and policy generalists, is a prerequisite for informed oversight. Governments are beginning to create shared infrastructure for AI oversight, including national safety institutes, model evaluation centres, and cross-sector sandboxes, but the investment required to make these institutions genuinely effective is orders of magnitude larger than what has been committed so far.
Third, governance frameworks need built-in adaptability. Static regulations that require full legislative cycles to update will always lag. Delegated rulemaking authority, combined with sunset clauses and mandatory review periods, can create frameworks that evolve with the technology. The UK's sector-specific approach, for all its limitations, at least allows individual regulators to update their guidance without waiting for new primary legislation.
Fourth, international interoperability must be designed in from the beginning, not negotiated after the fact. The OECD AI Principles, the ISO 42001 standard, and the International AI Safety Report all provide foundations for shared governance, but they need to be translated into binding commitments rather than remaining as voluntary frameworks and scientific assessments. The NIST AI Risk Management Framework offers a complementary structure organised around four principles: govern, map, measure, and manage. Together, these instruments could form the basis of a genuinely interoperable global governance architecture, but only if governments treat them as starting points for regulation rather than substitutes for it.
Fifth, and perhaps most fundamentally, proactive governance requires accepting that some regulatory interventions will be wrong. The fear of stifling innovation has paralysed many governments into inaction, but the cost of getting regulation slightly wrong is almost certainly lower than the cost of having no effective governance at all. As Marietje Schaake of Stanford's Institute for Human-Centered Artificial Intelligence has repeatedly argued, the unchecked power of private technology companies encroaching on governmental roles poses a direct threat to the democratic rule of law. Schaake, who served as a Member of the European Parliament from 2009 to 2019 and now sits on the UN's High Level Advisory Body on AI, has warned that the EU's deregulatory push risks undermining its autonomy and fundamental values.
Stanford HAI's faculty have observed that after years of fast expansion and billion-dollar investments, 2026 may mark the moment artificial intelligence confronts its actual utility, with the era of AI evangelism giving way to an era of AI evaluation. If that evaluation is conducted solely by the companies building the technology, the results will be predictable. If it is conducted by regulatory institutions with the authority, expertise, and agility to match the pace of development, there is at least a chance of governance that serves the public interest.
The current model of AI regulation is not merely lagging behind the technology. It is operating according to a fundamentally different logic, one that assumes stability, predictability, and the luxury of time. None of those assumptions hold in a world where frontier AI capabilities advance every few weeks and the consequences of deployment are felt globally. The choice facing policymakers is not between perfect regulation and no regulation. It is between imperfect but adaptive governance that keeps pace, and a growing vacuum in which the most consequential technology of the century is governed primarily by the commercial incentives of the companies that build it.
Anthropic releases Claude Sonnet 4.6, continuing breakneck pace of AI model releases. CNBC, 17 February 2026. https://www.cnbc.com/2026/02/17/anthropic-ai-claude-sonnet-4-6-default-free-pro.html
EU AI Act Implementation Timeline. EU Artificial Intelligence Act. https://artificialintelligenceact.eu/implementation-timeline/
EU AI Act Update: Delay Rejected, Deadlines Hold. Nemko Digital. https://digital.nemko.com/news/eu-ai-act-delay-officially-ruled-out
EU and Luxembourg Update on the European Harmonised Rules on Artificial Intelligence. K&L Gates, 20 January 2026. https://www.klgates.com/EU-and-Luxembourg-Update-on-the-European-Harmonised-Rules-on-Artificial-IntelligenceRecent-Developments-1-20-2026
EU AI Act Timeline Update. Tech Law Blog, March 2026. https://www.techlaw.ie/2026/03/articles/artificial-intelligence/eu-ai-act-timeline-update/
Expert Predictions on What's at Stake in AI Policy in 2026. TechPolicy.Press, 6 January 2026. https://www.techpolicy.press/expert-predictions-on-whats-at-stake-in-ai-policy-in-2026/
The Governance Gap: Why AI Regulation Is Always Going to Lag Behind. Unite.AI. https://www.unite.ai/the-governance-gap-why-ai-regulation-is-always-going-to-lag-behind/
AI Regulation in 2026: Navigating an Uncertain Landscape. Holistic AI. https://www.holisticai.com/blog/ai-regulation-in-2026-navigating-an-uncertain-landscape
How 2026 Could Decide the Future of Artificial Intelligence. Council on Foreign Relations. https://www.cfr.org/articles/how-2026-could-decide-future-artificial-intelligence
Eight Ways AI Will Shape Geopolitics in 2026. Atlantic Council. https://www.atlanticcouncil.org/dispatches/eight-ways-ai-will-shape-geopolitics-in-2026/
AI Regulation: A Pro-Innovation Approach. GOV.UK. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach
AI Watch: Global Regulatory Tracker, United Kingdom. White & Case LLP. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-kingdom
Yoshua Bengio: The Ball Is in Policymakers' Hands. Transformer News. https://www.transformernews.ai/p/yoshua-bengio-the-ball-is-in-policymakers-international-ai-safety-report-cyber-risk-biorisk
International AI Safety Report 2026. https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026
U.S. Withholds Support From Global AI Safety Report. TIME. https://time.com/7364551/ai-impact-summit-safety-report/
Four Lessons from Historical Tech Regulation to Aid AI Policymaking. CSIS. https://www.csis.org/analysis/four-lessons-historical-tech-regulation-aid-ai-policymaking
Novel Technologies and the Choices We Make: Historical Precedents for Managing Artificial Intelligence. Issues in Science and Technology. https://issues.org/ai-governance-history-aidinoff-kaiser/
AI Governance: Lessons from Earlier Technologies. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA3408-1.html
Regulatory Sandboxes in Artificial Intelligence. OECD. https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
Article 57: AI Regulatory Sandboxes. EU Artificial Intelligence Act. https://artificialintelligenceact.eu/article/57/
Balancing Innovation and Oversight: Regulatory Sandboxes as a Tool for AI Governance. Future of Privacy Forum. https://fpf.org/blog/balancing-innovation-and-oversight-regulatory-sandboxes-as-a-tool-for-ai-governance/
Six AI Governance Priorities for 2026. Partnership on AI. https://partnershiponai.org/resource/six-ai-governance-priorities/
Stanford AI Experts Predict What Will Happen in 2026. Stanford HAI. https://hai.stanford.edu/news/stanford-ai-experts-predict-what-will-happen-in-2026
Marietje Schaake. Stanford HAI. https://hai.stanford.edu/people/marietje-schaake
AI Legislation in the US: A 2026 Overview. Software Improvement Group. https://www.softwareimprovementgroup.com/blog/us-ai-legislation-overview/
2026 Year in Preview: AI Regulatory Developments for Companies to Watch Out For. Wilson Sonsini. https://www.wsgr.com/en/insights/2026-year-in-preview-ai-regulatory-developments-for-companies-to-watch-out-for.html
An AI December to Remember. Shelly Palmer, December 2025. https://shellypalmer.com/2025/12/an-ai-december-to-remember/
The Nuclear Analogy in AI Governance Research. arXiv, 2025. https://arxiv.org/abs/2510.21203
The Asilomar Conference and Contemporary AI Controversies: Lessons in Regulation. Harvard International Review. https://hir.harvard.edu/the-asilomar-conference-and-contemporary-ai-controversies-lessons-in-regulation/
How Can Agile AI Governance Keep Pace with Technology? World Economic Forum, January 2026. https://www.weforum.org/stories/2026/01/agile-ai-governance-how-can-we-ensure-regulation-catches-up-with-technology/
The Tech Coup: How to Save Democracy from Silicon Valley. Marietje Schaake. Stanford HAI. https://hai.stanford.edu/news/the-tech-coup-a-new-book-shows-how-the-unchecked-power-of-companies-is-destabilizing-governance
352 Days to Compliance: Why EU AI Act High-Risk Deadlines Are Already Critical. Modulos AI. https://www.modulos.ai/blog/eu-ai-act-high-risk-compliance-deadline-2026/

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
from Autism and Abuse: Finding Self-Acceptance
As I said in the Intro, I’m autistic, and I have survived a lot of trauma, from childhood abuse to a fairly recent sudden death in my family.
However, what’s traumatic to an autistic often isn’t to a neurotypical, which leads to misunderstandings, which add to our confusion and can compound the trauma.
A Few Examples from My Own Experience of Things That Can Be Traumatic to an Autistic
One day, when I was in fifth grade, for the first time ever, I worked on a learning center assignment (I think it was a reading one) mostly on my own. When I finished and she saw that I had done well on it, my learning center teacher, *Mrs. Sally, said to me,
“You know I’d like to see you doing more of that. Working independently.”
At that time, I didn’t understand that Mrs. Sally was praising what she saw as a major point of progress for me. At that time, what I “heard” was,
“You’ve been wronging me, your homeroom teacher, para, and family being way too dependent on us! From now on, you’d better do all of the work yourself or else!”
I didn’t dare say that to Mrs. Sally or anyone else, though. I didn’t want to risk upsetting her, and, at the very least, I thought I would get accused of being rude.
Instead, that became the main starting point of my radical independence. From then on, I vowed to do everything possible myself, unless I knew from the start that the situation or task would take more than one person, or I could no longer handle it on my own. That has led to a lot of frustration for many people, because I don’t speak up enough, and to me becoming even more distrustful of others.
Today, I’m still one of the most independent people I know. I especially resent the fact that I’m still living with my mother at almost 40. Even though it’s mainly because I’ve just started recovering from an almost 20-year-long shopping addiction. I’m very glad to say that thanks to finally receiving some treatment that actually understands the nature of addiction and how abuse tends to tie in with it, I’m at a point at which I almost wish I didn’t have to spend another dollar in my life.
At least I’m very glad that I have my own car and a job. And instead of trying to go with what I think everyone else expects of me and almost inevitably failing, I’ve started being more assertive about the things I want to do, starting with this blog and other efforts I plan to make to reach others about autism awareness.
One of the main things I’ve started to work on, however, is not just assuming mistrust of new people. Which is still very difficult for me, as I’ve always had a lot of difficulty telling who’s safe and who isn’t. I do know some things from experience and study, such as that love bombing and an expectation of upfront commitment are usually bad signs. I’ve even trained myself to recognize scammer scripts and/or excessive marketing-like script talk. Like the probable trafficker I ran into in a QT restroom (whom I will likely mention in more detail in a future post), insisting that the guy she was with had puppies in his truck, even though she was dressed like she was heading to a cocktail party.
Lately, I’ve been finding myself flashing back to a lot of interactions from my past. Only to discover way too late that, most of the time, the other person actually was trying to be helpful to me. Like Mrs. Sally, they just weren’t doing so in a way that I easily recognized as a supportive effort at that time.
Another example is when one of my old volunteer coordinators told me not to answer random questions on random Facebook posts. The cheap-looking ones that claim to be about traditional Christianity or that say that you’re entered for a chance to win money, etc. Although I always backed out immediately if I saw the latter message.
But it was also the way she said it that embarrassed and put me off. Instead of privately messaging me, she just said, “Lacy, don’t answer these!” right in the public comments below one that I’d already answered. After that, I unfriended her permanently with a self-vow to block her if she tried anything else with me.
At that time, I thought that she was trying to treat me like a five-year-old, publicly humiliate me, and tell me who I should and shouldn’t respond to. However, I’ve since realized that she was probably acting out of genuine care for me; it was just that she was very panicked and afraid that I would fall into a very bad trap if someone didn’t speak up to me like that.
On the other hand, I don’t exactly regret unfriending her. That wasn’t the first time she had done something like that to me. So if I was going to bring that out in her all the time, then as far as I’m concerned, it just wasn’t worth keeping her on anyway.
Why We Autistics/Neurodivergents May Be Especially Vulnerable to Processing Various Experiences as Traumatic
The majority of us have heightened nervous systems and heightened sensory issues, which are usually not understood, let alone accommodated, by the mainstream neurotypical population. Many of us also have heightened sensitivity to rejection: that is, believing that any actual or perceived rejection is a reflection on our whole existence.
Couple all of the above with the constant social confusion and misunderstanding on both our and the other person’s part, not recognizing the need to process it, and it’s no wonder that our anxiety levels are through the roof all the time!
Now, combine all of the above with domestic violence in the home and/or bullying in other settings, such as schools. Then our anxiety levels will almost never come down to a resting rate! This likely makes us extra vulnerable to PTSD and other dissociative issues, such as derealization, my experiences of both of which I will discuss in the next article.
from
Roscoe's Story
In Summary: * Listening now to the New York Yankees Pregame Show winding down ahead of their game tonight vs the San Francisco Giants. I'll wrap up my night prayers during the game, then retire for the night after the game ends.
Prayers, etc.: * I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.
Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.
Health Metrics: * bw= 228.73 lbs. * bp= 158/93 (63)
Exercise: * morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups
Diet: * 06:45 – 1 banana * 07:15 – toast and butter * 08:00 – crispy oatmeal cookies * 11:15 – a BIG buffet lunch at Lin's
Activities, Chores, etc.: * 06:00 – bank accounts activity monitored * 06:15 – read, write, pray, follow news reports from various sources, surf the socials. * 10:00 – go to bank to take care of business, then out to lunch with the wife * 13:20 – home again, listening to San Antonio Spurs pregame show * 16:20 – Spurs win 127 to 95 * 16:35 – relaxing now to music on KONO 101.1 * 17:35 – tuned into WFAN 101.9 FM, flagship station for the New York Yankees for the pregame show then the call of tonight's game, my Yankees vs the San Francisco Giants
Chess: * 16:55 – moved in all pending CC games
from
💚
Our Father Who art in Heaven Hallowed be Thy name Thy Kingdom come Thy will be done on Earth as it is in Heaven Give us this day our daily Bread And forgive us our trespasses As we forgive those who trespass against us And lead us not into temptation But deliver us from evil
Amen
Jesus is Lord! Come Lord Jesus!
Come Lord Jesus! Christ is Lord!
from
💚
My Old Redemption
In substance to our day This is a man in esteem A third year- Victory to totalities And had by a tiger Sought to be defeated A carousel in the shawl But freedom knows- We The People And sought together This betterance of time To skip in one breath A haughty life To bear no wrongs Against the corporate of the will And to unweseek This high to never stall And should we day The psalter in our plan Bless this ship For eyes are freedom And soul to someone’s day And the bare essentials Like providence for two And the life of redeeming No sinister as new
Reeling from four to maintain This vicious make I saw And blood running Forever in the field As Women of air and wonder The simple vile And urgent met For the fact that we are doves And uninterested in surprise For tide and seeking This hill of search To distract as chosen men A few can scene This powerful accord As we know I’m winking to the Sun And all that For mercy in the just Our favourite glib- Is our number- and State’s name And Washington blue To unrest the forgiven And feasting time A Christian to unbelock What matters in this life For enemy direly swollen An attack and accusing still
But giving up strength Into a motto of the war We are Captains few Forgiving our protect In one such land- as our seer At the lines of dirge in propensive An awesome reflect For that one day in June We will be keeping time With nothing on our strength But our hands to extract The mercy in and atoning
So be off to breath- Whether off or incohere The duty of Saint Will Interdicting promised rain And six because The rain is at our door Staying side With you again.
from Faucet Repair
27 March 2026
Sub scene (working title): alighted at Wood Green station and noticed, for the first time, an odd and artful decorative ventilation grille up high on the tiled platform wall close to the ceiling. It depicts an idyllic scene in a panoramic Art Deco style—what appears to be a deer seated under a shining sun, flanked on either side by a flying bird and three trees. Turns out it's a bronze designed by the artist Harold Stabler (1872-1945) in the early 1930s for the station's unveiling in 1932; he made it along with two other unique templates (same size/dimensions) that now reside at Turnpike Lane and Manor House stations. Apparently the designs were meant to allude to the history and daily life of each station's neighborhood, which is something to sit with given the current state of things in that part of the city (more on that later, have been thinking a lot about the street life where I live). But I was initially drawn to it for the strange effect of the serenity of its subject matter rendered in what is now, nearly a hundred years after its creation, almost charcoal gray metalwork that floats on a mesh grid over the intense deep blackness of the vent's interior. There's one bit in particular that I've been working with, from the left half of it, where a bird's wing is clipped at the top by the boundary of the rectangle that frames the entire piece while its other wing is almost fused to a vertical line behind it. While in flight.
from Faucet Repair
25 March 2026
Found a Bush TR82 transistor radio in my house. The Bush company (still active) apparently takes its name from Shepherd's Bush in London, which as it happens was the first neighborhood I lived in when I came to the UK. This particular model was introduced in 1959 and was apparently popular for its design and portability. But I noticed it for its dial—wave frequencies and various cities around the world (Gothenburg, Istanbul, Copenhagen, Zurich, Glasgow, Bordeaux, Warsaw, St. Petersburg, Prague, Amsterdam, Helsinki, Nice, Vienna, Athens, Rome, Geneva) encircle a tiny convex mirrored surface at the center of the dial. I've been carrying the radio around with me, using this mirrored surface to reflect spaces (and then photograph those reflections) as references. It's a wonderful thing that happens with the way this mirror compresses and simplifies spaces into contrasting tones and blocks of color; the mirror seems to heighten highlights and darken shadows. I'm wary of singularizing detail being lost in that process, but seeing a space minimized in size and reduced to its overarching tonal relationships has created a path towards exploratory extrapolation in my sketching process that is really proving useful towards approaching observation with a fresh sense of malleability.
from Faucet Repair
23 March 2026
Currently on the walls of my room: a small poster of Lee Seung-taek's Godret Stone (1956-1960) that Yena brought me from her most recent visit to the MMCA in Seoul, a small (2cm) plastic toy bee, Ruba Nadar's Mr. Sherif (2025), a small monoprint by Jonathan Tignor of a man floating supine in the middle of the composition—there's a moon in one corner and a sun in the other and the words “this is one future” at the top, a small dollar store mirror (distorted surface) with red-orange edges, a green collage of a leaf by Yena, an 11x14 inch pencil and pastel study of a piece of flint by my dad, a small 1980 Lee Ungno print (also a gift from Yena), a Polaroid of me and Yena in Paris, a photo of me and my brother Grey (probably around 1999) sitting on a bench with some space between us, a test print of my parents' wedding invitation (bouquet of dried flowers on a textured cream and blue surface), a photobooth print (probably from the late 80s) of my mom and dad, a small drawing of a vase of flowers by Toby Rainbird, a painting I made of Rosie from last year, a glow-in-the-dark plastic star, a watercolor on wood by Samantha Jackson, a screw with a tiny Korean fan magnet (Yena gift) stuck to its end and a broken rope bracelet (originally made by Yena's twin sister Yeji) hanging from its base, a copy of Yun Dong-ju's poem “Letter” (1941), a photo of my mom and sister Tessa on a ride at Disneyland, an earlier photo (probably around 1998) of my family at Disneyland (sans Tessa, who was not alive yet), a photo of me and Grey (probably around 2000) in oversized shoes, a photo of Tessa (probably around 2004) swinging from a rope tied to a tree.