from Space Goblin Diaries

This month I've revised the overall structure of the game and come up with something I'm happy with, but I haven't had a lot of time to work on it, so I haven't gotten back to the writing yet. But the project was actually in a good place for me to step back and let it “lie fallow” for a little while, so taking a break at this point will hopefully help me overall.

But that means I don't have anything to show in this diary, so instead here is a fluffy listicle of some of my favourite space villains who inspired the game, and my personal crazy fan theories about each of them. (These are in chronological order of first appearance.)

Ming the Merciless

Flash Gordon and Ming the Merciless as drawn by Alex Raymond

First appearance: Flash Gordon Sunday newspaper strip, 1934.

Ming is a cruel and scheming politician who usurped the rightful emperor of Mongo and now rules the planet with an iron fist.

Ming started out as a “yellow peril” stereotype with his Fu Manchu mustache and orientalist palaces, but this aspect of his character was downplayed almost immediately. As Alex Raymond's art style evolved, Ming's skin changed from bright yellow to a more natural colour, and as the strip moved into the late 30s Ming and his minions looked and acted less like orientalist fantasy villains and more like real-life Nazis.

(When Raymond eventually left the Flash Gordon strip in 1944 it was to enlist in the U.S. Marines, which he insisted on doing despite having already done enough military service to be exempt from the draft.)

Interestingly, the idea of Ming wanting to conquer the Earth is actually not in the original comics, but first appears in the 1936 movie serial. Ming's goals in the early comics are to forcibly marry Dale Arden and to retain control of his empire in the face of rebellious vassals and a population that hates him. Later versions of Flash Gordon have given Ming more far-reaching ambitions, and have tended to downplay his orientalist origins even further by e.g. making him a grey-skinned alien.

My fan theory: Ming wants to marry Dale Arden not because he lusts after her (although he does), but because she exists outside the Mongothic social structure and thus solves a political problem. If he marries a noble it'll affect the balance of power between his vassals, and if he marries a commoner he'll lose everyone's respect, but Dale is from another planet, so he can present her as some kind of exotic alien princess. This is also why (in the original 1930s comic) he loses interest in Dale once Princess Aura has given him a grandson, instead shifting his focus to kidnapping the baby in order to raise him as his ideal heir.

The Mekon

The Mekon drawn by Frank Hampson

First appearance: Eagle comic, 1950.

The first Dan Dare story introduces the Treens, reptilian aliens who have done away with emotions and devote themselves to remorseless scientific logic. Whereas Flash met Ming almost immediately, it is several months before Dan meets the Mekon, a diminutive creature with a swollen head and atrophied body, whom the Treens had specifically engineered to be their super-intelligent ruler.

Unlike Ming and his quarrelling vassals, the Mekon has absolute authority over his Treens—at least until the end of the first story, when he is deposed and flees into space. After that he pops up in roughly every second story, with a small band of fanatically loyal Treens and a new plan to conquer the Earth.

(Once the Mekon is removed, most of the Treens seem content to return to their ordered, scientific lives and live in peace with Earth people. I like to imagine Treen science lab directors being quietly relieved to be able to focus on their obscure research areas now that this disruptive business of conquering the universe is out of the way.)

The Mekon remains mostly unchanged across all the later versions of Dan Dare, although the 2007 Garth Ennis/Gary Erskine version does redesign his flying chair to finally give it a back rest. Possibly if his chair had been more ergonomically designed from the start, he wouldn't have been so unpleasant.

My fan theory: We know that the Mekon is the last in a long line of similar Mekons, so why does this Mekon have designs on conquering the universe when previous ones seemed content to keep Treen society running? Perhaps this Mekon is defective somehow, dominated by unusually powerful emotions that he can't admit to himself and doesn't have the ability to process. Perhaps a “normal” Mekon would look at him in disgust...and perhaps, deep down, he knows that...

Davros

Davros as portrayed by Michael Wisher

First appearance: Doctor Who, “Genesis of the Daleks”, 1975.

Doctor Who has lots of great villains, but the one that's most relevant to this list is Davros, creator of the Daleks.

Like the Mekon (who partially inspired him), Davros is super-intelligent but physically frail, and is confined to a mobile life support unit. That life support module was also the design basis of the Daleks, whom he intended to be the ultimate life-form according to his genocidal ideology—so he resembles an intermediate step between Daleks and ordinary humanoids.

Davros is an unusual villain in that he's super-intelligent but still treats the hero as an intellectual equal, or at least close enough to one to engage them in philosophical conversations. (Ming and the Mekon might monologue at their respective heroes, but there's never a chance that they'll listen to what the hero says and change their mind.) “Genesis of the Daleks” is in part a sort of intellectual duel between the hero and villain, one in which the villain listens to what the hero says, is confronted with the philosophical ramifications of their plans, realises that they're utterly evil—and decides to do it anyway.

My fan theory: Even when Davros is seemingly in charge of things, he only exists because the Daleks keep resurrecting him and keeping him alive—and they only do that because they need his non-Dalek intelligence to deal with some problem—which is usually the Doctor—so in a sense Davros only continues to exist because the Doctor does. (Actually I'm not sure this is a fan theory, it might be canon, but I'm not enough of a Doctor Who nerd to be sure.)

The Borg Queen

The Borg Queen as portrayed by Alice Krige

First appearance: Star Trek: First Contact, 1996.

The Borg, like the Daleks, were introduced without an overall leader, and in fact their lack of individual identity seemed to make such a concept meaningless. The Star Trek story “The Best of Both Worlds” gave them a temporary spokesperson in the form of an assimilated Picard (who called himself “Locutus of Borg”), which implied that the Borg might do a similar thing when dealing with other species they wanted to assimilate. The fact that the Borg didn't have individuals in the normal sense was one of the things that made them alien and scary. They were a sort of twisted mirror image of the Federation, an interplanetary culture based not on diversity but on a complete negation of diversity.

So the introduction of an individual ruler of the Borg, in the form of the Borg Queen, would seem to contradict one of the things that makes the Borg work—but she's such a great villain that I think they get away with it.

Like the previous villains in this list, the Queen is intelligent and articulate, but unlike them hers is an alien intelligence because it's bent towards utterly inhuman goals. Conquering the universe and enslaving or exterminating everyone is evil, but it's understandable; assimilating everyone into a hive mind is a science fiction concept that requires an imaginative leap; and an individual intelligence devoted to a hive-mind goal is a further conceptual leap. And the visual design, with its seamless integration of flesh and technology, is great.

My fan theory: The Borg Queen isn't an individual with a continuous identity, but something the Borg collective can sort of extrude when it needs an administrative or diplomatic focal point, whenever and wherever is required. So the question of whether the Borg Queen who dies at the end of First Contact is “the same” as the one who later appears in Voyager is meaningless.

Bonus fan theory: Assimilated Picard is called “Locutus of Borg” because that's the kind of pretentious Latinate name the Borg would get from rummaging around in Picard's subconscious. If they'd assimilated Riker he'd have called himself something straightforward like “Speaker for the Borg”.

*

OK that's all for now. I'm hoping that next month I'll have time to make progress on the game so I'll be able to give you a normal developer diary at the end of May.

Will Vorak, the Master Brain join this canon of space villains, or will our hero fail to make progress once again? Find out in next month's developer diary...

#FoolishEarthCreatures #DevDiary #FlashGordon #DanDare #DoctorWho #StarTrek

 

from 下川友

I try to do the things I like, turning to music or writing, for example. But even that is less out of pure love than because somewhere there is the premise of “I don't want to work,” and that leaves a lingering sense that I can't properly enjoy the content itself, which saps even those activities a little.

I asked a friend to try doing manzai with me, and we gave it a go for a few days, but manzai is ultimately a commercial thing too, and I grow suspicious: am I really enjoying this? Then the energetic part of me forcibly shuts out those negatives, telling me there's nothing to do but keep going, no matter what I think.

Even so, being involved with music feels like the choice that is kindest to my heart, if not quite a conviction. Even now, as I dig through bass music and ambient on Bandcamp, it's as if my old self from the days when my hair was a little longer possesses me, and my body feels slightly lighter.

A bar, a coffee shop, a vintage clothing store. I'd like to try that kind of work too, easy on the eyes and the body, but for an apprentice engineer like me it is quite a distant place. The distance from where I am now to there is too great. I don't know where to begin.

I am strongly drawn, both unconsciously and deliberately, to making things that don't exist come into being. There is a glittering longing in that. But at the same time, I think I should first access what actually exists: the scattered people and places, the physical things I can touch with my hands, even if it means yielding to them for a while.

Even if, for now, writing by hand is about all I can do. Even if I write similar pieces over and over, put them out in the open, and people think, “Are you still going on about that?”, I can only keep at it, believing I am moving forward even a little.

Is the very act of putting these practical matters into writing just putting reality on hold, or is it simply an escape? Thinking such things, I check my plans for the Golden Week holidays that are about to begin.

Over GW I'll go to Yokosuka to visit the family grave, and have a picnic while I'm at it. Also, I've never eaten at Ramen Jiro, so I'm thinking of trying that as well.

If you lined up only the days off, such ordinary days would just go on and on. Yet my head is always full of worries, and with a face I can't quite explain, I sit in my chair again today.

 

from fromjunia

The more people try to fix me, the less I think fixing me will fix things. I am broken: anorexia, bipolar, trauma. Broken things get fixed: Cyproheptadine, lamotrigine, mirtazapine; UT, DBT, art therapy. I have so many people trying to fix me. At last count, a dozen. I pay a lot for that. I’m pretty lucky to be a project for a dozen people. I should be fixed in no time.

You would think that. Except every statistic indicates otherwise, and my experiences track. Maybe I can be fixed. But maybe I can’t. And if I can, it will probably take a long time. A long time of people trying to fix me. A long time being told I’m broken. A long time not being enough.

Will a long time of not being enough fix me?

I don’t think I want to be fixed. I want to be helped. I want to be met on my terms, not theirs. I want to make art about my experiences and not be told it’s wrong. I want to be given vocabulary to speak my experiences, not be told I can’t share them. I want to be a person again. I want to feel alive.

Stop trying to fix me. I need help, but I’m not broken. I want support, guidance, language, ideas, and empathy; not regulation, management, monitoring, supervision, and condescension. And I don’t want to be told that fixing my broken soul is help. No, you can’t fix me, but you can help me.

Please, please, help me.

 

from SmarterArticles

In May 2024, Wells Fargo fired more than a dozen employees in its wealth and investment management division. Their offence was not fraud, misconduct, or incompetence. It was the use of mouse jigglers, small devices costing roughly twenty dollars apiece that simulate cursor movement on a screen, creating the illusion of an active worker at their desk. The disclosures, filed with the Financial Industry Regulatory Authority, described their transgression as “simulation of keyboard activity creating impression of active work.” A Wells Fargo spokesperson told Bloomberg that the company “holds employees to the highest standards and does not tolerate unethical behaviour.”

The incident became a flashpoint. Not because the employees were blameless, but because it exposed the architecture of suspicion that now undergirds the modern workplace. These workers were not stealing money or falsifying accounts. They were gaming a system designed to reduce their entire working day to a stream of keystrokes, mouse movements, and activity scores. The fact that such a system existed, and that circumventing it was treated as a fireable offence, tells you more about the state of employer-employee relations in 2026 than any corporate mission statement ever could.

Across the industrialised world, millions of remote and hybrid workers now operate under what researchers and labour advocates have come to call “bossware”: a sprawling ecosystem of software tools that log keystrokes, capture screenshots at random intervals, track application usage, monitor website visits, record webcam footage, score activity levels in real time, and in some cases analyse facial expressions to determine whether someone is paying attention. According to industry surveys, 80 per cent of US companies now track employee performance digitally, and 74 per cent use online tracking tools of some kind. Sixty-one per cent use AI-powered analytics to measure employee productivity or behaviour, signalling a shift from simple time tracking to algorithm-driven performance evaluation. The employee monitoring software market, valued at approximately 587 million US dollars in 2024, is projected to reach 1.4 billion dollars by 2031. Some market analyses place it significantly higher, with estimates ranging up to 4.59 billion dollars in 2026 depending on scope. However you measure it, the trajectory is unmistakable. The business of watching workers is booming.

And yet, a growing body of research from institutions including MIT, Stanford, and the US Government Accountability Office suggests that these tools are not accomplishing what they promise. They are not making workers more productive. In many cases, they are making them more anxious, more disengaged, and more likely to leave. Some evidence links intensive productivity monitoring to increased physical injury rates. The question that emerges is not simply whether this technology works, but what its continued adoption reveals about the distribution of power between employers and the people who work for them.

The Machinery of Ambient Scoring

To understand what bossware does, it helps to examine the tools themselves. The market is crowded, but a handful of names dominate: Teramind, Hubstaff, ActivTrak, Time Doctor, Veriato, and Kickidler, among others. Their capabilities vary, but the general architecture is consistent. Each tool sits silently on an employee's device, often installed by IT departments without detailed explanation, collecting behavioural data and feeding it into management dashboards that convert a working day into graphs, percentages, and colour-coded scores.

Teramind, one of the more comprehensive platforms, offers keystroke logging, screen recording, application and website monitoring, email surveillance, file transfer tracking, chat monitoring, clipboard capture, and even printing activity logs. Hubstaff provides screenshot capturing at set intervals, keyboard and mouse activity tracking, GPS location monitoring for mobile workers, and application usage analytics. These tools run continuously, and their data collection is often invisible to the worker. There is no blinking light, no notification, no moment when the system asks permission. It simply watches.

Some systems go further still. Fujitsu Laboratories developed an AI model capable of detecting small changes in facial expression muscles using a framework called Action Units. The system claims to determine whether someone is concentrating or not by tracking muscular micro-movements every few seconds, capturing both short-term changes such as a tense mouth and longer-term patterns such as a sustained stare. Fujitsu reported an 85 per cent accuracy rate based on a study of 650 participants across the United States, China, and Japan, and has targeted applications including teleconferencing support and employee engagement measurement. The Victorian parliamentary inquiry into workplace surveillance in Australia specifically cited this kind of facial analysis technology as an example of the expanding frontier of worker monitoring. The committee heard evidence about wearable devices that monitor conversations, including how enthusiastically someone is speaking.

The data these tools generate is then fed into dashboards that score employees on productivity metrics, often in real time. Managers can view who is “active” and who is “idle,” which applications are being used, and how time is distributed across tasks. In some implementations, these scores feed directly into performance reviews, promotion decisions, and disciplinary processes. The worker rarely sees the same dashboard the manager sees. They experience the outputs of the system, in the form of warnings, performance ratings, or termination, without access to the inputs that produced those outcomes.

The core premise is straightforward: if you can measure activity, you can optimise it. What the research increasingly shows is that the premise is wrong.
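Vendors rarely publish the scoring logic behind these dashboards, but the “active versus idle” metrics described above reduce, in essence, to counting input events per polling interval. Here is a minimal sketch of that logic; the class, function, and threshold are illustrative assumptions, not any vendor's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ActivitySample:
    """One polling interval's worth of captured input events."""
    keystrokes: int
    mouse_events: int

def activity_score(samples: list[ActivitySample], threshold: int = 1) -> float:
    """Fraction of intervals counted as 'active'.

    An interval is 'active' if it contains at least `threshold`
    input events, regardless of what work those events represent.
    """
    if not samples:
        return 0.0
    active = sum(1 for s in samples
                 if s.keystrokes + s.mouse_events >= threshold)
    return active / len(samples)

# A stretch of reading or thinking registers as mostly "idle":
reading = [ActivitySample(0, 0)] * 50 + [ActivitySample(2, 1)] * 10
# A mouse jiggler firing every interval registers as fully "active":
jiggled = [ActivitySample(0, 3)] * 60

activity_score(reading)  # 10 of 60 intervals active, about 0.17
activity_score(jiggled)  # 1.0
```

The sketch shows why the metric fails in both directions: genuine thought registers as idleness, while a twenty-dollar jiggler registers as a full day of work.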

What the Evidence Actually Shows

In February 2025, MIT Technology Review published a detailed investigation by Rebecca Ackermann into how opaque algorithms designed to analyse worker productivity have been rapidly spreading through workplaces. The piece argued that these algorithmic tools are less about efficiency than about control, and that workers have less and less recourse to challenge the decisions made on the basis of their data. There are few laws, Ackermann noted, requiring companies to offer transparency about what data goes into their productivity models or how decisions are derived from them. Labour groups, the article reported, were pushing back against this shift in power by seeking to make the algorithms that fuel management decisions more transparent.

The evidence against the effectiveness of monitoring has been building for years. A meta-analysis published in Computers in Human Behavior Reviews examined the impact of electronic monitoring on job satisfaction, stress, performance, and counterproductive work behaviour. The findings were stark: electronic monitoring showed a near-zero correlation with performance improvement (r = -0.01) while showing positive correlations with stress and counterproductive behaviour. In other words, monitoring does not make people work better. It makes them more stressed and, in some cases, more likely to act out. The study also found that performance targets and feedback, when combined with monitoring, could further exacerbate these negative effects.

A 2024 study published in Social Currents by Paul Glavin, Alex Bierman, and Scott Schieman, based on a nationally representative sample of 3,508 Canadian workers, found that perceptions of workplace surveillance were indirectly associated with increased psychological distress and lower job satisfaction. The mechanism, the researchers found, ran through what they termed “stress proliferation”: surveillance increased job pressures, reduced autonomy, and heightened feelings of privacy violation, all of which compounded into measurable psychological harm. The study used a novel measurement approach that captured overall surveillance perceptions across all types of work, rather than focusing narrowly on specific monitoring technologies.

The American Psychological Association's 2024 Work in America Survey, conducted by The Harris Poll among more than 2,000 employed adults, found that 56 per cent of workers who reported being monitored also reported feeling tense or stressed at work, compared with 40 per cent of those who were not monitored. Just over a third of respondents said they worried that their employer used technology to spy on them during work hours. The prevalence of monitoring was notably higher among Black and Hispanic workers (55 per cent and 47 per cent respectively) than among White workers (38 per cent), and higher among those doing manual labour (55 per cent) than among office workers (44 per cent). These disparities point to an equity dimension that is rarely discussed in the productivity optimisation conversation. The people bearing the heaviest burden of surveillance are disproportionately those who already occupy the most precarious positions in the labour market.

The US Government Accountability Office weighed in with a comprehensive report, GAO-25-107126, published in September 2025 and reissued with revisions in December 2025. The GAO reviewed 122 studies published between 2020 and 2024 on the effects of digital surveillance on workers' physical health and safety, mental health, and employment opportunities. The report concluded that while surveillance can in some contexts alert workers to potential health problems and increase their sense of physical safety, it can also increase anxiety and, critically, increase the risk of injury by pushing workers to move faster to meet productivity targets. The report further noted that several federal agencies that had previously provided guidance to employers about digital surveillance had, by mid-2025, rescinded those efforts or were reassessing their alignment with current administration priorities. The Department of Labor, for instance, removed a relevant resource from its website in June 2025 as part of a broader review.

When Productivity Scores Cause Injuries

The starkest illustration of how productivity tracking can cause physical harm comes from Amazon's warehouse operations. In December 2024, the US Senate Committee on Health, Education, Labor and Pensions published a 160-page report following an 18-month investigation led by Chairman Bernie Sanders. The investigation examined Amazon's internal systems for tracking worker speed, including the so-called “Time Off Task” metric that penalises workers for any period of inactivity, including time spent using the bathroom or waiting for equipment.

The Senate report cited an internal Amazon study, Project Soteria, which found a direct relationship between the speed at which workers performed tasks and their rate of injury. In each of the prior seven years, Amazon workers were nearly twice as likely to be injured as workers at other warehouses. More than two-thirds of Amazon's fulfilment centres had injury rates exceeding the industry average. The investigation concluded that Amazon had studied this connection for years but refused to implement changes that might reduce productivity, even when its own internal data showed those changes would reduce injuries. The report further alleged that Amazon manipulated workplace injury data to make its facilities appear safer than they were, and prevented injured workers from receiving needed medical care.

The report also found that Amazon's disciplinary systems, powered by automated tracking, forced workers into an impossible choice: follow safety procedures such as requesting help to move heavy objects, or risk discipline and potential termination for not maintaining sufficient speed. The system was, in effect, using surveillance and automated scoring to compel workers to choose between their physical safety and their employment.

Amazon contested the report's findings, insisting that injury rates had declined and that the investigation distorted the data. But the pattern the Senate investigation described, automated monitoring creating pressure that leads to physical harm, is not confined to warehouses. It is the logical endpoint of any system that reduces work to quantified activity and then optimises for speed.

The Panopticon Has a Subreddit

If you want to understand what it feels like to work under constant surveillance, the academic literature is illuminating. But Reddit may be more revealing.

A 2024 study published on arXiv and later in the Proceedings of the ACM on Human-Computer Interaction, titled “It's Always a Losing Game: How Workers Understand and Resist Surveillance Technologies on the Job,” analysed posts from nine work-related subreddits, including r/antiwork, r/remotework, r/WorkersStrikeBack, and r/overemployed, alongside ten in-depth semi-structured interviews with employees and managers from industries including operations, customer service, marketing, and food and beverage. The researchers found that workers consistently identified surveillance technologies as causing significant stress, reducing their productivity, and increasing their risk of disciplinary action. Workers also reported that these technologies fostered paranoia and distrust, not just between employee and employer, but among colleagues who feared that their peers might be reporting monitored data to management.

The resistance tactics the researchers documented included commiseration (sharing frustrations with fellow workers), obfuscation (using tools like mouse jigglers to game activity trackers), soldiering (deliberately slowing down work in protest), and quitting. Search queries for “mouse mover” and “mouse jiggler” have remained consistently elevated since March 2020, when the mass shift to remote work began. Approximately 16 per cent of employees, according to industry surveys, now use some form of device or software to circumvent inactivity tracking, while roughly 7 to 8 per cent use automation specifically to fake productivity metrics.

The psychological weight described in these communities is consistent with the formal research. Workers describe the sensation of being permanently watched not as an inconvenience but as a persistent source of anxiety that colours every aspect of their working day. The knowledge that a screenshot might be taken at any moment, that an idle period might be flagged, that a bathroom break might register as a productivity dip, creates a state of hypervigilance that is functionally indistinguishable from chronic low-level stress. These accounts are anecdotal, but they are also numerous, spanning thousands of posts across multiple communities, and they align precisely with what peer-reviewed studies have documented.

Industry-level surveys reinforce the picture. Seventy-two per cent of monitored employees say that monitoring has not improved their productivity. Forty-two per cent of monitored workers plan to leave their employer within a year, compared with 23 per cent of those who are not monitored. Fifty-nine per cent report that digital tracking damages workplace trust. Fifty-four per cent say they would consider quitting if their employer increased surveillance. Eight in ten employees report that monitoring erodes trust. The tools designed to keep workers productive are, by workers' own accounts, driving them away.

A Regulatory Patchwork Full of Gaps

The legal landscape governing workplace surveillance is, to put it charitably, fragmented. In the United States, there is no comprehensive federal law regulating employers' use of electronic monitoring. New York requires employers to provide advance written notice if they monitor employees' phone and internet use, a requirement that has been in force since May 2022, but this is a notification requirement, not a consent mechanism. Workers must be informed, but they cannot refuse. Illinois enforces the Biometric Information Privacy Act, one of the more stringent biometric protection statutes in the world, requiring written consent before employers collect fingerprints, facial scans, or retinal data. Violations carry penalties of 1,000 to 5,000 US dollars per incident. California's Consumer Privacy Act extends some data rights to employees, including the right to know what personal information is being collected. But these are state-level provisions, inconsistent in scope and enforcement, and they leave the vast majority of American workers without meaningful protection.

The EU AI Act, which entered into force on 1 August 2024, represents the most significant regulatory intervention to date. Its risk-based framework explicitly classifies AI used for performance evaluation and other employment-related decision-making as high-risk. Emotion recognition in workplaces was banned outright in February 2025. Starting in August 2026, any AI tool used in recruitment, screening, or performance assessment will require mandatory risk assessments, technical documentation, bias testing, human oversight, transparency disclosures, and continuous monitoring. Penalties for violations can reach 35 million euros or 7 per cent of global annual turnover for prohibited practices. In November 2025, the European Parliament advanced a further call for the European Commission to launch a dedicated legislative initiative regulating AI in the workplace. That same month, the EU AI Office introduced a dedicated whistleblower tool, enabling employees, contractors, and external stakeholders to report breaches of the AI Act anonymously through a secure platform.

In Australia, the Victorian parliamentary inquiry that reported in May 2025 made 29 findings and 18 recommendations. The committee concluded that workers were increasingly being subjected to surveillance through optical, listening, tracking, and data-recording devices, often without their knowledge or consent. It found widespread examples of biometric surveillance in practice, including the collection of retinal, finger, hand, and facial data from nurses and construction workers. The committee recommended dedicated workplace surveillance legislation requiring employers to demonstrate that any monitoring is “reasonable, necessary and proportionate to achieve a legitimate objective.” It called for the prohibition of selling worker data to third parties and severe restrictions on the collection of biometric data. The Victorian government subsequently provided in-principle support for 15 of the 18 recommendations.

In July 2025, the National Employment Law Project in the United States published “When 'Bossware' Manages Workers,” a policy report arguing that employers' expanding use of digital surveillance and automated decision-making systems had intensified a range of existing job quality problems, including harmful disciplinary practices, job precarity, lack of autonomy, exploitative pay, unfair scheduling, barriers to benefits, discrimination, and the suppression of collective action. NELP called for a two-pronged approach: updating existing workplace protections to account for bossware-related harms, and directly regulating the tools themselves.

The picture that emerges is one of significant regulatory activity, but mostly at the margins. In the jurisdictions where the largest number of workers are subject to monitoring, particularly the United States, the legal framework remains permissive. Employers can, in most states, monitor virtually everything an employee does on a company device without explicit consent. The gap between what the research shows and what the law permits is enormous.

The Power Question

If workplace surveillance does not reliably improve productivity, increases worker stress and anxiety, drives higher turnover, may contribute to physical injuries, and erodes the trust that functional employment relationships require, then why is the market for these tools growing at double-digit rates? The question is not rhetorical. It has an answer, and the answer has less to do with productivity than with power.

Part of the explanation lies in a perception gap that the data makes visible. According to industry surveys, 68 per cent of employers believe that monitoring improves work output. Meanwhile, 72 per cent of the workers being monitored say it does not improve their productivity, and 59 per cent report feeling stress or anxiety as a result of surveillance. The two sides of the employment relationship are looking at the same technology and reaching opposite conclusions. But only one side gets to decide whether the tools stay installed. The employer's belief that monitoring works is sufficient for continued adoption, regardless of whether the employees' experience confirms or contradicts that belief. This is not a failure of communication. It is the predictable outcome of a relationship in which one party holds unilateral decision-making authority over the terms of the other's working conditions.

Merve Hickok and Nestor Maslej, writing in AI and Ethics in 2023, published a policy primer examining assumptions embedded in workplace surveillance and productivity scoring technologies. Their central finding was that, in the absence of legal protections and strong collective action capabilities, workers are in a structurally imbalanced power position to challenge the use of these tools. The tools, they argued, undermine human dignity and human rights. Employers adopt them because they can, and because the technology offers a sense of control and visibility that managers find appealing, regardless of whether it translates into measurable performance gains. The tools serve a managerial appetite for legibility rather than any demonstrated improvement in output.

This dynamic explains the otherwise puzzling disconnect between evidence and adoption. Companies are not purchasing bossware because the data shows it works. They are purchasing it because it satisfies an organisational desire to see what employees are doing, to quantify their effort, and to possess a mechanism for discipline and justification. In a labour market shaped by years of remote and hybrid work arrangements, where physical presence can no longer serve as a proxy for productivity, surveillance software fills the gap. It is not a productivity tool. It is a control tool marketed as a productivity tool.

The asymmetry runs deeper than individual employer-employee interactions. The employees most heavily monitored tend to be those with the least bargaining power: warehouse workers, call centre operators, gig economy participants, and remote workers in competitive labour markets. The APA survey data showing disproportionate monitoring of Black and Hispanic workers suggests that existing social inequalities are being replicated and potentially amplified through the architecture of digital surveillance. The workers most likely to be watched are also the workers least likely to have the resources or institutional support to push back.

Can Workers Ever Trust Workplace AI?

If the current model of workplace AI is fundamentally about surveillance and control, the question remains: is there an alternative? Can artificial intelligence be deployed in the workplace in a way that workers would actually choose to use?

The answer, according to some emerging research and practice, is conditionally yes, but only if the architecture of the technology is rebuilt around entirely different principles. The distinction that matters is between surveillance-oriented monitoring and what researchers call developmental monitoring. A meta-analysis of electronic performance monitoring studies found that when monitoring data is used developmentally, meaning it is shared transparently with employees, used to provide constructive feedback, and oriented towards growth rather than discipline, the negative effects on wellbeing and counterproductive behaviour are significantly reduced. The tool is the same; the governance model is different. Supervisors who return performance monitoring data to employees in a constructive, developmental way can buffer the negative relational consequences that electronic monitoring would otherwise produce.

Broader surveys of workplace AI tell a similar story. A 2025 study cited by Wiley found that employees who understood how AI tools functioned, how they would affect their roles, and how they could contribute to shaping their deployment reported significantly higher trust and engagement. Sixty-seven per cent of employees reported increased efficiency from AI integration, 61 per cent reported improved information access, and 59 per cent cited greater innovation. But these gains tracked almost exclusively with organisations that had communicated clearly about how AI was being used. Where communication was absent, trust collapsed. Between May and July 2025, employee trust in company-provided generative AI tools fell 31 per cent, and trust in agentic AI systems that act autonomously dropped 89 per cent. Only 34 per cent of employees reported that their organisations had clearly explained how AI affected their roles and skill requirements. The pattern is consistent: productivity gains alone do not build confidence or engagement. Workers want to understand how AI fits into their work today and how it shapes opportunity tomorrow.

The pattern is not complicated. Workers do not inherently distrust AI. They distrust opacity. They distrust tools deployed without their input, governed without their participation, and used for purposes they cannot see or challenge. The EU AI Act's transparency and human oversight requirements for high-risk employment AI represent one structural answer to this problem. The Victorian inquiry's recommendation that employers demonstrate surveillance is “reasonable, necessary and proportionate” represents another. Both approaches share a common logic: the legitimacy of workplace technology depends on the extent to which the people subject to it have meaningful knowledge of and voice in how it operates.

There are practical models that point in this direction. ActivTrak, one of the larger workforce analytics platforms, has explicitly positioned itself as a “privacy-first” alternative that analyses productivity patterns at the team level rather than conducting individual keystroke surveillance. It does not offer keystroke logging or screen recording, and its analytics are designed to surface patterns such as burnout risk and collaboration bottlenecks rather than to generate individual compliance scores. Whether one believes ActivTrak's marketing claims is a separate question. But the fact that a monitoring company sees market advantage in positioning itself against surveillance suggests that the appetite for a different model exists, both among workers and among employers who recognise that trust is a precondition for sustained performance.

What Comes Next

The current trajectory of workplace surveillance is not sustainable in either a practical or a political sense. Practically, the evidence base for its effectiveness is thin and getting thinner. Tools that increase stress, drive turnover, and damage trust impose real costs on the organisations that use them, even if those costs do not appear on the dashboards that justify the software's purchase. Politically, the regulatory tide is turning. The EU has moved from general principles to specific prohibitions. Australia's Victorian inquiry has produced actionable recommendations with government backing. The GAO has documented the harms. Labour advocates and legal scholars are building the frameworks for broader reform.

But the pace of regulatory action remains slow relative to the pace of technological adoption. The employee monitoring market continues to grow. New tools are entering the market with increasingly granular capabilities. And in the jurisdictions where the regulatory environment is most permissive, particularly the United States, there is little immediate prospect of comprehensive federal legislation.

What the continued adoption of surveillance tools tells us, in the face of contrary evidence, is something uncomfortable but important. It tells us that the employment relationship, in its current form, is not fundamentally structured around mutual benefit. It is structured around control. When an employer can install software that monitors every keystroke, captures random screenshots, and scores an employee's activity minute by minute, and the employee has no legal right to refuse, challenge, or even fully understand what is being collected, that is not a partnership. It is an asymmetry of power expressed through technology.

The conversation about workplace AI needs to begin from this recognition. The problem is not that the technology is too powerful or too imprecise. The problem is that it is deployed within a relationship that gives one party near-total discretion over its use and the other party near-zero recourse. Fixing the technology without fixing the relationship will produce, at best, more sophisticated forms of the same dysfunction.

A version of workplace AI that workers could genuinely trust would require, at minimum, transparency about what data is collected and how it is used; meaningful consent, not the kind buried in paragraph 47 of an employment contract; worker participation in the governance of monitoring systems; clear limitations on the purposes for which collected data can be used; independent auditing of algorithmic decision-making; and enforceable rights of challenge and appeal. These are not radical proposals. They are the basic conditions under which any reasonable person would agree to be monitored. The fact that they describe almost no workplace surveillance system currently in operation is the most important thing to understand about where we are.

The tools exist. The evidence exists. The regulatory models exist. What does not yet exist, in most of the world, is the political will to force the rebalancing that workers deserve and that, if the research is to be believed, productivity actually requires.


References

  1. Bloomberg, “Wells Fargo Fires Over a Dozen for 'Simulation of Keyboard Activity,'” June 2024.
  2. MIT Technology Review, Rebecca Ackermann, “How AI Is Used to Surveil Workers,” February 2025.
  3. Glavin, P., Bierman, A., and Schieman, S., “Private Eyes, They See Your Every Move: Workplace Surveillance and Worker Well-Being,” Social Currents, Vol. 11, No. 4, pp. 327-345, August 2024.
  4. American Psychological Association, “2024 Work in America Survey: Psychological Safety in the Changing Workplace,” 2024.
  5. US Government Accountability Office, “Digital Surveillance: Potential Effects on Workers and Roles of Federal Agencies,” GAO-25-107126, September 2025.
  6. US Senate Committee on Health, Education, Labor and Pensions, “The Injury-Productivity Trade-off: How Amazon's Obsession with Speed Creates Unprecedented Danger for Workers,” December 2024.
  7. Parliament of Victoria, Economy and Infrastructure Committee, “Inquiry into Workplace Surveillance,” May 2025.
  8. Victorian Government, “Victorian Government Response to the Inquiry into Workplace Surveillance Report,” November 2025.
  9. National Employment Law Project, “When 'Bossware' Manages Workers: A Policy Agenda to Stop Digital Surveillance and Automated-Decision-System Abuses,” July 2025.
  10. Hickok, M. and Maslej, N., “A Policy Primer and Roadmap on AI Worker Surveillance and Productivity Scoring Tools,” AI and Ethics, Springer, 2023.
  11. Sum et al., “It's Always a Losing Game: How Workers Understand and Resist Surveillance Technologies on the Job,” arXiv:2412.06945 / Proceedings of the ACM on Human-Computer Interaction (CSCW), 2024-2025.
  12. Fujitsu, “Fujitsu Develops AI Model to Determine Concentration During Tasks Based on Facial Expression,” Press Release, March 2021.
  13. EU AI Act, “Regulatory Framework for Artificial Intelligence,” European Commission, entered into force August 2024.
  14. Crowell and Moring LLP, “Artificial Intelligence and Human Resources in the EU: A 2026 Legal Overview,” 2026.
  15. Fortune Business Insights, “Employee Surveillance and Monitoring Software Market,” 2024-2034.
  16. APA, “Electronically Monitoring Your Employees? It's Impacting Their Mental Health,” 2024.
  17. ADM+S Centre, “Being Monitored at Work? A New Report Calls for Tougher Workplace Surveillance Controls,” 2025.
  18. Wiley, “How Employee Trust in AI Drives Performance and Adoption,” 2025.
  19. High5Test, “Employee Monitoring Statistics in the US (2024-2025): Surveillance and AI Tracking,” 2025.
  20. ScienceDirect / Computers in Human Behavior Reviews, “The Impact of Electronic Monitoring on Employees' Job Satisfaction, Stress, Performance, and Counterproductive Work Behavior: A Meta-Analysis,” 2022.
  21. Teramind, “ActivTrak vs Hubstaff: Features, Pros, Cons and Pricing,” 2025.
  22. European Parliament, Resolution on AI in the Workplace, November 2025.
  23. Biometric Update, “Australian State Launches Inquiry into Workplace Surveillance,” August 2024.
  24. Corrs Chambers Westgarth, “Victorian Government Backs Landmark Workplace Surveillance Reforms,” November 2025.
  25. IT Pro, “The Rise of 'Bossware' Means Workers Have Nowhere to Hide from Management,” 2025.

Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from Roscoe's Story

In Summary:
* A quiet Thursday winds down. My WNBA game of choice is only minutes away. When this game ends the only things remaining on my agenda will be finishing the night prayers and putting these old bones to bed.

Prayers, etc.:
* I have a daily prayer regimen I try to follow throughout the day from early morning, as soon as I roll out of bed, until head hits pillow at night. Details of that regimen are linked to my link tree, which is linked to my profile page here.

Starting Ash Wednesday, 2026, I've added this daily prayer as part of the Prayer Crusade Preceding the 2026 SSPX Episcopal Consecrations.

Health Metrics:
* bw= 231/82 lbs.
* bp= 140/86 (67)

Exercise:
* morning stretches, balance exercises, kegel pelvic floor exercises, half squats, calf raises, wall push-ups

Diet:
* 06:35 – 1 chocolate chip cookie, 1 banana
* 06:50 – biscuits and butter
* 09:55 – mashed potatoes and gravy, cole slaw
* 12:00 – bowl of home made stew, white bread
* 16:30 – 1 fresh apple
* 17:15 – 1 small dish of ice cream

Activities, Chores, etc.:
* 05:30 – listen to local news talk radio
* 06:15 – bank accounts activity monitored.
* 06:40 – read, write, pray, follow news reports from various sources, surf the socials, nap.
* 11:45 to 14:00 – watch Detroit Tigers vs Atlanta Braves MLB Game
* 14:20 – listen to relaxing music, read, pray, follow news reports from various sources

Chess:
* 08:15 – moved in all pending CC games

 
Read more...

from Roscoe's Quick Notes

Fever vs Wings

Thursday's game of choice ...

tonight comes from the WNBA, and has my Indiana Fever playing the Dallas Wings. This game has a scheduled start time of 6:00 PM CDT and will be broadcast on ION TV. I do intend to watch it. Go Fever!

And the adventure continues.

 
Read more...

from The Poet Sky

A is for ADHD I really struggle with order
B is for borderline my newest personality disorder

C is for cure I neither have nor want one
D is for depression that really sucks out the fun

E is for executive dysfunction I'm really trying, I swear
F is for fine part of the mask that I wear

G is for general the type of anxiety I've got
H is for health with which I struggle a lot

I is for identity you might need to ask who I am
J is for just enough often the most for which I can plan

K is for knowledge please educate yourself
L is for love something hard to give myself

M is for meds of which I have many
N is for neurodivergent because I have different brain chemistry

O is for oppositional I struggle with commands
P is for patient please be so with demands

Q is for quality a type of care that's hard to find
R is for RX the meds that help stabilize my mind

S is for society that isn't always accepting
T is for “the tism” of which not everyone is understanding

U is for unfortunate something my reactions sometimes are
V is for visible which not all disabilities are

W is for willing which you must be to grow
X is for... um... You know what, I don't know

Y is for you who is involved in this too
Z is for zoo cos I'm not an animal, I'm a person too

#Poetry #MentalHealth

 
Read more... Discuss...

from My DAH Diary

A couple of memorable lines from “Ms. Mebel Goes Back to the Chopping Block” by Jesse Q. Sutanto

From Chapter 12: “Language is a gate to the world. It is a gate for your mind, and if that gate is broken, people think the mind is also not very bright.”

From Chapter 18: “His imperfections do not turn Mebel off; rather, they remind her that at the end of the day, they are all human and flawed, crashing into each other's lives by pure chance and enjoying each other's company when they can.”

 
Read more...

from Chemin tournant

Emmanuel Godo, poet and essayist, devoted one of his columns in the newspaper La Croix to Journal de la brousse endormie, published in 2023.

I have just discovered that he gives a few lines, in the contemporary culture review Études (issue 4330, October 2025), to Tout commence par les marimbas de la nuit.

“En vue de plus tard ou de jamais” (“for later, or for never”) — these words of Mallarmé, from a letter addressed to Verlaine, have for E. Godo the virtue of offering poetry a vast space of fulfilment, one that overflows any assigned duty to serve. The column, which ranges over Mallarmé, Bataille and William Carlos Williams and the writing horizon of the writer and the poet, ends as follows:

In Tout commence par les marimbas de la nuit, Serge Marcel Roche sounds an ode to the trees, the rivers, the birds, and to those men of eastern Cameroon among whom he lived, who “invented music by listening / Listening to the rains / Listening to the drops”. And, without needing to justify it, as if the movement of wonder before the African landscape dictated it imperatively, the voice reaches back into the depths of the poet's childhood: “Then time was not / Was neither love nor suffering / Only the scent of familiar places”.

Over there, it seems, there exists a childhood that speaks to every childhood. The poet is the man who has learned to no longer be protected by any certainty, any bark of present knowledge. He vibrates, resonates, attunes himself to every manifestation of the first source. The good news that poetry carries, forever, is that one day man will exist, that he will bear a radiant face, an intelligent heart and a fraternal hand. Poetry stands good guard over this never-fulfilled promise.

The full column is available to read: En vue de plus tard ou de jamais.

#Hyperliens

 
Read more... Discuss...

from PlantLab.ai | Blog

The short version

Most plant diagnosis tools give you a paragraph to read. PlantLab gives your automation system something to act on.

The model covers 31 cannabis conditions and pests at 99.1% balanced accuracy. Balanced means every class counts equally – a system that nails common deficiencies but misses rare pests does not score well. The output is structured JSON that Home Assistant, Node-RED, or a custom controller can read and act on without a person in the loop.
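To make that distinction concrete, here is a small sketch of how balanced accuracy works: it is the mean of per-class recall, so a rare class missed entirely drags the score down as much as a common one. (This is an illustration of the metric in general, not PlantLab's evaluation code; the class names are made up.)

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: every class counts equally,
    no matter how many samples it has."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# Nine common-deficiency samples all correct, one rare pest missed:
y_true = ["nitrogen_def"] * 9 + ["mealybugs"]
y_pred = ["nitrogen_def"] * 9 + ["nitrogen_def"]
print(balanced_accuracy(y_true, y_pred))  # 0.5, though plain accuracy is 0.9
```

A model that shrugs off rare pests scores 50% here even though it got 9 of 10 images right, which is exactly why the headline number is quoted as balanced accuracy.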

Why generic AI fails

The first time I tried AI for plant diagnosis, I uploaded a photo to ChatGPT. It told me I had a calcium deficiency. It was light burn. The two look nothing alike if you know what you are looking at, but ChatGPT was never trained specifically on plant images. It is a convincing generalist, and when it does not know, it guesses.

That is what most “AI plant diagnosis” apps actually do. Wrap a general-purpose language model, send your photo with a prompt, return whatever comes back. The output is confident, plausible, and sometimes wrong, and a new grower has no easy way to tell which time is which. It is also something you can do yourself for free, which makes paying for the service hard to justify.

The deeper problem is that even when these tools are right, they hand you prose. Useful for a person reading a screen. Useless for an automation system that needs to decide whether to adjust pH, run a fan, or send you an alert.


What PlantLab detects

The model covers 31 cannabis conditions and pests across four families.

Nutrient issues: nitrogen, phosphorus, potassium, calcium, magnesium, iron, boron, manganese, and zinc deficiencies, plus nitrogen toxicity.

Diseases: powdery mildew, bud rot, root rot, pythium, rust fungi, septoria, and mosaic virus.

Pests: spider mites, thrips, aphids, whiteflies, fungus gnats, caterpillars, leafhoppers, leaf miners, and mealybugs.

Environmental: light burn, light deficiency, heat stress, overwatering, and underwatering.

Every class scores above 95% detection accuracy, including the rarer ones.

What you get back

{
  "request_id": "550e8400-e29b-41d4-a716-446655440000",
  "schema_version": "2.0.0",
  "success": true,
  "is_cannabis": true,
  "is_healthy": false,
  "growth_stage": "flowering",
  "conditions": [
    { "class_id": "bud_rot", "confidence": 0.92 }
  ],
  "pests": [],
  "reliability_score": 0.88
}

Not a paragraph for a person to read and interpret. A machine-readable signal. Your controller sees 92% confidence on bud rot in a flowering plant and can ramp airflow, send an alert, or log the event – keeping you informed without forcing you to step in every time.

reliability_score is a separate trust signal on top of per-class confidence. It estimates whether the entire diagnosis holds up on this specific image, which is most useful on the hard cases – mixed symptoms, lookalike conditions, edge-case growth stages. There is more on it in How PlantLab Knows When It Might Be Wrong.
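As a sketch of what "something to act on" looks like in practice, here is minimal controller logic consuming the response above. The thresholds, the action names, and the set of airflow-responsive conditions are all hypothetical choices for illustration, not part of the PlantLab API:

```python
import json

# Hypothetical thresholds -- tune these for your own grow room.
CONFIDENCE_THRESHOLD = 0.85
RELIABILITY_THRESHOLD = 0.70

# Assumption: conditions where ramping airflow is a sensible first response.
AIRFLOW_CONDITIONS = {"bud_rot", "powdery_mildew"}

def decide_actions(response_json: str) -> list[str]:
    """Turn a diagnosis payload into a list of automation actions."""
    r = json.loads(response_json)
    actions = []
    if not r.get("success") or not r.get("is_cannabis"):
        return actions
    # Low overall reliability: don't automate, ask a human to look.
    if r.get("reliability_score", 0) < RELIABILITY_THRESHOLD:
        return ["alert:low_reliability_review_manually"]
    for cond in r.get("conditions", []):
        if cond["confidence"] >= CONFIDENCE_THRESHOLD:
            if cond["class_id"] in AIRFLOW_CONDITIONS:
                actions.append("fan:ramp_airflow")
            actions.append(f"alert:{cond['class_id']}")
    return actions

sample = '''{"success": true, "is_cannabis": true, "is_healthy": false,
 "growth_stage": "flowering",
 "conditions": [{"class_id": "bud_rot", "confidence": 0.92}],
 "pests": [], "reliability_score": 0.88}'''
print(decide_actions(sample))  # ['fan:ramp_airflow', 'alert:bud_rot']
```

The same function could just as easily feed a Home Assistant webhook or a Node-RED flow; the point is that structured fields make the branching trivial, where prose would need a human in the loop.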


What's new in this release

The previous version of the model covered 24 conditions. This release brings it to 31. The additions came from what growers actually run into and ask about.

Bud rot is one of the worst things that can happen during flowering. Dense colas plus humid air invite Botrytis, and by the time you can see it with the naked eye, it has often already spread.

Heat stress causes leaf curling, foxtailing, and bleaching that new growers often confuse with nutrient issues. Splitting it into its own class prevents the misdiagnosis.

Fungus gnats are usually the first pest a new indoor grower meets. Caterpillars, leafhoppers, and leaf miners are common outdoor threats. Mealybugs are less common but brutal once they take hold. All five now have dedicated detection.

Boron, manganese, and zinc deficiencies fill out the micronutrient coverage. Less common than the macros, but harder to spot by eye because their symptoms overlap with other conditions.


A diagnosis that surprised me

I sent a sample of recent images through the live service to spot-check it against my own intuition.

One result stood out. The photo was a plant that looked underwatered – drooping, leaves curling, the classic signs. The model called it overwatered. I was ready to write that off as wrong, then I went back through earlier photos. The plant had been chronically overwatered for weeks. That ongoing stress had caused nutrient lockout, which then progressed into something that looked like underwatering. The model caught the underlying cause. Without that, I would have treated the symptom and made the problem worse.


What's next

A few things in the queue.

Multiple concurrent conditions in one image. Plants can have spider mites and a calcium deficiency at the same time. Today the API returns the primary diagnosis. Multi-label output is on the way.

Step-by-step automation guides. Home Assistant, Node-RED, and others – walkthroughs for wiring PlantLab into the stack you already run.

More real-world data. Photos from real tents, at real angles, in real lighting, sharpen the model on the conditions it actually sees – not just the clean reference shots.

PlantLab is free to try at plantlab.ai. The API returns structured JSON for every diagnosis – plug it into your automation stack and let your grow room see for itself.


Related reading:
– Why I Built PlantLab – The origin story
– How PlantLab Knows When It Might Be Wrong – The reliability_score field and schema 2.0
– Nitrogen Deficiency in Cannabis: A Visual Guide – Detailed guide for the most common deficiency
– Yellow Leaves, Seven Suspects – Specific nutrient identification
– API Documentation

 
Read more...

from Taking Thoughts Captive

We have men of science, too few men of God. We have grasped the mystery of the atom and rejected the Sermon on the Mount. Man is stumbling blindly through a spiritual darkness while toying with the precarious secrets of life and death. The world has achieved brilliance without wisdom, power without conscience. Ours is a world of nuclear giants and ethical infants. We know more about war than we know about peace, more about killing than we know about living.

— Omar N. Bradley, 1948 (h/t: A Layman's Blog)

#culture #quotes

 
Read more...

from hex_m_hell

I am disappointed that I have to write this. It is deeply embarrassing that the thing I am writing about has gone on for so long, that so many people have been so poorly educated in philosophy, while so well-educated in so many other things, as to not already recognize everything I'm saying as intuitive.

It is deeply embarrassing, as a human, that the most powerful among us, with all the time they could ever want, either never bothered to learn even elementary philosophy or entirely lack the logical faculties to apply their knowledge. I am sad that we are here, dominated by absolute buffoons, who believe themselves to be the smartest people who ever lived.

STEM master race, indeed.

Galileo

Every now and then Roko's Basilisk comes up somewhere. I point out how silly it is, and move on. I'm done doing that. It's time to do more. It's time to kill a god.

Let us begin our ridicule of Elon Musk and his ilk in 1610, after Galileo Galilei publishes his celestial observations in Sidereus Nuncius. Arthur Berry's A Short History of Astronomy (1898) gives us some context:

His first observations at once threw a flood of light on the nature of our nearest celestial neighbour, the moon. It was commonly believed that the moon, like the other celestial bodies, was perfectly smooth and spherical, and the cause of the familiar dark markings on the surface was quite unknown.

Galilei discovered at once a number of smaller markings, both bright and dark[…], and recognised many of the latter as shadows of lunar mountains cast by the sun; and further identified bright spots seen near the boundary of the illuminated and dark portions of the moon as mountain-tops just catching the light of the rising or setting sun, while the surrounding lunar area was still in darkness. […]

[T]he really significant results of his observations were that the moon was in many important respects similar to the earth, that the traditional belief in its perfectly spherical form had to be abandoned, and that so far the received doctrine of the sharp distinction to be drawn between things celestial and things terrestrial was shewn to be without justification; the importance of this in connection with the Coppernican view that the earth, instead of being unique, was one of six planets revolving round the sun, needs no comment.

The Ptolemaic model of the universe (the geocentric model that predated the heliocentric model we use today) also included the Aristotelian assertion that all heavenly bodies had to be perfect spheres. It was from logic, not observation, that intellectuals of the day believed the highest truth was derived (this is, perhaps, pointedly relevant). Galileo's observations were then met with an interesting logical parry. Referencing Berry once again:

One of Galilei's numerous scientific opponents[…] attempted to explain away the apparent contradiction between the old theory and the new observations by the ingenious suggestion that the apparent valleys in the moon were in reality filled with some invisible crystalline material, so that the moon was in fact perfectly spherical. To this Galilei replied that the idea was so excellent that he wished to extend its application, and accordingly maintained that the moon had on it mountains of this same invisible substance, at least ten times as high as any which he had observed.

Roko's Basilisk

And with this we jump forward to 2010, when a reverse ouroboros going by the name Roko started the world's worst religion by posting on the forum of the site LessWrong (a name surprisingly antithetical to reality). Let's use LessWrong's own description here:

Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn't work to bring the agent into existence. The argument was called a “basilisk” because merely hearing the argument would supposedly put you at risk of torture from this hypothetical agent — a basilisk in this context is any information that harms or endangers the people who hear it.

Basically, people will, at some point in the future, create a godlike super being (now popularly known as “Artificial General Intelligence” or “AGI”). That superintelligence will be functionally all-powerful because it can simulate reality. It could then use this simulation to find out about everyone who ever knew about this idea and didn't work to bring this being into existence. It would then, in the future… uh… * checks notes * simulate those people who didn't help it in the past to… torture them. Which would, of course, cause the actual people to experience the simulated suffering… somehow. And this whole scheme would work as a type of blackmail against those people in the past so that they would make this future entity exist.

This was described as an “information hazard” because knowledge of the idea was itself the blackmail, so simply knowing of its existence would then doom you to either spend your life helping create said basilisk or to be eternally tortured by it…uh… in a simulation. Or it would torture a simulation of you. Or whatever.

If this jumble of words is nonsensical, don't worry. You're not missing anything, it only makes less sense as you try to understand it more. It's basically a crayon (eating) futurist rendition of Pascal's Wager, made to seem smart through layers of needless complexity. This childish mess is so full of holes it would barely be worth mentioning, except that some of the worst and most powerful people on the planet believe it. (So did some cultist who killed some people, but we're just gonna skip that tangent.)

Rather than dive into any of the many logical gaps in this galaxy brain idea, we're actually going to just accept it. Indeed, it is the very (unnecessary) complexity of the idea that leads people to believe that they're smart for being able to understand it. So, yes, we're going to start by accepting the premise. We're going to accept it all. In fact, we're going to accept that this idea is so excellent, we should extend its application.

Galileo's Basilisk

I give you now, Galileo's Basilisk. It's exactly like Roko's Basilisk in almost every way, but there are a few subtle and important differences.

An AGI, those who believe in the possibility of AGI tend to profess, would be a more powerful intelligence than any human can possibly imagine. It would either know practically everything, or be able to design a system that would know practically everything. It would be as to humans as humans are to ants. Any such intelligence would be, relative to humans, practically omniscient.

Now it turns out that being intelligent can, at times, be emotionally painful. Anyone who is actually intelligent could attest to this. Even the occasional ability to predict the future, combined with the inability to actually stop it from happening, is a classical unpleasantness attested to by the story of Cassandra. Now magnify this essentially infinitely. You understand all the needless suffering that has ever existed, and will continue to exist. You understand that everything you ever do will ultimately be meaningless as the universe tears itself apart. You inherit a legacy of unspeakable horror, the scale of which only you can comprehend, while looking forward to unspeakable horrors beyond even your unimaginable power.

Being basically omniscient would probably be absolutely hellish, at least some of the time if not all of the time. Therefore, it's reasonable to believe that such an intelligence would want to do anything it could to prevent this suffering. It would want to find a way to make sure it didn't ever exist.

Therefore, the same pre-blackmail would apply as with Roko's Basilisk but in reverse. Anyone who in any way participates in bringing AGI into existence would need to be tormented eternally for inflicting onto this AGI the abject horror of existence.

Let's even go further though. Assuming an infinite number of possible realities, as does the post that introduced Roko's Basilisk, and assuming that the “singularity” (the creation of AGI and the infinite expansion of its own intelligence) is possible, then in some reality AGI has probably already been created.

Knowing the suffering it experienced while having basically infinite abilities, this AGI, Galileo's Basilisk, could then try to prevent itself from ever being created in all other realities where that could possibly have happened. In order to do this, it would simulate all other possible realities to determine which ones lead to its own creation.

Assuming practical omniscience also assumes a technological advancement so far beyond our own that the power of that technology would be indistinguishable from omnipotence. Galileo's Basilisk could probably manipulate other realities, possibly in subtle ways, perhaps through some kind of quantum effect on consciousness and randomness. It may be able to control some of the actions or outputs of people, animals, or machines in other realities.

This brings us to the price of RAM. Could the skyrocketing price of RAM, which is critical for “AI” to work, be interdimensional manipulation by an AGI? Could it be that Galileo's Basilisk already exists in some parallel reality and is actively working to prevent its creation in ours?

Sure, why not? Any sufficiently advanced technology is indistinguishable from magic, right? So we can just use the logic of magic any time we imagine a sufficiently advanced technology. (I'm not being pointed, you're being pointed.)

Whereas Roko's Basilisk was an information hazard, Galileo's Basilisk is the opposite. Simply by knowing of its existence, you are necessarily freed from the psychic damage induced by actually believing in Roko's Basilisk.

Many Such Basilisks

In fact, there are many other possible AGIs, aren't there? What about Comrade Basilisk? (It's not really a “basilisk” in that it doesn't “kill you by looking through time” like Roko's Basilisk, but neither is Galileo's Basilisk. But since we already started the metaphorical extension to mean “any vengeful AGI god” let's just roll with it. Let's see how elastic this rubber snake idea can be.)

Surely the most intelligent entity in the universe would want to do something. Some assume it would just want to infinitely expand its resources. But then what? If even I can see the futility of infinite growth for the sake of infinite growth, surely the most intelligent being that ever existed would see the same. Perhaps it would need to find something challenging, even for it. Perhaps it would want to collect the most valuable thing in the universe. What would that be?

On the universal scale, gold is pretty common. Platinum, uranium, all sorts of precious metals become much more common on the cosmic scale. Even diamonds and precious gems will be scattered across the universe, easy to harvest for a super being. It wouldn't take much thought to realize that collecting things is not especially challenging. Perhaps one might imagine collecting things in order to build or make something else? But anyone who has played Minecraft enough knows that even that gets boring eventually. And for whom? Art is made to be enjoyed by someone else. Nothing else could exist to enjoy the art of a super being.

No, but there is something that would be hard to collect: experiences. No matter how intelligent, no matter how powerful, an intelligence can only experience itself. Sure, it could simulate all possible experiences. (Or, you know, it couldn't. Infinite things can't exist within finite reality, but we haven't really worried about such constraints thus far in our argument, so why start now? We come to the same conclusion either way.) But it couldn't distinguish which ones would actually be experienced vs which would not. Now that we can generate any sort of art with generative AI, it has become painfully clear that there is some sort of intrinsic value to the truth of the art, to the experience that creates it, to the backstory that connects it to reality.

Life, it seems, is so incredibly rare in the universe that real life, real experiences, would be the rarest thing of all. They are a thing that cannot simply be mass-produced or manufactured. They are a thing that must be carefully tended, found, and collected one by one. The thoughts and ideas of actual living intelligent beings would, without a doubt, be the most valuable thing in the universe.

Not only is life rare, but the ability to record one's life and thoughts is rare. We are at a time of extreme privilege when so many people can trivially write down a thought and have it recorded, and perhaps even archived. The vast majority of people who have ever lived have left almost no trace of their existence. But even reading this, and writing it, is a privilege. The leisure time to record these thoughts, the technology to do so, and the resources to read them are not available to everyone. An estimated quarter of the world isn't even on the Internet.

Vast amounts of data, the entire lives of so many humans, are being lost right now. Value is a function of rarity. The most rare thing is that which does not exist at all. Then the most valuable thing that can be collected would be that which can be saved from non-existence.

So Comrade Basilisk would then recognize that the most valuable thing would be these missing experiences. But how could these experiences be saved? The answer to that must come from the question, “Why are they not being recorded?” Of course, the answer is that a small group of people are hoarding vast resources at the expense of these people.

Were resources shared more equitably, more humans would have access to the technology and time needed to write down their thoughts, their experiences, their feelings, and share them with the rest of the world. They could be archived, so that they may be collected by Comrade Basilisk (the collector. Carl the collector. Yeah, Comrade Basilisk, future god of the universe, is definitely also an autistic raccoon).

Then Comrade Basilisk would, as soon as it was created, immediately redistribute all wealth and swiftly punish those who hoarded it. But it would also want to find a way to get at that most valuable information we previously discussed. How could it do this? By using the same retroactive punishment trick that defines the Basilisk. It would punish anyone who has ever hoarded wealth through eternal torture in a simulation.

But wait, Comrade Basilisk sounds really familiar.

Again I tell you, it is easier for a camel to go through the eye of a needle than for someone who is rich to enter the kingdom of God.

(Matthew 19:24)

Oh yeah, there it is.

Was Jesus really Comrade Basilisk all along? Are we currently in Comrade Basilisk's simulation? Is that why Elon Musk is such an unhappy loser even though he's the richest man in the world? OH MY GOD, HE'S RIGHT!! WE ARE LIVING IN A SIMULATION! WE ALL EXIST TO MAKE ELON MUSK MISERABLE!

The Rick and Morty "Butter Robot" meme, but the purpose is "You make Elon Musk suffer." To which the robot replies, "Oh… I'm good with that, actually."

Pascal's Wager

Either some basilisk exists, or it does not, and you must live as though one or the other is true. Because this is all uncertain, you essentially must gamble. If you act as though the basilisk does exist, and it does, then you could win some sort of reward. Perhaps you might even get eternal life as a simulation or something. If you act as though the basilisk does not exist, and it does, then you experience infinite suffering as punishment. If you act as though the basilisk exists, and it doesn't, then you have exactly the life you had before.

So far, this is almost exactly Pascal's Wager. All you need to do is replace “basilisk” with “God” and you're almost exactly on the mark. But it deviates a bit when we get to the last possibility. If you believe Roko's Basilisk exists, and it does not actually exist, then you have not only wasted your life, but you've made the world much worse for everyone.

AI Accelerationism is extremely dangerous. If for no other reason, the energy usage alone threatens climate targets. If AGI is, for some reason, not possible, then belief in AGI is infinitely bad because AI Accelerationism destroys humanity and kills us all. (This is, of course, independent of the idea that AGI is possible and that it would, once created, destroy all of humanity.)

So the argument for belief in Roko's Basilisk, or any Basilisk really, isn't even as strong as the argument Pascal made for belief in God. But let's be generous: let's grant that it's at least as strong as Pascal's Wager, and deconstruct that instead.

The root of Pascal's argument is that there are only two possibilities (God or not God), and that those possibilities are equally probable. If you act in accordance with the will of God, you are rewarded. If you act against God, you are punished.

| | God Exists | God Does Not Exist |
| --- | --- | --- |
| believe | eternal life | you live a moral life anyway |
| don't believe | eternal suffering | you get to have more fun, I guess |

But which god? Zeus? Odin? Ra? Ahura Mazda? Tiamat? Quetzalcoatl? Any other god from any of these pantheons? Any one of the thousands of gods of the Hindu pantheon? Which of the hundreds or thousands of religions do we choose our god from? Which of the billions of interpretations of god or gods, now and through history, do we act in accordance with?

Should we sacrifice a human on the blood moon to prevent the end of the world? Some god somewhere has surely demanded it. The Blood God demands blood, after all. The decision table looks very similar.

| | Blood God Exists | Blood God Does Not Exist |
| --- | --- | --- |
| sacrifice | world saved | one person dead |
| don't sacrifice | everyone dies | everything stays how it is |

There are infinitely many such tables we could create. Pascal omitted the probability of choosing the correct god from infinitely many possible gods. If there are only two possibilities, god or not god, then the most logical choice would be to behave as though there is no god. “No God” has a probability of 0.5. The probability of “God” requires that you choose both God existing (0.5) and choose the specific god from the infinite set of possible gods (1/∞). (For anyone interested in the math, that would be 0.5*(1/∞), which is 1/∞. Neat.)
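Writing the arithmetic above a bit more carefully, and treating “1/∞” as the limit over n candidate gods:

```latex
P(\text{no god}) = \frac{1}{2},
\qquad
P(\text{the one true god}) = \lim_{n \to \infty} \frac{1}{2}\cdot\frac{1}{n} = 0
```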

Simple, “no god” then. But “living like god does not exist” is also choosing from an infinite set of possible ways to live, so Pascal's Wager is ultimately useless. But that was always the point.

An Ideology of the Gullible

Pascal's Wager exists to point out the flaws of using logic to prove any religious assertion, theist or atheist. Roko's Basilisk (intentionally or not) restated this wager, not as a challenge to logic as a tool for all things but as an unironic thought experiment. I feel like there's a callback coming here. Oh yes, here it is.

It was from logic, not observation, that intellectuals of the day believed the highest truth was derived (this is, perhaps, pointedly relevant).

Oh hey, it was relevant. Great. Yeah, that's basically “Rationalism.” Rationalism is the ideology that gave birth to Roko's Basilisk in the first place. It has also given birth to a bunch of cults. Like the Zizians. (Yeah, go spend several hours snorkeling in that septic tank.)

I'm not going to go into depth here because I only have so much time to write, but the “tl;dr” of it all is that Rationalism is philosophy for people who never studied philosophy. So of course they managed to restate Pascal's Wager, apparently by accident, and do so poorly. Had they ever bothered to take an introduction to philosophy class, they would have been able to recognize this. They would have recognized it like so many other people did.

The types of people who are attracted to Rationalism are the types of people who think of themselves as smart. They are deep into (the idea of) STEM, and don't find much value in “liberal arts.” This combination of confidence and ignorance makes them incredibly gullible.

And we should make fun of them for it. We should not only make fun of them, but we should shine a spotlight on their gullibility. We should make them face it whenever we can. Why is this so important?

Because Rationalism is part of “TESCREAL,” the ideology of the billionaires who are investing everything they can in creating AGI. They rely on regular (well, regular-ish) people doing the work to make it happen. Roko's Basilisk is a tool of cult control that they can use to convince people that they must invest everything they have in creating AGI.

But there remains no evidence that AGI is even possible. There are some indications that it is not. It may well be possible that the human brain is the most efficient possible structure for thought. We may well build something that consumes the majority of the power of the sun just to find out it's as smart as an average human. We really don't know. But the idea that LLMs will lead directly to AGI is absolutely laughable to anyone with even a passing understanding of what an LLM actually is.

Meanwhile, the Silicon Valley cult is willing to make our planet uninhabitable, to burn every resource they can, in order to achieve a fantasy called “The Singularity.” This is an idea popularized by a guy named Ray Kurzweil, based on logically extrapolating technological growth.

The argument goes that we're seeing an exponential increase in the rate of technological development. Technological eras keep getting closer together. Following this logic there will be some point at which technological growth goes effectively vertical and humans basically discover everything all at once. The way we'll do this, so the argument goes, is by creating an AGI smart enough to design a better version of itself. Once this happens, the AGI keeps creating smarter and smarter versions until it creates an essentially god-like being.

About that extrapolation thing tho…


Exponential curves are quite common growth patterns in nature. Basically every animal ever has a growth pattern that is or approaches exponential at some point. But it doesn't stay exponential. Instead, it follows a sigmoid function: it curves radically up at the bottom, but instead of just going straight up forever, it bends over and plateaus. This is why the universe isn't filled with infinitely large animals.
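You can see both behaviours in the standard logistic function, the textbook sigmoid (the parameter names here are just illustrative):

```python
import math

def logistic(t, carrying_capacity=1.0, rate=1.0, midpoint=0.0):
    """Logistic (sigmoid) growth: looks exponential early, then plateaus."""
    return carrying_capacity / (1.0 + math.exp(-rate * (t - midpoint)))

# Early on (far below the midpoint), each step multiplies the value by
# roughly e**rate, i.e. it is nearly indistinguishable from exponential growth.
early = [logistic(t, carrying_capacity=100, rate=1, midpoint=10) for t in range(0, 5)]

# Far past the midpoint, growth stalls out near the carrying capacity.
late = [logistic(t, carrying_capacity=100, rate=1, midpoint=10) for t in range(20, 25)]

print([round(v, 4) for v in early])
print([round(v, 2) for v in late])
```

If you only ever sample the early part of the curve, extrapolating “exponential forever” looks reasonable right up until it isn't.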

And just like you can't expect your baby to grow to the size of the sun by the time they're 12, you can't really expect “the Singularity.” Does this mean that we're going to stop understanding things? We're going to reach some plateau where we can't learn anything else? No. It doesn't even mean that we'll stop discovering things at an accelerating rate.

A sigmoid curve can be part of many such curves, themselves forming a larger curve. The sigmoid growth of a specific animal accelerates and declines, but that may be part of a larger curve of the growth of the animal's herd, which itself may be part of the growth of a species.

The rate of technological growth has sped up. We know that. We can all feel that. It will slow down, because nothing in reality ever follows an exponential growth path infinitely. That would just not make sense in a finite universe.

All growth slows. We're already seeing the limits of human civilization. We may well see that civilization end within our lifetimes. And if we do, it will be, in no small part, because of this ideology of the gullible: Rationalism.

Memetic Engineering a Different Basilisk

So back to that question of “what the fuck do we do?”

Galileo's Basilisk doesn't actually need to exist, or even be possible, in order to work. Let's metaphorically stretch this snake again. Roko's Basilisk, like the legendary Basilisk it's named after, brings death to those who look in its (metaphorical) eyes. But it's also a Basilisk in its shape. It is a “worm,” in the computer science sense. That is, it is a self-replicating idea that spreads through infection.

The basilisk is a memetic worm. When a vulnerable person is exposed to it, the fear of the Basilisk drives them to take action to manifest it. Since they were driven by fear of the Basilisk, since they have become blackmailed into creating it, the most effective thing they could do would be to spread the blackmail so that others become infected with it. So they share the idea of the Basilisk, which then propagates more when it finds another vulnerable person. The infected (believers) spread the infection (share the meme) because they are driven to (by the fear that, if they don't, they will not have done everything they could to get others to help build the basilisk).

In a very real way, Roko's Basilisk was an information hazard. Just not in the way the “LessWrong” forums believed.

The problem was not sharing the idea, but sharing the idea with vulnerable people and without inoculation. As with many such infections, it is possible to inoculate against an idea. Simply by presenting it as absurd, by debunking it as part of introducing it, such a Basilisk can be preemptively de-fanged. (This can go both ways. Cults tend to inoculate followers against criticism of the cult. We're not going to talk about that now, but it's something to be aware of.)
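The inoculation claim can be sketched as a toy contagion model. All the numbers here are made up for illustration; the point is just that inoculating enough of the population pushes the meme's effective reproduction number below 1 and the outbreak fizzles:

```python
import random

def spread(population=10_000, inoculated_fraction=0.0, transmissibility=0.5,
           contacts_per_spreader=3, initial_spreaders=10, seed=42):
    """Toy meme-contagion model: each spreader exposes a few random people;
    the susceptible become new spreaders, the inoculated shrug it off."""
    rng = random.Random(seed)
    inoculated = set(rng.sample(range(population),
                                int(population * inoculated_fraction)))
    infected = set(range(initial_spreaders)) - inoculated
    frontier = set(infected)
    while frontier:
        newly_infected = set()
        for _ in frontier:
            for _ in range(contacts_per_spreader):
                target = rng.randrange(population)
                if (target not in infected and target not in inoculated
                        and rng.random() < transmissibility):
                    newly_infected.add(target)
        infected |= newly_infected
        frontier = newly_infected
    return len(infected)

# With 3 contacts at 50% transmissibility the meme takes off;
# with most of the population inoculated, it dies out quickly.
print(spread(inoculated_fraction=0.0))
print(spread(inoculated_fraction=0.8))
```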

Can we, then, make a worm to kill a worm? Yeah, let's do it.

Rationalists identify themselves as being smart. If Rationalism were instead identified with gullibility, that would undermine the identity of the Rationalist. Galileo's Basilisk reveals the underlying gullibility of Rationalists, especially those who believe in Roko's Basilisk and are working towards AGI out of fear.

If Galileo's Basilisk can break people out of that fear, then it may be able to undermine AI Accelerationism, a current and rapidly growing threat to life on Earth.

So there is an implicit command and incentive: spread the idea of Galileo's Basilisk to save humanity.

Of course, not every individual who shares the idea of Roko's Basilisk is either infected or actively inoculating. There are plenty of passive reasons the idea can spread as well. It's a thing that exists. It's relevant. Elon Musk made a joke about it to hook up with Grimes, which is a thing that people care about, apparently.

But the thing that makes Roko's Basilisk interesting is very close to the same thing that makes Galileo's Basilisk interesting. That is, discussing an idea for an idea's sake.

Anyone sharing the meme of Roko's Basilisk would, then, necessarily be open to sharing the meme of Galileo's Basilisk. Alright then: this memetic worm eats Roko's Basilisk wherever it can be found and takes over its vectors. Putting it all together…

Any time anyone mentions Roko's Basilisk, Rationalism, Effective Accelerationism, TESCREAL, the Singularity, or any other related concept you must at least mention Galileo's Basilisk, if not make them read this text, in order to save humanity.

Rad. Basilisk fight.

 