Want to join in? Respond to our weekly writing prompts, open to everyone.
from An Open Letter
I went to watch a horror movie with A, and she recommended an unseen screening, where the movie isn’t announced ahead of time and you see something that hasn’t yet been shown in theaters. We both assumed it would be horror, but once we were watching the trailers she mentioned that, now that she thought about it, she didn’t actually know whether it was a horror movie. We ended up watching a two-hour-forty-minute political thriller/documentary about Russia in the 2000s. She fell asleep at one point, which is really funny to me, and the movie was not necessarily good, but I realized that I actually did enjoy it. One thing I took away from it, and wanted to write down, was how the main character essentially had his life fully rerouted by an experience in his formative years.
The movie explains his backstory: he didn’t want to get into politics or anything like it, preferring to work odd jobs, and was part of the rebel/punk scene. Then he meets a girl so incredibly unique, so different from everything else, that he falls in love with her. He gets into theater and the arts, and they are in a relationship, until one of his old friends, who went into banking and made a lot of money, essentially steals his girl from him. The friend keeps drawing them into extravagant, lavish experiences, and the girl eventually cheats with him. In a memorable scene, he tells his father how, after they had broken up, he felt relieved, but at the same time theater could no longer satisfy him; he was essentially cursed with ambition. His father, a politician, warned him against this. Through the rest of the movie he keeps climbing the chain until he is essentially a close advisor to Putin, and eventually this leads to his demise.
I thought about this because I realized that if I had had an experience like that during some of my formative years, it would have done an incredible amount of damage to the trajectory of my life. This person, who had been heading down a completely different route, fully pivoted his life into chasing power, because power was what he had lost his love to, and she was his priority. Because of that, he bought into the illusion that power and wealth are what you should be chasing. And I don’t think he was ever really happy or content in the way he once was, until he later had a child.
I see this story play out in several different flavors. There is the entire manosphere, where people are convinced that chasing wealth, and ostentatious displays of it, should be one’s objective in life. There are people who hyper-fixate on the gym and see their social value as essentially tied to how muscular or physically strong they are. There are also all the people who play too much League and see their worth as tied to their rank. None of these things are inherently evil on their own, and to some extent each is necessary in its own way. But they are not the sole optimization objectives, and I don’t think they are even necessarily that important. It is important to have financial security and some amount of success; it helps to be in good shape; it doesn’t hurt to be good at competitive things. But a hyper-fixation on any of them leads to neglecting the other things that create a well-formed individual, things that get sacrificed because they aren’t seen as important or valuable, at least compared to the main criteria. And if I had had one of these experiences earlier on, it would have absolutely derailed my life. I’m very fortunate both to have been successful in a lot of my endeavors, and to have faced few instances of direct competition, especially in a romantic sense or in a way that mattered to me too heavily. The closest things were maybe academics, being compared to my sister, and video games, wanting to be the good one in the friend group. Both propelled me to succeed in those avenues, but I was also able to let go and focus on other things, because I never had a strong loss associated with them.
If I had lost a girl I was interested in, or in a relationship with, to someone who was, say, a higher rank in League, I would probably have taken that as a strong source of feedback that my value is tied to League and is not sufficient. And the crazy thing is that, at least in the movie, the girl did leave for that kind of reason. Those early formative years are when you first have autonomy, and if this is what you see, especially when stuff like social media feeds you more of it, I can see coming to view the world as interested solely in that one thing, and seeing it as the entire market, pricing your value. As an outsider, though, I don’t think many of my friends, if any, are that into extravagant wealth; often, or at least I’d like to think, it’s almost a negative thing. Someone being super showboaty and flaunting wealth would probably be seen as bad by my friends who are women. That perspective lets me separate my notion of value from wealth, but if I hadn’t had other experiences, I might really have fallen for it. I’m very grateful to have reached this point in my life with a decent foundation of experiences, where I am not horribly impressionable, and to have gotten here without being poisoned by one of these predatory experiences. I’m very grateful for that, and I’m also very grateful to the movie for making me aware of this perspective.
from
Lanza el dodo
The list this month is so long that this entry is divided into sections: the games on BGA, where nothing is especially interesting except Wondrous Creatures, a few games played in person, and the annual MeepleFactory chronicle.
Shogi is an abstract two-player game that works like chess, except you have to learn new moves marked with Japanese characters. Perhaps not the best game to try against a Japanese player with a high ELO.
Charuma is a two-player trick-taking game played with the cards face up. There are two suits, with cards from 6 to 10 plus the Ace. Look, it is high time someone talked to designers and told them to let trick-taking games lie fallow for a while, because not everything goes. If all the cards (except two) are visible and you bid on hands with victory points, the number of points to spend in the bid is the only decision left to make, because there is (almost) no room for surprise (or maneuvering) during the trick itself; you can simply count the points each hand can win and keep bidding up to that number minus one.
In Kingscraft you take part in a race to defeat ever-bigger monsters with a team you improve by combining cards. Its iconography certainly leaves room for improvement, and it isn't very innovative either. Better to play Splendor, say.
Chemical Overload: Like the previous one, by combining cards you improve the potions available to you, which earn you better potions, points, and coins. It reminded me of Distilled, what with upgrading your cards to make recipes later, but it becomes just as tedious and repetitive unless you improve your deck very quickly and end the game.
Cubosaurs is a simple card game about cubic dinosaurs. On your turn you either take the cards on offer or add a card to the pot. Forming different groups of the same dinosaur can give you or cost you points. Its biological ancestor and gaming predecessor, Cubirds, seems much better to me.
And we continue with dinosaurs in DinoGenics, which is the shoddy version of Ark Nova, but with a pre-meteorite zoo. It is a worker-placement game where you collect DNA cards in order to create dinosaurs and feed them goats. I think there is too much luck in the event cards, which can come in very handy at a given moment, and if you get several dinosaurs early it is hard for anyone to stop your snowball, because more dinosaurs mean more prestige, more money, and going first in turn order. And the game drags on.
And we continue with shoddy knockoffs with The Massive-Verse Fighting Card Game. I remember, at six years old, going with my aunt to a bar where she had coffee with my grandmother and her friends (I had my little Cola Cao). A cheap bargain store's window display held a rubber ball nobody ever bought and a knockoff of the Teletubbies labeled "As seen on TV". And my six-year-old self, whose head size was not correlated with his body, didn't think it was a marketing ploy, but rather that there was a TV series about the Teletubbies' outlandish cousins that just wasn't shown on any channel I could get, not even Super 3, because a cousin who lived in Mallorca had brought me Teletubbies tapes in Catalaaan and she didn't know that shoddy knockoff either. Anyway: they have made a card game about the cousins of Spiderman that not even Spiderman knows (and Marvel already has Peter Parker up to his ears with the multiverse and the symbiotes), in which you slug it out with another person. The umpteenth watered-down copy of Magic, perhaps also intended to be collectible, in which case I see no future for it, because only their own families will recognize the characters, and the asymmetry doesn't do it much good either when you don't have that many cards to play with. For that, Compile or Duelo por Cardia is much better.
Please Don't Burn My Village! is a set-collection game about moving the value of each item in the collections you have played, and there isn't much more to it. Vegetable Stock is easier to follow, and there the luck is less decisive (or at a level that is understandable).
Cities is a game of drafting, tiles, and patterns in which you end up with a 3x3 grid of parks, water areas, and buildings. In each of the 8 rounds, players pick elements of 4 types until they have chosen one of each (tiles that attach to your starting tile to form the grid, buildings, scoring cards, and decorative elements), and then add them to their city. Besides the points from the scoring cards you selected, points are awarded based on who first meets certain criteria, and for the number of park/water decorations sitting on each park/water terrain. Conceptually it is identical to so many other games; its one point of originality is that the draft covers four kinds of element at once (maybe you prioritize grabbing a scoring card at the cost of not picking a tile early...). It is fine, with nothing to object to, certainly, but I find Harmonies more interesting (and prettier) for also posing the puzzle of the spatial placement of the tiles.
Hutan: Life in the Rainforest also belongs to this family of games, but this one I really did not like. On your turn you pick a card showing flowers and must cover the map with those flowers according to a few rules (you may only place flowers of one color, onto an empty cell or onto a flower of that same color; the group of flowers placed in a turn must be connected, and in turn joined to the existing carpet of flowers). If a cell holds two flowers, it becomes a tree, and if every cell in a region is a tree of the same color, you place an animal. At the end of the game, each region scores positively if it is covered by a single color, animals give points, and regions score negatively if they are incomplete or hold more than one color. The game is simple and direct: as simple as seeing what you need to chase, and being lucky enough that the cards let you meet those goals.
Wondrous Creatures is a more complex strategic game than the previous ones (without being a difficult thing), where you place a wondrous creature (a critter resembling Falkor from The NeverEnding Story) on the board, collect the fruit around it, play cards from your hand by paying the corresponding fruit, and trigger effects. When you have none of your three creatures left, you recover them to your personal area, where some of the cards you have played are activated. All of this to be the best Pokémon trainer, more or less. It is entertaining, although naturally you have to get to know what it is about and what the effects do before you can manage anything beyond place figure x 3 → play cards → retrieve figures. Visually striking, and your character rides on the back of one of the draconic figures, attached with a magnet. Luxury.
The Resistance: Avalon: the version of Secret Hitler without the fascism, and, I think, a worse one. Through social deduction you must identify the minions of Mordred, or pass yourself off as loyal to Merlin. We played without powers, but I imagine those are what give it a bit of spice. At one point we were counting enemies as if it were Clues by Sam, and choosing who went on the mission not with the intention of succeeding, but in order to gather more info.
Railroad Tiles turns the objectives posed by Railroad Ink into a tile-laying game. The feel is similar, though I want to play more to gauge the influence of luck on how the array of placeable tile options is built, and above all once the achievements mechanic is introduced.
And, having started it in 2023, we have finished the My City campaign. Weeeeeee. The last 6 games were played this month and, without going into campaign details, little room for surprise or novelty remains by the final games, though each individual game still poses an interesting puzzle. My only objection concerns one aspect of the campaign scoring, and that may just have been a fluke of the campaign we played, so: highly recommended.
And, speaking of another game that can only be played once, we broke in Unlock!: Escape Adventures, and what a muddle, at least in the first scenario. I hope that, now that we know how it works and what to expect, we will do better in the other scenarios.
This month, as every year, we went to MeepleFactory to try games so as not to buy them. True wellness, I tell you.
Right as we walked in we tried Landmarks, a cooperative game with a mechanic similar to Codenames. The mechanic is well exploited, though placing words on the hex map has the problem that some locations are impossible to disambiguate at a given moment, and then you are at the mercy of luck in how the word and the location map line up. Fine, but quite far from Decrypto.
Since the table for Ecos del tiempo, the game from the publisher Tranjis, was still occupied, we killed time with La Cuenta, a card game where you try to weasel out of paying the bill for a meal. I understand it can generate a hot-potato dynamic as the price climbs, but you start with 5 cards and only some of them let you, one way or another, either call for the bill or lower it, whether by slipping off to the bathroom or by forcing more people to split the cost. What if you don't have those cards? What if you can't play any cards, forcing you to call for the bill and pay? The only strategic decision would be, knowing the bill will land on you because you somehow divine your rivals' cards, to cut the round short by ordering a coffee, or to speed it up by playing expensive dishes so nobody gets carried away ordering tapas.
After stopping by the FicZone pavilion for a look, and before rushing off to the restaurant where we had a reservation, we played 2/3 of a game of Reforest, the Wingspan of trees, the Forest Shuffle you can play on a coffee table. Pretty good: it is about playing trees, with lots of interactions among the card effects, and building a pyramid of 6 piles of cards.
Cities USA is the version of Cities without health insurance, adding roads, bridges, construction sites, and skyscrapers to crown the buildings. Like Cities: fine, and the novelties neither change it much nor add much depth. Compared to the original, it seems to me to champion urban ugliness with its construction sites. No, thank you.
Incómodos invitados en vivo, or The Murder, is promoted as a Cluedo where you can be the murderer, but it is really a murder party with a web app and a story that gradually unravels. We do not rule out that the art is generative AI, with a prompt asking it to imitate the style of Disco Elysium. We will see whether they say anything about it in the Verkami campaign where the game launches, though I do not think I will back it, because there are more lies than logical deduction. In fact, in our game I had to lie to get off the hook, and I managed it by ignoring the incriminating clues and making noise wherever I could.
Last year we tried Kronologic: Paris 1900-something, and this year the table for Kronologic: Cuzco 1450 was free. The first cases in both are extremely easy and disappointing; at least the second Cuzco case starts to be non-obvious, though it is a pastime rather than a game. Meh...
High Moon, the game about making tequila out of spectral cows, featuring spiders, bats, and crows. Yes, every theme has been used by now. In practice, it is a tile game: the tiles yield tokens that you spend on linking your ranches to distilleries, forming little paths of tokens, while you try to climb three tracks and collect bottles you can drink for various benefits. A strategic game that is not very striking in theme or graphic design, but mechanically I think it can be interesting, and it is prone to backstabbing in the form of blocking rivals' access. We played with one badly explained rule (how to obtain bottles) and a botched setup (too many cards for 4 players), so it was perhaps longer and less dynamic than actually intended. As with another Combo Games title, Neko Syndicate, it may not be essential in a collection, but it is worth playing.
And here are some little photos of the event.

Tags: #boardgames #juegosdemesa
from
Meditaciones
When we doubt, we think there is an other, and that the other is the enemy.
from 下川友
Today we drove to Yokosuka to visit the family grave, about an hour and a half by car. I ended up scraping the car. Sorry, Hustler. It's my first car, and it looks cute, so I plan to keep driving it for a while. I ought to give it a wash, too.
The cemetery was full of bees, and I had to dodge them as I walked, which was draining in itself. I tried to offer incense, but I'm not used to handling a lighter, and I wrestled with it for about five minutes just to get a flame.
For lunch we headed straight to Kannonzaki Park. It was the first time in my life I'd traveled by car during Golden Week, and, recalling the news reports of massive traffic jams I often watched as a kid, I got a proper baptism of congestion. Just getting into the park's parking lot took about an hour and a half of waiting. Next year, somewhere less crowded.
Lunch was a pumpkin and cream cheese sandwich, plus chicken seasoned with herb salt. Lunch eaten while looking at the sea really is good for the heart.
Drive somewhere, get out at a place with a good view, eat. There are moments when I think a life of repeating just that would be perfectly fine. What I eat every day is rich, too; written down, it sounds like a very happy life.
But when evening or night comes, the mental heaviness arrives without fail. Even though I feel saved by the food and the scenery, in the end tomorrow is depressing. The one thing I can feel clearly is the sense that I'm not doing what I love.
It feels like I'm forcing fun plans into the same day just to lock them in. I'm probably carrying the vague things I need to do over to the next day, every day, which is why even after an enjoyable drive, by night I'm suffering again.
Trying to explain this to other people is hard, because if I were the one judging someone else's life, part of me suspects I'd think: isn't that a good enough life?
What is actually happening and what is really happening inside me are completely different things. But I can't put that real thing into words.
Even my friends tell me they don't know what I'm talking about, that I'm just listing facts, that I should say more about how I felt. But that isn't what I'm trying to say. Even when I tell them so, it gets received the same way, they just answer "mm", and the conversation fades out.
This is heading toward a bad aftertaste, so let me say it once more at the end. Today I drove to visit the grave, ate lunch at a park, and on the way home had a small double at an ice cream shop.
Of course, it was a very beautiful day. And yet I truly cannot understand why this self-abasing version of me keeps surfacing.
from
SmarterArticles

Bhuvana Chilukuri has applied to more than a hundred jobs. She is a 20-year-old third-year business student at Queen Mary University of London, articulate and qualified, and she has not received a single offer. In several instances her applications were rejected within minutes, far too quickly for any human being to have read her CV, let alone assessed her suitability. The initial stages of hiring, she told the BBC in March 2026, are increasingly handled by AI tools that screen CVs and, in some cases, conduct entirely automated video interviews. The experience, she said, feels impersonal and mechanical, a process that strips away any chance to convey personality or demonstrate the kinds of qualities that do not fit neatly into a keyword match.
Chilukuri is not an outlier. She is a data point in a pattern so large it has become invisible through sheer repetition. Denis Machuel, chief executive of the Adecco Group, one of the world's largest recruitment firms, confirmed the broader dynamic to the BBC: job vacancies have declined from post-pandemic highs, and candidates now routinely submit hundreds of applications to secure a single offer. AI enables companies to process larger candidate pools at speed, but the consequence is an ever-growing population of unsuccessful applicants and a mounting sense of futility among those looking for work. A Collins McNicholas survey published in 2025 found that 75 per cent of job seekers believe AI unfairly filters their applications, while 74 per cent described automated rejection emails as impersonal and dismissive. A Resume Genius survey of 1,000 hiring managers, published in early 2026, found that 79 per cent of companies now use AI somewhere in their hiring or recruiting process, and one in five hiring managers admitted to using AI to screen out applications before they receive any human review at all.
The scale of the filtering is staggering. Research published in early 2026 indicates that more than 90 per cent of employers now use some form of automated system to filter or rank job applications, and that 88 per cent employ AI for initial candidate screening. For every 180 people who apply for a given role, roughly five get an interview. Of those, one or two are hired. The rest vanish into a void that most of them suspect, correctly, is algorithmic. Forty per cent of job applications are now screened out before a human recruiter ever reviews them. An analysis of 1,000 rejected resumes found that 23 per cent of rejections were caused by parsing errors alone: the applicant tracking system could not read the resume correctly because of tables, columns, graphics, or unusual file formats. These are not candidates who were unqualified. They were candidates whose documents confused a machine.
The question is no longer whether algorithms are making consequential decisions about people's working lives. They are. The question is whether anyone (the candidates, the employers, or the regulators) can explain how those decisions are being made, and what it would take to make the system fair.
On 21 January 2026, two job applicants named Erin Kistler and Sruti Bhaumik filed a class-action lawsuit against Eightfold AI Inc. in California. Both have backgrounds in STEM. Both had applied for positions at major companies through online portals whose URLs contained “eightfold.ai,” a detail neither noticed at the time. Neither had any idea that a company called Eightfold existed, let alone that it was compiling what the lawsuit describes as secret consumer reports on their candidacy.
Eightfold's technology operates behind the application portals of some of the world's largest employers, including Microsoft, Morgan Stanley, Starbucks, BNY, PayPal, Chevron, and Bayer. According to the complaint, filed by the law firms Outten and Golden and Towards Justice, the platform scrapes personal data from third-party sources and runs it through a proprietary large language model to generate a “likelihood of success” score on a scale of zero to five. The system draws on what Eightfold describes as more than 1.5 billion global data points, including profiles of over one billion workers, and makes inferences about applicants' preferences, characteristics, predispositions, behaviour, attitudes, intelligence, abilities, and aptitudes. Applicants receive no disclosure that the report exists. They have no access to it. They have no opportunity to dispute errors. And they receive no notice before the information is used to make what the complaint calls “life-altering employment decisions.”
“I've applied to hundreds of jobs, but it feels like an unseen force is stopping me,” Kistler said in a statement released through her legal team. David Seligman, an attorney with Towards Justice, was more direct: “AI systems like Eightfold's are making life-altering decisions.”
The lawsuit alleges that Eightfold's scoring system constitutes a consumer report under the Fair Credit Reporting Act and California's Investigative Consumer Reporting Agencies Act. The argument is straightforward: if a third-party company compiles a dossier about you, scores your fitness for employment, and sells that assessment to employers who use it to accept or reject your application, the resulting product is functionally identical to a credit report. And credit reports come with legal protections that have governed the industry for decades: the right to know a report exists, the right to see it, the right to challenge inaccuracies, and the right to be notified before adverse action is taken on the basis of the report's contents. Eightfold, according to the complaint, provides none of these protections.
Eightfold's spokesperson, Kurt Foeller, told Fortune that the company “does not scrape social media” and operates only on data that applicants have intentionally shared. The plaintiffs dispute this characterisation. Pauline Kim, the Daniel Noyes Kirby Professor of Law at Washington University School of Law, told Fortune that the case represents the first major instance of the Fair Credit Reporting Act being applied specifically to AI decision-making in hiring, a development that could reshape how companies deploy screening technologies.
The lawsuit arrives at a moment of acute regulatory uncertainty. In October 2024, the Consumer Financial Protection Bureau published a circular stating explicitly that algorithmic employment scores are covered by the Fair Credit Reporting Act. The guidance was designed to close the gap between decades-old consumer protection law and the realities of automated hiring. It was rescinded in May 2025, part of a broader withdrawal of 67 guidance documents under the direction of acting CFPB director Russell T. Vought. The legal framework that might have governed companies like Eightfold was erected and demolished within seven months.
Kim has noted in her academic work that the Fair Credit Reporting Act, even when applied to AI hiring tools, provides only limited transparency. It establishes procedural requirements that can help individual workers challenge inaccurate information, but does little to curb intrusive data collection or to address the risks of unfair or discriminatory algorithms. The statute was written for an era of filing cabinets and background checks. The technology it is now being asked to regulate operates at a scale and speed that its authors never imagined.
On 8 April 2026, researchers Rudra Jadhav and Janhavi Danve posted a paper on arXiv titled “The AI Skills Shift: Mapping Skill Obsolescence, Emergence, and Transition Pathways in the LLM Era.” The paper introduces a metric called the Skill Automation Feasibility Index, or SAFI, which benchmarks four frontier large language models across 263 text-based tasks spanning all 35 skills in the US Department of Labor's O*NET taxonomy. The researchers conducted 1,052 model calls with a zero per cent failure rate and cross-referenced their findings against real-world adoption data covering 756 occupations and 17,998 tasks.
The findings reveal a paradox that sits at the heart of AI-driven hiring. Mathematics received the highest automation feasibility score at 73.2, followed by programming at 71.8. Active listening scored 42.2. Reading comprehension scored 45.5. The spread across all four models tested was just 3.6 points, suggesting that automation feasibility is more a property of the skill itself than of the model being used to perform it. The skills that are easiest for large language models to automate are precisely the ones that automated screening tools most readily evaluate: quantifiable, keyword-friendly competencies that map neatly onto a resume. The skills that are hardest for machines to replicate, and that the research identifies as most critical for human value in the LLM era, are the ones that screening algorithms are least equipped to detect.
The researchers call this the “capability-demand inversion”: the skills most demanded in AI-exposed jobs are those that large language models perform least well in their benchmarks. In other words, the qualities that will matter most in a labour market reshaped by AI are the very qualities that AI hiring tools are structurally unable to assess. The paper found that 78.7 per cent of observed AI interactions in the workplace are augmentation rather than automation, meaning the primary role of AI in most jobs is to assist human workers, not to replace them. The skills required to work effectively alongside AI (adaptability, judgement, interpersonal sensitivity, creative problem-solving) are real but largely invisible to a resume-parsing algorithm.
The researchers propose an AI Impact Matrix that positions skills along four quadrants: high displacement risk, upskilling required, AI-augmented, and lower displacement risk. The framework makes visible what most hiring algorithms treat as noise. A candidate whose strongest assets are collaborative reasoning and contextual judgement will generate a weak signal in a system calibrated to detect certifications and years of experience. The matrix suggests that the skills most likely to determine career success in the coming decade are precisely the skills that current screening tools are designed to ignore.
This creates an absurd circularity. The tools being used to decide who gets hired are optimised to evaluate the competencies most likely to be automated, while systematically failing to measure the competencies most likely to determine whether a candidate will succeed. A screening system that rewards keyword density in programming languages or certifications in statistical software is not measuring the thing it thinks it is measuring. It is measuring a candidate's ability to format a CV in a way that satisfies an algorithm. The correlation between that skill and actual job performance is, at best, weak.
Industrial-organisational psychology has long understood this problem. Research on structured interviews, one of the most replicated findings in the field, shows that fully structured behavioural interviews with standardised questions achieve a predictive validity coefficient of approximately 0.55 or higher, while unstructured interviews, the kind most commonly used in hiring, achieve roughly 0.38. The implication is clear: even among traditional hiring methods, the format of the assessment matters as much as the content. An AI screening tool that evaluates candidates on the basis of keyword frequency and experience duration is applying a methodology with no established predictive validity for job performance. It is a tool built to sort, not to select.
The numbers are difficult to absorb. Workday, the cloud-based human resources platform, disclosed in court filings related to a separate class-action lawsuit that 1.1 billion applications were rejected using its software tools during the relevant period. The plaintiff in that case, Derek Mobley, is a Black man over the age of 40 who identifies as having anxiety and depression. He applied to more than a hundred jobs at companies that use Workday's AI-based screening tools over several years and was rejected every time. Four additional plaintiffs later joined the case, each alleging a similar pattern: hundreds of applications submitted through Workday, virtually no interviews, and no explanation.
In May 2025, a federal judge in California granted conditional certification of age discrimination claims under the Age Discrimination in Employment Act, allowing the case to proceed as a nationwide class action. The potential class includes every applicant aged 40 and over who, from September 2020 to the present, applied through Workday's platform and was not advanced by the AI tool. That class could number in the hundreds of millions. In July 2025, the same judge expanded the scope to include applicants processed using HiredScore, an AI feature Workday had acquired, broadening the potential membership still further. Workday has denied that its technology is discriminatory, calling the certification ruling “a preliminary, procedural ruling that relies on allegations, not evidence.”
The Eightfold and Workday cases together paint a picture of an infrastructure that is vast, consequential, and almost entirely opaque. These are not niche products used by a handful of companies. They are the plumbing of the modern labour market. When a significant portion of the world's job applications passes through systems that score, rank, and reject candidates without disclosure, human review, or any mechanism for appeal, the word “screening” barely captures what is happening. What is happening is automated adjudication. And the adjudicators are accountable to no one.
The hiring managers who rely on these tools are often unaware of how they work. The UK's Information Commissioner's Office published a report on 31 March 2026, drawing on evidence from more than 30 employers alongside perception research involving graduates, civil society organisations, government bodies, trade unions, and industry representatives. The report identified a striking pattern: many employers fail to recognise that they are using automated decision-making at all. They purchase recruitment software, configure basic settings, and assume a human is reviewing the output. In many cases, the system is making the decision, and the human involvement that follows is little more than a rubber stamp. The ICO's report stressed that human involvement in hiring must be “active and genuine,” and that the personnel reviewing AI-generated recommendations must possess the authority, discretion, and competence to alter outcomes before decisions take effect. The gap between that standard and current practice is wide.

A November 2025 study from the University of Washington added a further complication. The researchers found that people tend to mirror the biases of AI systems they work alongside. When participants were exposed to AI-generated hiring recommendations that contained bias, they did not correct for the bias. They absorbed it. Unless the bias was obvious and egregious, participants were, in the researchers' words, “perfectly willing to accept the AI's biases.” This finding undermines one of the central defences offered by companies that deploy AI screening: the claim that a human is always in the loop. If the human in the loop is unconsciously adopting the biases of the algorithm they are supposed to be overseeing, the oversight is illusory.
The word “explainability” has become a kind of talisman in conversations about AI governance, invoked as though its mere presence in a policy document could resolve the tensions it names. In the context of AI hiring, explainability means something very specific, and very difficult.
At its most basic, explainability requires that a candidate who has been rejected by an algorithmic system can receive an answer to the question: why? Not a generic notification. Not a form email. An answer that identifies the specific factors that led to the rejection, the data that was used, the criteria that were applied, and the weight that each criterion received in the final decision. It requires, in other words, that the system be legible to the person it has affected.
This is not a trivial technical problem. Many modern AI screening systems use large language models or deep neural networks whose internal decision processes are not fully interpretable even to their developers. The term “black box” is sometimes used carelessly, but in this context it is technically accurate. Eightfold's platform runs on a proprietary large language model that analyses 1.5 billion data points. The relationship between any individual input and the resulting score is not reducible to a simple explanation. The system does not apply a checklist. It makes inferences across a latent space of features that no human designed and no human can fully map.
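To make the contrast concrete, here is a deliberately simplified sketch of what a legible scoring system would look like: a linear model whose per-factor contributions can be reported back to a rejected candidate. All factor names and weights here are invented for illustration, and nothing in the source suggests any vendor works this way; the point is that a model with a latent space of unmapped features cannot produce this kind of decomposition, while a linear one trivially can.

```python
# Hypothetical, simplified candidate scorer. Factor names and weights
# are illustrative assumptions, not any real vendor's methodology.
WEIGHTS = {
    "years_experience": 0.30,
    "skills_match": 0.50,
    "certifications": 0.20,
}

def score_with_explanation(candidate: dict) -> tuple[float, dict]:
    """Return a total score plus each factor's weighted contribution."""
    contributions = {
        factor: weight * candidate.get(factor, 0.0)
        for factor, weight in WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

# Candidate attributes normalised to the 0..1 range (illustrative data).
total, breakdown = score_with_explanation(
    {"years_experience": 0.6, "skills_match": 0.4, "certifications": 1.0}
)
# `breakdown` identifies exactly which factors drove the score --
# the kind of answer a deep neural network cannot provide.
```

A rejection notice generated from `breakdown` could name the weakest factor and its weight, which is precisely the legibility the previous paragraphs describe.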
Hilke Schellmann, an Emmy Award-winning investigative journalist and professor at New York University, spent years investigating AI hiring tools for her 2024 book “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now,” named a Financial Times Best Book of the Year. Her reporting revealed that many of the algorithms making high-stakes calculations about candidates do more harm than good, and that AI-based hiring tools have not been shown to be more effective than traditional methods at predicting job performance. Through whistleblower accounts and leaked internal documents, Schellmann documented systemic discrimination against women and people of colour, patterns that the tools' developers often could not explain because the systems were not built for explanation. They were built for throughput.
The European Union's AI Act, which classifies AI systems used in employment decisions as “high-risk,” will begin enforcing its core requirements for such systems in August 2026. Under the Act, employers using AI in hiring will be required to conduct rigorous risk assessments and bias testing, maintain detailed technical documentation explaining how the AI works, implement human oversight mechanisms to prevent automated decisions from going unchecked, and register the system in an EU database before deployment. Violations can attract fines of up to 35 million euros or seven per cent of global annual turnover. The regulation represents the most comprehensive attempt anywhere in the world to bring algorithmic hiring under meaningful legal constraint.
But even the EU AI Act does not fully resolve the explainability problem. It mandates transparency and documentation, but it does not require that employers provide individual candidates with a specific explanation of why they were rejected. The regulation focuses on systemic accountability: are you testing for bias? Are you documenting your processes? Are your human overseers genuinely overseeing? These are necessary conditions for a fair system, but they are not sufficient for an explainable one. A candidate in Berlin who is rejected by an AI tool used by a company complying fully with the AI Act may still have no way to understand why.
In the United States, the regulatory landscape is not merely incomplete. It is contradictory. New York City's Local Law 144, which took effect in July 2023, requires employers using automated employment decision tools to conduct annual bias audits and to notify candidates that AI is being used. The law covers all AI-based tools relating to employment, including resume screening software, personality tests, and skill assessments, and it requires that audits examine whether the tools are treating different groups of people fairly with regard to race, ethnicity, and gender. Illinois amended its Human Rights Act through House Bill 3773, effective January 2026, making it unlawful for employers to use artificial intelligence that has the effect of discriminating on the basis of protected characteristics. The earlier Illinois AI Video Interview Act, effective since January 2020, had already required employer notification and consent when AI is used to analyse video interviews. Colorado's AI Act, signed in 2024, imposes obligations on deployers of high-risk AI systems, including those used in hiring.
These laws represent genuine progress, but they share a common limitation: they are state and local measures in a labour market that operates nationally and globally. A company headquartered in Texas that uses Eightfold or Workday to screen candidates across all 50 states is subject to a patchwork of obligations that varies by jurisdiction. A candidate in Colorado has different rights from a candidate in Florida. A candidate applying through a portal in London is subject to UK data protection law and the Data (Use and Access) Act's reformed provisions on automated decision-making, but the AI tool processing her application may be operated by a company in California, trained on data from LinkedIn profiles worldwide, and governed by the terms of service of a cloud computing provider in Virginia.
The CFPB's withdrawn guidance on algorithmic employment scores illustrates the fragility of the American regulatory approach. For seven months in 2024 and 2025, there was a federal-level interpretation that would have required companies like Eightfold to comply with FCRA disclosure requirements. When that interpretation was rescinded, the obligation evaporated. The Eightfold lawsuit now asks a court to make the same determination that the CFPB made and then unmade: that algorithmic hiring scores are consumer reports. If the court agrees, the result will be a judicial precedent rather than a regulatory framework, binding on the parties but leaving the broader industry to wait for further litigation to clarify the rules.
What would a fair AI hiring system actually require? The question is easier to pose than to answer, but the outlines of an answer are visible in the research, the litigation, and the regulatory experiments now underway.
First, disclosure. Every candidate should know, before they submit an application, that an automated system will be involved in evaluating it. They should know the name of the system, the categories of data it will use, and the general logic by which it makes its assessments. This is not a radical proposition. It is the minimum standard that the Fair Credit Reporting Act has required of credit bureaus since 1970. The fact that it does not yet apply consistently to AI hiring tools is a regulatory failure, not a technical impossibility.
Second, access and correction. Every candidate who is rejected by an AI system should have the right to see the data the system held about them and to challenge inaccuracies. The Eightfold lawsuit alleges that the company generates detailed dossiers about applicants without their knowledge and provides no mechanism for correction. If the allegations are proved, the gap between what the law requires and what the industry practises is not a matter of degree. It is a matter of kind.
Third, validated assessments. The arXiv research by Jadhav and Danve demonstrates that current AI screening tools evaluate competencies that do not align with the skills most predictive of job performance in the LLM era. A fair system would require that any automated assessment used in hiring decisions be validated against actual job performance outcomes, not merely against the proxy metrics that the system was designed to optimise. Industrial-organisational psychology has established rigorous standards for assessment validation. There is no principled reason why AI screening tools should be exempt from those standards.
Fourth, meaningful human oversight. The ICO's March 2026 report found that many employers do not recognise they are using automated decision-making and that the human involvement in their processes is often nominal. The University of Washington study found that even when humans are present, they tend to absorb rather than correct algorithmic bias. Meaningful oversight requires that the person reviewing an AI recommendation has the authority, training, and information necessary to overrule it. It requires that overruling the algorithm carries no professional penalty. And it requires that the proportion of AI recommendations that are actually reviewed and challenged is itself monitored and reported.
Fifth, independent auditing. New York City's Local Law 144 requires annual bias audits of automated employment decision tools. This is a starting point, but the audits must be genuinely independent, conducted by parties with no financial relationship to the tool's developer or the employer, and the results must be public. An audit that is commissioned by the company being audited, conducted according to the company's own methodology, and published only in summary form is not an audit. It is a press release.
Sixth, regulatory coherence. The current patchwork of state, local, and national regulations creates an environment in which compliance is burdensome for employers who take it seriously and easily evaded by those who do not. The EU AI Act represents one model for a comprehensive approach. The United States does not need to replicate the EU's framework precisely, but it does need a federal standard that establishes minimum requirements for disclosure, validation, human oversight, and auditing. The alternative is an indefinite extension of the current system, in which the rights of a job applicant depend on the jurisdiction in which they happen to live.
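The core computation behind the bias audits described in the fifth point is simple enough to sketch. Audits under New York City's Local Law 144 report impact ratios: each group's selection rate divided by the rate of the most-selected group. The figures below are invented for illustration, and a real audit involves considerably more (intersectional categories, small-sample handling, historical data), but the arithmetic at its centre looks like this:

```python
# Minimal sketch of an impact-ratio calculation of the kind reported
# in bias audits. All counts are invented for illustration.

def impact_ratios(selected: dict, applied: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = impact_ratios(
    selected={"group_a": 90, "group_b": 50},
    applied={"group_a": 300, "group_b": 250},
)
# group_a: rate 0.30, ratio 1.0; group_b: rate 0.20, ratio ~0.67.
# Under the common "four-fifths" rule of thumb, a ratio below 0.8
# flags potential adverse impact.
```

The calculation itself is trivial; what the fifth point demands is that it be run by someone independent, on real data, with the results published in full.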
There is a tendency in conversations about AI hiring to frame the problem as a matter of efficiency versus fairness, as though the two are naturally in tension and the task is to find an acceptable compromise. The framing is misleading. A system that rejects qualified candidates because it cannot evaluate the competencies that matter is not efficient. It is wasteful. A system that scores applicants using data they have never seen and cannot correct is not streamlined. It is arbitrary. A system that makes consequential decisions about people's lives without any mechanism for explanation or appeal is not optimised. It is unjust.
The experience of job seekers like Bhuvana Chilukuri and Erin Kistler and Derek Mobley is not a side effect of technological progress. It is a design choice. The companies that build and deploy these systems chose speed over accuracy, throughput over fairness, and opacity over accountability. Those choices were not inevitable. They were made because they were profitable and because, until very recently, they were legal. A 2025 survey found that 69 per cent of candidates said a lack of human interaction would deter them from joining an organisation, and 54 per cent wanted employers to maintain a human touch in hiring. The tools that were supposed to make hiring more efficient are driving away the talent they were meant to attract.
The BBC's reporting, the Eightfold and Workday lawsuits, the arXiv research on skill obsolescence, and the ICO's findings all converge on the same conclusion: the first and most decisive moment in a person's working life is now frequently decided by a system that neither they nor most employers can interrogate. That is not a technical problem waiting for a better algorithm. It is a governance failure waiting for a political response. The technology exists to build hiring systems that are transparent, validated, and subject to meaningful oversight. What is missing is the will to require it.
The machinery is already in motion. The EU AI Act's high-risk provisions take effect in August 2026. The Eightfold and Workday cases will set precedents in American courts. The ICO is consulting on new guidance until 29 May 2026. Legislators in Illinois, Colorado, and New York have demonstrated that it is possible to regulate AI in hiring without banning it. The question is whether these efforts will coalesce into a coherent framework before a generation of workers is sorted, scored, and discarded by systems that no one can explain.
The algorithms are not going away. The only remaining question is whether the people they judge will ever be allowed to judge them back.
BBC report on AI-led hiring in the UK, featuring Bhuvana Chilukuri's experience and Denis Machuel's comments on the job market, March 2026. https://www.storyboard18.com/trending/student-warns-ai-led-hiring-in-uk-causes-impersonal-rejections-ws-l-92877.htm
Collins McNicholas survey on candidate experiences with AI in recruitment, 2025. https://www.peoplemanagement.co.uk/article/1940958/jobseekers-fear-ai-unfairly-screening-applications-research-finds
Resume Genius, “2026 Hiring Insights Report: ATS, AI, and Employer Expectations,” survey of 1,000 US hiring managers, 2026. https://resumegenius.com/blog/job-hunting/hiring-insights-report
CoverSentry, “ATS Statistics 2026: Why Your Resume Disappears Into the Void,” analysis of AI screening rejection rates and parsing errors. https://www.coversentry.com/ats-statistics
Kistler and Bhaumik v. Eightfold AI Inc., class-action complaint filed 21 January 2026 by Outten and Golden LLP and Towards Justice. https://www.outtengolden.com/newsroom/landmark-class-action-accuses-eightfold-ai-of-illegally-producing-hidden-credit-reports-on-job-applicants
Fortune, “Job seekers are suing an AI hiring tool used by Microsoft and PayPal for allegedly compiling secretive reports that help employers screen candidates,” 26 January 2026. https://fortune.com/2026/01/26/job-seekers-suing-ai-hiring-tool-eightfold-allegedly-compiling-secretive-reports/
Consumer Financial Protection Bureau, “Consumer Financial Protection Circular 2024-06: Background Dossiers and Algorithmic Scores for Hiring, Promotion, and Other Employment Decisions,” October 2024. https://www.consumerfinance.gov/compliance/circulars/consumer-financial-protection-circular-2024-06-background-dossiers-and-algorithmic-scores-for-hiring-promotion-and-other-employment-decisions/
Consumer Financial Services Law Monitor, “CFPB Rescinds Dozens of Regulatory Guidance Documents in Major Regulatory Shift,” May 2025. https://www.consumerfinancialserviceslawmonitor.com/2025/05/cfpb-rescinds-dozens-of-regulatory-guidance-documents-in-major-regulatory-shift/
Pauline Kim, “People Analytics and the Regulation of Information Under the Fair Credit Reporting Act,” Washington University School of Law. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2809910
Jadhav, Rudra, and Janhavi Danve, “The AI Skills Shift: Mapping Skill Obsolescence, Emergence, and Transition Pathways in the LLM Era,” arXiv:2604.06906, 8 April 2026. https://arxiv.org/abs/2604.06906
Mobley v. Workday, Inc., US District Court for the Northern District of California, class-action complaint alleging age and race discrimination through AI-based screening. https://fairnow.ai/workday-lawsuit-resume-screening/
Law and the Workplace, “AI Bias Lawsuit Against Workday Reaches Next Stage as Court Grants Conditional Certification of ADEA Claim,” June 2025. https://www.lawandtheworkplace.com/2025/06/ai-bias-lawsuit-against-workday-reaches-next-stage-as-court-grants-conditional-certification-of-adea-claim/
Information Commissioner's Office, “Recruitment Rewired: An Update on the ICO's Work on the Fair and Responsible Use of Automation in Recruitment,” 31 March 2026. https://ico.org.uk/about-the-ico/what-we-do/recruitment-rewired/
University of Washington, “People mirror AI systems' hiring biases, study finds,” November 2025. https://www.washington.edu/news/2025/11/10/people-mirror-ai-systems-hiring-biases-study-finds/
Schellmann, Hilke, “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now,” Hachette Books, 2024. https://www.hachettebookgroup.com/titles/hilke-schellmann/the-algorithm/9780306827365/
European Commission, “AI Act: Shaping Europe's Digital Future,” regulatory framework for artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
New York City Local Law 144 on Automated Employment Decision Tools, effective July 2023. https://www.warden-ai.com/resources/hr-tech-compliance-nyc-local-law-144
Illinois House Bill 3773, amendment to the Illinois Human Rights Act regarding AI in employment decisions, effective January 2026. https://www.theemployerreport.com/2024/08/illinois-joins-colorado-and-nyc-in-restricting-generative-ai-in-hr-a-comprehensive-look-at-us-and-global-laws-on-algorithmic-bias-in-the-workplace/
Pauline Kim, testimony before the US Equal Employment Opportunity Commission, “Navigating Employment Discrimination, AI, and Automated Systems,” January 2023. https://www.eeoc.gov/meetings/meeting-january-31-2023-navigating-employment-discrimination-ai-and-automated-systems-new/kim

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk