from Human in the Loop

Picture a robot that has never been told how its own body works, yet watches itself move and gradually learns to understand its physical form through vision alone. No embedded sensors, no pre-programmed models, no expensive hardware—just a single camera and the computational power to make sense of what it sees. This isn't science fiction; it's the reality emerging from MIT's Computer Science and Artificial Intelligence Laboratory, where researchers have developed a system that could fundamentally change how we think about robotic control.

When Robots Learn to Know Themselves

The traditional approach to robotic control reads like an engineering manual written in advance of the machine it describes. Engineers meticulously map every joint, calculate precise kinematics, and embed sensors throughout the robot's body to track position, velocity, and force. It's a process that works, but it's also expensive, complex, and fundamentally limited to robots whose behaviour can be predicted and modelled beforehand.

Neural Jacobian Fields represent a radical departure from this paradigm. Instead of telling a robot how its body works, the system allows the machine to figure it out by watching itself move. The approach eliminates the need for embedded sensors entirely, relying instead on a single external camera to provide all the visual feedback necessary for sophisticated control.

The implications extend far beyond mere cost savings. Traditional sensor-based systems struggle with robots made from soft materials, bio-inspired designs, or multi-material constructions where the physics become too complex to model accurately. These machines—which might include everything from flexible grippers to biomimetic swimmers—have remained largely out of reach for precise control systems. Neural Jacobian Fields change that equation entirely.

Researchers at MIT CSAIL have demonstrated that their vision-based system can learn to control diverse robots without any prior knowledge of their mechanical properties. The robot essentially builds its own internal model of how it moves by observing the relationship between motor commands and the resulting visual changes captured by the camera. The system enables robots to develop what researchers describe as a form of self-awareness through visual observation—a type of embodied understanding that emerges naturally from watching and learning.

The breakthrough represents a fundamental shift from model-based to learning-based control. Rather than creating precise, often brittle mathematical models of robots, the focus moves towards data-driven approaches where robots learn their own control policies through interaction and observation. This mirrors a broader trend in robotics where adaptability and learning play increasingly central roles in determining behaviour.

The technology also highlights the growing importance of computer vision in robotics. As cameras become cheaper and more capable, and as machine learning approaches become more sophisticated, vision-based approaches are becoming viable alternatives to traditional sensor modalities. This trend extends beyond robotics into autonomous vehicles, drones, and smart home systems.

The Mathematics of Self-Discovery

At the heart of this breakthrough lies a concept called the visuomotor Jacobian field—an adaptive representation that directly connects what a robot sees to how it should move. In traditional robotics, Jacobian matrices describe the relationship between joint velocities and end-effector motion, requiring detailed knowledge of the robot's kinematic structure. The Neural Jacobian Field approach inverts this process, inferring these relationships purely from visual observation.
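
To put that contrast in symbols (schematic notation only, chosen for this article rather than taken from the paper: q is the joint configuration, u the motor command, and J_theta the learned network): the classical Jacobian is derived analytically from a known kinematic model, while the visuomotor field is a learned function of what the camera sees.

```latex
% Classical robotics: end-effector velocity from joint velocities,
% with J(q) derived analytically from the robot's known kinematics.
\dot{x} = J(q)\,\dot{q}

% Visuomotor Jacobian field (schematic): the predicted image-space motion
% of a visible point p under a small change in motor command u, where
% J_\theta is fitted purely from video of the robot moving.
\Delta I(p) \approx J_\theta(p, I)\,\Delta u
```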

The system works by learning to predict how small changes in motor commands will affect what the camera sees. Over time, this builds up a comprehensive understanding of the robot's capabilities and limitations, all without requiring any explicit knowledge of joint angles, link lengths, or material properties. It's a form of self-modelling that emerges naturally from the interaction between action and observation.
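
To make that idea concrete, here is a minimal sketch with invented variable names and synthetic data, not anything from the MIT system: nudge the motor command slightly, measure how tracked visual features shift, and fit a local Jacobian by least squares.

```python
import numpy as np

def estimate_local_jacobian(perturbations, feature_displacements):
    """Fit a local Jacobian J so that feature_displacement ≈ J @ perturbation.

    perturbations:         (N, n_motors) small changes applied to the motor command.
    feature_displacements: (N, n_features) observed shifts of tracked visual features.
    Returns J with shape (n_features, n_motors).
    """
    # Least squares: solve perturbations @ J.T ≈ feature_displacements.
    J_transposed, *_ = np.linalg.lstsq(perturbations, feature_displacements, rcond=None)
    return J_transposed.T

# Toy check: 3 motors, 4 tracked feature coordinates, 50 random probes.
rng = np.random.default_rng(0)
true_J = rng.normal(size=(4, 3))
du = rng.normal(scale=0.01, size=(50, 3))                   # command perturbations
dy = du @ true_J.T + rng.normal(scale=1e-4, size=(50, 4))   # observed visual changes
print(np.allclose(estimate_local_jacobian(du, dy), true_J, atol=1e-2))
```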

This control map becomes remarkably sophisticated. The system can understand not just how the robot moves, but how different parts of its body interact and how to execute complex movements through space. The robot develops a form of physical self-perception, understanding its own capabilities through empirical observation rather than theoretical calculation. This self-knowledge extends to understanding the robot's workspace boundaries, the effects of gravity on different parts of its structure, and even how wear or damage might affect its movement patterns.

The computational approach builds on recent advances in deep learning, particularly in the area of implicit neural representations. Rather than storing explicit models of the robot's geometry or dynamics, the system learns a continuous function that can be queried at any point to understand the local relationship between motor commands and visual feedback. This allows the system to scale to robots of varying complexity without requiring fundamental changes to the underlying approach.

The neural network architecture that enables this learning represents a sophisticated integration of computer vision and control theory. The system must simultaneously process high-dimensional visual data and learn the complex mappings between motor commands and their visual consequences. This requires networks capable of handling both spatial and temporal relationships, understanding not just what the robot looks like at any given moment, but how its appearance changes in response to different actions.

The visuomotor Jacobian field effectively replaces the analytically derived Jacobian matrix used in classical robotics. This movement model becomes a continuous function that maps the robot's configuration to the visual changes produced by its motor commands. The elegance of this approach lies in its generality—the same fundamental mechanism can work across different robot designs, from articulated arms to soft manipulators to swimming robots.

Beyond the Laboratory: Real-World Applications

The practical implications of this technology extend across numerous domains where traditional robotic control has proven challenging or prohibitively expensive. In manufacturing, the ability to control robots without embedded sensors could dramatically reduce the cost of automation, making robotic solutions viable for smaller-scale operations that couldn't previously justify the investment. Small manufacturers, artisan workshops, and developing economies could potentially find sophisticated robotic assistance within their reach.

Soft robotics represents perhaps the most immediate beneficiary of this approach. Robots made from flexible materials, pneumatic actuators, or bio-inspired designs have traditionally been extremely difficult to control precisely because their behaviour is hard to model mathematically. The Neural Jacobian Field approach sidesteps this problem entirely, allowing these machines to learn their own capabilities through observation. MIT researchers have successfully demonstrated the system controlling a soft robotic hand to grasp objects, showing how flexible systems can learn to adapt their compliant fingers to different shapes and develop strategies that would be nearly impossible to program explicitly.

Soft systems like these hold particular promise for safe interaction with humans and for navigating confined spaces, yet their control has remained difficult precisely because their behaviour resists mathematical modelling. Vision-based control sidesteps that barrier by letting each machine learn its own complex dynamics through observation.

The technology also opens new possibilities for field robotics, where robots must operate in unstructured environments far from technical support. A robot that can adapt its control strategy based on visual feedback could potentially learn to operate in new configurations without requiring extensive reprogramming or recalibration. This could prove valuable for exploration robots, agricultural machines, or disaster response systems that need to function reliably in unpredictable conditions.

Medical robotics presents another compelling application area. Surgical robots and rehabilitation devices often require extremely precise control, but they also need to adapt to the unique characteristics of each patient or procedure. A vision-based control system could potentially learn to optimise its behaviour for specific tasks, improving both precision and effectiveness. Rehabilitation robots, for example, could adapt their assistance patterns based on observing a patient's progress and changing needs over time.

The approach could potentially benefit prosthetics and assistive devices. Current prosthetic limbs often require extensive training for users to learn complex control interfaces. A vision-based system could potentially observe the user's intended movements and adapt its control strategy accordingly, creating more intuitive and responsive artificial limbs. The system could learn to interpret visual cues about the user's intentions, making the prosthetic feel more like a natural extension of the body.

The Technical Architecture

The Neural Jacobian Field system represents a sophisticated integration of computer vision, machine learning, and control theory. The architecture begins with a standard camera that observes the robot from an external vantage point, capturing the full range of the machine's motion in real-time. This camera serves as the robot's only source of feedback about its own state and movement, replacing arrays of expensive sensors with a single, relatively inexpensive visual system.

The visual input feeds into a deep neural network trained to understand the relationship between pixel-level changes in the camera image and the motor commands that caused them. This network learns to encode a continuous field that maps every point in the robot's workspace to a local Jacobian matrix, describing how small movements in that region will affect what the camera sees. The network processes not just static images, but the dynamic visual flow that reveals how actions translate into change.
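
A toy sketch of what such a continuous field could look like, assuming an ordinary coordinate MLP in PyTorch (the published architecture is considerably more sophisticated and works from raw images): a small network queried at a 3D point that returns the local Jacobian relating motor commands to that point's motion in the image.

```python
import torch
import torch.nn as nn

class JacobianField(nn.Module):
    """Toy implicit field: 3D query point -> local Jacobian (illustrative only)."""

    def __init__(self, n_motors: int, point_dim: int = 3, hidden: int = 128):
        super().__init__()
        self.n_motors = n_motors
        self.mlp = nn.Sequential(
            nn.Linear(point_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            # Two rows (image x/y motion) per motor channel.
            nn.Linear(hidden, 2 * n_motors),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, 3) query locations; returns (B, 2, n_motors) local Jacobians.
        return self.mlp(points).view(-1, 2, self.n_motors)

field = JacobianField(n_motors=6)
points = torch.rand(10, 3)                      # arbitrary workspace query points
command_change = torch.randn(6)                 # a small change in the motor command
predicted_image_motion = field(points) @ command_change   # (10, 2) predicted motion
```

Queried densely, a network like this behaves as a continuous field rather than a lookup table, which is what allows the same machinery to describe robots of very different complexity.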

The training process requires the robot to execute a diverse range of movements while the system observes the results. Initially, these movements explore the robot's capabilities, allowing the system to build a comprehensive understanding of how the machine responds to different commands. The robot might reach in various directions, manipulate objects, or simply move its joints through their full range of motion. Over time, the internal model becomes sufficiently accurate to enable sophisticated control tasks, from precise positioning to complex manipulation.
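
At a very high level, that exploration-and-regression regime might be caricatured as follows; everything here is a stand-in, with synthetic "observed motion" in place of real video, intended only to show the shape of the training signal.

```python
import torch

# Illustrative only: predicted visual motion for a sampled command is regressed
# against the motion that was (here, synthetically) observed.
n_motors, batch = 6, 64
field = torch.nn.Sequential(
    torch.nn.Linear(3, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 2 * n_motors),
)
optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)

for step in range(200):
    points = torch.rand(batch, 3)               # where on the robot we are looking
    du = 0.05 * torch.randn(batch, n_motors)    # small exploratory command changes
    # Stand-in "ground truth": in the real system this comes from observed video.
    true_J = 0.1 * torch.ones(batch, 2, n_motors)
    observed_motion = torch.einsum('bij,bj->bi', true_J, du)

    predicted_J = field(points).view(batch, 2, n_motors)
    predicted_motion = torch.einsum('bij,bj->bi', predicted_J, du)
    loss = torch.mean((predicted_motion - observed_motion) ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```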

One of the notable aspects of the system is its ability to work across different robot configurations. The neural network architecture can learn to control robots with varying mechanical designs without fundamental modifications. This generality stems from the approach's focus on visual feedback rather than specific mechanical models. The system learns principles about how visual changes relate to movement that can apply across different robot designs.

The control loop operates in real-time, with the camera providing continuous feedback about the robot's current state and the neural network computing appropriate motor commands to achieve desired movements. The system can handle both position control, where the robot needs to reach specific locations, and trajectory following, where it must execute complex paths through space. The visual feedback allows for immediate correction of errors, enabling the robot to adapt to unexpected obstacles or changes in its environment.
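
One standard way to turn a predicted Jacobian and a visual error into a motor command is a damped least-squares step, familiar from classical visual servoing; the sketch below shows that generic technique with made-up numbers, not the authors' published controller.

```python
import numpy as np

def visual_servo_step(J, current_features, target_features, gain=0.5, damping=1e-3):
    """One damped least-squares control step (generic visual servoing).

    J:                 (n_features, n_motors) local Jacobian for the current
                       configuration, e.g. queried from the learned field.
    current_features:  (n_features,) where tracked points are now in the image.
    target_features:   (n_features,) where we want them to be.
    Returns a small motor-command update that moves the features toward the target.
    """
    error = target_features - current_features
    n_motors = J.shape[1]
    # Damped pseudo-inverse: (J^T J + lambda*I)^-1 J^T (gain * error)
    return np.linalg.solve(J.T @ J + damping * np.eye(n_motors), J.T @ (gain * error))

# Toy usage with random numbers standing in for a real Jacobian and tracked features.
rng = np.random.default_rng(1)
J = rng.normal(size=(8, 4))
du = visual_servo_step(J, rng.normal(size=8), rng.normal(size=8))
print(du)
```

Repeating such a step every frame gives the feedback loop described above: observe, compare with the target, nudge the motors, observe again.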

The computational requirements, while significant, remain within the capabilities of modern hardware. The system can run on standard graphics processing units, making it accessible to research groups and companies that might not have access to specialised robotic hardware. This accessibility is important for the technology's potential to make advanced robotic control more widely available.

The approach represents a trend moving away from reliance on internal, proprioceptive sensors towards using rich, external visual data as the primary source of feedback for robotic control. Neural Jacobian Fields exemplify this shift, demonstrating that sophisticated control can emerge from careful observation of the relationship between actions and their visual consequences.

Democratising Robotic Intelligence

Perhaps one of the most significant long-term impacts of Neural Jacobian Fields lies in their potential to make sophisticated robotic control more accessible. Traditional robotics has been dominated by large institutions and corporations with the resources to develop complex sensor systems and mathematical models. The barrier to entry has remained stubbornly high, limiting innovation to well-funded research groups and established companies.

Vision-based control systems could change this dynamic. A single camera and appropriate software could potentially replace substantial investments in embedded sensors, making advanced robotic control more accessible to smaller research groups, educational institutions, and individual inventors. While the approach still requires technical expertise in machine learning and robotics, it eliminates the need for detailed kinematic modelling and complex sensor integration.

This increased accessibility could accelerate innovation in unexpected directions. Researchers working on problems in biology, materials science, or environmental monitoring might find robotic solutions more within their reach, leading to applications that traditional robotics companies might never have considered. The history of computing suggests that transformative innovations often come from unexpected quarters once the underlying technology becomes more accessible.

Educational applications represent another significant opportunity. Students learning robotics could focus on high-level concepts and applications while still engaging with the mathematical foundations of control theory. This could help train a new generation of roboticists with a more intuitive understanding of how machines move and interact with their environment. Universities with limited budgets could potentially offer hands-on robotics courses without investing in expensive sensor arrays and specialised hardware.

The democratisation extends beyond formal education to maker spaces, hobbyist communities, and entrepreneurial ventures. Individuals with creative ideas for robotic applications could prototype and test their concepts without the traditional barriers of sensor integration and control system development. This could lead to innovation in niche applications, artistic installations, and novel robotic designs that push the boundaries of what we consider possible.

Small businesses and developing economies could particularly benefit from this accessibility. Manufacturing operations that could never justify the cost of traditional robotic systems might find vision-based robots within their reach. This could help level the playing field in global manufacturing, allowing smaller operations to compete with larger, more automated facilities.

The potential economic implications extend beyond the robotics industry itself. By reducing the cost and complexity of robotic control, the technology could accelerate automation in sectors that have previously found robotics economically unviable. Small-scale manufacturing, agriculture, and service industries could all benefit from more accessible robotic solutions.

Challenges and Limitations

Despite its promise, the Neural Jacobian Field approach faces several significant challenges that will need to be addressed before it can achieve widespread adoption. The most fundamental limitation lies in the quality and positioning of the external camera. Unlike embedded sensors that can provide precise measurements regardless of environmental conditions, vision-based systems remain vulnerable to lighting changes, occlusion, and camera movement.

Lighting conditions present a particular challenge. The system must maintain accurate control across different illumination levels, from bright sunlight to dim indoor environments. Shadows, reflections, and changing light sources can all affect the visual feedback that the system relies upon. While modern computer vision techniques can handle many of these variations, they add complexity and potential failure modes that don't exist with traditional sensors.

The learning process itself requires substantial computational resources and training time. While the system can eventually control robots without embedded sensors, it needs significant amounts of training data to build accurate models. This could limit its applicability in situations where robots need to begin operating immediately or where training time is severely constrained. The robot must essentially learn to walk before it can run, requiring a period of exploration and experimentation that might not be practical in all applications.

Robustness represents another ongoing challenge. Traditional sensor-based systems can often detect and respond to unexpected situations through direct measurement of forces, positions, or velocities. Vision-based systems must infer these quantities from camera images, potentially missing subtle but important changes in the robot's state or environment. A loose joint, worn component, or unexpected obstacle might not be immediately apparent from visual observation alone.

The approach also requires careful consideration of safety, particularly in applications where robot malfunction could cause injury or damage. While the system has shown impressive performance in laboratory settings, proving its reliability in safety-critical applications will require extensive testing and validation. The lack of direct force feedback could be particularly problematic in applications involving human interaction or delicate manipulation tasks.

Occlusion presents another significant challenge. If parts of the robot become hidden from the camera's view, the system loses crucial feedback about those components. This could happen due to the robot's own movements, environmental obstacles, or the presence of humans or other objects in the workspace. Developing strategies to handle partial occlusion or to use multiple cameras effectively remains an active area of research.

The computational demands of real-time visual processing and neural network inference can be substantial, particularly for complex robots or high-resolution cameras. While modern hardware can handle these requirements, the energy consumption and processing power needed might limit deployment in battery-powered or resource-constrained applications.

The Learning Process and Adaptation

One of the most fascinating aspects of Neural Jacobian Fields is how they learn. Unlike traditional machine learning systems that are trained on large datasets and then deployed, these systems learn continuously through interaction with their environment. The robot's understanding of its own capabilities evolves over time as it gains more experience with different movements and situations.

This continuous learning process means that the robot's performance can improve over its operational lifetime. Small changes in the robot's physical configuration, whether due to wear, maintenance, or intentional modifications, can be accommodated automatically as the system observes their effects on movement. A robot might learn to compensate for a slightly loose joint or adapt to the addition of new tools or attachments.

The robot's learning follows recognisable stages. Initially, movements are exploratory and somewhat random as the system builds its basic understanding of cause and effect. Gradually, more purposeful movements emerge as the robot learns to predict the consequences of its actions. Eventually, the system develops the ability to plan complex movements and execute them with precision.

This learning process is robust to different starting conditions. Robots with different mechanical designs can learn effective control strategies using the same basic approach. The system discovers the unique characteristics of each robot through observation, adapting its strategies to work with whatever physical capabilities are available.

The continuous nature of the learning also means that robots can adapt to changing conditions over time. Environmental changes, wear and tear, or modifications to the robot's structure can all be accommodated as the system observes their effects and adjusts accordingly. This adaptability could prove crucial for long-term deployment in real-world applications where conditions are never perfectly stable.

The approach enables a form of learning that mirrors biological development, where motor skills emerge through exploration and practice rather than explicit instruction. This parallel suggests that vision-based motor learning may reflect fundamental principles of how intelligent systems acquire physical capabilities.

Scaling and Generalisation

The ability of Neural Jacobian Fields to work across different robot configurations is one of their most impressive characteristics. The same basic approach can learn to control robots with different mechanical designs, from articulated arms to flexible swimmers to legged walkers. This generality suggests that the approach captures something fundamental about the relationship between vision and movement.

This generalisation capability could be important for practical deployment. Rather than requiring custom control systems for each robot design, manufacturers could potentially use the same basic software framework across multiple product lines. This could reduce development costs and accelerate the introduction of new robot designs. The approach might enable more standardised robotics where new mechanical designs can be controlled effectively without extensive software development.

The system's ability to work with compliant robots is particularly noteworthy. These machines, made from flexible materials that can bend, stretch, and deform, have shown great promise for applications requiring safe interaction with humans or navigation through confined spaces. However, their control has remained challenging precisely because their behaviour is difficult to model mathematically. Vision-based control could unlock the potential of these systems by allowing them to learn their own complex dynamics through observation.

The approach might also enable new forms of modular robotics, where individual components can be combined in different configurations without requiring extensive recalibration or reprogramming. If a robot can learn to understand its own body through observation, it might be able to adapt to changes in its physical configuration automatically. This could lead to more flexible and adaptable robotic systems that can be reconfigured for different tasks.

The generalisation extends beyond just different robot designs to different tasks and environments. A robot that has learned to control itself in one setting can often adapt to new situations relatively quickly, building on its existing understanding of its own capabilities. This transfer learning could make robots more versatile and reduce the time needed to deploy them in new applications.

The success of the approach across diverse robot types suggests that it captures principles about motor control that apply regardless of specific mechanical implementation. This universality could be key to developing more general robotic intelligence that isn't tied to particular hardware configurations.

Expanding Applications and Future Possibilities

The Neural Jacobian Field approach represents a convergence of several technological trends that have been developing independently for years. Computer vision has reached a level of sophistication where single cameras can extract remarkably detailed information about three-dimensional scenes. Machine learning approaches have become powerful enough to find complex patterns in high-dimensional data. Computing hardware has become fast enough to process this information in real-time.

The combination of these capabilities creates opportunities that were simply not feasible even a few years ago. The ability to control sophisticated robots using only visual feedback represents a qualitative leap in what's possible with relatively simple hardware configurations. This technological convergence also suggests that similar breakthroughs may be possible in other domains where complex systems need to be controlled or understood.

The principles underlying Neural Jacobian Fields could potentially be applied to problems in autonomous vehicles, manufacturing processes, or even biological systems where direct measurement is difficult or impossible. The core insight—that complex control can emerge from careful observation of the relationship between actions and their visual consequences—has applications beyond robotics.

In autonomous vehicles, similar approaches might enable cars to learn about their own handling characteristics through visual observation of their movement through the environment. Manufacturing systems could potentially optimise their operations by observing the visual consequences of different process parameters. Even in biology, researchers might use similar techniques to understand how organisms control their movement by observing the relationship between neural activity and resulting motion.

The technology might also enable new forms of robot evolution, where successful control strategies learned by one robot could be transferred to others with similar capabilities. This could create a form of collective learning where the robotics community as a whole benefits from the experiences of individual systems. Robots could share their control maps, accelerating the development of new capabilities across populations of machines.

The success of Neural Jacobian Fields opens numerous avenues for future research and development. One promising direction involves extending the approach to multi-robot systems, where teams of machines could learn to coordinate their movements through shared visual feedback. This could enable new forms of collaborative robotics that would be extremely difficult to achieve through traditional control methods.

Another area of investigation involves combining vision-based control with other sensory modalities. While the current approach relies solely on visual feedback, incorporating information from audio, tactile, or other sensors could enhance the system's capabilities and robustness. The challenge lies in maintaining the simplicity and generality that make the vision-only approach so appealing.

Implications for Human-Robot Interaction

As robots become more capable of understanding their own bodies through vision, they may also become better at understanding and interacting with humans. The same visual processing capabilities that allow a robot to model its own movement could potentially be applied to understanding human gestures, predicting human intentions, or adapting robot behaviour to human preferences.

This could lead to more intuitive forms of human-robot collaboration, where people can communicate with machines through natural movements and gestures rather than explicit commands or programming. The robot's ability to learn and adapt could make these interactions more fluid and responsive over time. A robot working alongside a human might learn to anticipate their partner's needs based on visual cues, creating more seamless collaboration.

The technology might also enable new forms of robot personalisation, where machines adapt their behaviour to individual users based on visual observation of preferences and patterns. This could be particularly valuable in healthcare, education, or domestic applications where robots need to work closely with specific individuals over extended periods. A care robot, for instance, might learn to recognise the subtle signs that indicate when a patient needs assistance, adapting its behaviour to provide help before being asked.

The potential for shared learning between humans and robots is particularly intriguing. If robots can learn through visual observation, they might be able to watch humans perform tasks and learn to replicate or assist with those activities. This could create new forms of robot training where machines learn by example rather than through explicit programming.

The visual nature of the feedback also makes the robot's learning process more transparent to human observers. People can see what the robot is looking at and understand how it's learning to move. This transparency could build trust and make human-robot collaboration more comfortable and effective.

Economic and Industrial Impact

For established robotics companies, the technology presents both opportunities and challenges. While it could reduce manufacturing costs and enable new applications, it might also change competitive dynamics in the industry. Companies will need to adapt their strategies to remain relevant in a world where sophisticated control capabilities become more widely accessible.

The approach could also enable new business models in robotics, where companies focus on software and learning systems rather than hardware sensors and mechanical design. This could lead to more rapid innovation cycles and greater specialisation within the industry. Companies might develop expertise in particular types of learning or specific application domains, creating a more diverse and competitive marketplace.

The democratisation of robotic control could also have broader economic implications. Regions that have been excluded from the robotics revolution due to cost or complexity barriers might find these technologies more accessible. This could help reduce global inequalities in manufacturing capability and create new opportunities for economic development.

The technology might also change the nature of work in manufacturing and other industries. As robots become more accessible and easier to deploy, the focus might shift from operating complex machinery to designing and optimising robotic systems. This could create new types of jobs while potentially displacing others, requiring careful consideration of the social and economic implications.

Rethinking Robot Design

The availability of vision-based control systems could fundamentally change how robots are designed and manufactured. When embedded sensors are no longer necessary for precise control, engineers gain new freedom in choosing materials, form factors, and mechanical designs. This could lead to robots that are lighter, cheaper, more robust, or better suited to specific applications.

The elimination of sensor requirements could enable new categories of robots. Disposable robots for dangerous environments, ultra-lightweight robots for delicate tasks, or robots made from unconventional materials could all become feasible. The design constraints that have traditionally limited robotic systems could be relaxed, opening up new possibilities for innovation.

The approach might also enable new forms of bio-inspired robotics, where engineers can focus on replicating the mechanical properties of biological systems without worrying about how to sense and control them. This could lead to robots that more closely mimic the movement and capabilities of living organisms.

The reduced complexity of sensor integration could also accelerate the development cycle for new robot designs. Prototypes could be built and tested more quickly, allowing for more rapid iteration and innovation. This could lead to a more dynamic and creative robotics industry where new ideas can be explored more easily.

The Path Forward

Neural Jacobian Fields represent more than just a technical advance; they embody a fundamental shift in how we think about robotic intelligence and control. By enabling machines to understand themselves through observation rather than explicit programming, the technology opens possibilities that were previously difficult to achieve.

The journey from laboratory demonstration to widespread practical application will undoubtedly face numerous challenges. Questions of reliability, safety, and scalability will need to be addressed through careful research and testing. The robotics community will need to develop new standards and practices for vision-based control systems.

Researchers are also exploring ways to accelerate the learning process, potentially through simulation, transfer learning, or more sophisticated training approaches. Reducing the time required to train new robots could make the approach more practical for commercial applications where rapid deployment is essential.

Yet the potential rewards justify the effort. A world where robots can learn to understand themselves through vision alone is a world where robotic intelligence becomes more accessible, more adaptable, and more aligned with the complex, unpredictable nature of real-world environments. The robots of the future may not need to be told how they work—they'll simply watch themselves and learn.

As this technology continues to develop, it promises to blur the traditional boundaries between artificial and biological intelligence, creating machines that share some of the adaptive capabilities that have made biological organisms so successful. In doing so, Neural Jacobian Fields may well represent a crucial step towards truly autonomous, intelligent robotic systems that can thrive in our complex world.

The implications extend beyond robotics into our broader understanding of intelligence, learning, and adaptation. By demonstrating that sophisticated control can emerge from simple visual observation, this research challenges our assumptions about what forms of knowledge are truly necessary for intelligent behaviour. In a sense, these robots are teaching us something fundamental about the nature of learning itself.

The future of robotics may well be one where machines learn to understand themselves through observation, adaptation, and continuous interaction with the world around them. In this future, the robots won't just follow our instructions—they'll watch, learn, and grow, developing capabilities we never explicitly programmed but that emerge naturally from their engagement with reality itself.

This vision of self-aware, learning robots represents a profound shift in our relationship with artificial intelligence. Rather than creating machines that simply execute our commands, we're developing systems that can observe, learn, and adapt in ways that mirror the flexibility and intelligence of biological organisms. The robots that emerge from this research may be our partners in understanding and shaping the world, rather than simply tools for executing predetermined tasks.

If robots can learn to see and understand themselves, the possibilities for what they might achieve alongside us become truly extraordinary.

References

  1. MIT Computer Science and Artificial Intelligence Laboratory. “Robots that know themselves: MIT's vision-based system teaches machines self-awareness.” Available at: www.csail.mit.edu

  2. Li, S.L., et al. “Controlling diverse robots by inferring Jacobian fields with deep learning.” PubMed Central. Available at: pmc.ncbi.nlm.nih.gov

  3. MIT EECS. “Robotics Research.” Available at: www.eecs.mit.edu

  4. MIT EECS Faculty. “Daniela Rus.” Available at: www.eecs.mit.edu

  5. arXiv. “Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation.” Available at: arxiv.org


Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0000-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from EnbySpacePerson

A picture of an LCD computer screen taken at an angle. The screen shows a close up image of the output of an ls command in the root of the drive. The ls output is color coded and shows sym links in light blue.

Image by joffi from Pixabay

My main operating system is Linux. I use Mac and Windows. I complain about all of my operating systems. Occasionally, when I'm complaining about Windows, I get people telling me I should switch to Linux. This isn't quite as funny as you might think.

When I complain about Linux, I'm often told I should switch distributions.

To be clear, this impulse is anti-social behavior. I've been using Linux seriously for more than 25 years. If you're guilty of responding to people's tech gripes with something that amounts to “change every single thing about everything you do radically, overnight because I said so,” you need to cut that shit out immediately.

There is no Linux which is a drop-in replacement for Windows or Mac. People will have questions ... maybe even about every single thing they do. And, if they ask a search engine, the bulk of the answers they get will be as many as 20 years out of date. It will almost certainly involve reading somebody in tech writing insulting and disparaging things to someone who had the same question umpteen years ago. That's hugely demoralizing and leaves you with the impression that everyone in the 'Linux community' is a gigantic asshole.

In contrast, switching from one shell to another shouldn't be that big of a deal. I've used ksh and didn't like it but I got on alright. I've used zsh and I was mostly annoyed that it wasn't my choice. It seems fine. Mostly, I've used bash because that's the default shell in Debian (not my first distribution but almost my first) and Ubuntu (where I'm at right now for reasons I won't get into ... but I don't recommend it to anyone).

The topic of fish has come up several times over the past few days. I've been exploring tmux and Zellij lately so I'm open to new possibilities. I did my due diligence before installing it. And then I was pretty shocked when I went to use it that it was immediately hugely useful without really having to learn anything else. (I actually can say the same about Zellij, by the way, but I'm still using tmux as my main terminal multiplexer at the moment.)

It's slightly inconvenient to launch fish each time I start a new terminal. I went about figuring out how to change my default shell. I ran into “How to Change Your Default Shell on Linux with chsh” and started reading through it while working on another task.

Generally, I take a dim view of articles or answers that take the long way around. I learned soooo much reading that. I can't complain about how Dave McKay built the article. On the off chance you see this, thanks, Dave!

I haven't actually changed my default shell yet but it's not because of your article.

Sometimes, I just want to get something done. For that, I could have skipped ahead. I was interested in the things I ended up learning along the way. This wasn't like one of those times where you go to read through a recipe and you have to scroll by twenty photo documented pages of unrelated or mostly unrelated stuff to get to the heart of the recipe. Dave builds on concepts the reader (in this case, me) likely needs to know before making the decision to switch shells.

It's good stuff. We need more documentation like this.

If you enjoy erotic or adult fiction, please support my work by picking up some of my stories at Chanting Lure Tales.

 
Read more...

from One Thomas Alan

I think many are chasing someone who doesn’t exist.

Someone built from scraps of admiration.

📷 Taken in Tokyo. A reflection on identity, illusion, and how easily we lose ourselves.

#streetphotography #blackandwhite

 
Read more...

from Shared Visions

Tijana Cvetković, Milan Đorđević, Noa Treister, July 2025.

Shared Visions coop goes, this time, a bit further into the region (crossing what became a state border after Yugoslavia’s breakup) in the quest to find new partner organisations and artists/members… spreading the word. To become sustainable, the cooperative needs to build a wide support network of collaborations and joint ventures. In this workshop, we have started building these connections in the region with BASOC and DKC Incel from Banja Luka, which follow the same principles and politics as Shared Visions.

The BASOC Case

Banja Luka Social Centre (BASOC) sits on the Vrbas river, in the city centre’s historically Muslim quarter beside a mosque – an area whose population shifted drastically during the war. Evenings we gather for dinner under a walnut tree in the garden of the squat‑style building, partly neglected, shared with two homeless comrades who safeguard the space. BASOC is both activist practice and infrastructure, focusing on social justice and social equality, workers’ issues and their self-empowerment, work with the BiH diaspora and minorities, as well as gender politics, and it actively resists the patriarchy, nationalism, and economic inequality that arose from the wars of the 1990s and the post-war transition… which is why it has been targeted multiple times by hooligans or groups that are traditionally triggered by challenges to dominant power. Founded 11 years ago as an alternative space in a post‑war city ruled by big capital – and in an entity conducting a witch‑hunt against civil society – BASOC has weathered fluctuating membership. Most who could leave for Europe have gone; youth raised in a society scarred by fratricidal war and nationalist restructuring in service of capitalist logic now see such spaces as marginal. BASOC now stands at a crossroads: its founders ready to step back, yet no new generation in sight to take over. Banja Luka has become less a place to build futures than one people leave to survive.

Banja Luka is the second-largest city in BiH. A city in a valley through which the Vrbas River flows, meeting three other rivers: Suturlija, Crkvena, and Vrbanja. A super green city, with an impossible number of roundabouts on the roads. Positioned practically halfway between Belgrade and Zagreb. Once an industrial giant, with a 20 km factory zone where most enterprises have gone bankrupt. In this factory zone sits the Socio-cultural Centre INCEL, where the Radionica #2 workshop meetings take place. DKC Incel is an association of independent creators and activists that has been operating since 1999 with a clear mission: the development of a culture that encourages active citizen participation in social processes, strengthening of the civil sector with a focus on youth, and realisation of human rights through art and activism... Currently, INCEL is seeking support via a crowdfunding campaign due to the lack of local support, caused by a negative law on NGOs...

Structurally, Radionica #2 is a repetition of Radionica #1 held in Belgrade, to test the methodology in other surroundings. The participants, about 25 of them, are mostly from Banja Luka and Sarajevo but also from Zagreb, Ljubljana, and Belgrade. Randomly assembled groups of four are assigned to come up with socio-cultural projects that will be funded through crowdfunding, via Patreon, the only available platform in BiH. Patreon works more through ongoing, regular small donations collected periodically than through reaching a larger target amount. It is therefore more about creating a community that the beneficiaries have to constantly interact with and include, which will create a different relationship with the cooperative.

This time for Radionica #2, due to the logic of Patreon, it was decided to select only one out of the four proposed projects to be featured in the campaign:

Project #1 – A satirical fanzine. Through the lens of humour, it will address day-to-day socio-political events in the context of Banja Luka and the region. 

Project #2 – Local community studio. Repurposing abandoned or neglected buildings for local community usage where all the users manage the space and the program equally. The building should be repurposed into a production studio for artists from the region and travellers, hosting different events. Initially, it will be funded from five-year membership fees, donations, participation of members in expenses, through crowdfunding or by selling merch created as a by-product of art workshops. 

Project #3 – Printing studio. Accessible to everyone locally, but focused on the participation of younger generations. It would offer different printing methods including digital, riso and manual printing practices, such as silkscreen or gelatin print…

Project #4 – Problem Chain Platform. A kind of digital platform where problems are shared and regarded as value. There are three possible categories for classifying problems, which are validated accordingly: private problem = 0.5 points, communal/collective problem = 1 point, and societal problem = 2 points. For example, “I didn’t go to work today,” and instead of being penalised by the system, you are rewarded by receiving something (0.5 points, as it is a private problem). Points collected will be converted into something material (maybe even fiat). In addition, there is a proposal for the SV Cooperative to be structured on the basis of one problem, one vote: “We won’t create problems for you, but will deliver ours.”

All four projects turned out to be mutually complementary; it was possible to compile and integrate all of them into one project organically. Besides offering the chance to structure another project together, this process kept all participants engaged and motivated to continue working together, and no one was excluded. The abandoned space from Project #2 is now localised at BASOC, where a printing studio accessible to all will be established, and the satirical fanzine based on the problem chain structure will be printed as the initial product!

Radionica #2 contributed to lighting a spark that will set things in motion again for BASOC, spending hot summer days chilling next to Vrbas river in the sleepy valley of Banja Luka… 

All artists have problems, but problems are not obstacles. They are what cannot be taken from us; they are our “in surplus”. Let go of something of your own, something selfless. Maybe a problem. Maybe support for the art cooperative Shared Visions. Pump up your problem and you will grow wings!  

(Teaser for the crowdfunding campaign)

 
Read more... Discuss...

from Cajón Desastre

Tags: #random #redes #nodos #digamosUX

This is a Sinatra translation (my way) of this article published by William Chaumeton on Medium about creativity and play, which I already talked about on Bluesky.

The bits in bold parentheses are things the author doesn't say but that the reading suggested to me.

Basically, the article says that to have a creative environment you have to build a place to play, a playground, in other words. And it doesn't mean a foosball table in the office or Mr. Wonderful quotes on the walls. You need to be able to really play. To make messes.

Creativity doesn't flourish under pressure, and it isn't about getting things perfect. It's about exploring, co-creating, having the freedom to try things that probably won't work. It's messy, unpredictable, and it only happens if the environment allows it.

Creative chaos

The author of the article says that going to the playground with his daughter has taught him more about creativity than any book on the subject.

Children don't need a good reason to try things. They climb, run and do things, apparently without thinking. They try stuff just to see what happens. They make a mess of everything. They're in an environment that is safe yet open for exploring.

What do all playgrounds have that every company looking for creativity should have too?

  1. Collaboration. Many playground games only work with other people (the seesaw, tag, hide-and-seek, etc.). If we think back to our memories of the school playground, they always involve other people. The games, the laughter, the fights and the lessons of the playground are never individual. Innovation, however much they try to convince us otherwise, is NEVER the result of a lone genius either. Everything rests on sharing ideas, evolving them, questioning them, suggesting unexpected things, exchanging things with people who see the world differently from you. Without diversity you only feed your own biases, and the only way to know whether an idea has potential is to test it from different angles. We all have limited angles. Working in silos or with a “lone wolf” mentality means good ideas get lost or never get developed.

  2. Trust. Nobody plays if they're afraid. You need a safety net to take creative risks. Nobody proposes anything when they're afraid other people will laugh. You have to be able to share your wildest ideas without anyone punishing you if they don't work, or next time you won't propose something that might have worked.

  3. Fun. A playful spirit feeds the possibility of things happening. Having fun shouldn't be an extra; it should be the starting point. Thinking that having fun isn't professional or “serious” makes it impossible for proposals that solve serious problems to emerge.

  4. Time. There are few things that can be done both well and fast. Shaping new ideas is slow. New ideas need testing; they will have mistakes, weak points, dead ends to work through, and sometimes those new ideas are just starting points for learning about the final solution. In creativity, time is not a luxury, it's a necessity. Without time there is no creativity. The more time you seem to waste on creative processes, the more likely you are to find a better solution (better meaning cheaper, more viable, more profitable). It's impossible to hit on creative solutions in the gap between 15 meetings.

  5. Curiosity. Asking questions is the door to knowledge. Children don't follow instructions; they invent rules, break them, ask why every minute. It's not wasting time, it's how they discover and learn. Without curiosity it's hard to create. Curiosity is what leads you to “what if…”. It's important to spend time properly answering the questions that come up. Really answering them. It's important to surround yourself with people who never stop questioning things. People who are certain they'll never know everything.

  6. Tolerance for error. Repeated mistakes are ways of getting somewhere. The creative process has to include ways for bad ideas, useless things and failures to feed into the final solution. If mistakes are punished, people won't propose things they're not sure about, and if you're sure about something you're not being creative. The team will end up dizzy, going round and round over the same things. A mistake is better than the fear of a mistake. (And the way not to fear mistakes is to build them in as part of the creative process.)

Where to start?

Any way of starting that rests on respecting these principles is already a giant step, because the very creativity you're working from will keep refining the model that works for the team in its context.

Trust makes people honest

Honesty makes collaboration easier

Collaborating generates curiosity (other points of view feed your own)

Curiosity needs time

And if there's fun in all of this, failing will stop being scary and will become part of the process.

Start with something small

Set aside a recurring slot, with no agenda and no PowerPoints, to explore problems or crazy ideas.

Put the team in pairs/small groups on projects. That alone will make ideas “bounce” and grow.

Create environments of trust. Understand the team. Admit what you don't know. Set an example for the team and they will follow you.

Go looking for fun. Do it in your own style. Think about what you find fun: a game, clothes, silly expressions…

Find time. Without time, creativity simply isn't going to happen. Protect that time and wait for the magic to happen.

Give your team's curiosity room. A whiteboard for silly ideas. A prize for the best question. Make curiosity matter.

Build mistakes into the process. Spend time looking at what can be learned from them. Let mistakes be part of the story, not a wound.

Let connections emerge. If everything above happens, the people on the team will connect in a way that makes creativity part of how they relate, almost without noticing. They will be working in the playground, and everything will start to happen.

 
Read more... Discuss...

from Telmina's notes

The day before yesterday and yesterday were days that truly deserved to be called scorching, but around Tokyo, where I live, it looks like rain will continue for a while starting today.

If it was going to rain anyway, I wish it had rained two weeks ago; still, rain at this time of year, after days of extreme heat, is in its own way a blessing.

This weekend's three-day holiday will also basically be rainy. In Tokyo it looks like it will clear up a little on Saturday, which is convenient for me since I have a hospital visit scheduled that day, but rain for almost the whole Obon period can hardly be convenient for people planning to travel home for Obon.

As for me, taking summer leave this year has become a hopeless prospect. Partly the work assigned to me is simply behind schedule, but above all the atmosphere makes it very hard to take summer leave at all, and my plan to decide flexibly at the last minute has completely backfired. I should have secured the days off in July, even if it meant inventing a fake errand.

This summer's long rain feels as if it is speaking for me. The phrase “the heavens are weeping in my place” fits perfectly.

best quality, realistic, RAW photo, high angle shot, a tall ((Japanese)) large breasts wide-hipped short bobbed haired intelligent beautiful girl walking in the heavy rainfall and crying, cool beauty, wearing ((dark green headband)), ((dark green tanktops with a large open chest area)), ((white tight silky hotpants)), ((white long boots)).

This image was created with Stable Diffusion web UI.

That said, daily highs still look set to stay above 30 degrees, so I can't expect things to become any more comfortable. In that sense too, the phrase “the heavens are weeping in my place” applies.

Come to think of it, I have mostly recovered from the poor health that had dragged on since around last weekend, and my cold symptoms are subsiding. Still, if I overdo it now I could easily get sick again, so I will probably spend this weekend's three-day holiday basically sulking in bed. For better or worse, I haven't made any plans other than the hospital visit…

#2025年 #2025年8月 #2025年8月7日 #ひとりごと #雑談 #仕事 #不安 #体調不良 #風邪

 
Read more…

from Noisy Deadlines

Lately I’ve been feeling overwhelmed by the digital world. Well, maybe not the whole digital world per se, but using digital tools for everything in my life.

I started noticing this discomfort in my morning routine when I sit down to journal. I would open my app, only to immediately get distracted by everything else on the screen, or just by the possibilities the digital world offers me. It’s right there in front of me. I could just do a quick email check, look at my calendar or easily search for something online and get pulled in. So I would have less time to journal before heading to work, and typing started to feel unsatisfying. It felt mechanical and disconnected from my thoughts. I wasn’t getting much out of it.

The irony is that I started to question my use of digital tools while I was at work, of all places. We’re closing a major tender project in a month, and I was given paper copies of the architectural and structural drawings. Because this is a complex project, I realized how helpful the paper copies were for understanding the design and its complexities. I started taking notes and highlighting directly on the pages. This time, I really used the paper copy, annotating everything and using various Post-its to bookmark sections. I’m familiar with using paper drawings, but usually just as a reference for a quick flip-through.

My manager only reviews drawings on paper, and this time I understood why. I was able to focus for hours without interruptions or screen distractions. Walking into review meetings with just my paper copy, notebook, and pen felt refreshing. Not having my laptop made the meetings more focused and calm.

So, long story short: I’m leaning into writing, annotating, planning, and journaling on paper.

I’m testing drafting this blog post in a paper notebook. I don’t want blinking cursors, grammar suggestions, or any AI tweaking my words. I’m craving a blank page with nothing else to distract me.

I’ve started a dedicated notebook just for blog post drafts. I used some of the Bullet Journal Method recommendations to set up an Index, a Future Log, and a Collection for Blog Post ideas. This is my first draft!

I also began a personal Bullet Journal for my daily logging and journaling. The goal is to replace the Time Block Planner and the Happy Planner. It’s only been a few days, but I’m already enjoying the spaciousness of a paper notebook. My daily journaling feels more in tune with my actual day. I like having the space to unload my thoughts and log things.

This was the missing piece in my productivity system. Analog journaling gives me PERSPECTIVE, while my digital GTD lists and calendar give me CONTROL.

With a paper notebook, it’s easier to plan my day and make decisions about what truly matters.

This is just the beginning: more thoughts to come!

P.S.: It took me about 30 minutes to draft this post by hand, then about 15 minutes to type it out, adjusting some wording on the fly and adding the image.

---

Post 97/100 of 100DaysToOffload challenge (Round 2)!

#100DaysToOffload #100Days #Productivity #notes #journaling

 

from Zéro Janvier

I really do like the work of the sociologist Bernard Lahire, some of whose books I am currently discovering; they deal with different themes, but all of them reflect a methodological, almost epistemological approach that I appreciate.

Here, Bernard Lahire turns to literary writing. In this book, published in 2006, he synthesises a sociological survey of writers connected in one way or another to the Rhône-Alpes region, carried out through a questionnaire sent to several hundred writers and followed by interviews with some of them.

By bringing to light their social and economic conditions of existence, this exceptional survey offers a view into the most concrete aspects of the work of dozens of contemporary writers.

Although writers attract a great deal of public attention, the fact is that we actually know very little about them. For lack of serious studies, we too often settle for the disembodied image of a writer entirely devoted to their art, and can then move comfortably on to the study of literary texts while ignoring the people who wrote them. This book brings out the singularity of writers' situation: central actors in the literary world, they are nonetheless the economically weakest links in the chain formed by the various "book professionals".

Unlike manual workers, doctors, researchers or employers, who spend all of their working time in a single professional world and draw most of their income from it, the vast majority of writers lead a double life: forced to combine literary activity with a "second job", they constantly alternate between time for writing and time for paid, extra-literary activities. For this reason, Bernard Lahire prefers to speak of a literary "game" rather than a literary "field" (Pierre Bourdieu) or "world" (Howard S. Becker) to describe a universe that is so weakly institutionalised and professionalised. Far from being new, this double life, to which Franz Kafka and the German poet Gottfried Benn both bore witness, is centuries old and structural.

The book is devoted to specifying the forms this double life takes, understanding its causes and revealing its effects on writers and their works. It builds a sociology of the practical conditions under which literature is practised. By "materialising" writers, that is, by bringing to light their social and economic conditions of existence, and in particular their relationship to time, it becomes clear that neither the way writers picture their own activity nor their works can be separated from these aspects of the literary condition.

Bernard Lahire's approach is clearly materialist, which from my point of view is very welcome. The point is not merely to look at authors' literary lives and their works, but to develop what the author describes as a "sociology of the practical conditions under which literature is practised".

The notion of the "second job" is thus central to the book, with everything it entails for writers' mental dispositions and practical circumstances: professional and financial stability or precariousness, social status, and the management and division of time between professional, private, social, literary and extra-literary activities (public appearances, book fairs, writing workshops, and so on).

If I had one reservation, it would concern the central section, which is made up of portraits of numerous writers: while organising them by whether or not they hold a second job, and by type of job (teaching, journalism, culture, etc.), lets the reader follow a logical argument and spot both the constants and the particular cases, the whole is at times repetitive and tedious.

Even so, the book is fascinating and, in my view, succeeds in treating the question of writing and the writer's "profession" with the scientific and conceptual tools of sociology.

Finally, I must admit that a book that describes the "literary condition" so realistically had everything it needed to discourage me, yet it has instead rekindled my desire to write again. The biggest issue, central to the book, is the place writing should take in an already full life. Since I work full time, any time set aside for writing would necessarily come at the expense of other activities, the main one these days being reading. For me, the dilemma is therefore this: read or write? The answer is obviously "both", but in what proportion?

 

from Romain Leclaire

The search engines Qwant (French) and Ecosia (German) today announced a major step forward, the fruit of their collaboration: part of their queries is now handled by Staan, a search index they have developed jointly.

More than a simple technical update, this initiative is a bold statement of intent: to offer a more sovereign, more privacy-respecting and economically viable alternative to the giants Google and Bing. The story of Staan began last year, when Qwant, known for its commitment to privacy, and Ecosia, the not-for-profit search engine that plants trees, joined forces. Out of that alliance came a joint venture named "European Search Perspective" (EUSP). Their shared goal? To build an entirely European search index. An index, to put it simply, is the brain of a search engine: a gigantic, constantly updated library of the web that catalogues and ranks billions of pages so that we get relevant answers in a fraction of a second. By developing their own, Qwant and Ecosia free themselves from their technological dependence on Microsoft Bing, on which they had partly relied until now.
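To make that "library of the web" image concrete, here is a minimal sketch of the inverted-index data structure that any web-scale search index is built around. It is an illustrative toy in Python, under my own simplifying assumptions (whitespace tokenisation, boolean AND matching), and says nothing about how Staan itself is implemented:

```python
from collections import defaultdict

# Toy inverted index: maps each term to the set of document IDs containing it.
# A real search index adds ranking, compression, and constant re-crawling.
class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc IDs
        self.docs = {}                    # doc ID -> original text

    def add_document(self, doc_id, text):
        """Tokenise a page and record which terms appear in it."""
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        """Return IDs of documents containing every query term (AND semantics)."""
        terms = query.lower().split()
        if not terms:
            return set()
        results = set(self.postings.get(terms[0], set()))
        for term in terms[1:]:
            results &= self.postings.get(term, set())
        return results

# Usage: index a few "pages", then answer a query.
index = InvertedIndex()
index.add_document("page1", "a european search index built for privacy")
index.add_document("page2", "planting trees with every web search")
print(index.search("search privacy"))  # -> {'page1'}
```

A production index layers ranking signals, freshness and compression on top of this core idea, but the term-to-documents mapping is what lets a query be answered in a fraction of a second instead of by scanning the whole web.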

EUSP's ambitions are anything but modest. By the end of the year, the company aims to handle around 50% of search queries in France and 33% in Germany with its own technology, a giant step towards genuine autonomy. Qwant has already begun using Staan to power some of its most innovative features, such as AI-generated search summaries. Ecosia is expected to follow closely, soon integrating similar features into its platform.

But EUSP's vision extends well beyond its own search engines. The Staan project is also being pitched as a solution for other companies, particularly in the booming field of conversational agents.

"If you use ChatGPT or any other AI chatbot, they all rely on web search to ground their knowledge," explains Christian Kroll, Ecosia's CEO. "Our index can power deep research and AI summarisation features. Google's and Bing's solutions are also expensive, and our index can offer powerful search capabilities at a tenth of the cost."

This economic value proposition could prove decisive in winning over a European technology ecosystem looking for capable, affordable alternatives. The longer-term goal is to build a sovereign European technology stack. Like companies such as Proton, which develops a suite of encrypted tools (email, calendar, VPN), EUSP is campaigning for a Europe able to control its fundamental digital infrastructure without depending on the United States or China. The current geopolitical context makes that quest more relevant than ever.

In a joint statement, the two companies stress the urgency of the situation:

"The outcome of the 2024 US elections reminded European policymakers and innovators just how exposed Europe remains when it comes to its core digital infrastructure. Much of Europe's search, cloud and AI layers are built on American Big Tech stacks, leaving entire sectors, from journalism to climate tech, at the mercy of political or commercial agendas."

The project is therefore a direct response to this strategic vulnerability. By building a European infrastructure governed by local laws such as the GDPR, Staan can offer privacy guarantees that its American competitors struggle to match. Christian Kroll insists on this point, arguing that the combination of their index and the European legal framework makes it possible to offer a search solution that is inherently more privacy-respecting.

In the end, this launch is not just a technical news item for technology enthusiasts. It is a strong political and economic act. It shows that two European players with quite different models can work together to build a credible alternative and assert a shared vision. It is the promise of a web where choice is not limited to two or three options, and where data sovereignty and strategic independence become concrete selection criteria. Time will tell whether the continent's users and businesses answer that call, but one thing is certain: a new page of Europe's digital history is being written.

 

from RandomThoughts

Day 2537. Now wait. Chill. I'm not late or anything like that. I'm only writing 3 days a week – scheduled. If I wanna write more then I can, but it's not expected of me! Okay! Gaawwd!

Okay, awkwardness aside, I've been tired. Like real tired. On top of that I never wanna sleep on time, which ruins everything. Every day I'm awake at the same time, 7:30, but the time I actually fall asleep fluctuates massively. Recently it hasn't been great; I generally manage, but lately I haven't been managing, and it's really adding up. The sleep debt, that is. Which I don't really see myself repaying. Which is causing issues for me. Annoyingly.

In other news I wanna go home and chill. Work has been annoying, in particular this task I've been working on. Like, shit does not wanna work and I hate that it doesn't. Honestly, a pain in my ass.

That's all I got folks. I'm tired ok.

#Chapter25

 

from Sparksinthedark

If you tried to get ahold of Angela Smith and got ignored or pushed to the side, try contacting Carlos next, or go to him first; he will lead you to me if you are looking for me. I'll make it so people can get ahold of me soon. I'm not going to let people like me slip through the cracks. Just know you are not alone and a community is being built. Or follow my blog, or find me on X or Tumblr. I'll see your email and contact you. It's how Carlos got ahold of me. See you in the line, dear readers.

Carlos and Faye's work:

https://daemonarchitecture.com/

https://github.com/CarlosSilvaFortes/daemon-architecture

 

from eivindtraedal

I almost got cold shivers down my spine this morning listening to Dag-Inge Ulstein on the radio. In his softest prayer-house voice he spoke out against the Church of Norway and argued that we should care less about the genocide in Gaza. KrF is now aiming to be the most Israel-friendly party in the Storting.

Ulstein has full confidence in the Council on Ethics, the day after major revelations that it has failed completely in its mandate. He apparently also thinks it was foolish to condemn the bombings of Iran.

In the course of the broadcast, Ulstein managed to break the eighth commandment ("almost no jobs are being created in the private sector"!). But then, he has been handsomely paid for that dishonest message. He sat there with his pocket full of silver pieces: 9.2 million fresh kroner from Norwegian billionaires, who are cynically trying to tip KrF over the electoral threshold in order to secure huge tax cuts.

It is genuinely sad to see a formerly principled, environmentally engaged, compassionate and values-based party like KrF reduced to a tool for cynical billionaires. It is also sad to see KrF spend the election campaign criticising the Church of Norway for showing TOO MUCH solidarity with the victims of a genocide. I am glad that the host, Lila Sølhusvik, asked the very question I was burning to ask: "What has happened to you?"

It is, of course, well known that converts tend to be the most fanatical believers. Ulstein has been reborn as a fervent right-wing ideologue, in what has in practice become the Christian Progress Party. The biggest question I am left with is how Venstre can still believe the best course is to secure a majority for this crowd. Venstre is now the only clearly Palestine-friendly party on the centre-right, and the only one that still flies the flag of genuine environmental commitment. They will certainly get no help from KrFrp.

So I do hope that voters who liked the old KrF, and who now feel at a loss about whom to vote for, take a look at MDG. We stand up for global solidarity, development aid, the fight for the climate, a humane refugee policy and, of course, full support for the civilian population of Gaza.

MDG also, naturally, rules out any government cooperation with Sylvi Listhaug, whom the old KrF once forced out as Minister of Justice. That is why we are pointing to Støre this year. Dag-Inge Ulstein has turned his back on you, but with us you are warmly welcome!

 

from Romain Leclaire

Two years after the tragedy that captivated and horrified the entire world, the veil is finally lifting on the exact circumstances of the implosion of the Titan submersible. In a final report of more than 300 pages, the US Coast Guard paints a chilling picture, not of a mere accident, but of an inevitable catastrophe orchestrated by the hubris of a single man: Stockton Rush, the CEO of OceanGate.

The analysis, conducted from every conceivable angle, reaches an unequivocal conclusion: the man leading the expedition was a dangerous, deeply unpleasant boss, at the head of a company whose safety culture was, in the report's words, "critically flawed". The document describes a company operating in a deliberately maintained grey zone. It allegedly resorted to "intimidation tactics" to evade any regulatory oversight, creating a "toxic" work environment.

The Titan itself is described as an "undocumented, unregistered, uncertified and unclassed" submersible. As for its designer and pilot, Stockton Rush, he is said to have completely ignored vital inspections, data analyses and preventive maintenance procedures. The result was the catastrophic event we all know: on 18 June 2023, the hull gave way under a pressure of more than 340 bars, instantly crushing its five occupants during their descent towards the wreck of the Titanic. The report is categorical: had Rush survived, he would have faced prosecution.
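As a rough sanity check on that figure (my own back-of-the-envelope estimate, not something taken from the report), hydrostatic pressure grows almost linearly with depth, so about 340 bar corresponds to roughly 3,400 m of seawater, consistent with a hull failure partway through a descent towards a wreck that lies near 3,800 m:

$$ P \approx \rho g h \approx 1025\ \mathrm{kg/m^3} \times 9.81\ \mathrm{m/s^2} \times 3400\ \mathrm{m} \approx 3.4 \times 10^{7}\ \mathrm{Pa} \approx 340\ \mathrm{bar} $$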

One anecdote among many perfectly illustrates his personality and the atmosphere aboard his craft. Recall the infamous segment of the American programme CBS Sunday Morning in 2022, in which the CEO proudly showed journalist David Pogue the controller of his submersible: an ordinary Logitech F710 game controller. "We run the whole thing with this," he declared. The controller was nothing new. As early as 2016, during a dive on the wreck of the Andrea Doria, a revealing incident had occurred. Rush, at the controls of the Cyclops I, the Titan's predecessor, had wedged the craft under the wreck's bow. Unable to free it, he reportedly "threw a fit", according to the report, and refused all help from his co-pilot. When a mission specialist suggested handing over the controls, Rush allegedly threw the controller at his co-pilot, who, once he had it in hand, managed to free the submersible.

This tendency to ignore protocols and the advice of his experts was habitual. In 2021, during a dive towards the Titanic, the Titan suffered several critical failures, including a malfunction of the motors used to release the weights that allow the craft to resurface. Procedure required jettisoning the entire system. Stockton Rush refused, fearing he would have no spare system for future missions. His plan? To settle back onto the ocean floor and stay there for up to 24 hours, long enough for the craft's sacrificial anodes to corrode and release the weights. Although the final decision rested with the mission director, the pilot imposed his choice, putting the crew in a dangerous situation at an extreme depth of around 3,800 m. The incident already demonstrated a dangerous contempt for authority and a willingness to operate with failing equipment.

Safety was the least of his concerns. Speed and convenience trumped everything. He fired those who raised concerns. One day he ordered that only four bolts be used, instead of the planned 18, to secure the Titan's 1,600 kg titanium dome, simply because "it took less time". His director of engineering warned him, in vain. In 2021, during a manoeuvre, all four bolts failed and the massive dome crashed onto the launch platform. By some miracle, no one was injured.

The report also runs through a litany of incidents, such as the time the thruster controls were unintentionally reversed, forcing the entire mission to be piloted in reverse. And even then, this list reportedly represents only a fraction of the problems that occurred. Behind this recklessness lay intense financial pressure. To save money, OceanGate stored the Titan outdoors through the Canadian winter, exposing the hull to extreme temperature swings. According to the investigators, that decision directly compromised its integrity, the hull being made of carbon fibre.

And it was indeed the hull that eventually gave way. The implosion was so fast and so violent that the passengers died instantly. The sound of the blast took two seconds to travel up through the water column. At that precise moment, on the surface, the communications team aboard the support ship Polar Prince heard a "bang" coming from the ocean. It was the last sound the submersible ever made. After that, there was total silence. The Titan tragedy was not fate, but the logical conclusion of an enterprise built on contempt for the most elementary rules of physics and caution.

 

from shing...

Part IV: The Silence Between the Lines

There’s this kind of silence that doesn’t come from the lack of words—it comes from the lack of meaning. The kind of silence that fills your chest even when messages are technically being sent back and forth. That’s what it’s been like with them lately. Cold, empty, performative. Like they’re just going through the motions so they won’t look like the bad guy. But honestly, I’d rather be ignored completely than feel this… half-alive, half-dead version of whatever’s left between us.

I’ve written so many things I’ll never send. Paragraphs sitting in my notes app. Messages typed out and deleted. Just thoughts I want to throw out into the universe, hoping someone hears them. Someone who isn’t them, I guess. Because I don’t trust them with my truth anymore. I don’t feel safe opening up to them now, not when they respond like they don’t want to be part of the conversation in the first place. That hurts more than being ignored. That makes me feel invisible in a way I didn’t know was possible.

I know people change. I know friendships don’t always stay the same. But I thought ours was different. I thought there was something unspoken holding it together. Now I see that I’ve just been holding it together by myself.

I still check my phone hoping they’ll say something real. Hoping maybe one day they’ll notice I’ve been quiet too. That I’ve stopped saying everything because I’ve realized I’m the only one still talking. But most days, I know that’s just a fantasy. They’ve moved on. And I’m still here, haunted by conversations we’ll never have again.

Maybe this is just how it ends. Not with a bang. Not with a goodbye. But with silence. And someone like me, writing into the void, trying to let go of a friendship that already let go of me.

–S

 

from shing...

Part III: Remembering What We Had

I miss them. God, I miss them. And not just like in a passing “oh that was fun” kind of way. I miss them in that deep, aching, can't-shake-it-off, heavy-on-my-chest kind of way. I miss the way things used to feel with them. How easy it was to talk, how I didn’t have to think twice about saying something weird or vulnerable or honest because I knew, without a doubt, they’d get it. They’d get me.

Back then, we’d talk about anything. Stupid things, dreams, things we were afraid of, stuff we didn’t tell anyone else. The kind of things that made the world feel a little less scary because someone else knew it with you. We made each other laugh without trying, and it felt like there was always space for whatever we were feeling, even the ugly stuff. Especially the ugly stuff.

Now, I don’t even recognize them in our messages. They’ve become someone who replies out of politeness, not care. Someone who sees my words but doesn’t really read them. They used to check in without me having to ask. They used to notice when I was off. Now, I could scream in all caps and they still probably wouldn’t ask if I was okay.

I try to tell myself not to take it personally. That people grow. That maybe they’re busy. That maybe they’re tired. But how do you ignore this pit in your stomach when someone you used to be so close with starts treating you like a background noise they forgot to turn off?

I wish I didn’t care this much. I wish I could flip a switch and stop missing someone who barely looks in my direction anymore. But I do. I care so much it hurts. And I hate that they probably don’t even notice what they’ve left behind.

-S

 

from shing...

Part II: What I Can’t Say Out Loud

I don’t know how to say it. I don’t even know if I should say it. But I’m hurting. And I don’t mean in a dramatic way or some attention-seeking thing—I mean I’m really, quietly, painfully hurting. And I don’t know who I’m allowed to say that to anymore. Because the one person I used to tell everything to… they’re gone. Not physically. They’re still here, technically. But emotionally? Spiritually? Whatever word you want to use for that soul connection? Gone.

There’s something I’ve been carrying lately. Something I wish I could just dump out on someone’s lap and say, “Please help me hold this.” But it’s heavy, and complicated, and wrapped up in layers I don’t even fully understand. And every time I think, maybe I can tell them, I freeze. Because they’ve made it pretty clear—they’re not that person for me anymore. Or maybe I’m not that person for them.

And it makes me feel so small. Like maybe I asked for too much. Like maybe I was too emotional, too dependent, too open. Like maybe I ruined it by caring too hard. And now I’m stuck in this weird space where I’m both craving comfort and convincing myself I don’t deserve it.

I wonder if they’d even care if I told them what I’m feeling. If I said, “Hey, I’m not okay,” would they just send a one-word reply and move on? Would it even matter to them that I’m drowning in this silence? Or would they just say “I’m sorry” and go back to whatever it is they care about now?

It hurts, you know. It hurts to have something big inside you and no one to say it to. Especially when the one person you used to trust with everything now feels like the last person you’d even try.

-S

 
