Personal Systems: System Agents

In 2025, Anthropic launched Claude Code. It was a resounding success among developers experimenting with AI coding assistants, seeing rapid adoption: six months after becoming publicly available, it reached $1 billion in run-rate revenue. In the same year, it spawned similar text-based user interfaces (TUIs) and workflows, and many developers now build workflows that orchestrate multiple Claude Code instances.

Claude Code doesn’t just generate code. It uses tools defined through the Model Context Protocol (MCP) to do everything from reading and editing files and running shell commands to accessing external services. It is able to “remember” how to do things, organizing that knowledge through skills and subagents. In the past few months, even non-programmers have begun using Claude Code to work on spreadsheets, pull notes on their computer into discussions, organize files on their computer, search for information across Gmail and Google Drive, and even troubleshoot problems on their computers. While requests to Claude are sent to Anthropic’s servers, Claude Code itself is installed and runs on the user’s own computer.
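
As a sketch of how such tools are wired up, an MCP server can be declared in a project-level configuration file. The example below follows the commonly documented `.mcp.json` shape; the directory path is a placeholder:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/notes"]
    }
  }
}
```

Once registered, the tools the server exposes (here, file access under the listed directory) become available to the agent alongside its built-in ones.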

Agents as computer users

It is likely that many users will eventually accept, even desire, agents as users on their personal computers. How should we think about personal systems that not only accept agents as bolt-on features, but are designed with them in mind?

While the idea of personal computing used to revolve around a single user, this model breaks down once we add agents into the mix. Agents make their own decisions and carry out their own actions, which become confusing when attributed to the user instead of the agent.

Multi-user systems

It can be difficult to trace what an agent did through native computer tools. Claude Code runs as the logged-in user, and its actions are logged by the system as user actions. This is the case with most other coding agents as well. These programs usually provide some tools for understanding what the agent is doing, though it can still be difficult to diagnose problems such as the agent trying, but failing, to edit a particular file.

This happens because the tools available to the agent and to the user are different. While a user may open the file in a text editor and use the cursor and keyboard to make the change, an agent might use a low-level text-based program to find and replace text strings, using commands like <...>. These two modes of editing fail in different ways: the user doesn’t understand why the agent can’t make an edit that they could easily make themselves, and the agent might not understand that its command is failing because the user just made a change that broke the find-and-replace match. The agent doesn’t see user actions until it checks, and the user might not see the agent’s changes if the software does not provide those affordances.
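
The failure mode is easy to reproduce in miniature. The sketch below (the helper name is illustrative, not any real agent's tool) shows an exact-match edit failing because the file changed between the agent reading it and the agent editing it:

```python
# Sketch of why an agent's exact-match edit can fail after a user edit.
# agent_edit is a hypothetical stand-in for a find-and-replace editing tool.

def agent_edit(text: str, old: str, new: str) -> str:
    """Apply a find-and-replace edit the way a text-based agent tool might."""
    if old not in text:
        # The tool reports a failure the user may find baffling: the file
        # looks perfectly editable to them in their own editor.
        raise ValueError(f"edit failed: {old!r} not found")
    return text.replace(old, new, 1)

doc = "total = compute_total(items)\n"

# The agent planned this edit against the version of the file it last read...
planned_old = "compute_total(items)"

# ...but the user renamed the function in their editor in the meantime.
doc = doc.replace("compute_total", "sum_items")

try:
    doc = agent_edit(doc, planned_old, "compute_total(items, tax)")
except ValueError as e:
    print(e)  # the match failed because the file changed underneath the agent
```

The user's cursor-and-keyboard edit has no equivalent failure mode, which is why the two parties struggle to understand each other's difficulties.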

We don’t yet have deep, technical studies of this area for personal computers; the personal computing pioneers generally assumed single-user systems. There is, however, some research into designing joint cognitive systems: systems that aim to create co-agency by enabling all operators to see a common picture of what is going on in the system.

Agent-mediated automations

The traditional paradigms for modelling software actions are:

  • Processes: each program running on the computer registers a name with the operating system and receives a process ID from it. This lets the operating system track the resources allocated to each process: reserved memory, network sockets, open files, etc.
  • Services/daemons: operating systems usually provide a registration system through which long-running background programs can register as services, to be started on user login and restarted upon failure.
  • Users: software is invoked by, and runs as, a user. Specialized users are created for system processes, e.g. printing. File and resource access permissions are tagged to users; software invoked by a user inherits that user’s access permissions.
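
The first and third points are visible from inside any running program: a process can ask the operating system for its own process ID and for the user it runs as. A minimal POSIX-flavoured sketch:

```python
# A minimal look at the process/user model from a running program's point of
# view. Every process gets an OS-assigned process ID and runs as a user, from
# whom it inherits its permissions. (os.getuid is POSIX-only.)
import os

pid = os.getpid()    # process ID assigned by the operating system
ppid = os.getppid()  # the parent process that invoked this program
uid = os.getuid()    # the invoking user; permission checks are made against this
print(pid, ppid, uid)
```

An agent launched by user X reports X's user ID here, which is exactly why its actions are indistinguishable from X's in system logs.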

Advanced operators are beginning to automate some processes on their personal computers through agents. Neither operating systems nor non-technical users have a good way to make sense of this. There is a distinct difference between “user X wishes to run program Y using the Y filename.txt command” and “agent Z wishes to run program Y using the Y filename.txt command in response to user X’s message”, a difference current software systems have not yet caught up to. Many agent programs simply run as user X.

This makes permission management and diagnostic investigation difficult, if not impossible.
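
One way to make the gap concrete is to imagine what an attributed action would need to carry. The record below is purely hypothetical (no current operating system exposes such a structure), and the field names are illustrative:

```python
# Hypothetical sketch: attributing an action to an agent while preserving
# its originating intent. The field names are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionRecord:
    actor: str         # who acted, e.g. "user:x" or "agent:z"
    on_behalf_of: str   # the user whose intent the action serves
    intent: str         # originating intent: user request, workflow, schedule
    command: str        # the concrete program invocation

# "user X wishes to run program Y using the Y filename.txt command"
direct = ActionRecord("user:x", "user:x", "interactive", "Y filename.txt")

# "agent Z wishes to run program Y in response to user X's message"
mediated = ActionRecord("agent:z", "user:x", "user-message", "Y filename.txt")

print(direct.actor, mediated.actor)  # same command, different actors
```

With records like these, a diagnostic tool could separate what the agent did from what the user did, and a permission system could grant the agent narrower rights than its invoking user.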

Open research questions

  • How might agents be modeled so that their actions are associated with them?
  • How might agent actions be associated with an originating intent, e.g. a user request, programmed workflow, or scheduled task?