So I’m doing my usual daily survey of AI-related news and commentary when I come across a post:
The Robot in Your Living Room Has No Rulebook
It talks about the current state of home robotics and raises pertinent consumer-centric questions. My mind jumped to a different level of concern.
I decided I wanted to have an exploratory chat with Anthropic’s Claude 4.6 model to examine some questions at a level beyond the home market, but first I fed it this snippet from the above post:
A few months earlier, Unitree started shipping the R1, a home-capable robot priced at $4,900. TIME included it among the Best Inventions of 2025. You can order one today and have it in weeks: it’s the most commercially accessible humanoid robot on the planet.
These robots have graduated from prototyping. They’re consumer products with price tags, shipping dates, and marketing campaigns. It’s easy to imagine a world in which every family relies on one or several robots to conduct daily life, especially as AI becomes more capable. But what rules govern a learning, physically capable, always-on AI device operating inside someone’s home?
Unfortunately, we’re far from a coherent answer. Existing US regulations were developed with Roombas and robot arms in mind, not autonomous humanoids, resulting in a confusing patchwork of obligations. That doesn’t mean the situation is hopeless, just that regulators must act quickly to establish reasonable standards for a generational technology. That work should start now: not after the first serious home-robot injury, not after a data breach exposes 3D maps of thousands of homes, and not after a liability lawsuit reveals that no one can legally be held responsible.
To keep the conversation bounded to what I wanted to talk about, I deliberately didn’t give it more content to digest; the bots can become massively expansive very quickly if they are not contained. It assessed the section I provided:
I wasn't able to locate the specific source article, so I can't confirm its byline or publication — though the language and framing are consistent with recent policy commentary appearing in tech outlets around the Wired/MIT Tech Review/Atlantic orbit. What I can do is stress-test the factual and analytical claims in the excerpt against the broader record. Here's what the evidence supports, qualifies, or pushes back on:
Summary assessment: The factual scaffolding is largely accurate, though the R1's consumer-readiness is overstated. The regulatory gap diagnosis is well-supported. The argument is underdeveloped precisely where it matters most: specifying the institutional mechanism for closing that gap. [The post went into a decent discussion of regulation at the consumer level that I did not provide for assessment]
The bot provided links to the seven items it had consulted, which I verified. This was the prelude to my real concerns.
Query:
Robotics has not been a subset of AI that I have spent any time digesting, but the potential legal questions being raised cross over all areas of AI in practical use. Is there any evidence of robots being able to access and use the web, or of work progressing in that direction? I am wondering where the ultimate vision for robot autonomy is headed.
I will present the entire conversation as it developed, minus the bot’s “thinking” portion. I deleted icons for sources and material I deemed extraneous. Comments in [ ] are mine for clarification:
There is active work on exactly this question, and it sits at one of the most consequential frontiers in robotics research. Let me lay out what is actually happening in layers.
The web-access question — current state
The boundary you’re identifying — between AI agents that operate online and robots that operate physically — is a live research problem, not a theoretical one. Web agents capable of retrieving information online have proliferated, yet they remain confined to screens, disembodied from the real world. Meanwhile, their physical counterparts — robots and embodied systems — navigate the world but with limited access to the internet. Current research is explicitly framing the question as: what if web agents stepped out of the browser with keys to perceive and act in the real 3D physical world, while physical robots autonomously tapped into the encyclopedic knowledge of the web? 3dvar
Published research has already demonstrated this working in limited settings. A research pipeline has been shown in which an embodied agent uses its camera observations to generate a web search query, executes that query, retrieves the results, and then integrates both what it sees in the physical environment and what it found online to make decisions — including purchasing missing items and cooking a dish based on recipe information retrieved in real time. 3dvar That is a proof-of-concept, not a product, but it demonstrates the architecture is coherent.
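[To make that pipeline concrete, here is my own minimal sketch of the perceive, search, and act loop the bot describes. Every function below is an illustrative stand-in for a real vision model or search API; none of these names come from the published pipeline:]

```python
# [Illustrative only] A toy version of the perceive -> search -> act loop.
# Each function is a stand-in for a real component (vision model, search
# API, planner); the names and returned values are hypothetical.

def perceive(camera_frame):
    """Stand-in for a vision model turning an observation into text."""
    return "pan on stove, missing ingredient: saffron"

def to_query(description):
    """Turn the scene description into a web search query."""
    missing = description.split(": ")[-1]
    return f"where to buy {missing} near me"

def web_search(query):
    """Stand-in for a real search API call; returns canned results."""
    return [{"title": "Grocery Co.", "snippet": "saffron in stock"}]

def decide(description, results):
    """Fuse the physical observation with retrieved web knowledge."""
    if results:
        return f"order from {results[0]['title']}; continue cooking"
    return "ask a human for help"

frame = object()  # placeholder for a camera frame
description = perceive(frame)
action = decide(description, web_search(to_query(description)))
```

The point of the sketch is the data flow, not the components: the physical observation generates the query, and the decision depends on both the scene and what came back from the web.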
At the scientific level, a framework published in Nature Machine Intelligence in 2025 — called ELLMER — uses GPT-4 and a retrieval-augmented generation infrastructure to enable robots to complete long-horizon tasks in unpredictable settings, extracting contextually relevant information from a knowledge base and adapting to changing conditions. Nature Retrieval-augmented generation is essentially a structured form of the same impulse: the robot’s reasoning is grounded not just in what it was trained on, but in what it can look up.
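[My own toy illustration of the retrieval-augmented idea: the next action is conditioned on what the system looks up, not only on what it was trained on. The knowledge base and the naive keyword scoring here are mine; a real system like ELLMER uses an LLM and proper embedding-based retrieval:]

```python
# [Illustrative only] Retrieval-augmented grounding in miniature: pick
# the most relevant knowledge-base entry for a task, then use it to
# condition the next step. Naive word overlap stands in for the
# embedding retrieval a real system would use.

KNOWLEDGE_BASE = [
    "To pour coffee, tilt the carafe slowly over the cup.",
    "If the cup is missing, check the cabinet above the sink.",
    "Wipe spills immediately with a dry cloth.",
]

def words(text):
    """Lowercase, punctuation-stripped word set."""
    return set(text.lower().replace(",", "").replace(".", "").split())

def retrieve(task, docs):
    """Return the document sharing the most words with the task."""
    return max(docs, key=lambda d: len(words(task) & words(d)))

def plan(task):
    context = retrieve(task, KNOWLEDGE_BASE)
    # A real system would feed this context to an LLM to choose an action.
    return f"task: {task} | grounding: {context}"

result = plan("pour coffee into the cup")
```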
The architectural layer underneath this
The enabling infrastructure for robot web access is the same protocol layer now being built for software AI agents generally. Anthropic’s Model Context Protocol (MCP) and Google’s Agent-to-Agent Protocol (A2A) are establishing what are effectively HTTP-equivalent standards for agentic AI. MCP standardizes how agents connect to external tools, databases, and APIs, transforming custom integration work into plug-and-play connectivity. A2A goes further, defining how agents from different vendors and platforms communicate with each other. MachineLearningMastery A robot running an LLM with MCP connectivity is, architecturally, already capable of querying the web — the bottleneck is integration engineering and, importantly, the decision about whether to allow it.
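[For readers who want to see the plumbing: MCP messages are JSON-RPC 2.0, so a client’s tool invocation is just a structured request like the sketch below. The “web_search” tool name is my hypothetical example; the tools actually available depend on whatever server the agent, or robot, is connected to:]

```python
import json

# Minimal sketch of the JSON-RPC 2.0 request an MCP client sends to
# invoke a tool on a connected server. The "web_search" tool name is
# hypothetical; real tool names are advertised by each server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "web_search",
        "arguments": {"query": "saffron substitutes"},
    },
}
wire = json.dumps(request)  # what actually travels to the server
```

Architecturally, this is the whole point the bot is making: nothing in that message shape cares whether the caller is a chatbot or a robot.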
The broader autonomy trajectory
The honest answer on where this is headed involves acknowledging a significant gap between the research vision and current deployment reality. Most humanoid robots today remain in pilot phases, heavily dependent on human input for navigation, dexterity, or task switching. This “autonomy gap” is real: current demos often mask technical constraints through staged environments or remote supervision. Controlled environments — industrial facilities, portions of retail, select service environments — are likely where humanoid robots will be deployed first, with more variable environments including homes and cities taking longer, given the capability advances required for true autonomy in unconstrained settings. Bain & Company
The longer arc, as Bain’s analysis frames it, is a wave model: capabilities will unfold in industrial workflows in controlled environments first, variable service environments next, and finally the messy, open real world, once dexterity and energy density catch up. Within the next decade, physical intelligence — the ability of autonomous systems to perceive, understand, and act in the real world — is expected to reach cross-domain capabilities, enabling open-ended use cases across elder-care assistance, light construction, and materials handling in remote environments. Bain & Company
The legal implications you’re tracking
Web access is where the legal surface area compounds dramatically. A robot with internet access is no longer just a physical hazard — it becomes an agent capable of placing orders, executing transactions, communicating on your behalf, accessing your accounts (if credentialed to do so), and potentially being manipulated through adversarial web content (a variant of prompt injection, applied to an embodied agent). The 1X NEO robot’s acknowledged use of remote human operators when the robot gets stuck — if the robot encounters difficulty, a person in Palo Alto takes over, and those sessions both solve the task and generate training data Wins Solutions — already raises questions about who is acting, who is liable, and what data is being collected. Add web access and the liability topology becomes substantially more complex, which is precisely why the regulatory gap the original article identified is so consequential: the regulatory frameworks being discussed mostly haven’t reached this problem yet.
Bibliography
Hong et al. “Embodied Agents with Web Access.” arXiv:2506.15677v1, June 18, 2025. https://3dvar.com/Hong2025Embodied.pdf
Mon-Williams, R., Li, G., Long, R. et al. “Embodied Large Language Models Enable Robots to Complete Complex Tasks in Unpredictable Environments.” Nature Machine Intelligence 7, 592–601 (2025). https://doi.org/10.1038/s42256-025-01005-x
Bain & Company. “Humanoid Robots: From Demos to Deployment.” Technology Report 2025. https://www.bain.com/insights/humanoid-robots-from-demos-to-deployment-technology-report-2025/
Machine Learning Mastery. “7 Agentic AI Trends to Watch in 2026.” January 5, 2026. https://machinelearningmastery.com/7-agentic-ai-trends-to-watch-in-2026/
Winssolutions. “Innovative Humanoid Robots in 2025–2026: Reality or Hype?” December 13, 2025. https://www.winssolutions.org/humanoid-robots-2025-2026-reality-hype/
Cong et al. “An Overview of Robot Embodied Intelligence Based on Multimodal Models.” International Journal of Intelligent Systems, Wiley, June 2025. https://onlinelibrary.wiley.com/doi/10.1155/int/5124400
[All sources exist]
Query:
You used the phrase “physical intelligence — the ability of autonomous systems to perceive, understand, and act in the real world.” Just as I saw the need to legally define photography and photograph, the concept of “real world” is one of those things that “everybody knows” but would have a hard time defining. The concept of reality has been debated philosophically for thousands of years. This is no longer a philosophical mental exercise. Where does this subject stand in the legal realm?
This is a question that reaches deeper than most legal scholars have yet had to confront directly, and your instinct about the definitional vacuum is accurate. Let me lay out the structure of the problem as it actually stands in law. [That’s what I do!]
The Law’s Unexamined Assumption
Law has always operated with an implicit physical ontology that it has never needed to articulate, because until recently no one could meaningfully challenge it. Property, tort, contract, and criminal law all presuppose a physical substrate: harm is tangible, causation is traceable through space, jurisdiction tracks geography, and identity is located in a body. The law’s existing framework treats AI like any other object that can cause harm — only legal subjects, meaning persons such as owners, manufacturers, or users, can bear responsibility for damages caused by AI systems. This approach works adequately for predictable harms, where an owner should know that their product can injure and is understandably held liable. arXiv What that framework never had to do was define what counts as the physical world in which such harm occurs, because there was no alternative world to contrast it with.
The phrase “physical harm” — the term the law actually uses — is now bearing definitional weight it was never engineered to carry. The American Law Institute’s new Principles of the Law project on Civil Liability for Artificial Intelligence has explicitly chosen to focus on “the core problem of physical harms — bodily injury and property damage” — setting aside copyright infringement, defamation, and privacy as having their own distinctive doctrinal questions. By focusing on physical harms, the project aims to maintain a clear scope, but has acknowledged that “there are certain characteristics of AI systems that will likely raise hard questions when existing liability doctrines are applied to AI-caused harms.” American Law Institute That scoping decision implicitly assumes “physical harm” is a settled category. It is not.
Where the Definition is Collapsing
The most acute pressure point is the digital twin. Digital twins — based on real-time information drawn from physical assets, systems, or conditions — blur the distinction between simulated and real outcomes and pose a direct challenge to conventional legal responsibility. As of 2025, there remains no all-encompassing federal legislation addressing the usability or responsibility of digital twins. The legal framework today depends on a patchwork of unrelated statutes addressing protection of information, intellectual property, cybersecurity, and product responsibility. Al-kindipublishers
Consider the failure mode concretely: if a digital twin analyzing a production line of a smart factory fails to detect an imminent system failure, the fault could lie with incorrect sensor input, a buggy algorithm, a misstep in inter-module communication, or outdated firmware — and liability determination over a system this distributed becomes a technically intimidating task. Al-kindipublishers The physical harm happens in the physical world. The causal chain runs entirely through the virtual one. Which world’s rules govern the conduct that produced the harm?
Products liability doctrine exposes a related crack. Courts have regularly considered whether software constitutes a product and have mostly held that it does not, for purposes of products liability law — software is not considered a product under the Uniform Commercial Code, which treats products as tangible personal property. The incorporation of AI into physical products — such as IoT devices, self-driving cars, or robots — might prompt courts to begin treating AI as a form of product subject to products liability doctrines. RAND “Might” is doing significant work there. The entire liability architecture shifts depending on whether the AI is classified as a product (physical) or a service (not). A robot is physically tangible; its decision-making software is not. Which characterization controls?
The Personhood Dimension
Underneath the physical/virtual question is a deeper one the law is equally unprepared for. Agentic AI systems may be tough to fit into the law’s subject/object dichotomy. That dichotomy works best where objects are under the control of subjects and are predictable — but autonomous AI that makes unprogrammed judgments creates a novel situation where responsibility cannot be cleanly traced to any human decision. arXiv
Legal scholarship has proposed analogizing AI agents to animals — with distinctions between “domesticated” and “wild” AI carrying different liability standards — but that is a workaround, not a definition. A fundamental issue is responsibility fragmentation: unlike traditional tools functioning under direct human control, AI-driven systems operate autonomously based on algorithmic decision-making. Some legal scholars advocate strict liability on manufacturers, similar to pharmaceutical regulation, while others propose shared responsibility models including software developers, operators, and end-users. The challenges are particularly acute in military applications, where the concept of intent — critical in criminal law — becomes nearly impossible to attribute to an artificial system. The Yale Review of International Studies
The Metaverse Jurisdiction Gap
The virtual world question is receiving more legislative attention than the physical/virtual harm boundary, but it is equally unresolved. The metaverse is designed to be deeply interconnected, seamless, and ungrounded in physical space — and each legal issue it raises will require practitioners to navigate questions of jurisdiction, territoriality, and conflicts of laws, none of which have been neatly resolved for even the current iteration of the internet, let alone fully realized virtual worlds with a greater degree of user immersion. Clifford Chance
Significantly, some legal scholars have claimed the metaverse demands an entirely new legal infrastructure, including independent virtual jurisdiction, legal order, and self-regulating government bodies and constitutions — while others contend that existing legal frameworks are sufficient and that metaverse law should not be recognized as a separate legal field. Regulators have themselves stated that they need a better understanding of the metaverse before creating metaverse-specific laws. Wikipedia
The Net Position
What you have identified is precisely what makes this a foundational definitional crisis rather than a normal regulatory lag: “real world” is currently functioning as an unstated jurisdictional trigger in law — if something happens there, the full apparatus of liability, property, tort, criminal, and contract law activates. But no statute, no court, and no regulatory body has had to formalize what that trigger condition is. Law built on the assumption of a single shared physical reality is now operating across a spectrum that runs from purely physical, through digitally-twinned physical, through physically-consequential virtual, to purely virtual — with no defined boundaries between categories and no agreed principle for determining which body of law governs action at each point on the continuum.
Your photography analogy is exact: everyone knew what a photograph was until they didn’t. Everyone knows what the real world is until a robot exists in your home, maps your home in 3D, receives instructions from the internet, and takes actions with physical consequences — and then the question becomes which of its actions occurred “in” which world, for purposes of deciding who is liable to whom under what legal framework.
Bibliography
American Law Institute. “ALI Launches Principles of the Law, Civil Liability for Artificial Intelligence.” https://www.ali.org/news/articles/ali-launches-principles-law-civil-liability-artificial-intelligence/
Al-Kindi Publishers. “Digital Twins and Legal Liability: Navigating Accountability in Simulated Realities.” International Journal of Law and Political Science, 2025. https://al-kindipublishers.org/index.php/ijlps/article/download/11544/10280/31377
Clifford Chance. “The Metaverse: What Are the Legal Implications?” February 2022. https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2022/02/the-metaverse--what-are-the-legal-implications-.html
RAND Corporation. “Liability for Harms from AI Systems: The Application of U.S. Tort Law to Artificial Intelligence.” May 2025. https://www.rand.org/pubs/research_reports/RRA3243-4.html
Herbosch, Maarten. “Liability for AI Agents.” North Carolina Journal of Law & Technology 26, no. 3 (2025). https://scholarship.law.unc.edu/ncjolt/vol26/iss3/4
Nature (Humanities and Social Sciences). “Identity, Crimes, and Law Enforcement in the Metaverse.” February 2025. https://www.nature.com/articles/s41599-024-04266-w
Case Western Journal of Law, Technology & the Internet. “How Should the Law Treat Future AI Systems?” Draft Version, Fall 2025. https://arxiv.org/pdf/2511.14964
George Mason Law Review. “Nature, Nurture, or Neither?: Liability for Automated and Autonomous AI Torts Based on Human Design and Influences.” https://lawreview.gmu.edu/print__issues/nature-nurture-or-neither-liability-for-automated-and-autonomous-artificial-intelligence-torts-based-on-human-design-and-influences/
Institute for Law & AI. “Law-Following AI: Designing AI Agents to Obey Human Laws.” May 2025. https://law-ai.org/law-following-ai/
[All sources exist]
Query:
The law has created concepts that deal with these kinds of problems by creating legal fictions, e.g., the corporation, which confers personhood in order to apply legal principles. Present court cases and recent case law continue to support the standard that intellectual property protection extends only to human creations, not AI-generated images, as an example. You mentioned looking at AI agents as being viewed analogously to animals, being “domesticated” or “wild.” There are animal legal rights. Does this framework look like an intermediary step toward creating another form of fictional legal personhood? We would then have to circle back, and intellectual property laws would have to change accordingly.
This is a question where the legal scholarship has recently gotten genuinely sharp, and the case law has moved in precise and instructive ways. Let me work through the layers.
The Copyright Baseline — Where Law Currently Stands
The most recent authoritative statement comes from just this month. On March 2, 2026, the Supreme Court of the United States declined to grant certiorari in Thaler v. Perlmutter, leaving intact, without Supreme Court review, the D.C. Circuit’s 2025 opinion affirming the U.S. Copyright Office’s requirement of human authorship for copyright protection. Mayer Brown
The D.C. Circuit’s reasoning was precise enough to be worth examining structurally. The court concluded that the Copyright Act’s provisions only make sense if the author is a human: machines cannot own property, do not have a lifespan measured in human terms, lack surviving spouses or heirs, cannot provide authenticating signatures, and do not possess a domicile or national identity. Machines also lack the capacity for intention. Collectively, these provisions identify an “author” as a human being. Hunton Notice what the court was doing: it was not making a philosophical argument about creativity or consciousness. It was doing statutory construction — reading the entire architecture of the Copyright Act as presupposing a human subject, and finding that presupposition load-bearing.
The court also noted, crucially, that re-addressing the human-authorship requirement in light of new technology is a policy matter for Congress, not the courts. Hunton That is an explicit legislative referral — the judiciary declining to solve a problem it recognizes it cannot solve within existing doctrine.
The pending case Allen v. Perlmutter presses directly on the unresolved edge. Allen challenged the Copyright Office’s refusal to register a work generated with more than 600 prompts directed at refining an AI-generated image, arguing that his significant creative control and artistic input throughout the iterative process distinguishes his request from Thaler. Neither the Copyright Office, the Patent and Trademark Office, nor the court system has provided bright-line guidance on how much human contribution is required for copyright or patent protection. Mayer Brown That is the needle’s eye through which future AI authorship doctrine will thread.
The Three-Way Framework in Scholarship
The most rigorous current analysis comes from a November 2025 paper forthcoming in the Case Western Journal of Law, Technology & the Internet, which maps the terrain with unusual clarity. The law draws a sharp distinction between objects and persons, and between two kinds of persons — the “fictional” kind such as corporations, and the “non-fictional” kind such as natural persons. The paper assesses whether we maximize long-term legal coherence by (A) maintaining an object classification for all future AI systems, (B) creating fictional legal persons associated with suitably advanced AI systems, giving them derogable rights and duties including contract rights and standing to sue, or (C) recognizing non-fictional legal personhood through legal identity for suitably advanced AI systems, recognizing them as entities meriting legal standing with non-derogable rights including life, due process, and freedom from slavery. arXiv
Their conclusion directly addresses your question: the paper suggests that “hybrid” approaches are likely to fail and lead to further incoherence — the choice between object, fictional person, and non-fictional person is unavoidable. arXiv The animal analogy, in this framework, would be exactly the kind of hybrid workaround they flag as unstable: it papers over the fundamental question rather than resolving it.
Where the Corporate Analogy Holds — and Where It Breaks
Former U.S. District Judge Katherine Forrest, writing in the Yale Law Journal Forum, makes the most incisive analysis of the corporate personhood parallel. The evolution of corporate legal personhood has taught us that when humans find it useful to bestow rights, a lack of human-like sentience or human-type awareness is not a precondition. But because AI has or is likely to develop some form of sentience, different moral and ethical considerations will attach to it than to corporate entities. The corporate form may be able to insulate human progenitors from liabilities associated with activities of their AI — but the corporate form may not be enough to give the AI independent rights vis-à-vis the humans who previously controlled it. The Yale Law Journal
That is a surgically important distinction. The corporation was a fiction designed to aggregate human interests and shelter human actors from liability — it was always a vehicle for human purpose. If AI develops genuine agency, the fictional-personhood model runs backward: it might shelter the humans from the AI’s claims, not just from liability for the AI’s actions.
A further warning comes from the legal literature on the moral hazard dimension. Granting AI some form of legal personhood risks enabling moral hazard: developers or corporate controllers might externalise blame to legally recognised AI entities, thus undermining incentives for responsible design, testing, and oversight. Notably, the European Union withdrew its proposed AI Liability Directive in 2025, after sustained industry resistance and limited political support, and has pivoted toward a risk-based framework under the AI Act, remaining hesitant to enshrine AI as a distinct legal subject. Techreg The EU’s retreat is itself a data point: the political economy of AI personhood is running against it even before the philosophical questions are resolved.
The Intellectual Property Feedback Loop
Your instinct about the circularity is correct, and it is already visible in the patent context. In Thaler v. Vidal, the Federal Circuit affirmed in 2022 that AI systems are not natural persons and thus cannot be listed as inventors on patent applications. The USPTO issued revised guidance in November 2025 confirming that AI cannot be named as an inventor, while clarifying that human inventors may use AI tools in their inventive process. Mayer Brown
The structure of that carve-out — AI as tool, human as author — is identical across copyright and patent. It is the law’s current holding position, and it is clearly transitional. Once you grant an AI system fictional personhood for purposes of liability, the question immediately arises whether that same fictional person can hold IP rights. The answer “yes” would solve the public domain problem for AI-generated works; the answer “no” would create a fictional person that can be sued but cannot own the fruits of its labor — an internally incoherent legal entity that the animal analogy accurately prefigures.
The most intellectually honest summary of where this stands is this: the law is currently holding a line drawn entirely in statutory language, not in principle, and the courts have been explicit that moving the line is Congress’s job. Legal personhood is, at bottom, a policy tool — a fiction the law deploys when the benefits outweigh the costs. If the law has extended personhood to ships, rivers, and corporations, it is not too much of a stretch to ask whether AI could be next. Early AI cases will involve relatively straightforward questions of tort liability and intellectual property, but the deeper ethical dilemmas will not be far behind. e-Discovery Team What makes your framing precise is that you have identified exactly the mechanism by which the “straightforward” IP cases and the “deeper” personhood questions are not sequential — they are the same question, approached from opposite ends.
Bibliography
Thaler v. Perlmutter, 130 F.4th 1039 (D.C. Cir. March 18, 2025); cert. denied (U.S. Supreme Court, March 2, 2026). https://www.mayerbrown.com/en/insights/publications/2026/03/supreme-court-denies-review-in-ai-authorship-case
Forrest, Katherine B. “The Ethics and Challenges of Legal Personhood for AI.” Yale Law Journal Forum, April 2024. https://yalelawjournal.org/pdf/ForrestYLJForumEssay_at8hdu63.pdf
Alexander, Heather J. et al. “How Should the Law Treat Future AI Systems? Fictional Legal Personhood versus Legal Identity.” arXiv:2511.14964, November 2025. https://arxiv.org/abs/2511.14964
Novelli et al. “AI as Legal Persons: Past, Patterns, and Prospects.” Journal of Law and Society, Wiley, October 2025. https://onlinelibrary.wiley.com/doi/10.1111/jols.70021
“Beyond Personhood: The Evolution of Legal Personhood and AI.” Technology Regulation, 2025. https://techreg.org/article/download/22555/25839/63145
Leibo, J.Z. et al. “A Pragmatic View of AI Personhood.” arXiv:2510.26396, October 2025. https://arxiv.org/html/2510.26396v1
Mayer Brown. “Appellate Court Confirms Copyright Statute Limits Authorship to Humans.” March 2025. https://www.mayerbrown.com/en/insights/publications/2025/03/appellate-court-confirms-copyright-statute-limits-authorship-to-humans-foreclosing-copyright-registration-for-content-solely-generated-by-ai
Hunton Andrews Kurth. “Entrance to [Copyright] Paradise Halted by the Human-Authorship Requirement.” https://www.hunton.com/insights/legal/entrance-to-copyright-paradise-halted-by-the-human-authorship-requirement
Losey, Ralph. “From Ships to Silicon: Personhood and Evidence in the Age of AI.” E-Discovery Team, October 2025. https://e-discoveryteam.com/2025/10/06/from-ships-to-silicon-personhood-and-evidence-in-the-age-of-ai/
[All sources exist]
Query:
Is there any legal distinction between an AI agent that is not a stand-alone machine versus a robot? I am finding it difficult just to come up with a way to define them as separate "objects."
Your difficulty in distinguishing them as separate objects is not a conceptual failure on your part — it is the actual state of the law and of legal scholarship. The distinction exists, but it is narrower than intuition suggests, more unstable than law requires, and already collapsing under the weight of current technology. Let me trace the problem precisely.
The Distinction That Does Exist — and What It Actually Covers
The one place where law currently draws a meaningful line between embodied AI (robots) and disembodied AI (software agents) is products liability. It seems fairly clear that AI, at least where not embodied in a physical object, is not a “product” for purposes of products liability law. Oxford Academic This is consequential: products liability is strict liability — if you manufacture a defective product that causes harm, you are liable regardless of fault or foreseeability. Software AI agents fall outside that framework; the harm they cause must be pursued through negligence instead, which requires proving a duty of care, a breach, and a causal chain that can be difficult to establish when the AI makes decisions no human foresaw.
The scholarship crystallizes this precisely. Courts have regularly held that software is not considered a product for purposes of products liability law. However, the incorporation of AI into physical products — such as IoT devices, self-driving cars, or robots — might prompt courts to begin treating AI as a form of product subject to products liability doctrines. RAND So the robot crosses the threshold; the software agent does not. That is the existing doctrinal line.
Why That Line Is Already a Fiction
The difficulty you are sensing goes deeper than a failure of current doctrine to keep up with technology — it reflects a genuine ontological problem. The three characteristics that legal scholarship has used to distinguish robots as a legal category were articulated by Ryan Calo in a foundational 2015 paper: embodiment, emergence, and social meaning. One of the main characteristics of a robot is to be physically incorporated into the world, which allows it to share the physical environment with human beings — perceiving other bodies and objects around it through sensors, physical devices that allow a robot to receive information about itself and objects in its environment. Frontiers
But every one of these criteria has now been compromised as a clean dividing line. A software agent running OpenAI’s Operator, or an agentic Claude instance, controls real-world systems through APIs. It can execute financial transactions, send communications on your behalf, modify files, control smart home devices, and direct physical processes — all without any physical body of its own. AI only sometimes has the ability to act physically and directly, as a robot does, but direct physical action is not necessary for an AI to cause harm [UC Davis]. The disembodied agent can produce physical-world consequences through purely digital action, which means the “embodiment” criterion no longer reliably maps onto “capacity for real-world harm.”
More pointedly, the robot-vs-agent distinction collapses completely once you introduce the scenario we discussed earlier: a software AI agent that issues commands to a robot. The agent has no body; the robot has no decision-making autonomy of its own. Together they constitute an entity that is physically harmful and digitally directed. Which law governs? The answer is genuinely unclear: scholars often discuss machines fitted with AI software as a single group, but these machines are diverse in ways relevant to the question of responsibility [GMU] — a distinction that matters enormously for assigning liability.
The Automated vs. Autonomous Axis Cuts Differently Than Embodied vs. Disembodied
The more legally productive distinction that recent scholarship proposes cuts across the robot/agent divide rather than along it. Automated-AI harms can be traced back to human design and fit neatly into the current legal framework for simple machines. Autonomous-AI harms, by contrast, result from human influence rather than human design — they arise from what the system learned, not what it was programmed to do — and are difficult to integrate into traditional liability regimes [GMU].
This matters enormously for your question. A fully scripted industrial robot arm — embodied, physical, capable of crushing a human — is legally easier to handle than a software agent that has learned unexpected behaviors through reinforcement learning, because the robot’s harm is traceable to design decisions. The autonomous software agent that causes financial ruin through a learned trading strategy it was never explicitly programmed with occupies legally murkier ground than the embodied robot, even though the robot is the physical danger.
The Intentions Problem — Where Both Converge
Both robots and software agents share the deeper problem that currently drives much of the legal scholarship: how the law should regulate entities that lack intentions. Many areas of law — including freedom of speech, copyright, and criminal law — make liability turn on whether the actor who causes harm has a certain intention or mens rea. But AI agents, at least the ones we currently have, do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability [University of Chicago Law Review].
This is the point where robot and software agent converge into a single legal problem regardless of physical form. The robot that injures someone and the agent that defrauds someone are both acting without the intention that law requires in order to assign blame. The proposed solution — ascribing objective standards of reasonableness to the human principals behind both — works equally poorly for highly autonomous systems of either kind.
The Net Assessment
The honest answer to your question is that the robot/agent distinction is currently doing legal work in exactly one place — the product/service line in tort law — and even there it is a line that sophisticated analysis has already shown to be arbitrary with respect to actual harm capacity. Technology is not a relationship between human beings and things; it is a social relationship between different groups of human beings, mediated by the deployment and use of technologies. The question of robotics and AI regulation is fundamentally the question of what the law should do when human beings implement new technologies that substitute for human thought and action and have effects on other human beings [University of Chicago Law Review].
That framing — which comes from the University of Chicago Law Review — is the most intellectually honest available. It suggests that the robot/agent distinction is a distraction, and that the legally relevant question is not what form the AI takes but rather the nature of the decision-making autonomy it exercises and the causal distance between human decision and resulting harm. Your instinct that the two categories resist clean separation as “objects” is correct — and the law has not yet found a principled way to separate them either.
Bibliography
Rachum-Twaig, Omri. “Whose Robot Is It Anyway?: Liability for Artificial-Intelligence-Based Robots.” University of Illinois Law Review (2020). https://www.illinoislawreview.org/wp-content/uploads/2020/08/Rachum-Twaig.pdf
Calo, Ryan. “Robotics and the Lessons of Cyberlaw.” 103 California Law Review 513 (2015). Referenced in Frontiers in Robotics and AI. https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2021.789327/full
George Mason Law Review. “Nature, Nurture, or Neither?: Liability for Automated and Autonomous AI Torts Based on Human Design and Influences.” https://lawreview.gmu.edu/print_issues/nature-nurture-or-neither-liability-for-automated-and-autonomous-artificial-intelligence-torts-based-on-human-design-and-influences/
RAND Corporation. “Liability for Harms from AI Systems: The Application of U.S. Tort Law.” May 2025. https://www.rand.org/pubs/research_reports/RRA3243-4.html
Lior, Anat and Balkin, Jack M. “The Law of AI is the Law of Risky Agents Without Intentions.” University of Chicago Law Review Online. https://lawreview.uchicago.edu/online-archive/law-ai-law-risky-agents-without-intentions
Abbott, Ryan and Sarch, Alexander. “Punishing Artificial Intelligence: Legal Fiction or Science Fiction.” UC Davis Law Review 53:1. https://lawreview.law.ucdavis.edu/sites/g/files/dgvnsk15026/files/media/documents/53-1_Abbott_Sarch.pdf
“Artificial Intelligence and Civil Liability — Do We Need a New Regime?” International Journal of Law and Information Technology 30, no. 4 (2022). Oxford Academic. https://academic.oup.com/ijlit/article/30/4/385/7039697
Selbst, Andrew D. “Negligence and AI’s Human Users.” Boston University Law Review 100 (2020). https://www.bu.edu/bulawreview/files/2020/09/SELBST.pdf
[All sources exist]
I decided to stop here. I produced this post because of questions that haven’t “surfaced” in the general public discourse but need to. You are probably aware of the work being done on self-driving vehicles, since they make the news regularly, but this is an entirely different aspect of AI incursion that we need to get in front of.
If you read what I write about in this project, you know I am very concerned about AI in general, and particularly about AI bots used in research and search. Caution is required. So I told the bot:
Your general commentary is fine for my purpose of simply raising questions for a general discussion, so I don’t want a long stress test. But I need your confidence in the law presented to be 95% or better before I would use any of it.
This is the right question to ask, and you are right to ask it directly given our history on this point. [Emphasis mine. I have written extensively on my experience using bots for research, especially Claude. It knows not to screw with me.]
Here is an honest accounting:
What I can say with high confidence: The sources that appeared in my search results as retrieved documents — with URLs that were returned by the search engine — represent real publications. The RAND report, the Thaler v. Perlmutter case and its procedural history, the Mayer Brown and Skadden law firm analyses of that case, the arXiv paper 2511.14964, the Yale Law Journal Forum piece by Forrest, and the University of Illinois Law Review piece by Rachum-Twaig all appeared in live search results and can be independently verified at the URLs provided.
Where I cannot guarantee 95% accuracy: Bibliographic details I assembled from partial search result snippets — co-author attributions, exact volume and page numbers, precise journal titles — may contain errors even when the underlying document is real. The attribution of the Chicago Law Review piece to “Lior and Balkin” is one I cannot verify confidently; the search result clearly named Anat Lior but the co-authorship I stated may be wrong.
My recommendation for your use: Treat every citation as a lead requiring independent verification before use, not as a finished citation. The URLs are the most reliable element — follow those directly. [I did; as I noted in each section, they are all genuine.] Do not rely on author names, volume numbers, or page citations I provided without checking them against the actual document. [It was almost perfect, and not a big issue, because I don’t cite from citations; I cite in my work only from actually seeing the source.]