Sensors, ubiquitous connectivity, and artificial intelligence are coming together to create a new generation of robotics. Imagine Siri with eyes, hands and legs (or wheels). Technology is finally leaving cyberspace and coming into contact with the real world. Robots are gaining the ability to (1) sense what is happening around them, (2) plan (i.e. think about) what to do next, and (3) act upon their environment. As this sense-plan-act cycle becomes increasingly sophisticated, robots will begin to exhibit emergent and intelligent behaviors. Regardless of their consciousness (or lack thereof), intelligent robots interacting with a complex world will inevitably need to act autonomously in ways that are neither predictable nor planned by the humans that originally created them.
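For readers who like concrete illustrations, the sense-plan-act cycle can be sketched as a simple program loop. This is a toy sketch only; the `Robot` class and its methods are hypothetical, not any real robotics API.

```python
# Hypothetical sketch of the sense-plan-act cycle described above.
# The Robot class and its methods are illustrative, not a real API.

class Robot:
    def sense(self, world):
        """Gather observations from the environment."""
        return {"obstacle_ahead": world.get("obstacle_ahead", False)}

    def plan(self, observations):
        """Decide what to do next based on observations."""
        return "turn" if observations["obstacle_ahead"] else "forward"

    def act(self, action):
        """Carry out the chosen action on the environment."""
        return f"executing: {action}"


robot = Robot()
world = {"obstacle_ahead": True}
obs = robot.sense(world)
action = robot.plan(obs)
print(robot.act(action))  # -> executing: turn
```

Even in this toy loop, note that the action is chosen by the robot's own planner rather than dictated step-by-step by a human, which is exactly where the legal questions begin.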
In other words, robot activities will increasingly come within the ambit of the law. As we shall see, robot law is neither obscure nor esoteric. On the contrary, the use of robots in our everyday lives will force lawyers to look with fresh eyes at some of the most basic principles of the law. Just as importantly, lawyers need to understand how legal developments can and will affect the development of robot technology.
As an example and a starting point, let’s consider the common law of agency, which is a likely point of embarkation for robot law. The black letter legal definition of agency is a relationship created by contract or by operation of law where one party (the principal) grants authority to another party (the agent) to act on behalf of and under the control of the principal to deal with a third party. The actions of the agent with a third party bind the principal.
At first glance, agency law seems fairly well-suited to deal with human-robot relations. Imagine, for example, that my family purchases a robot nanny to help look after our three children. (If you know where I can acquire such a device, please email me!) This robot is owned by me and is programmed to do my bidding. Imagine further that this robot is integrated with my self-driving car and so can chauffeur my kids to and from school, soccer practice, piano lessons, etc. Is the robot my agent for purposes of dealing with the school, other cars on the road, and the piano teacher? I expect most courts will have little difficulty answering in the affirmative. The robot is my personal property and presumably acts primarily under my orders, subject only to the prime directives embedded in its code.
This conclusion initially sounds both reasonable and helpful. The school, for example, can amend its child release policies to permit children to leave campus under the care of a robot nanny acting on behalf of the child’s parent or guardian. The school is relieved of liability, I get to stay late at the office, and little Johnny makes it to soccer practice on time. Everybody wins. Viewing human-robot relations exclusively through an agency lens, however, brings with it other consequences, some of which may not be so welcome.
For example, if the robot is my agent, then the doctrine of “respondeat superior” is likely to apply. This doctrine first emerged in the English common law in the 17th century to deal with the legal consequences of masters acting via their servants. During the 19th century, the doctrine was extended from servants to employees, and became the basis of vicarious liability on the part of the employer for actions committed by the employee within the scope of employment. This became particularly important in the 20th century, when motor vehicles emerged as a new technology and the law needed to assign liability for accidents involving delivery trucks and all manner of other commercial vehicles. Respondeat superior continues to live on as a means of shifting liability to the party whom society judges as both responsible and better able (via insurance and otherwise) to shoulder liability. In the case of a robot, treating robots as employees may be problematic, but the earlier analogy to servants would seem to fit just fine. Even better, a doctrine like this lets courts “see through” the robot entirely and attach liability to the human owner. Much as courts do in cases involving dogs or other animals, vicarious liability of a human is likely to be much more legally palatable early on than direct liability of a state of consciousness that we don’t really understand.
Many early commentators on robots may be content to end the discussion here. Ignore the robots, find the human, and let sleeping dogs lie. But the innovative urges of our modern economy are not so easily quelled. On the contrary, for every legal action, there is likely to be an equally legal and opposite reaction. For example, the law of agency, with its focus on both control and the existence of the master-servant (employer-employee) relationship, gives rise to the equally important law of independent contractors. Independent contractors are not servants, but autonomous actors who are responsible for the contracted-for service, yet free to determine the means and methods for getting the job done. In the modern context, the law of independent contractors (often referred to as 1099 contractors, after the federal tax forms used to track non-employee compensation in the US) has enabled Uber and a host of on-demand businesses that offer services not through employees, but via an army of freelance individuals.
In the case of robots, an early and arguably overbroad application of “respondeat superior” may tend to discourage humans from owning robots directly. Why take on that liability? As in the case of Uber, other models are possible. For example, we already have a massive system for creating, empowering, and managing non-human legal persons. This system is called the law of corporations and other business entities. In fact, I would submit that most business lawyers in the US today represent few, if any, humans directly. Instead, our clients are overwhelmingly corporations and other non-human entities.
These entities pay taxes, own property, have bank accounts and can freely contract for goods and services in the economy. More importantly, corporations have independent legal existence and can bring claims to the courts to enforce and defend their rights. Corporate law provides an immensely powerful conceptual tool for robots to gain legal rights.
To wit, imagine a newly manufactured robot being contributed to a newly formed Delaware corporation. The corporation could even finance the purchase via a bank loan secured by the robot itself. The corporation has a Tax ID, a bank account, and a credit card. The robot then goes forth into the world and begins providing services, for example as a chauffeur for a business that provides after-school activities for kids. The business pays the robot corporation monthly for its services. The robot corporation then uses that money to rent a vehicle, pay for insurance, and pay for a parking place in a “robot garage” where the robot can go when it’s not working to recharge its batteries and receive any necessary maintenance. At the end of the year, the robot corporation uses a CPA and a tax lawyer to file a tax return on its income. In short, robot corporations would not be mere servants, but legal “freemen” capable of living out their existence free from ownership by, or dependence on, humans.
Although this scenario is eminently possible from a corporate law perspective, it poses serious challenges to the legal system in other respects. If a robot corporation has no human owner, there is no “easy out” vicarious liability theory for courts called upon to evaluate its actions. Imagine a car accident involving the robot chauffeur. Without respondeat superior, courts will need to assign fault between the robot and the other driver. Can a robot commit negligence? If so, what is the standard of care? Will courts have to replace the “reasonable person” standard with a “reasonable robot” standard that takes into account the unique abilities and limitations of artificial intelligences? What if the other driver is a human who becomes upset and draws a gun on the robot chauffeur (and its terrified carload of school kids)? Does a robot have the inherent right to self-defense? May a robot “stand its ground” when defending humans for whom it is contractually responsible?
If a robot exceeds the scope of this right, can it be charged with a crime? Does the robot have the right to due process? If the robot corporation is unable to pay the resulting fine and declares bankruptcy, what happens to the robot? In particular, does a robot have a right to a second chance? One quickly realizes that even basic economic liberties (i.e. the freedom to contract) lead unavoidably to the biggest questions of all. To wit, the American Revolution was fueled by a very powerful concept: “no taxation without representation.” In a world where robot corporations provide our security, mow our lawns, build our houses, take care of pets, children, the sick and the elderly—and pay an increasing share of the taxes—will robots demand the right to participate in the political process (i.e. to vote)? Do free robots have a right to life (i.e. electricity), liberty (i.e. the right not to be imprisoned), or the pursuit of happiness?
The Anglo-American common law is a thousand years old, give or take, and has adapted to social evolution from manor feudalism to Bitcoin. The law will adapt to robots as well, as lawyers grapple with these issues and vigorously represent their clients, both carbon- and silicon-based.
About the Author
Joel Espelien is the founder and principal of Espelien Law PLLC, and writes and reports on the future of media for The Diffusion Group. Follow Joel on Twitter @espelienjb.