Strauss Center News

Updates from the Strauss Center and our affiliated distinguished scholars and fellows

Zeyi Lin Delves Into Law Surrounding Artificial Intelligence

Apr 26, 2017

Zeyi Lin, an Electrical Engineering and Plan II Honors student, is studying the law surrounding lethal autonomous weapons (LAWs) and legal personhood for robots under the guidance of Professor Kay Firth-Butterfield. Here, Zeyi tells us about his work, part of the Strauss Center’s Brumley Undergraduate Scholars program:

Strauss Center: Could you tell us about your mentor, Professor Firth-Butterfield, and the work you’re doing with her?
Zeyi: Professor Firth-Butterfield focuses on the laws and ethics around the development and beneficial use of artificial intelligence (AI). AI can augment our well-being: numerous companies and academic institutions have invested heavily in researching autonomous vehicles, for example. But there are risks as well. Consider the trolley problem in the context of an autonomous vehicle: should it crash into a pedestrian to save the lives of its passengers, or veer off to save the pedestrian at the cost of its passengers’ lives? Professor Firth-Butterfield’s work addresses how industry, government, and society alike can ethically and responsibly design AI systems to mitigate those risks.

This semester I’ve been researching lethal autonomous weapons (LAWs) and legal personhood for robots. In the realm of personal ownership of LAWs outside a military context, there is a balance to strike between responsibly using a technology that can serve dangerous purposes and the need for a nascent technology to remain open to continued innovation. What’s particularly interesting is the intersection between this new technology and Second Amendment rights in the United States: are LAWs considered weapons, or something else? One interpretation of past Supreme Court rulings is that armed robots, lethal or not, may eventually be protected under the “common use” standard, on the grounds that they will be more effective, more accurate, and safer. On the other hand, advocates against LAWs on the battlefield, and supporters of weapons safety in general, would contend that these potential benefits are far outweighed by the legal, geopolitical, and military risks that LAWs pose.

Closely connected to this is the question of how far robots should be held responsible for their actions, whether in the context of LAWs or otherwise. Robots are already making decisions as toys, pets, and even personal-care aides, serving in different situations as companions and protectors, so the discussion of “robot rights” is ever more relevant. The concept of “legal personhood” has already been extended to corporations, allowing them to own property, enter into and enforce contracts, and make political expenditures under the First Amendment, while also allowing them to be sued and held liable under civil and criminal law. Scholars are now discussing whether a similar kind of “personhood,” carrying similar responsibilities, could be offered to robots.

I was surprised to learn that there has not been much policy movement in this area in the United States. In the European Union, however, the European Parliament’s Committee on Legal Affairs is actively debating a new guiding legal and ethical framework for robots, current and future, that would form a basis for robot responsibility. There seems to be a solid level of technical understanding undergirding that debate, and I hope it carries over into the broader worldwide discussion of artificial intelligence and robotics policy.

Zeyi was also part of the Strauss Center’s Cybersecurity Team, which won second place at the Atlantic Council’s Cyber 9/12 Student Challenge in Washington, D.C. At the challenge, teams make policy recommendations in response to a simulated large-scale cyberattack.

SC: What was the competition like?
ZL: The challenge consisted of a policymaking simulation set in the late summer of 2018, complete with sample intelligence reports and documents. Our scenario was a cyberattack on a major U.S. financial institution, most likely originating from China. The scenario presented the issues of ascertaining attribution of the attack; determining the best course of action given certain legal frameworks, technical limitations, and diplomatic considerations; and drafting recommendations and briefing policymakers and cybersecurity professionals as part of our preliminary round.

We recommended diplomacy over a more aggressive approach: there are very few global cybersecurity standards, and the consequences of escalation with a country like China would simply be too great.

As we advanced to the semi-final and final rounds, we were given progressively less time to prepare our responses, to better simulate a real policy situation requiring fast decision-making. The team did its best to maintain thematic policy consistency throughout each round, and I think the combination of legal, policy, and technical expertise on our team helped us collaborate on policy recommendations that were both novel and feasible. It felt great to place second and represent UT at the competition. We wouldn’t have found so much success without Professor Robert Chesney’s coaching, Professor Derek Jinks’ suggestions on approaches to technology law, and Professor Paul Pope’s advice on briefing intelligence professionals. I really want to thank them for their time. This was the first year UT has competed in the 9/12 challenge, so I hope future teams find even greater success.

SC: What’s in the future for you?
ZL: I look forward to the one year I have left at UT. I’m currently planning my thesis on cybersecurity in foreign policy and international relations, and I hope to attend graduate school to study technology and public policy, either immediately after graduation or after working for a few years.

SC: We can say with confidence that whichever school Zeyi attends will be lucky to have him. Enjoy your last year, Zeyi, and thank you!
