Strauss Center News

Updates from the Strauss Center and our affiliated distinguished scholars and fellows


Spring Highlights: Cybersecurity & AI Edition

May 10, 2017 |

This spring, the Strauss Center continues to pioneer an interdisciplinary approach to the study of cybersecurity and artificial intelligence issues. In actual practice, the technical, legal, policy, and business considerations relating to cybersecurity and AI are deeply intertwined, in both the private and public sectors. The Strauss Center’s sponsored courses, activities, and events aim to bridge the gap between the technical aspects of cybersecurity and AI and the hotly contested policy and legal architectures pertaining to these topics.

Find out more about our Integrated Cybersecurity and Artificial Intelligence programs below!


The Strauss Center is thrilled to congratulate University of Texas at Austin students Mohamed Al-Hendy (School of Law), Craig Gertsch (LBJ School), Zeyi Lin (Cockrell School of Engineering and Plan II), and Alex Shahrestani (School of Law) on defeating 38 other teams and reaching the national final in this year’s “Cyber 9/12 Student Challenge,” organized by the Atlantic Council from March 16-18 in Washington, D.C. The Strauss Center was proud to sponsor the team’s participation in the fifth annual competition, in which the UT team competed against groups from across the country to develop policy recommendations responding to a fictional cybersecurity breach.

Read more about their experience here.


Aiming to shape how we approach the pressing national task of building a future workforce prepared for the many challenges of cybersecurity, the Strauss Center is sponsoring the Tech Policy Lab class this spring, led by Cybersecurity Fellow Andrew Woods. The class brings together law students and computer science graduate students for what Professor Woods calls “a deeply rewarding cross-pollination,” and provides an overview of the pressing technology policy questions facing individuals, governments, and Internet firms today.

Find out more about the course here.


In 2017-18, we will be taking our innovative contributions to the UT curriculum much further. The core aspiration of our program is to establish UT as the nation’s premier institution for cybersecurity cross-training, integrating computer science, engineering, law, policy, and business administration. We are moving forward with that vision with a series of new courses open to a variety of graduate students.

The new courses include a foundational course in which law and policy students will learn technological essentials from Strauss Center Cybersecurity Senior Fellow Matt Tait, formerly of GCHQ and Google Project Zero; a parallel foundational course for all graduate students in which Strauss Center Director Bobby Chesney will cover the array of institutional, policy, and legal challenges that cybersecurity raises; and a special graduate seminar in the spring exposing students to an array of currently pending cybersecurity policy controversies.


On March 11, 2017, the Strauss Center hosted its fourth SXSW Interactive session, this time exploring the ethical considerations that technologists need to prioritize in the design and development of autonomous and intelligent systems. The panel discussion featured:

  • John C. Havens, Executive Director, IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
  • Konstantinos Karachalios, Managing Director, IEEE Standards Association
  • Derek Jinks, Marrs McLean Professor in Law, University of Texas School of Law; Director, Consortium on Law and Ethics of Artificial Intelligence and Robotics (CLEAR), the Strauss Center’s AI program
  • Kay Firth-Butterfield, Executive Director, AI-Austin, and key faculty member for Strauss Center AI initiatives


On June 5-6, 2017, the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, in partnership with the Strauss Center, will hold its second face-to-face meeting at The University of Texas at Austin to iterate on its document Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems.

The document is designed to help technologists identify key ethical concerns in their work on Artificial Intelligence and Autonomous Systems (AI/AS), and to provide directional, candidate recommendations for addressing those concerns. This event is not open to the public.