In a world filling with artificial intelligence, how will our laws and society adapt?


Photo by Stephen LaPorte, CC BY 2.0.

Artificial intelligence already affects our day-to-day lives across many industries. On July 19, four panelists met at the Wikimedia Foundation’s offices to discuss the law and emerging technologies, like driverless cars and AI-assisted language research.
The panelists spoke on this topic for over an hour. Rebecca Crootof is the incoming executive director of Yale’s Information Society Project and a scholar focused on autonomous weapons systems and the law of war; David Ahn is a partner at Fenwick & West, focusing on intellectual property; Jimoh Ovbiagele is the founder of ROSS, a tool that uses AI for legal research; and Christopher Reed is head of Zenti, a tool that analyzes language.
Charles Roslof, a legal counsel at the Wikimedia Foundation, moderated the discussion. His questions covered a wide range of topics. First, he and the panelists tried to nail down a fundamental question: what do we mean when we say AI? Are we trying to make machines more like people? And really, what is intelligence, be it artificial or human, anyway?
The interplay between these sometimes fuzzy definitions and the impact AI has on our lives ran through many of the conversations. But this back-and-forth raised more questions than answers:

  • Ahn, for example, noted that while AI itself would not receive intellectual property rights, whether AI-created works can be protected remains an open question. At present, such protection is unlikely, but if AI continues to grow more sophisticated, it may not stay out of the question.
  • Crootof discussed how autonomous weapons systems pose special problems for the law of warfare. Present analogies make it difficult to define such weapons and therefore to regulate them under the law. If they are neither combatants nor weapons, what are they?

But the panelists did more than ponder theoretical future issues. They also explained how AI has concrete effects today. Reed discussed how the software Zenti produces can be used in suicide prevention, and Ovbiagele discussed how AI can assist lawyers with legal research and document review.
The entire panel considered the legal consequences of illegal conduct by AI. One question that Reed and Ahn both pondered was how we can find mens rea (criminal culpability, or a “guilty mind”) when dealing with a machine. And if AI causes you harm, whose fault would it be?
Questions from the audience focused on the impact of AI on the real world as well. Karyn Kesselring, an intern at Twitter and student at the University of Colorado Law School, wondered what young attorneys and current law students can do to prepare for AI’s growing role in the legal profession.
Ken Villa, also from Twitter, asked about the disproportionate impact automation has on workers who have difficulty adapting to new working conditions.
In response to both concerns, the panelists highlighted the importance of working with AI, rather than having it operate in place of people. Because of AI’s many useful functions and its expanding presence across industries, the panelists encouraged the audience to think of AI as a useful tool that could improve the work we humans do.
A recording of the panel is available on YouTube.
Stanton Kidd, Legal Intern
Wikimedia Foundation

Archive notice: This is an archived post from blog.wikimedia.org, which operated under different editorial and content guidelines than Diff.
