Title: Provably beneficial AI
Abstract: Should we be concerned about long-term risks from superintelligent AI? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, I will argue instead that a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. I will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help.
Short Bio: Stuart Russell is a Professor of Computer Science at UC Berkeley, where he directs the Center for Human-Compatible AI. His research covers many aspects of artificial intelligence and machine learning. He is a fellow of AAAI, ACM, and AAAS and winner of the IJCAI Computers and Thought Award. He held the Chaire Blaise Pascal in Paris from 2012 to 2014. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in the field.
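To make the abstract's central idea concrete, here is a toy calculation (an illustration with made-up payoffs and probabilities, not material from the talk) of why a system that is explicitly uncertain about the human's true objective prefers to remain correctable: deferring to a human who can veto harmful actions yields E[max(U, 0)], which is never worse than acting on E[U] alone.

```python
# Toy illustration: a robot uncertain about the true utility U (to the human)
# of its proposed action. The utility values and probabilities are invented.
belief = {-2.0: 0.4, 1.0: 0.6}   # possible utility value -> probability

# Option A: act now, ignoring human oversight -> expected value E[U].
ev_act = sum(u * p for u, p in belief.items())

# Option B: defer to a human who vetoes any action with U < 0 -> E[max(U, 0)].
ev_defer = sum(max(u, 0.0) * p for u, p in belief.items())

print(f"act now: {ev_act:+.2f}, defer: {ev_defer:+.2f}")  # -0.20 vs +0.60
```

Since E[max(U, 0)] >= max(E[U], 0), a system that keeps explicit uncertainty about its objective has a positive incentive to accept human oversight; with no uncertainty the two options tie and that incentive vanishes.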
Title: From Automation to Autonomous Systems: A Legal Phenomenology with Problems of Accountability
Abstract: Over the past decades a considerable amount of work has been devoted to the notions of autonomy and intelligence of robots and other AI systems. Moreover, depending on the kind of application, several standards on the "levels of automation" have been proposed, e.g. the Federal Automated Vehicles Policy adopted by the U.S. Department of Transportation in September 2016. References to the autonomy or intelligence of such "autonomous systems", however, are often a source of misapprehension. The talk intends, on the one hand, to shed light on the reasons for this misunderstanding. On the other hand, we may admit that current AI systems have the intelligence of a fridge, or of a toaster; and yet, such autonomous systems have already challenged basic pillars of society and the law, e.g. whether or not lethal force should ever be permitted to be "fully automated". A different kind of intelligence is triggering a radical re-engineering or re-design of today's systems, processes, and environments. The "levels of automation" for autonomous vehicles (AVs), autonomous weapons (AWs), or the algorithmic wars of finance, e.g. on Wall Street, give us the measure of this impact. The aim of the talk is to show that the normative challenges of AI, robotics, and autonomous systems entail different types of accountability that go hand-in-hand with choices of technological dependence, delegation of cognitive tasks, and trust. The legal issues brought on by the delegation of decisions to such smart artificial systems depend not only on the degree of predictability and reliability of AI decisions, but also on the degree of social agreement, or disagreement, about the values and principles of the normative context under examination. The stronger this cohesion, the higher the risks that can be socially accepted through the normative assessment of the not fully predictable consequences of tasks and decisions entrusted to AI systems and artificial agents.
Short Bio: A former lawyer and current professor of Jurisprudence at the Department of Law, University of Turin (Italy), he is a Faculty Fellow at the Center for Transnational Legal Studies in London, U.K. and a NEXA Fellow at the Center for Internet & Society at the Politecnico of Turin. Author of eleven monographs and of numerous essays in scholarly journals and book chapters, and co-editor of the AICOL series by Springer, he has been a member of many EU projects and research groups, among them the RPAS Steering Group on drones and the Group of Experts for the Onlife Initiative set up by the European Commission, and an expert evaluator of proposals in the Horizon 2020 robotics programme. His main interests are Artificial Intelligence & law, network and legal theory, and information technology law (especially data protection law and copyright).
Title: Improving health-care: challenges and opportunities for reinforcement learning
Abstract: Reinforcement learning offers a powerful paradigm for automatically discovering and optimizing sequential treatments for chronic and life-threatening diseases. In particular, we will focus on how data collected in multi-stage sequential trials can be used to automatically generate treatment strategies that are tailored to patient characteristics and time-dependent outcomes. We will also examine promising methods to improve the efficiency of clinical trials through adaptation. Examples will be drawn from several ongoing research projects on developing new treatment strategies for epilepsy, mental illness, diabetes, and cancer.
Short Bio: Joelle Pineau is an Associate Professor and William Dawson Scholar at McGill University where she co-directs the Reasoning and Learning Lab. Dr. Pineau’s research focuses on developing new models and algorithms for planning and learning in complex partially-observable domains. She also works on applying these algorithms to complex problems in robotics, health care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President-Elect of the International Machine Learning Society. She is a Senior Fellow of the Canadian Institute for Advanced Research and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.
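As a concrete illustration of the abstract's theme, the dynamic-programming core behind reinforcement-learning methods for multi-stage treatment strategies can be reduced to backward induction over a two-stage decision problem. The sketch below is a toy example with invented states, treatments, and numbers, not drawn from the speaker's projects.

```python
# Toy two-stage treatment problem solved by backward induction.
# Stage-2 terminal rewards, tailored to the patient's intermediate outcome:
r2 = {"responder":     {"maintain": 10, "intensify": 6},
      "non_responder": {"maintain": 2,  "intensify": 7}}

# Stage-1 transition model: probability of each outcome under each drug.
p1 = {"drug_A": {"responder": 0.7, "non_responder": 0.3},
      "drug_B": {"responder": 0.4, "non_responder": 0.6}}

# Backward induction: value and best treatment for each stage-2 state.
v2 = {s: max(acts.values()) for s, acts in r2.items()}
pi2 = {s: max(acts, key=acts.get) for s, acts in r2.items()}

# Stage-1 Q-values: expected downstream value of each initial drug.
q1 = {a: sum(p * v2[s] for s, p in trans.items()) for a, trans in p1.items()}
pi1 = max(q1, key=q1.get)

print(pi1, pi2)  # best initial drug, plus the outcome-tailored follow-up
```

The resulting policy is exactly the kind of object the abstract describes: an initial treatment plus a follow-up rule that adapts to time-dependent patient outcomes, here recovered from a (made-up) model rather than from trial data.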
Title: Swift Logics for Big Data
[The EurAI Talk]
Abstract: Reasoning with and about big data, in particular massive web data, is a great challenge. On the one hand, we aim for powerful inference mechanisms that add value by creating knowledge from the data. Such mechanisms seem to require sophisticated logics with high expressive power. On the other hand, we need swift inference algorithms with an acceptable computational complexity. In this talk, reasoning formalisms that achieve both are presented: we introduce and describe specific KRR formalisms for big data that belong to the Datalog+/- family of languages. These logical languages extend the well-known Datalog language by additional features (the "+") to gain expressive power, but simultaneously make syntactic restrictions (the "-") so as to achieve tractability and scalability. After discussing the theoretical foundations of Datalog+/-, some applications to ontological reasoning, web data extraction, data wrangling, and general reasoning about data will be illustrated, among which are some recent commercial applications.
Short Bio: Georg Gottlob is a Professor of Informatics at Oxford University and at TU Wien. His interests include KR, theory of data and knowledge bases, logic and complexity, problem decompositions, and, on the more applied side, web data extraction, and database query processing. Gottlob has received the Wittgenstein Award from the Austrian National Science Fund, is an ACM Fellow, an ECCAI Fellow, a Fellow of the Royal Society, and a member of the Austrian Academy of Sciences, the German National Academy of Sciences, and the Academia Europaea. He chaired the Program Committees of IJCAI 2003 and ACM PODS 2000. He was the main founder of Lixto, a company that provides tools and services for semi-automatic web data extraction which was acquired by McKinsey & Company in 2013. Gottlob was awarded an ERC Advanced Investigator's Grant for the project "DIADEM: Domain-centric Intelligent Automated Data Extraction Methodology". Based on results of this project, he co-founded Wrapidity Ltd, a company that specializes in fully automated web data extraction that was recently acquired by an international media intelligence firm.
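To give a flavor of the existential feature (the "+") that Datalog+/- languages add to plain Datalog, here is a toy forward-chaining ("chase") sketch. The rule, predicates, and round cap are illustrative assumptions for this program note, not the speaker's system; real Datalog+/- fragments guarantee decidability through syntactic restrictions (the "-") rather than a hard cap.

```python
# Toy chase for one existential ("tuple-generating") rule:
#   person(X) -> EXISTS Y. hasFather(X, Y), person(Y)
# Plain Datalog forbids the existential Y in the head; Datalog+/- permits it.

fresh = [0]  # counter for labelled nulls (the invented witnesses for Y)

def chase(facts, max_rounds=2):
    """Apply the rule for max_rounds rounds (the cap is only for this sketch;
    it stands in for the termination guarantees of Datalog+/- fragments)."""
    facts = set(facts)
    for _ in range(max_rounds):
        new = set()
        for f in sorted(facts):
            if f[0] == "person":
                x = f[1]
                # Fire only if X has no father yet (a restricted chase step).
                if not any(g[0] == "hasFather" and g[1] == x for g in facts):
                    null = f"_n{fresh[0]}"
                    fresh[0] += 1
                    new |= {("hasFather", x, null), ("person", null)}
        if not new:
            break
        facts |= new
    return facts

result = chase({("person", "alice")})
print(sorted(result))  # alice gets a father _n0, who in turn gets father _n1
```

Each round invents a fresh null for a person without a known father, so an uncapped chase would run forever; keeping such reasoning both expressive and terminating is precisely the design problem the abstract describes.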
Title: As We Train the AI, so the AI Can Train Us
Abstract: Accompanying the recent success of AI is a concern over a concomitant displacement of human jobs by automation. Can AI be part of the solution?
This talk will highlight recent breakthroughs and exciting opportunities in the application of AI to innovations in human education. What are some successes in applying the recent advances in machine learning and natural language processing to improving human teaching and learning? What areas are ripe for new applications? And what are the common pitfalls and dead ends? I hope to inspire colleagues to consider the role of AI in helping people learn new skills in more effective, engaging, and convenient ways.
Short Bio: Dr. Marti Hearst is a Professor in the School of Information and the Computer Science Division at UC Berkeley, and previously was a Member of the Research Staff at Xerox PARC. She is the author of Search User Interfaces, the first academic book on that topic, and has written more than a hundred research articles in the areas of computational linguistics, information visualization, search user interfaces, human-computer interaction, and learning at scale. Dr. Hearst is currently Vice President of the Association for Computational Linguistics and a Fellow of the ACM, and has received 4 student-initiated Excellence in Teaching Awards. She received her PhD in Computer Science from UC Berkeley.
Title: Super-Human AI for Strategic Reasoning: Beating Top Pros in Heads-Up No-Limit Texas Hold'em
Abstract: Poker has been a challenge problem in AI and game theory for decades. As a game of imperfect information it involves obstacles not present in games like chess and Go, and requires totally different techniques. No program had been able to beat top players in large poker games. Until now! In January 2017, our AI, Libratus, beat a team of four top specialist pros in heads-up no-limit Texas hold'em, which has over 10^160 decision points. Libratus is powered by new algorithms in each of its three modules: 1) computing approximate Nash equilibrium strategies before the event, 2) endgame solving during play, and 3) fixing its own strategy to play even closer to equilibrium based on what holes the opponents have been able to identify and exploit. The algorithms are domain-independent and have applicability to a variety of imperfect-information games such as negotiation, cybersecurity, auctions, strategic pricing, finance, and steering biological adaptation and evolution for medical treatment planning. This is joint work with my PhD student Noam Brown.
Short Bio: Tuomas Sandholm is Professor at Carnegie Mellon University in the Computer Science Department, with affiliate professor appointments in the Machine Learning Department, Ph.D. Program in Algorithms, Combinatorics, and Optimization (ACO), and CMU/UPitt Joint Ph.D. Program in Computational Biology. He is the Founder and Director of the Electronic Marketplaces Laboratory. He has published over 450 papers. He has 27 years of experience building optimization-powered electronic marketplaces, and has fielded several of his systems. He is Founder and CEO of Optimized Markets, Inc., which is bringing a new paradigm to advertising campaign sales and scheduling - in TV (linear and digital), Internet display, mobile, game, radio, and cross-media advertising. He was Founder, Chairman, and CTO/Chief Scientist of CombineNet, Inc. from 1997 until its acquisition in 2010. During this period the company commercialized over 800 of the world's largest-scale generalized combinatorial multi-attribute auctions, with over $60 billion in total spend and over $6 billion in generated savings. His algorithms also run the UNOS kidney exchange, which includes 66% of the transplant centers in the US. He has served as market design consultant or board member for Baidu, Yahoo!, Google, Chicago Board Options Exchange, swap.com, Granata Decision Systems, and others. He has developed the leading algorithms for several general classes of games. He holds a Ph.D. and M.S. in computer science and a Dipl. Eng. (M.S. with B.S. included) with distinction in Industrial Engineering and Management Science. Among his many honors are the NSF Career Award, inaugural ACM Autonomous Agents Research Award, Sloan Fellowship, Carnegie Science Center Award for Excellence, Edelman Laureateship, and Computers and Thought Award. He is Fellow of the ACM, AAAI, and INFORMS. He holds an honorary doctorate from the University of Zurich.
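For readers unfamiliar with equilibrium finding, the following toy sketch (not Libratus' actual algorithms) shows regret matching, a self-play building block behind counterfactual-regret methods for approximating Nash equilibria in imperfect-information games. Rock-paper-scissors stands in for poker here; its unique equilibrium plays each action with probability 1/3, and the players' average strategies converge to it.

```python
# Toy sketch: regret matching in self-play on rock-paper-scissors.
import random

random.seed(0)
N = 3  # actions: rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[mine][opponent's]

def strategy(regret):
    """Play actions in proportion to their positive cumulative regret."""
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / N] * N

T = 20000
regret = [[0.0] * N, [0.0] * N]
strategy_sum = [[0.0] * N, [0.0] * N]

for _ in range(T):
    strats = [strategy(regret[p]) for p in range(2)]
    acts = [random.choices(range(N), weights=strats[p])[0] for p in range(2)]
    for p in range(2):
        me, opp = acts[p], acts[1 - p]
        got = PAYOFF[me][opp]
        for a in range(N):
            regret[p][a] += PAYOFF[a][opp] - got  # regret for not playing a
            strategy_sum[p][a] += strats[p][a]

avg = [[s / T for s in strategy_sum[p]] for p in range(2)]
print(avg)  # both players' average strategies approach [1/3, 1/3, 1/3]
```

The current strategies may cycle, but the time-averaged strategies converge to equilibrium in two-player zero-sum games; scaling this no-regret principle to the 10^160 decision points of no-limit hold'em is where the abstraction and endgame-solving modules described in the abstract come in.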
Title: Deep Learning at Alibaba
Abstract: In this talk, I will focus on the applications and the latest development of deep learning technologies at Alibaba. More specifically, I will discuss (a) how to handle high-dimensional data in DNNs and its application to recommender systems, (b) the development of deep learning models for transfer learning and their application to multimedia data analysis, (c) the development of combinatorial optimization techniques for DNN model compression and their application to large-scale image classification, and (d) the exploration of deep learning techniques for combinatorial optimization and their application to the packing problem in the shipping industry. I will conclude my talk with a discussion of new directions for deep learning that are under development at Alibaba.
Short Bio: Rong Jin is a principal engineer at Alibaba, where he leads iDST. Prior to joining Alibaba, he was a faculty member of the Computer Science and Engineering Department at Michigan State University from 2003 to 2014. His research is focused on statistical machine learning and its application to information retrieval. He has published over 200 technical papers, mostly in top conferences and prestigious journals. He is an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and ACM Transactions on Knowledge Discovery from Data. Dr. Jin holds a Ph.D. in Computer Science from Carnegie Mellon University. He received the NSF CAREER award in 2006, and the best paper award from COLT in 2012.