The Most Important Books On Artificial Intelligence | The Reading Lists
Why should we be reading books on artificial intelligence? Well, if you’re anything like me, then you’re equally fascinated and terrified by the development of artificial intelligence. It is an area of study that cannot be ignored. Since 2000, the number of start-ups in the field has increased by 1400%, with investment into them increasing sixfold. Whether you fall into the category of those who are encouraged and excited by the speed of development, or you’re among those who fear that it’s developing too quickly, without enough time to consider the rules and morals that should govern such intelligence – we should all be learning about it. There have been some incredible books on artificial intelligence; many you may have already read. However, this panel of experts has agreed to help me put together a reading list of the most important books on artificial intelligence. Each of them has their own unique viewpoint, which has made for a fascinating list. Please meet our expert panel, who will help us discover some of the most important books on artificial intelligence.
Michael Wooldridge is a Professor of Computer Science and Head of Department of Computer Science at the University of Oxford. He has been an AI researcher for more than 25 years and has published more than 350 scientific articles on the subject. He is a Fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of AI (AAAI), and the European Association for AI (EurAI).
Roger Schank is the Chairman and CEO of Socratic Arts and the Executive Director and founder of Engines for Education. He was the founder of the Institute for the Learning Sciences at Northwestern University where he was John Evans Professor of Computer Science, Education, and Psychology. Prior to that, he was Professor of Computer Science and Psychology at Yale University and Director of the Yale Artificial Intelligence Project.
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. Her research focuses on perception for autonomous robotic manipulation and grasping. Before joining the Autonomous Motion lab in January 2012, Jeannette was a PhD student at the Computer Vision and Active Perception lab (CVAP) at KTH in Stockholm. Her thesis was on Multi-modal scene understanding for Robotic Grasping.
Paolo Turrini is an Assistant Professor at the Department of Computer Science of the University of Warwick. The areas Paolo works in are game theory, artificial intelligence, logic, social choice theory and mechanism design. Paolo was awarded an Honorary Lectureship at the Department of Computing, Imperial College London in 2017 and, in 2016, a Fellowship of the Global Future Councils at the World Economic Forum.
Tim Rocktäschel is a Lecturer at the Department of Computer Science of University College London. He did his Ph.D. in the Machine Reading group at University College London, was the recipient of a Google Ph.D. Fellowship in Natural Language Processing and also worked as a Research Intern at Google DeepMind. His research focus is on machine learning models that learn reusable abstractions.
You’ve met the panel and now it is time to discover their nominations for the most important books on artificial intelligence.
Artificial Intelligence: A Modern Approach is what I think about when I think about an AI book. It is by far the most comprehensive introduction to the subject I’ve come across, and what I like about it is that it really tries hard to embrace the huge number of approaches and problems that we call Artificial Intelligence within a unifying logical framework. It is not as technically deep as specialised books in, say, reinforcement learning or constraint satisfaction, but it does give the reader a great overview of all those. It is particularly good for students as well as for lecturers, as it comes with plenty of exercises, and the authors have taken care to write down solutions and even slides.
I can vividly remember the ripples of excitement this book caused when it was first published. There had been many introductions to AI published before, but this book heralded the emergence of AI as a mature scientific discipline. The scope was astonishing: AI is a huge, sprawling field, and no other text has come close to this book in terms of putting the entire field into context. The third edition remains the standard undergraduate text on AI today and is unlikely to be surpassed any time soon. I use it as a reference work on a weekly basis. And if you really want to understand the science of AI, there is no better place to start.
This book was the first book I ever read on Artificial Intelligence. It was the textbook for the introductory AI course I took while I was studying Computer Science in Germany. It remains the textbook of choice in many introductory courses on Artificial Intelligence all over the world. It is incredibly comprehensive and covers the basics of search, reasoning, planning, how to represent and deal with uncertainty, and how to learn, communicate and act in the world. It is a great resource for anyone who would like to get a broad overview of the field. And I have fond memories of great examples such as the wumpus world.
The Human Use of Human Beings is the foundational book for thinking about AI in general.
Artificial Intelligence is an extremely fast-moving field, so any textbook that has been influential in past decades might already be outdated from today’s point of view. If I had to recommend only one book at the moment, it would be Machine Learning by Murphy. It is my go-to machine learning Bible and covers most of the fundamental principles behind currently successful AI models. It is beginner-friendly, it has lots of exercises, it covers a wide range of topics and it is currently the standard textbook in many machine learning classes at universities around the world.
This book contains a collection of classic papers from pioneers in AI such as Turing, Newell and Simon, as well as Minsky and Feigenbaum. These scientists not only defined the field of Artificial Intelligence but shaped it for decades to come. It is also freely available online.
There is a cult around this book and with good reason. It presents a mind-expanding way of thinking about programs and programming, using a version of the LISP programming language. LISP was invented in the 1950s by John McCarthy – the man who gave the name to the field of AI. LISP was the AI programming language of choice for decades. Structure and Interpretation of Computer Programs requires some work, but if you persevere, then it gives you a whole new way of thinking about programs and programming. It is a beautiful and important book, and if everyone had read it and followed its lessons, the world would be a better place. Some AI colleagues might raise eyebrows at this choice, because it isn’t, strictly speaking, an AI book – but they miss the point. AI programs require mind-bending ways of thinking about programs, and if you “get” this book, then you are well-placed to build beautiful AI systems, whatever your programming language of choice.
After I graduated in Computer Science, I diverged a bit and studied Art & Technology. Surprisingly, it was this program (and not CS) in which I got to read and discuss important foundational papers in AI such as “Computing Machinery and Intelligence” by Alan Turing, “As We May Think” by Vannevar Bush and “Men, Machines and the World About” by Norbert Wiener. The classic book “The Sciences of the Artificial” has stuck with me to this day. Specifically, I was intrigued by Herbert Simon’s thesis that the complexity of an agent’s behaviour may be shaped by the complexity of its environment, while the agent’s behavioural system is actually quite simple. This has important implications for building artificial systems, and, as Edward Feigenbaum made me aware, Chapters 5 and 6 are considered the “Ur text of Design Thinking”.
An attempt to look at how people solve problems, and an attempt to get computers to do what people do. This was a great idea but an unrealistic one, as people don’t solve problems in general but rather have particular domain knowledge that they employ. Still, it is very helpful for thinking about AI.
If you want to learn what is behind some of the most exciting recent advances in AI like agents that learn to play Atari, Go, StarCraft, or Dota 2, I recommend that you learn about Reinforcement Learning through Sutton and Barto’s standard textbook.
What is exciting about this branch of machine learning is that it makes few assumptions about the task to be solved. Specifically, it is concerned with agents that learn interactively in an environment by maximizing their reward. Twenty years after the publication of the book, Sutton and Barto are currently preparing a second edition, addressing some of the recent topics in the field.
Modern AI is a lot about decision-making in uncertain environments. This typically means exploration, trial and error, and updating of beliefs based on past decisions and events. The main tool for this is reinforcement learning, a powerful computational model of probabilistic inference in complex situations. Think of a robotic vacuum cleaner or a chess-playing engine: they are both exploring something and trying to make the best possible decisions accordingly. Sutton and Barto’s Reinforcement Learning book is a rigorous formulation of this paradigm and a key read for understanding the modern developments of AI. It covers Markov decision processes as well as all the classical Bellman-like updates, in both active and passive reinforcement learning. It includes a great treatment of Monte Carlo Tree Search, possibly the most successful tree search method in modern AI.
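The Bellman-like update at the heart of the book can be illustrated with a minimal tabular Q-learning sketch. The toy “chain” world below is my own illustrative example, not from the book: the agent starts on the left, and reaching the rightmost state yields a reward.

```python
import random

# Toy chain world: states 0..4, actions 0 (left) / 1 (right).
# Entering state 4 yields reward 1 and ends the episode.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Deterministic transition; reward only on entering the goal."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy exploration: mostly exploit, sometimes try a random action.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[s][x])
        s2, r, done = step(s, a)
        # Bellman-style update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a').
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

greedy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(greedy)  # the learned greedy policy heads right, toward the goal
```

After training, the greedy policy chooses “right” in every state: exactly the exploration-plus-value-update loop the paragraph describes, in a few lines.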
Minsky was the most brilliant mind in AI. Anyone working in AI should read this book.
John McCarthy, the founder of AI, had a hugely influential vision for the field he named. He believed that logical reasoning was the key to AI. His dream was that the whole process of intelligent action could be reduced to logical deduction – an intelligent robot would be one that reasoned logically about what to do. This book is a gloriously pure articulation of McCarthy’s vision. The book had a huge influence on me as I began my PhD studies at the end of the 1980s, although McCarthy’s vision for logical AI fell out of favour not long after that – logic turned out to be a powerful tool for some problems in AI, but some seemingly trivial problems proved to be impossible for logic-based AI to handle. So this book is perhaps important mainly as a historical document, but it still has things to teach us, and it is worth reading simply to gain an understanding of one important thread in the tapestry of AI.
Machine Learning approaches to problems in Artificial Intelligence have recently provided impressive advancements. They make it possible to leverage massive amounts of labelled training data to, for example, semantically understand the street scenes that serve as input to an autonomous car. Yet these learned models typically capture only correlation, without considering the causal structure in the data. For example, if someone buys a new laptop online, she might be interested in buying a new backpack as well. However, someone who buys a new backpack is unlikely to become interested in buying a laptop because of it. A machine learning model will not have access to this reasoning. It is an extremely challenging problem to automatically infer this causal structure from data alone. Yet it is fundamental to many problems in economics, the social and health sciences, and business. This book analyses causation and provides a mathematical foundation for it. Judea Pearl also recently published an even more accessible book: The Book of Why.
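Pearl’s distinction between observing and intervening can be seen in a tiny simulation. The numbers and the laptop-causes-backpack model below are my own made-up illustration, not from the book: conditioning on a backpack purchase makes a laptop purchase look likely, while forcing a backpack purchase (an intervention) leaves the laptop probability at its base rate.

```python
import random

random.seed(1)

def simulate(do_backpack=None):
    """One shopper. Laptop purchases causally raise backpack purchases."""
    laptop = random.random() < 0.3  # base rate of laptop purchases (assumed)
    if do_backpack is None:
        # Observational world: laptops cause backpacks; small base rate otherwise.
        backpack = (laptop and random.random() < 0.8) or random.random() < 0.1
    else:
        backpack = do_backpack  # intervention: force the backpack purchase
    return laptop, backpack

# Conditioning: among observed backpack buyers, how many bought a laptop?
obs = [simulate() for _ in range(100_000)]
p_laptop_given_backpack = (sum(l for l, b in obs if b)
                           / sum(b for _, b in obs))

# Intervening: force everyone to buy a backpack, then check laptop purchases.
interv = [simulate(do_backpack=True) for _ in range(100_000)]
p_laptop_do_backpack = sum(l for l, _ in interv) / len(interv)

print(round(p_laptop_given_backpack, 2))  # well above the 0.3 base rate
print(round(p_laptop_do_backpack, 2))     # stays near the 0.3 base rate
```

The correlation a learned model would pick up (the first number) does not survive the intervention (the second number), which is precisely the gap the paragraph describes.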
Somewhat surprisingly, I chose The Selfish Gene by Richard Dawkins. This truly seminal and inspiring manuscript is claimed by a number of scientific disciplines – so why not AI as well? – and has fundamentally contributed to our understanding of genetic and cultural evolution as the product of the interaction of self-replicating processes. This might still seem far from artificial intelligence as we know it, but there is a twist. Recently published results in multi-agent reinforcement learning have established a beautiful mathematical connection between the evolutionary processes, as we know them from biology and game theory, and the learning algorithms, as we know them from artificial intelligence: genetic and cultural evolution can now be seen as reinforcement learning by a number of repeatedly interacting computational processes.
Do you want to understand the foundations of Machine Learning? This is the book. It is comprehensive, well-written and comes with great exercises. Anyone who works through this material will have gained a fundamental understanding of the principles underlying Machine Learning and have learned about the essential algorithms.
It is currently hard to get around Deep Learning if you are working in machine learning and AI. Its widespread success stems from learning representations from data automatically, the utilization of specialized hardware (for instance GPUs), excellent library support, and recent advancements in continuous optimization. While the standard textbook on the subject is by Goodfellow, Bengio & Courville (2017), I recommend starting with Trask’s book. Grokking Deep Learning is particularly suited for beginners and people who want to understand what is under the hood of current Deep Learning libraries. The book covers fundamental neural network layers and is full of excellent Python examples, teaching the reader how to implement neural networks from scratch.
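In the spirit of Trask’s from-scratch approach, a tiny two-layer network can be trained on XOR with nothing but NumPy. This is my own illustrative sketch (layer sizes, learning rate and iteration count are arbitrary choices), not code from the book:

```python
import numpy as np

# XOR inputs and targets: not linearly separable, so a hidden layer is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
w1 = rng.normal(size=(2, 8))  # input -> hidden weights
w2 = rng.normal(size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ w1)
    out = sigmoid(h @ w2)
    # Backward pass: gradients of squared error via the chain rule,
    # using sigmoid'(z) = s * (1 - s).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ w2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    w2 -= 0.5 * h.T @ d_out
    w1 -= 0.5 * X.T @ d_h

preds = (out > 0.5).astype(int).ravel()
print(preds)
```

A forward pass, a hand-derived backward pass and a weight update: that is all that is “under the hood” of the layers the book covers, before libraries add autodiff and GPU support.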
Shakey was the first robot that could reason about its own actions and plan ahead to fulfil a simple set of tasks. The number of fundamental algorithms that resulted from this project is incredible: the A* search algorithm, the STRIPS planner, and the Hough transform, to name just a few. Nils Nilsson is one of the main contributors to Shakey and has written this book tracing the history of Artificial Intelligence. Edward Feigenbaum considers it the best historical book on the field. It is also freely available on the web courtesy of Cambridge University Press.
My attempt to get people to understand that the mind is a complex idea and that AI is about doing what intelligent people can do, not about making up computationally convenient things that bear no relation to how human memory works.
Which books would you consider the most important books on artificial intelligence? Comment below and let us know!
October 11, 2018 at 07:41PM https://ift.tt/2OajuUx