Today, Dr. Eric Horvitz, Technical Fellow and director at Microsoft, gave a lecture on The One Hundred Year Study on Artificial Intelligence: An Enduring Study on AI and its Influence on People and Society; I was also fortunate to have lunch with Eric.

He presented an update on the One Hundred Year Study on AI, describing the background and status of the project, including the roots of the effort in earlier experiences with the 2008-09 AAAI Panel on Long-Term AI Futures that culminated in the AAAI Asilomar meeting. He then reflected on several directions for investigation, highlighting opportunities for reflection and investment in proactive research, monitoring, and guidance.

The field of Artificial Intelligence (AI) was officially born and christened at a 1956 workshop. The goal was to investigate ways in which machines could be made to simulate aspects of intelligence—the essential idea that has continued to drive the field forward.

Timeline

The project started at Stanford as AI100. The timeline is as follows:

1950

In his famous paper Computing Machinery and Intelligence, Alan Turing posits that computer programs could think like humans and proposes a test to ascertain whether a computer’s behavior is “intelligent”.

1956

Stanford computer scientist John McCarthy convenes the Dartmouth conference on “artificial intelligence”, a term he coined. At this conference, Herbert Simon and Allen Newell demonstrate a program that uses artificial intelligence to prove theorems from Principia Mathematica, the work by Bertrand Russell and Alfred North Whitehead on the logical foundations of mathematics. Simon and Newell also start work on computerized chess.

1962

Arthur Samuel, an IBM computer scientist who later became a Stanford professor, creates a self-learning program that proves capable of defeating one of America’s top-ranked checkers champions.

1965-1970

Researchers develop expert systems with applications in biology, medicine, engineering, and the military.

1973

SRI’s Artificial Intelligence Group creates Shakey the Robot, which crosses an obstacle-filled room autonomously using vision and locomotion systems. Shakey is the Computer History Museum’s iconic exhibit for AI and Robotics.

1997

IBM’s Deep Blue beats world chess champion Garry Kasparov in a six-game match, capping what Simon and Newell started four decades earlier.

2000

Statistical machine learning research that began in the 1980s achieves widespread practical use in major software services and mobile devices.

2005

Computer scientist Sebastian Thrun and a team from the Stanford AI Laboratory build a driverless car called Stanley. It becomes the first autonomous vehicle to complete a 132-mile course in the Mojave Desert, winning the DARPA Grand Challenge. Stanley is now on exhibit in the Smithsonian.

2009

Computer scientist Eric Horvitz assembles an AAAI study group on long-term AI futures, which holds its final meeting at Asilomar in California.

2011

IBM’s Watson supercomputing system beats the two best human players of the TV game show Jeopardy, demonstrating an ability to understand and answer the types of nuanced questions that had previously bedeviled computer programs.

2014

Stanford accepts the proposal to host the One Hundred Year Study on AI.


Eh… deep learning seems to be missing from the timeline; it all started with Hinton.


Here is the full 2016 report by their team.

Although the separation of AI into sub-fields has enabled deep technical progress along several different fronts, synthesizing intelligence at any reasonable scale invariably requires many different ideas to be integrated.

Video Lectures

[embedded video: ai_timeline_0]


In summary, the following is a list of some of the traditional sub-areas of AI. As described in Section II, some of them are currently “hotter” than others for various reasons. But that is neither to minimize the historical importance of the others, nor to say that they may not re-emerge as hot areas in the future.


Search and Planning deal with reasoning about goal-directed behavior. Search plays a key role, for example, in chess-playing programs such as Deep Blue, in deciding which move (behavior) will ultimately lead to a win (goal).
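To make the idea concrete, here is a tiny minimax sketch in Python (my own toy example over a hand-built game tree, not Deep Blue’s actual algorithm): the search looks ahead through the possible moves and picks the one whose worst-case outcome is best for the player to move.

```python
# Toy minimax search over a tiny hand-built game tree.
# Leaves hold payoffs for the maximizing player; interior nodes list child states.
GAME_TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
PAYOFFS = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

def minimax(state, maximizing=True):
    """Return the value of `state` assuming both players play optimally."""
    if state in PAYOFFS:                      # leaf: game over, return payoff
        return PAYOFFS[state]
    values = [minimax(child, not maximizing) for child in GAME_TREE[state]]
    return max(values) if maximizing else min(values)

# Pick the move (child of the root) whose subtree value is best for the maximizer.
best_move = max(GAME_TREE["root"], key=lambda child: minimax(child, maximizing=False))
print(best_move, minimax("root"))   # prints: a 3
```

Chess programs apply the same recursion to astronomically larger trees, pruned and cut off with heuristic evaluation, but the goal-directed reasoning is the same.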

The area of Knowledge Representation and Reasoning involves processing information (typically in large amounts) into a structured form that can be queried more reliably and efficiently. IBM’s Watson program, which beat human contenders to win the Jeopardy challenge in 2011, was largely based on an efficient scheme for organizing, indexing, and retrieving large amounts of information gathered from various sources.
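As a toy illustration of what “structured so it can be queried efficiently” means, here is a minimal inverted index in Python (my own sketch; Watson’s actual pipeline was vastly more elaborate):

```python
# Toy inverted index: each word maps to the set of documents that contain it,
# so a query only touches the relevant documents instead of scanning everything.
from collections import defaultdict

documents = {
    1: "alan turing proposed a test of machine intelligence",
    2: "deep blue defeated kasparov at chess",
    3: "watson won the jeopardy challenge in 2011",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    """Return the ids of documents containing every word of the query."""
    sets = [index[word] for word in query.lower().split()]
    return set.intersection(*sets) if sets else set()

print(search("jeopardy challenge"))   # prints: {3}
```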

Machine Learning is a paradigm that enables systems to automatically improve their performance at a task by observing relevant data. Indeed, machine learning has been the key contributor to the AI surge in the past few decades, ranging from search and product recommendation engines, to systems for speech recognition, fraud detection, image understanding, and countless other tasks that once relied on human skill and judgment. The automation of these tasks has enabled the scaling up of services such as e-commerce.
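As a concrete, if toy, example of “improving performance by observing data”, here is a from-scratch perceptron trained on made-up, linearly separable points; real systems use far richer models, but the learn-from-mistakes loop is the same basic idea.

```python
# Toy perceptron: labels are made up (+1 if x + y > 1, else -1); the model
# improves by nudging its decision boundary every time it misclassifies a point.
import random

random.seed(0)
data = [((x, y), 1 if x + y > 1 else -1)
        for x, y in ((random.random(), random.random()) for _ in range(200))]

w, b = [0.0, 0.0], 0.0
for _ in range(20):                       # a few passes over the training data
    for (x, y), label in data:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else -1
        if pred != label:                 # mistake: move the boundary toward the example
            w[0] += label * x
            w[1] += label * y
            b += label

# Fraction of training points now classified correctly (should be close to 1.0).
accuracy = sum((1 if w[0] * x + w[1] * y + b > 0 else -1) == lbl
               for (x, y), lbl in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```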

As more and more intelligent systems get built, a natural question to consider is how such systems will interact with each other. The field of Multi-Agent Systems considers this question, which is becoming increasingly important in on-line marketplaces and transportation systems.

From its early days, AI has taken up the design and construction of systems that are embodied in the real world. The area of Robotics investigates fundamental aspects of sensing and acting—and especially their integration—that enable a robot to behave effectively. Since robots and other computer systems share the living world with human beings, the specialized subject of Human Robot Interaction has also become prominent in recent decades.

Machine perception has always played a central role in AI, partly in developing robotics, but also as a completely independent area of study. The most commonly studied perception modalities are Computer Vision and Natural Language Processing, each of which is attended to by large and vibrant communities.

Several other focus areas within AI today are consequences of the growth of the Internet. Social Network Analysis investigates the effect of neighborhood relations in influencing the behavior of individuals and communities. Crowdsourcing is yet another innovative problem-solving technique, which relies on harnessing human intelligence (typically from thousands of humans) to solve hard computational problems.