IBM cognitive white paper: The path to intelligence

When the term artificial intelligence was first coined in 1955, it ignited the public imagination. In the 60 years since, we have at times been fascinated by its promise, concerned about the potential for its misuse, and frustrated by its slow pace of development.

However, like many advanced technologies that arrive ahead of their time, AI is widely misunderstood—caricatured by Hollywood and cast by the media as everything from the savior to the destroyer of humanity. Those actually engaged in serious information science and its application in industry, however, understand the enormous potential of intelligent systems. The future of this technology—which we believe will be cognitive, not "artificial"—is very different from the popular conception of AI, and it brings its own technical, scientific and social challenges and opportunities, along with distinct needs for regulation, policy and governance.

Cognitive computing refers to systems that learn at scale, reason with purpose, and interact naturally with humans. Rather than being explicitly programmed in advance, they learn and reason from their interactions with us and with their environment. They are made possible by advances across multiple scientific fields over the past half century, and they differ in important ways from the information systems that preceded them.

Those earlier information systems are deterministic; cognitive systems are probabilistic. Cognitive systems not only answer a broad range of questions, they also generate hypotheses, reasoned arguments, and recommendations from more complex—and more meaningful—bodies of data.

In addition, cognitive systems can understand what computer scientists call “unstructured” data, which accounts for 80 percent of the world’s data. This allows them to keep pace with the volume, complexity, and unpredictability of information in the modern world.

None of this has anything to do with machines being sentient or autonomous. Rather, it’s about augmenting our human abilities to understand and operate the complex systems in society. This augmented intelligence is a necessary step toward improving our ability to harness technology in our quest for greater knowledge, greater capabilities, and better conditions for the human race. That’s why it’s not just a new technology, but the dawn of a new era in technology, business, and society: the cognitive age.

The success of cognitive computing will not be judged by the Turing test or the ability to emulate a human being. It will be judged by more practical criteria such as return on investment, new market opportunities, curing diseases and saving lives.

At IBM, we have been working for decades to build the foundation for cognitive computing, combining more than a dozen disciplines in cutting-edge computer science with more than 100 years of business expertise. Now, we are seeing firsthand its potential to transform business, government and society.

We’ve seen it transform big data from an obstacle into an opportunity, helping pediatricians make early diagnoses and providing innovative solutions for building smart cities. We believe these technologies present the best (and perhaps only) chance to tackle some of the most persistent systemic problems facing our planet, such as cancer, climate change, and a volatile global economy.

The History of Computing and the Rise of Cognition

To understand the future of cognitive computing, it must be placed in a historical context.

To date, we have experienced two different eras of computing - the tabulation era and the programming era. IBM has played an important role in both eras. We believe that cognitive computing is the third and most significant era in the history of computing evolution.

Tabulating Era (1900s-1940s)

Computing has its origins in single-purpose mechanical systems that counted, using punched cards to input and store data and, in a very primitive way, to tell the machine what to do. These tabulating machines were essentially calculators that supported the expansion of commerce and society, helping us organize, understand, and manage everything from population growth to global economic progress.

Programming Era (1950s-Present)

The evolution from mechanical tabulators to electronic systems began during World War II, driven by military and scientific needs. After the war, digital "computers" evolved rapidly and moved into business and government. They could execute if/then logic and loops defined in software programs. Computers advanced quickly from vacuum tubes to transistors to microprocessors, a progression captured by "Moore's Law": for some 60 years, processor capacity and speed roughly doubled every 18 months. Every computing device we know today, from mainframes to personal computers to smartphones and tablets, is a programmable computer.

Cognitive Era (2011-Present)

As early as 1960, J.C.R. Licklider saw the potential to move beyond programmable systems in his paper "Man-Computer Symbiosis." Much of modern computing builds on Licklider's research and insights:

Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership. The main aims are:

to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems;

to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs.

Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them.

—J.C.R. Licklider, "Man-Computer Symbiosis," March 1960

Licklider understood that cognitive computing would be the natural and necessary evolution of programmable computing, even if he did not know how it would be achieved. Fifty years later, massively parallel computing and the accumulation of vast stores of structured and unstructured data have laid the foundation for cognitive computing.

The world's first cognitive system

In February 2011, Watson, IBM's cognitive computing system, made its public debut, defeating Ken Jennings and Brad Rutter on Jeopardy! It was the first public demonstration of cognitive computing, and it marked the end of the so-called AI winter. Sixty years of evolution had not enabled programmable systems to make sense of messy unstructured data, so they could not compete on Jeopardy! Watson could answer subtle, complex clues laced with puns and double meanings, and it was clear that a new era of computing was beginning.

After the show, Watson went on to work with far more complex data sets and, beyond answering quiz clues, developed the ability to understand, reason, and learn. The goal of cognitive computing is to illuminate aspects of our world that were previously invisible—specifically, the patterns and insights hidden in unstructured data—so that we can make better decisions about the things that matter. The real potential of the cognitive era lies in combining the data analysis and statistical inference capabilities of machines with uniquely human abilities such as self-directed goals, common sense, and values.

This is exactly what Watson is now being asked to do, and what it is already doing. Banks are analyzing customer requests and financial data to make better investment recommendations. Companies in highly regulated industries query the system to stay current with frequently changing regulations and compliance requirements. And oncologists are testing whether cognitive systems can help them interpret cancer patients' medical information and identify personalized, evidence-based treatment options.

What does this experience mean for the professionals involved? Dr. Larry Norton, a world-renowned oncologist at Memorial Sloan-Kettering Cancer Center, is working with Watson to help physicians personalize cancer treatments for their patients. He said: "Computer science is advancing rapidly, and medicine will be affected by it. This is called coevolution. We have to help each other. I envision a scene where the patient, the computer, my nurse, my graduate student colleagues and myself are all communicating in the monitoring room."

We first saw hints of this symbiosis after Deep Blue, Watson’s chess-playing predecessor, defeated world chess champion Garry Kasparov in 1997. After that demonstration, Kasparov went on to compete in new “freestyle” chess leagues, in which players were free to use any computer programs they liked. Some players competed alone. Some relied entirely on computer programs. But those who combined the computer’s calculations with their own intuition and feel for the game were the most successful.

“Teams of human plus machine dominated even the strongest computers. Human strategic guidance combined with the tactical acuity of a computer was overwhelming. We could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions.”

—Garry Kasparov

The technological path forward and the science of what makes it possible

While Licklider helped articulate a philosophical approach to cognitive computing, he could hardly lay out a technological path forward. That path is still being defined and continually refined. In particular, we are acutely aware of how data is shaping our future. Gartner predicts that the world’s information will grow by 800 percent over the next five years, and that 80 percent of that data will be unstructured. It includes everything recorded in human language (from textbooks to poetry), every moment captured in an image (from family photos to CAT scans), and every sound ever recorded. It is data hidden in smells, tastes, textures, and vibrations. It comes from our own activity and from an increasingly instrumented planet.

In a society and global economy where value increasingly derives from information, knowledge and services, data represents the world’s richest, most valuable and most complex raw material. Until now, we have had no way to mine it effectively.

Programmable systems are based on rules that draw conclusions from data through a series of predefined processes. Powerful and sophisticated as they are, they are deterministic: they thrive on structured data but cannot handle qualitative or unforeseen input. That rigidity limits their usefulness in a world that is increasingly ambiguous and uncertain.

Cognitive systems are probabilistic, meaning they are designed to adapt to and make sense of the complexity and unpredictability of unstructured information. They can "read" text, "see" images, and "hear" natural speech. They interpret that information, organize it, and offer explanations of what it means, along with the rationale for their conclusions. They do not deliver final answers; in fact, they do not "know" the answer. Instead, they are designed to weigh information and ideas from multiple sources, reason over them, and offer hypotheses for consideration, assigning a confidence level to each potential insight or answer.

A mistake Watson made on Jeopardy! is a case in point. At the end of the first day of play, the Final Jeopardy category was "U.S. Cities." The clue read: "Its largest airport is named for a World War II hero; its second largest, for a World War II battle." The correct response was Chicago (O'Hare and Midway). Watson guessed Toronto. The clue confused Watson for several reasons, including its grammatical structure, the existence of a city named Toronto in Illinois, and the fact that the Toronto Blue Jays play baseball in the American League.

As a result, Watson's confidence in its response was strikingly low: 14 percent. Had this been a regular Jeopardy! clue rather than Final Jeopardy, Watson would not have buzzed in at all, because its confidence fell below its answering threshold. Watson knew what it did not know, as hinted by the five question marks it appended to its answer (Figure 2).
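
To make the idea of confidence-thresholded answering concrete, here is a minimal sketch in Python. It is not IBM's actual Watson scoring pipeline; the candidate answers, scores, and threshold are invented for illustration.

```python
# A minimal sketch of confidence-thresholded answering (not IBM's actual
# Watson pipeline): rank candidate hypotheses by confidence and abstain
# ("don't buzz in") when the best score falls below a threshold.

def choose_answer(candidates, threshold=0.5):
    """candidates: list of (answer, confidence) pairs, confidence in [0, 1]."""
    if not candidates:
        return None
    best_answer, best_confidence = max(candidates, key=lambda pair: pair[1])
    if best_confidence < threshold:
        return None  # confidence too low: abstain rather than guess
    return best_answer, best_confidence

# Hypothetical scores for the "U.S. Cities" clue described above.
hypotheses = [("Toronto", 0.14), ("Chicago", 0.11), ("Omaha", 0.05)]
print(choose_answer(hypotheses))  # None -> the system would not buzz in
```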

However, cognitive systems learn from their mistakes. Through machine learning at scale, they improve continuously as they are trained and used.

A cognitive system is trained by ingesting a corpus of knowledge on a given topic and then working through question-and-answer sessions with human experts. As people interact with the system and give feedback on whether its responses are correct, the machine's "knowledge" improves.
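
As an illustration of that feedback loop, here is a hypothetical sketch (an assumed design, not IBM's implementation) in which expert corrections adjust the weight given to each evidence source, so future hypotheses drawn from reliable sources score higher.

```python
# Hypothetical feedback loop: expert review nudges the weight of the evidence
# source behind a hypothesis. Source names and the update rule are invented
# for illustration; they are not part of any IBM product.

source_weights = {"medical_journal": 1.0, "textbook": 1.0, "web_forum": 1.0}

def apply_feedback(source, was_correct, learning_rate=0.1):
    """Reinforce or penalize an evidence source based on expert review."""
    adjustment = learning_rate if was_correct else -learning_rate
    source_weights[source] = max(0.0, source_weights[source] + adjustment)

# An expert confirms a journal-backed hypothesis and rejects a forum-backed one.
apply_feedback("medical_journal", was_correct=True)
apply_feedback("web_forum", was_correct=False)
print(source_weights)  # journal weight rises, forum weight falls
```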

When Watson competed on Jeopardy!, it did one thing: natural-language question answering, built on five underlying technologies. Today, question answering is just one of many capabilities Watson offers as an API. Since then, we have developed more than 20 new APIs drawing on more than 50 different cognitive technologies. This is a key difference between the technical approach of cognitive computing and other current approaches to artificial intelligence. Cognitive computing is not a single, isolated field of computer science; it draws on many disciplines, from hardware architecture and algorithmic strategy to industrial process design and domain expertise.

Many of the products and services we use every day—from search-engine advertising and facial recognition on social media to “smart” cars, phones, and power grids—incorporate aspects of artificial intelligence.

The vast majority of such AI products and services are purpose-built and application-specific, designed for a single task. They use some of the core capabilities of cognitive computing: some use text mining; others use machine learning for image recognition. But each remains limited to the application for which it was originally conceived.

By contrast, cognitive systems combine five core capabilities:

1. Deeper engagement with people:

Cognitive systems interact with people more fully, based on the mode, form, and quality of interaction each person prefers. They draw on available data—location, web interactions, transaction history, viewing patterns, wearable-device data, electronic medical records—to create a detailed picture of an individual, and they add to that picture details that are harder to detect: tastes, moods, emotional states, environmental conditions, and the nature and strength of relationships. They reason over all of this structured and unstructured data to determine what really matters in engaging a person. As they learn, these interactions deliver increasing value and become more natural, more anticipatory, and more emotionally appropriate.

2. Scaled and elevated expertise:

Knowledge and expertise in every industry are expanding faster than any professional can keep up with—new journals, new protocols, new legislation, new practices, and entirely new fields. A clear example is healthcare: in 1950, medical knowledge worldwide was estimated to double every 50 years; by 1980, every seven years; by 2015, in less than three years. Meanwhile, a single person may generate one million gigabytes of health data in a lifetime, the equivalent of roughly 300 million books.

To help organizations keep pace, cognitive systems are being designed to serve as partners that enhance experts' performance. Because these systems master the language of a profession—medicine, sales, cooking, and so on—they can both understand and teach complex expertise. This shortens the time it takes to go from novice to expert. And because they are trained by leading practitioners—in any field, from customer service to oncology to case law—they make that hard-won expertise available to far more people.

3. Products and services infused with cognition:

Cognitive technologies enable a new class of products and services that sense, reason about, and learn to understand their users and the world around them. They also allow for continuous improvement and adaptation, adding capabilities that enable new, previously unimagined uses. This is already happening in the automotive, medical-device, appliance, and toy industries. The Internet of Things is dramatically expanding the universe of digital products and services—and wherever there are code and data, cognition can follow.

4. Cognitive processes and operations:

Cognition can also transform the way a company operates. Business processes infused with cognitive capabilities turn data from internal and external sources into an asset. This gives companies heightened awareness of workflows, context, and environment, which supports continuous learning, better forecasting, and greater operational effectiveness—along with decision-making at the speed of today's data. That is welcome news in a field where, for example, the average $1 billion company dedicates 1,000 people to supplier management every week.

5. Enhanced exploration and discovery:

Ultimately, the most powerful tool a cognitive business will possess is far better "headlights" into an increasingly volatile and complex future.

Such headlights become ever more important as leaders in every industry are compelled to place big bets—on drug discovery, complex economic models, materials science, and startups. Applying cognitive technologies to big data lets leaders uncover patterns, opportunities, and actionable hypotheses that would be virtually impossible to discover using traditional research or programmable systems alone.

If cognitive computing is to be realized as envisioned, the underlying platform must be broad and flexible enough to be used in a variety of industries, and it must also support cross-industry applications. This requires a holistic R&D approach aimed at creating a robust platform with many features to support a wide variety of applications from the developer ecosystem.

This platform must span machine learning, reasoning, natural language processing, speech and image recognition, human-computer interaction, and dialog and narrative generation, among other capabilities. Many of these functions require specialized infrastructure: high-performance computing, specialized hardware architectures, even new computing paradigms. Each technology comes from its own scientific or academic field, yet they must be developed in concert with the hardware, software, cloud platforms, and applications that deliver cognitive outcomes.

As Watson evolves rapidly, the future may already be coming into view. Consider a cognitive medical-imaging application that analyzes X-rays, MRIs, and ultrasounds. It processes the natural language of medical journals, textbooks, and articles; it uses machine learning to correct and enrich its understanding; and it builds deep knowledge representations and reasoning to help suggest possible diagnoses. Doing so requires specialized image processors to handle data at scale, as well as human experts to guide the system's learning and interpret its results.

The power of this new model can be applied to any field. Oil and gas companies can combine seismic image data with analysis of thousands of papers, reports, current events, economic data and weather forecasts to provide risk-return analysis for drilling. Or, by analyzing test scores, attendance rates and student behavior information on digital learning platforms, schools can build longitudinal student profiles and personalized education plans.

IBM is working with several leading cancer institutes to accelerate clinical genomic analysis and help identify personalized treatment options for patients—widely considered one of the most promising near-term applications of cognitive computing. The program aims to reduce the time it takes to interpret a patient's DNA, understand their genetic profile, and gather relevant findings from the medical literature from weeks to minutes. In those minutes, Watson can review the genetic material and the literature, generate a report that visualizes the data, and propose evidence-based drug options matched to the patient's specific cancer mutations. Clinicians can then evaluate that evidence and decide whether a more targeted therapy is likely to outperform the standard of care.
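
The workflow described above can be summarized as a simple pipeline: extract the mutations found in a patient's tumor, look each one up in a knowledge base distilled from the literature, and rank the candidate drugs by strength of evidence. The sketch below is a hypothetical illustration of that flow; the knowledge base, evidence scores, and ranking rule are invented and are not output from Watson.

```python
# Hypothetical genomics-to-treatment pipeline: map observed mutations to
# candidate therapies via a literature-derived knowledge base, then rank by
# evidence strength. All data below is invented for illustration.

# Toy knowledge base: mutation -> list of (drug, evidence_score) pairs.
KNOWLEDGE_BASE = {
    "BRAF V600E": [("vemurafenib", 0.9), ("dabrafenib", 0.8)],
    "EGFR L858R": [("erlotinib", 0.85)],
}

def rank_treatments(patient_mutations):
    """Collect candidate drugs for the patient's mutations, best evidence first."""
    candidates = []
    for mutation in patient_mutations:
        for drug, evidence in KNOWLEDGE_BASE.get(mutation, []):
            candidates.append((drug, mutation, evidence))
    return sorted(candidates, key=lambda c: c[2], reverse=True)

report = rank_treatments(["BRAF V600E", "EGFR L858R"])
for drug, mutation, evidence in report:
    print(f"{drug}: supported for {mutation} (evidence score {evidence})")
```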

The implications and obligations of cutting-edge cognitive science

The Cognitive Era is the next step in the development of applied science to help humans understand nature and improve the human condition. In that sense, it is a new chapter in an old story. The debate around artificial intelligence is only the latest installment in a long-running argument between those who believe in scientific progress and those who fear it. Despite the noise in the media and popular entertainment, within the scientific community the verdict is largely in: there is broad consensus on pursuing a cognitive future, and wide recognition of the responsibility that comes with the technology.

“Technology creates possibilities and potential, but ultimately our future will be determined by the choices we make. Our destiny depends on us, not on technology.”

—Erik Brynjolfsson, Professor, Massachusetts Institute of Technology (MIT)

In particular, we must continue to shape the impact of cognitive computing on work and employment. Like all technologies, cognitive computing will change the nature of work. It will help us perform some tasks faster and more accurately. Many processes will become cheaper and more efficient. Some things it will simply do better than humans can. This has been the pattern since the dawn of civilization: new technologies are put to their most valuable uses, and our societies and lives adapt and evolve around them. There is every reason to believe this moment will be consistent with that past. Indeed, the cognitive era will open up a world of knowledge, discovery, and opportunity that grows at an exponential rate, and there is good reason to believe that human work will become more interesting, more challenging, and more valuable as a result.

At the same time, questions of societal control and security matter just as much, and here again the concerns raised about intelligent systems are not unique. Every technological transition, from cars to medicines to mobile phones, has raised questions about the safety of individuals and institutions. Those questions are already pressing and will remain a constant with cognitive technologies. They are amplified by today's radical democratization of technology—driven by the rapid spread of the Internet and the cloud—and the cost reductions that come with it.

We believe the answer is not to try to limit democratization but to embrace it while designing cognitive systems that blend privacy, security, and human control.

Paving the way for the next generation of human cognition

In the end, technological revolutions are not merely discovered; they are driven by business and societal needs. We pursue these new possibilities not just because we can, but because we need to.

With every revolutionary technology, our initial understanding is limited by the complexity of the world and by our own ingrained biases and habits. Those limits, however, are eventually overcome by progress. In truth, we have long paid a high price for not knowing: not knowing what is causing a patient's illness; not knowing where the consumers of our products are; not knowing where critical natural resources lie hidden; not knowing where the risks in our investments sit.

"The greatest obstacle to wise action is ignorance, which is also the greatest source of fear. A small candle gives off a misleading faint light and casts a huge and ominous shadow. The midday sun is bright and casts no shadow. It is time to put this whole problem of man and machine under the glare of the midday sun. Computers will never take away human initiative or replace human creative thinking. Computers will free humans from low-level repetitive thinking and allow them to make fuller use of reason and create more opportunities."

—Thomas J. Watson Jr., second president of IBM

At IBM, we believe that many of the world’s biggest problems can eventually be solved. With cognitive computing, we can make that happen.

The hype around a “man versus machine” showdown is a distraction—an exciting but misleading drama that exists only in fiction. Today’s cognitive systems are not our competitors, and they never will be; neither the scientific nor the economic evidence supports that fear. Real cognitive systems are tools for deepening an essential relationship: the relationship between people and the world around them.

Through them, we will pave the way for the next generation of human cognition. We will be able to think and reason in new and powerful ways. Cognitive systems are machines genuinely inspired by the human brain; in turn, these machines will genuinely inspire the human brain, strengthen our reasoning, and change how we learn. In the 21st century, knowing all the answers will not mark out the wise; asking better questions will be the real mark of genius.
