Introduction: Concept-Driven Machine Intelligence


This post is a draft of the introduction of a book I am working on that addresses the topic of machine intelligence…

I pay close attention to technology news, and it appears to me that artificial intelligence is getting more attention now than at any other point in its history. I believe the attention is due to several factors, particularly the recent progress in deep learning neural networks and the headline-making warnings about AI's potential dangers from prominent people in science and technology. I notice another trend in artificial intelligence news: thought leaders believe that artificial intelligence and neuroscience require collaboration and cross-consideration if we are to make machines think in the manner that humans and other biological entities do. I strongly agree, and the ideas in this book emphasize that opinion. There is one more point of consensus worth mentioning: even with all the advancements in the current digital workhorses of AI, such as machine learning and deep learning neural networks, the experts closest to the science admit we are still a long way from realizing basic human intelligence with these technologies. I believe we are just one quantum (and yes, significant) step away from bringing biological-like intelligence to computers. The final frontier of artificial intelligence lies in a machine's ability to manipulate concepts. Leveraging concepts is what truly enables learning from new experiences, solving new problems, and demonstrating genuine intelligence.

Before I expound on the idea of concept manipulation, I want to set the tone for this book and explain my intent in collecting these ideas. I don't have an academic background or advanced degree in a field related to artificial intelligence, so this book will not contain a lot of technical jargon or complicated mathematical equations. I design software systems and unique applications, and I try to write requirements for those applications that are clear, concise, and understandable by a wide audience of stakeholders and interested parties. Although I do not present a blueprint for the creation of a completely intelligent system (sorry), I hope to present ideas that can be pursued and detailed in order to realize the thoughts in this book. It is my hope that this book can unite expert thinking in neuroscience, mathematics, and software development for the purpose of designing and developing a digital model that demonstrates intelligent behavior.

So now I want to get back to the idea of concept manipulation. Deep learning neural networks have made tremendous progress in classifying and categorizing items given a set of inputs. Yet they still fall short in some areas, and the shortcomings are evident and easily demonstrated. The scenarios below provide some practical examples.

I recently did a Google image search for “broken bucket”. The search returned images you would likely expect: pictures of broken buckets. Then I searched for “bucket that is not capable of retaining water”. The results were very different: some pictures of buckets, some pictures of water, an occasional picture of a bucket with holes dispersing water, and some pictures that had nothing to do with the images I would expect to see. Although very advanced, Google’s search routines are not concept-driven; they are based on patterns related to the terms entered in the search query.

Natural language processing (NLP) is an implementation of machine learning intended to derive meaning from spoken or written text. In his book, “Goodbye, Descartes”, Dr. Keith Devlin discusses how the state of the art routines still struggle with ambiguity and reference in this area. Consider these sentences:

I was wondering where Mike is now. Tony told me that he lives in San Francisco.

I asked my sons (eight and six years old) who lives in San Francisco, and they told me that Mike lives in San Francisco. I asked them how they knew, and they answered something to the effect of "you were talking about Mike and where he was". So they recognized the concept of Mike's location from the first sentence and put that into context in the second sentence, realizing that San Francisco is a place, so Tony was likely referring to Mike when he said "he". Machine learning algorithms do not demonstrate the concept manipulation achieved so easily by young children.
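To make the gap concrete, here is a deliberately naive, hypothetical pronoun resolver that works only on surface patterns: it links a pronoun to the nearest preceding capitalized name. The function, heuristic, and example are invented for illustration and do not represent any real NLP library; the point is that a pattern-based approach picks "Tony" where a child, reasoning about concepts, picks "Mike".

```python
import re

def nearest_name_resolver(text, pronoun):
    """Toy heuristic: link a pronoun to the closest preceding capitalized name."""
    match = re.search(rf"\b{pronoun}\b", text)
    # Collect capitalized words (candidate names) appearing before the pronoun.
    names = [m.group() for m in re.finditer(r"\b[A-Z][a-z]+\b", text[:match.start()])]
    return names[-1] if names else None

text = "I was wondering where Mike is now. Tony told me that he lives in San Francisco."
print(nearest_name_resolver(text, "he"))  # prints "Tony": the surface-pattern guess, not "Mike"
```

The heuristic has no concept of people or places, so it cannot weigh the fact that San Francisco is a location relevant to the question about Mike.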

This following example related to NLP is a little more complicated, but a person would likely understand the intent of the words after giving them some thought:

When your brother, Tim, was little he had a friend named Matthew. His mother used to call him “Timmy Turtle”. He didn’t like being called by that name.

The first ambiguity emerges in the second sentence: "his mother" could refer to Tim's mother or Matthew's mother. But if Tim is "your brother", then Tim's mother is your mother, and the speaker would likely have said "your mother"; so "his mother" most likely refers to Matthew's mother. And Matthew's mother likely called Tim "Timmy Turtle" because, given your conceptual understanding of nicknames, it makes much more sense to attach that nickname to Tim than to Matthew. As for the "he" leading the third sentence, it more likely refers to Tim's dislike of the nickname, because "being called" relates to the target of the nickname, not to somebody who observed the action. You can probably imagine how this string of sentences could challenge NLP routines attempting to resolve these references.

Lastly for now, let’s consider the current intelligent behavior of digital assistants. I own an Amazon Echo device and I’m impressed with the continual evolution of the Amazon Alexa voice service. It can provide answers to general questions regarding the height of the Great Wall of China and whether or not a certain celebrity is still alive. However, with quick, targeted experimentation it can be made clear that the service is not operating at the conceptual level. Here is a conversation I have had with my Echo device and its Alexa voice service:

Joe: Alexa, how tall is Mount Everest?
Echo: Mount Everest’s height is 29,029 feet; 8848 meters.
Joe: Alexa, where is it?
Echo: Hmm, I don’t recognize that place but I’m working to learn more about local businesses.
Joe: Alexa, where is Mount Everest?
Echo: Mount Everest is a mountain in the Himalayas.
Joe: Alexa, where are they?
Echo: Hmm, I don’t recognize that place but I’m working to learn more about local businesses.
Joe: Alexa, where are the Himalayas?
Echo: The Himalayas is a mountain chain in Asia.

The challenge the Alexa service encounters is clear and familiar by now: it does not understand context. It does not leverage the concepts that are relevant in this conversational session.

In summary, these representative interactions with intelligent routines and services highlight the gap between the state of the art and the capabilities of biological intelligence.

If concept manipulation is the key to intelligence, I need to establish some assertions around this book’s working definition of a concept:

  • Every building block of intelligence is a concept
  • Concepts can be represented mathematically
  • The intelligent mind can create, retrieve, and apply concepts
  • The association of concepts is represented by a concept

The sections below detail each of these points.

Every building block of intelligence is a concept

This point is important and fundamental. Every time you learn something new, recall a memory, solve a problem, answer a question, or perform any other intelligent task, you are manipulating concepts. Every mental component of your intelligent behavior is a concept. If I ask you to compare blue jays and cardinals from a physical perspective (assuming you are familiar with them), you may tell me that they are similar in shape and size, but that the cardinal is red while the blue jay's coloring is a combination of blue, white, and black. In order to provide that answer, you had to leverage your concepts of the birds themselves (you retrieved the concept of their images from your memory) along with the concepts of shape, size, color, comparison, similarity, and difference. This idea that everything is a concept applies to simple concepts like numeric quantities and complex concepts like the American Revolution. It also applies to tangible concepts like books and trees as well as abstract concepts like ambivalence and angst.

Concepts can be represented mathematically

The belief that concepts can be represented mathematically underpins the primary ideas in this book. It is also the most difficult assertion to conceptualize, and maybe even to agree with. It asserts that every concept in your mind, from the idea of a triangle to your experience in third grade, your favorite novel, and the birth of your first child, can be represented by some mathematical model. Look out a window and take in some complex scenery: a yard with trees, other plants, and birds, or possibly a cityscape with multiple buildings, vehicles, and people. However complicated this perception might be, the thoughts conjured in your head can be represented by some numeric structure that your brain understands. As I mentioned, this assertion may be hard to believe. However, our current understanding of the human brain's mechanics related to learning and memory involves a finite (ok, very large) number of neurons with a finite number of synaptic connections firing in some pattern. Although that description is simplified, nothing about the mechanism exceeds the conceivable limits of mathematics. I should point out that there is pre-existing work on this idea from experts who have dedicated a lot of time and thought to it, and I will refer to that work later in this book.
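One way to make this assertion tangible is a toy sketch in which each concept is a vector of features, so that conceptual similarity becomes a measurable quantity. The feature names and values below are invented purely for illustration; this is not a claim about how the brain actually encodes concepts, only a demonstration that "how alike are two concepts?" can be answered numerically.

```python
import math

# Hypothetical feature vector: (is_animal, has_wings, is_red, is_blue, is_large)
concepts = {
    "cardinal":   [1.0, 1.0, 1.0, 0.0, 0.0],
    "blue_jay":   [1.0, 1.0, 0.0, 1.0, 0.0],
    "fire_truck": [0.0, 0.0, 1.0, 0.0, 1.0],
}

def cosine_similarity(a, b):
    """Similarity of two concept vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# The two birds come out far more similar than a bird and a truck,
# even though the cardinal and the fire truck share the color red.
print(cosine_similarity(concepts["cardinal"], concepts["blue_jay"]))
print(cosine_similarity(concepts["cardinal"], concepts["fire_truck"]))
```

The same blue jay/cardinal comparison from the previous section falls out of the numbers: shared features (animal, wings) dominate the one shared surface trait with the truck.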

The intelligent mind can create, retrieve, and apply concepts

These three uses of concepts, creation, retrieval, and application, constitute the basis of intelligent behavior. Concepts are created as we observe our universe and relate our perceptions to our currently established concepts. At a very early age, we probably developed the concept of movement by watching things change location. Concepts are retrieved from memory, typically driven by what we are currently perceiving and experiencing. We apply concepts to solve problems and achieve goals in general: we turn on lights if a room is too dark for us to see, and we open umbrellas to keep ourselves dry if it is raining. Machine learning techniques really demonstrate only one of these three capabilities: concept retrieval. If you provide a well-trained neural network with new input, it can categorize or classify that input to one of its known targets. Although it is common to say that a trained neural network "learns", I will explain why this learning is not equivalent to concept creation. The examples earlier in this introduction have hopefully made it clear that machine learning cannot apply concepts in order to reach goals or solve problems.
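The retrieval capability described above can be sketched in a few lines as nearest-neighbor classification: given a new input, return the closest known category. The feature vectors and labels are invented for illustration, and a real neural network is far more sophisticated, but the essential behavior, mapping new input onto an already-known target, is the same.

```python
def nearest_label(sample, examples):
    """1-nearest-neighbor retrieval: return the label of the closest known example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda label: dist(examples[label], sample))

known = {
    # hypothetical (wingspan, weight) feature vectors in made-up normalized units
    "sparrow": [0.2, 0.1],
    "eagle":   [0.9, 0.8],
}

print(nearest_label([0.25, 0.15], known))  # a new observation retrieves "sparrow"
```

Note what the routine cannot do: it can never return a label it was not given, and it has no notion of using "sparrow" to reach a goal. That is the gap between retrieval and the other two capabilities.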

The association of concepts is represented by a concept

This statement is related to the "everything is a concept" assertion. The concept mathematics that enables intelligence requires that the operators and evaluators in the "equation" are themselves concepts. These associations are typically abstract, and they provide the important connections between other concepts. We know that a square peg can't fit in a round hole because the shapes are different. In the goal of placing the peg in the hole, the shapes are concepts in the equation, and the idea of "difference" is also a critical concept: you know that the difference in the shapes will pose a challenge. Whenever you search in vain for an item or tool to solve a problem and settle for something that is "close enough", you are effectively performing concept mathematics. You may want a hammer to drive a wooden stake into the ground but settle for an appropriately shaped rock if you cannot find a hammer. Machines simply do not demonstrate this behavior yet because they cannot mathematically operate on concepts in the manner that we can.
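The "close enough" substitution can itself be sketched numerically: describe the ideal tool as a feature vector, then pick the available object nearest to it. Every feature, object, and value below is invented for illustration; the sketch only shows that "settling for a rock" has a plausible mathematical shape.

```python
def distance(a, b):
    """Euclidean distance between two concept feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical feature vector: (hardness, graspable, flat_striking_surface)
ideal_tool = [1.0, 1.0, 1.0]  # the concept of "hammer" for driving a stake
available = {
    "rock":   [0.9, 0.8, 0.6],
    "pillow": [0.1, 1.0, 0.3],
    "stick":  [0.5, 1.0, 0.1],
}

# With no hammer at hand, choose the closest available substitute.
best = min(available, key=lambda name: distance(available[name], ideal_tool))
print(best)  # prints "rock"
```

The interesting part is what the sketch leaves out: a person invents the relevant features (hardness, striking surface) on the spot from the goal itself, which is exactly the concept manipulation machines do not yet demonstrate.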

A quick note on terminology: throughout the book, I will use the term "machine intelligence" in reference to the core topic. Terms like "artificial intelligence" and "machine learning" may conjure specific beliefs and approaches for people with relevant experience in those fields, and I want to label the idea more generally. Hopefully this short introduction sets the table for the ideas that will be discussed in this book. We will cover them all in detail in the chapters that follow, beginning with answering the question of how we create concepts in the first place.
