What Does It Mean to Be Intelligent?

The Singularity is a term you’ll find in both science and science fiction. Its use is usually credited to mathematician John von Neumann, who described a theoretical moment when the artificial intelligence of computers surpasses the capacity of the human brain. The word is borrowed from physics, where a gravitational singularity is what lies at the heart of a black hole. These events are all considered singular because we are unable to predict what happens next; the disruptive degree of change associated with the event is simply too great for our current body of knowledge.

While we are far from attaining the goal of artificial intelligence, there was a brief flurry of excitement recently when a computer reportedly passed the Turing Test, to mixed reviews. In this post, we’ll talk about the Turing test, how computers are already augmenting human cognition, and what it may mean to the learning profession.

 

The Turing Test and the Definition of Artificial Intelligence


Alan Turing was a code-breaker in World War II and a pioneer of digital computing. He posited that it would one day be possible to build a computer that could behave much like a human. Specifically, he believed it would be able to learn, and to apply that learning to solve problems beyond its original programming. He suggested that the best way to recognize success – the singularity some people speak of today – was to put the computer to a test: engage it in conversation with multiple users for an extended period of time. If the computer convinces at least 30 percent of the users that they are communicating with a “real person,” it passes the test. While some have suggested that it is time to update the test, it still excites us when a computer comes close to passing. Want to see how one person interacted with this “intelligent” computer program? Read this interesting transcript and decide for yourself.
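
To make that pass/fail criterion concrete, here is a minimal sketch in Python of the 30 percent threshold described above. The judge verdicts and variable names are made up for illustration; they are not part of any official testing protocol.

```python
# Minimal sketch of the 30 percent pass criterion described above.
# The list of judge verdicts is made up for illustration.

def passes_turing_test(verdicts, threshold=0.30):
    """Return True if the share of judges fooled meets the threshold."""
    fooled = sum(1 for judged_human in verdicts if judged_human)
    return fooled / len(verdicts) >= threshold

# Each entry records whether one judge believed they were talking to a person.
judge_verdicts = [True, False, False, True, False, False, False, True, False, False]
print(passes_turing_test(judge_verdicts))  # 3 of 10 fooled -> True at the 30% threshold
```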

Augmented Cognition – The Flip Side of Artificial Intelligence

While computer scientists will continue to pursue true artificial intelligence, another area of exploration is yielding more immediate returns. Augmented Cognition is the use of neuroscience to determine a subject’s cognitive state in order to enhance it, usually with computers. To me, this is the flip side of Artificial Intelligence. Instead of trying to make a computer act like the human brain, we try to make our brains a bit more like computers.

The U.S. Defense Advanced Research Projects Agency (DARPA) has been interested in this technology for years. Samsung is developing a device to enable people to operate a computer through brain signals. Honeywell has developed a prototype helmet that monitors brain states associated with distraction and information overload. The system produces a visual readout to help commanders understand the cognitive patterns of individual soldiers. Researchers at the University of California (UC), San Francisco, and UC San Diego are watching the brain of a volunteer in real time as she opens and closes her eyes and hands. They hope to understand how her brain transmits these commands. On my desk right now, there’s a headset called MindWave. I use this headset to monitor my own brainwaves and maybe eventually control them. Teachers are starting to use similar technology to study how students learn. With such devices, we might be able to identify the state that Csikszentmihalyi called “flow,” often described as a feeling of hyper-learning and well-being.
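
As a rough illustration of what such monitoring might look like in software, here is a minimal sketch that flags a sustained high-focus pattern in a stream of readings. The readings are simulated and the “flow” heuristic is an arbitrary assumption; real headsets such as MindWave expose their data through their own SDKs, which this sketch does not use.

```python
import random

# Illustrative sketch only: the readings below are simulated, and the "flow"
# heuristic (both scores sustained above 70) is an arbitrary assumption.

def simulated_reading():
    """Stand-in for one sample from a consumer EEG headset."""
    return {"attention": random.randint(0, 100), "meditation": random.randint(0, 100)}

def looks_like_flow(window, threshold=70):
    """Flag a possible flow-like state when both scores stay high across a window."""
    return all(r["attention"] >= threshold and r["meditation"] >= threshold for r in window)

window = [simulated_reading() for _ in range(10)]
print("Possible flow state detected" if looks_like_flow(window) else "No sustained high-focus pattern")
```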


In other words, by marrying our brains to computers, human beings may become the Singularity.

Where Do We Go from Here?

It is hard to say when the Singularity will occur, or whether we will even recognize it when it happens. It may be that our convergence with computers is so gradual that we never see a sharp line, but more of a gradual blending – like colors turning from one shade to another. When does blue become blue-green? When does the brain become a biological computer?

As learning professionals, we need to think about how we can use these new technologies to help people learn faster, perform better, retain memories longer, and hopefully become more human in the process.

The Internet of Everything Is Us

It has been said that we are living in the era of the Internet of Everything, meaning that everything will become smarter through connection to the Internet. I’m not sure that the authors of this term realized they were not just talking about toasters and automobiles.

They were talking about themselves.

Originally appeared on www.learningtogo.info

Don’t Look Now, But We May Have Just Missed the Singularity

Most of you know that I’m a learning consultant by trade and I apply the science of learning to real-world learning and performance improvement projects for my clients. You may also have noticed that one of my side interests is artificial and augmented intelligence. At least, I used to think that this was a side interest, only tenuously connected to my “day job,” until several different threads converged in my brain and got me thinking:

What if the Singularity – meaning the emergence of a true “artificial” intelligence (AI) — has already happened and most of us just haven’t noticed?

Here’s a short chronology of how my perception started to shift from “this is kind of cool” to “this could change everything.” First, I need to give you a bit of a disclaimer: this story is not intended to be a detailed chronology of scientific developments in the fields discussed. It is merely a meta-observation of how my unique brain changed the way it pays attention to these fields as they have developed over time.


Computer science started drawing from neuroscience (instead of the other way around)

In 2009, a group of scientists founded FACETS (Fast Analog Computing with Emergent Transient States). This initiative was formed to figure out how the brain solves problems and then build a computer that works the same way. You see, the brain and the traditional computer process information in very different ways. As the need to handle “Big Data” got, well, bigger and bigger, scientists realized that we were going to come up against a power wall unless a new way of computing emerged. That’s when computer scientists started to look to the human brain for inspiration.
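
To give a flavor of what “a computer that works the way the brain does” means, here is a textbook leaky integrate-and-fire neuron, the kind of event-driven unit that brain-inspired (neuromorphic) designs build on. This is a standard teaching model only, not a description of the FACETS hardware itself.

```python
# A textbook leaky integrate-and-fire neuron: a standard teaching model of the
# event-driven units that brain-inspired ("neuromorphic") computing builds on.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate input current, leak a little each step, and 'spike' past threshold."""
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(t)      # the neuron fires an event...
            potential = 0.0       # ...and resets, rather than ticking every clock cycle
    return spikes

print(simulate_lif([0.3, 0.4, 0.5, 0.1, 0.0, 0.6, 0.7]))  # spikes at steps 2 and 6
```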

Why build a “meat computer”?

In 2010, neuroscientists estimated that the human brain’s storage capacity was somewhere around 2.5 petabytes (about 2.5 million gigabytes). This estimate took into account our roughly 86 billion neurons and all the different interconnections possible for each neuron, making the brain by far the most complex computing machine ever “built.” But that was 2010. By 2016, scientists had discovered that the capacity of the brain was actually at least 10 times higher, once they took into account how individual neurons can be connected to each other in multiple ways to encode different memories. Once computer scientists realized that the human brain could be studied, at a certain level, as a computing network, we started hearing about “neural nets” being built in labs. In 2012, scientists at The Scripps Research Institute in California and the Technion–Israel Institute of Technology announced the development of a “biological computer” made entirely from biomolecules. This may not have been the first instance of such an accomplishment, but it was the first one I happened to note in my Twitter feed.
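
As a back-of-envelope illustration of the scale involved, here is the arithmetic behind those figures, using only the numbers quoted above and the round conversion of one petabyte to a million gigabytes.

```python
# Back-of-envelope arithmetic using the figures quoted above; purely illustrative.
PETABYTE_IN_GB = 1_000_000                  # 1 petabyte is roughly a million gigabytes
estimate_2010_pb = 2.5                      # 2010 estimate: ~2.5 petabytes
estimate_2016_pb = estimate_2010_pb * 10    # "at least 10 times higher"
neurons = 86_000_000_000                    # roughly 86 billion neurons

print(f"2010 estimate: {estimate_2010_pb * PETABYTE_IN_GB:,.0f} GB")
print(f"2016 estimate: {estimate_2016_pb * PETABYTE_IN_GB:,.0f} GB")
kb_per_neuron = estimate_2016_pb * 1e15 / neurons / 1e3
print(f"That works out to roughly {kb_per_neuron:,.0f} KB of capacity per neuron")
```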

The evolution of machine learning


The dream of building an artificial intelligence can be found in classic fiction going back to at least 1909, but fiction started to become reality in 1950, when Alan Turing proposed the idea of a machine that could believably pass as a human at least 30% of the time in controlled conversations with real humans. The “Turing Test” continues to be discussed and applied to modern AI, although many are starting to think we may need a new test to keep up with our latest advances in this field.


Inspired by Turing’s work, scientists started trying to build computers that could mimic some of the functions of the human brain, such as navigating a space without help, solving problems without the answer being programmed into memory, recognizing patterns, and so on. For the next several decades, these scientists believed that the answer to this challenge lay in finding the right algorithms to tell the computer exactly how to perform each complex task. Perhaps the greatest example of this approach is IBM’s Deep Blue, a supercomputer that beat a human chess grandmaster in 1997. More recently, IBM’s Watson beat the best-ever human contestants on the television quiz show Jeopardy!
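
To show that “right algorithm” philosophy in miniature, here is a minimal minimax search over a made-up toy game tree. Deep Blue’s search was vastly more sophisticated and ran on custom hardware, but the underlying idea of hand-written search rules is the same.

```python
# The "right algorithm" philosophy in miniature: exhaustive minimax search over a
# toy game tree. The tree below is made up; it is not a chess position.

def minimax(node, maximizing=True):
    """Return the best achievable score from this node, assuming perfect play."""
    if isinstance(node, (int, float)):            # leaf: the value of a terminal position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is a choice point for the opponent; each number is an outcome.
toy_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(toy_tree))   # the maximizing player can guarantee a score of 3
```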

While these earlier accomplishments were based on writing the right algorithms to search through information that had already been stored in supercomputers, more recent efforts at AI have turned to skipping the hand-written algorithms and just teaching computers how to recognize patterns. Then, by dumping a massive amount of data into the machine (or connecting it directly to the Internet), the thinking is that the machine will actually teach itself how to think. Over time, just like a child, the computer will grow more and more competent as it incorporates feedback from the outside world and revises its internal models based on new information. Using this type of learning, a computer defeated an expert in the game of Go for the first time in 2016. The winning move, number 37, may be remembered as a famous turning point, as the machine “invented” a move that, according to Go master Fan Hui, “was not a human move.”
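
The learning-from-feedback idea described here can be shown with the simplest possible learner, a perceptron that adjusts its own weights whenever its prediction is wrong. This is a teaching sketch only; AlphaGo, the program that won that Go match, combined deep neural networks with tree search, which is far beyond this example.

```python
# Learning from feedback in its simplest form: a perceptron that revises its own
# weights whenever its prediction is wrong. The core loop (predict, compare with
# feedback, revise the internal model) is the spirit of the approach, nothing more.

def train_perceptron(examples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            predicted = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = label - predicted        # feedback from the "outside world"
            weights[0] += lr * error * x1    # revise the internal model
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

# Learn the logical OR pattern from labeled examples rather than explicit rules.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print(weights, bias)
```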


Digital assistants get “real”


Long before smartphones, a company called Palm changed the way we manage our day-to-day data with the Palm Pilot, a personal digital assistant. These hand-held computers could give you instant access to your calendar, your contacts, and anything else you wanted to remember – as long as you took the time to painstakingly enter it in the first place. There was no touch-screen convenience with these little devices! You had to use a metal stick, called a stylus, to poke at the screen or keyboard to input information. And that little sucker was always getting lost!

Then came Apple’s Siri. Maybe one of the most remarkable things about this feature when it first came out was that you could “teach” it over time. Within a few days, each owner’s Siri became a unique “personality,” shaped by the preferences and responses of the owner. In 2015, Apple took this customization to a new level with a voice-recognition capability, so that Siri could be “trained” to respond only to the owner’s voice.

When the 2013 movie “Her” presented a fictional story about a man falling in love with his digital assistant, it didn’t seem like too much of a stretch to the general public. In the movie, the man’s closest friends even come to accept “her” as a friend.

The Internet of Things becomes ubiquitous

In 1966, computer scientist Karl Steinbuch predicted that computers would soon be “interwoven into almost every industrial product.” Today I have a Fitbit on my wrist, uploading my heart rate, steps, and other data to a site on the Internet, where I can download reports about the quality of my sleep, my workouts, and more. We may have actually reached the point where the Internet of Things is “not just talking about toasters and automobiles” but talking about ourselves.

AI infiltrates education

Up to this point, I had just been noticing all these apparently separate movements as cool stuff I liked to read and think about. Then it started hitting closer to home, to my work as a learning consultant. Let’s fast-forward to May 2016. That’s when we first learned that a computer programming class at Georgia Tech had been using an AI as a teaching assistant for an entire semester, completely unnoticed by most of the students in this graduate-level class. “Jill Watson” (the name might have been a clue) appeared to be friendly but a little green at the start of the semester, but as she learned more about the students, she became more comfortable helping them with their homework assignments. At the end of the semester, the professor let the students in on the secret experiment. Most students admitted that they could not tell the difference between Jill and a “real” TA. While a few students claimed to be suspicious from the start, the numbers suggest to me that Jill Watson actually passed the Turing Test, although I haven’t seen anyone else make this claim. Another application of AI in education predated Jill Watson by two years and is still in operation. This educational AI grades essays and routinely performs well enough to be useful in real-life classrooms.
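
For a sense of the general mechanism, here is a toy sketch of a question-answering assistant that matches a new student question to previously answered ones by word overlap and reuses the stored answer. This is not how Jill Watson was built (she ran on IBM’s Watson platform, trained on years of past forum posts); the questions, answers, and scoring rule below are made up for illustration.

```python
# Toy illustration of the general idea behind a question-answering TA: match a new
# question to previously answered ones and reuse the answer. The data and the crude
# word-overlap score below are invented for this sketch.

def word_overlap(a, b):
    """Crude similarity: count of shared lowercase words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

past_answers = {
    "When is assignment 2 due?": "Assignment 2 is due Friday at midnight.",
    "Can we work in pairs on the project?": "Yes, teams of two are allowed.",
}

def answer(question):
    best = max(past_answers, key=lambda q: word_overlap(q, question))
    return past_answers[best] if word_overlap(best, question) > 1 else "Let me check with the professor."

print(answer("Is assignment 2 due this week?"))
```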

If the Singularity is here – does it really matter?


Maybe it really doesn’t matter at this stage in the game. If we’re already using these assorted, narrowly targeted types of AI to crunch our big data, keep our bridges from collapsing, monitor our health, serve as our personal assistants, grade our homework, and annotate our research papers, haven’t we as a society already slipped into the Singularity without even noticing? It may be, as some fear, the cultural equivalent of the original “singularity,” a huge black hole that threatens to devour mankind, or it could be the beginning of a beautiful friendship between man and the machines that have already begun to surpass us in so many ways.

Originally posted on www.learningtogo.info

Combating the Homer Simpson Effect in Learning

You Have to Forget Some Old Truths to Master Essentials of Brain-Based Learning

There really is a “Homer Simpson” effect in neuroscience. The phenomenon was given this name in honor of the Fox network character, Homer Simpson. Homer once told Marge that “Every time I learn something new, it pushes some old stuff out of my brain.” Who knew Homer was such a brilliant neuroscientist?

The Homer Simpson Effect means that in order to “make room” for new learning, your brain weeds out prior learning. We specifically tend to forget information that is similar to the new information. While more work needs to be done to fully understand the mechanism, it seems that the frontal cortex is sending a signal that inhibits retrieval of prior learning in order to make it easier to encode and retrieve new information. The older information is still there, but it becomes temporarily harder to access, giving the new information a chance to take hold.

Expect Performance Dips

This “unlearning to learn” behavior probably helped our ancestors develop increasingly effective survival strategies rather than sticking with what they already knew. When you are in the middle of the process, however, it can be just as vexing as it is helpful. If you are teaching people a change to an existing process, or a new procedure that is similar to but also different from a previous one, you need to allow some extra time for learners to integrate prior and new learning. If learners need to go back on the job and use both bodies of knowledge, there may be a temporary drop in the performance of the earlier skill while their brains struggle to integrate and also separate the two.

An Example from Essentials of Brain-Based Learning

I observed the Homer Simpson effect first-hand while teaching a recent session of Essentials of Brain-Based Learning for ATD. My audience was made up of experienced learning professionals. I knew that, by the end of our three sessions together, many of the adult learning techniques these participants already knew would be seen in a new light through the lens of neuroscience. However, during the first two weeks, I noticed that most participants tended to question everything they already knew about learning. Their brains were clearing the way for new information to settle in. By our last session, they were starting to integrate the new brain-based practices into their existing body of knowledge.

Minimizing the Homer Simpson Effect

If you are teaching a new skill that has similarities to a previously learned skill, here are a few things to keep in mind:

  • Tell your participants to expect some unlearning to occur. If they are aware of the Homer Simpson effect, they will be better prepared to select which parts of the old learning will still be useful.
  • Link the new information to previous learning, so participants can more easily locate those suppressed neural pathways to the earlier skill. For example, if you are teaching a new software application that has similarities to an application that your learners already know, comparing and contrasting the two skills can help learners incorporate the new concepts without completely throwing out the old ones.

In Essentials of Brain-Based Learning, I was aware that the Homer Simpson effect might apply, so I built in many references to instructional design as it was practiced before we knew how the brain learns. Some of those earlier techniques are still quite relevant, while others need to be replaced by more recent findings.

It’s funny how our brains work. Just because I’m aware of certain revelations coming out of neuroscience doesn’t mean that my brain is any smarter. I recently learned how to use Adobe FrameMaker to fulfill a client’s request for a technical manual. After working on that project for several weeks, I found myself staring at a blank screen in Microsoft Word, unable to remember where to begin writing a new document.

D’oh!

Originally appeared on www.learningtogo.info