A new theory changes the thinking behind creating robots and smart machines

Asim Roy, an information systems professor at the W. P. Carey School of Business, was on sabbatical at Stanford University in 1991 when several years of thinking about the operation of the brain and artificial systems inspired him to act.

In a message to the leading Connectionist scholars, he threw down the gauntlet, challenging the prevailing school of thought and thereby the very foundations of the technologies behind smart machines and artificial intelligence. "There was a Connectionist mailing list [online] and I just came flat out and said, 'Hey, all of your theories of brain-like learning don't make sense,'" said Roy. In order for the Connectionist theories to work, he said, they would require what he labeled "magic."

While claiming a sprinkle of magic may have been acceptable during the height of alchemy, it did not go down easily in 1991. Roy's colleagues around the world did not take kindly to his blunt, confrontational postulating. "It does get personal," said Roy, adding that some of what followed was like a "street fight." With funding and journal articles at stake, some researchers reacted badly, he said. Some dissenters even walked out on his presentations.

The sandbox in which the scuffle broke out is at the intersection of a number of disciplines: cognition and learning, neuroscience, computer science, robotics, artificial intelligence and more than a little philosophy. What Roy was proposing would — if it were accurate, and that was a big if — undermine the field of Connectionism, not to mention all those researchers whose life work was built on its foundations. "It's hard to upset a science," he says.

Breaking the rules

The prevailing wisdom in artificial intelligence is that humans learn by storing a system of rules. Thus, if one were learning to hit a tennis ball, one would be told to grip the racket at a certain place, in a certain way, with a certain pressure, to move one's shoulder, arm and wrist in just the right way, to look at the ball in a specific way and place and so on and so forth, filling pages upon pages for a single hit, given one instance and one set of conditions.

If the ball were to bounce just the slightest bit faster, slower, higher or lower, countless new rules would need to be called forth and applied from one's memory. By combining all of the possible permutations of a bouncing ball, codifying every possible rule becomes a Sisyphean task.
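To make the rule-storage problem concrete, here is a minimal Python sketch (purely illustrative; the rules and values are hypothetical, not drawn from Roy's work): every situation must be written down in advance, and an input nobody anticipated simply has no answer.

```python
# Illustrative rule-based approach: every (serve speed, bounce) case must be
# spelled out by a programmer ahead of time. Values are hypothetical.
swing_rules = {
    (60, "low"):  "short backswing, open racket face",
    (60, "high"): "short backswing, closed racket face",
    (80, "low"):  "long backswing, open racket face",
    (80, "high"): "long backswing, closed racket face",
}

def rule_based_swing(speed_mph, bounce):
    # The system can only handle situations someone anticipated and encoded;
    # covering every speed, spin and bounce would mean enumerating them all.
    return swing_rules.get((speed_mph, bounce), "no rule stored -- the system is stuck")

print(rule_based_swing(60, "low"))   # a covered case
print(rule_based_swing(70, "low"))   # an unanticipated case: no rule exists
```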

Although such rules can be very effective in limited cases, it would take enormous computing machines to store every rule needed to perform a certain task and then apply them properly. A computer that plays chess is an example. What both Roy and the opposing Connectionist faction have sought is an understanding of how best to copy what the human brain seems to do: connect experiences and understandings and learn from them.

Put more simply, Roy says that if you were to lock someone in a room and teach him or her how to hit a tennis ball every possible way for years and years and years without the advantage of actually swinging a racket and connecting with any green fuzz, the supposed tennis savant would fall easily to someone who'd played only a little tennis in the flesh. Why? Because our brains actually learn from different contacts with the ball — the hits and the misses — and come to understand, better and more quickly, the thousands of other situations on the court.

In other words, human learning comes from data generated from the practice of a task, be it in the learning of mathematics, languages or sports. That's why teachers make students write essays, do math homework and practice hours and hours to get that perfect serve in tennis. There is profound truth to the saying that practice makes perfect. The world was shocked in 1997 when reigning world chess champion Garry Kasparov conceded defeat at the hands of Deep Blue.

Although IBM's supercomputer triumphed, it had not taught itself chess but merely spun through countless computations based on rules entered by its programmers. Today, scientists working to get computers to solve seemingly elementary challenges (such as understanding human speech and meaning, so-called "natural language") are stymied. The Hollywood depiction of smart robots that can learn and adapt like humans seems eons away.

Connectionists believe that this learning — be it on the tennis court or in any other situation — comes from the most basic of building blocks in the neural network: neurons. Rather than storing an incomprehensible number of rules, the brain stores tiny bits of data and, depending on how they are connected, crafts solutions.

Rather than learning rules for how to return every possible tennis serve, the mind makes rapid connections between past data; if the serve is coming at 70 mph and you've returned a serve at 60 mph and one at 80 mph, the brain connects the dots. While the human brain may not, in fact, be the best model upon which to pattern learning in machines, Roy notes that the brain still outperforms today's computers and, if researchers are to craft a human-like machine, it should naturally be patterned after human faculties.
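The contrast with the rule-based sketch above can be shown in a few lines of Python (again purely illustrative, not code from Roy or the Connectionists): a single weighted connection is fit to the 60 mph and 80 mph returns and then handles a 70 mph serve it has never seen.

```python
# Illustrative "learn from examples" sketch: fit one weighted connection
# (swing effort = w * serve speed + b) to two observed returns, then generalize.
observed = [(60.0, 0.6), (80.0, 0.8)]   # (serve speed in mph, swing effort) -- hypothetical data

w, b = 0.0, 0.0
for _ in range(5000):
    for speed, effort in observed:
        error = (w * speed + b) - effort      # how far off the current guess is
        w -= 0.0001 * error * speed           # nudge the connection weight
        b -= 0.0001 * error                   # nudge the offset

print(round(w * 70.0 + b, 2))  # roughly 0.7: the unseen 70 mph serve, handled by "connecting the dots"
```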

Control theory

Roy's control theory holds that while there are indeed connections that have to be made between neurons (or, in the case of a computer network, neural nodes), there is also a titular controller organizing the system. It's been nearly 10 years since he first began work on an academic paper defending his theory.

During this time he was ostracized for his work, and after a half dozen rejections, revisions and resubmissions, the journal IEEE Transactions on Systems, Man and Cybernetics (Part A: Systems and Humans) is set to publish his paper entitled, "Connectionism, Controllers and a Brain Theory." Cybernetics focuses on replacing human control functions with mechanical and electronic systems.

In this paper, Roy postulates that there are parts of the brain that control other parts. And, to the dismay of connectionists, he proves his theory partly by showing that connectionist brain-like learning systems actually use higher-level controllers to guide the learning in their systems, contrary to the widespread belief that they use only local controllers at the level of the neurons.
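Roy's point about higher-level control can be illustrated with a sketch of a typical connectionist-style training loop (a hedged, simplified example, not code from his paper): the weight updates are local to the connections, but the learning rate, the stopping criterion and the size of the network are all set and monitored from outside the network itself.

```python
# Sketch of an ordinary connectionist-style training loop. The weight update is
# "local", but the lines marked (*) are decisions imposed from outside the
# network -- the kind of higher-level controller Roy argues is always there.
import random

def train(data, n_connections=8, learning_rate=0.05, target_error=0.001):  # (*) size and parameters chosen externally
    weights = [random.uniform(-1, 1) for _ in range(n_connections)]         # (*) initialization scheme chosen externally
    error = float("inf")
    epoch = 0
    while error > target_error and epoch < 10_000:                          # (*) stopping rule monitored externally
        error = 0.0
        for x, y in data:
            prediction = sum(w * x for w in weights) / n_connections
            err = prediction - y
            error += err * err
            weights = [w - learning_rate * err * x for w in weights]        # local, connection-level update
        epoch += 1
    return weights, epoch

weights, epochs_needed = train([(0.6, 0.6), (0.8, 0.8)])
print(epochs_needed)  # how long the outside "controller" let the network run
```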

"A new theory is on the table and it practically invalidates Connectionism," says Roy of the paper's acceptance. "People will be forced to look at it and think about the arguments — and they are solid arguments — and they either have to refute it or not." Although there are still many skeptics, a number of scientists have lined up behind Roy since his paper was accepted for publication.

"Professor Roy's paper goes to the core of the inherent limitations of commonly accepted theories of brain function and organization, and points the way to a new hybrid framework that combines the insights of existing theories to overcome their shortcomings," says Dr. Christian Lebiere of Carnegie Mellon University. Lebiere's book, "The Atomic Components of Thought" (co-authored with Prof. John Anderson also of Carnegie Mellon), presents a unified theory of a cognitive architecture, one that competes with connectionism as a theory of the brain.

Control vs. connectionism

What form Roy's controller takes is still a point of speculation; at the theory's early stage the controller is little more than a generic, guiding ghost in the machine. Roy's work is not based on brain imaging scans or laboratory dissections but is more theoretical, logic-based.

"What I did was structurally analyze Connectionist algorithms to prove that they actually use control theoretic notions even though they deny it. Plus [I] added some neuroscience evidence," says Roy. Prof. Mark Bickhard of Lehigh University, editor of New Ideas in Psychology, agrees with Roy. After reading Roy's paper, Birkhard wrote "I agree fully with your paper's claims."

And you are correct that I share an interest in autonomous learning. I have some partially convergent arguments in my 1995 book 'Foundational Issues in Artificial Intelligence and Cognitive Science: Impasse and Solution' concerning control and connectionism, though I don't focus there on control in the same way as your paper does. Prof. Bickhard scholarship spans many fields including cognitive robotics, philosophy of knowledge, and psychology.

In the forthcoming article, Roy uses a series of basic analogies that compare brain functions (and well-known algorithms in his field) to more everyday interactions in order to explain what he sees as inherent flaws in Connectionist thinking, the so-called "magic" he railed against years ago.

He writes: "Humans operate many man-made devices - e.g. a car ... In these overall systems, the car is the "subservient" subsystem and the human is the controller, the "master or executive" subsystem. The overall system consists of both the man-made device and the human. The human in these systems supplies the operating parameters to the subservient subsystems. For example, the human uses the accelerator of a vehicle to set its speed."

The Connectionists would argue that a driver is not controlling the car but is rather part of a mutual give-and-take dynamic, as Roy goes on to write: "The standard argument against controllers runs as follows. The airplane that is operated by a human is actually a feedback system. In a feedback system, a subsystem receives inputs (feedback) from the other subsystems, and these inputs (feedback) are then used to determine its output(s), its course of action. Thus these subsystems are completely dependent on each other (co-dependent) for their outputs and, therefore, there is no subsystem controlling another subsystem in these overall systems."
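The two readings of the same situation can be made concrete with a toy loop (illustrative only; the numbers and names are invented for this sketch): the driver does use feedback from the car, exactly as the counter-argument says, but the driver is also the one who sets the operating parameter, the target speed, which the car cannot choose for itself.

```python
# Toy driver/car loop. Both subsystems exchange feedback, yet only the driver
# holds the goal (the target speed) and adjusts the car's operating parameter.
class Car:
    def __init__(self):
        self.speed = 0.0

    def step(self, accelerator):
        # The car simply responds to the accelerator; its current speed is the feedback it returns.
        self.speed += 0.5 * (accelerator - self.speed)
        return self.speed

def drive(target_speed, car, steps=30):
    accelerator = 0.0
    for _ in range(steps):
        feedback = car.step(accelerator)                # feedback flows from the car to the driver
        accelerator += 0.3 * (target_speed - feedback)  # the driver, as controller, decides what to do with it
    return car.speed

print(round(drive(60.0, Car()), 1))  # settles near the driver's chosen 60 mph
```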

While Roy is not a neuroscientist, he highlights findings from other research that support his control theory, including studies of dopamine and other neurotransmitters and acknowledged control centers of the brain such as the prefrontal portion of the cerebral cortex. Roy says his theory does not posit that there is a single executive controller in the brain but rather that there may very well be "multiple distributed controllers" controlling various subsystems or modules of the brain.

This significant shift in thinking about how the brain works and learns will eventually have an impact on the design of industrial robots that can be taught to perform a variety of tasks, smart devices that can help with cooking, laundry and other household chores, and smart robots that can be on the front lines of a war, drive tanks and airplanes and perform basic medical services.

Roy believes that this theory opens the door to creating those futuristic systems that we have always dreamed of. It has been a long but rewarding road for Roy, who must now find a next step as his paper is finally set to appear in the scientific canon.

"I still remember the days I was so terrified that I'd questioned this body of science and [thinking] the whole world is going to come crashing down on me, but then I enjoyed it," says Roy, "There were people beating up on me and I was able to answer them, and these were some of the top scientists I would actually get top people together at these conferences and say, 'OK, it's one man against all the others but I'll go at it' I decided to stay in the lion's den and just fight it out. I didn't want to quit."

Bottom Line:

  • Understanding how to make artificial systems that learn and act like humans — be it a Hollywood-like robot, a house with "smart" sensors or an autonomous vacuum cleaner — requires an understanding of how the human brain works.
  • The Connectionist school of thought holds that learning arises from connections between neurons in the brain. Connectionists do not believe that there is a control mechanism that guides these connections.
  • In a new academic paper, Asim Roy, a professor of information systems at the W. P. Carey School of Business, argues that the brain — and by association, artificial systems that aim to mimic the brain — must have some sort of control mechanism that affects how connections are made.
  • Dr. Roy's new control theory will not revolutionize the thinking behind artificial intelligence and the like overnight. Instead, he says it may be decades before the new philosophy effectively changes the way computers operate.
