Unit 4 Questions to Ponder gives us the following whopper:
Being able to sense the self, being self-aware, is the foundation for consciousness. Scientists today still argue about what animals are conscious, and how that relates to their intelligence, because consciousness is a necessary part of higher intelligence of the kind people have. What do you think it will take to get robots to be self-aware and highly intelligent? And if some day they are both, what will their intelligence be like, similar to ours or completely different?
The question rests on unproven and controversial assumptions: that intelligence can be ranked so that some kinds are higher than others, that consciousness is a necessary part of higher intelligence, and that self-awareness is the foundation for consciousness. Any question involving intelligence or consciousness has to engage with the current debates about both. The questions themselves, stripped of the preceding sentences, stand fine on their own, so I’ll try to answer them as such.
The first question touches on two main points, which may or may not be related (no one knows for sure): self-awareness and intelligence. The term self-awareness is closely associated with artificial intelligence. Scientists and other pontificators predict the emergence of self-aware machines within a few decades, with expectations ranging from the Terminator to Commander Data of Star Trek. I started two related discussions, titled At what point does a system become a robot, and at what point does it exceed this definition? and How will the first artificial intelligences be treated in our society? The television show Westworld uses these ideas as central themes. The premise: in the future, a pair of inventors builds theme parks full of lifelike humanoid and animal robots. One of these parks is set in the American Wild West, where the real people, the “guests”, can do whatever they wish, as in a present-day video game, but, unknown to most, the robots are gaining self-awareness. The show has a great deal to say about what it means to be self-aware, what it takes for self-awareness to arise, and what it looks like to appear self-aware without actually being so.
In the article titled Creative blocks (2012) on Aeon.co, David Deutsch argues that…
What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.
He also tromps over the idea that self-awareness is anything special, or at least the ability to pass any self-awareness test, saying that “it is a fairly useless ability as well as a trivial one.” He goes on to say the following:
Perhaps the reason that self-awareness has its undeserved reputation for being connected with AGI is that, thanks to Kurt Gödel’s theorem and various controversies in formal logic in the 20th century, self-reference of any kind has acquired a reputation for woo-woo mystery. So has consciousness. And here we have the problem of ambiguous terminology again: the term ‘consciousness’ has a huge range of meanings. At one end of the scale there is the philosophical problem of the nature of subjective sensations (‘qualia’), which is intimately connected with the problem of AGI. At the other, ‘consciousness’ is simply what we lose when we are put under general anaesthetic. Many animals certainly have that.
AGIs will indeed be capable of self-awareness — but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves.
For a program to qualify as an artificial general intelligence (“AGI”), it will have to think the same way that we do. As Deutsch says, “That AGIs are people has been implicit in the very concept from the outset.” Will they come up with the same answers to common problems, such as political reform strategies or methods for reducing untrue messages in media? Maybe. There are undoubtedly some problems that lend themselves to beings integrated with a computer chip, ones that can crunch numbers a billion times faster than humans can. Other problems, such as whether abortion is good or bad, or how to reconcile differences between religious groups with minimal conflict, may have little to gain from our new friends. On the other hand, an AGI may be able to teach us new philosophical (and other) insights that make present-day hard problems less so.
The Landing is a social site for Athabasca University staff, students and invited guests. It is a space where they can share, communicate and connect with anyone or everyone.
Comments
Tyler,
I just finished the book "Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark (MIT). Extremely fascinating look at the questions humanity will need to deal with (now) if we are to have any measure of control over the possible future scenarios he presents for the coming AGI and superintelligence. The last chapter deals with the subject of consciousness, how it relates to subjective experience, and why non-biological consciousness may be entirely possible. It's exciting to think that consciousness theories such as Giulio Tononi's integrated information theory of consciousness are being tested presently. In this light the current Westworld is far more fascinating than the original from the '70s. Interesting post. Thanks.
The Life 3.0 book gets 4.26/5 stars on goodreads.com -- pretty impressive. I looked up integrated information theory ("IIT"). Love how it attempts to deal directly with the hard problem of consciousness. I'll have to take a look -- thanks. (I haven't been able to take a stance on whether consciousness is necessary for general intelligence. There are many apparently solid arguments.)
Season 2 of Westworld is slated for Spring 2018. :)