Friday 26 July 2019

The Chinese Room

Mind over machine matter

In the past I have covered topics with very tenuous links to the world of gaming. I just think it is nice to mix things up every now and again. With that said, this topic is going to be one that very much stretches the definition of 'gaming blog'. Don't take this blog to mean that I am feeling limited by my chosen specialization, nor that I am running out of topics to cover. I love gaming, always have, always will. But I also find theoretical tech unassailably cool. So don't be surprised if you see more blogs come out of left field like this one. Now that I've got that out of the way, let me ask you: have you ever played The Turing Test?

It is a very cool indie-feeling puzzle game by a relatively small studio called Bulkhead Interactive. (Note I said 'indie-feeling'. The game itself was actually published by Square Enix. Good for them.) In the game you are put into the shoes of an engineer called Ava Turing (golf clap) as she wakes up late from her cryosleep. The rest of her team have already alighted at their destination, the moon Europa, and forgot to leave contact details. Ava, alongside her helpful AI assistant T.O.M., travels down to the moon only to find that her team has already set up their prefab base, but in a very peculiar arrangement. (Almost as if every room was designed to be a puzzle.) As you travel through these 'challenge rooms' it becomes clear to Ava that the complex has been set up in such a way as to prevent a computer from breaching its depths. The puzzles are meant to be solvable only by a human mind. I won't go much further into the events of the story, but with a name like 'The Turing Test' you can likely take some educated guesses.

I played through that game a year or so back and it proved to be an enjoyable enough puzzle romp, but something about the concept stuck in my mind, and it isn't just the fact that the entire plot is bogus. Seriously, the fact that the entire complex is built to keep out an AI makes no sense. After all, the game's puzzles all require logical deduction to solve. What's that one thing that computers are infinitely better than humans at again? Oh that's right, logical deduction. Or maybe the architects were relying on the fact that the AI in question is unit-based and therefore lacks the ambulatory movement required to trudge through the base. Although that is made moot by the fact that almost every room is fitted with a closed-circuit camera. And some of those cameras even have guns attached, for some reason. It just makes me think that someone failed to think this whole thing through...

But I digress. The real reason that 'The Turing Test' has wormed itself into the back of my mind for so long is that it introduced me to the concept of 'The Chinese Room'. Before I had heard of that, I always had trouble wrapping my head around 'The Turing Test'. (The thought experiment, not the game.) Now, far be it from me to refute an idea proposed by 'the father of modern computer science' Alan Turing, but I never found his 'test' satisfactory. Even by 'thought experiment' standards I can't help but find it questionable to conclude that if a computer can fool someone into thinking it's a human, then it has established some form of artificial intelligence. Of course the idea itself is more nuanced than that, but not much more. So when I heard about John Searle's critique, which he formed in the shape of his own thought experiment, I found myself intrigued.

In 1980, John Searle had his paper "Minds, Brains, and Programs" published in Cambridge University's peer-reviewed journal Behavioral and Brain Sciences. (Great name for the journal, by the way. Really rolls off the tongue.) In this paper the philosopher argues against the possibility of what he refers to as 'strong AI' (what is now known as artificial general intelligence): "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense [that] human beings have minds." He opposes the idea that if you can create a computer that acts like a human convincingly enough to pass The Turing Test then you have effectively created a mind. To explain this stance he created: The Chinese Room.

The Chinese Room is a thought experiment that was devised to answer the question: can a machine experience information in the same way that humans do? Or, can an AI truly 'understand' in the same way that humans do? It is conducted like this: you are placed inside a room with nothing but a rule book and a bunch of boxes of Chinese symbols. (Any language that you do not understand could substitute here.) The rule book tells you how to process those symbols. At some point someone outside the room feeds you a batch of Chinese symbols on a little piece of paper; you work through them and return another batch of Chinese symbols. Unbeknownst to you, the test operators call the little batches of symbols that they put in 'questions' and the ones they receive back 'answers'. The rule book is the 'computer program', the boxes are the 'computer database', and you are the 'computer', or more precisely its central processing unit.
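To make the setup concrete, here is a toy sketch in Python of what the room reduces to: a lookup from input symbols to output symbols. The 'rule book' here is just a dictionary, and the handful of phrases in it are my own invention for illustration; a real rule book would be unimaginably larger, but the principle is the same.

    # A toy Chinese Room: the operator follows purely formal rules,
    # matching input symbols to output symbols with no idea what they mean.
    RULE_BOOK = {
        "你好吗?": "我很好。",   # 'How are you?' -> 'I am fine.'
        "你是谁?": "我是人。",   # 'Who are you?' -> 'I am a person.'
    }

    def operator(symbols: str) -> str:
        # Pure string matching; no step here attaches meaning to anything.
        return RULE_BOOK.get(symbols, "对不起。")  # a default 'answer' symbol

    print(operator("你好吗?"))  # prints 我很好。 with zero understanding

This operator would pass a (very limited) Turing Test for those two questions, and yet there is obviously no understanding of Chinese anywhere in it. Searle's claim is that scaling the dictionary up, however far, never changes that.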

That is the setup for The Chinese Room. It is that simple. But what is the point of all this? Well, eventually you will become so effective and efficient at writing these responses that your answers become indistinguishable from the writings of a native Chinese speaker. But at the end of the day, you still do not understand Chinese. All of your 'understanding' of Chinese is of formal syntactic objects. But the essential part of the mind, its defining characteristic, is its ability to hold mental contents as well as manipulate syntax. It allows us to attach meaning to the things that we see and experience in the world. In summary, The Chinese Room argument is that programs consist entirely of syntactic entities, whereas minds have semantics. Therefore the two can never truly meet.

As far as John Searle is concerned, a computer executing a program cannot have a mind, understanding, or consciousness, no matter how advanced that computer may be or become. This relates to The Turing Test by telling us that the ability to fool a human observer is not grounds upon which to declare general intelligence, but rather just grounds upon which to commend the ingenuity of the algorithm's author. The logic is very sound when you consider the computing technology available to the public today. Everything, from phones to drones to computers, is dependent on an input in order to produce its output. However, some argue that whilst The Chinese Room argument might hold water for today's tech, it fails to address the tech of tomorrow.

You may have heard of some of the amazing game-playing AIs that bored technicians throw together now and then. I'm not talking about that one AI that beat the world champions at Dota 2, or even about Deep Blue, the iconic chess-playing AI that bested a grandmaster or two in its time. No, I'm referring to an AI whose achievements topped even theirs: AlphaGo. Go is one of the oldest and most complicated strategy games ever devised. Dating back to at least 4th century BC China, Go is an adversarial game wherein each player tries to surround more territory than their opponent, capturing the opponent's stones along the way. I'm sure there's more to it than that, but honestly I've watched 5 tutorials and I don't understand it. But I'll tell you someone who does seem to understand it: Google's AlphaGo does!

AlphaGo is an AI that was built with the lofty goal of mastering the game of Go. I call this lofty because many consider Go a game so complex that it eclipses even chess. Thousands of strategies have been devised over the 2,500 years this game has been around, and many of them rely on intuition as much as logic. Yet Google's DeepMind persisted, training the AI with deep learning algorithms with the intention of pitting their baby against world champion Lee Sedol. The results were impressive, to say the least: AlphaGo beat Sedol in four of five games. Expert onlookers commended AlphaGo's playstyle, remarking on how they saw the computer execute strategies that had never been seen before. This was a computer that was built to complete a task but left to its own devices to figure out how to do it, and it created unprecedented techniques in order to do so. Some would say this means the game of Go might possess more inhuman knowledge than human knowledge, implying that AIs like AlphaGo and Deep Blue display levels of strategic planning that would require superhuman levels of 'understanding' to pull off.
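For flavour, here is a drastically simplified sketch of the 'policy network' idea at the heart of systems like AlphaGo: score the legal moves from features of the position, then pick one in proportion to those scores. The real system pairs deep neural networks with Monte Carlo tree search and millions of training games; everything below (the tiny feature vector, the hand-picked weights, the three-move 'board') is invented purely for illustration.

    # A toy 'policy': score each legal move from features of the position,
    # then sample a move according to softmax probabilities.
    # Nothing here is AlphaGo's real architecture; it only shows the shape
    # of the idea.
    import math
    import random

    def softmax(scores):
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def choose_move(features, weights, legal_moves):
        scores = [sum(w * f for w, f in zip(weights[m], features))
                  for m in legal_moves]
        probs = softmax(scores)
        return random.choices(legal_moves, weights=probs, k=1)[0]

    legal_moves = [0, 1, 2]                      # made-up move ids
    weights = {0: [0.5, -0.2], 1: [0.1, 0.9], 2: [-0.3, 0.4]}
    features = [1.0, 0.5]                        # made-up position features
    print(choose_move(features, weights, legal_moves))

The 'learning' part is then just nudging those weights so that moves which led to wins get scored higher next time, repeated over an enormous number of self-play games.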

Other philosophers would disagree with such bold statements. Take the philosopher and mathematician Gottfried Leibniz, for example. He may not have been around to see the modern age of tech, having died in the 18th century, but as with any good philosopher his words still ring with relevance. When talking about machines, Leibniz comments: "Supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill." He raises this hypothetical in order to highlight that "on examining its interior" we would "find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for." His conclusion is somewhat similar to Searle's: no matter how advanced the operations of the machine become, you only need to see the workings to see that the machine is merely "parts which work one upon another", and not some undiscovered blueprint for the mind. Those who believe in the computational theory of mind might take umbrage at this line of thought, but there are a few more direct critiques I want to focus on right now.

Some people have taken to refuting The Chinese Room in the only way philosophers truly know how: by out-logicking it on its own terms. Some insist that strong AI is only a matter of time and that The Chinese Room doesn't debunk this assertion but rather bolsters it. (From this point on the responses get very "nuh-uh" sounding, so you're going to have to bear with me.) These philosophers argue that whilst the person in The Chinese Room does not understand Chinese, the system as a whole does. This is supported by the assertion that the human brain works the same way: it is a machine whose individual parts understand nothing, yet the mind it gives rise to understands plenty. Searle pooh-poohs this with the argument that even the system could not make the leap from syntax to semantics. Even if the person in the room memorized all the formal rules and processes to the point where he could answer questions without help and even facilitate conversation, thus becoming the system all by himself, he still would be unable to understand Chinese. He only carries out the rules and can't attach meaning to the symbols. Were you to stop feeding him an input, he would be unable to provide a coherent output.

Then there is the argument that, instead of manipulating Chinese symbols, we could envision a computer that simulates the neuronal firings in the brain of a Chinese speaker. The computer would operate in exactly the same way as a brain, and therefore it would understand Chinese. Searle merely incorporates this theory into his thought experiment by saying that we could introduce water pipes and valves for our poor room dweller to represent the neuronal firings. The person in the room would have instructions to guide water through the pipes to imitate the brain of a Chinese speaker, yet they still stubbornly refuse to pick up the language. Those are the sorts of conclusions you come to when you argue with semantics.
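If 'simulating neuronal firings' sounds abstract, here is a toy sketch of the kind of computation the brain-simulator reply has in mind: a single leaky integrate-and-fire neuron. The parameters are invented for illustration and real neuron models are far richer, but notice that, just like the rule book, it is still only numbers being pushed around.

    # A toy leaky integrate-and-fire neuron: charge accumulates, leaks
    # away a little each step, and the neuron 'fires' when it crosses
    # a threshold. All parameters here are made up for illustration.
    def simulate_neuron(input_currents, threshold=1.0, leak=0.9):
        potential = 0.0
        spikes = []
        for current in input_currents:
            potential = potential * leak + current
            if potential >= threshold:
                spikes.append(1)   # the neuron fires...
                potential = 0.0    # ...and resets
            else:
                spikes.append(0)
        return spikes

    print(simulate_neuron([0.4, 0.4, 0.4, 0.1, 0.95]))  # -> [0, 0, 1, 0, 1]

Whether you run a billion of these in silicon or in water pipes, Searle's point stands or falls the same way: the mechanism only follows rules, and the question is whether rule-following can ever add up to understanding.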

For my part, I happen to find Searle's original argument compelling. There are no computers that come close to operating on the same level that our brains do, but I have to concede that doesn't mean there never will be. If we understood the exact workings required to create a mind, then we would have created such a machine already. As it just so happens, AI developers are hard at work figuring that out right now. Admittedly, we're not quite there yet. All those incredible AI feats that you see on the news are almost always the result of deep learning algorithms. Impressive stuff, but very methodical and logical; nothing that transcends the bounds of current tech.

I suppose that's what we're all waiting for: that turning point when humanity discovers the secret to creating life. Mechanical life, but life nonetheless. Is that even possible? Who knows. Until that time comes, this train of thought remains the domain of the philosophers and Elon Musks of the world. But that doesn't take the fun out of amateur theorizing. Ultimately, I take a certain comfort in accepting that The Chinese Room is valid, but a nagging part of me can't help reminding me that it is the height of naivety to believe in one's own distinct supremacy. What do you think?
