Last year I penned a series of discussion articles about real-world topics that are, even if only tangentially, linked to the world of gaming in a manner I found interesting. I very much intended to do more, but I was interrupted by a run of other articles I had to write, alongside some prolonged research for a topic that would wrap up the common theme running through all of those blogs. (And yes, I am at least two months late with this particular blog, but here we are.) To recap: a recurring topic in those blogs was the way in which the technology of today and of the near-to-distant future could and would be shaped by the guiding hand of what we call artificial intelligence. Much of the information I have gathered is from last year, and this is a topic that is forever developing, so I may need to do another update later this year; but what follows is a culmination of the key chunks of information I have found revolving around superintelligent artificial intelligence. (Or 'Supersmart AI'.)
First, let's define what I mean by 'Superintelligent AI'. 'Artificial intelligence' is, by design, a catch-all term to describe all of the algorithms and processes that make up the capabilities of learning software. Typically, this means that AI refers to systems that collate and sift through vast quantities of data at record speed and, in advanced cases, may even offer some rudimentary interpretation to be acted upon. (This is the kind of thing that allows Google to tailor search results and suggestions to your specific wants.) But these aren't the kinds of systems we typically think about when we talk about 'artificial intelligence', in either a realistic or a fictionalized sense. To accurately convey the difference between Supersmart AI and what we have today, I'll point to one fictionalized interpretation that we should all be familiar with: Skynet from Terminator. (That also covers the 'video game' link, as there was an earnest, if mediocre, Terminator game released a couple of months back.)
In 'Terminator', Skynet is an operating system most famous for outgrowing the scope and intellect of its creators and developing goals of its own. Those goals, of course, being the eradication of all human life on the planet, although 'superintelligent' AI does not necessarily need to adopt those specific goals in order to meet the qualifications of its name. The moniker merely denotes an artificial intelligence that has crossed what we call the 'Event Horizon': the point at which the intelligence and scope of AI matches that of humanity, and then proceeds to eclipse humanity at an exponential rate. (As technology is wont to do.) This is the thing that AI cautionists like Elon Musk are very much terrified of, as it signifies the point at which technology expands to a level that far surpasses our wildest dreams, making it effectively impossible to plan for what that age might hold.
Currently, the AI that we are working with in the world today is categorized under the terminology of 'Narrow Intelligence' or 'Weak AI'. This type of tech is very good at crunching through quantitative data, working with speech and image recognition, running algorithms and playing games with simple rules. This type of AI typically learns its craft through an iterative, evolution-style process: it tries generation after generation of candidates and seeds the next batch from whichever example made it furthest towards its pre-established 'goal'. This is often called 'brute forcing' in hacker movies, or 'machine learning' in the real world.
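For the curious, here is a very rough sketch of what that generation-after-generation loop looks like in practice. It is a toy Python example where the 'goal' is simply guessing a target number; every name and number in it is purely illustrative rather than taken from any real system.

```python
import random

TARGET = 42  # the pre-established 'goal' the candidates are measured against (purely illustrative)

def fitness(candidate):
    # How close did this candidate get to the goal? (Higher is better.)
    return -abs(candidate - TARGET)

def evolve(generations=50, population_size=20, mutation_range=5):
    # Generation zero: a batch of completely random guesses.
    population = [random.randint(0, 100) for _ in range(population_size)]
    for _ in range(generations):
        # Pick whichever example made it furthest towards the goal...
        best = max(population, key=fitness)
        # ...and seed the next generation with mutated copies of it.
        population = [best + random.randint(-mutation_range, mutation_range)
                      for _ in range(population_size)]
    return max(population, key=fitness)

print(evolve())  # after enough generations this lands on (or right next to) 42
```

The point of the toy is only to show the shape of the process: nothing in that loop 'understands' what 42 means, it just keeps whatever happened to score best and rolls the dice again.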
On the surface that may sound like a system similar to biological evolution, and therefore an effective model to follow, but as we discussed in my Transhumanism blog, machine learning is actually sorely lacking in comparison. Biological evolution doesn't just randomly mutate every generation in the hope of finding a new gene that better fits its environment; organisms are capable of reacting to issues, imminent and perceived, in order to better their lives and to facilitate the act of multiplying. (The end goal for all life except, it seems, for Giant Pandas, who would rather their species go extinct than procreate.) Machine learning lacks that innate sense of what is desirable for the outcome beyond what it is programmed to desire, and it is exceedingly hard (see: impossible) to teach a system to account for a problem that you haven't even considered yet.
The next notable level of AI is what we call 'Artificial General Intelligence' (AGI), denoting AI that is almost at the level of human intelligence (or 'on par with' it, depending on who you ask). The problem with this step of AI is that the closer our technology seems to get to this point, the harder it becomes to achieve. This is because as we creep closer to human intellect we slowly uncover new facets of its workings that throw off our perceived process, along with the fact that, as I just discussed, humans can be selective with the information they retain whilst a computer is designed to methodically go through every possible option. This iteration of artificial intelligence is what is known as 'Strong AI'.
Then there is the final evolution of AI, or rather the point at which that evolution becomes immeasurable. 'Superintelligent AI' surpasses the technological singularity and becomes capable of teaching itself its own algorithms. The big fear here is not of Superintelligent AI suddenly turning evil and attempting to wipe out humanity, but rather of it simply exceeding the bounds of our instructions and reaching a point where its goals do not match our own. Essentially, this isn't a question of malevolence but of competence.
At this point the most pertinent question one can ask in regard to AI is whether 'Superintelligent AI' is even possible, as in order to surpass the experiences of humanity, computers would have to be capable of a process that no other machine has ever achieved before: invention and innovation. Critics would point to this as being the one safe harbour of human intelligence; as far as we know there is no way to synthetically generate a new idea, only to build upon existing ones. However, some have countered that the bounds of knowledge are not, and never will be, entirely human.
To understand what I mean by that, I would direct you towards an example that I have mentioned previously: the ancient Chinese boardgame Go and the way that AI has interacted with it. Go is one of the oldest and most complex games of all time, and is often considered to be harder than even Chess. That is because the goal in Go is to maneuver one's units so that they surround more of your opponent's units than your opponent's surround yours, which is something that seems simple to achieve until you factor in the bevy of mental processes involved in achieving it, not least of all empathy. (That is a gross oversimplification of the rules of Go, by the by, but I've watched several tutorials and I can't even understand the game; so this is all I can offer.)
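That said, the 'surrounding' part of the rules can at least be written down, even if playing well can't. The snippet below is my own bare-bones illustration (nothing to do with DeepMind's code) of how a program might check whether a group of stones has been fully surrounded, i.e. has no empty points left next to it.

```python
def group_is_captured(board, start):
    """Return True if the group of stones containing `start` has no liberties left.

    `board` maps (row, col) -> 'B', 'W' or '.' for empty; this is a toy flood fill,
    not a full Go engine.
    """
    colour = board[start]
    seen, to_visit = {start}, [start]
    while to_visit:
        row, col = to_visit.pop()
        for neighbour in ((row + 1, col), (row - 1, col), (row, col + 1), (row, col - 1)):
            if neighbour not in board:
                continue                      # off the edge of the board
            if board[neighbour] == '.':
                return False                  # an empty point: the group still has breathing room
            if board[neighbour] == colour and neighbour not in seen:
                seen.add(neighbour)           # same-coloured stone: part of the same group
                to_visit.append(neighbour)
    return True                               # completely surrounded by the opponent
```

Checking a capture is trivial; knowing which of the roughly 10^170 possible board positions to steer towards is the part that took decades of research.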
That is why one of the leaders in Western AI development, Google, wanted to undertake the task of creating an AI with the capability to play and master such a game. The product was 2016's AlphaGo from the heads over at Google DeepMind, and it took a considerable amount of human teaching to get the machine to a competitive stage. At that point, AlphaGo was put up against Lee Sedol, an 18-time world champion of Go, and managed to beat him in four out of five games. In the eyes of the AI development world, this proved that modern-day AI could be capable of more than 'brute forcing'; it could possess the ability to strategize.
The most impressive part of this demonstration, however, came the next year, when Google DeepMind created another AI, AlphaGo Zero, for the sole purpose of defeating their world-champion killer. This new AI did not learn how to play the game with any human guidance and never played against a human at all during its development process, and yet it still picked up the game in 40 days. AlphaGo Zero then went on to beat AlphaGo in 100 games out of 100, soundly proving its superior intellect from merely one year's worth of innovation. AlphaGo Zero mastered expert strategies in its learning and even developed brand new ones that had never been seen before in play. One might take such a tale as proof that the game of Go holds more inhuman knowledge in its mysteries than human knowledge, which is a frightening prospect once you expand it to more general fields of knowledge today.
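The real system relies on deep neural networks and tree search, which is well beyond the scope of a blog, but the core idea of learning purely through self-play can be shown on something far simpler. The sketch below is entirely my own toy example, not DeepMind's method: it teaches itself the stick-taking game Nim by playing both sides against itself and nudging its estimate of each position towards whatever the eventual result turned out to be.

```python
import random
from collections import defaultdict

def self_play_training(games=20000, sticks_to_start=21, learning_rate=0.1, exploration=0.1):
    # value[n] = learned estimate of how good it is to be the player to move with n sticks left
    value = defaultdict(float)
    value[0] = -1.0  # no sticks left means the previous player just took the last one and won
    for _ in range(games):
        sticks, history = sticks_to_start, []
        while sticks > 0:
            moves = [m for m in (1, 2, 3) if m <= sticks]
            if random.random() < exploration:
                move = random.choice(moves)  # occasionally try something random
            else:
                # Otherwise leave the opponent in the worst position we currently know of.
                move = min(moves, key=lambda m: value[sticks - m])
            history.append(sticks)
            sticks -= move
        # The player who made the final move won; walk back through the game and
        # nudge every visited position towards the result from that player's point of view.
        result = 1.0
        for position in reversed(history):
            value[position] += learning_rate * (result - value[position])
            result = -result  # the two players alternate, so the sign flips each step
    return value

values = self_play_training()
# In this form of Nim, multiples of four are losing positions for the player to move;
# the learned values should tend towards negative there and positive everywhere else.
print({n: round(values[n], 2) for n in range(1, 22)})
```

No human ever tells the program which positions are good: it works that out solely from the outcomes of games against itself, which is the same broad principle AlphaGo Zero scaled up to Go.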
Or rather, it is frightening if you happen to be one of the growing number of people who are fearful of what the future of AI might hold. The CEO of Tesla, Elon Musk, has been one of the most outspoken critics regarding the dangers of artificial intelligence and has proposed that "the danger of AI is much greater than that of Nuclear Warheads." He makes this declaration on the grounds that "the rate of (AI) improvement is exponential" and it won't be long until it reaches beyond our ability to control, thus crossing that 'Event Horizon' I mentioned earlier. Musk, therefore, has pushed for stringent government regulation to be levied upon AI development in order to ensure that nothing is allowed to spiral out of control unchecked, reasoning that the world wouldn't allow a country to create nukes without oversight, and AI should be treated with the same seriousness.
Less apocalyptic critics have brought up the issues with the growth of AI in more practical terms, namely the way in which it effortlessly outstrips the capabilities of man. For productivity this is a positive, but when it comes to creating a sustainable ecosystem it can be a bit of a problem. How can a biological brain compete with the processes of robots? Simple. It can't. Biological neurons fire at a mere 200 Hz, whilst modern-day transistors (or 'modern' as of last year) are clocked at around 2 GHz, roughly seven orders of magnitude faster. In practical terms, this means that machines are capable of 'thinking' through and solving a problem faster than humans can even collate it. Specifically, nerve signals travel along axons at around 100 meters per second, roughly a third of the speed of sound, whereas signals inside a computer travel at an appreciable fraction of the speed of light (299,792,458 m/s). There is no question of competition there, and that could spell disaster for future job security in data-crunching fields. (Or even in simple labour fields.)
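To put some actual numbers on that comparison, here is the back-of-the-envelope arithmetic using the rough figures quoted above; real hardware and real brains vary, so treat these as ballpark ratios rather than gospel.

```python
# Ballpark figures quoted above; real values vary by hardware and by neuron type.
neuron_firing_rate_hz = 200          # biological neurons fire on the order of 200 times a second
transistor_clock_hz = 2_000_000_000  # a 2 GHz processor ticks two billion times a second

axon_signal_speed = 100              # nerve impulses: roughly 100 meters per second
light_speed = 299_792_458            # upper bound for signals inside a computer, in meters per second

print(f"Switching: transistors are ~{transistor_clock_hz // neuron_firing_rate_hz:,}x faster")
print(f"Signalling: electronic signals are up to ~{light_speed // axon_signal_speed:,}x faster")
```

That works out to transistors switching around ten million times faster than neurons fire, and signals moving up to a few million times faster than nerve impulses, which is why "just think faster than the machine" is not a viable career plan.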
Incidentally, those statistics above are the reason why many AI-relevant fictional stories such as 'Terminator' and 'Robocop' are unrealistic. The human resistance of the 'Terminator' world would be powerless to fight against the machines, not just due to the machines' technological superiority, but because the machines would be able to literally move out of the way of incoming ballistics before humans had finished deciding where to shoot. The T-800s would be literally unstoppable on the battlefield and humanity would be forced to go into hiding in order to survive. Similarly, in 'Robocop', Alex Murphy's organic brain would be a terrible fit for his robotic body, as its slow processes would prove an actual detriment to his body's actions. ED-209 would be far more capable and deadly than him and would, realistically, kill Murphy before he had a chance to draw his awkwardly holstered gun. (Although, to be fair, the ED-209's bulky body likely does it no favours in matters of agility. Or going up stairs.)
Then there are those who scoff at the possibility of Superintelligent AI, or rather at its imminence. Take the word of Christopher Bishop, laboratory director of Microsoft Research Cambridge, for example, who has spoken about "how different the narrow intelligence of AI today is from the general intelligence of humans," adding that "when people worry about 'Terminator and the rise of the machines' and so on? Utter nonsense, yes. At best, such discussions are decades away." Now, one might look at a statement like that and retort that such discussion will be too late once we start down the road to general AI and beyond, but in response there are those who argue further still that AI could never reach the intellect of humanity due to the insurmountable weaknesses of machines.
It is here that the whole thought experiment of 'The Chinese Room' becomes applicable. (Which I covered extensively in a previous blog.) This is a side of the argument that I intend to pick up on next time, when I go into the ways that artificial intelligence is evolving and attempting to bridge the gap between machines and flesh. Maybe then, with everything I have found thus far, we'll be able to come to our own conclusions about just how real the threat of 'Supersmart AI' really is.