Sunday, 19 January 2020

Is Supersmart AI a possibility? Part 2

They'll be back

Today I'm going to do my best to round up my 2019 research on Supersmart AI and come to a final conclusion on what superintelligent AI actually is and just how looming a threat it really seems. Through the course of my research, I felt it important to take a look at the arenas where AI development is thriving most and, surprise, surprise, I found one of the leaders to be a topic quite familiar to this blog. You see, whilst 'weak AI' (defined in the last blog) is an incredibly useful modern tool for managing packets of data, the greatest investment and innovation towards it is being funded not by any one industry but by a nation: China. And that makes sense, doesn't it?

China is the most populated country in the world and, unlike all the other countries on that top-ten list, its government very much has the desire to monitor and sanitize as much of that populace as humanly possible. We hear about the ludicrous amount of surveillance that goes into 'safeguarding' the average Chinese city, as well as the deeply ingrained 'social credit system', but rarely do we reflect upon the complexity of the AI systems involved in making all of that possible. What's more, China has a desire to increase its presence on the world stage in order to become the global superpower, and as such it devotes a heavy amount of research and funds to areas that it presumes will blow up in the years to come. Should it be any surprise, then, that China is the leader in AI technology around the entire globe? All that comes with the caveat, however, that this drive to 'improve' AI is aimed more at increasing the capacity and scope of 'weak AI' than at expanding the abilities of that AI itself and moving towards 'General AI'. (Intelligence comparable to humans.)

Many scholars, such as those who agree with the arguments presented by John Searle's 'Chinese Room' thought experiment, would argue that there is a fundamental barrier that AI will never be able to cross. The thing that we humans like to argue is the key part of our brain function separating us from animals is something we refer to as 'inspiration'. For the sake of this discussion, we'll choose to define such a broad word as 'the ability to come up with new ideas'. (Or to compound upon old ones to create innovation.) As computers are expressly designed to operate within the parameters of their instructions, they should in theory be incapable of 'inspiration', which would make their capacity to learn inferior to that of even the dullest human. As such, General AI should be an impossibility. But there are those who disagree, and even those who think that concepts such as 'inspiration' are complete fabrications of human hubris. (But that's a topic I really don't have the time to go into today.) As I hinted at earlier, there is one medium in which AI possibilities are regularly being pushed to their limit, and true to this blog it lies in the world of gaming.

In 2018, Danny Lange gave a talk through GOTO which explored the scope for AI development. What made this particular demonstration worth paying attention to was the fact that Lange wasn't interested in AI that was explicitly taught how to reach its goal, but rather AI that learned what to do through the act of 'deep learning'. Deep learning is a model of development very much inspired by biology, albeit in a more mindless, thrashing fashion, so it's the perfect system upon which to test the possibilities of AI simulating human-like intelligence. To this end, he named his talk "Deep Learning in Gaming" and you can watch it yourself to get a better understanding of what we're talking about. But I'll attempt to pull out the salient points.

Gaming makes for a great environment in which to conduct testing of learning capabilities, as its objective-based ecosystem works to quantify traditionally qualitative traits such as 'problem solving'. Therefore, in order to create a game that simulates human intelligence, we would have to define the key operating goal of every human being, those driving forces that unite us all. To that end, we must look at our base drives: we work in order to maintain energy and prevent entropy. We amass food in order to consume and keep our energy up, maintain order to prolong reliable access to energy-supplementing sources, and seek to multiply to stave off entropy, even if that's more symbolic. When we simplify intelligence like this, we aren't so much measuring 'human' intelligence as we are observing 'biological' intelligence in general, but that is an important stepping stone on the road to general intelligence.

Using a game engine no more powerful than Unity, Mr. Lange showed us a model of AI that taught itself to navigate an environment in order to reach its goal, with no prior programming to assist it in that task. The only incentive provided to the AI was a standard reinforcement learning algorithm, a process not so dissimilar to the 'Pavlov's Dogs' model. The AI would navigate its avatar towards the goal and, once it got there, it would receive a 'point'; this builds upon how learning is achieved in nature all around us, through observation and reinforcement. By building upon this model for deep learning, one could teach an AI how to explore and exploit in a manner that replicates very human behaviours. Once the AI started to show some emergent behaviours, such as demonstrating an understanding of the reward function, the courses could be made progressively harder to ramp up its rate of learning. (Which, incidentally, is another parallel showing how computers learn through a curriculum similar to humans.)
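To make that 'act, score a point, reinforce' loop a little more concrete, here is a minimal sketch of reinforcement learning in Python. To be clear, this is not Lange's Unity ML-Agents setup; it's a toy tabular Q-learning agent on a five-cell corridor that I've put together purely to illustrate an agent learning to reach a goal from nothing but a reward signal.

```python
import random

# Toy tabular Q-learning on a five-cell corridor. The agent starts at cell 0,
# the only reward is a single 'point' for reaching cell 4, and everything it
# learns comes from that reward being propagated back through the table.

GOAL = 4
ACTIONS = [-1, +1]                      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly exploit what has been learned, occasionally explore at random
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), GOAL)
        reward = 1.0 if next_state == GOAL else 0.0   # the 'point'
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Reinforcement: nudge this (state, action) value towards the reward
        # plus the discounted value of wherever the action led
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# The learned policy: which direction each cell now prefers (all should be +1)
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})
```

After a few hundred episodes the table settles on 'step right' from every cell, which is the whole trick: no path was ever programmed in, only the reward.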

Now, the problem with this model is this: standard reinforcement learning takes too long. As I've mentioned several times now, when you provide a computer with nothing more than a goal it will take literally every path in order to reach it, no matter how little that pathway makes sense. Lange described solving this problem as teaching AI to 'learn long short-term memory', explaining that memory needs depth in order to grow into intelligence. To achieve this, a new parameter was added to the AI's reward system that would mirror the values of humans, in the form of extrinsic and intrinsic rewards. Most AI deep learning evolves by offering extrinsic reward values, but if one could find a way to give AI the ability to distinguish between the extrinsic and the intrinsic, one could then promote very human-like qualities in that machine, such as curiosity. We saw an example of an algorithm that seemed to display exactly that during this presentation, and in the pursuit of expanding the scope of AI, that alone is supremely promising.
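As a rough illustration of the extrinsic/intrinsic split, the sketch below bolts a simple novelty bonus onto a task reward. This is an assumption on my part about how one might implement 'curiosity': the talk used learned curiosity signals inside Unity, whereas here the intrinsic part is just a count-based bonus that fades as a state becomes familiar.

```python
from collections import defaultdict

# Hedged sketch: combine the task's extrinsic reward with an intrinsic
# 'curiosity' bonus. States the agent has rarely visited feel novel and pay
# out more; repeated visits pay out less, nudging the agent to explore.

visit_counts = defaultdict(int)
CURIOSITY_WEIGHT = 0.1   # illustrative weighting, not a figure from the talk

def total_reward(state, extrinsic_reward):
    """Task reward plus a novelty bonus that decays with familiarity."""
    visit_counts[state] += 1
    intrinsic_bonus = 1.0 / (visit_counts[state] ** 0.5)
    return extrinsic_reward + CURIOSITY_WEIGHT * intrinsic_bonus

# The same state stops being interesting the more often it is revisited
print(total_reward("room_A", 0.0))   # first visit: full bonus
print(total_reward("room_A", 0.0))   # second visit: smaller bonus
```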

But what we have discussed so far has been a string of maybes and hypotheticals, or small-scale experiments that may mean something more at some far-off date in the future; perhaps the most important question when it comes to the possibility of supersmart AI is whether we even want it. Now, that isn't to fearmonger about how AI has the potential to wipe out humanity, but merely to wonder if creating such an intelligence would benefit mankind's struggles. Anthropomorphism has a tendency to make one assume that general or supersmart intelligence will emulate us, a train of thought which shapes the very methods we use to teach AI, although artificial minds might not end up thinking anything like ours do.

To demonstrate this best, I heard one thought experiment that I will refer to as the 'Stamp Collector conundrum'. The scenario goes like this: you are a stamp collector with the desire to get as many rare stamps as humanly possible. To achieve this end, you buy as many as you can off eBay, but you find it hard to track them down, so you create a general AI to assist you with the task, giving it the goal "help me get as many stamps as possible". With the help of this AI you are able to contact thousands of fellow stamp collectors simultaneously in order to buy their stamps off them, but you feel like you could do better, and so does your AI. Now your AI is scouring the web for folks who aren't advertising their wares on eBay, for anyone who might have stamps, and spamming emails their way. But you only have so much money, so you can't buy them all. So the AI begins spoofing credit card numbers to pay for the transactions that you can't afford, or scamming the stamps out of the sellers in whatever way possible. Soon you have more stamps than you know what to do with, but the AI hasn't reached its goal of "as many stamps as possible". Maybe it will hack into the postal service and start shipping stamps directly to your door. Soon it comes to a sobering realization: there are only a certain number of stamps on the planet, so the obvious solution would be to create more. The AI starts hacking home printers, money printers and industrial plants all over the world, all to create stamps to ship to your door. What happens when it starts to run low on resources, or energy? Maybe it starts diverting energy from nearby infrastructure and taking over automatic harvesters to accelerate deforestation efforts. You get the message. It probably ends with the end of all humanity.

Now, obviously, that example is positively seeping in hyperbole, but the point is still valid: who says that a machine capable of the same level of thought as a human would play by human rules? Why would it? How could it share our ethical compass without the generations' worth of evolution and societal training needed to ingrain it? How could we teach the goals of humanity to an AI, make it share and retain those goals as it evolves ever further, and is it even ethical to force that? Up until this point, humanity has managed to evolve by pooling its knowledge; until we can be sure that the next step of AI would be willing and able to do the same, perhaps it's best to slow down on the AI bandwagon.

Not that such a slowdown would ever be likely in the near future, not with some of the biggest companies in the world throwing all their money at it every year. In America we see huge businesses like Google, Facebook, Amazon and Microsoft leading the charge for AI, whilst in China we can see Baidu, Tencent and Alibaba doing the exact same. It is for that reason, as well as the significant level of spending from the Chinese government (like I mentioned earlier), that many call AI "the new Space Race", with its ultimate winner earning the prize of potentially shaping the direction of humanity's future. Therefore, when we come back around to the question that started this all off, is Supersmart AI a possibility, our corporations will be the ones to answer it for us, not our scientists. From my point of view, however, I'm slowly warming to the possibilities. Technology is always expanding and growing in surprising ways, but we must always be aware of the fact that once that line has been crossed, it is a can of worms that can never be closed again. (I doubt we'll get there in our lifetimes, though.) In my next blog, which won't be next Sunday, we'll take a look at the leaders of AI to determine exactly where this 'inevitable' breakthrough will occur.

Sunday, 12 January 2020

Is Supersmart AI a possibility? Part 1

I have to go. Somewhere there is a crime happening.

Last year I penned a group of discussion articles about real-world topics that are, even if only tangentially, linked to the world of gaming in a manner that I found interesting. I very much intended to do more, but I was interrupted by a series of other articles that I had to write, alongside some prolonged research that I intended to do for a topic that would wrap up the common theme in all those blogs. (And yes, I am at least two months late with this particular blog, but here we are.) To recap, in those many blogs a recurring topic was the way in which the technology of today and the near-to-distant future could and would be shaped by the guiding hand of what we call artificial intelligence. Much of the information that I have gathered is from last year, and this is a topic that is forever developing, so I may need to do another update later this year, but this is a culmination of the key chunks of information that I have found revolving around superintelligent artificial intelligence. (Or 'Supersmart AI'.)

First, let's define what it is that I mean by 'Superintelligent AI'. 'Artificial intelligence' is, by design, a catch-all term to describe all of the algorithms and processes that make up the capabilities of learning software. Typically, this means that AI refers to systems that collate and sift through vast quantities of data at record speed and, in advanced cases, may even offer some rudimentary interpretation to be acted upon. (This is the kind of thing that allows Google to tailor search results and suggestions to your specific wants.) But these aren't the typical kinds of systems that we think about when we talk about 'artificial intelligence' in either a realistic or fictionalized sense. To accurately convey the difference between Supersmart AI and what we have today, I'll point to one such fictionalized interpretation that we should all be familiar with: that of Skynet from Terminator. (That also covers the 'video game' link, as there was an earnest, if mediocre, Terminator game released a couple of months back.)

In 'Terminator', Skynet is an operating system that is most famous for outgrowing the scope and intellect of its creators and developing goals of its own. Those goals, of course, being the eradication of all human life on the planet, although 'superintelligent' AI does not necessarily need to adopt those specific goals in order to meet the qualifications of its name. The moniker is merely used to denote an artificial intelligence that has crossed what we call the 'Event Horizon': a point at which the intelligence and scope of AI matches that of humanity, and then proceeds to eclipse humanity at an exponential rate. (As technology is wont to do.) This is the thing that AI cautionists like Elon Musk are very much terrified of, as it signifies the point at which technology expands to a level that far surpasses our wildest dreams, making it effectively impossible to plan for what that age might hold.

Currently, the AI that we are working with in the world today is categorized under the terminology of 'Narrow Intelligence' or 'Weak AI'. This type of tech is very good at crunching through quantitative data and working with speech and image recognition, or running algorithms and playing games with simple rules. This type of AI typically learns its craft through an iterative, evolutional process whereby it runs generation after generation of testing and picks the model for the next batch of generations from the example which made it furthest towards its pre-established 'goal'. This is often called 'brute forcing' in hacker movies, or 'machine learning' in the real world.
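If that generational loop sounds abstract, here is a tiny sketch of it. This is my own illustrative toy, not any production system: each generation, a batch of mutated candidates is scored against a fixed goal, the fittest one seeds the next batch, and after enough rounds the population converges on the goal without ever being told how to get there.

```python
import random

# Toy 'generation after generation' loop: guess, score against a fixed goal,
# keep the best guess, mutate it, repeat. Purely illustrative.

GOAL = [7, 3, 9, 1, 5]                        # the pre-established 'goal'

def score(candidate):
    """Closeness to the goal (higher is better, 0 is a perfect match)."""
    return -sum(abs(c - g) for c, g in zip(candidate, GOAL))

def mutate(parent):
    """Copy the parent and randomly tweak one position by one step."""
    child = parent[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

best = [random.randint(0, 9) for _ in GOAL]    # generation zero: pure guesswork
for generation in range(200):
    batch = [mutate(best) for _ in range(20)] + [best]
    best = max(batch, key=score)               # the fittest example seeds the next batch

print(best, score(best))                       # converges on GOAL (score approaches 0)
```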

From the surface that may sound like a system similar to biological evolution, and therefore an effective model to follow, but as we discussed in my Transhumanism blog, machine learning is actually sorely lacking in comparison. Biological evolution doesn't just randomly evolve every generation in hopes of finding a new gene that better fits it's environment, it's capable of reacting to issues, imminent and perceived, in order to better the lives of the organism and to facilitate the act of multiplying. (The end goal for all life except, it seems, for Giant Pandas who would rather their species go extinct than procreate.) Machine learning lacks that innate sense of what is desirable for the outcome beyond what it is programmed to desire, and it's exceedingly hard (See: impossible) to teach a system to account for a problem that you haven't even considered yet.

The next notable level of AI is what we call 'Artificial General Intelligence' (AGI). This denotes AI that is almost at the level of human intelligence (or 'on par with', depending on who you ask). The problem with this step of AI is that the closer our technology seems to get to this point, the harder it becomes to achieve. This is because as we creep closer to human intellect we slowly uncover new facets of its workings that throw off our perceived process, along with the fact that, as I just discussed, humans can be selective with the information they retain whilst a computer is designed to methodically go through every possible option. This iteration of artificial intelligence is what is known as 'Strong AI'.

Then there is the final evolution of AI, or rather the point at which that evolution becomes immeasurable. 'Superintelligent AI' surpasses the technological singularity and becomes capable of teaching itself its own algorithms. The big fear here is not of superintelligent AI suddenly turning evil and attempting to wipe out humanity, but rather of it simply exceeding the bounds of our instructions and reaching a point where its goals do not match our own. Essentially, this isn't a question of malevolence but of competence.

At this point, the most pertinent question one can ask in regard to AI is whether or not 'Superintelligent AI' is even possible, as in order to surpass the experience of humanity, computers would have to be capable of a process that no machine has ever achieved before: invention and innovation. Critics would point to this as being the one safe harbour of human intelligence; as far as we know, there is no way to synthetically generate a new idea, only to build upon existing ones. However, some have countered that the bounds of knowledge are not, and never will be, entirely human.

To understand what I mean by that, I would direct you towards an example that I have mentioned previously: the ancient Chinese boardgame Go and the way that AI has interacted with it. Go is one of the oldest and most complex games of all time, and is often considered to be harder than even Chess. That is because the goal in Go is to manoeuvre one's units to a point where they surround more of your opponent's units than your opponent has surrounded of yours, which seems simple to achieve until you factor in the bevy of mental processes involved, not least of all empathy. (That is a gross oversimplification of the rules of Go, by the by, but I've watched several tutorials and I can't even understand the game; so this is all I can offer.)

That is why one of the leaders in Western AI development, Google, wanted to undertake the task of creating an AI with the capability to play and master such a game. The product was 2016's AlphaGo from the heads over at Google DeepMind, and it took a considerable amount of human teaching to get the machine to a competitive stage. At that point, AlphaGo was put up against the 18-time world champion Lee Sedol, and managed to beat him in four out of five games. In the eyes of the AI development world, this proved that modern-day AI could be capable of more than 'brute forcing'; it could possess the ability to strategize.

The most impressive part of this demonstration, however, came the next year, when Google DeepMind created another AI, AlphaGo Zero, for the sole purpose of defeating their world-champion killer. This new AI did not learn how to play the game with any human guidance and never played against a human during its development process at all, and yet it still picked up the game in 40 days. AlphaGo Zero then went on to beat AlphaGo in 100 games out of 100, soundly proving its superior intellect after merely one year's worth of innovation. AlphaGo Zero mastered expert strategies in its learning and even developed brand new ones that had never been seen before in play. One may take such a tale as proof that the game of Go holds more inhuman knowledge in its mysteries than human knowledge, which is a frightening prospect once you expand the idea to more general fields of knowledge today.

Or rather, it is frightening if you happen to be one of the growing number of people who seem fearful of what the future of AI might hold. The CEO of Tesla, Elon Musk, has been one of the most outspoken voices on the dangers of artificial intelligence and has proposed that "the danger of AI is much greater than that of nuclear warheads." He makes this declaration on the grounds that "the rate of (AI) improvement is exponential" and it won't be long until it reaches beyond our ability to control, thus crossing that 'Event Horizon' that I mentioned earlier. Musk, therefore, has pushed for stringent government regulation to be levied upon AI development in order to ensure that nothing is allowed to spiral out of control unchecked, reasoning that the world wouldn't allow a country to create nukes without oversight, and AI should be treated with the same seriousness.

Less apocalyptic critics have brought up issues with the growth of AI in more practical terms, namely the way in which machines effortlessly outstrip the capabilities of man. For productivity this is a positive, but when it comes to creating a sustainable ecosystem it can be a bit of a problem. How can a biological brain compete with the processes of robots? Simple. It can't. Biological neurons fire at a mere 200 Hz, whilst modern-day processors (or, 'modern' as of last year) run at 2 GHz, roughly ten million times, or seven orders of magnitude, faster. In practical terms, this means that machines are capable of 'thinking' and problem solving faster than humans can even collate the problem. Specifically, nerve signals travel along axons at around 100 meters per second, roughly a third of the speed of sound, whereas computers transmit data at close to 299,792,458 m/s, the speed of light. There is no question of competition there, and that could spell disaster for future job security in data-crunching fields. (Or even in simple labour fields.)
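For the numerically curious, here is the quick back-of-envelope check behind those figures (my own arithmetic, using the rough numbers quoted above):

```python
# Back-of-envelope check of the speed comparisons above.

processor_hz = 2e9             # ~2 GHz clock
neuron_hz = 200                # ~200 Hz firing rate
print(processor_hz / neuron_hz)            # 10,000,000 -> seven orders of magnitude

light_speed_m_s = 299_792_458  # speed of light in a vacuum
axon_speed_m_s = 100           # fast axon conduction, roughly
print(light_speed_m_s / axon_speed_m_s)    # ~3,000,000 times faster
```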

Incidentally, those statistics above are the reason why many AI-relevant fictional stories such as 'Terminator' and 'Robocop' are unrealistic. The human resistance of the 'Terminator' world would be powerless to fight against the machines, not just due to their technological superiority, but because the machines would literally be able to move out of the way of incoming fire before the humans had finished deciding where to shoot. The T-800s would be literally unstoppable on the battlefield, and humanity would be forced to go into hiding in order to survive. Similarly, in 'Robocop', Alex Murphy's organic brain would be a terrible fit for his robotic body, as its slow processes would prove an actual detriment to his body's actions. ED-209 would be far more capable and deadly than him and would, realistically, kill Murphy before he had a chance to draw his awkwardly holstered gun. (Although, to be fair, the ED-209's bulky body likely does it no favours in matters of agility. Or going up stairs.)

Then there are those who scoff at the possibility of superintelligent AI, or rather just at its imminence. Take the word of Christopher Bishop, Microsoft's director of research, for example, who remarked upon "how different the narrow intelligence of AI today is from the general intelligence of humans", adding that "when people worry about 'Terminator and the rise of the machines' and so on? Utter nonsense, yes. At best, such discussions are decades away." Now, one might look at a statement like that and retort that such discussion will be too late once we start down the road to general AI and beyond, but in response there are those who argue further still that AI could never reach the intellect of humanity due to the insurmountable weaknesses of machines.

It is here where the whole thought experiment about 'The Chinese Room' becomes applicable. (Which I covered extensively in a previous blog.) This is a side of the argument that I intend to pick up on next time, when I go into the ways that artificial intelligence is evolving and attempting to bridge the gap between machines and flesh. Maybe then, with everything I could find thus far, we'll be able to come to our own conclusions about just how real the threat of 'Supersmart AI' really is.