Along the Mirror's Edge

Sunday 19 January 2020

Is Supersmart AI a possibility? Part 2

They'll be back

Today we are going to do our best to round up my 2019 research on supersmart AI and come to a final conclusion on what superintelligent AI actually is and just how looming a threat it really seems. Through the course of that research I felt it important to take a look at the fields in which AI development is advancing fastest and, surprise, surprise, I found one of the leaders to be a topic quite familiar to this blog. You see, whilst 'weak AI' (defined in the last blog) is an incredibly useful modern tool for processing huge volumes of data, the greatest investment and innovation towards it is being funded not by an industry in general but by a nation: China. And that makes sense, doesn't it?

China is the most populated country in the world and, unlike the other countries on that top ten list, its government very much has the desire to monitor and sanitize as much of that populace as humanly possible. We hear about the ludicrous amount of surveillance that goes into 'safeguarding' the average Chinese city, as well as the deeply ingrained 'social credit system', but rarely do we reflect upon the complexity of the AI systems involved in making all that possible. What's more, China has a desire to increase its presence on the world stage and become the global superpower, and as such it devotes heavy research funding to areas it presumes will blow up in the years to come. Should it be any surprise, then, that China leads the entire globe in AI technology? Bear in mind, however, that this drive to 'improve' AI is aimed more at increasing the capacity and scope of 'weak AI' than at expanding the abilities of that AI itself and moving towards 'general AI' (intelligence comparable to humans).

Many scholars, such as those who agree with the arguments presented by John Searle's 'Chinese Room' thought experiment, would argue that there is a fundamental barrier that AI will never be able to cross. The thing we humans like to argue is the key part of our brain function separating us from animals is something we refer to as 'inspiration'. For the sake of this discussion, we'll define that broad word as 'the ability to come up with new ideas' (or to compound upon old ones to create innovation). As computers are expressly designed to operate within the parameters of their instructions, they should be incapable of 'inspiration', which would make their capacity to learn inferior to that of even the dullest human. As such, general AI should be an impossibility. But there are those who disagree, and even those who think that concepts such as 'inspiration' are complete fabrications of human hubris (a topic I really don't have the time to go into today). As I hinted at earlier, there is one medium in which the possibilities of AI are regularly being pushed to their limit, and true to this blog it lies in the world of gaming.

In 2018, Danny Lange gave a talk through GOTO which explored the scope for AI development. What made this particular demonstration worth paying attention to was that Lange wasn't interested in AI that was explicitly taught how to reach its goal, but rather AI that learned what to do through 'deep learning'. Deep learning is a model of development very much inspired by biology, albeit in a more mindless, thrashing fashion, so it's the perfect system upon which to test the possibilities of AI simulating human-like intelligence. To this end, he titled his talk "Deep learning in gaming", and you can watch it yourself to get a better understanding of what we're talking about, but I'll attempt to pull out the salient points.

Gaming makes for a great environment in which to test learning capabilities, as its objective-based ecosystem works to quantify traditionally qualitative traits such as 'problem solving'. Therefore, in order to create a game that simulates human intelligence, we would have to define the key operating goal of every human being, the driving forces that unite us all. To that end, we must look at our base drives: we work to maintain energy and prevent entropy. We amass food to consume and keep our energy up, maintain order to prolong reliable access to energy-supplementing sources, and seek to multiply to stave off entropy, even if that last one is more symbolic. When we simplify intelligence like this, we aren't so much measuring 'human' intelligence as observing 'biological' intelligence in general, but that is an important stepping stone on the road to general intelligence.
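
To make that idea a little more concrete, here is a tiny toy sketch in Python of how a game-like environment can turn a qualitative drive like 'maintain energy, stave off entropy' into a score an agent can be judged on. To be clear, this isn't anything from Lange's talk or from Unity; the ForagingWorld class and all of its numbers are purely illustrative assumptions on my part.

# Toy sketch (my own illustration, not Lange's): a survival drive as a score.
import random

class ForagingWorld:
    """Minimal survival environment: the agent loses energy every step and
    only regains it by choosing to 'eat' when food happens to be found."""

    def __init__(self):
        self.energy = 10

    def step(self, action):
        self.energy -= 1                          # entropy: living costs energy
        if action == "eat" and random.random() < 0.5:
            self.energy += 3                      # food found: energy restored
        reward = 1 if self.energy > 0 else -10    # staying 'alive' is the goal
        done = self.energy <= 0
        return reward, done

world = ForagingWorld()
total = 0
for _ in range(50):
    reward, done = world.step(random.choice(["eat", "wander"]))
    total += reward
    if done:
        break
print("score for this (random) lifetime:", total)

A random agent scores poorly here; anything that learns to 'eat' more often survives longer and scores higher, which is exactly the kind of quantified stand-in for a biological drive the paragraph above is describing.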

Using a game engine no more powerful than Unity, Mr. Lange showed us a model of AI that learned to navigate an environment in order to reach its goal, with no prior programming to assist it in that task. The only incentive provided to the AI was a standard reinforcement learning algorithm, a process not so dissimilar to the 'Pavlov's dogs' model. The AI would navigate its avatar to the goal and, once it got there, would receive a 'point'; this builds upon how learning is achieved in nature all around us, through observation and reinforcement. By building upon this model for deep learning, one could teach an AI to explore and exploit in a manner that replicates very human behaviours. Once the AI started to show some emergent behaviours, such as demonstrating an understanding of the reward function, the courses could become progressively harder to ramp up its rate of learning (which, incidentally, is another parallel showing how computers learn through a curriculum, much as humans do).
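
For anyone who would like to see what that loop actually looks like, here is a minimal sketch of tabular reinforcement learning in Python. I'm assuming a toy one-dimensional corridor rather than Lange's actual Unity course, and the learning parameters are arbitrary; the point is only that the agent starts knowing nothing and the lone 'point' at the goal slowly gets turned into a policy.

# Minimal tabular Q-learning sketch (toy corridor, not the Unity demo).
import random

N_STATES = 6          # positions 0..5, with the goal at position 5
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # explore at random sometimes, otherwise exploit what has been learned
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # the 'point'
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print("preferred step at each position:",
      [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])

Run it and every position ends up preferring the step towards the goal, learned purely from that single reward signal and a lot of blind wandering.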

Now, the problem with this model is this: standard reinforcement learning takes too long. As I've mentioned several times now, when you provide a computer with nothing more than a goal it will take literally every path to reach it, no matter how little sense that path makes. Lange described solving this problem as teaching AI to 'learn long short-term memory', the idea being that memory needs depth in order to grow into intelligence. To achieve this, a new parameter was added to the AI's reward system that mirrors human values, in the form of extrinsic and intrinsic rewards. Most AI deep learning evolves by offering extrinsic reward values, but if one could find a way to give an AI the ability to distinguish between the extrinsic and the intrinsic, one could then promote very human-like qualities in that machine, such as curiosity. We saw an example of an algorithm that seemed to display exactly that during the presentation, and in the pursuit of expanding the scope of AI, that alone is supremely promising.
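
Since 'intrinsic reward' and 'curiosity' can sound a little mystical, here is a hedged sketch of the simplest version of the idea I know of: adding a novelty bonus, which fades as a state becomes familiar, on top of whatever the environment itself pays out. This is not the algorithm from the presentation, which used far more sophisticated machinery; the count-based bonus and the curiosity_weight parameter are just my own illustrative assumptions.

# Hedged sketch: extrinsic reward plus a simple count-based 'curiosity' bonus.
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def combined_reward(state, extrinsic_reward, curiosity_weight=0.1):
    """Return the environment's reward plus a novelty bonus that decays
    the more often this state has already been visited."""
    visit_counts[state] += 1
    intrinsic_reward = 1.0 / math.sqrt(visit_counts[state])  # fades with familiarity
    return extrinsic_reward + curiosity_weight * intrinsic_reward

# Rarely-visited states keep paying out even when the environment gives nothing,
# which is what nudges the agent to explore rather than thrash at random.
print(combined_reward("room_A", extrinsic_reward=0.0))  # high novelty bonus
print(combined_reward("room_A", extrinsic_reward=0.0))  # bonus already smaller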

But what we have discussed so far has been a string of maybes and hypotheticals, or small-scale experiments that may mean something more at some far-off date in the future; perhaps the most important question when it comes to the possibility of supersmart AI is whether we even want it. Now, that isn't to fearmonger about how AI has the potential to wipe out humanity, but merely to wonder whether creating such an intelligence would benefit mankind's struggles. Anthropomorphism has a tendency to make one assume that general or supersmart intelligence will emulate us, a train of thought which shapes the very methods we use to teach AI, although artificial minds might not end up thinking anything like ours do.

To demonstrate this best, I heard one thought experiment that I will refer to as the 'stamp collector conundrum'. The scenario goes like this: you are a stamp collector with the desire to get as many rare stamps as humanly possible. To achieve this end, you buy as many as you can off eBay, but you find it hard to track them down, so you create a general AI to assist you with the task, giving it the goal "help me get as many stamps as possible". With the help of this AI, you are able to contact thousands of fellow stamp collectors simultaneously to buy their stamps off them, but you feel like you could do better, and so does your AI. Now your AI is scouring the web for folks who aren't advertising their wares on eBay, for anyone who might have stamps, and spamming emails their way. But you only have so much money, so you can't buy them all. So then the AI begins spoofing credit card numbers to pay for the transactions you can't afford, or scamming the stamps from the sellers in whatever way possible. Soon you have more stamps than you know what to do with, but the AI hasn't reached its goal of "as many stamps as possible". Maybe it will hack into the postal service and start shipping stamps directly to your door. Soon it comes to a sobering realization: there are only a certain number of stamps on the planet, so the obvious solution is to create more. The AI starts hacking home printers all over the world, and money printers, and industrial plants, all to create stamps to ship to your door. What happens when it starts to run low on resources, or energy? Maybe it starts diverting energy from nearby infrastructure and taking over automatic harvesters to accelerate deforestation. You get the message. It probably ends with the end of all humanity.

Now, obviously, that is an example positively seeping in hyperbole, but the point still stands: who says that a machine capable of the same level of thought as a human would play by human rules? Why would it? How could it share our ethical compass without the generations' worth of evolution and societal training that ingrained it in us? How could we teach the goals of humanity to an AI and make it share and retain those goals as it evolves ever further, and is it even ethical to force that? Up until this point, humanity has managed to evolve by pooling its knowledge; until we can be sure that the next step of AI would be willing and able to do the same, perhaps it's best to slow down on the AI bandwagon.

Not that such a slowdown is ever likely in the near future, not with some of the biggest companies in the world throwing their money at it every year. In America we see huge businesses like Google, Facebook, Amazon and Microsoft leading the charge for AI, whilst in China we can see Baidu, Tencent and Alibaba doing the exact same. It is for that reason, as well as the significant level of spending from the Chinese government (as I mentioned earlier), that many call AI "the new Space Race", with its ultimate winner earning the prize of potentially shaping the direction of humanity's future. So when we come back around to the question that started this all off, is supersmart AI a possibility, it will be our corporations that answer it for us, not our scientists. From my point of view, however, I'm slowly warming to the possibilities. Technology is always expanding and growing in surprising ways, but we must always be aware that once that line has been crossed, it is a can of worms that can never be closed again. (I doubt we'll get there in our lifetimes, though.) In my next blog, which won't be next Sunday, we'll take a look at the leaders of AI to determine exactly where this 'inevitable' breakthrough will occur.
