House Price Crash Forum

Archived

This topic is now archived and is closed to further replies.

wonderpup

It May Not Just Be Our Jobs That Ai Will Take From Us


I came across this report from Wired about the ongoing contest between Google's AI system and Go grandmaster Lee Sedol. At the time the report was written, the AI had won the first two games.

But the interesting thing about the piece was that it was not the usual fanboy article extolling the virtues of a new technology but something rather unexpected in tone- the title of the piece was: The Sadness and Beauty of Watching Google’s AI Play Go.

The beauty refers to the unexpected elegance of the AI's moves- but the sadness was described thus:

But at the same time, AlphaGo’s triumph stirred a certain sadness in so many of the humans who watched yesterday’s match from the press rooms at the Four Seasons and, undoubtedly, in many of the millions of others who followed the contest on YouTube. After looking so powerful just a few days before, one of our own now seemed so weak.

http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/

Perhaps I am reading too much into this, but this paragraph seems to me to foreshadow an issue that will come to be seen as deeply significant in the future.

The idea that AI may threaten human jobs is now a media staple- but until this article I have seen almost no mainstream discussion of the psychological implications for humans if sophisticated AI systems really do begin to encroach on domains hitherto dominated by human intelligence.

The article goes on:

Rather unexpectedly, I felt this sadness as the match ended and I walked towards the post-game press conference. I was soon stopped by a Chinese reporter named Fred Zhou, whose home country has so closely followed this match.

According to Google, 60 million Chinese watched the first game on Wednesday afternoon. Zhou said he was so happy to talk with another technology reporter, lamenting how many journalists were treating the match like sport and hailing the power of Google’s machine learning.

But then his tone changed. He said that although he was so very excited to see AlphaGo triumph after Game One on Wednesday, he now felt a certain despair. In the first game, Lee Sedol was caught off-guard. In the second, he was powerless.

Again we have this meme of disenfranchisement and even a sense of emasculation- and this from reporters whose job it is to celebrate the onward march of technology. At the very least it's fascinating to see words like 'despair' and 'powerless' in a story from Wired about Artificial Intelligence.

Much has been written as to the damaging impact that contact with a superior alien species might have on the psyche of the human race, but little thought seems to have been given concerning the blow to our collective ego should we find ourselves intellectually outmatched by intelligent machines.

As it happens Lee Sedol has won the fourth game against Google's AI- but again the language in which this victory is couched is quite fascinating in what it reveals about the true stakes in this contest. This from the Guardian:

AlphaGo, developed by the Google subsidiary Deepmind has an insurmountable lead in the series, but Sedol’s win restored some human dignity.

http://www.theguardian.com/world/2016/mar/13/go-humans-lee-sedol-scores-first-victory-against-supercomputer

Human dignity?! Ok- it's written by a journalist- but even allowing for normal press hyperbole, to talk of human dignity in the context of a board game suggests some quite profound unease with the spectacle of a human intellectual champion being overthrown by a machine.

Perhaps I make too much of this- were similar kinds of emotion stirred up by Kasparov's defeat by Deep Blue? I can't recall. But all this melancholy is coming from somewhere- and at some point, if this technology proves to be as invasive and ubiquitous as some of its champions suggest, they may find that anxiety about jobs is replaced by a concern both more primitive and more profound: the fear that not only your job but the very meaning and purpose of your life may be undermined by some idiot savant technology whose only ability is to do the one thing that you do best, but do it better.

Here's a quote from Lee Sedol about his victory:

“This one win is so valuable and I will not trade this for anything in the world,” said Lee, one of the best Go players in the world.

Why valuable? He's not winning, he's losing- three games to one in a five-game match- so even if there is a prize he will not be collecting it. So valuable in what sense, exactly?

I think he means valuable in the sense that it at least restores some of that 'human dignity' that the Guardian hack was referring to.

To me these strange emotional responses to what is an essentially trivial tale about a man losing a board game to a machine are straws in the wind- an early sign that our current equanimity toward the idea of smart machines might not last too much longer. Something deep is being stirred here in the collective unconscious that is presently inchoate but has the potential to surface at some point into something more focused and hostile.


While all the other Bots waffle on this subject - I say bring it on. Hopefully the Byte space they take up in some dusty side of server rack city will show us all that houses really are in a bubble and will no longer be needed thus I predict going to the Chippy will not be about hot potato things on a Saturday night and even how pins is yours will be silent to the BGA club of bedroom go players.


There's a big difference between programming AI along defined rules in a board game and actual AI. I couldn't beat the computer at chess but didn't feel like I lost my dignity. I could always smash it.


I came across this report from Wired about the ongoing contest between Google's AI system and Go grandmaster Lee Sedol. At the time the report was written, the AI had won the first two games.

But the interesting thing about the piece was that it was not the usual fanboy article extolling the virtues of a new technology but something rather unexpected in tone- the title of the piece was: The Sadness and Beauty of Watching Google’s AI Play Go.

The beauty refers to the unexpected elegance of the AI's moves- but the sadness was described thus:

http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/

Perhaps I am reading too much into this, but this paragraph seems to me to foreshadow an issue that will come to be seen as deeply significant in the future.

The real test of an AI is not whether one can be built to beat a human at a game, but whether one can be built that can itself write another AI that can do so. IOW an AI that can replace the Google programmers rather than the game player is the real test.


The real test of an AI is not whether one can be built to beat a human at a game, but whether one can be built that can itself write another AI that can do so. IOW an AI that can replace the Google programmers rather than the game player is the real test.

When you can hand an AI machine a brown envelope full of £50 notes and get it to introduce a taxpayer-backed scheme that helps the person handing over the envelope, and convince the general populace you are doing this to help them... then you know AI has finally arrived.


I think the journalists' language ties into a sense of awe that we might be able to surpass or replace ourselves with a device of our own creation. It's Frankenstein-max. What we are seeing is a human engineering effort to build something that can solve the global problems that have defeated us - clean energy, advanced genetics, far-reaching space travel etc. But in so creating, we wipe ourselves out of the running.

The Go match is an exercise and call to attention; DeepMind's real objective is to "solve intelligence". I don't think they have much interest in Plato, Descartes or anything connected to the humanities - they are out to build machines that can compete with or beat our real-world human abilities: in data analysis - Google have their eye on hospital diagnosis cash - in service robotics - home help, refuse collection, driverless cars, shop staff - and in selling their algorithms into all areas of manufacturing, services and civic life.

Once the machines take up their role in society, how much further would state support have to extend to provide all the basic needs of life for the majority of the human populace, taking them out of employment forever? When I walk around my burg I reckon more than two thirds of the people I see are retired, in education, unemployed and fully supported by the state or part of the coffee drinker/coffee buyer service economy. And this global slowdown in growth doesn't look temporary to me.


The article was written by an AI bot.

They've already replaced us. We'll be told about it in due course, but they - the bots - are drip-feeding us the news so we don't fall into shock when the big announcement is made ;)

The real test of an AI is not whether one can be built to beat a human at a game, but whether one can be built that can itself write another AI that can do so. IOW an AI that can replace the Google programmers rather than the game player is the real test.

In a sense this system has already done what you are suggesting- some of its strategies were completely opaque even to the people who created it, meaning that they could not follow its reasoning. Whatever process it was using to come up with the strategies in question was inaccessible to its creators, which is slightly disturbing.

Not a problem in a system designed to play a board game- but maybe a bit more problematic in a system designed to drive a car or control some avionics.

It may well be that the very thing that makes these systems so powerful- their ability to evolve- will make them dangerously unpredictable in practice.


The AI won the fifth and final game:

Google's DeepMind artificial intelligence has secured its fourth win over a master player, in the final of a five match challenge.

http://www.bbc.co.uk/news/technology-35810133

Again some interesting psychological resonances here:

The 4-1 mechanical victory has also made some Go players doubt themselves.

The European champion who lost last year to AlphaGo said it had really knocked his self-confidence, even as it enabled him to climb up the world rankings.

So as a result of losing to the machine his game improved and he climbed up the world rankings- but still he says that his 'self-confidence' was knocked? Seems contradictory on the face of it.

But perhaps it's understandable this way- what does it mean to be a human champion in an arena in which the ultimate master is now a computer? The feeling seems to be that something more than a contest has been lost here- though it's hard to articulate precisely what that something is.

Maybe it's the fact that a talent once believed to be uniquely individual and human has now, by implication, been transformed into a commodity- after all, if one AlphaGo can be created so can a million more, each capable of beating the best humanity has to offer.

What these new technologies by implication represent is the commoditization of intelligence itself- yet intelligence is supposed to be the very thing that makes us humans so unique and special. So perhaps all this angst and soul-searching that the victory of AlphaGo seems to have engendered is a reflection of our anxiety that the accelerating pace of technological change is slowly turning us all into commodities- and of the most cheap and disposable kind.


I still think of AI as a tool - it doesn't have a face yet. So a guy hits you over the head with a hammer, who are you angry with - hammer or guy? The AI in the case of the Go match still feels linked to the developer team and hence I see them as the real player, though I take your point that the AI is beginning to go its own way. (DeepMind talk loudly about the need for caution and ethical advances in AI to prevent catastrophe, though everything they do seems to be pushing AI into completely ungoverned waters.)

I do share your implied sense of human redundancy though. Ref back to one of your other posts on the desperate attempts to talk up education and productivity to yield ever-more growth, I think it's clear the politicians are finding it impossible to shrink general state support for the masses and they'll be forced into some form of citizen's income - news out of New Zealand today that they're considering it.

AI will be very efficient at administering it.

On a side note, I saw a debate recently on the question of who will drive the refugees out of Greece - argument was that when it comes to the push, humans will fail to go through with their orders, to literally drag/kick/eject people out of the country. AI would have no qualms here, so it would fulfill the needs of the future world elite to act without question or compassion.


The impact in South Korea is pretty strong:

How victory for Google’s Go AI is stoking fear in South Korea

Watching Google’s AlphaGo AI eviscerate Korean grandmaster Lee Sedol put the nation into shock, especially after the national hero confidently predicted that he would sweep AlphaGo aside. The actual result laid bare the power of AI.

“Last night was very gloomy,” said Jeong Ahram, lead Go correspondent for the Joongang Ilbo, one of South Korea’s biggest daily newspapers, speaking the morning after Lee’s first loss. “Many people drank alcohol.”

This ability to make beauty has left many shaken. “This is a tremendous incident in the history of human evolution – that a machine can surpass the intuition, creativity and communication, which has previously been considered to be the territory of human beings,” Jang Dae-Ik, a science philosopher at Seoul National University, told The Korea Herald.

“Before, we didn’t think that artificial intelligence had creativity,” said Jeong. “Now, we know it has creativity – and more brains, and it’s smarter.”

As Lee’s losses stacked up, I kept getting worried messages from my Korean friends. “I thought it might be fun to watch, but now it’s getting really scary,” one of them said. Another told me: “Thinking that these AIs are only accessible to a few groups and people – it is scary.”

Headlines stacked up in the South Korean press too: “The ‘Horrifying Evolution’ of Artificial Intelligence,” and “AlphaGo’s Victory… Spreading Artificial Intelligence ‘Phobia’.”

https://www.newscientist.com/article/2080927-how-victory-for-googles-go-ai-is-stoking-fear-in-south-korea/

The part about who has access to the technology is interesting here- the geopolitical implications of who has this kind of technology and who does not may introduce an 'arms race' mentality that will accelerate the development of AI, should it come to be regarded as offering some kind of strategic advantage.

Maybe it's the fact that a talent once believed to be uniquely individual and human has now, by implication, been transformed into a commodity- after all, if one AlphaGo can be created so can a million more, each capable of beating the best humanity has to offer.

What we are really saying is that the intellect of 150 programmers (who are probably pretty smart), and billions of pounds' worth of investment in hardware research, has managed to beat a single bloke at a rules-based board game. And yet that AI could not be placed in a car and drive you home, which is a task that the most simple-minded taxi driver could achieve with ease.

These things are tools, not overlords. Until an AI has an element of "self" and "ambition", it will remain in this place. Was the machine elated to win the match? Not at all, it came to the end of its decision tree (or whatever) and was shut down. When the AI shouts "f*** yeah, bring it on sucker" at the end of the match, start to get worried if no one has programmed that into it.


What we are really saying is that the intellect of 150 programmers (who are probably pretty smart), and billions of pounds' worth of investment in hardware research, has managed to beat a single bloke at a rules-based board game. And yet that AI could not be placed in a car and drive you home, which is a task that the most simple-minded taxi driver could achieve with ease.

These things are tools, not overlords. Until an AI has an element of "self" and "ambition", it will remain in this place. Was the machine elated to win the match? Not at all, it came to the end of its decision tree (or whatever) and was shut down. When the AI shouts "f*** yeah, bring it on sucker" at the end of the match, start to get worried if no one has programmed that into it.

You miss the point here. The importance of this event is not what the Google computer achieved- but how it achieved it. You seem to suggest that it was achieved simply by throwing a vast amount of computing power at the problem, allowing the computer to come up with the 'right' move by looking at all possible moves and picking the best from that finite collection of possibilities- but this is not correct.

This method could work for other games like checkers or even chess, where the possible moves in any given situation can be calculated. But in the case of Go this method will not work, because the number of possible moves is so vast that even the Google computer could never calculate them all. In the case of Go the number of possible outcomes is, for all practical purposes, infinite.
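The scale gap being described here can be put in rough numbers. A quick sketch using common textbook estimates of branching factor (average legal moves per turn) and game length- approximate figures, not exact counts:

```python
import math

# Common rough estimates: average legal moves per turn (branching
# factor) and typical game length in plies (half-moves).
chess_branching, chess_plies = 35, 80
go_branching, go_plies = 250, 150

# Game-tree size grows as branching ** plies; work in log10 so the
# numbers stay representable.
chess_magnitude = chess_plies * math.log10(chess_branching)  # roughly 10^123
go_magnitude = go_plies * math.log10(go_branching)           # roughly 10^359

print(f"chess ~10^{chess_magnitude:.0f} lines of play, Go ~10^{go_magnitude:.0f}")
```

With perhaps 10^80 atoms in the observable universe, exhaustive enumeration is hopeless for either game- and over two hundred orders of magnitude more hopeless for Go.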

So imagine yourself in the same situation- you must make a choice between an incalculably large number of options- how would you go about making that choice? You can't simply throw time or brainpower at the problem because the number of choices is beyond calculation. So how do you choose?

The answer is that you make the choice based on your experience of making similar choices in the past that had the optimum outcome- and in reality this is how human beings must often make choices, because the future is far too complex to predict with any accuracy.

So imagine that you are Google's computer- faced with an incalculably large number of options regarding your next move in the game- how do you choose? The answer is that you make the choice based on your experience of making similar choices in the past that had the optimum outcome.

In other words the Google computer beat the human player not simply because it had vastly superior number-crunching abilities but by applying a degree of judgement based on its experience of playing the game.

So what Google has demonstrated here is that it is possible for a computer to arrive at a decision based not on pre-programmed responses or on simple number crunching but on its ability to learn from experience and apply that experience as a guide for action.

And this is a non-trivial fact that has far-reaching implications for future applications of this technology.
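A minimal sketch of this 'experience as a guide for action' idea- all names and the lookup table here are my own illustrative stand-ins, not DeepMind's code, whose real system uses deep policy and value networks refined by Monte Carlo tree search:

```python
# Toy "experience": how often each board pattern led to a win in
# previously played games.
experience = {}  # pattern -> (wins, games)

def record_game(patterns_seen, won):
    # After each finished game, update the statistics for every
    # board pattern encountered along the way.
    for p in patterns_seen:
        wins, games = experience.get(p, (0, 0))
        experience[p] = (wins + (1 if won else 0), games + 1)

def estimated_value(pattern):
    # Estimated win rate of a position; never-seen patterns get a
    # neutral 0.5 prior.
    wins, games = experience.get(pattern, (0, 0))
    return wins / games if games else 0.5

def choose_move(moves, pattern_after):
    # Pick the move whose resulting position has fared best in past
    # experience - no enumeration of the full game tree required.
    return max(moves, key=lambda m: estimated_value(pattern_after(m)))
```

The crucial point survives even in this toy: the chooser never enumerates the future, it consults a record of the past- which is why the approach scales to games where enumeration is impossible.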

To argue that this is 'just' a tool is to underestimate the potential cultural impact that such tools might have if they prove capable of matching or even exceeding the capabilities of humans in a broad spectrum of cognitive tasks.


Fair enough, all valid. To garner experience, DeepMind had multiple computers (industrial grade no doubt) playing hundreds of millions of games without pausing over many months, perhaps pushing a year. In effect, ranks of computers lived many thousands of full lifetimes of Go playing, incalculably greater than any human player could experience.

This astonishing processor speed and the ability to play millions of games with a shared knowledge of the outcomes must be one of DeepMind's strengths, and might be prescient as regards the most likely route to a comprehensive AI.

But it's crude. Crood. DeepMind haven't really explained how their algorithms work but it's starting to look like trial and error on a vast scale. They say they learn. They say they could be applied to many tasks. But it's possible that they are still reliant on extraordinary levels of raw number crunching to pull off their feats.

Now that's fine, but it's a dull, flickering glimmer of intelligence compared to the dazzling solar blaze of a child deciphering, interpreting, inferring and interacting in a few short seconds with this world. That same child is a self-replicating, self-contained and autonomous bizarrely complex sensory, calculating, remembering, communicating, self-fuelling and evolving chemical machine with the ability to imagine and create things that don't even exist. Like AI.
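The 'many thousands of lifetimes' point can be sanity-checked with entirely hypothetical round numbers- none of these figures are DeepMind's actual ones:

```python
# Round-number, illustrative assumptions:
machines = 100                       # parallel self-play workers
games_per_second = 1                 # games finished per machine per second
seconds_per_year = 365 * 24 * 3600

machine_games = machines * games_per_second * seconds_per_year  # ~3.15 billion

# A devoted human professional, for comparison:
human_games = 5 * 365 * 50           # 5 games a day for a 50-year career

careers = machine_games / human_games
print(f"{machine_games:,} machine games ≈ {careers:,.0f} human playing careers")
```

Even with these modest assumptions the machines compress tens of thousands of human playing careers into a single year- which is exactly the "many thousands of full lifetimes" the post describes.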


Fair enough, all valid. To garner experience, DeepMind had multiple computers (industrial grade no doubt) playing hundreds of millions of games without pausing over many months, perhaps pushing a year. In effect, ranks of computers lived many thousands of full lifetimes of Go playing, incalculably greater than any human player could experience.

This astonishing processor speed and the ability to play millions of games with a shared knowledge of the outcomes must be one of DeepMind's strengths, and might be prescient as regards the most likely route to a comprehensive AI.

But it's crude. Crood. DeepMind haven't really explained how their algorithms work but it's starting to look like trial and error on a vast scale. They say they learn. They say they could be applied to many tasks. But it's possible that they are still reliant on extraordinary levels of raw number crunching to pull off their feats.

Now that's fine, but it's a dull, flickering glimmer of intelligence compared to the dazzling solar blaze of a child deciphering, interpreting, inferring and interacting in a few short seconds with this world. That same child is a self-replicating, self-contained and autonomous bizarrely complex sensory, calculating, remembering, communicating, self-fuelling and evolving chemical machine with the ability to imagine and create things that don't even exist. Like AI.

You are right of course- compared to even the dullest human brain DeepMind's creations are morons, and I'm certainly not in the camp that sees AI taking over the world any time soon.

But your post does raise an obvious question- given the amount of human intelligence already being underutilized through lack of opportunity and underinvestment in its development, why is so much time and treasure being invested in the attempt to replicate this already underexploited resource in the form of artificial intelligence?

Surely if Google really wanted to maximize the amount of gross 'intelligence' available to itself or the world, the obvious thing to do would be to spend its money training existing human minds, not spend billions trying to replace them with crude replicas?

But- of course- there is a snag. Those human minds come with 'rights'- they need to be paid, they may require healthcare or other costly services- and, more importantly, those human minds cannot be owned. So lurking behind all the high-blown rhetoric is perhaps something a little darker and less pretty.

Because you could argue that the real dream here is to create an utterly pliant form of intelligent slave labor that would enable those who control it to gain all the benefits of controlling labor with none of the messy downsides currently associated with a labor force made up of humans.

From a certain point of view the entire idea that we might deploy crude replicas of ourselves in order to avoid employing each other is quite odd- yes, there is the economic point that an AI might be cheaper than its human equivalent- but I feel there is a deeper motivation in play also, and it has something to do with exercising the maximum control over our environment by excluding from that environment the most threatening entities in it: other human beings.

Take the example of the automated checkout at the supermarket. The 'official' reason that we use them is that they are supposed to be quicker- yet I often see situations in which human till operators sit idle while people queue up at the automated tills, preferring to scan and bag their own shopping rather than engage in the minimal interaction required with the human beings behind the vacant tills. It seems that many people prefer the control of doing the job themselves rather than having another human do it for them, and will willingly wait longer for the opportunity to do so.

Also consider the key rationale offered in favor of self-driving cars- that they will be safer than cars driven by human beings.

So a key meme of AI cosmology is that human beings are really quite dangerously unpredictable and anything that can be done to eliminate human involvement in a given process is always and everywhere a good thing.

It could be argued that while the economic imperative is the 'official' reason that drives research into AI, there is in fact another, less explicit agenda in play: the idea that the more we eliminate the human, the safer we will all be. From this viewpoint the money spent on AI is money well spent not in spite of the loss of human jobs but because of it- because each human job lost to technology represents a reduction of the inherent risk and uncertainty that inevitably arises whenever human beings are involved.

It's both funny and disturbing to consider the possibility that the world the super-geeks are building for us may be one shaped in part by their own antipathy to the inherent complexity of dealing with other human beings face to face in the real world.


In other words the Google computer beat the human player not simply because it had vastly superior number-crunching abilities but by applying a degree of judgement based on its experience of playing the game.

That's still just rules. Last time I did option A in a particular situation, I lost, so I'll do option B. As soon as the computer can study and play more games than the human opponent, it is likely to win. I agree that chess is even easier, because that has a finite and diminishing set of outcomes.

The driving example is relevant. All of the approaches so far have been rules-based, and IMO are likely to fail, because they cannot cope with the unexpected. How would the current crop of driverless cars cope with the Shoreham air crash? I would guess that they would drive straight into the fireball because the road is technically clear - there was nothing in the way. The average human has never experienced this, never seen it before, is not trained for it, but they still stand on the brakes within a few milliseconds. Nothing exists in technology that can remotely simulate that general-purpose 'intelligence', and such a thing probably won't exist in our lifetimes.

The key rationale for me on driverless cars (real driverless, not the guff claims that are being made today) is not the subjugation of taxi drivers; it is the likelihood that I will be able to carry on moving around long after my faculties would ordinarily prevent me from doing so. Or come home from the pub asleep on the back seat of my car. Driverless cars - bad for Saturday night cabbies, but fantastic for the pub trade. I don't think Google are trying to enslave anyone; more that techies at a loose end are trying to build some cool stuff.

That's still just rules. Last time I did option A in a particular situation, I lost, so I'll do option B.

Yes, but how do you know what option B is? This is the problem that Google's computer proved better at solving than its human opponent. Neither the computer nor the human could possibly calculate the near-infinite number of possible outcomes of a given strategy mathematically- both had to rely on their respective experience of the games they had played in the past in order to choose the best move they could make in the future.

And in this case the computer proved better than the human at learning from its own experience, making it the superior player.

This is not some simple application of predefined rules to solve a finite math problem; it is far more sophisticated than that. What this AI did was model future possible outcomes based on its past knowledge and experience- in human beings we call this kind of inter-temporal speculation 'thinking about the future', and the process by which this speculation leads to the selection of a course of action we call 'making a decision'.

The driving example is relevant. All of the approaches so far have been rules-based, and IMO are likely to fail, because they cannot cope with the unexpected. How would the current crop of driverless cars cope with the Shoreham air crash? I would guess that they would drive straight into the fireball because the road is technically clear - there was nothing in the way. The average human has never experienced this, never seen it before, is not trained for it, but they still stand on the brakes within a few milliseconds. Nothing exists in technology that can remotely simulate that general-purpose 'intelligence', and such a thing probably won't exist in our lifetimes.

You make a good point- in these kinds of extreme outlier situations it's hard to see how a driverless car could cope. But consider this- if the choice was between a driverless car that was far safer than a human 99.9% of the time but would fail if a fireball-type event were to occur, or a human driver who could avoid that fireball but was far more dangerous in almost every other situation, which of the two would you choose to drive you or your loved ones around?

We know from the airline business that a technology need not be perfect in order to be trusted- airplanes can and do crash, but people fly anyway because the odds of catastrophic failure are low enough to make the risk worth taking. Driverless cars will be used if they reach the point where, in almost every situation, they are safer than human drivers, even if scenarios can be envisaged where those driverless cars might catastrophically fail.

So we don't need driverless cars to have sophisticated general intelligence- we only need driverless cars that can cope with the variables that are present on the road 99.9% of the time.
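That trade-off can be made concrete with deliberately made-up numbers- a driver that is ten times worse in the rare outlier event but ten times better the rest of the time still wins decisively on expected harm:

```python
# Entirely illustrative rates (fatal crashes per billion miles),
# split into routine driving and a rare catastrophic outlier event:
human_routine, human_outlier = 10.0, 0.01  # human copes with the freak event
auto_routine, auto_outlier = 1.0, 0.10     # 10x safer routinely, 10x worse in the outlier

human_total = human_routine + human_outlier  # ~10.01
auto_total = auto_routine + auto_outlier     # ~1.10

# The automated driver fails the outlier badly yet still wins on
# overall expected harm.
print(auto_total < human_total)  # prints True
```

This is the same logic that makes people board airplanes: what matters is the total expected risk, not whether a single imaginable failure mode exists.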



