House Price Crash Forum

Automation And Jobs- Why It Really Is Different This Time.


Recommended Posts

0
HOLA441
59 minutes ago, Qetesuesi said:

Aye, in the sense that "the poor ye have always with you". But the trans-planetary prescription is Hawkingite hooey. No other planet/moon comes close to suitability; earth's atmosphere is thicker than most other terrestrial bodies', and disposes safely enough of almost everything that falls in, and no meteorite, however destructive, is going to wipe out humanity. Plus whatever we could do to 'burn' our home could simply be repeated anywhere else. Why does Hawking hate planet earth so much?

I agree Hawking's timescale is a little alarmist, but then he is a believer in strong-AI. 

As for no rock being capable of wiping us out - that is simply wrong. 

I agree that the notion of us simply transporting our current lifestyle and civilisational form off-planet is wrong-headed. But then, that is not what I really had in mind. Today each western citizen has the equivalent of 500 labouring slaves by virtue of technology and fossil fuels - I read that in an article recently. Maybe to live successfully off-world we might need the equivalent of 500,000 "energy slaves". That doesn't seem impossible given the resources in the solar system and the progression of technology.

I also agree that if our civilisation does succeed in some form of breaking our earthly bonds, then we'll create a fair old mess elsewhere. But that's OK, because that's what life does, and the universe is making a mess of itself anyway over time, so all we do is speed it along a little. Environmental concerns don't really apply beyond our immediate planetary neighbourhood.

Edited by scepticus

1
HOLA442
5 hours ago, scepticus said:

The situation of competition IS natural; that's why nature is referred to as being red in tooth and claw. I do agree that it is possible, and perhaps rational, to hold beliefs that humans are different, somehow 'apart from nature', and that we can therefore make realities that are not like nature (i.e. un-natural). It being possible to conceive, however, doesn't make it correct.

No, that's the argument which says "Your post irritates me, therefore it's fine for me to punch you" or "that woman looks nice, I'll hit her over the head and drag her back to the cave to have my way with her." Mixing natural laws with the results of conscious decisions doesn't cut it.

If we create a sh1tty world that isn't as pleasant to live in as one that's within our means of creating, or if the one we've got gets worse for reasons other than natural disasters, we've only got ourselves to blame and I'll happily pile that blame on those who either deliberately pushed for it or were happy to shrug and go along. Milton Keynes did not happen by accident.

Edited by Riedquat

2
HOLA443
Quote

This is true of all technological progress though; your argument is in essence the same as the Luddites' - "it will cause some short-term job losses, therefore it's evil" - ignoring the wider economic benefits of the increased productivity.

Of course you will turn around and say "but this time it's different", but there is no evidence that it is different; you only have supposition vs 5,000-odd years of technological progress that has been largely beneficial.

For most of those 5,000 years the horse was indispensable to human civilisation - then came steam, and shortly afterwards internal combustion, and within a few hundred years the horse went from indispensable to redundant. So the past is not always a good guide to the future.

What happened to the horse is obvious - its value as a source of muscle power was surpassed by machines that offered a better and cheaper form of muscle power. But this was not achieved by building a mechanical horse, complete with four legs and a tail.

Humans - being smarter than horses - had an alternative source of value to offer when horses became redundant: they had brain power. But what happens to humans when brain power becomes commoditised in the same way that muscle power was commoditised?

And just as it did not require mechanical horses to replace real horses, it does not require an artificial brain to replace the human brain - all that is required is a machine that can replicate the value that is currently added by humans to a given productive task.

I really don't see how we can continue to develop technology that replicates the cognitive function of humans without at some point making some of those humans redundant- and I really don't see any obvious place for those redundant humans to go next in terms of paid work.

The reality is that the AI industry is lying to somebody - either it's lying to its investors in terms of the profits it expects to make, or it's lying to the public in terms of the jobs it expects to replace - because the only way those profits are going to materialise is by replacing people with machines.

Edited by wonderpup

3
HOLA444
Quote

We already have a substantial percentage of our population who are unemployable, simply consume, and give back almost nothing to society.

Simply by consuming you are giving back to society..... and if renting, you are also giving back what you never earned to others who did not earn it but were given the power to spend it..... ;)


4
HOLA445
25 minutes ago, Riedquat said:

No, that's the argument which says "Your post irritates me, therefore it's fine for me to punch you" or "that woman looks nice, I'll hit her over the head and drag her back to the cave to have my way with her." Mixing natural laws with the results of conscious decisions doesn't cut it.

 

That is a trivial reductio ad absurdum, which doesn't cut it. We have the laws we have because they have evolved over time, out-competing alternative law-systems, with communism being a nice recent example of that. One of the first law-systems to be out-competed was anarchy, as described by you above. The existence of laws and ample evidence of altruistic behaviour does not mean that the competitive processes that characterise the natural world don't still apply to us. Competitive processes are part of the evolutionary search process, which remains ongoing. Even if individual human decisions are inscrutable because of some special magic in consciousness, that does not mean that large numbers of such decisions won't follow more prosaic natural laws.

Given that resources are finite, the laws we have now have allowed us to reach a local maximum via an evolutionary search that has served to maximise the size and power of our civilisation - for which a very good rough proxy is the amount of resources we consume. It is not surprising that reduced levels of competition between competing social systems will result in increased levels of internal competition within the few social systems that remain.

It seems reasonable to say that our current system embodies less overt conflict and competition than in previous ages (e.g. less war, fairer laws, less racism etc), but given that our society now is larger than it was in the past, the total amount of competition has grown simply by virtue of the fact that there are more of us.


5
HOLA446
26 minutes ago, wonderpup said:

The reality is that the AI industry is lying to somebody - either it's lying to its investors in terms of the profits it expects to make, or it's lying to the public in terms of the jobs it expects to replace - because the only way those profits are going to materialise is by replacing people with machines.

The AI industry doesn't even really understand how its algorithms work so they have IMO little more idea than anyone else what the ultimate outcome of working on this technology will be. And even if they are putting a rosy glow on possible future profits then they won't be the first industry to have done that. In the 19th century we had the railways, and at the end of the 20th century we had dotcom boom 1.0. 

Likewise all the automation efforts that have come over the last 100 years have been no different since their profits had to come from replacing jobs.

So your statement here is disingenuous to say the least and fails to place AI/ML in a different category to previous and existing industries.

 


6
HOLA447
Quote

If we create a sh1tty world that isn't as pleasant to live in as one that's within our means of creating, or if the one we've got gets worse for reasons other than natural disasters, we've only got ourselves to blame and I'll happily pile that blame on those who either deliberately pushed for it or were happy to shrug and go along. Milton Keynes did not happen by accident.

There's this assumption that Capitalism is somehow innately anthropomorphic in its nature, but it may well be that this apparent happy alignment between the free market and general human well-being is an illusion - a product of poor sampling over too short a time frame.

On reflection I would argue that the critique of Artificial Intelligence re employment is really a critique of the limitations of the Capitalist model, which seems to have no real response to offer in a scenario in which human labour is no longer required.

The irony is that work - once considered a curse visited upon man by a vengeful god - has now become so indispensable that, like the devil, if it did not exist it would be necessary to invent it.

Work, in a capitalist society, is not just a means to an end; it is also a vital component of the capitalist regime. So, in a truly perverse twist of history, that which the sons of Adam once regarded as a punishment for the transgressions of Eden is now deemed so vital to human nature that we even talk of having a 'right' to work - an idea that our distant ancestors would have found utterly incomprehensible.

The problem seems to be that the mixture of capitalism and powerful technologies is becoming so combustible and toxic that it threatens our future on every level, from environmental pollution to social implosion due to large-scale job loss.

What we need is not less technology but a better social framework around which to deploy that technology than the current Capitalist model that seems to have no answers to any of the problems we now face. There is something increasingly desperate to be detected in the ever more shrill claims that what we need is more free market as a way to solve our problems- even those who make these claims seem less and less convinced by them as time goes by.

What AI really threatens is not people but Capitalism- the same capitalism that will embrace AI with all the fervour of a junkie who knows their habit is killing them but just can't stop.


7
HOLA448
Quote

The AI industry doesn't even really understand how its algorithms work so they have IMO little more idea than anyone else what the ultimate outcome of working on this technology will be. And even if they are putting a rosy glow on possible future profits then they won't be the first industry to have done that. In the 19th century we had the railways, and at the end of the 20th century we had dotcom boom 1.0. 

Likewise all the automation efforts that have come over the last 100 years have been no different since their profits had to come from replacing jobs.

So your statement here is disingenuous to say the least and fails to place AI/ML in a different category to previous and existing industries.

I didn't say that the AI business was unique in this respect - only that there is a clear mismatch between the expectations of its investors and its public position re the impact of its technology on employment.

It cannot be true that AI will be both highly profitable in the future and have little impact on future employment if those predicted profits are based on replacing people with technology, which they largely seem to be.

So the oft-repeated meme to be found in many articles and interviews re AI - that the aim is to 'make people's jobs easier' rather than replace them - is itself more than a little disingenuous. The low-hanging fruit in terms of monetising AI is to take an existing human job and automate it as far as possible to cut costs - no one really denies this.

Also, can I point out that if it's true that the AI industry doesn't even really understand how its algorithms work, then your arguments as to the inherent limitations of these techniques would appear to be on somewhat shaky ground - how would you, or anyone else, know what those limitations might be?


8
HOLA449
9 minutes ago, wonderpup said:

It cannot be true that AI will be both highly profitable in the future and have little impact on future employment if those predicted profits are based on replacing people with technology, which they largely seem to be.

As I said in the post earlier, there is an estimate that the average westerner today lives a lifestyle roughly equivalent to that of someone from a pre-fossil-fuel society who had recourse to 500 personal slaves. This situation has come about via replacing human effort with tools and free energy sources. Yet we now have more people than then, and the total amount of work done by humans now is much greater than it was then, even if we discount non-jobs.

Technology is a multiplier to human effort. If machine intelligence is a tool then it's the same story - just increasing the multiplier. Obviously these kinds of multipliers can have distributional issues, but again this has always been the case.

You seem to agree with me that machine intelligence is a tool rather than a new thing with agency all of its own, so on that basis, what is the essential difference between an x10, x100, or x1000000 multiplier on human effort? Why is the next 10x different to the previous 10x?

 

 

 


9
HOLA4410
On 12/05/2017 at 8:50 PM, scepticus said:

 

This is exactly why I posted the earlier link to "the myth of the superhuman AI".

The above post has little basis in reality and is not based on informed opinion about what ML can do now and what it is likely to be able to do in future. It's taking recent progress in ML, which is little more than computational statistics on steroids, and extrapolating that to a point where ML recreates everything the human brain does. A review of recent neuroscience literature makes this plain. There is more to what a general intelligence like the human brain can do than statistics. Memory, for one.

Let me make this quite clear - the relationship between recent ML techniques and what the brain does is IMO entirely accidental, and ML will before long progress to the point where it drops any pretence of taking a similar approach to the brain. The 'neural' in 'neural network' is a misnomer; wet neurons do not function at all like the neurons in "deep neural networks" - not even close.

The brain remains vastly more potent in a general-AI sense than silicon-based ML algorithms. Silicon AIs will of course excel at specialised point tasks in one sector or another. But computers have been replacing various human job functions for decades.

I reckon the time is not far off when the edifice of neuroscientific dogma that provides a semblance of biological plausibility to what is being done with ML collapses. 

As to the majority of humanity being redundant meat bags: so very many people on this forum were banging on about non-jobs, public-sector waste and value-destroying financial complexity long before AI became a hot topic. I would therefore humbly submit that the prevailing view on this forum has been that a very great part of the western citizenry were already effectively redundant meat bags 15 years ago. That would be their view though, not mine.

 

 

 

 

Yes, you are 100% spot on.

ML is basically linear algebra plus various statistical/numerical analysis techniques applied to calculate thresholds in a neural network or more generally in an algorithm.

I did all this stuff 20 years ago at university and find it strangely easy to have picked it up again along with IoT. Whilst people in IT have been worried about being surplus to requirements by the time they hit 40, I'm seeing the opposite.
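To put a concrete, deliberately toy illustration on that claim: a "deep neural network" layer is just a matrix multiply followed by an elementwise non-linearity, and "learning" is numerical optimisation of the weight matrices. A minimal sketch in Python/NumPy, with made-up sizes and random weights rather than any real model:

```python
# Minimal sketch: a two-layer neural network forward pass as plain linear algebra.
# Toy sizes and random weights for illustration only - not any particular real model.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(1, 4))      # one input sample with 4 features
W1 = rng.normal(size=(4, 8))     # first-layer weights
b1 = np.zeros(8)                 # first-layer biases ("thresholds")
W2 = rng.normal(size=(8, 1))     # second-layer weights
b2 = np.zeros(1)

hidden = np.maximum(0, x @ W1 + b1)   # matrix multiply plus ReLU non-linearity
output = hidden @ W2 + b2             # another matrix multiply

print(output)  # "training" just means adjusting W1, b1, W2, b2 to make this less wrong
```

Everything else - backpropagation, gradient descent and so on - is the numerical-analysis machinery for choosing those matrices.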


10
HOLA4411

Nomadic Stone Age humans lived in an age of abundance with a very short working week. Nobody had a 9-to-5 job. Social capital was more valuable than material wealth. Any attempts at imposing a hierarchy failed miserably, until the invention of agriculture.

After the invention of agriculture, human height shrank by 6 inches or more. Life spans halved. Progress? For 3,000 years the first Stone Age towns maintained their egalitarian social systems. All of them collapsed. Once people had jobs and lived in towns, only hierarchical societies flourished.

Now that we are about to become a workless society, like our ancestors in the Stone Age, can any hierarchical system survive? Is the collapse of our financial and government institutions a symptom of the workless society? Have the elites shot themselves in the foot?

 


11
HOLA4412
17 hours ago, scepticus said:

That is a trivial reductio ad absurdum, which doesn't cut it. We have the laws we have because they have evolved over time, out-competing alternative law-systems, with communism being a nice recent example of that. One of the first law-systems to be out-competed was anarchy, as described by you above. The existence of laws and ample evidence of altruistic behaviour does not mean that the competitive processes that characterise the natural world don't still apply to us. Competitive processes are part of the evolutionary search process, which remains ongoing. Even if individual human decisions are inscrutable because of some special magic in consciousness, that does not mean that large numbers of such decisions won't follow more prosaic natural laws.

That's exactly the point though. We managed to see the downsides of those natural decisions and "laws" and limit them. "Trivial" reductio ad absurdums aren't logical fallacies; they highlight the underlying simple concepts driving more complex situations. Scale is rarely important to the validity of a concept. They're generally useful when people make blanket statements, to get those out of the way and deal with the nitty-gritty details and exceptions that make up the complexity of the real world. Without that you end up with idiots who say things like "Oh, you don't like a lot of modern technology, why are you posting this message from a computer on to the internet?"

Other than the laws of physics, the only ones that apply to us are the ones we choose to, yet on this technological front we seem hell-bent on making ourselves redundant for no gain - it's driven by the arms race that I mentioned earlier. And arms races have been reined in from time to time. Humans can choose not to play the game. Nothing else that we know about in nature can. A lot of our laws attempt to restrain these things; they don't play into the competitive model that you're arguing for.

Quote

Given that resources are finite, the laws we have now have allowed us to reach a local maximum via an evolutionary search that has served to maximise the size and power of our civilisation - for which a very good rough proxy is the amount of resources we consume. It is not surprising that reduced levels of competition between competing social systems will result in increased levels of internal competition within the few social systems that remain.

It seems reasonable to say that our current system embodies less overt conflict and competition than in previous ages (e.g. less war, fairer laws, less racism etc), but given that our society now is larger than it was in the past, the total amount of competition has grown simply by virtue of the fact that there are more of us.

Fewer wars? And, in the UK at least, laws don't appear to be getting fairer.

You're arguing for a system that maximises consumption of those resources as a goal in itself - everything else is really a side-effect, which indeed produces short-term gains for some. That, in the long run, is self-destructive behaviour. You can look back in nature if you want for examples where the end result of this competition has been ultimately self-destructive. The idea that everything can be explained by competition, and that it results in the most desirable outcome, is horribly flawed.

Now because of our short-sighted stupidity and worship of "competition" I think all these things will come about, despite it being within our power to not do it. And that's just plain bloody stupid, and why I get depressed about the future.

damned quote problems

Edited by Riedquat

12
HOLA4413
17 hours ago, wonderpup said:

What we need is not less technology but a better social framework around which to deploy that technology than the current Capitalist model that seems to have no answers to any of the problems we now face. There is something increasingly desperate to be detected in the ever more shrill claims that what we need is more free market as a way to solve our problems- even those who make these claims seem less and less convinced by them as time goes by.

A better social framework to deploy resources, really, not technology. As I said, the technology has nothing much to offer and does have its price. So far the counter-arguments are either shrugging at the inevitable (fatalistic but not unreasonable, although it's also responsible for creating the environment in which that happens), or based on the idea that extreme hardship is a TV without a remote control. I'd argue that the problems people in the UK lived with in the earlier years of the 20th century, affecting fewer of them as the century went on, were entirely social, not technological, as have been any improvements that may have happened over most of that time. It's always possible to find the odd exception.

Edited by Riedquat

13
HOLA4414
12 hours ago, Mikhail Liebenstein said:

Yes, you are 100% spot on.

ML is basically linear algebra plus various statistical/numerical analysis techniques applied to calculate thresholds in a neural network or more generally in an algorithm.

I did all this stuff 20 years ago at university and find it strangely easy to have picked it up again along with IoT. Whilst people in IT have been worried about being surplus to requirements by the time they hit 40, I'm seeing the opposite.

Yup that's the point I have been trying to convey.


14
HOLA4415
9 hours ago, Riedquat said:

 

damned quote problems

Quite - how the heck do you separate parts of the quoted post and respond to them separately? I had to requote them below...

 

Quote

Other than the laws of physics, the only ones that apply to us are the ones we choose to, yet on this technological front we seem hell-bent on making ourselves redundant for no gain - it's driven by the arms race that I mentioned earlier. And arms races have been reined in from time to time. Humans can choose not to play the game. Nothing else that we know about in nature can. A lot of our laws attempt to restrain these things; they don't play into the competitive model that you're arguing for.

I am talking about laws like the central limit theorem:

"In probability theory, the central limit theorem (CLT) establishes that, for the most commonly studied scenarios, when independent random variables are added, their sum tends toward a normal distribution(commonly known as a bell curve) even if the original variables themselves are not normally distributed. In more precise terms, given certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables, each with a well-defined (finite) expected value and finite variance, will be approximately normally distributed, regardless of the underlying distribution.[1][2] The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions."

This applies to all probabilistic situations, including decisions made by humans. The CLT is not a physical law; it is a mathematical result that holds. And it will thus apply to the probabilities that a bunch of humans choose this or that. When I talk about natural laws I am referring in general to these general mathematical and statistical cases. Sorry if that wasn't clear.
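As a purely illustrative simulation of that point (made-up numbers, Python/NumPy): individual yes/no "decisions" are about as far from bell-shaped as a distribution gets, yet the averages of large groups of them come out approximately normal, just as the CLT says:

```python
# Central limit theorem by simulation: averages of very non-normal yes/no "decisions"
# still end up approximately normally distributed. Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(1)

# 10,000 groups, each of 1,000 independent yes/no decisions with a 10% "yes" rate.
decisions = rng.binomial(n=1, p=0.1, size=(10_000, 1_000))
group_means = decisions.mean(axis=1)

print("mean of group averages:", group_means.mean())          # ~0.1, as expected
print("std of group averages: ", group_means.std())           # close to the CLT value
print("theoretical CLT std:   ", np.sqrt(0.1 * 0.9 / 1_000))  # sqrt(p*(1-p)/n)
```

Plotting a histogram of `group_means` would show the familiar bell curve, even though each individual decision is a lopsided 0-or-1 variable.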

Beyond that, there may be natural (e.g. physical) laws, or constraints emergent from more basic physical laws, that would also bind human futures, since humans are governed by the same fundamental physical laws. One example of such a thing is the Maximum Entropy Production Principle (MEPP). Another is the recent view emerging among physicists that gravity is not a fundamental physical force but rather emerges from fundamental physics as an entropic force. But I don't need to invoke MEPP or entropic gravity to make my point; the central limit theorem will do just as well for my purposes here.

Quote

 

Fewer wars? And, in the UK at least, laws don't appear to be getting fairer.

 

Yes. We have fewer wars per head of world population than in the past, IMO, measured over the last 30 or 40 years.

Quote

You're arguing for a system that maximises consumption of those resources as a goal in itself - everything else is really a side-effect, which indeed produces short-term gains for some. That, in the long run, is self-destructive behaviour. You can look back in nature if you want for examples where the end result of this competition has been ultimately self-destructive. The idea that everything can be explained by competition, and that it results in the most desirable outcome, is horribly flawed.

No, I am arguing that complex adaptive systems - which include, but are not limited to, living systems - will try to grow extensively when they can, and when they cannot they will regulate their consumption of resources at the maximum level consistent with their own survival. Of course this is necessarily a somewhat stochastic process, since no entity, living or otherwise, can know the future; it can only predict it. So yes, sometimes such systems get it wrong and collapse.

When such systems cannot grow extensively they will engage in intensive growth (e.g. an internal re-org) in order to try to arrive at a situation in which they are capable of growing extensively again by consuming more resources.

I don't see anything in human history or what we know about natural complex systems that would suggest this view is incorrect.

Quote

Now because of our short-sighted stupidity and worship of "competition" I think all these things will come about, despite it being within our power to not do it. And that's just plain bloody stupid, and why I get depressed about the future.

Well, we face situations which are naturally competitive (such as war, finance/investment, monopolistic business practices etc). These situations are competitive by definition, not because we necessarily want them to be. Competitive situations will always exist, and it seems unrealistic to expect that players in these competitions will not use AI if it gives them an advantage. Why not use ML to trade markets if your competitors are doing so successfully?


15
HOLA4416

A Robot Just Landed a Simulated Boeing 737, So What Next for Air Travel?

The following news may, however, lift the spirits of — or further terrify — frequent flyers around the world: a robot has successfully landed a Boeing 737.

Aurora Flight Sciences, which developed the robot, is bursting with pride according to sources, as the bot was able to sit in the co-pilot's seat and land the simulated plane.

This has led the Defense Advanced Research Projects Agency (DARPA), the part of the US Department of Defense responsible for the development of emerging technologies for use by the military, to advocate the use of robots in its planes.

https://sputniknews.com/science/201705171053698661-boeing-737-robot-landing/


16
HOLA4417
25 minutes ago, Errol said:

A Robot Just Landed a Simulated Boeing 737, So What Next for Air Travel?

The following news may, however, lift the spirits of — or further terrify — frequent flyers around the world: a robot has successfully landed a Boeing 737.

Aurora Flight Sciences, which developed the robot, is bursting with pride according to sources, as the bot was able to sit in the co-pilot's seat and land the simulated plane.

This has led the Defense Advanced Research Projects Agency (DARPA), the part of the US Department of Defense responsible for the development of emerging technologies for use by the military, to advocate the use of robots in its planes.

https://sputniknews.com/science/201705171053698661-boeing-737-robot-landing/

The flight control software can presumably already auto-land the plane; having a physical thing that moves physical controls seems completely pointless.

Edited by goldbug9999

17
HOLA4418

I suppose there must be advantages to the robot (over and above just software), although I'm not an expert. Presumably DARPA has its reasons.

If nothing else it proves a point.

Edited by Errol

18
HOLA4419

World's best Go player flummoxed by Google’s ‘godlike’ AlphaGo AI

Ke Jie, who once boasted he would never be beaten by a computer at the ancient Chinese game, said he had a ‘horrible experience’.

AlphaGo’s success is considered the most significant yet for AI due to the complexity of Go, which has an incomputable number of move options and puts a premium on human-like “intuition”, instinct and the ability to learn.

AlphaGo uses two sets of “deep neural networks” containing millions of connections similar to neurons in the brain. It is partly self-taught, having played millions of games against itself following initial programming. After Lee lost to AlphaGo last year, Ke boldly declared: “Bring it on!” But the writing was on the wall in January when Ke and other top Chinese players were flattened by a mysterious competitor in online contests. That opponent was revealed afterwards to be the latest version of AlphaGo, which was being given an online test run by its developer, London-based AI company DeepMind Technologies, which Google acquired in 2014.

https://www.theguardian.com/technology/2017/may/23/alphago-google-ai-beats-ke-jie-china-go
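For anyone wondering what "partly self-taught, having played millions of games against itself" amounts to in practice, here is a drastically simplified sketch of self-play learning. It uses a toy Nim-style game and a plain lookup table where AlphaGo uses deep neural networks and tree search, so it shows only the general idea, not DeepMind's actual method:

```python
# Drastically simplified self-play sketch: learn a toy Nim-style game (take 1-3 stones,
# whoever takes the last stone wins) by playing against yourself and nudging a value
# table toward observed outcomes. A stand-in for AlphaGo's neural nets, not its method.
import random
from collections import defaultdict

V = defaultdict(lambda: 0.5)   # V[stones] ~ estimated chance the player to move wins
V[0] = 0.0                     # no stones left to take: the player to move has lost
ALPHA, EPSILON, START = 0.1, 0.1, 21

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)                 # explore occasionally
    return min(moves, key=lambda m: V[stones - m])  # leave the opponent the worst state

def play_one_game():
    history = [[], []]                 # states faced by player 0 and player 1
    stones, player, winner = START, 0, 0
    while stones > 0:
        history[player].append(stones)
        stones -= choose(stones)
        if stones == 0:
            winner = player            # this player took the last stone
        player = 1 - player
    for p in (0, 1):
        target = 1.0 if p == winner else 0.0
        for s in history[p]:
            V[s] += ALPHA * (target - V[s])   # nudge value toward the game outcome

for _ in range(50_000):
    play_one_game()

# Perfect play leaves the opponent a multiple of 4, so the learned values should end up
# roughly high except where stones % 4 == 0.
print({s: round(V[s], 2) for s in sorted(V)})
```

The point of the sketch is only that the "teacher" is the outcome of games the program plays against itself; AlphaGo adds learned policy/value networks and Monte Carlo tree search on top of the same idea.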


  • 3 months later...
19
HOLA4420

Putin - Artificial intelligence is not only the future of Russia, but the future of all mankind

Russia's President Putin has revealed which technological sphere will determine the future leader of the world. "Artificial intelligence is not only the future of Russia, but the future of all mankind. It holds both tremendous opportunities and is fraught with scarcely predictable dangers. Whoever takes the lead in this sphere will become Lord of the World," President Putin told Russian schoolchildren during an open lesson on their first day of the new school year.

https://sputniknews.com/russia/201709011057000758-putin-schoolchildren-world-lord/


20
HOLA4421

As per usual the capitalists will take all the gains from increased efficiency of production, since capital needs its pound of flesh at an ever-increasing rate.
Once the workers become completely superfluous to the system there will be a great culling, leaving the privileged few to be served by robots.


 
