House Price Crash Forum

Will Artificial Intelligence (Ai) Ultimately Destroy Capitalism


Recommended Posts

0
HOLA441

Some argue that the pre-biotic compounds of life actually originated in space and were transported to Earth via comets and asteroids.

https://en.wikipedia.org/wiki/Panspermia

Some argue that we are created in God's image and are cursed to the fires of hell from birth.

But yes, everything that we are came from space. Where else?

I was just pointing out that we have exceeded evolution in this respect - but not really, since we are evolution; then again, evolution didn't take us into space.


3
HOLA444
I don't think so; nature has pretty much a monopoly on creating complex things, so I'm reasonably confident that mimicking nature is our only option. Our technological achievements are infantile in complexity terms compared to nature's - compare a fighter jet to, say, a hummingbird. Our chances of leapfrogging nature to create intelligence in a fundamentally different way? About as close to zero as anything I can imagine.

It's a bit of a false dichotomy to take the view that there is some absolute division between man and nature- in reality our brains are themselves products of nature and by extension the artifacts those brains create are also products of the same natural process.

The term 'Artificial Intelligence' itself is probably one day going to seem archaic- it may seem odd to future generations that we failed to grasp that 'intelligence' is a behavioural characteristic rather than something to be defined by the particular substrate upon which its processing takes place.

For example- on HPC I am far more erudite than in ordinary conversation because I have at my fingertips the internet and the vast repository of knowledge it contains- so this version of me is already a crude example of a kind of artificial intelligence, in the sense that my sadly average brainpower is being 'artificially' boosted by the infrastructure of the web.

But does that mean that whatever insights I might glean from my interactions with the internet are somehow 'artificial'? I'm not sure this is a valid distinction. If Google ever do get a self-driving car on the road, that car will be authentically intelligent- at driving. More so than my dog, whose driving is atrocious, despite the fact that the development time for that car would be less than an eyeblink compared to the millions of years of evolution it took to create my cocker spaniel.

We don't need to leapfrog nature to create intelligence- we are nature and whatever forms of new intelligence we bring forth will be as valid in their way as our own since they too will ultimately be an extension of the natural order from which our own brains evolved.


4
HOLA445

It's a bit of a false dichotomy to take the view that there is some absolute division between man and nature- in reality our brains are themselves products of nature and by extension the artifacts those brains create are also products of the same natural process.

..............

We don't need to leapfrog nature to create intelligence- we are nature and whatever forms of new intelligence we bring forth will be as valid in their way as our own since they too will ultimately be an extension of the natural order from which our own brains evolved.

Great post! +1

What will happen is that we will create the infrastructure for a 'stupid' AI to function in a useful way, using all of Google, all the stats, e-mails, stock data, photos, Facebook, Twitter etc. It will be stupid but 'seem' clever. It will have every answer to every question, and know what we wanted before we even thought of it.

Then bang! It won't be stupid anymore but will still have access to 'all' the data from 2,000 years ago to the present.

Do you think you could compete with a 10-year-old with access to all this data? If you met him in 1970, would you be irrelevant to him? What conversation could you have with him?

I once ... I know.

Did you .... Yes I do.

What is.... it's....

When was.... It was..

Why did I.... b'cause you are stupid.


6
HOLA447

Exactly.

When has nature ever gone into space? We have!

Haha

Nature has created an enormous space station with massively effective deflector shields, a universal replicator capable of aggressively adapting to a wide range of environments and conditions, various methods of self-replication, automatic healing, and problem-solving capabilities - some of which appear to satisfy various logical definitions of intelligence.

Hell of a lot more than we puny humans have managed yet.


7
HOLA448

Let me ground this thread a bit...



[Image: MooresLaw2.png - Moore's law chart]

Here is a graph of Moore's law from this great article. It is plotted on a logarithmic scale and predicts that sometime in the next decade we'll be able to build a machine with performance similar (whatever that means) to the human brain for $1,000 (probably not adjusting for inflation).
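As a rough illustration of what that kind of extrapolation involves, here is a minimal sketch of the arithmetic. The numbers are assumptions for the example only, not figures taken from the chart or the article: roughly 10^13 calculations per second per $1,000 today, the commonly cited ballpark of ~10^16 calculations per second for the human brain, and a doubling every 18 months.

```python
# Illustrative Moore's-law-style extrapolation. All constants are assumptions
# made for this example, not figures taken from the chart above.
import math

baseline_cps_per_1000usd = 1e13   # assumed calcs/sec you can buy for $1,000 today
brain_cps_estimate       = 1e16   # commonly cited ballpark for the human brain
doubling_period_years    = 1.5    # classic Moore's-law doubling period

doublings_needed = math.log2(brain_cps_estimate / baseline_cps_per_1000usd)
years_needed = doublings_needed * doubling_period_years

print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Years at one doubling every {doubling_period_years} years: {years_needed:.1f}")
# With these made-up inputs: roughly 10 doublings, about 15 years. Change the
# inputs and the answer moves a lot - which is really the point of the caveats.
```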




This is not possible today, and will require more than one computer revolution to get us there.



I'm confident my kid will parallel-program quantum bits and make my hard-won skill-set obsolete. But this doesn't mean the machines he builds will be intelligent, or self-aware. The human brain is extremely well optimized for our environment, and it will take several more revolutions in the way we architect computers to get us to that goal.





As far as capitalism is concerned. I can write an app today that puts a whole room of people out of work. That doesn't mean the app is bad (or that the technology underneath it should be banned). It just means that those people should skill up and find smarter work, and make sure their children are better prepared for tomorrow.




8
HOLA449

It is plotted on a logarithmic scale and predicts that sometime in the next decade we'll be able to build a machine with performance similar (whatever that means) to the human brain for $1,000 (probably not adjusting for inflation).

All you used to need was £50 for drinks and a trip to a nightclub to find some ladies of little virtue.

However, there was a nine month delivery schedule and ongoing running costs.


10
HOLA4411

Again, for a bit of real-life software/hardware 'fun', you might want to look at the Windows 10 thread in 'Off Topic'.

Here you have standard-ish hardware running a generic-ish software platform.

Look how all the little glitches spin out of control.


11
HOLA4412

Capital will be required to invest in AI.

So if the OP's proposition is true, AI will also destroy itself.

Which seems apt.

There will come a time in the future when all the machines are operating independently of human interaction. They will build, fix, upgrade, design and run themselves. They will be running the factories, the mines, the security systems, etc.

Humans will not be required.


12
HOLA4413

Once you've got a machine that can do most of what a human can do you've probably got a machine that doesn't want to do it any more than a human does.

Automating jobs that people don't mind doing, when there isn't a labour shortage and we've got no end of people anyway, always strikes me as one of the more stupid things society does.


13
HOLA4414

There will come a time in the future when all the machines are operating independently of human interaction. They will build, fix, upgrade, design and run themselves. They will be running the factories, the mines, the security systems, etc.

Humans will not be required.

You've said it so it must be true


14
HOLA4415

Once you've got a machine that can do most of what a human can do you've probably got a machine that doesn't want to do it any more than a human does.

Automating jobs that people don't mind doing, when there isn't a labour shortage and we've got no end of people anyway, always strikes me as one of the more stupid things society does.

Machines don't know what they are doing... they don't want or care. They absolutely will not stop until you are DEAD... sorry, sorry, can't help getting all Kyle Reese when discussing AI.


15
HOLA4416

You've said it so it must be true

Plenty of people far more qualified than I have said it. I'm merely repeating what numerous 'academics', authors, technical/computer/AI experts have already postulated.

I personally have little doubt that what I say is true in this case, however.

One has only to observe the way technology is going and the speed at which it is moving.


16
HOLA4417

There will come a time in the future when all the machines are operating independently of human interaction. They will build, fix, upgrade, design and run themselves. They will be running the factories, the mines, the security systems, etc.

Humans will not be required.

Will they be replacing the finite resources required too?!

Besides, who says an AI needs a robot body? A truly clever AI would simply hide in the background of the net, a ghost unnoticed in a machine, rather than some big laser-beam-shooting, human-stamping extermination bot.


17
HOLA4418

One has only to observe the way technology is going and the speed at which it is moving.

It's moving all right - slowly and in the wrong direction. Still, at least I can post Katz selfies in HD.

Edit: unless that's what it wants us to believe...

Edited by PopGun

18
HOLA4419

The stuff that DeepMind has shown us so far (Atari and Go) is not really AI. The work is almost the same as Tesauro did in the 1990s to play backgammon to master level. They have simply added deep convolutional nets over the images for feature extraction, rather than handcrafting the features. The power of modern GPUs just means the agents can play millions and millions of games to learn the best moves; humans learn from far fewer. There is a massive hole in transferring learning from one environment to another and in learning from less experience, as humans do.
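For anyone curious what 'playing millions of games to learn the best moves' looks like mechanically, here is a minimal tabular sketch of the temporal-difference / Q-learning idea behind that family of agents. The toy chain environment, the reward and the hyperparameters are all invented for illustration; DQN-style systems replace the lookup table with a deep net over raw pixels.

```python
# Minimal tabular Q-learning sketch. The toy environment, reward and
# hyperparameters are invented for illustration; DQN-style agents replace
# the Q table below with a deep convolutional net over raw pixels.
import random
from collections import defaultdict

ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # learning rate, discount, exploration
Q = defaultdict(float)                       # Q[(state, action)] -> estimated value

def step(state, action):
    """Toy chain of states 0..5: reaching state 5 pays a reward of 1."""
    nxt = min(state + 1, 5) if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == 5 else 0.0), nxt == 5

def pick(state):
    """Epsilon-greedy action choice with random tie-breaking."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(10_000):                      # "millions of games", scaled right down
    state = 0
    for _ in range(200):                     # cap so a single episode can't run forever
        action = pick(state)
        nxt, reward, done = step(state, action)
        # Temporal-difference update: nudge Q towards reward + discounted best future value.
        # This is the same basic idea TD-Gammon used for backgammon in the 1990s.
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt
        if done:
            break

print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # after training this should print "right"
```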

Having said that, with GPUs and specialist chips getting faster and faster, does the machine have to be able to transfer knowledge and learn from less experience? Probably not. The whole idea of creating a true AI is a red herring in terms of jobs.

In any particular domain a machine could learn very quickly using deep learning combined with reinforcement learning, without being true AI. The same set of algorithms can learn to do anything well and could replace any sort of analytical job currently done on a computer (basic accounting, stock picking, medical scan analysis, spreadsheet sorting, translation, scheduling etc.) within 5 years. In fact it could be done now if it were worth specialists coding up the agents to replace the workers. Physical jobs will take longer, as it is much easier to write software agents than to mess about with robots.

As others have said, capitalism does not really exist any more anyway. Those who know how to use the technology and the gateways to the consumers will ultimately be very rich, and those who do not will be slaves. At the micro level, for each company, if enough jobs can be replaced by coded agents to save money then the agents will be coded. If this occurs worldwide there will need to be a citizens' income to provide consumption. Sadly the gateway to the internet is controlled by very few organisations. These same organisations are also trying to grab all the AI scientists. Something needs to be done to break up this concentration of power.


19
HOLA4420
As far as capitalism is concerned. I can write an app today that puts a whole room of people out of work. That doesn't mean the app is bad (or that the technology underneath it should be banned). It just means that those people should skill up and find smarter work, and make sure their children are better prepared for tomorrow.

The problem with the skilling up strategy is that it only works as long as most people aren't doing it- the value of any skill is in inverse proportion to the number of people who have that skill.

It's a fallacy to imagine that simply by educating more and more people to ever higher standards we will by default create the jobs that require this number of highly educated people- in fact all that would be achieved is that the market value of being highly educated would fall.

The issue is not that technology is bad or should be banned; the issue is that we have a 19th century mindset when it comes to the moral obligation to work, combined with 21st century technology that is being deployed to eliminate as much work as possible- so we have the absurdly incoherent situation in which labor-saving technology is deemed good while unemployment is deemed bad, and as a result we love the technology but demonize the man it replaces as a layabout for not having a job.

Also it's a bit rich to point to the individual and say that he is entirely to blame because he personally failed to stay one step ahead of a software industry ploughing billions into the research and development of technologies designed to make him redundant.

There's this odd assignment of responsibilities here in which those who benefit from the technology are absolved of all moral responsibility for its social consequences, while those who are negatively impacted by it are told that they- and they alone- are entirely to blame for this outcome and their own situation.

But there is a question here- suppose you developed a technology that put millions out of work and made you a very wealthy man- do you have any obligation to help those people? Or are they simply collateral damage; roadkill on the highway to the future? In the current paradigm the answer is the latter- it's not the role of the innovator to take cognizance of the social impacts of his innovation- progress in this paradigm is sacrosanct not because it's considered moral but because it's seen as amoral- more like a force of nature than a human choice.

But will this paradigm hold if it turns out that new forms of AI do prove capable of replacing very large numbers of people? At what point will the innovators be forced to confront not just their bottom line but the social impacts of their innovation? Hitherto the innovator has been given a free pass when it comes to the socially disruptive aspects of his work, on the basis that most innovation is generally a good thing and those in its way are those who were too slow, lazy or stupid to avoid it. This innovation friendly view might not last however if the benefits of that innovation turn out to be captured by a small segment of the population at the expense of the majority whose lives are clearly worse off than before- which seems to be the way we are heading.


20
HOLA4421

"This innovation friendly view might not last however if the benefits of that innovation turn out to be captured by a small segment of the population at the expense of the majority whose lives are clearly worse off than before..."

Wasn't it always thus? Thinking Industrial Revolution, time and motion, factory mass production and global wars - this sounds like a precis for progress of capitalism, with the masses generally coming off worst.

Are you suggesting the application of conscience, a very human trait? Won't the innovators simply turn to their tyro AI to make the difficult choices, without care for social consequences? Worth noting that academics are now starting to ask for safeguards and controls in AI development, even at this crude stage, calling for machines to be accountable and acting in our/humankind's best interests - whoever decides that?

Or will we get your sea change, and the age of leisure that was promised to me back in my dusty, stopped-clock Cold War classroom will finally dawn? Spandex jackets for everyone...

frank38 - I'm puzzled by what you mean by DeepMind not being real AI. If it can deal in abstract gameplay and successive events, doesn't that qualify as AI? It appears that as soon as AI works we dismiss it as fake.


21
HOLA4422

"This innovation friendly view might not last however if the benefits of that innovation turn out to be captured by a small segment of the population at the expense of the majority whose lives are clearly worse off than before..."

Wasn't it always thus? Thinking Industrial Revolution, time and motion, factory mass production and global wars - this sounds like a precis for progress of capitalism, with the masses generally coming off worst.

Well- taking the long view of the Industrial Revolution it's fair to say that- eventually- the benefits of automation and mass production were widely shared- ordinary workers did gain prosperity and did share in the new wealth being created by technological advance- and this historical precedent is what shapes our attitudes to innovation today- we still tend to think that more automation must eventually translate into greater prosperity for all.

But in reality the correlation between increasing productivity and increasing prosperity for the common man broke down about twenty years ago- productivity continued to rise but wages began to stagnate, with most of the gains going to those who owned the machines. Now AI threatens to turbocharge this trend by extending the reach of automation into many more sectors of the economy.

I'm often accused of being a 'luddite' for expressing these types of views but what's interesting about that is how far back in time we all have to reach to find a concrete example of organized resistance to innovation- 1811 was a long time ago after all, but it's still the 'folk memory' we reach for when wanting to express the idea of resistance to technological change. The truth is that the 20th century was one long love affair with technology- even its most ghastly manifestation in the form of the atomic bomb did not take the shine off that romantic affair- so much so that even today anyone who questions the idea that innovation is an unalloyed good thing for society is viewed as somewhat suspect or even a little bit dim.

But what if our inevitable mapping of the past onto the future is deceptive in this case- what if these new forms of innovation do not represent a fresh wave of investment by capital into labor- which is the implicit assumption- but actually turn out to be the replacement of labor by capital in the form of autonomous software and machines? If this should happen our long love affair with the idea of 'progress' might start to turn a bit sour. Sure- we all like the idea of labor saving technology in the abstract- but not so much if it's our livelihoods that are threatened.

Are you suggesting the application of conscience, a very human trait? Won't the innovators simply turn to their tyro AI to make the difficult choices, without care for social consequences? Worth noting that academics are now starting to ask for safeguards and controls in AI development, even at this crude stage, calling for machines to be accountable and acting in our/humankind's best interests - whoever decides that?

My point was that outside obvious health and safety issues we currently operate a complete disconnect between the innovator's right to profit from his innovation and any negative impacts his innovation might have on other people- simply put, if I invent a gizmo that replaces you in the marketplace I have zero moral or legal responsibility for your subsequent fall into poverty and destitution- if anyone is deemed blameworthy it's you, for failing to anticipate my innovation and get out from under it in time by retraining yourself.

The fact that I was driving a multimillion-pound R&D juggernaut when I ran you down is neither here nor there- it is you and you alone who must assemble the time and resources to reinvent yourself, since your unexpected obsolescence is not only the source of my success but also entirely your own fault.

But this is hardly a fair contest, is it? The harassed individual versus the assembled might of corporate R&D, with some juicy tax breaks thrown in?

So at what point is it legitimate to say 'Fine, you can bring your innovation to market, put a lot of people out of work and as a result make a lot of money- BUT you also have a social responsibility to help those displaced people to retrain to find other jobs.'

Or will we get your sea change, and the age of leisure that was promised to me back in my dusty, stopped-clock Cold War classroom will finally dawn? Spandex jackets for everyone...

I think the idea of an age of leisure scares all of us- to the elites it's a lot of idle hands eager to do the devil's work- but even to the great unwashed a life of endless leisure has an ambiguity- what would you do all day? Being trapped on a sofa watching endless reruns of the Jeremy Kyle show is more a vision of hell than of heaven.


22
HOLA4423

Exactly.

When has nature ever gone into space? We have!

Our technology is mostly materials science, and our ability to create new materials has outstripped nature in some areas. Intelligence, however, isn't primarily a materials science problem, it's a complexity problem. We are discovering ways to make new materials all the time, but intelligence isn't sitting there waiting to be discovered.

Edited by goldbug9999

23
HOLA4424

It's a fallacy to imagine that simply by educating more and more people to ever higher standards we will by default create the jobs that require this number of highly educated people- in fact all that would be achieved is that the market value of being highly educated would fall.

I think you misunderstand where jobs come from: the goal is not to create work for people to do, but to make services that customers want.

Take a look at the Californian approach to technology, a place where anybody can start a small business that provides a service of some kind. Where there are crazy tech startups, niche business ideas, and services available that you never thought you needed.

The real problems are education (technology + basic business skills) and government regulation (made over-complex to protect big business from small business). Both of these issues are particularly bad in the UK.

But in reality the correlation between increasing productivity and increasing prosperity for the common man broke down about twenty years ago- productivity continued to rise but wages began to stagnate, with most of the gains going to those who owned the machines.

Productivity gains going to the wealthy isn't a product of technology, but of government policy.


24
HOLA4425

It's a bit of a false dichotomy to take the view that there is some absolute division between man and nature- in reality our brains are themselves products of nature and by extension the artifacts those brains create are also products of the same natural process.

Meaningless word play - yeah, we're all atoms, so what. The thing that we do that nature can't do is purposeful design, and I'm arguing that purposeful design cannot create intelligence - it's only achievable through a process of emergent complexity. Then you're right that substrate doesn't matter (but then I never said otherwise in the first place ...). Purposeful design and materials science allow us to build space ships where nature could not, but we're not facing a material or technological limit - it's our own ability to purposefully design intelligence that is the limit. It doesn't matter how much faster computers get, we simply can't write programs with sufficient complexity to be mistaken for true intelligence.

Edited by goldbug9999
