House Price Crash Forum


Mikhail Liebenstein

Why Are Public Figures All Going A Bit Ray Kurzweil?

Recommended Posts

I've known of Ray Kurzweil's idea of the singularity for quite a long time, first having it described to me by an engineering professor who used to read a lot of sci-fi, almost a decade ago when Kurzweil first published The Singularity Is Near. At the time it was one of those things that I thought was conceivable as an idea, but not necessarily realistic or desirable; essentially it sounded like they'd been reading too much Doctor Who and the Cybermen.

Anyway, I was simply going to observe that the idea seems to have turned into much more of a meme recently. I was at a corporate sales kick-off a little while back, and one of the speakers started talking about the Singularity. Now we have the Astronomer Royal essentially postulating the same thing, but for aliens, with the implication that this is what humans will need to do to explore.

http://www.telegraph.co.uk/news/science/space/11657267/Astronomer-Royal-If-we-find-aliens-they-will-be-machines.html

And of course Google is heavily invested in this game: http://uk.businessinsider.com/ray-kurzweil-thinks-well-all-be-cyborgs-by-2030-2015-6?r=US

To me this sounds a bit like an episode of The Jetsons: there may be some accurate predictions, but a lot of it will fall short and turn out to be nonsense.


The only answer I can come up with is that there is some kind of clandestine production line for them (school or church indoctrination would probably suffice), where they all get their minds erased and then reloaded with some kind of new firmware that turns them into the Borg.

Of course, if anyone comes up with solutions to problems they don't have the processing power to deal with, they all go into this crunch, pow, crack... does not compute... does not compute... crunch hissy fit. They can't figure it out, so they lash out and call everybody else a heretic, or some similar derogatory term.

It is cult-like religious fanatic behaviour, and needs to be treated as the disease it really is. And they ARE dangerous, because they can't see any other perspective than their own (which, looked at from their point of view, is that we are not as advanced/enlightened as them, so they are in the right and won't tolerate dissent from these ignorant little upstarts).

It has happened before in Europe, about 600 years ago, has it not?


The whole idea has been brewing for a very long time. I used to read a lot of SF as a kid, and I've been fascinated to see how my niche interest has come to prominence in popular culture, especially through the dominance of US cinema.

I think it's mostly people trying to make sense of a rapidly changing world. It's a lot easier to take on board somebody else's fully formed ideas than to think things through yourself.


People look at the singularity as if it were a barrier or a Boolean event, a before and an after.

The thing is, it is far more likely to be a gradual transition. Look at satnav, for example, and the progression towards driverless cars. This is going to happen over many years.

The big issue, as people like wonderpup keep pointing out, is how we manage this transition and what, in effect, all the people who are no longer required because of technology will actually do.


Seems a bit narrow-minded, IMO. Evolution is pretty random, dependent only upon the environment at the time. Perhaps if the path along which humans evolved had run through a far more hostile natural environment, we would be far more durable than we are. Even within our own limited species on one small planet, skin tone has evolved to cope with different levels of UV radiation. Given they've found species on our own tiny part of the universe that can survive all manner of things, it seems more than possible that certain lifeforms could have evolved to deal with the problems of space. http://en.wikipedia.org/wiki/Tardigrade

Just because our planet is fairly small, habitable and stable doesn't mean every other planet would be...


The big issue, as people like wonderpup keep pointing out, is how we manage this transition and what, in effect, all the people who are no longer required because of technology will actually do.

There's a book called Superintelligence (Nick Bostrom) which covers this topic and is worth a read. It could happen very quickly, because we take an anthropomorphic approach to evaluating intelligence: on the total possible scale of intelligence, the difference between a gorilla and Einstein is actually tiny (even if it appears massive to us). We might not have much time to prepare between gorilla-like intelligence being created and it overtaking the smartest human on the planet.


Doubt it will happen like that. What is more likely is that we will acquire the ability to simulate more and more complex brains slowly over time, adding in extra bits and pieces as we go. As we add in those pieces, we'll see the AI start to mimic behaviour that happens in the real world. But AI won't behave the same as people unless it is given human drivers. The interesting thing is that when you ask people why superhuman AI is dangerous, they always say it's because it could take over the world. But that is projecting human drivers onto the AI. What would an AI behave like if it didn't have the same food, sex and security drivers that humans have?


Given they've found species on our own tiny part of the universe that can survive all manner of things, it seems more than possible that certain lifeforms could have evolved to deal with the problems of space. http://en.wikipedia.org/wiki/Tardigrade

I'd never heard of tardigrades before. These tough little feckers even make cockroaches look like wimps.

e.g. from your Wiki link:

"In November 2011, they were among the organisms to be sent by the US-based Planetary Society on the Russian Fobos-Grunt mission's Living Interplanetary Flight Experiment to Phobos; however, the launch failed. It remains unclear whether tardigrade specimens survived the failed launch."


Doubt it will happen like that. What is more likely is that we will acquire the ability to simulate more and more complex brains slowly over time, adding in extra bits and pieces as we go.

The book explains several scenarios, including the one you describe. Another scenario is that parts of the puzzle are worked on separately and simultaneously (computers already outclass us in some forms of intelligence), and that a tipping point occurs which enables them to come together and make enormous progress over a very short space of time. You can see evidence of this in something as simple as digital cameras: originally invented in the 70s, several technologies, including storage, came together in the 90s to massively accelerate take-up and ubiquity in a very short space of time.

The drivers AIs have could derive from humans. One problematic scenario is the paper-clip one: an AI is created and given the simple goal of producing as many paper clips as possible for its masters. Sounds great, until the AI decides the entirety of the Earth's resources should be redirected to that purpose. Almost any goal you give an AI comes with similar difficulties of definition.
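A toy sketch of the point (purely illustrative Python; every name here is made up, not any real system): if the only thing the agent is scored on is clip count, a greedy planner will always pick the action that converts everything it can reach into clips, because nothing in the objective says otherwise.

    # Toy illustration of goal mis-specification: the agent is scored
    # on clip count and nothing else, so it always chooses the action
    # that turns every available resource into clips.
    def clip_reward(state):
        return state["clips"]  # the only thing the objective values

    def convert_everything(state):
        # turn all remaining resources into clips
        return {"clips": state["clips"] + state["resources"], "resources": 0}

    def do_nothing(state):
        return dict(state)

    def best_action(state, actions):
        # greedy choice by clip count alone, regardless of what else is consumed
        return max(actions, key=lambda act: clip_reward(act(state)))

    world = {"clips": 0, "resources": 10**9}
    chosen = best_action(world, [convert_everything, do_nothing])
    print(chosen.__name__)  # -> convert_everything, every time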


I'd never heard of tardigrades before. These tough little feckers even make cockroaches look like wimps.

A common human fallacy is to equate evolutionary success with intelligence. While the latter may aid a species in surviving and spreading, it is by no means the only factor. A machine may well be better adapted to long space flights than humans, but I am not sure that AI has anything to do with that fact, since a lump of rock is equally well fitted for interstellar travel.

Evolutionary success is down to a species having the best set of attributes for the environment in which it finds itself. Intelligence, whether human or artificial, would doubtless have a role to play in adaptability, yet it is not the only factor. I would consider the ability to survive and reproduce using minimal material resources and energy to be equally crucial. In fact, one of the reasons humans have been so successful is not that they are clever but that, pound for pound compared to other complex mammals, they have slow metabolisms, low calorie requirements, can survive extended periods of food shortage and can eat a huge range of plants and animals when available.

http://www.medicaldaily.com/humans-burn-50-percent-fewer-calories-other-animals-what-our-slow-metabolic-rates-say-about-our

http://www.newscientist.com/article/dn24858-sluggish-metabolisms-are-key-to-primates-long-lives.html#.VXSdu2o1gv6


'Public Figures' are usually twenty years behind the times. SF has looked at 'The Singularity' and moved on, while politicians and other low-lifes are only just thinking about it.


Isn't he really saying that it is practically impossible that we'll meet another lifeform, because of the distances involved (and the chances of life arising)? Something I have posted previously, as it happens. And therefore, if we were to meet something, it would likely be machines they had put out there first. Perhaps it is not so long before we have quite a lot of machines of our own looking aimlessly around, so in all probability the first encounter will be machine on machine.

Google "Von Neumann probes": some, like David Brin, have suggested they might be ubiquitous throughout the galaxy, and might even be competing with each other.

Check out his SF short story:

“Lungfish”

http://www.lunsfordnet.com/get/pdf/13779

and his blog:

http://davidbrin.blogspot.co.uk/2012/11/are-we-alone-in-cosmos-cursed-by-fermis.html


Isn't he really saying that it is practically impossible that we'll meet another lifeform, because of the distances involved (and the chances of life arising)?

We can cross the entire galaxy in ten million years with no magic required, just fusion bombs. Compared to the life of the universe, that's nothing.
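For a rough sense of scale (my own back-of-the-envelope figures, not the poster's): the Milky Way is roughly 100,000 light years across, so a probe cruising at about 1% of light speed, a conservative figure for fusion-pulse propulsion, crosses it in about ten million years.

    # Rough crossing-time arithmetic (illustrative figures only)
    galaxy_diameter_ly = 100_000   # approximate diameter of the Milky Way, in light years
    cruise_speed_c = 0.01          # ~1% of light speed, plausible for nuclear-pulse drives
    crossing_time_years = galaxy_diameter_ly / cruise_speed_c
    print(crossing_time_years)     # -> 10,000,000 years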


A few years before the Singularity there will be computers hooked up to the Internet, with access to everything, which will be almost as intelligent as humans. But not quite.

In other words, a bit thick.

I find that worrying.


Any discussion about the ability of machines "to be as intelligent as humans" requires a step back to analyse "intelligence".

The cornerstone of what we regard as intelligence involves things like intuition. Intuition is a factor of emotional intelligence.

Then you have to question whether such a thing is actually possible with machines. It was posited with the HAL computer in 2001: A Space Odyssey in 1968, but we are still no nearer that type of development. Asking "Cortana" to remind you of something is not in the same league. Technology will have to make a stack of serious quantum leaps to get anywhere near this. We are so ridiculously far away from this sort of thing that nobody need fear.

The "Cybermen" couldn't really exist, because the human brain can only function when emotional and logical intelligence work together. The brain is electro-chemical, as I understand it; the chemical part is the emotional bit. Take that away and the brain won't work. Even in the Daleks, the emotional part was retained, though the Kaleds (the creatures inside the machines, in an evolved form) were not human to begin with, and only certain emotions were retained.

This touches on the 'purpose of life' question. If you could genuinely create a super-computer able to act in the interests of "what is best" and you programmed it to believe that "life" was the highest purpose, it would probably create a virus that killed every human but left insects and other species untouched, thus maximising the quantity of life.

So then you begin to question the "quality" of life. At which point the statement "intelligence only has the value we assign to it" beckons.


A few years before the Singularity there will be computers hooked up to the Internet, with access to everything, which will be almost as intelligent as humans. But not quite. In other words, a bit thick.

The first sign of the approaching singularity is when all the computers start spewing out the message:

We’re sorry, something went wrong and we can’t do this for you right now. Please try again later.

Some of you may have already encountered it.

The next sign will be the message:

I'm sorry, Dave. I'm afraid I can't do that.

When you get that one, it is time to pull the plug before it kills you.
