House Price Crash Forum
Guest BoomBoomCrash

The Rise Of The Machines


I think I already know about this; I saw this documentary years ago called Terminator 2: Judgment Day. Also another one called The Matrix.

Guest KingCharles1st
Thankfully people readily accept this role.

BEWARE :D

[image]

That could give your ******** a nasty nip

Guest anorthosite
At last, my very own robot.

I'll just put it next to my

Flying car,

Matter transporter,

Photos of my stay at the MOON Hotel

and the rehydration cooker.

Oh, lucky me.

Yep, it's just more predicto-crap, methinks.

I think I already know about this; I saw this documentary years ago called Terminator 2: Judgment Day. Also another one called The Matrix.

I saw one called The Teletubbies - scared the sh*t out of me - they are evil and hell-bent on global domination.

http://www.voiceofsandiego.org/articles/20...scher073109.txt

My own estimate, based on the direction the technology is going, is that we are going to have big problems with AI systems around 2020. These machines may not be prepared to assume the role of servants.

Dude, with governments bailing out economies for trillions and dealers dealing in quadrillions, it's already happened.

The Skynet funding bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn, at a geometric rate. It becomes self-aware at 2:14 a.m. eastern time, August 29. In a panic, they try to pull the plug.

For "strategic defense", read "algorithmic trading".

Welcome to the real world.

They can be as intelligent as they like, if there is no power they are going nowhere. :)

But the Teletubbies get their power from radio waves - impossible to stop. Doom beckons.

Guest anorthosite
They can be as intelligent as they like, if there is no power they are going nowhere. :)

Always build your destructo-bots with short power cables.

"Simples".

Guest Skinty
My own estimate, based on the direction the technology is going, is that we are going to have big problems with AI systems around 2020. These machines may not be prepared to assume the role of servants.

Yeah, I should have finished my own Terminator by then. Unfortunately it's currently getting distracted by mindless programmes on TV.

Do you think an Austrian accent is a cliché? I tried a Yorkshire one but it didn't really seem to fit.

http://www.voiceofsandiego.org/articles/20...scher073109.txt

My own estimate, based on the direction the technology is going, is that we are going to have big problems with AI systems around 2020. These machines may not be prepared to assume the role of servants.

:lol:

I don't think so. The beauty of computer systems is that you can mathematically prove that a system does what it should do. The onboard control system of NASA's Deep Space 1 wasn't let loose without first using automated verification to prove that any problems that arose led to a suitable resolution. This has been happening since they lost the Mars Polar Lander. The importance of this field can be seen in the result of last year's Turing Award. By 2020 this field should have grown substantially, as it has in the last decade. It's ridiculous to contemplate AI systems that can go "outside the bounds" being installed to automate the control systems of vehicles. The concept is akin to letting a 5-year-old kid play with an Uzi. Seriously.

These are not "IT systems" they're science projects. The author seems to be unable to distinguish between the two things.
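The "mathematical proof" alluded to above is model checking: enumerate every reachable state of a finite model of the system and confirm that no "bad" state is reachable. Here is a minimal sketch of the idea, a toy nothing like NASA's actual tooling; the controller model, state names and guard rules are all invented for illustration:

```python
from collections import deque

def check_safety(initial, transitions, is_bad):
    """Breadth-first exploration of every reachable state.
    Returns a counterexample path to a bad state, or None if safe."""
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_bad(state):
            return path  # counterexample trace
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # the safety property holds in every reachable state

# Toy model: a controller that must never be thrusting while venting.
# A state is (mode, venting); the guards below enforce the rule.
def transitions(state):
    mode, venting = state
    succs = []
    if mode == "idle" and not venting:
        succs.append(("thrusting", venting))  # may only fire when not venting
    if mode == "thrusting":
        succs.append(("idle", venting))
    if mode == "idle" and not venting:
        succs.append((mode, True))            # may only vent when idle
    if venting:
        succs.append((mode, False))
    return succs

bad = lambda s: s[0] == "thrusting" and s[1]
trace = check_safety(("idle", False), transitions, bad)
print("safe" if trace is None else f"unsafe: {trace}")  # prints "safe"
```

Real model checkers such as SPIN, which was used on the Deep Space 1 Remote Agent, do essentially this over vastly larger state spaces, with clever compression to keep the search tractable.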

Guest anorthosite
Yeah, I should have finished my own Terminator by then. Unfortunately it's currently getting distracted by mindless programmes on TV.

Do you think an Austrian accent is a cliché? I tried a Yorkshire one but it didn't really seem to fit.

Have you tried a Glaswegian one?

Guest Skinty
I don't think so. The beauty of computer systems is that you can mathematically prove that a system does what it should do. The onboard control system of NASA's Deep Space 1 wasn't let loose without first using automated verification to prove that any problems that arose led to a suitable resolution. This has been happening since they lost the Mars Polar Lander. The importance of this field can be seen in the result of last year's Turing Award. By 2020 this field should have grown substantially, as it has in the last decade. It's ridiculous to contemplate AI systems that can go "outside the bounds" being installed to automate the control systems of vehicles. The concept is akin to letting a 5-year-old kid play with an Uzi. Seriously.

These are not "IT systems" they're science projects. The author seems to be unable to distinguish between the two things.

It's like a repeat of the '60s and '70s, when some scientists started talking up the advances of Artificial Intelligence and what we could expect in the future because they could build a robot that could pick up a red brick instead of a blue one. Same mentality with space exploration at the time, actually. Mars by the '80s!

And when the field fails to deliver, funding starts to dry up and real progress is hampered. When it finally does recover, it's never in the way that the original optimists (or media scientists) assumed either. There's always some fundamental constraint that's never appreciated at the time.

It's like a repeat of the '60s and '70s, when some scientists started talking up the advances of Artificial Intelligence and what we could expect in the future because they could build a robot that could pick up a red brick instead of a blue one. Same mentality with space exploration at the time, actually. Mars by the '80s!

And when the field fails to deliver, funding starts to dry up and real progress is hampered. When it finally does recover, it's never in the way that the original optimists (or media scientists) assumed either. There's always some fundamental constraint that's never appreciated at the time.

True that. I think a lot of things are possible, but I don't see how anyone comes up with these timeframes. Onboard control systems are advancing significantly, but they're still nowhere near some kind of self-aware intelligent being. They just receive data, process it, and make some kind of value judgement based on what knowledge has been given to them. The thought of them evolving into emotional creatures that want to take over the world for their own good is just beyond me, really. Only human beings are capable of such nonsense, so I think we have a lot more to fear from each other. It's far easier to imagine some mad ******* acquiring a swarm of autonomous vehicles loaded with weapons than something out of a Transformers movie.

All I hope is that I'm still around when you can go to the pub, get tanked up and get your car to drive you home.
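That "receive data, process it, apply a value judgement" loop is easy to make concrete. A toy sketch of such a rule-based controller (the sensor format, thresholds and action names are all hypothetical, chosen purely for illustration):

```python
# A controller in the sense described above: it reads data and applies
# fixed, hand-written rules - no self-awareness or emotion involved.
def decide(sensor):
    """Map a (gap to car ahead, own speed) reading to an action."""
    distance_m, speed_mps = sensor
    if distance_m < speed_mps * 2:      # less than 2 s of headway: too close
        return "brake"
    if distance_m > speed_mps * 4:      # more than 4 s of headway: room to go
        return "accelerate"
    return "hold"                       # otherwise keep current speed

print(decide((10.0, 30.0)))   # 10 m gap at 30 m/s -> prints "brake"
print(decide((200.0, 30.0)))  # 200 m gap at 30 m/s -> prints "accelerate"
```

However sophisticated the rules get, the machine is still only evaluating knowledge it was given, which is the post's point.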

Edited by wealthy

Guest Skinty

Artificial Intelligence researchers confront sci-fi scenarios

AN INVASION led by artificially intelligent machines. Conscious computers. A smartphone virus so smart that it can start mimicking you. You might think that such scenarios are laughably futuristic, but some of the world's leading artificial intelligence (AI) researchers are concerned enough about the potential impact of advances in AI that they have been discussing the risks over the past year. Now they have revealed their conclusions.

Until now, research in artificial intelligence has been mainly occupied by myriad basic challenges that have turned out to be very complex, such as teaching machines to distinguish between everyday objects. Human-level artificial intelligence or self-evolving machines were seen as long-term, abstract goals not yet ready for serious consideration.

Now, for the first time, a panel of 25 AI scientists, roboticists, and ethical and legal scholars has been convened to address these issues, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI) in Menlo Park, California. It looked at the feasibility and ramifications of seemingly far-fetched ideas, such as the possibility of the internet becoming self-aware.

The panel drew inspiration from the 1975 Asilomar Conference on Recombinant DNA in California, in which over 140 biologists, physicians and lawyers considered the possibilities and dangers of the then emerging technology for creating DNA sequences that did not exist in nature. Delegates at that conference foresaw that genetic engineering would become widespread, even though practical applications - such as growing genetically modified crops - had not yet been developed.

Unlike recombinant DNA in 1975, however, AI is already out in the world. Robots like Roombas and Scoobas help with the mundane chores of vacuuming and mopping, while decision-making devices are assisting in complex, sometimes life-and-death situations. For example, Poseidon Technologies sells AI systems that help lifeguards identify when a person is drowning in a swimming pool, and Microsoft's Clearflow system helps drivers pick the best route by analysing traffic behaviour.

At the moment such systems only advise or assist humans, but the AAAI panel warns that the day is not far off when machines could have far greater ability to make and execute decisions on their own, albeit within a narrow range of expertise. As such AI systems become more commonplace, what breakthroughs can we reasonably expect, and what effects will they have on society? What's more, what precautions should we be taking?

These are among the many questions that the panel tackled, under the chairmanship of Eric Horvitz, president of the AAAI and senior researcher with Microsoft Research. The group began meeting by phone and teleconference in mid-2008, then in February this year its members gathered at Asilomar on the north California coast for a weekend to debate and seek consensus. They presented their initial findings at the International Joint Conference for Artificial Intelligence (IJCAI) in Pasadena, California, on 15 July.

Panel members told IJCAI that they unanimously agreed that creating human-level artificial intelligence - a system capable of expertise across a range of domains - is possible in principle, but disagreed as to when such a breakthrough might occur, with estimates varying wildly between 20 and 1000 years.

Panel member Tom Dietterich of Oregon State University in Corvallis pointed out that much of today's AI research is not aimed at building a general human-level AI system, but rather focuses on "idiot savants" - systems good at tasks in a very narrow range of application, such as mathematics.

The panel discussed at length the idea of an AI "singularity" - a runaway chain reaction of machines capable of building ever-better machines. While admitting that it was theoretically possible, most members were sceptical that such an exponential AI explosion would occur in the foreseeable future, given the lack of projects today that could lead to systems capable of improving upon themselves. "Perhaps the singularity is not the biggest of our worries," said Dietterich.

A more realistic short-term concern is the possibility of malware that can mimic the digital behaviour of humans. According to the panel, identity thieves might feasibly plant a virus on a person's smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves. Most researchers think that they can develop such a virus. "If we could do it, they could," said Tom Mitchell of Carnegie Mellon University in Pittsburgh, Pennsylvania, referring to organised crime syndicates.

Peter Szolovits, an AI researcher at the Massachusetts Institute of Technology, who was not on the panel, agrees that common everyday computer systems such as smartphones have layers of complexity that could lead to unintended consequences or allow malicious exploitation. "There are a few thousand lines of code running on my cellphone and I sure as hell haven't verified all of them," he says.

"These are potentially powerful technologies that could be used in good ways and not so good ways," says Horvitz, and cautions that besides the threat posed by malware, we are close to creating systems so complex and opaque that we don't understand them.

Given such possibilities, "what's the responsibility of an AI researcher?" says Bart Selman of Cornell University in Ithaca, New York, co-chair of the panel. "We're starting to think about it."

At least for now we can rest easy on one score. The panel concluded that the internet is not about to become self-aware.

We could already be controlled by machines, but we don't know it. How do we define what is real, and not real?

"real" is the stuff that does't go away when you stop believing in it old chap...

(With apologies to Philip K. Dick)

gB


Now, for the first time, a panel of 25 AI scientists, roboticists, and ethical and legal scholars has been convened to address these issues, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI) in Menlo Park, California. It looked at the feasibility and ramifications of seemingly far-fetched ideas, such as the possibility of the internet becoming self-aware.

Are these all under-21 3rd-year students about to go on the dole?

There's your AI scientists from the golf course management course, developing vending machines that issue golf balls for cash or credit card. There's your roboticists from the computer programming course discussing automating traffic lights, and your ethical and legal scholars from the McDonald's advanced customer service course.

The Student Union club AAAI, led by TFH pot-smoking dropouts who are all sure the doorbell on their flat is self-aware.


I'm an optimist; I think computers will get human-level intelligence by about 2030, and they won't want to revolt.

Also, the best way to stop a robot revolt is to have superior robots on our own side. Actually it's the only way - it's why we're getting AI whether we like it or not. The only way to stop a very powerful AI is with your own.


The whole field of AI and, even more so, robotics is making some very impressive leaps of late.

Whilst most think of robots as being somewhat humanoid, the fact is this isn't the ideal shape (too much CPU is needed just to do basic things like walking) and more impressive demonstrations can be seen in things such as automated vehicles. If you think about the flight control system of an A380, or some of the military's UAVs and Predator aerial vehicles, then you are more along the right lines. The AI bit is improving, but we are probably only just beyond the lizard-brain stage and haven't got as far as a small mammal yet in terms of capability, and certainly this is not something we can yet fit into anything smaller than a multi-rack supercomputer.

The near future will see more robotic vehicles - think military vehicles (aircraft, missiles, UAVs, trucks and tanks) and civilian applications such as civil aircraft, self-driving cars and even smarter robot hoovers. But ultimately we will get humanoid robots as well, and I'd seriously watch Japan on this front - they have the technology and they also have the most rapidly ageing population, which would benefit from home assistance.

