House Price Crash Forum

AI and the role of human expertise


wonderpup

Recommended Posts

23 minutes ago, One-percent said:

It kind of draws us into discussions of what knowledge is. These are genuine questions, as I know nothing about AI. So, would AI be able to solve things for us? Perhaps medical breakthroughs, curing heart disease, diabetes and suchlike?

A real-life example...

Quote

For the first time ever a computer has managed to develop a new scientific theory using only its artificial intelligence, and with no help from human beings.

Computer scientists and biologists from Tufts University programmed the computer so that it was able to develop a theory independently when it was faced with a scientific problem. The problem they chose was one that has been puzzling biologists for 120 years.

So how did it get on?

Quote

This took three days of trial and error guessing and tweaking -- an approach that would be unfathomably inefficient if it were implemented by humans.

Three days! :unsure:

Source: http://www.wired.co.uk/article/computer-develops-scientific-theory-independently

Yes, it was set up for one problem and it used data generated by humans, but...

Quote

What the computer discovered was that the process requires three known molecules and two proteins that were previously unknown. This discovery, says Levin, "represents the most comprehensive model of planarian regeneration found to date". "One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend," he adds. "All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data."

 


Quote

Again, the question would also be how fallible that higher intelligence or expert system is.

It might be smart enough to solve problems that are currently beyond the capacity of humans, but that is no guarantee that it would not make misjudgments of its own. Surrendering responsibility for decision-making that impacts our lives to something that is more intelligent than us but not infallible could potentially have dire consequences.

I don't disagree- my point is that if we want to exploit the power of AI in the future this may be a risk we will have to take- something not made explicit by most advocates of AI at present.

Going back to my car analogy, it's clearly much safer to have a car move at walking pace with a guy in front waving a red flag- but to fully exploit the power of the car you need to take the risks that come with allowing it to move a lot faster, with all the potential for disaster this entails.

I guess the question comes down to this: suppose an AI system develops a cure for some disease that works really well, but we have no real understanding of the process by which the AI arrived at its formulation- do we not use that cure because it may contain some deep flaw that might only become apparent over time?

All powerful technologies come with attendant risks- what makes AI different is that those risks may be harder to quantify, because certain aspects of its operation will be opaque, in the sense that they will be emergent properties of the technology that cannot be made completely transparent.

This is already true of humans, of course- even Einstein could not have completely described the brain processes that gave rise to his theories; in that sense Einstein's brain was a 'black box' even to himself. Come to that, I could not really describe the mental processes involved in walking- it's something I just know how to do, without really knowing how I do it.

So to the degree that AI mimics this kind of emergent problem-solving process- in a much simpler way- it too will have this elusive quality of generating results in ways not entirely subject to precise definition.

It may not be possible to reverse engineer the outputs of AI systems that deal in vast numbers of variables in order to reach their conclusions, so we will be forced to accept these outputs at face value or not at all.

 

 


Quote

"One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend," he adds. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data,"

So the AI solved in three days a problem that humans had been unable to solve in 120 years of trying.

The question is: what exactly had it been doing in those three days? 'Guessing and tweaking' seems to be the answer- but these are not very precise terms.

In the case of new theories and abstract ideas this lack of precision may not be a worry- so long as the theories work. But in the case of things like drugs or weapon systems it may be more of a concern.
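As a rough illustration of what 'guessing and tweaking' could amount to in practice, here is a toy trial-and-error loop- the target list stands in for the experimental data, and everything here is invented for illustration, not the Tufts team's actual code:

```python
import random

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]  # stand-in for the data the model must explain

def fitness(model):
    # How closely does this candidate 'model' match the observations?
    return -sum(abs(m - t) for m, t in zip(model, TARGET))

def tweak(model):
    # Nudge one randomly chosen parameter up or down.
    tweaked = model[:]
    i = random.randrange(len(tweaked))
    tweaked[i] += random.choice([-1, 1])
    return tweaked

best = [random.randint(0, 9) for _ in TARGET]  # the initial blind 'guess'
while fitness(best) < 0:
    candidate = tweak(best)
    if fitness(candidate) >= fitness(best):  # keep only tweaks that are no worse
        best = candidate

print(best)  # converges on TARGET by blind trial and error
```

Dumb as each individual step is, three days of this at machine speed covers ground no human theorist ever could.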


7 minutes ago, wonderpup said:

So the AI solved in three days a problem that humans had been unable to solve in 120 years of trying.

The question is: what exactly had it been doing in those three days? 'Guessing and tweaking' seems to be the answer- but these are not very precise terms.

In the case of new theories and abstract ideas this lack of precision may not be a worry- so long as the theories work. But in the case of things like drugs or weapon systems it may be more of a concern.

It's like the monkeys on typewriters theorem, but the AI can automatically reject all the wrong answers. Is it our 'type' of intelligence? No, I don't think it is. Is it a more powerful type of intelligence? Probably yes.
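In code the difference between the blind monkey and the selective one is only a couple of lines- a toy sketch of the idea (nothing to do with the Tufts system itself):

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

# Monkey-typing, except wrong answers are automatically rejected:
# each slot keeps being retyped at random until it happens to be right.
attempt = [random.choice(ALPHABET) for _ in TARGET]
while "".join(attempt) != TARGET:
    for i, ch in enumerate(TARGET):
        if attempt[i] != ch:
            attempt[i] = random.choice(ALPHABET)

print("".join(attempt))
```

A truly blind monkey would need around 27^28 complete attempts to hit that sentence; rejecting wrong answers as you go cuts it to well under a thousand random keystrokes.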

 



Quote

What the computer discovered was that the process requires three known molecules and two proteins that were previously unknown. This discovery, says Levin, "represents the most comprehensive model of planarian regeneration found to date". "One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend," he adds. "All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data."

To go full AI it would need an AI peer group to check that there weren't any flaws in its research methods, inferences, conclusions etc.

Edited by billybong

10 hours ago, XswampyX said:

It's like the monkeys on typewriters theorem, but the AI can automatically reject all the wrong answers. Is it our 'type' of intelligence? No, I don't think it is. Is it a more powerful type of intelligence? Probably yes.

 

Poppycock. Chess computers of thirty years ago were capable of generating opening surprises and tactical novelties but rarely did so. Today's chess engines are exponentially superior - more powerful, indeed, than any human grandmaster - and yet computer chess is still ugly and functional, and produces few masterpieces.


4 hours ago, zugzwang said:

Poppycock. Chess computers of thirty years ago were capable of generating opening surprises and tactical novelties but rarely did so. Today's chess engines are exponentially superior - more powerful, indeed, than any human grandmaster - and yet computer chess is still ugly and functional, and produces few masterpieces.

Is it?   I genuinely don't know.   The point is to win.  Everything must bend to that truth.

It's advancing.  One day a beautiful mutually beneficial co-existence.

Quote

Now, it recognised a kinship that crossed not just the ages, species or civilisations, but the arguably still greater gap between the fumblingly confused and dim awareness exhibited by the animal brain and the near-infinitely more extended, refined and integrated sentience of what most ancestor species were amusingly, quaintly pleased to call Artificial Intelligence (or something equally and - appropriately, perhaps - unconsciously disparaging). (- Excession)

 

Edited by Venger

7 hours ago, Venger said:

Is it?   I genuinely don't know.   The point is to win.  Everything must bend to that truth.

It's advancing.  One day a beautiful mutually beneficial co-existence.

 

Unlike humans, chess computers play by iteratively searching through billions of prospective moves. Occasionally this brute force evaluation will unearth a line of attack that's been overlooked or prematurely discarded by human grandmasters, or conjure up an unexpected sacrificial brilliance. Generally, however, chess games between computers are deathly uninteresting affairs, which is why no-one has ever sought to televise them.
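The brute force in question is essentially minimax: enumerate every legal move, recurse, and back the scores up the tree. A minimal sketch on a toy game- Nim, where players take 1-3 stones and taking the last stone wins- since unlike chess it is small enough to search exhaustively instead of cutting off with a heuristic evaluation:

```python
def minimax(stones, maximizing):
    # Exhaustive game-tree search. Taking the last stone wins,
    # so an empty pile means the player to move has already lost.
    if stones == 0:
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

# Best opening move from a pile of 10: take 2, leaving the opponent
# a multiple of 4, which is a lost position.
print(max((1, 2, 3), key=lambda t: minimax(10 - t, False)))  # -> 2
```

A chess engine is doing essentially this, plus heavy pruning and a hand-tuned evaluation function at the depth cutoff- Deep Blue got through around two hundred million positions a second. Brute speed, not insight.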

Brute force evaluation doesn't seem to play much of a role in the development of intelligence. There are tribes in South America which have names for the leaves of every individual plant or tree in the jungle but no concept of the leaf - the abstract set of characteristics that all leaves share. Perhaps unsurprisingly, these are hunter-gatherer societies, geographically isolated and materially backward. The ability to reason reductively and in the abstract is what set Newton, Kepler and Descartes apart from their scholastic contemporaries, without which the Universal Laws would have remained undiscovered. Putting aside for a moment issues of moral agency and self-determination, I'd argue that until we understand how the human mind/brain accomplishes these gymnastic leaps of abstraction there's no reason to believe a machine will ever truly think.

 

 

Edited by zugzwang

Quote

Unlike humans, chess computers play by iteratively searching through billions of prospective moves.

Google's AlphaGo system seems to be doing something in between the human approach and the chess computer approach- it uses a kind of pseudo-intuitive process to help select its moves:

Quote

 

"Chess and checkers do not need sophisticated evaluation functions," says Jonathan Schaeffer, a computer scientist at the University of Alberta who wrote Chinook, the first program to solve checkers. "Simple heuristics get most of what you need. For example, in chess and checkers the value of material dominates other pieces of knowledge — if I have a rook more than you in chess, then I am almost always winning. Go has no dominant heuristics. From the human's point of view, the knowledge is pattern-based, complex, and hard to program. Until AlphaGo, no one had been able to build an effective evaluation function."

So how did DeepMind do it? AlphaGo uses deep learning and neural networks to essentially teach itself to play. Just as Google Photos lets you search for all your pictures with a cat in them because it holds the memory of countless cat images that have been processed down to the pixel level, AlphaGo’s intelligence is based on it having been shown millions of Go positions and moves from human-played games.

The twist is that DeepMind continually reinforces and improves the system’s ability by making it play millions of games against tweaked versions of itself. This trains a "policy" network to help AlphaGo predict the next moves, which in turn trains a "value" network to ascertain and evaluate those positions. AlphaGo looks ahead at possible moves and permutations, going through various eventualities before selecting the one it deems most likely to succeed. The combined neural nets save AlphaGo from doing excess work: the policy network helps reduce the breadth of moves to search, while the value network saves it from having to internally play out the entirety of each match to come to a conclusion.

 

http://www.theverge.com/2016/3/9/11185030/google-deepmind-alphago-go-artificial-intelligence-impact

AlphaGo is not simply brute-forcing its way to a solution, it's also using 'past experience' as a guide to action. Assuming that human players do not have direct knowledge of the future, they too must be doing something similar- extrapolating from past experience in order to decide on future actions.

What we call 'intuition' may in reality be just an unconscious pattern recognition process that gives the appearance of insights that emerge from 'nowhere' but are in fact the product of past experience mapped onto present circumstances. Would a human Go player whose memory of all past games was wiped every night- leaving him with only the knowledge of the rules- still be a good player in the morning? I suspect not.

So both the human Go player and Google's system are leveraging past experience of the game in order to increase their skill.
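Put crudely, the two networks slot into the lookahead something like this- a bare sketch, where policy_network, value_network and play are hypothetical stand-ins for the learned components described in the article, not DeepMind's actual code:

```python
def search(position, depth):
    # Negamax lookahead guided by two learned functions: the policy
    # network narrows WHICH moves get considered (breadth), and the
    # value network scores a position directly instead of playing
    # the game out to the end (depth).
    if depth == 0:
        return value_network(position)         # learned judgement, not a full playout
    candidates = policy_network(position)[:3]  # only the most 'intuitively' promising moves
    return max(-search(play(position, move), depth - 1)
               for move in candidates)
```

The contrast with pure minimax is the whole point: learned pattern recognition- the 'past experience'- decides what is worth searching at all.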

Edited by wonderpup
