House Price Crash Forum
wonderpup

AI and the role of human expertise

Recommended Posts

It's pretty clear that in the future analysis performed by computers will play an increasing role in our lives - things like mortgage or loan applications, and even job applications, are being increasingly automated as AI systems armed with big data are used to assess risk and candidate suitability.

But the argument is usually made that the final decision is always made by a human being, with the computer playing a subordinate role to human judgement.

But there is a flaw in this comforting argument. If we accept the idea that AI is useful mainly due to its ability to recognize patterns in huge amounts of data that humans could not detect due to the sheer volume of data involved, then the argument that the final decisions are best made by humans makes no sense - because, by definition, the humans involved will not be able to process the amount of data required to second-guess the machine's conclusion.

So the inevitable endpoint of this process seems to be that the conclusion of the machines will come to be viewed as the optimal conclusion.

Consider the following thought experiment:

In some near-future scenario a descendant of IBM's Watson system has been trained on vast amounts of data, images and medical records covering the identification and treatment outcomes of a given disease - far more information than a human specialist in this field could reasonably expect to have mastered or keep track of.

If we accept the proposition that the AI 'works' - in other words, that the system has proven its worth by accurately diagnosing and recommending successful treatment regimes in the past - then in which party do you place your trust? If, for example, the AI and the human doctor disagree, which of the two has the greater credibility?

On the one side you have a clever and dedicated human being with decades of man-hours of experience, and on the other a non-sentient machine that has the functional equivalent of hundreds of years of man-hours of experience, because it has read millions of case files and every published paper on the subject at hand.

It's by no means clear that in this scenario the human decision is the correct decision - but this is where the real problem begins, because in order for the human to evaluate the computer's decision that human must do the impossible, which is to master the dataset that the computer has used to reach its conclusion.

In other words, in a scenario in which the AI and the human disagree, the human cannot be the final arbiter, for the very reason that the human is not an AI and so cannot replicate its process.

The conclusion seems to me inevitable: if AI systems that demonstrably work, in the sense that they produce consistently successful outcomes, become available in any given field, then the decisions made by those AI systems will soon become the default decisions.

So the notion currently being promoted by the AI community - that AI will always assume a subordinate role to the incumbent human experts in any given field where it is implemented - is, I think, a disingenuous argument. The point being that when it comes to conclusions derived from an AI analysis, the human expert is no more qualified than a non-expert to evaluate the process by which that AI reached its conclusion.

The long-term outcome, then, of a successful implementation of AI in a given field will be that those AI systems will not merely complement the incumbent human experts but eventually replace them.

Sure, in the early stages of this process the human expert will be vital in assessing the performance of the AI - but once a high degree of confidence has been achieved in this performance, the role of the human expert will eventually decline until it is largely irrelevant, allowing that expert to be replaced by a far less experienced and less expensive 'operator'.

 

 

3 hours ago, wonderpup said:

[T]here is a flaw in this comforting argument. If we accept the idea that AI is useful mainly due to its ability to recognize patterns in huge amounts of data that humans could not detect due to the sheer volume of data involved, then the argument that the final decisions are best made by humans makes no sense - because, by definition, the humans involved will not be able to process the amount of data required to second-guess the machine's conclusion.

So the inevitable endpoint of this process seems to be that the conclusion of the machines will come to be viewed as the optimal conclusion.

I disagree that it is clear "the argument that the final decisions are best made by humans makes no sense." I think it would be more accurate to say "I do not find the argument persuasive that decisions are best made by humans." I think the leap from unpersuasive to nonsensical is due to your assumption about the inevitable endpoint of the development of AI. Computers are superior to humans at some things and inferior at others. Is there consensus that AI will achieve total superiority? On what timescale? Until AI is better than humans at everything it makes sense for there to be input from humans, though perhaps it is arguable that AI could or should make the final decision after a human has had a look.

I think the kinds of errors which humans can currently detect are pretty obvious. For example, a human can see an obvious mistake in the data (e.g. on a mortgage application your salary is £50,000, not £50. Or your salary is £50,000 and you don't have 50,000 financial dependents. Or "the computer marked that multiple-choice test and you got zero, and I thought that's odd, so I had a look; it turns out you used the wrong pencil, so the markings weren't picked up by the computer. I re-marked it and you got more than zero.").
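That kind of human spot-check is really just input validation, and a lot of it can itself be written down as code before any model (or underwriter) sees the application. A minimal sketch in Python - the field names and thresholds here are invented for illustration, not taken from any real lending system:

    # Hypothetical sanity checks on a mortgage application, run before any
    # model or human scores it. Field names and limits are illustrative only.
    def sanity_check(application: dict) -> list[str]:
        """Return human-readable warnings for implausible inputs."""
        warnings = []
        salary = application.get("annual_salary_gbp", 0)
        dependents = application.get("financial_dependents", 0)

        if 0 < salary < 1_000:
            warnings.append(f"Salary of £{salary} looks like a typo - did you mean £{salary * 1_000}?")
        if dependents > 20:
            warnings.append(f"{dependents} financial dependents is implausible - possible field mix-up")
        return warnings

    # The £50 salary case above gets flagged rather than silently scored.
    print(sanity_check({"annual_salary_gbp": 50, "financial_dependents": 2}))

The interesting cases for this thread are the ones that pass every obvious check and the machine and the human still disagree.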

And when it comes to the possibility that AI will have much improved social intelligence - it can detect lying and make accurate judgements about character - how does that work? Is it programmed by someone else? If so, you (as a hiring manager) might wish to have suboptimal control rather than be at the mercy of the programming of another human. Or if the AI is genuinely independent of the human programmers (it really learns), might we have reason to be worried about its actions? In that case they might be right to say it's desirable for humans to retain control, but disingenuous, as that may not be possible.


AI has a few ways of getting around this, but I really just want to focus on one.

If you are building a 'deciding machine' which, given a set of inputs, determines which to accept in order to maximise some overall property of the outputs (e.g. given applicants, select only the ones that are profitable to our business; given a set of treatments, select the most cost-effective drug for the number of days the patient's life is extended; given a set of treatments, select the drug that causes the patient the least side effects for the remainder of their palliative care), then you end up with two separate issues to contend with:

  1. Assuming all inputs are known to be correct, how do you generate the calculation? Can you provide a labelled dataset sufficiently large to describe all outcomes?  If I am a bank, I likely have a relatively comprehensive corpus of all loan applications and whether they defaulted, and can allow the machine to derive the indicators of a poor choice to loan (food for thought: how would it be possible for me to ensure the algorithm generated was not sexist or racist or classist?).  If I am a hospital, I have no way to say if a given A&E prescription would have been the least side-effect-inducing, because I only got to try one or two of them before the patient died.  In this case, do I implement my own algorithm (which naturally causes the machine to have exactly the same flaws as the human), or do I need to assume that experiments where different people are given different drugs and their mortality observed are indicative of the population as a whole, and allow the AI to learn from those?  Do the gaps in the machine's knowledge (e.g. suspicions of gender/race effects of a drug for which there is a lack of formal research for the AI to have ingested) therefore mean a medical professional is actually better and has good reason to override?  If the medical professional didn't override, and the patient suffered significantly, we'd have a data point for the algorithm to fix it in future. However, we can't know if the AI was correct if the doctor overrides it, and we can't 'learn' based on the human choice, as that's a step backwards from moving to AI.
  2. Assuming all choices are correct if we have the right information, how do we validate the data?  For some things, like financial position, I think we're already seeing HMRC moving to mass integration with banks.  We're seeing midata on internet banking and the suggestion of read-only accounts for automated data parsing.  We could move to a world where there is no form - where you do not certify anything except to prove your identity, and the machine calculates the rest from data it has from hundreds of partners.  Alternatively, we could say that there is still a form, but that the same checking goes on behind the scenes, and the quantity of lies forms yet another data point for the classification.  Naturally, some things will always be based on humans.  The chart the doctor fills in and notes your pain on is only subjective, but it forms a significant part of selecting which drug or potency of painkiller to use on you.

Computer Science regularly states that AI is a big, hard, messy thing.  The simplest bits are coming out, yes, and they appear highly intelligent, but if you look behind the curtain they can have some very peculiar results.  Overfitting can be a big one, or illogical fitting - "If the applicant's surname begins with an E, it has been determined that the user is 10.6% less likely to default."
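That sort of illogical fitting is easy to reproduce, for what it's worth. A minimal sketch with made-up loan data - the 'surname begins with E' flag and all the numbers are invented purely to illustrate the point:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500

    # Invented data: income genuinely drives default risk; the surname flag has
    # no causal link to repayment at all.
    income = rng.normal(30_000, 10_000, n)
    surname_e = rng.integers(0, 2, n)  # 1 = surname begins with 'E'
    p_default = 1 / (1 + np.exp((income - 25_000) / 5_000))
    default = (rng.random(n) < p_default).astype(int)

    X = np.column_stack([income / 10_000, surname_e])
    model = LogisticRegression().fit(X, default)

    print("weight on income:            ", model.coef_[0][0])
    print("weight on 'surname starts E':", model.coef_[0][1])
    # The second weight is almost never exactly zero: the model will happily
    # attach some weight to noise, which is how spurious 'E surname' rules can
    # appear unless someone audits the features.

Whether anyone ever questions that second weight is exactly the oversight problem being discussed.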

So when it comes to your thought experiment and to the 'disingenuous' claim that the AI will be subordinate, I have three thoughts.

  1. When being developed and run for the first time, systems will essentially be shadowing the human, giving a suggestion, and likely often being ignored or providing the operator with confidence ("I was going to say X, the machine said X, therefore X is probably right!") - there's a rough sketch of what this shadow phase might look like just after this list.  This stage could take completely different amounts of time for different fields.  We might be okay with a system being let loose on a factory in a year, but a self-driving car after ten.  However, what you do at this point depends on how people feel about technology at that point.  I'm sure in some manufacturing places with big spinning blades, they still want giant red STOP buttons all over the place that they can use as an emergency cut-out.
  2. The AI community almost has to say this to stop fears.  No new technology is just thrown into the market and completely replaces the old one.  There is a transitional process, and for the purpose of this transition, people being encouraged to try it have to feel like they still have some form of control.  I would not be surprised if self-driving cars have their steering wheels removed by around 2030.  However, I don't think it's disingenuous.  I think what they're saying is that it will be subordinate until the majority of people stop caring if it's subordinate or not, and start telling the AI people to automate that final step.  When the market need is worth investing to meet, then they will.
  3. I don't think the human professionals will go away.  I think they will move into technology.  They will be the men and women who ensure that appropriate safeguards are put on the system: the firm, inflexible rules that the system should not have to learn by making mistakes, and that it should not be assumed the system will learn automatically (e.g. the system says "Oh, look, we can fit four cars side-by-side on this [three lane] road if we use the dotted road markings to denote where the CENTRE of the car should be!" - things like the highway code should just be fed in, and a machine is not writing the highway code).
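On point 1, here is a rough sketch of what that shadow phase might look like in code - the model's suggestion is only logged next to the human's decision, never acted on, and the agreement rate decides whether it ever gets promoted. The record fields and thresholds are assumptions for illustration:

    from dataclasses import dataclass

    @dataclass
    class ShadowRecord:
        case_id: str
        human_decision: str   # what the expert actually did
        ai_suggestion: str    # what the model would have done (never shown downstream)

    def agreement_rate(log: list[ShadowRecord]) -> float:
        """Fraction of shadowed cases where the model matched the human expert."""
        if not log:
            return 0.0
        return sum(r.human_decision == r.ai_suggestion for r in log) / len(log)

    # Hypothetical governance rule: only consider limited autonomy after the model
    # has tracked the experts closely over a long run of cases.
    log = [
        ShadowRecord("a1", "approve", "approve"),
        ShadowRecord("a2", "decline", "approve"),
        ShadowRecord("a3", "approve", "approve"),
    ]
    if len(log) > 10_000 and agreement_rate(log) > 0.95:
        print("candidate for limited autonomy")
    else:
        print("keep shadowing")

The awkward question, as the OP points out, is what 'matched the human' is worth once the machine is supposed to see things the human can't.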

Quote

I disagree that it is clear "the argument that the final decisions are best made by humans makes no sense." I think it would be more accurate to say "I do not find the argument persuasive that decisions are best made by humans." I think the leap from unpersuasive to nonsensical is due to your assumption about the inevitable endpoint of the development of AI. Computers are superior to humans at some things and inferior at others. Is there consensus that AI will achieve total superiority? On what timescale? Until AI is better than humans at everything it makes sense for there to be input from humans, though perhaps it is arguable that AI could or should make the final decision after a human has had a look.

I think the kinds of errors which humans can currently detect are pretty obvious. For example, a human can see an obvious mistake in the data (e.g. on a mortgage application your salary is £50,000, not £50. Or your salary is £50,000 and you don't have 50,000 financial dependents. Or "the computer marked that multiple-choice test and you got zero, and I thought that's odd, so I had a look; it turns out you used the wrong pencil, so the markings weren't picked up by the computer. I re-marked it and you got more than zero.").

And when it comes to the possibility that AI will have much improved social intelligence - it can detect lying and make accurate judgements about character - how does that work? Is it programmed by someone else? If so, you (as a hiring manager) might wish to have suboptimal control rather than be at the mercy of the programming of another human. Or if the AI is genuinely independent of the human programmers (it really learns), might we have reason to be worried about its actions? In that case they might be right to say it's desirable for humans to retain control, but disingenuous, as that may not be possible.

 

When the first motor cars appeared, a law was passed that a human had to walk in front of the car waving a flag to warn other road users of its approach. The flaw in this scheme is obvious: all the speed and endurance advantages of the car were completely negated by the need to accommodate the physical limitations of the flag-waving human.

Taking this as a crude analogy, a similar problem arises in the case of AI if we insist that all its output must be validated by a human - because by definition that human can only validate output that he or she completely understands, which means that they would need to understand the exact process by which that output was achieved. In which case, why bother with AI - why not just have the human generate the output in the first place?

So the fact that AI is to some degree a 'black box' is the only reason it has any value at all. If the entire process by which a self-learning system arrived at its conclusions were totally transparent to its human operators, those human operators would themselves be capable of duplicating that process, negating the need for the AI.

An AI has value to the degree that its internal process is opaque and incomprehensible to its creators - that it is doing something that they themselves could not do, by processing vast amounts of data to derive patterns and conclusions that a human could not identify.

But that being so, how can it then make any sense to argue that the human is able to second-guess these conclusions? What it boils down to is this: we will either trust the output of the AI on its own merit, in which case we no longer require the human, or we will not trust that output, in which case the AI has no value.

Think of it this way: if you stood me next to a mathematical genius and asked me to check his answers to some complex math questions I would not be able to do that - but if you only asked him simple questions that I would be able to check, then you would not need him in the first place; just put your questions to me instead.

 

 


Humans should be able to override the machine if it screws up. If every decision made by AI is accepted without question, absurdity follows. Computer says no.


 

Quote

So when it comes to your thought experiment and to the 'disingenuous' claim that the AI will be subordinate, I have three thoughts.

  1. When being developed and run for the first time, systems will essentially be shadowing the human, giving a suggestion, and likely often being ignored or providing the operator with confidence ("I was going to say X, the machine said X, therefore X is probably right!").  This stage could take completely different amounts of time for different fields.  We might be okay with a system being let loose on a factory in a year, but a self-driving car after ten.  However, what you do at this point depends on how people feel about technology at that point.  I'm sure in some manufacturing places with big spinning blades, they still want giant red STOP buttons all over the place that they can use as an emergency cut-out.
  2. The AI community almost has to say this to stop fears.  No new technology is just thrown into the market and completely replaces the old one.  There is a transitional process, and for the purpose of this transition, people being encouraged to try it have to feel like they still have some form of control.  I would not be surprised if self-driving cars have their steering wheels removed by around 2030.  However, I don't think it's disingenuous.  I think what they're saying is that it will be subordinate until the majority of people stop caring if it's subordinate or not, and start telling the AI people to automate that final step.  When the market need is worth investing to meet, then they will.
  3. I don't think the human professionals will go away.  I think they will move into technology.  They will be the men and women who ensure that appropriate safeguards are put on the system: the firm, inflexible rules that the system should not have to learn by making mistakes, and that it should not be assumed the system will learn automatically (e.g. the system says "Oh, look, we can fit four cars side-by-side on this [three lane] road if we use the dotted road markings to denote where the CENTRE of the car should be!" - things like the highway code should just be fed in, and a machine is not writing the highway code).

As I explained in my post above, the notion that the judgements of AI systems will always remain subordinate to the judgements of humans negates the purpose of building AIs in the first place - just as placing a human pedestrian in front of a car negates the purpose of building the car.

Just as it makes no sense to build a car and then run along in front of it, it makes no sense to build an AI and not, in the long term, trust its output. At some point the human expert must be dispensed with in order for the AI to have any value.

Quote

Humans should be able to override the machine if it screws up. If every decision made by AI is accepted without question, absurdity follows. Computer says no.

True up to a point - if AI systems are 'trained' there is a learning period where humans will correct errors and override decisions - but the whole point of building an AI is to arrive at the point where this is no longer the case.

If the purpose of AI is to generate insights that humans alone could not, then by definition those insights cannot be fully evaluated by those humans - there will come a point where those insights must be taken 'on trust'.

But once we arrive at the point where this trust exists, the AI will have taken on the role of those we describe as 'experts', in the sense that an expert is one whose assertions we cannot evaluate ourselves due to lack of knowledge, but accept anyway because we 'trust their judgement'.

 

 

 

 

28 minutes ago, wonderpup said:

 

As I explained in my post above, the notion that the judgements of AI systems will always remain subordinate to the judgements of humans negates the purpose of building AIs in the first place - just as placing a human pedestrian in front of a car negates the purpose of building the car.

Just as it makes no sense to build a car and then run along in front of it, it makes no sense to build an AI and not, in the long term, trust its output. At some point the human expert must be dispensed with in order for the AI to have any value.

Not necessarily - the AI could do all the grunt work and give the results where, say, there was 100% confidence, leaving edge cases to be given further human consideration. You could have a medical system where you had multiple AI systems, and if they all came out with correlating diagnoses it would give a higher confidence level still.

However, if you successfully aggregate the accumulated knowledge of multiple experts into an AI system, it would be pretty obvious very quickly that the AI had pretty much taken over.
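That triage-plus-agreement idea is straightforward to express. A minimal sketch, assuming hypothetical diagnostic models that each return a label and a confidence score (the names, thresholds and toy models are invented for illustration):

    from collections import Counter

    def triage(case, models, confidence_floor=0.99):
        """Accept the AI answer only when all models agree with high confidence;
        otherwise hand the case to a human."""
        results = [m(case) for m in models]
        votes = Counter(label for label, _ in results)
        top_label, top_votes = votes.most_common(1)[0]
        lowest_conf = min(conf for label, conf in results if label == top_label)

        if top_votes == len(models) and lowest_conf >= confidence_floor:
            return ("auto", top_label)           # unanimous and confident
        return ("refer_to_human", results)       # edge case: disagreement or doubt

    # Toy stand-ins for three independently trained diagnostic systems.
    model_a = lambda case: ("disease_x", 0.997)
    model_b = lambda case: ("disease_x", 0.995)
    model_c = lambda case: ("disease_x", 0.992)

    print(triage({"scan": "..."}, [model_a, model_b, model_c]))

The confidence floor and the unanimity requirement are the dials that decide how much work is left for the humans.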

Quote

Not necessarily - the AI could do all the grunt work and give the results where, say, there was 100% confidence, leaving edge cases to be given further human consideration. You could have a medical system where you had multiple AI systems, and if they all came out with correlating diagnoses it would give a higher confidence level still.

However, if you successfully aggregate the accumulated knowledge of multiple experts into an AI system, it would be pretty obvious very quickly that the AI had pretty much taken over.

But there is this basic problem: if you want a human to check the AI's homework for errors before deploying its output, then you can never trust any output that cannot be checked by a human - in which case, why not just get the human to do the job in the first place?

It would be like building a self-driving car but never engaging its autonomous mode because you didn't trust it - you might as well not have built a self-driving car. In fact the amazing premise behind all the billions of research money being spent developing self-driving cars is that at some point humans will literally trust an AI system with their lives and the lives of their families.

So the notion that AIs may supplant human experts in areas like medical diagnosis is not as far-fetched as it might seem - it would, after all, be a bit inconsistent to allow yourself to be driven to the hospital by an AI-controlled car but on arrival demand a human doctor because you don't trust AIs.

Trust is not so much a dial as a toggle - we will either trust a given AI system or we won't; it's a binary proposition. But any professional should be wary if that toggle is flipped in their particular field, because ultimately trust is their product; it's the essence of what they are selling.

I recently watched a debate in which a group of statistical analysts were trying to decide if AI would steal their jobs in the future, and they made a fairly strong case - to each other at least - that this could never happen due to various complexities inherent in their daily tasks. But I was struck by the fact that they kind of missed the bigger picture, which was that what they are selling to their clients is not these procedural complexities but something more visceral and more abstract - their real 'product' was the degree of confidence their clients had in their analysis of that client's data.

The threat to them from AI was not that it might replicate their procedures but that it might colonise this position of trust in the minds of their clients. If their clients came to believe that the AI's ability to interpret their data was superior to that of human analysts, then those clients would switch to AI as a result.

So the real threat to many professionals from artificial intelligence is that the corona of superiority surrounding the technology might become so bright that, even if that technology were not superior in its performance to themselves, they could still be replaced by clients who have been sold on the idea that AI is the smarter, more modern solution.

16 hours ago, wonderpup said:

True up to a point - if AI systems are 'trained' there is a learning period where humans will correct errors and override decisions - but the whole point of building an AI is to arrive at the point where this is no longer the case.

If you assume that AI will soon achieve that purpose, then people who point out that it hasn't yet, and might never, will seem wrong to you. But why assume that?

You might as well say the whole point of progressive income taxation is to achieve equality, so I assume we will one day arrive at equality. Once there is equality we will have another set of problems. Isn't it puzzling that nobody worries about the problems faced by an egalitarian society?


Would AI encourage things like better lending practices?

Computers are not human in the normal sense, so they don't worry about the size of their annual bonus and might decide not to issue liar loans.

A system based on AI might not actually produce the results that a company's management want, since it might decide to take a far longer view of what is appropriate lending behaviour than the humans involved, who probably have entirely selfish and short-term goals. In that respect the machines might not actually do their masters' bidding.

Quote

If you assume that AI will soon achieve that purpose, then people who point out that it hasn't yet, and might never, will seem wrong to you. But why assume that?

All debates depend on certain assumptions and you are correct to point out that AI might turn out to be overhyped and never actually represent a threat to jobs in the way I describe.

But the point I was making is that there is a logical flaw in the claims made by AI developers that their technology - assuming it works - will not be a threat to incumbent professionals in the fields where it may be deployed.

It's simply incoherent to insist both that AI will offer unique problem-solving capabilities and that the validity of its solutions will remain at all times totally transparent to human judgement. The reason for this should be obvious: if our AI systems are to add any value they must deliver insights that a human could not deliver - but if these insights are such that no human could have calculated them, then it's logically impossible for any human to fully evaluate them either; they will be to some degree 'black box' outcomes.

So a point will inevitably be reached where there is no option but to trust that the AI knows what it is doing, even though we cannot fully understand how it has arrived at its position.

And this position of trust is the space currently occupied by the human expert.

 

Quote

Would AI encourage things like better lending practices?

Computers are not human in the normal sense, so they don't worry about the size of their annual bonus and might decide not to issue liar loans.

A system based on AI might not actually produce the results that a company's management want, since it might decide to take a far longer view of what is appropriate lending behaviour than the humans involved, who probably have entirely selfish and short-term goals. In that respect the machines might not actually do their masters' bidding.

It's a subtle question, because while the official policy of the lenders might be responsible lending, those bosses whose bonus depended on the performance of their sales teams could choose to turn a blind eye to any departure from that official policy, secure in the knowledge that should it turn out badly it would be their underlings who took the heat and not themselves. This is what seems to have happened in the run-up to the 2008 crash - and it was amazing how many CEOs afterwards claimed to have no knowledge of what their subordinates were doing to generate the vast profits upon which their own huge incomes were based.

If the lending decisions were made by AI systems and the official lending criteria were built into those systems, this strategy of selective blindness to the shady practices of subordinates would not be available to management - so automating these processes would limit the ability of managers to game the system for their own short-term gain at the expense of everyone else.
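'Built into those systems' can be quite literal - the official criteria become explicit, auditable checks that sit in front of whatever score a model produces, and they apply to every application identically. A minimal sketch, with invented thresholds rather than any real lender's policy:

    # Hypothetical hard lending criteria, applied before any model score is considered.
    MAX_LOAN_TO_INCOME = 4.5
    MAX_LOAN_TO_VALUE = 0.90
    MIN_VERIFIED_INCOME = 12_000

    def passes_policy(income: float, loan: float, property_value: float, income_verified: bool) -> bool:
        """Every application goes through the same checks; there is no
        'blind eye' code path for a keen sales team."""
        if not income_verified or income < MIN_VERIFIED_INCOME:
            return False                              # no self-certified 'liar loans'
        if loan > MAX_LOAN_TO_INCOME * income:
            return False
        if loan > MAX_LOAN_TO_VALUE * property_value:
            return False
        return True

    # A model can still rank the applications that pass, but it cannot waive the policy.
    print(passes_policy(income=40_000, loan=170_000, property_value=250_000, income_verified=True))

The design point is that the policy lives in one reviewable place, rather than in thousands of individual judgement calls.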

There is also a wider point here, I think: if AI systems ever do achieve the levels of trust required to allow them to drive cars or diagnose illness or oversee complex financial or legal transactions, then a point could be reached where human involvement comes to be viewed as a potential point of failure. We might get a cultural inversion in which the status of AI and human expertise is reversed, leading to a scenario in which the AI solution becomes the preferred solution purely on the basis that it is seen as more robust and free from human error.


I haven't followed this thread because I am not smart enough. There, I've said it. 

It seems at top level though that people are saying: if the super-AI gives a verdict, how can humans confirm it is the correct verdict? And that is a problem I have a better chance of understanding. What if a brain surgeon asks the AI for advice and the AI says drill a hole up the patient's nose? The brain surgeon thinks that is stupid, but the AI has never been wrong.

Maybe we are looking at the wrong problem; we have made it "brain surgeon v. AI - who wins?".  Why not have an additional AI whose function is not brain surgery but instead solely to confirm the correct functioning of other AIs?

1 hour ago, wonderpup said:

It's a subtle question, because while the official policy of the lenders might be responsible lending, those bosses whose bonus depended on the performance of their sales teams could choose to turn a blind eye to any departure from that official policy, secure in the knowledge that should it turn out badly it would be their underlings who took the heat and not themselves. This is what seems to have happened in the run-up to the 2008 crash - and it was amazing how many CEOs afterwards claimed to have no knowledge of what their subordinates were doing to generate the vast profits upon which their own huge incomes were based.

If the lending decisions were made by AI systems and the official lending criteria were built into those systems, this strategy of selective blindness to the shady practices of subordinates would not be available to management - so automating these processes would limit the ability of managers to game the system for their own short-term gain at the expense of everyone else.

There is also a wider point here, I think: if AI systems ever do achieve the levels of trust required to allow them to drive cars or diagnose illness or oversee complex financial or legal transactions, then a point could be reached where human involvement comes to be viewed as a potential point of failure. We might get a cultural inversion in which the status of AI and human expertise is reversed, leading to a scenario in which the AI solution becomes the preferred solution purely on the basis that it is seen as more robust and free from human error.

It is an interesting subject, which is usually framed along the lines that better machines will make more human workers redundant, which is to a certain extent what has happened historically. I think it actually poses some wider questions, the answers to which may not all be socially negative. For example, in the run-up to the banking crisis of 2008 human actors in the financial system made some spectacularly bad financial decisions which had hugely damaging repercussions both for the institutions for which they worked and for society at large. These were nearly all predicated on maximising the short-term monetary gains for themselves as individuals. While the fallout from that event briefly tempered some of this behaviour, there are signs the same driving forces are emerging again.

It is quite possible that an AI system in its early stages might replicate some of the initial errors in lending to parties that would fail to pay back loans, but one might argue that over time, if it was truly intelligent, it might learn from its mistakes and not repeat them. This could particularly apply if the goals set for a system were, for example, to ensure the survival of a business and to maximise profits over a timescale of a decade or more, rather than the short-term horizons of many human agents. Such a system might lead to better results for society as a whole than allowing humans to continue to make poor short-term decisions. As you rightly state, a machine-based learning system by definition must be able to act independently, because allowing human intervention to override it essentially compromises the claim that it is genuinely intelligent. Of course, such machines would not only render low- or middle-level human decision-makers redundant. They could also put many of the higher-level managers of society out of a job too. It will be interesting to see whether that possibility tempers some of the enthusiasm for AI we are seeing at the moment. Personally I think true AI would pose a huge threat to vested interests around the globe, which is why we probably won't be seeing it anytime soon.

2 hours ago, Funn3r said:

I haven't followed this thread because I am not smart enough. There, I've said it. 

It seems at top level though that people are saying: if the super-AI gives a verdict, how can humans confirm it is the correct verdict? And that is a problem I have a better chance of understanding. What if a brain surgeon asks the AI for advice and the AI says drill a hole up the patient's nose? The brain surgeon thinks that is stupid, but the AI has never been wrong.

Maybe we are looking at the wrong problem; we have made it "brain surgeon v. AI - who wins?".  Why not have an additional AI whose function is not brain surgery but instead solely to confirm the correct functioning of other AIs?

Machines monitoring other machines already happens, so that would not be a conceptual step change. The problem is which machine you would trust. As someone who works in IT I know that monitoring software currently generates lots of false positives which have to be weeded out by human actors. If that were the case here, how would one know which of the machines was right, and who would be the arbiter?

Moreover, given that smart people have been responsible for some of the most disastrous decisions in human history, would smarter machines be any better? They might make fewer low-level errors but then make one seemingly intelligent decision that is absolutely catastrophic.

As we don't really understand human intelligence, it is something of a gamble to punt our future on artificial intelligences about which we know even less. Obviously in their early stages learning and expert systems might be comprehensible to us, but ultimately they might reach a stage where they would be beyond our understanding. Then, as Wonderpup has pointed out, it would all be a matter of trust.

Quote

Personally I think true AI would pose a huge threat to vested interests around the globe, which is why we probably won't be seeing it anytime soon.

That must be true - but it's also true that another set of vested interests stands to gain massively from AI: outfits like Google, for example.

But the truth is that AI research cannot be stopped because the economic and military advantages that would accrue to the nation that succeeded in creating genuinely smart computers would be potentially huge and even dangerous to their competitors. So like the development of nuclear weapons the research will continue despite any dangers or opposition from vested interests.

Quote

Maybe we are looking at the wrong problem; we have made it "brain surgeon v. AI - who wins?".  Why not have an additional AI whose function is not brain surgery but instead solely to confirm the correct functioning of other AIs?

The problem then is how do you trust the second AI?

What I think most current debates on AI kind of gloss over is that an AI we could fully understand would be pointless, because it could only do the things we could do ourselves. So the only value that AI can add to society would be an ability to process data in ways that we can't fully understand, in order to arrive at insights and conclusions we could not have arrived at on our own.

So, by definition, a viable AI must be an AI that is not fully under our control, in the sense that its 'intelligence' will be opaque to us in some important ways. I don't mean by this that AIs will become sentient and take over the world, but it does mean that at some critical point we will be forced to vest our AIs with the same kind of authority we vest in human experts, whose judgements and decisions we are forced to take on trust because most of us have no way to know if they are right or not.

On 28/02/2017 at 7:28 PM, wonderpup said:

In some near-future scenario a descendant of IBM's Watson system has been trained on vast amounts of data, images and medical records covering the identification and treatment outcomes of a given disease - far more information than a human specialist in this field could reasonably expect to have mastered or keep track of.

If, for example, for cancer, the only treatments added to the data by humans are slash / poison / burn (operation / chemo / radio), then anything else would be classed as a failure regardless. GcMaf = fail. Ketogenic diet = fail. That the MHRA is 100% funded by the pharmaceutical industry won't be entered into the computer either.


A question here, as I know nothing about AI or computers. If AI took over from humans, would they (it?) only be able to do what they are programmed to do, or would they also be able to push back the boundaries of knowledge?


...push back the boundaries of knowledge. Not our knowledge, as we won't be able to understand it. Imagine Einstein's genius to an advanced AI. It would seem like a monkey picking its bum with a stick and smearing it on the walls... in slow motion.

 

 

 

5 minutes ago, XswampyX said:

...push back the boundaries of knowledge. Not our knowledge, as we won't be able to understand it. Imagine Einstein's genius to an advanced AI. It would seem like a monkey picking its bum with a stick and smearing it on the walls... in slow motion.

 

 

 

It kind of draws us into discussions of what knowledge is.  These are genuine questions, as I know nothing about AI.  So, would AI be able to solve things for us? Perhaps such as medical breakthroughs, curing heart disease, diabetes and suchlike?

2 hours ago, wonderpup said:

That must be true - but it's also true that another set of vested interests stands to gain massively from AI: outfits like Google, for example.

But the truth is that AI research cannot be stopped because the economic and military advantages that would accrue to the nation that succeeded in creating genuinely smart computers would be potentially huge and even dangerous to their competitors. So like the development of nuclear weapons the research will continue despite any dangers or opposition from vested interests.

The problem then is how do you trust the second AI?

What I think most current debates on AI kind of gloss over is that an AI we could fully understand would be pointless, because it could only do the things we could do ourselves. So the only value that AI can add to society would be an ability to process data in ways that we can't fully understand, in order to arrive at insights and conclusions we could not have arrived at on our own.

So, by definition, a viable AI must be an AI that is not fully under our control, in the sense that its 'intelligence' will be opaque to us in some important ways. I don't mean by this that AIs will become sentient and take over the world, but it does mean that at some critical point we will be forced to vest our AIs with the same kind of authority we vest in human experts, whose judgements and decisions we are forced to take on trust because most of us have no way to know if they are right or not.

Again, the question would also be how fallible that higher intelligence or expert system is.

It might be smart enough to solve problems that are currently beyond the capacity of humans, but that is no guarantee that it would not make misjudgments of its own.  Surrendering responsibility for decision-making that impacts our lives to something that is more intelligent than us but not infallible could potentially have dire consequences. From my experience in IT, poor programmers make dumb errors that are normally spotted long before a system gets into the wild. It is the smart technicians crafting clever algorithms that often create the worst disasters. To a certain extent this was what happened during the financial crisis, when some of the complex mechanisms used by institutions to quantify risk were found to be completely wanting. In addition, like humans, learning systems might simply keep replicating dangerous behaviours over extended periods, because until disaster strikes they are not going to recognise that their decisions ultimately may have a negative effect which wipes out all the positive 'learning' experience that happened earlier.


 

On 2/28/2017 at 7:28 PM, wonderpup said:

...The point being that when it comes to conclusions derived from an AI analysis, the human expert is no more qualified than a non-expert to evaluate the process by which that AI reached its conclusion.

The long-term outcome, then, of a successful implementation of AI in a given field will be that those AI systems will not merely complement the incumbent human experts but eventually replace them.

Bring it on...

6 minutes ago, XswampyX said:

Not our knowledge, as we won't be able to understand it. Imagine Einstein's genius to an advanced AI. It would seem like a monkey picking its bum with a stick and smearing it on the walls... in slow motion.

Exactly.

Quote

 

They were so intelligent that no human was capable of understanding just how smart they were (and the machines themselves were incapable of describing it to such a limited form of life).

-Consider Phlebas

The Mind had an image to illustrate its information capacity. It liked to imagine the contents of its memory store written out on cards; little slips of paper with tiny writing on them, big enough for a human to read. [.....] In base 10 that number would be a 1 followed by twenty-seven zeros, and even that vast figure was only a fraction of the capacity of the Mind. To match it you would need a thousand such worlds; systems of them, a clusterful of information-packed globes . . . and that vast capacity was physically contained within a space smaller than a single one of those tiny rooms, inside the Mind.

-Consider Phlebas

 

Quote

 

Later, Li had us all play another game; guess the generalization. We each had to think of one word to describe humanity; Man, the species. Some people thought it was silly, just on principle, but the majority joined in. There were suggestions like 'precocious', 'doomed', 'murderous', 'inhuman', and 'frightening'. Most of us who'd been on-planet must have been falling under the spell of humanity's own propaganda, because we tended to come up with words like 'inquisitive', 'ambitious', 'aggressive', or 'quick'. Li's own suggestion to describe humanity was 'MINE!', but then somebody thought to ask the ship. It complained about being restricted to one word, then pretended to think for a long time, and finally came up with 'gullible'.

'Gullible?' I said.

'Yeah,' said the remote drone. 'Gullible ... and bigoted.'

'That's two words,' Li told it.

'I'm a f**king starship; I'm allowed to cheat.'

-The State of the Art

 

 

17 minutes ago, One-percent said:

A question here, as I know nothing about AI or computers. If AI took over from humans, would they (it?) only be able to do what they are programmed to do, or would they also be able to push back the boundaries of knowledge?

:)

 

