House Price Crash Forum

The Employment Implications of AI for the 'Predictive' Professions


Recommended Posts

0
HOLA441

The financial crash of 2008 was seen as a massive failure on the part of those professionals who failed to 'see it coming'. The reputation of economists took a huge hit because a big part of their job was supposed to be making reasonably accurate predictions about the economic future- yet almost all of them failed to predict the most catastrophic economic event in nearly a hundred years.

But strange as this failure of prediction was, what was even stranger were the consequences for economists in terms of their jobs- because there were no consequences. No one was sacked, no mass cull of failed economists took place, nor was the teaching of economic theory much impacted by the complete failure of that theory to predict the meltdown of the global economy.

In short, following the complete failure of economics as a tool to accurately model the behaviour of the economy- its raison d'être- the profession carried on 'business as usual' as if nothing had happened.

How can this be explained? Imagine a similar global failure in aircraft design or architecture in which failed theories led to catastrophic outcomes- would these professions have been allowed to ignore their failure and carry on as before? Not likely- if planes started falling out of the sky or buildings started collapsing, the result would be a hunt to identify the culprits and make sure they were sacked as soon as possible. In the case of economists this simply did not happen, despite the devastating consequences their collective failure had caused.

The reason- in my view- that economics seems so immune from its own failures is that the role of the economist is not in reality a practical one but a ceremonial one- economists occupy the same space in the corporate cultures of today as the witch doctor occupied in the tribal cultures of the past, when he cast chicken bones on the ground and examined the resulting pattern for clues as to the future. Of course the witch doctor could not really see the future and most of his predictions turned out to be wrong- but that did not diminish his status in the tribe, because the value he provided was of the emotional and psychological kind, not the practical kind.

Employing an economist is not really about predicting the economic future- because this is not something they do that well. What it's really about is offering emotional and psychological comfort to your clients or superiors, who- knowing that you have employed an 'expert'- can sleep a little more soundly in their beds.

So up until now it did not matter that much if your ability to deliver accurate predictions was poor- you still added value as a reassuring source of 'expertise', performing your role as the reader of runes and observer of portents. But now there is a new kid on the block called artificial intelligence- a shiny new technology that- among other things- claims to be able to examine vast amounts of data and draw from that data valuable insights regarding future outcomes. Even if these powers of prediction are just as flawed as your own, they still represent a threat, because for the first time in history a new source of 'expertise' has arisen- a competing mythology in which machine intelligence is pitted against the human 'expert' as a source of reassurance and guide to future action.

It seems to me that there is an entire class of professions that are based on the idea that a deep knowledge of present configurations in a given domain can be used to make actionable predictions about the future of that domain- some people are paid a lot of money based on this assertion, often despite evidence that their actual ability to make such predictions is no better than random chance. Entire office blocks in the City of London are filled with people whose job is to predict the future of various financial assets or instruments.

But what is 'big data' driven AI if not an attempt to build machines that predict the future? So I predict that the prediction business in all its forms is about to get clobbered by a new oracle- one that offers its own competing mythology as a source of valid insights concerning the future.


1
HOLA442

I disagree that if a building collapses or an aircraft falls out of the sky step 1 is to sack the person involved. All professions evolve, and in many cases it's previous building collapses and aircraft crashes that have shaped today's safer designs. Yet occasionally a building still collapses (or, in the case of recent events, burns down) and there are still lessons to learn. No, many economists didn't see the credit crunch coming - if they had, of course, it wouldn't have happened. But all of them will learn from it.

In addition, the global economy is far more complex than a building, not least because it is so heavily dependent on the actions of people. It is far harder to predict how 7 billion people will react to changes in sentiment than how inert materials will respond to changes in temperature or force.

Does AI have a role?  Well I'd expect so, it certainly would be a valuable tool in the economist's armoury. 


2
HOLA443
3
HOLA444
Quote

I disagree that if a building collapses or an aircraft falls out of the sky step 1 is to sack the person involved. All professions evolve, and in many cases it's previous building collapses and aircraft crashes that have shaped today's safer designs. Yet occasionally a building still collapses (or, in the case of recent events, burns down) and there are still lessons to learn. No, many economists didn't see the credit crunch coming - if they had, of course, it wouldn't have happened. But all of them will learn from it.

In addition, the global economy is far more complex than a building, not least because it is so heavily dependent on the actions of people. It is far harder to predict how 7 billion people will react to changes in sentiment than how inert materials will respond to changes in temperature or force.

Does AI have a role?  Well I'd expect so, it certainly would be a valuable tool in the economist's armoury. 

I think it's fair to say, however, that if there were a global failure in aviation or architecture on the scale of the financial crisis there would have been consequences for the people involved- yet in economics we see no such outcome- the same theories are being taught by the same people at the same universities- and there has been little fallout of any kind for those highly paid experts who failed to do the very thing they were being paid to do- which was to accurately model the behaviour of the economy. What happened in 2008 was not only unforeseen, it was theoretically impossible according to the experts involved.

But my real point here is that it did not matter that economics failed on a technical level, because that is not the reason why economists exist- their true value is as providers of the illusion of certainty in an uncertain world. Crudely put, we employ economists for the same reason some people wear lucky charms- in an attempt to impose some sense of predictability on an unpredictable universe.

So the apparent immunity of the profession to its own technical failure is understandable when the economist is seen not as a scientist but as a provider of reassurance and certainty in the face of the unknown that is the future. We need economists for the same reason primitive tribes needed witch doctors with chicken bones- to create the illusion that the future can be foreseen- the fact that this is impossible does not matter; what matters is the illusion.

The point about AI is that it represents the first serious competitor to the human practitioner in the realm of predictions. Until now only human beings were deemed capable of taking existing data and extrapolating from it a model of future outcomes- this is no longer the case. AI is all about predicting the future- even in the case of the self-driving car we are dealing with systems that must engage in short-term predictions as to the behaviour of the environment and other road users if they are to be safe- merely relying on their superhuman reaction times will not be enough- we will need our robot cars to do what we do when driving, which is to peer into the next ten or twenty seconds in order to try and anticipate what will happen next.

So the real threat from AI to anyone whose job involves modelling the future is that the idea will take hold that machines might do this better than they can- at which point their value as sources of insight will be undermined.

The basic takeaway here, I think, is that in reality predicting the future is impossible- so what is at stake here is the social role of oracle- and whether that role will be occupied by a human expert or by a machine.


4
HOLA445
7 hours ago, stormymonday_2011 said:

Bad news for astrologers ?

Mars moving into Uranus indicates that Sagittarians will be disappointed by the accuracy of their horoscopes.

The role of economists is ceremonial. But what if all of our jobs are ceremonial?  

What if none of our jobs are really necessary, but we are all going through the motions because of our belief in 'the economy'?


5
HOLA446
Quote

Bad news for astrologers ?

It would be a battle of two mythologies- astrology as pseudoscience versus AI as oracle- the employment prospects of the human astrologer would hinge on which of these mythologies proved the more resilient.

One of the ironies here is that in stressing the scientific rigour of their methodologies the astrologers might have cleared the path for the AI, whose claims to scientific rigour might be seen as the stronger.


6
HOLA447

Fascinating, Wonderpup, as ever.

So yes, economists deal in abstract concepts - money, profit, loss - that nonetheless have a visceral, diurnal tie to our lives as we feel that we need to earn and spend like the carnivores pawing the grasslands and thus it's not that abstract, though of course it is. We need them to persuade us it makes some kind of sense. And it's all a product of human imagination.

And yes, people want to believe. That's why we have magicians, and priests - as you say - who pander and exploit said human need to believe. They conjure up stories and promises that suit the changing seasons, the eclipses, the strange acts of nature and attempt to transform them into something we can understand as relevant to ourselves. They do all this through imagination. And building machines - sound chambers in statues, levers in mannequin arms - ancient trickery in robots, devices, all round sneakery.

And AI. Why shouldn't it understand everything? It's unconstrained, not like us, walled in by a birth-canal limited cranium and bodily wants. It certainly hasn't lived up to expectations yet, the big finger stock freaks have almost deffo been algorithm screw ups - but you already know I'm more sceptical than you on human-level machine intelligence being round the corner. Still, AI could take the magic and make it real. And it's a product of our own human imagination - our ultimate, greatest achievement, to be able to imagine something more impressive than ourselves.

AI is the addendum and human end-game to The Golden Bough.


7
HOLA448
8
HOLA449
18 hours ago, wonderpup said:

AI is all about predicting the future- even in the case of the self-driving car

Er, this is a bit weak- to say that "AI is all about predicting the future" means nothing, since pretty much every constructive activity involves the passage of time. To answer your wider question, there is no evidence so far that AI will do a better job of prediction in complex fields such as economics.


9
HOLA4410
Quote

Er this is a bit weak, to say that "AI is all about predicting the future" means nothing since pretty much every constructive activity involves the passage of time.


It's not so much the passage of time as the evolution of data. To be fair to the AI industry, they don't claim to be building literal oracles that can see into the future- what they claim is that if you give them enough data from a given domain they will deliver a prediction as to the possible evolution of that dataset. It's about seeing patterns in data that can be extrapolated beyond the current data set to generate new data that can then be used to make more informed decisions.

This is the same claim made by economists and others whose work involves making predictions based on current data.
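As an aside, that claim can be made concrete with a toy sketch- this is my own invented example in Python, not anything a real forecaster would ship- fit a trend to past observations and push it forward:

```python
# A minimal sketch of "extrapolating a pattern beyond the current data set":
# fit a straight line to past observations and project it forward. The series
# and the linear model are toy assumptions, not a real forecasting method.

def linear_fit(ys):
    """Least-squares slope and intercept of ys against x = 0, 1, 2, ..."""
    n = len(ys)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def extrapolate(ys, steps):
    """Project the fitted trend `steps` periods beyond the data."""
    slope, intercept = linear_fit(ys)
    return [slope * (len(ys) + k) + intercept for k in range(steps)]

history = [100, 102, 104, 106, 108]    # a deliberately well-behaved toy series
print(extrapolate(history, 3))         # [110.0, 112.0, 114.0]
```

The obvious catch is that the projection assumes the old pattern persists- which is precisely the assumption that failed in 2008.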

Quote

To answer your wider question there is no evidence so far that AI will do a better job of prediction in complex fields such as economics.

There's no evidence that it's possible to reliably predict economic outcomes at all- this is the lesson of 2008, when a system the 'experts' claimed was functioning normally suddenly collapsed in total contradiction of their most basic understandings of how that system worked.

I'm not arguing that either AI or human beings can actually predict the future- I'm arguing that the real social function of economists is to provide the illusion that the future can be managed and to some extent controlled, and that artificial intelligence now represents a competing source of such comforting illusions.

Given that it's impossible to really see the future, we are dealing here in the realms of mythology and belief rather than reality- and should the idea take root that machines are better at extrapolating patterns from data than humans, those humans might find themselves losing status and authority as the sole arbiters of such extrapolations.

AI represents the first challenge to the status of the human 'expert' in the entire history of our species- and this challenge is especially acute for those experts whose status depends on their ability as pattern recognition specialists- which turns out to be a surprisingly wide demographic among the population of human expert practitioners.


10
HOLA4411

We can all read the future with some accuracy. Reclining here now, sipping my morning cup of nectar from the tropics, I'm pretty confident on the next thirty seconds. I'd say I'm strong on most of today. Stepping bravely out into the timeline, there are plans and expectations which I can safely say are likely to happen. We're all constantly planning and speculating, with a high degree of success, indeed that must have been one of the big evolutionary drivers for intelligence - planning where to store food, how the seasons might change, what hazards and threats might emerge etc. So I think it's fair to say that AI does look into the future and could perhaps use the same, or improved, methods to gauge the likelihood of events to come. In short, once you hit forty you can guess just about everything that might happen in a given future situation, you just don't know which one it will be so you award a likelihood percentage to each one and go for the highest number. And as the events branch off into other events we're soon overwhelmed by the scale of the maths, the Hydra of potential outcomes sprouting from every event channel.

The question for me is, if the machines can do the maths - and I'm talking a whole lot of maths - could they use a similar method of likelihood apportioning to make better, more reliable estimates on future events. I was right about the 30 seconds that I future-scanned at the beginning of this post, within the tiny event sphere of my bedchamber. I was able to anticipate. Could a machine use the same method to look further into the future with greater accuracy - could a machine truly anticipate?
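For what it's worth, the 'Hydra of potential outcomes' is easy to sketch- here's a toy Python example (all the events and probabilities are invented) showing how the paths multiply with every added event:

```python
# Toy sketch of "likelihood apportioning" over branching futures: chain three
# events, each with invented outcome probabilities, and enumerate every path.
from itertools import product

events = [
    {"sunny": 0.6, "rain": 0.4},
    {"traffic": 0.7, "clear": 0.3},
    {"on_time": 0.55, "late": 0.45},
]

paths = []
for combo in product(*(e.items() for e in events)):
    names = tuple(name for name, _ in combo)
    prob = 1.0
    for _, p in combo:
        prob *= p                       # chaining events multiplies probabilities
    paths.append((names, prob))

print(len(paths))                       # 2 * 2 * 2 = 8 distinct futures already
best = max(paths, key=lambda path: path[1])
print(best)                             # the single most likely chain of events
```

Add a few more events with a few more outcomes each and the path count explodes exponentially- which is the scale-of-the-maths problem in a nutshell.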


11
HOLA4412

I think the best comparator to an economist when it comes to prediction is a meteorologist. Meteorologists are supported by models, but their predictive powers are really quite limited. They can explain how the weather works and discuss patterns, but cannot predict the weather more than a few days out. Models and AI might be useful tools for them, but we are a long way from capturing the randomness of the weather.
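The standard explanation for that few-days limit is sensitivity to initial conditions- and the effect can be demonstrated with a toy chaotic system. The logistic map below is a textbook stand-in, not an actual weather model:

```python
# The logistic map as a stand-in for chaotic dynamics (an illustration, not a
# weather model): two starting states differing by one part in a million
# diverge until forecasting is worthless.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate x -> r * x * (1 - x), the classic chaotic toy system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000, 30)
b = logistic_trajectory(0.400001, 30)   # an imperceptibly different start

for t in (0, 10, 20, 30):
    print(t, abs(a[t] - b[t]))
# The gap starts at about 1e-6 and grows until it is of the same order as the
# values themselves- near-term prediction is fine, long-range is hopeless.
```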

Economics is ultimately about modelling human behaviour and should be seen as a branch of psychology as much as anything else. Humans are relatively unpredictable and the way we think keeps changing.

As a result, AI may contribute to the analysis that economists undertake, but it is not a job that machines are likely to be able to outperform humans at. And that is a pretty low starting point. 


12
HOLA4413
32 minutes ago, thehowler said:

We can all read the future with some accuracy. Reclining here now, sipping my morning cup of nectar from the tropics, I'm pretty confident on the next thirty seconds. I'd say I'm strong on most of today. Stepping bravely out into the timeline, there are plans and expectations which I can safely say are likely to happen. We're all constantly planning and speculating, with a high degree of success, indeed that must have been one of the big evolutionary drivers for intelligence - planning where to store food, how the seasons might change, what hazards and threats might emerge etc. So I think it's fair to say that AI does look into the future and could perhaps use the same, or improved, methods to gauge the likelihood of events to come. In short, once you hit forty you can guess just about everything that might happen in a given future situation, you just don't know which one it will be so you award a likelihood percentage to each one and go for the highest number. And as the events branch off into other events we're soon overwhelmed by the scale of the maths, the Hydra of potential outcomes sprouting from every event channel.

The question for me is, if the machines can do the maths - and I'm talking a whole lot of maths - could they use a similar method of likelihood apportioning to make better, more reliable estimates on future events. I was right about the 30 seconds that I future-scanned at the beginning of this post, within the tiny event sphere of my bedchamber. I was able to anticipate. Could a machine use the same method to look further into the future with greater accuracy - could a machine truly anticipate?

A machine might be able to allocate probabilities, but would that really help?

Go back a year to the referendum - how would a machine have predicted that? Would an apportionment of probabilities have been any better than a bookmaker's odds? I do not think so. 


13
HOLA4414

I'm guessing bookmakers calculate their odds based on the small sample they have of people willing to make a bet. I imagine they do some chartist and real-world analysis too. But my question was whether you could take one human approach to predicting the future- assessing likelihood- and then augment it via an AI algorithm with enormous power. This would have to be as close to ubiquity as we could get, so access to every terminal, keyboard, camera, microphone on the planet, as a hypothetical. With enough data, and enough processing power, the machine might be able to attain a higher degree of future prediction than we have.

I'm thinking of a machine that could track the progress of every individual dollar. That might be a formidable tool to the economist.


14
HOLA4415
Quote

Economics is ultimately about modelling human behaviour and should be seen as a branch of psychology as much as anything else. Humans are relatively unpredictable and the way we think keeps changing.

As a result, AI may contribute to the analysis that economists undertake, but it is not a job that machines are likely to be able to outperform humans at. And that is a pretty low starting point. 

I don't disagree with the idea that economists are in the psychology business, but consider the fact that they have done everything in their power to deny this, and have inserted into their professional lexicon an array of dubious mathematical models in an attempt to bolster their claims to be a 'true' natural science rather than a social science.

The irony being that, having styled themselves as operating purely on the basis of hard maths and empirical data, they have exposed themselves to the idea that an artificially intelligent machine might credibly do their job better than they can.

This substitution of human experts with artificial experts is not theoretical- it's happening to some degree even as we speak:

Quote

 

Insurance firm Fukoku Mutual Life Insurance is making 34 employees redundant and replacing them with IBM’s Watson Explorer AI

A future in which human workers are replaced by machines is about to become a reality at an insurance firm in Japan, where more than 30 employees are being laid off and replaced with an artificial intelligence system that can calculate payouts to policyholders.

Fukoku Mutual Life Insurance believes it will increase productivity by 30% and see a return on its investment in less than two years. The firm said it would save about 140m yen (£1m) a year after the 200m yen (£1.4m) AI system is installed this month. Maintaining it will cost about 15m yen (£100k) a year.

The move is unlikely to be welcomed, however, by 34 employees who will be made redundant by the end of March.

The system is based on IBM’s Watson Explorer, which, according to the tech firm, possesses “cognitive technology that can think like a human”, enabling it to “analyse and interpret all of your data, including unstructured text, images, audio and video”.

The technology will be able to read tens of thousands of medical certificates and factor in the length of hospital stays, medical histories and any surgical procedures before calculating payouts, according to the Mainichi Shimbun.

While the use of AI will drastically reduce the time needed to calculate Fukoku Mutual’s payouts – which reportedly totalled 132,000 during the current financial year – the sums will not be paid until they have been approved by a member of staff, the newspaper said.

 

https://www.theguardian.com/technology/2017/jan/05/japanese-company-replaces-office-workers-artificial-intelligence-ai-fukoku-mutual-life-insurance

What's interesting here is that IBM's claim is not simply that its system can crunch the data much faster than the humans involved- they are claiming that it can "analyse and interpret" that data too- that it can 'think like a human'- which is bulls*it- no current AI can think remotely like a human.

So they are sacking 34 skilled people on the basis of a tacky PR claim that bears no close examination. How does this work?

I think the reason this scenario has come about is that the output of those human experts was always to some degree subjective- the business of analysing insurance claims has never been a simple matter of adding fact A to fact B and arriving at fact C- if that were the case, these jobs would have been automated years ago.

The truly perverse aspect of this story is that IBM have pulled off the trick of selling their system not on its number-crunching ability but on its ability to replicate the subjective- that is, 'intuitive'- aspects of the task- the very elements of the task that should have made those human workers safe from automation.

Here's the punchline as I see it- any task that involves subjective expert judgement- where the outcome is in part determined by that judgement- is now potentially exposed to the IBM strategy above. The argument goes like this: if we employ an expert because a problem has variables that cannot be precisely calculated simply by taking the existing data and crunching the numbers, then- by definition- we can never really know if that expert's solution was in fact the best one- we are forced to accept his judgement- that is, after all, why we needed him in the first place.

For example- suppose we hire an expert in marketing to launch our new product, which leads to a given number of sales in the first few months. Can we be sure that if we had chosen a different expert those sales would have been lower? No, we can't- they might even have been higher. There is no way to crunch the sales data in order to find out, because that data itself is partly a derivative of the strategy our expert chose to deploy- in other words, the outcome of this kind of expert-led decision making has an irreducible subjective component that can never be objectively tested against the performance of other experts.

What this means is that, from the point of view of those employing experts, those experts are 'black boxes', whose judgements- compared to those of other experts- are not easily measured by any objective means. So now along comes IBM saying that they too have a 'black box' called Watson- an AI system capable of a superhuman degree of analysis that allows it to exercise 'judgement' when arriving at its conclusions.

And just as in the case of the human expert, the strategies recommended by Watson will also have their irreducible 'subjective' aspects- not because Watson is human, but because- like its human counterparts- it too has a 'black box' dimension that renders its decision making opaque. By their very nature, systems like Watson are not totally transparent even to their operators.

So- imagine that you are tasked with sourcing expertise to solve a given problem- on the one hand you have a human expert who claims to have unique insights into your sort of problem, and on the other you have IBM and its billion-dollar brain, which also claims to have unique insights into your sort of problem- on what do you base your decision? In both cases you are faced with 'black box' processes that happen either in the organic brain of the human or the artificial brain of the machine- both are equally opaque from your point of view- and since you must pick one, you will never know if the other was the better choice.

So how to choose? In the end your choice will amount to an act of faith- do you subscribe to the idea that the human brain is the superior tool, or can IBM convince you that its machine is the better solution? The point is that whichever path you take you will never know that you chose the right one- if you choose the human, the machine might have been the better choice, and vice versa.

In short (he said ironically), the very thing that should make the human expert safe from automation- the subjectivity of his judgement- is opening up a flank that can be exploited by those selling AI solutions to replace him. From the point of view of an end user, the choice between an opaque human expert and an equally opaque AI is not really that clear. In the end it might come down to which meme proves the more persuasive- the meme of the human expert or the meme of the intelligent machine.

So those who imagine that the subjectivity and opacity of their expertise will protect them forever from the encroachment of machines might be mistaken- because the very fact that their judgements are subjective and opaque means that choosing between them and the machine becomes a choice between two competing mythologies- man as expert versus the intelligent machine. In the case of the IBM example above, the machine mythology won the day and the human experts were shown the door.


15
HOLA4416
13 hours ago, wonderpup said:

So they are sacking 34 skilled people on the basis of a tacky PR claim that bears no close examination. How does this work?

The job itself sounds very mundane - mainly just scraping prices and dates from receipts, no big surprise that someone was able to automate it.


16
HOLA4417
14 hours ago, wonderpup said:

So- imagine that you are tasked with sourcing expertise to solve a given problem- on the one hand you have a human expert who claims to have unique insights into your sort of problem, and on the other you have IBM and its billion-dollar brain, which also claims to have unique insights into your sort of problem- on what do you base your decision? In both cases you are faced with 'black box' processes that happen either in the organic brain of the human or the artificial brain of the machine- both are equally opaque from your point of view- and since you must pick one, you will never know if the other was the better choice.

So how to choose? In the end your choice will amount to an act of faith- do you subscribe to the idea that the human brain is the superior tool, or can IBM convince you that its machine is the better solution? The point is that whichever path you take you will never know that you chose the right one- if you choose the human, the machine might have been the better choice, and vice versa.

I'd choose by expecting anything I'm spending money on to have some evidence that it works- in the case of the computer, controlled tests and comparisons. How they work isn't so important there, beyond having a general idea of the capabilities of computers. So far the vast majority of computer "intelligence" is being able to churn through human-provided models rapidly with different variables, and thus provide the best or most likely outcome based on a set of human-provided criteria.
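A hedged sketch of that last point- the toy 'model' and all the numbers below are invented, but they show the division of labour: the human supplies the model, the candidate values and the criterion, and the machine supplies brute force:

```python
# Sketch of "churning through a human-provided model with different variables":
# the model, the candidate values and the success criterion all come from a
# human; the machine just tries every combination. All numbers are invented.
from itertools import product

def profit(price, ad_spend):
    """A human-written toy model: demand falls with price, rises with ads."""
    demand = max(0.0, 100.0 - 8.0 * price + 3.0 * ad_spend)
    return price * demand - 10.0 * ad_spend

prices = [5, 6, 7, 8, 9, 10]            # the values a human thought plausible
ad_spends = [0, 1, 2, 3, 4, 5]

# Exhaustive search: score every combination, keep the highest-scoring one.
best = max(product(prices, ad_spends), key=lambda combo: profit(*combo))
print(best, profit(*best))              # (7, 5) 363.0
```

The machine's answer is only 'best' relative to the model and criteria the human wrote down- which is exactly the limitation described above.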


17
HOLA4418

Watson? More hype and claptrap from the Silicon Valley BS machine.

Quote

It was one of those amazing “we’re living in the future” moments. In an October 2013 press release, IBM declared that MD Anderson, the cancer center that is part of the University of Texas, “is using the IBM Watson cognitive computing system for its mission to eradicate cancer.”

Well, now that future is past. The partnership between IBM and one of the world’s top cancer research institutions is falling apart. The project is on hold, MD Anderson confirms, and has been since late last year. MD Anderson is actively requesting bids from other contractors who might replace IBM in future efforts. And a scathing report from auditors at the University of Texas says the project cost MD Anderson more than $62 million and yet did not meet its goals. The report, however, states: "Results stated herein should not be interpreted as an opinion on the scientific basis or functional capabilities of the system in its current state."

“When it was appropriate to do so, the project was placed on hold,” an MD Anderson spokesperson says. “As a public institution, we decided to go out to the marketplace for competitive bids to see where the industry has progressed.”

The disclosure comes at an uncomfortable moment for IBM. Tomorrow, the company’s chief executive, Ginni Rometty, will make a presentation to a giant health information technology conference detailing the progress Watson has made in healthcare, and announcing the launch of new products for managing medical images and making sure hospitals deliver value for the money, as well as new partnerships with healthcare systems. The end of the MD Anderson collaboration looks bad. Even if the decision is as much a result of MD Anderson's mismanagement or red tape--which it may be--it is still a setback for a field without any big successes.

https://www.forbes.com/sites/matthewherper/2017/02/19/md-anderson-benches-ibm-watson-in-setback-for-artificial-intelligence-in-medicine/#657333303774

 


18
HOLA4419
3 hours ago, hotairmail said:

 

Another Ignominy

Just like the Internet itself. The costs of protecting society from its dangers now exceed the benefits of using it. Time to switch it off, just disconnecting may not be enough.

https://qz.com/1013361/wikileaks-the-cia-can-remotely-hack-into-computers-that-arent-even-connected-to-the-internet/

 

Edited by zugzwang
Quote

I think the way an expert makes a so-called 'subjective' decision (i.e. without hard and fast parameters and rules that can be expressed, and of which the 'expert' may not even be aware- such as using the autonomic nervous system... 'gut' feel) does not mean the outcome cannot be measured. Yes, for each decision there may be the problem of the counterfactual, but if you can observe outcomes- the performance of the decision maker, statistically and over a period- then there may be a method of judging the efficacy of the decision making.

This implies that the outcomes in question will be subject to some kind of overt analysis- but there might be good reason why those involved would not wish to carry it out. What if the counterfactual to a failed prediction was the conclusion that reliable prediction of any sort was in fact impossible? This would be very bad news indeed if your business model depended on the idea that you or your employees could make reliable predictions.
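To make the point about judging a forecaster statistically concrete, here is a minimal sketch (with an entirely invented track record, using a standard one-sided binomial test): the question is how likely a no-skill forecaster is to match the record by luck alone.

```python
from math import comb

def p_value_at_least(hits: int, n: int, p: float = 0.5) -> float:
    """P(X >= hits) for X ~ Binomial(n, p): the chance a no-skill
    forecaster (true hit rate p) does this well or better by luck."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(hits, n + 1))

# Invented track record: 60 correct calls out of 100.
pv = p_value_at_least(60, 100)
print(f"p-value = {pv:.3f}")
# A small p-value is evidence (not proof) of genuine forecasting skill;
# a large one means the record is indistinguishable from coin-flipping.
```

Nothing here resolves the counterfactual problem the quote raises- it only shows that, given enough observed calls, chance and skill can at least be told apart statistically.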

For example, consider the odd fact that the City of London is full of people who are only employees because they can't actually do the thing they are employed to do- if they could do what they are being paid to do, they would not be employed doing it.

Sounds like nonsense, I know- but consider what it is that all those highly paid analysts and traders are being asked to do by the people who employ them: to predict the future with sufficient accuracy that their employers can profit from those predictive abilities.

But now ask yourself a question: if you could predict the future of the stock market, or the FX market, or any other market where serious money could be made- would you be an employee, or would you go into business for yourself? Clearly the latter, which means that anyone currently working in a role that requires them to accurately model the future only stays in the job because they know they can't actually do the thing they are employed to do- if they could do it they would leave, raise some capital by demonstrating their gifts, and set up in business for themselves. And this obvious truth must also be clear to the people who employ them.

So there is something rather odd going on here, it seems to me- some kind of game is being played out in which the participants all agree to pretend that the future can be predicted, but none of them actually believes this is the case. Certainly money is being made in the City, but that money is not being made by predicting the future; it is being made by propagating the myth that the future can, to some degree, be predicted. The entire edifice of investment banking is in reality operating on the principle that if enough financial monkeys bang away at enough Bloomberg terminals then, on average, sheer chance will generate enough correct 'predictions' to keep the profits flowing.
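The 'financial monkeys' claim is easy to check with a toy simulation (all numbers invented): give thousands of analysts ten coin-flip market calls each and count how many end up looking like stars.

```python
import random

random.seed(0)

ANALYSTS = 10_000   # coin-flipping "experts"
CALLS = 10          # market calls each, every one a pure 50/50 guess

scores = [sum(random.random() < 0.5 for _ in range(CALLS))
          for _ in range(ANALYSTS)]

# How many got at least 9 of their 10 calls "right" by luck alone?
stars = sum(s >= CALLS - 1 for s in scores)

# Exact expectation: P(>= 9 of 10) = (C(10,9) + C(10,10)) / 2^10 = 11/1024
expected = ANALYSTS * (CALLS + 1) / 2 ** CALLS
print(f"{stars} 'star' analysts (~{expected:.0f} expected by chance)")
```

Roughly a hundred of the ten thousand guessers will boast a 90%+ hit rate- which is the survivorship effect the post describes: with enough monkeys, some track records always look brilliant.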

This explains the curious lack of consequences for those economists who failed to see the 2008 meltdown coming- the strange truth is that no one lost their job after the crisis because no one who employed them really expected them to be able to do their job in the first place. Their function was window dressing- a prop to wheel out for the benefit of nervous clients who required reassurance that their bankers had populated their establishments with 'experts' on hand to ensure the safety of their investments.

And by extension the same thing is true of all those people who are employed in the City to predict the future- their actual role is to provide a vast theatrical backdrop to a process in which random chance is the primary mover- but which investor is going to be happy to hand a percentage of their profits to a banker who admits this truth? Predicting the future may not be possible but it is highly profitable.

What we seem to have, then, is a profitable trade predicated on the notion that people who can predict the future choose not to use this gift for themselves but allow others to profit from it instead. Why does it not occur to the billionaire that the people who advise him on his investments would be billionaires themselves if they had the kind of insight into the future that he is paying them to deliver?

But now onto this huge theatrical stage set lumbers Artificial Intelligence, and in its train an army of geeks who may just be autistic enough to start digging around in search of hard proof that the denizens of the City really can do what they say they can in terms of predicting the future- because they will require that hard data to measure the performance of their own oracles. What happens if those geeks start to produce evidence that the prognosticators of the City and Wall Street are in fact no better at predicting the future than anyone else? Then the game will be up.

It may turn out that AI is no better than humans at predicting the future- but in trying to build their artificial oracles the geeks may accidentally expose the embarrassing truth that the entire industry of prediction is more snake oil than ambrosia.

 

 

 

 


It's all down to complexity. I'm a strict determinist, & so think that the failure of activities like meteorology & economics is due to the (almost) incalculable number of cause-and-effect interactions taking place. They're bad science- the scientific method is extremely powerful but quite limited in application.

So how far can we push science? As our models become increasingly complex and computing power increases, will economics become genuinely viable? No way of knowing at the moment. My suspicion is not (well, not before we collide with Andromeda, anyway).

There's also a conundrum here: any model you build of the entire physical world has to exist as a subset of the physical world, so it cannot model it completely. It's knowing when the model is complex enough to give useful predictions that is the criterion, I suppose.

 

9 hours ago, wonderpup said:

So there is something rather odd going on here, it seems to me- some kind of game is being played out in which the participants all agree to pretend that the future can be predicted, but none of them actually believes this is the case. Certainly money is being made in the City, but that money is not being made by predicting the future; it is being made by propagating the myth that the future can, to some degree, be predicted. The entire edifice of investment banking is in reality operating on the principle that if enough financial monkeys bang away at enough Bloomberg terminals then, on average, sheer chance will generate enough correct 'predictions' to keep the profits flowing.

This is essentially the thrust of Nassim Taleb's Fooled by Randomness.

27 minutes ago, ****-eyed octopus said:

It's all down to complexity. I'm a strict determinist, & so think that the failure of activities like meteorology & economics is due to the (almost) incalculable number of cause-and-effect interactions taking place. They're bad science- the scientific method is extremely powerful but quite limited in application.

So how far can we push science? As our models become increasingly complex and computing power increases, will economics become genuinely viable? No way of knowing at the moment. My suspicion is not (well, not before we collide with Andromeda, anyway).

There's also a conundrum here: any model you build of the entire physical world has to exist as a subset of the physical world, so it cannot model it completely. It's knowing when the model is complex enough to give useful predictions that is the criterion, I suppose.

Predicting markets is a fool's errand because even if you come up with an algorithm or whatever that works, it self-invalidates as soon as it interacts with the market (because the algorithm can't include the effect of its own trades). This is why all the automated trading increases volatility- the trading algorithms add complexity to the system; the more sophisticated the algorithms, the more complex and volatile the system.
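The self-invalidation point can be sketched with a toy linear price-impact model (all numbers invented): a trader who correctly foresees a price rise captures the full move only if the trade itself is too small to move the entry price.

```python
def profit_per_share(edge: float, order_size: int, impact: float) -> float:
    """Toy model: the trader foresees a price rise of `edge` per share,
    but each share bought pushes the entry price up by `impact`
    (linear price impact), eating into the forecast gain."""
    entry_slippage = impact * order_size
    return edge - entry_slippage

EDGE = 1.00     # forecast price rise, per share (invented)
IMPACT = 0.001  # entry-price impact per share traded (invented)

for size in (0, 100, 500, 1000):
    print(size, round(profit_per_share(EDGE, size, IMPACT), 2))
# The larger the trade, the more the trade itself erodes the predicted
# gain- at 1000 shares the entire edge has been consumed by impact.
```

A real market is of course far messier than a linear impact term, but the mechanism is the same: the prediction changes the thing being predicted as soon as it is acted on at scale.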

Edited by goldbug9999
On 25/06/2017 at 7:49 PM, wonderpup said:

I don't disagree with the idea that Economists are in the psychology business, but consider the fact that they have done everything in their power to deny this, inserting into their professional lexicon an array of dubious mathematical models in an attempt to bolster their claim to be a 'true' natural science rather than a social science.

That's just incorrect. I am an Economics graduate. Economists use mathematical techniques to help governments and business owners manage the allocation of scarce resources. Of course the use of maths doesn't mean that Economics is a natural science, and I have never heard any Economist pretend otherwise. Indeed, human behaviour is ever-changing (unlike the subject matter of the natural sciences), so many economic problems can never be completely "solved" the way that, say, E=mc^2 is a solved equation.

Your logic seems to be: Economists cannot predict anything with certainty, therefore they are of no more value than witch doctors. Just as with weather forecasting, business planning, or government forecasting, the point is that even though you don't know the future for CERTAIN, you still need to make decisions about it. For example, a shop needs to buy in an amount of stock even though it isn't CERTAIN how much its customers will want to buy next week. You need to either take your umbrella with you OR not, even though you can't be CERTAIN whether it will rain.

Economics is more valuable than witch doctoring, because - even though it isn't perfect - it *is* better than no information at all when making your decisions.  And AI will no doubt help economists create even more helpful information - but whilst I can see AI helping with mathematical modelling it is harder to see it replacing the element of psychology.  And it will never be 100% accurate - how could any predictive process ever be?
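The "better than no information" claim can be put in numbers with a toy expected-cost calculation (all costs and probabilities invented): compare deciding blind, deciding with an imperfect forecast, and deciding with a perfect one.

```python
# Toy decision problem (all numbers invented): carry an umbrella (cost 1)
# or risk getting soaked (cost 5), with a 30% base chance of rain.
P_RAIN = 0.3
COST_UMBRELLA = 1.0
COST_SOAKED = 5.0

# No information: commit to the single action with the lower expected cost.
no_info = min(COST_UMBRELLA, P_RAIN * COST_SOAKED)

# A perfect forecast: carry the umbrella only on rainy days.
perfect = P_RAIN * COST_UMBRELLA

# An imperfect forecast, right 80% of the time, followed blindly.
ACCURACY = 0.8
imperfect = (P_RAIN * ACCURACY * COST_UMBRELLA            # rain predicted: carry
             + P_RAIN * (1 - ACCURACY) * COST_SOAKED      # rain missed: soaked
             + (1 - P_RAIN) * (1 - ACCURACY) * COST_UMBRELLA)  # false alarm
print(no_info, imperfect, perfect)
```

In this made-up setup the imperfect forecast beats no information, and the perfect one beats both- which is the poster's point: a forecast doesn't have to be certain to be worth paying for, only better than guessing.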


We are seeing our traditional business decline, hence are betting the farm on AI. I get PowerPointed all the time by our R&D, and it's clear that (at least at present) AI is not a magic bullet that can do just anything. A lot of it sounds like complete BS, but there are many interesting bits. Good stuff I have heard about recently is IT security monitoring: detecting unusual network behaviour is not so hard, but in large networks it has got beyond the ability of human monitors to evaluate and take action quickly enough. Another is medicine- surprisingly to me, as I would have thought a doctor with years of experience would be the job least likely to be automated. It turns out, though, that a lot of symptoms are too similar for easy diagnosis: for example, you turn up at the GP with abdominal pain, which could have any one of many causes. It takes time for a human to work through them all, and the AI is better at it. Or, perhaps more accurately, a human makes the diagnosis better and faster when AI-assisted.
