House Price Crash Forum



Everything posted by wonderpup

  1. Until recently UK nationals were EU workers- so where's the problem? Just hire locals. Of course this may require a hike in wages, but that's called the free market- a very popular concept among business owners until it involves paying more for labour, at which point they demand government intervention to rig the labour market by maintaining their supply of cheap eastern European workers- whose legendary 'work ethic' seems to have suddenly vanished in the face of a declining pound. It turns out that people from eastern Europe are not fools prepared to work hard for meagre pay after all- they are voting with their feet, and who can blame them? Why should they continue to be exploited by low-paying British employers when the numbers no longer add up for them?
  2. But why create tax credits in the first place if not to augment poor wages? There's a 'chicken and egg' scenario here in which low pay creates a state response which then becomes incorporated into the pay structure, leading to an exacerbation of low pay. There's something deeply wrong with the slogan 'Making work pay'- wrong because it's an admission of failure at the most basic level of the economy that state intervention is now required to make having a job economically viable for those at the bottom. Tax credits were basically an attempt to deal with the reality that for a lot of people at the lower end of the income scale working was an irrational choice, when being on benefits would mean a higher standard of living for them and their families. But of course they created a new set of perverse incentives that has further muddied the water around low pay and its interactions with the benefit system. In a sense tax credits are the inverse of 'flexible' labour- having made working a non-viable option for millions by facilitating their commoditisation, the state then finds it must intervene in the very labour markets it sought to liberate, deploying measures designed to prevent those now-commoditised workers falling into destitution. So we have the bizarre outcome of a labour market more deeply entwined with the state than ever before.
  3. That's the theory- yet in reality we have rising demand for food banks alongside that near-full employment- so either the stats are fiction or the wages being paid are so sh*t that even people with jobs can't afford to eat anymore. I drove past a 'hand car wash' establishment today that was employing five guys to wash a single car- so assume ten minutes per car, six cars washed per hour, and a minimum wage currently at 7.50 per hour- those five guys cost 37.50 per hour to employ. In theory, if you charge 10 quid a car you make a small profit of 22.50 per hour. But that assumes your car wash is constantly busy with zero downtime and zero overheads- in reality those five guys are going to be standing around idle some of the time, and there must be some cost for the materials and the space they are renting- so it's a mystery to me how the numbers add up, unless the people they are using are on some kind of 'freelance' or contractor deal that means they only get paid for the cars they wash- meaning they almost certainly don't earn the minimum wage in terms of the hours they spend at work. There are a lot of games being played around the employment of 'flexible' labour that make the current employment figures highly suspect. And has anyone noticed that the 'flexibility' of work has not been matched by an equal flexibility in debt obligations?
Your employer might have the ability to treat you as a disposable commodity that can be turned on and off at will- but your creditors would not be at all happy if you passed this income variability on to them in the form of reduced or missed repayments- and in many cases the two parties are the same organisation. Those corporations who call for flexible labour would be horrified at the suggestion that this flexibility should be extended to their debtors- their advocacy of 'modern' flexible arrangements re wages does not extend to their outlook on debt repayment- on that matter they remain firmly fixed in the 19th century.
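The back-of-envelope car wash maths above can be put in a tiny model. The wage, price and throughput figures are the ones from the post; the 25% idle time and 10.00/hour overhead figures below are purely illustrative assumptions:

```python
def hourly_profit(workers, wage, cars_per_hour, price_per_car, overheads=0.0):
    """Hourly profit of a hand car wash: revenue minus labour and overheads."""
    labour_cost = workers * wage             # e.g. 5 x 7.50 = 37.50 per hour
    revenue = cars_per_hour * price_per_car  # e.g. 6 x 10.00 = 60.00 per hour
    return revenue - labour_cost - overheads

# Best case from the post: full utilisation, zero overheads.
print(hourly_profit(5, 7.50, 6, 10.0))  # 22.5

# Assumed figures: 25% idle time plus 10.00/hour for materials and rent
# wipe out the margin entirely.
print(hourly_profit(5, 7.50, 6 * 0.75, 10.0, overheads=10.0))  # -2.5
```

Which is the point: at minimum wage the numbers only work under unrealistically perfect conditions.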
  4. Given the state of India's road system I doubt driverless cars are much of a threat to drivers' jobs there, even if they can be made to work in other places- but I wanted to post this to mark the first time I have seen any real political pushback against artificial intelligence in any form. Until now the universal consensus among the world's ruling elites has been that AI is always and everywhere a good thing. So to have an official government spokesman of a major state like India come out against it on the basis of protecting jobs is- I think- an interesting development. At the very least it suggests some unease is creeping in about the social and political consequences of AI. The irony is that it's not self-driving cars the Indian government should be worried about- it's the people who work in call centres and the other offshored service jobs that are most under threat, from software 'bots' that will soon be capable of taking on much of the routine service work currently performed by humans. So- unless anyone can cite an even earlier example- I think we can claim this statement by India's transport minister represents the first stirring of resistance toward artificial intelligence at the official level. And in decades to come, if you ever find yourself being hunted through the ruins of civilisation by intelligent robots armed with laser guns, you can at least take comfort from the fact that you were witness to the first overt signs of human rebellion against the machines: http://www.bbc.co.uk/news/technology-40716296
  5. Yes- the way a tiger in the circus functions well until one day it bites your head off. We are still living with the consequences of the last time the City blew itself up- and it looks like another variant of subprime is on the cards in the auto loan market. Real wealth can only be generated by making real things or adding real value- and it's not clear to me that a great deal of what goes on in the financial sector qualifies. In any case, a lot of the 'wealth creation' activity of the City is about to be eaten by fintech, which will commoditise the profit right out of it.
  6. The idea that investment is driven by the amount of expendable cash companies have sloshing around in their accounts seems really odd to me- yet this seems to be the basis of the current government's drive to cut corporation tax. In reality companies invest only when they see the possibility of a return on that investment- and in an economy with stagnant disposable incomes that demand just is not there. So instead of investing in machinery, firms simply hire more cheap labour- a truly perverse development, but one that makes perfect sense when labour is cheap and plentiful and machines are expensive to purchase and maintain. Productivity as the road to prosperity is a slippery idea anyway- after all, the more productive your workforce becomes the less labour you need, which will depress wages. A truly hyperproductive economy would probably collapse, because it would pay out so little in wages that there would be no effective demand for the abundance of goods and services it produced. Those who argue that increased productivity will by default lead to wider prosperity imagine themselves to be advocates of capitalism, but in reality they carry a concealed socialist premise- the assumption that the benefits of that increased productivity would be widely shared among the population. In reality this does not happen much- the gains from increased productivity tend to be captured by the owners of capital and machines, not by the workforce whose bargaining power those machines have eroded.
  7. Some straws in the wind here: http://news.efinancialcareers.com/uk-en/279189/wilmotml-aric-whitewood-jonathan-wilmot-credit-suisse/ The article is indicative of the kind of narrative that will be constructed to support the idea of AI as the superior source of prediction- and it's from such narrative speculations that memes and mythologies are born.
  8. I think you are right- it's the 'inconvenient truth' factor- we tend to ignore predictions we don't like. What's interesting is that almost every human society in history has had its oracles- they go by different names, like Shaman, or Witchdoctor, or Economist- but their role is always the same: to act as mediators between the tribe and its uncertain future. Our modern prognosticators claim that their oracular power comes from science, not superstition, pointing to the mathematical rigour of their craft. But why would anyone who really could predict the future waste their time being an economist? Would they not simply use this power to make themselves rich? Also, it may in retrospect have been an error on the part of economists to insist that their oracular insight was the product of the rigorous application of maths to the data, given that this is a domain in which computers may have a distinct advantage. It might have been wiser to emphasize the role of intuition and human psychology- but this was viewed as effete, unmanly activity best left to the 'social' sciences. Economics wanted to be a 'hard' science- so much so that economists even invented their own Nobel prize to enhance their status in this regard. In reality economists are just another manifestation of the all too human desire to see into the future and thus control it, and in that sense an AI system designed to model future outcomes is not a new idea- it's just another iteration of that same desire. The attraction of a non-human oracle to the modern mind is paradoxical, in that the very opacity of an artificial thinking machine would lend it a mystique that could transcend the established cult of the human 'expert'. We might find that the predictions of machines are held in higher regard than those of humans precisely because they originate from a non-human source.
Were that to happen we would have come full circle, recreating in modern form the oracles of the ancient past, whose power of prophecy was derived from non-human entities whose insight into the future was considered superior to that of mere human interlocutors. If the reality is that those humans currently employed as oracles cannot really predict the future with a useful degree of precision- and this seems to be the case- then their job security would appear to rest on their value as psychological support for the anxious client. However, we already see AI developers bringing their own oracles to market, backed by the potent mystique of 'artificial intelligence'- a modern mythology that seems likely, in the long run, to eclipse the hitherto unchallenged dominance of human expertise.
  9. No- my argument is that economists have the same value as witchdoctors, because they provide the same illusion of mastery and control over future events that in reality cannot be predicted or controlled. The complex mathematical calculations in which economists engage are best regarded as analogous to the rituals performed by the witchdoctor as he casts the runes to decipher the course of future events. And just as in the case of the witchdoctor, when the predictions of the economist fail to materialise his status does not change, because his value to the tribe lies not in the accuracy of his predictions but in the illusion of control over the unknown future that he represents. The reason economists get hired is not because they can predict the future but because they offer reassurance to the employer's clients that the future can to some degree be anticipated and thus controlled- and this is a valuable contribution to the employer's bottom line. So my argument is that the value added by those in the 'predictive' professions- like economists- is not that they have any real ability to predict anything, but that they provide a comforting illusion of mastery over future events that allows their employers to reassure nervous clients that their money is in good and capable hands. The reason AI represents a threat to these professions is that it can be viewed as an alternative means by which clients can be reassured. If the meme that AI is a superior methodology for anticipating future outcomes becomes dominant, the role of human experts like economists will be diminished as they find themselves either appendages of- or entirely replaced by- artificial intelligence systems. It may not even be true that AI systems are any better at guessing the future than humans are- what matters is that this is believed to be true, at which point clients will demand that those managing their affairs employ AI systems rather than humans.
  10. This implies that the outcomes in question will be subject to some kind of overt subjective analysis- but there might be good reason why those involved would not wish to carry out this analysis. What if the counterfactual to a failed prediction was the conclusion that reliable prediction of any sort is in fact impossible? This might be very bad news indeed if your business model depended on the idea that your- or your employees'- expertise allowed them to make reliable predictions. For example, consider the odd fact that the City of London is full of people who are only employees because they can't actually do the thing they are employed to do- if they could do it, they would not be employed doing it. Sounds like nonsense, I know- but consider what it is that all those highly paid analysts and traders are being asked to do by the people who employ them: to predict the future with sufficient accuracy to allow their employers to profit from those predictive abilities. Now ask yourself a question- if you could predict the future of the stock market, or the FX market, or any other market where serious money could be made, would you be an employee, or would you go into business for yourself? Clearly the latter- which means that anyone currently working in a role that requires them to accurately model the future only stays in their job because they know they can't actually do the thing they are employed to do. If they could do it they would leave, raise some capital by demonstrating their gifts, and set up in business for themselves. And this obvious truth must also be clear to the people who employ them. So there is something rather odd going on here, it seems to me- some kind of game is being played out in which the participants all agree to pretend that the future can be predicted, but none of them actually believe this is the case.
Certainly money is being made in the City, but that money is not being made by predicting the future- it is being made by propagating the myth that the future can, to some degree, be predicted. The entire edifice of investment banking is in reality operating on the principle that if enough financial monkeys bang away at enough Bloomberg terminals, then on average sheer chance will generate enough correct 'predictions' to keep the profits flowing. This explains the curious lack of consequences for those economists who failed to see the 2008 meltdown coming- the strange truth is that no one lost their job following the crisis, because no one who employed them really expected them to be able to do their job in the first place. Their function was as window dressing- a prop to wheel out for the benefit of nervous clients who required the reassurance that their bankers had populated their establishments with 'experts' who would be on hand to ensure the safety of their investments. And by extension the same is true of all those people employed in the City to predict the future- their actual role is to provide a vast theatrical backdrop to a process in which random chance is the primary mover- but which investor is going to be happy to hand a percentage of their profits to a banker who admits this truth? Predicting the future may not be possible, but it is highly profitable. What we seem to have, then, is a profitable trade predicated on the notion that people who can predict the future choose not to use this gift for themselves but allow others to profit from it instead. Why does it not occur to the billionaire that the people who advise him on his investments would be billionaires themselves if they had the kind of insight into the future that he is paying them to deliver?
But now onto this huge theatrical stage set lumbers artificial intelligence, and in its train an army of geeks who may just be autistic enough to start digging around in search of hard proof that the denizens of the City really can do what they say they can do in terms of predicting the future- because they will require this hard data to measure the performance of their own oracles. What happens if those geeks start to produce evidence that the prognosticators of the City and Wall Street are in fact no better at predicting the future than anyone else? Then the game will be up. It may turn out that AI is no better than humans at predicting the future- but in trying to build their artificial oracles the geeks may accidentally expose the embarrassing truth that the entire industry of prediction is more snake oil than ambrosia.
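The 'financial monkeys' claim- that sheer chance alone will always produce some apparently successful forecasters- is easy to check with a toy simulation. All the figures here (10,000 analysts, 10 binary calls, 8/10 hit threshold) are illustrative assumptions, not from the post:

```python
import random

def lucky_forecasters(n_analysts, n_calls, threshold, seed=42):
    """Count coin-flipping 'analysts' who get at least `threshold`
    of n_calls binary market calls right by pure chance."""
    rng = random.Random(seed)
    lucky = 0
    for _ in range(n_analysts):
        # Each call is a fair coin flip: right half the time.
        correct = sum(rng.random() < 0.5 for _ in range(n_calls))
        if correct >= threshold:
            lucky += 1
    return lucky

# Out of 10,000 analysts making 10 random up/down calls each, roughly
# 5-6% (around 550) will score 8/10 or better and look like star predictors.
print(lucky_forecasters(10_000, 10, 8))
```

With no skill anywhere in the system, a bank employing enough forecasters is guaranteed a crop of 'proven' experts to parade before clients- which is the mechanism the post describes.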
  11. I don't disagree with the idea that economists are in the psychology business- but consider the fact that they have done everything in their power to deny this, inserting into their professional lexicon an array of dubious mathematical models in an attempt to bolster their claim to be a 'true' natural science rather than a social science. The irony being that, having styled themselves as operating purely on the basis of hard maths and empirical data, they have exposed themselves to the idea that an artificially intelligent machine might credibly do their job better than they can. This substitution of human experts with artificial experts is not theoretical- it's happening to some degree even as we speak: https://www.theguardian.com/technology/2017/jan/05/japanese-company-replaces-office-workers-artificial-intelligence-ai-fukoku-mutual-life-insurance What's interesting here is that IBM's claim is not simply that its system can crunch the data much faster than the humans involved- they are claiming that it can "analyse and interpret" that data too- that it can 'think like a human'- which is bulls*it- no current AI can think remotely like a human. So 34 skilled people are being sacked on the basis of a tacky PR claim that bears no close examination. How does this work? I think this scenario has come about because the output of those human experts was always to some degree subjective- the business of analysing insurance claims has never been a simple matter of adding fact A to fact B and arriving at fact C. If it were, these jobs would have been automated years ago. The truly perverse aspect of this story is that IBM have pulled off the trick of selling their system not on its number-crunching ability but on its supposed ability to replicate the subjective- that is, 'intuitive'- aspects of the task: the very elements that should have made those human workers safe from automation.
Here's the punchline as I see it- any task that involves subjective expert judgement- where the outcome is in part determined by that judgement- is now potentially exposed to the IBM strategy above. The argument goes like this: if we employ an expert because a problem has variables that cannot be precisely calculated simply by taking the existing data and crunching the numbers, then- by definition- we can never really know whether that expert's solution was in fact the best one. We are forced to accept his judgement- that is, after all, why we needed him in the first place. For example, suppose we hire an expert in marketing to launch our new product, which leads to a given number of sales in the first few months. Can we be sure that if we had chosen a different expert those sales would have been lower? No, we can't- they might even have been higher. There is no way to crunch the sales data to find out, because that data is itself partly a derivative of the strategy our expert chose to deploy. In other words, the outcome of this kind of expert-led decision making has an irreducible subjective component that can never be objectively tested against the performance of other experts. What this means is that, from the point of view of those employing them, experts are 'black boxes' whose judgements- compared to those of other experts- are not easily measured by any objective means. So now along comes IBM saying that they too have a 'black box' called Watson- an AI system capable of a superhuman degree of analysis that allows it to exercise 'judgement' when arriving at its conclusions. And just as in the case of the human expert, the strategies recommended by Watson will also have their irreducible 'subjective' aspects- not because Watson is human, but because- like its human counterparts- it too has a 'black box' dimension that renders its decision making opaque. By their very nature, systems like Watson are not totally transparent even to their operators.
So- imagine that you are tasked with sourcing expertise to solve a given problem. On the one hand you have a human expert who claims to have unique insights into your sort of problem, and on the other you have IBM and its billion-dollar brain, who also claim to have unique insights into your sort of problem- on what do you base your decision? In both cases you are faced with 'black box' processes that happen either in the organic brain of the human or the artificial brain of the machine- both are equally opaque from your point of view- and since you must pick one, you will never know if the other was the better choice. So how to choose? In the end your choice will amount to an act of faith- do you subscribe to the idea that the human brain is the superior tool, or can IBM convince you that its machine is the better solution? The point is that whichever path you take you will never know that you chose the right one- if you choose the human, the machine might have been the better choice, and vice versa. In short (he said ironically), the very thing that should make the human expert safe from automation- the subjectivity of his judgement- is opening up a flank that can be exploited by those selling AI solutions to replace him. From the point of view of an end user, the choice between an opaque human expert and an equally opaque AI is not really that clear. In the end it might come down to which meme proves the most persuasive- the meme of the human expert or the meme of the intelligent machine. So those who imagine that the subjectivity and opacity of their expertise will protect them forever from the encroachment of machines might be mistaken- because the very fact that their judgements are subjective and opaque means that choosing between them and the machine becomes a choice between two competing mythologies- man as expert versus the intelligent machine.
In the case of the IBM example above the machine mythology won the day and the human experts were shown the door.
  12. It's not so much the passage of time as the evolution of data. To be fair to the AI industry, they don't claim to be building literal oracles that can see into the future- what they claim is that if you give them enough data from a given domain they will deliver a prediction as to the possible evolution of that dataset in the future. It's about seeing patterns in data that can be extrapolated beyond the current data set to generate new data, which can then be used to make more informed decisions. This is the same claim made by economists and others whose work involves making predictions based on current data. There's no evidence that it's possible to reliably predict economic outcomes at all- this is the lesson of 2008, when a system the 'experts' claimed was functioning normally suddenly collapsed, in total contradiction of their most basic understanding of how that system worked. I'm not arguing that either AI or human beings can actually predict the future- I'm arguing that the real social function of economists is to provide the illusion that the future can be managed and to some extent controlled, and that artificial intelligence now represents a competing source of such comforting illusions. Given that it's impossible to really see the future, we are dealing here in the realms of mythology and belief rather than reality- and should the idea take root that machines are better at extrapolating patterns from data than humans, those humans might find themselves losing status and authority as the sole arbiters of such extrapolations. AI represents the first challenge to the status of the human 'expert' in the entire history of our species- and this challenge is especially acute for those experts whose status depends on their ability as pattern recognition specialists- which turns out to be a surprisingly wide demographic among the population of human expert practitioners.
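The weak point in 'extrapolate the pattern beyond the current data set' is regime change: a model that fits history beautifully can fail badly out of sample. A minimal stdlib-only sketch- the dataset below, with its small hidden nonlinearity, is invented purely for the example:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = cov / var
    return m, my - m * mx

# "History": ten observations that look steadily linear in-sample,
# with a tiny hidden nonlinearity lurking in the data.
xs = list(range(10))
ys = [x + 0.001 * x ** 3 for x in xs]
m, b = fit_line(xs, ys)

in_sample_err = abs((m * 9 + b) - ys[-1])                        # small
out_sample_err = abs((m * 100 + b) - (100 + 0.001 * 100 ** 3))   # huge
print(in_sample_err, out_sample_err)
```

Inside the observed window the linear model is nearly perfect; pushed far beyond it, the hidden term dominates and the 'prediction' is off by an order of magnitude- which is roughly the shape of what happened to the pre-2008 models.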
  13. It would be a battle of two mythologies- astrology as pseudoscience versus AI as oracle- and the employment prospects of the human astrologer would hinge on which of these mythologies proved the more resilient. One of the ironies here is that in stressing the scientific rigour of their methodologies the astrologers might have cleared the path for the AI, whose claims to scientific rigour might be seen as the stronger.
  14. I think it's fair to say, however, that if there were a global failure in aviation or architecture on the scale of the financial crisis there would have been consequences for the people involved- yet in economics we see no such outcome. The same theories are being taught by the same people at the same universities- and there has been little fallout of any kind for those highly paid experts who failed to do the very thing they were being paid to do, which was to accurately model the behaviour of the economy. What happened in 2008 was not only unforeseen- it was theoretically impossible according to the experts involved. But my real point here is that it did not matter that economics failed on a technical level, because that is not the reason why economists exist- their true value is as providers of the illusion of certainty in an uncertain world. Crudely put, we employ economists for the same reason some people wear lucky charms- in an attempt to impose some sense of predictability on an unpredictable universe. So the apparent immunity of the profession to its own technical failure is understandable when the economist is seen not as a scientist but as a provider of reassurance and certainty in the face of the unknown that is the future. We need economists for the same reason primitive tribes needed witch doctors with chicken bones- to create the illusion that the future can be foreseen. The fact that this is impossible does not matter- what matters is the illusion. The point about AI is that it represents the first serious competitor to the human practitioner in the realm of predictions. Until now only human beings were deemed capable of taking existing data and extrapolating from it a model of future outcomes- this is no longer the case.
AI is all about predicting the future- even in the case of the self-driving car we are dealing with systems that must engage in short-term predictions as to the behaviour of the environment and other road users if they are to be safe. Merely relying on their superhuman reaction times will not be enough- we will need our robot cars to do what we do when driving, which is to peer into the next ten or twenty seconds and try to anticipate what will happen next. So the real threat from AI to anyone whose job involves modelling the future is that the idea will take hold that machines might do this better than they can- at which point their value as sources of insight will be undermined. The basic takeaway, I think, is that in reality predicting the future is impossible- so what is at stake here is the social role of oracle, and whether that role will be occupied by a human expert or by a machine.
  15. The financial crash of 2008 was seen as a massive failure on the part of those professionals who failed to 'see it coming'. The reputation of economists took a huge hit, because a big part of their job was supposed to be making reasonably accurate predictions about the economic future- yet almost all of them failed to predict the most catastrophic economic event in nearly a hundred years. But strange as this failure of prediction was, even stranger were the consequences for economists in terms of their jobs- because there were no consequences. No one was sacked, no mass cull of failed economists took place, nor was the teaching of economic theory much impacted by the complete failure of that theory to predict the meltdown of the global economy. In short, following the complete failure of economics as a tool to accurately model the behaviour of the economy- its raison d'etre- the profession carried on 'business as usual' as if nothing had happened. How can this be explained? Imagine a similar global failure in aircraft design or architecture, in which failed theories led to catastrophic outcomes- would these professions have been allowed to ignore their failure and carry on as before? Not likely- if planes started falling out of the sky or buildings started collapsing, the result would be a hunt to identify the culprits and make sure they were sacked as soon as possible. In the case of economists this simply did not happen, despite the devastating consequences their collective failure had caused. The reason- in my view- that economics seems so immune to its own failures is that the role of the economist is not in reality a practical one but a ceremonial one- they occupy the same space in the corporate cultures of today as the witch doctor occupied in the tribal cultures of the past, when he cast chicken bones on the ground and examined the resulting pattern for clues as to the future.
Of course the witch doctor could not really see the future and most of his predicitions turned out to be wrong- but that did not diminish his status in the tribe, because the value he provided was of the emotional and psychological kind, not the the practical kind. Employing an economist is not really about predicting the economic future- because this is not something they do that well. What it's really about is offering emotional and psychological comfort to your clients or superiors, who-knowing that you have employed an 'expert' - can sleep a little more soundly in their beds. So up until now it did not matter that much if your ability to deliver accurate predictions was poor- you still added value as a reassuring source of 'expertise', performing your role as the reader of runes and observer of portents. But now there is a new kid on the block called Artifiical intelligence- a shiney new technology that-among other things- claims to be able to examine vast amounts of data and from that data draw valuable insights regarding future outcomes. Even if these powers of prediction are just as flawed as your own they still represent a threat because for the first time in history a new source of 'expertise' has arisen- a competing mythology in which machine intelligence is pitted against the human 'expert' as a source of reassurance and guide to future action. It seems to me that there is an entire class of professions that are based on the idea that a deep knowledge of present configurations in a given domain can be used to make actionable predictions about the future of that domain- some people are paid a lot of money based on this assertion, often despite evidence that their actual ability to make such predictions is no better than random chance. Entire office blocks in the City of London are filled with people whose job is to predict the future of various financial assets or instruments. 
But what is 'big data' driven AI if not an attempt to build machines that predict the future? So I predict that the prediction business in all its forms is about to get clobbered by a new oracle- one that offers its own competing mythology as a source of valid insights concerning the future.
  16. I didn't say that the AI business was unique in this respect- only that there is a clear mismatch between the expectations of its investors and its public position re the impact of its technology on employment. It cannot be true that AI will be both highly profitable in the future and have little impact on future employment if those predicted profits are based on replacing people with technology, which they largely seem to be. So the oft repeated meme found in many articles and interviews re AI- that the aim is to 'make people's jobs easier' rather than replace them- is itself more than a little disingenuous. The low hanging fruit in terms of monetising AI is to take an existing human job and automate it as far as possible to cut costs- no one really denies this. Also, can I point out that if it's true that the AI industry doesn't even really understand how its own algorithms work, then your arguments as to the inherent limitations of these techniques would appear to be on somewhat shaky ground- how would you- or anyone else- know what those limitations might be?
  17. There's this assumption that Capitalism is somehow innately anthropomorphic in its nature, but it may well be that this apparent happy alignment between the free market and general human well being is an illusion- a product of poor sampling over too short a time frame. On reflection I would argue that the critique of Artificial Intelligence re employment is really a critique of the limitations of the Capitalist model, which seems to have no real response to offer in a scenario in which human labour is no longer required. The irony is that work- once considered a curse visited upon man by a vengeful god- has now become so indispensable that- like the devil- if it did not exist it would be necessary to invent it. Work- in a capitalist society- is not just a means to an end; it is also a vital component of the capitalist regime- so in a truly perverse twist of history, that which the sons of Adam once regarded as a punishment for the transgressions of Eden is now deemed so vital to human nature that we even talk of having a 'right' to work- an idea that our distant ancestors would have found utterly incomprehensible. The problem seems to be that the mixture of capitalism and powerful technologies is becoming so combustible and toxic that it threatens our future on every level, from environmental pollution to social implosion due to large scale job loss. What we need is not less technology but a better social framework within which to deploy that technology than the current Capitalist model, which seems to have no answers to any of the problems we now face. There is something increasingly desperate to be detected in the ever more shrill claims that what we need is more free market as a way to solve our problems- even those who make these claims seem less and less convinced by them as time goes by. 
What AI really threatens is not people but Capitalism- the same capitalism that will embrace AI with all the fervour of a junkie who knows their habit is killing them but just can't stop.
  18. For most of those 5000 years the horse was indispensable to human civilisation- then came steam and shortly afterward internal combustion, and within a few hundred years the horse went from indispensable to redundant. So the past is not always a good guide to the future. What happened to the horse is obvious- its value as a source of muscle power was surpassed by machines that offered a better and cheaper form of muscle power. But this was not achieved by building a mechanical horse, complete with four legs and a tail. Humans- being smarter than horses- had an alternative source of value to offer when horses became redundant- they had brain power. But what happens to humans when brain power becomes commoditised in the same way that muscle power was commoditised? And just as it did not require mechanical horses to replace real horses, it does not require an artificial brain to replace the human brain- all that is required is a machine that can replicate the value currently added by humans to a given productive task. I really don't see how we can continue to develop technology that replicates the cognitive function of humans without at some point making some of those humans redundant- and I really don't see any obvious place for those redundant humans to go next in terms of paid work. The reality is that the AI industry is lying to somebody- either it's lying to its investors in terms of the profits it expects to make or it's lying to the public in terms of the jobs it expects to replace- because the only way those profits are going to materialise is by replacing people with machines.
  19. Yes- that's how it will be done- subpar automated services for the plebs, justified on the basis that a low cost automated service will be better than no service at all. Then over time a slow migration up the socio-economic ladder as the systems feed on their own performance to improve their game. I agree that for anything involving presenting a case to a courtroom no one is going to be looking for a robot lawyer anytime soon- but for the 'back end' stuff, where most of the billed hours are to be found, the pressure to automate will be coming from the clients. Anecdotally, I watched a recent episode of 'Madame President'- a drama series set in Washington DC- that featured a character who declared that they were going to Harvard to study law- only to be told that there was no point, since in the future most of the legal work would be done by robots! Not exactly a scientific survey I know, but it's always interesting to see the point where the tin foil hat concerns of the fringe start to show up in mainstream culture.
  20. The Law is a good example of how even those professions long considered 'safe' from technology are now starting to become vulnerable: Artificial intelligence closes in on the work of junior lawyers https://www.ft.com/content/f809870c-26a1-11e7-8691-d5f7e0cd0a16 As is now the norm with articles of this type the meme of benign redeployment is ever present, with the claim that the technology will 'free lawyers up to do more interesting things'- which may well be true in some cases- but any technology that reduces a task from 'weeks' to 'a few minutes' is surely bad news for the people currently being paid for those weeks of work.
  21. It's a good point. I used to think that by diversifying out of the digital stuff and into 'real' art (I also paint landscapes on canvas) I could avoid the impact of automation, but then realised of course that even the most technology-proof worker still needs somebody somewhere to buy the product or service they provide. So if AI does start to eat jobs in a significant way it will be a ripple effect in which the first people to be hit will be those losing their jobs- but the next group will be those whose jobs depended on the first group having a job in order to buy stuff. There is also a third ripple that happens when people who still have jobs start to save instead of spend because they hear about how other people are losing their jobs, creating more slowdown in demand, leading to more lost jobs. I am sometimes accused of being a technophobe when I post on this subject but that's not really true- I love technology and spent my youth reading science fiction and dreaming about the amazing future we would all live in. So the problem I see is not technology in itself, rather it's the interface between technology and our essentially 19th century social attitudes and institutions, which seem totally incapable of dealing with the potential social disruption that might be created by that technology. Even now we have a virtual obsession with work as some kind of social good- the meme being that only through work can we be truly fulfilled and justified in our existence. Which is fine, except that we are also currently investing billions in order to develop technologies that are directly targeted at putting a lot of people out of work! So there seems to be this slow motion collision taking place between the cultural ideal of work as a social good of utmost value and the commercial imperative to automate as many jobs as possible in order to be 'competitive'. How can these totally incompatible goals be integrated into some kind of coherent vision of the future? 
At present nobody seems to have an answer. All we have is a fuzzy notion that the population must 'skill up' in order to remain one step ahead. Exactly what skills they should be acquiring is never made clear, perhaps because the idea that we all migrate en masse to the upper echelons of the skills pyramid is inherently absurd- pyramids having this tendency to narrow markedly toward the top.
  22. It's that gap between the operator and the artisan that technology tends to close. My own particular experience was the decimation of the illustration business that came with the introduction of Photoshop- instead of using freelance artists to design unique book jackets or magazine illustrations etc. the publishers started to use in-house operators to create photo montage designs that looked very slick and finished but required minimal real artistic skill to create. Your point about reusable assets is also important. Digital assets are by their very nature highly plastic and can be easily repurposed even by relatively unskilled people. So, for example, if someone needs to create a human character for a game do they sculpt a detailed human figure starting from a simple primitive sphere? Or do they use a pre-made human figure and base their design on that? In a time sensitive production scenario the answer is obvious. It makes perfect sense commercially to automate as much of the 'creative' process as possible, and thus the trend is inevitably to erode the 'craft' aspects of the job with automated or pre-built solutions. There will- of course- always be a place for the exceptionally talented in virtually any field- but by definition most of those employed in any field are not the exceptionally talented- they are the workhorses whose input is valuable but not so unique that it is beyond the reach of increasingly smart technology. What seems to be happening is that technology is creating a 'winner takes all' scenario in which a few big players in every field make huge profits while everyone else struggles to survive. We see this in the music industry, increasingly in the games industry- and when it comes to online search or social media there really is no longer a competitive landscape at all- it's Google and Facebook and virtually no one else. 
What always seems to be glossed over whenever people talk about the industries of the future is the fact that none of those industries seem to be very labour intensive- as you point out, even in new industries like computer games the trend is toward smaller teams using smarter tools and processes rather than expanding workforce numbers. The reality seems to be that any industry based on advanced technology tends to follow a fairly rapid arc in which it initially creates jobs in the early phase when the technology is developing, but once the technology matures many of those jobs are themselves automated by technology.
  23. The problem is that 'the myth of a superhuman AI' is itself a myth in terms of the real concerns that people have regarding AI. And- ironically- it's people working in AI who seem to be mostly propagating this myth in their efforts to debunk it- they are fighting a straw man largely of their own creation. Firstly, most people have no worries about AI whatsoever- they just don't think about it. Among those who do worry about it the main concern is not Superintelligence- or even General intelligence- it's narrow intelligence that just happens to allow a machine or software to do their job, or to reduce the skill needed to do their job to the point where their skillset is made irrelevant in commercial terms. Conflating some imagined public anxiety re the arrival of superhuman AI with legitimate concerns re the impact of narrow AI on employment is just muddying the waters of the debate. In practical terms it does not matter if- for example- a self driving car lacks the smarts to engage in witty banter as it drives its passengers to their destinations- if I make my living as a taxi driver this lack of social skills on the part of automated vehicles is very unlikely to be a deciding factor if those vehicles offer a cheaper or more convenient service than I do. So the real threat represented by AI in the near term is not the arrival of General or Superhuman intelligence, but the arrival of artificial idiot savants that do one thing so well that the humans currently employed to do that one thing find themselves surplus to requirements. Given that automating existing jobs is the 'low hanging fruit' when it comes to ROI in AI, it's a bit disingenuous for the AI 'community' to assert that the real concern is- or should be- the public's alleged anxiety regarding the arrival of Skynet and the Terminator. 
Instead of trotting out the dubious claim that the intent is only to 'make people's jobs easier'- rather than replace them- it would be more honest to acknowledge that the entire business model of Artificial Intelligence today is based on the premise that you increase the bottom line by reducing the cost of labour- either by outright replacement with technology or by downskilling the job to the point where you can replace expensive skilled labour with cheaper unskilled labour and still get the same results. While the myth of the myth of superhuman AI continues to clutter up the debate it's hard to have a serious conversation about the more mundane but far more immediate concerns regarding the impact of AI on our current economic/social arrangements.
  24. My point exactly- the more level that playing field becomes the harder it is for people to make a decent living from their 'mechanical'* skills- but there is a more subtle issue here- because it turns out that the need to have those skills was a significant barrier to entry in a lot of creative fields. Consider two different designs for a new product or logo- and let's say for argument's sake that design one is a sketch in pencil produced by a very experienced designer with decades of award winning designs under his belt, while design two has been beautifully rendered via some slick 3D software by a total novice who has little design experience but is really good at producing polished images thanks to his software. In an ideal world the potential client would see past the difference in presentation and immediately grasp the essentials of both designs, and would then choose a design based on its merit as an idea, not just as a pretty picture. In the real world it's incredibly difficult for a client- who by definition is not a designer themselves- to see past the surface of a presentation into the real value beyond. And this applies even if both are using the same software to create equally slick presentations. So while I agree that the real creativity is in the content- the idea- and not the actualisation of it, in practice this distinction is virtually impossible to make- being human, we are deeply influenced by surface appearance- after all, if we were not there would be no reason to worry about design aesthetics in the first place. The implication is that as technology makes the creation of slick visuals or music etc. ever easier, the already crowded creative fields will become even more overpopulated with relatively unskilled people who nonetheless can offer highly finished work. 
This is an example of a wider trend of 'deskilling' in which the availability of powerful technologies can- as you say- 'level the playing field' between the professional and the 'wannabe' to the point where the sheer number of entrants makes earning a living increasingly difficult. Having spent my career in a horribly overcrowded profession I may be a little more sensitive than most to this trend- but it's definitely happening: https://www.theguardian.com/artanddesign/2013/dec/13/death-of-photography-camera-phones I know- people will argue that this is nothing new, and that's true. But what I think is new is that it's happening everywhere, all at once, because the computer is unlike any other invention in history- it's a machine that emulates other machines- the same computer you might use to do your accounts I could use to create art, or someone else could use to write music, or edit video, or a million other things. * There's a meme in AI that something is only considered intelligent until a machine learns how to do it- then it gets downgraded to a 'mechanical' skill- this happened with Chess. You have extended this meme by making the point that any human skill tends to be considered intuitive until a machine learns to do it- at which point it too gets downgraded to being 'mechanical'. In my case what used to be called 'artistic talent' is now starting to be replaced by the 'mechanical' skill of operating powerful rendering software- though I'm sure that those doing the rendering would probably insist that what they do is itself a form of art- and they may be right. 
The problem they will face however is when the software they use becomes so smart that almost anyone will be able to churn out great looking images with little experience or knowledge- a trend that can already be seen in products like 'Keyshot' a super slick rendering engine that is very fast and easy to use; http://www.root-solutions.co.uk/products/keyshot/?gclid=CKLQqZeD0NMCFccy0wodlHQE9g Note the emphasis here is not on craft or technical expertise- these guys are going after the lowbrow market where what matters is ease of use plus great results- because that is how they will maximize their revenue. Once they can leverage Artificial Intelligence into their products the need for skill or artistry will be designed right out of them.
  25. What you describe here is a gradual- yet accelerating- process by which human labour is replaced by machines of various sorts- but note that this process is not cyclical, it is cumulative- those tasks that are automated tend to remain automated- with a few interesting exceptions that have relevance to your comment quoted below. But I think we can both agree that in general the trend toward more automation of work is undeniable and inexorable. You make a good point here- and I will back up your argument by pointing out the odd fact that ten years ago if I wanted my car washed I would take it to a mechanical car wash, but today I will probably find my car being washed by a group of eastern europeans wielding spray hoses and wet rags- so what is going on? Why is it now apparently more viable to employ five men to wash my car instead of one man operating a machine to do the same job? Part of it is perhaps consumer preference- people may feel that a hand wash is more thorough than a machine wash and might pay a bit more for the service- but there is also a more obvious reason, I think, which is cheap labour. It's cost effective to employ those five men mainly because they are cheap to employ. Were those men to be paid a decent living wage- causing prices to rise- I suspect that the consumer preference in car washing would shift back to machines. So the apparent contradiction between increasing automation and the fact that some of us seem to be working harder than ever is- I believe- a consequence of the impact of automation on wages: http://www.economist.com/news/briefing/21650086-salaries-rich-countries-are-stagnating-even-growth-returns-and-politicians-are-paying And this trend of wage suppression has been exacerbated by immigration, which has added to the amount of labour available at low wages, and of course outsourcing, which has also suppressed wages to some degree- and outsourcing itself has been facilitated by technology. 
So it's entirely possible for us to have escalating job automation on the one hand and a large number of poorly paid insecure workers on the other, because the former undermines the bargaining power of the latter. The stagnation of wages and the rise of insecure work arrangements like 'zero hours' contracts are symptoms of the decline in the power of wage labour to demand a larger share of the economic pie. And part of this decline in bargaining power is down to the automation of work. So at present what seems to be happening is that people are competing with technology by working harder or longer for less money and under deteriorating terms and conditions. One other point I would make is that you rightly point out that there is huge potential demand for human labour in areas where machines do not compete, like caring for others, and in theory we could redeploy the workforce to do these jobs- but in practice what do we see? We see the closure of care homes because there is no money to sustain them. Despite the fact that they pay most of their staff the minimum wage they are still not viable businesses. The point being that in capitalism need alone does not matter- it's need plus the ability to pay that matters. So while there may always be jobs for humans to do in theory, if those jobs are not part of a viable business model they will not be created in the first place. I recently watched a presentation by a guy called Andrew Ng- a leading expert in Artificial Intelligence- and he made the point that the easiest tasks to automate are those already being done by people, because this is where most of the data is available to train the AIs. His concern was that most of the projects he was aware of in the field were directed at existing jobs for this very reason- those jobs are the low hanging fruit. So the elimination of jobs is not some side effect of current development, it is the primary focus of that development. 
In other words there is now a huge global investment in technologies that are not focused on creating new forms of employment but on automating existing forms of employment because this is the most direct route to profit and success for those companies and their investors.