House Price Crash Forum


This topic is now archived and is closed to further replies.


Q: What is likely to be the most important life skill in the 21st Century?


A: Navigating a society in which software driven by Artificial Intelligence will become the dominant arbiter of your opportunities for employment, finance, renting a home and maybe even love.

First a confession: AI is well on its way to becoming a cult, and I have in my own tiny way probably contributed to this by constantly bleating on about it here, so in that sense I am part of the problem.

That being said, I do think something important is happening here that is at least worth pointing out: the now clearly visible trend of deploying AI-driven systems to evaluate human personality and character traits.

OK, this has been around a while now in the form of the 'computer says no' meme, in which applications for some kinds of financial products were anecdotally assessed by computers, at least at the early stages. But those systems are toys in comparison to the systems coming down the pipe. For example, systems are being developed that claim to be able to read human emotions via real-time assessment of facial expressions. So how long will it be before your next job interview includes a video feed that can be examined by an AI to extract those fleeting facial expressions that might be used to gain 'insight' into your actual personality or motivations? How long before online dating sites deploy similar technologies to decide whom they match with whom, or even whom they allow onto their sites in the first place? And how long before your landlord wants a video interview that he can run past his AI personality-assessment software to decide whether to rent you his house?

Once the idea has been established that human character can be better understood by AI systems than by fallible human beings, the stage is set for a truly chilling scenario in which the judgement of the machines becomes the only judgement that matters. If this seems an extreme concern, consider the following:




Sent to Prison by a Software Program’s Secret Algorithms

“Can you foresee a day,” asked Shirley Ann Jackson, president of the college in upstate New York, “when smart machines, driven with artificial intelligences, will assist with courtroom fact-finding or, more controversially even, judicial decision-making?”

The chief justice’s answer was more surprising than the question. “It’s a day that’s here,” he said, “and it’s putting a significant strain on how the judiciary goes about doing things.”

He may have been thinking about the case of a Wisconsin man, Eric L. Loomis, who was sentenced to six years in prison based in part on a private company’s proprietary software. Mr. Loomis says his right to due process was violated by a judge’s consideration of a report generated by the software’s secret algorithm, one Mr. Loomis was unable to inspect or challenge.




Compas and other products with similar algorithms play a role in many states’ criminal justice systems. “These proprietary techniques are used to set bail, determine sentences, and even contribute to determinations about guilt or innocence,” a report from the Electronic Privacy Information Center found. “Yet the inner workings of these tools are largely hidden from public view.”


Note that these systems need not actually work well, if at all, in order to have an impact on your situation; this is where the cult factor becomes important. What matters here is that those deploying these systems themselves believe that they work. So if these trends continue, it's highly likely that in the future your personal fate will increasingly be decided in part by computers that are themselves 'black box' technologies, in the sense that even their creators do not fully understand how they reach their conclusions.

Another, more personal example: I recently applied online for a job. The application consisted of a series of scenario descriptions to which I was required to devise responses; my suitability for the role was to be decided on how closely my solutions matched the ideal outcome the employers had in mind. I'm fairly sure my application was processed by an AI, but that's not the point I want to stress here. I actually did very well on these tests and was given an interview on the strength of my performance, so in theory my 'personality' was the sort they wanted.

But in reality I 'cheated' on the test. Not because I knew the answers (I did not), but because I reverse engineered the questions to derive the ideal answers they were designed to point toward. In other words, based on my knowledge of the company and its public statements, I worked out as best I could which answers it would ideally want to see, and fed it those answers. This was not the approach envisaged by the people who created those scenarios; their intent was to glean valuable insights into my personality and suitability for the role. I perverted this intent by, in effect, constructing a false persona that best matched their ideal candidate and presenting this persona instead.
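To make the reverse-engineering concrete, here is a deliberately toy sketch. No real vendor's system works this simply, and the keyword list and scoring rule are invented for illustration; the point is that any test which scores answers against an 'ideal profile' can be gamed once you can guess that profile:

```python
import re

# Invented "ideal profile": keywords that a hypothetical employer's public
# statements suggest its automated scoring rewards.
IDEAL_PROFILE = {"teamwork", "customer", "ownership", "data", "safety"}

def score(answer: str) -> int:
    """Score an answer by how many ideal-profile keywords it contains."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    return len(words & IDEAL_PROFILE)

# Two candidate responses to the same scenario: an honest one, and one
# deliberately engineered toward the inferred profile.
honest = "I would escalate immediately and wait for instructions"
engineered = "I would take ownership, gather the data, and put the customer first"

# The engineered answer wins, regardless of which is actually the
# better response to the scenario.
assert score(engineered) > score(honest)
```

Any scoring system whose criteria can be inferred from the outside is vulnerable to exactly this optimisation: the test ends up measuring the candidate's ability to model the test, not their personality.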

Was this dishonest of me? Perhaps. But then again the questions themselves were also a deception of sorts, to the degree that they were not simply designed to elicit answers but were being used to construct a model of who I was: a model that would never be revealed to me, yet would be used to decide the outcome of my application. So I view this as a 'fight fire with fire' situation: the company and I both treated the job application process as an exercise in information manipulation and control. Both parties are guilty of a degree of bad faith here, in my view.

In any case, this is the life skill to which I refer in my title. While it is not possible to discern the processes that drive AI-based personality assessments, it may be possible to derive the criteria that underpin those assessments and, to some extent, feed them what they are looking for, and in this way become less of a victim of increasingly automated personality judgements that may or may not have a basis in reality.

Of course it's fair to say that these types of 'personality test' techniques have been around a long time in some form. What I think is new is the degree to which these tests are being weighted and taken as definitive, in part because the 'aura' of Artificial Intelligence is lending the outcomes a degree of authority and apparent certainty that was not so common in the past.

Since you are in the future more and more likely to be assessed and measured by some form of AI in a wide range of situations, it makes sense to cultivate a degree of cynicism as to the true intent and nature of any test or questionnaire you are asked to undertake. The likelihood is that your responses will be evaluated by a machine intelligence designed to process your answers in a symptomatic rather than literal manner, and its symptomatic conclusions about who and what you are will, in most cases, be very hard to either discover or challenge.

The implications of cheap and ubiquitous AI systems deemed capable of reaching valid assessments of human personality traits and character attributes have yet to become clear. But the blind faith often put in the judgements of computers should lead to a healthy suspicion of any situation in which you find yourself subject to such assessments, because if the computers say 'no' to you on the grounds of a software-derived personality malfunction, you might find getting a job, renting a home or even getting an online date impossible in the brave new world we are so mindlessly constructing.


Share this post

Link to post
Share on other sites


:lol: My tin foil hat is a little tight today, I confess. But in a world where prison sentences are being decided by machines, a little paranoia goes a long way, I feel.


This is the direction of travel:



Mya is disruptive in every way, and she’s set to revolutionize the talent pipeline. As the first fully automated recruiting assistant, she instantly engages with applicants, poses contextual questions based on job requirements, and provides personalized updates, feedback, and next-step suggestions. By delivering custom messages designed to address specific recruiter pain points, and acquiring critical applicant answers, Mya enables recruiters to focus their time on interviewing and closing offers. Powered by natural language processing technology, Mya is able to answer any question a candidate has related to the employer, including topics about company policies, culture, benefits and even the hiring process. When she can’t answer a candidate question, she queries the recruiter, gets back to the candidate, and learns how to respond the next time. “A recruiter will never again have to answer the same question twice,” FirstJob’s CEO and Co-founder Eyal Grayevsky explains.

Ultimately, the platform takes the data that Mya obtains through her conversations and turns it into quantifiable intelligence around the engagement level of each candidate and how closely they fit the target profile.



The 'data' in question of course being your interactions with the system, which are then, by some arcane method, transformed into 'quantifiable intelligence' used to evaluate your suitability for the role.
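As a rough sketch of what that arcane method might look like under the hood (the feature names and numbers here are entirely invented, not anything Mya's vendor has disclosed), a common pattern is to reduce the candidate's interactions to a feature vector and score fit against a target profile with cosine similarity:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Hypothetical features extracted from the chat logs, e.g.
# [responsiveness, enthusiasm, keyword fit], each scaled to 0..1.
target_profile = [0.9, 0.8, 0.7]
candidate = [0.8, 0.9, 0.2]

# The candidate is collapsed to a single 'fit' number in (0, 1].
fit = cosine(candidate, target_profile)
```

The candidate never sees the feature list, the weights, or the threshold at which 'fit' becomes a rejection, which is exactly the opacity complained about above.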

Are there, and should there be, limits to this machine-driven evaluation process? For example, should your body language be subject to analysis? Your voice stress patterns, pupil dilations or facial expressions? Or is any degree of scrutiny acceptable to feed the machines the data they need to assess your personality?

And what happens when this level of intrusive tech is available to more or less anyone who wants it? Implicit in this paradigm is the assumption that the machines are better than people at this kind of assessment.

So you could end up in a scenario where some people are effectively screened out of participation in virtually any and all forms of social activity, should they possess traits or character 'flaws' that trigger negative judgements in the AI evaluation systems deployed as gatekeepers to protect the 'normal' from the not so 'normal' (this distinction of course being made by the AIs themselves).

After all, given cheaply available AI technology to assess personality, it would surely make sense to deploy it in any context where one might wish to exclude 'undesirables'?

So at what point do we say enough?



1. Pet grooming. Apparently some care homes for the elderly have animals come over once or twice a week. The residents love to care for the hamsters, ponies and so on, I heard.

2. Nail technicians

3. Beauty therapists

4. Plumbers

5. Builders

6. Ballet teachers

7. Karate teachers

8. Yoga teachers

and of course 'care home workers' to care for the elderly.



I've seen a lot of businesses switch from 'classic' computerised call centre options (i.e. press 1 for statements, press 2 for...) to a more dynamic 'tell me why you called today and I will try and direct you' type setup.

This has been painful, particularly with the passport office, whose computer couldn't understand my postcode, yet the system could not direct me to an operator.

Even if the systems get better, I suspect there will continue to be a strong bias against certain accents, which, in the recruitment example, would be discrimination.

I like the self-service tills in shops, but I fear it won't be long till they are programmed to make small talk, ask where you are going on holiday, then try to sell you foreign currency or sunscreen. That type of thing.

I see Amazon takes food stamps now. I just might never leave the house again.


Same as it always has been in business since time immemorial: the ability to smooth-talk and bluff the higher-ups into believing you know what you are doing, whilst simultaneously shitting on all below you in the food chain.

