
The Human Element in All Things AI

Executive Summary

Much of the discourse on Machine Learning and Artificial Intelligence concentrates on their technical capabilities – how they can help organisations or where they have not worked as intended. But the human aspects of how people engage with this technology merit equal attention.

 

In this edition of our Government Insights series, we explore the role of people in the wider deployment of Artificial Intelligence. Find out why the requirements, attitudes and concerns of those who use, work alongside and are affected by this technological revolution must be heeded – and not ignored.


Discussions and analyses of Artificial Intelligence (AI) technologies, such as Machine Learning (ML) and Robotic Process Automation (RPA), are popping up seemingly everywhere.


Indeed, the momentum is now building to the point that some organisations seem to be saying “we need some AI tech” before doing the analysis that shows whether AI technology is actually the right way of solving their problem – or whether there is even a problem that justifies the investment.


It is absolutely correct to say that these exciting technologies have the potential to radically alter all sorts of areas for the better – from law enforcement to medicine, customer service to exploration. However, it is people who ultimately use, work alongside and are affected by this technology. Therefore, as part of introducing it, we need to recognise the importance of considering the human aspects – their requirements, their attitudes and their concerns.

Mark Woolger

Technical Consultant

BAE Systems Applied Intelligence

The trust test

 

Trust is fundamental to the relationships between people, between organisations and consumers, between technology and users. Without trust, AI technology will struggle to be accepted.


In the psychological literature, research suggests that trust rests on two beliefs: belief in competency (e.g. ability, expertise, knowledge) and belief in intent (e.g. motivation, integrity, honesty, fairness). Just as people build trust with other people, organisations will need to consider how this trust is built with the relevant AI technologies.


Consider a team that is responsible for some form of investigative activity, for instance insurance investigations or law enforcement. Using ML and RPA technologies will be key to the team being more efficient and effective by having greater capacity and coverage.


What needs to be done to build trust that the probability of the machine missing things it should otherwise have spotted (false negatives) is low enough to be tolerable? When missing a piece of information could result in lives being lost, building that trust requires careful thought.


Similarly, the same team needs to trust that what the machine is telling them is both valuable and justified. Too many instances of the machine identifying things of potential interest that turn out not to be (false positives) erode the benefit of the system taking on workload. Likewise, if decisions based on machine findings ultimately find their way into the legal system, having a firm and explainable basis for them is critical.
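
To make those questions concrete, here is a minimal sketch in Python – with purely illustrative scores and judgements – of how a team might track the two rates that trust hinges on, items missed and false alarms, as the machine’s alerting threshold changes.

```python
# A minimal sketch, not a real triage system: hypothetical scores and labels
# are used to show how false-negative and false-positive rates move with the
# alerting threshold.

def confusion_counts(scores, labels, threshold):
    """Count true/false positives and negatives at a given alert threshold."""
    tp = fp = tn = fn = 0
    for score, is_of_interest in zip(scores, labels):
        flagged = score >= threshold
        if flagged and is_of_interest:
            tp += 1
        elif flagged:
            fp += 1      # flagged, but a person would judge it uninteresting
        elif is_of_interest:
            fn += 1      # the machine missed something it should have spotted
        else:
            tn += 1
    return tp, fp, tn, fn

# Illustrative data only: model scores for ten reviewed cases, and whether a
# human investigator later judged each case genuinely of interest.
scores = [0.95, 0.80, 0.75, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10, 0.05]
labels = [True, True, False, True, False, False, True, False, False, False]

for threshold in (0.25, 0.50, 0.75):
    tp, fp, tn, fn = confusion_counts(scores, labels, threshold)
    miss_rate = fn / (tp + fn)         # share of genuine items the machine missed
    false_alarm_rate = fp / (fp + tn)  # share of uninteresting items it flagged
    print(f"threshold={threshold:.2f}  missed={miss_rate:.0%}  false alarms={false_alarm_rate:.0%}")
```

Raising the threshold cuts false alarms but increases the chance of missing something genuine; where that balance should sit is exactly the judgement the team has to trust.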

Life on the receiving end

End-consumers will increasingly find themselves on the receiving end of processes where decisions were made using ML technology. Trust is equally important here, as it may make the difference between an organisation gaining or losing custom, depending on the consumer’s experience.


For instance, there have been plenty of examples quoted regarding bias in ML applications. My colleague Mivy James recently discussed this, including how ML can’t be trusted if it amplifies human prejudices, and the need for trained data ethics staff when it comes to training ML algorithms.


If an organisation’s technology is shown to be inherently biased then that is going to drive away customers and erode its reputation. However, if the technology is implemented well then it can make the organisation both more efficient and more appealing to consumers.


When the quality of an ML algorithm can make the difference as to whether you are eligible for a mortgage or not, put forward for a job or not, how much you pay for insurance or what medical treatment you qualify for, the consumer’s trust that the technology is competent enough to make the right decision – and fair in making it – is critical to acceptance.
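
By way of illustration, a very simple fairness check might look like the following Python sketch, in which the applicant groups, decisions and figures are all hypothetical: it simply compares how often the model produces a favourable outcome for each group.

```python
# A minimal sketch of one basic fairness check, with entirely hypothetical
# groups and decisions: compare the rate of favourable outcomes the model
# produces for each group of applicants.

from collections import defaultdict

# Illustrative records only: (applicant group, model approved the application?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
totals = defaultdict(int)
for group, was_approved in decisions:
    totals[group] += 1
    approved[group] += was_approved

rates = {group: approved[group] / totals[group] for group in totals}
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} approved")

# A large gap between groups does not by itself prove the model is unfair,
# but it is a signal that the training data and features deserve scrutiny.
gap = max(rates.values()) - min(rates.values())
print(f"approval-rate gap between groups: {gap:.0%}")
```

Checks like this are only a starting point, but they give the organisation something concrete to monitor and to explain to the people affected.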

Accountability and responsibility

 

The more we can get ML and RPA technologies to do, the more this could create concerns regarding people’s jobs.


There have been various stories in the press about how AI technology could put us out of a job. Although one might assume that job losses will be inevitable, that’s not necessarily the case. This report, for example, suggests that technology will drive the creation of many more jobs than it destroys over time, mainly outside the industry itself. But whilst opinions vary, it is certain that disruption is coming (for example in manufacturing), and as part of this there is the question about who is responsible for the machine’s decisions.


Consider the investigative scenario from earlier again. If something bad happened that could have been averted if the machine hadn’t missed that vital piece of information, who is responsible and who is accountable? Is it the provider of the technology? The team charged with training the ML algorithms? The operators of the technology who need to ask the machine appropriate questions in the appropriate way? The team leader? Someone higher?


We need to be clear what the appropriate responsibilities are, where these and the ultimate accountability will sit, and for this to be set out in a way that people understand and accept. This requires some effort, given that the technology often operates in an opaque way, leading to a need to put faith in a ‘black box’ (which is why ‘explainable AI’ is increasingly being discussed).
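
To show what an explainable basis for a decision can look like in the simplest possible case, the sketch below uses a hypothetical linear scoring model whose output can be broken down into per-feature contributions; the feature names and weights are invented purely for illustration.

```python
# A minimal sketch of an 'explainable' output in the simplest case: a linear
# scoring model whose decision can be broken down into per-feature
# contributions. The feature names and weights are invented for illustration.

weights = {"claim_amount": 0.6, "prior_claims": 1.2, "days_since_policy_start": -0.4}
case = {"claim_amount": 2.5, "prior_claims": 3.0, "days_since_policy_start": 1.0}

contributions = {name: weights[name] * case[name] for name in weights}
score = sum(contributions.values())

print(f"total risk score: {score:.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")

# More complex models need dedicated explanation techniques, but the goal is
# the same: a basis for each decision that a person can inspect and challenge.
```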


Part of this will involve education as to what the technologies can and can’t be expected to do, both generally and in each particular instance. For instance, the quality of outcomes from an ML algorithm depends on how well it has been trained (both initially and how it learns over time, including the reduction or elimination of bias). It also depends on what it is being used for. ML algorithms can’t adapt to a change of context in the same way as humans; likewise, if an ML algorithm is asked to make a prediction that is too much of an extrapolation from how it has been trained, the quality of its decision will be lower.
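
The extrapolation point can be illustrated with a small synthetic example: a simple model fitted to data from a narrow range looks dependable inside that range and degrades sharply outside it.

```python
# A minimal sketch of the extrapolation problem, using synthetic data: a
# straight line fitted to observations drawn from x in [0, 10] looks fine
# inside that range and degrades badly far outside it.

import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, 200)
y_train = np.sqrt(x_train) + rng.normal(0, 0.05, 200)  # true relationship: sqrt(x)

slope, intercept = np.polyfit(x_train, y_train, 1)     # fit a simple linear model

def predict(x):
    return slope * x + intercept

for x in (5.0, 50.0, 100.0):
    print(f"x={x:6.1f}  predicted={predict(x):6.2f}  actual={np.sqrt(x):6.2f}")

# Near the training range the prediction is reasonable; far outside it, the
# error grows sharply because the model has never seen such inputs.
```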


Better understanding should therefore bring an increase in trust, as applications will be used for the right things in the right way.

Job security

At a different, but complementary, place on the AI spectrum is RPA, which allows the replication of specific actions that a human would otherwise perform. Clearly, this promises to reduce the need for many “mandrolic” tasks, ranging from data entry and account reconciliation to the intelligent processing of emails, documents or audio.
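
As a toy illustration of the sort of repetitive task meant here, the sketch below reconciles invoice payments between two hypothetical systems and routes the exceptions to a person; the data and matching rule are assumptions made purely for the example.

```python
# A toy sketch of a repetitive reconciliation task of the kind RPA typically
# absorbs: match invoice payments recorded in two hypothetical systems and
# route the exceptions to a person. All data here is invented.

ledger = {"INV-001": 120.00, "INV-002": 300.00, "INV-003": 45.50}
bank_feed = {"INV-001": 120.00, "INV-002": 295.00, "INV-004": 80.00}

matched, exceptions = [], []
for invoice, expected in ledger.items():
    received = bank_feed.get(invoice)
    if received == expected:
        matched.append(invoice)
    else:
        exceptions.append((invoice, expected, received))  # mismatch or missing payment

print(f"auto-matched: {matched}")
print("needs human review:")
for invoice, expected, received in exceptions:
    print(f"  {invoice}: expected {expected}, received {received}")
```

The routine matching is absorbed by the machine; only the exceptions, which require judgement, reach a person – which is precisely where the jobs question arises.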


Not unreasonably, if this is pitched in the wrong way it can create fears for people’s jobs. Obviously, some jobs will be more at risk than others, with those demanding human interaction and emotional intelligence at less risk for the moment, as this is an area not yet suited to AI technology.


Surveys have indicated that the introduction of RPA hasn’t (yet) resulted in mass lay-offs; instead, organisations are using the newly freed-up time on tasks requiring more human decision-making and value-add. However, this does not mean that the fear will not be there, as clearly there will be instances where some staff are no longer needed due to the introduction of technology such as RPA. Staff will therefore need to be reassured and consulted throughout.

Back to trust, fairness and understanding

The adoption of AI technologies such as ML and RPA is a really exciting prospect. However, we must not lose sight of the fact that while these are ‘artificial’ in nature, their impact on us as people is very ‘real’. This means that the human elements of how we come to accept and work with these technologies need to be centre stage in their inevitable deployment.

How we can help

Rising to the technological challenge

BAE Systems provides business change and consulting services to help organisations evaluate change, and to prepare, equip and support individuals and organisations in changing their behaviour and altering their ways of working in order to successfully achieve business objectives and realise the business strategy.


We also provide expertise in Human Factors, the discipline that deals with the human-machine interface, examining the psychological, social, physical, biological and safety characteristics of a user and the system the user is in.


For more information go to:

baesystems.com/governmentinsights

Profile

Mark Woolger

Technical Consultant, BAE Systems

Mark is a lead consultant who specialises in helping National Security and Law Enforcement clients develop their capabilities in support of their mission. He leads systems engineering and business analysis and consulting teams in the development of client strategy and delivery of critical systems, in both supply-side and client-embedded roles.
