With all the buzz and hype around the machine era we have officially entered, critics are concerned about the uncertainties and repercussions of AI, both now and in the future. So even though haters gonna hate, let's take a look at some of the arguments against AI and whether there is validity to them.
Argument One: AI is Biased
Although it seems counterintuitive to suggest that a machine can be biased in its decision-making, some argue that because humans are the ones programming AI systems, they program them in ways that, unbeknownst to the developers, are biased in nature. For example, in selecting determinants for a new hire, developers have to decide which attributes they want the algorithm to consider, such as age, gender, number of years of work experience, and so on. While the model itself can produce accurate results based on what it learns about the candidates, some argue that because the attributes were chosen by humans who are subject to certain biases, the model, and therefore AI outcomes, are inherently biased.
Before we examine whether AI itself introduces bias, we first have to look at how relying solely on the human brain for decision-making compounds the repercussions of bias. Before technology, and more recently artificial intelligence, came into the picture, individuals made personal and professional decisions at every level based primarily on intuition, whether or not they realized it. However, this 'go with your gut' way of problem-solving is subject to a wide variety of psychological biases.
For example, confirmation bias is a cognitive bias that causes people to embrace incoming information that supports their preexisting beliefs and to reject or ignore information that contradicts them. So if Martha believes that Jack is the best candidate for the job, she'll pay more attention to the references that laud his performance and less attention to the red flag she discovers about Jack being terminated from his previous position over his performance. This is exactly what happened with investors in the blood-testing startup Theranos. Investors, including Walmart's Walton family and former Secretary of State George Shultz, placed heavier weight on what they believed to be a revolutionary startup founded by an exceptional young woman and gave far less notice to the telltale signs of the company's impending doom.
Another psychological bias humans are susceptible to is the illusion of control, the tendency to overestimate the degree to which you can influence events. Related to the illusion of control is the gambler's fallacy: the belief that, when flipping a coin, it is "due" for heads after a long streak of tails, even though the probability of each flip is still fifty-fifty. When it comes to due diligence, individuals may feel as though they truly know their investment managers and are in control of their investment processes when in fact they are at the mercy of their own biases. Without conducting objective, fact-based due diligence, expert and novice investors alike can find themselves caught up in a tangled network of Ponzi schemes.
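As an aside on the gambler's fallacy: if you want to see why the coin really is still fifty-fifty after a streak, here is a quick, purely illustrative simulation (a toy sketch, not drawn from any study mentioned in this article).

```python
# Quick simulation of the gambler's fallacy: after a streak of tails,
# the next flip still comes up heads about half the time.
import random

random.seed(0)
next_after_streak = []
for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(6)]   # True = heads
    if not any(flips[:5]):                               # first five flips were all tails
        next_after_streak.append(flips[5])

share_heads = sum(next_after_streak) / len(next_after_streak)
print("P(heads | five tails in a row) is roughly", round(share_heads, 3))
# Prints roughly 0.5: the streak has no bearing on the next flip.
```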
Although the bias argument is compelling, allow me to introduce some of the counterarguments presented by today's leaders in AI. Dr. Rumman Chowdhury, who leads Responsible AI at Accenture, observes that it's not the artificial intelligence systems themselves that produce biased results, but rather the bias that exists within society at large that shapes our perception of the fairness of AI and machine learning. Chowdhury says, “With societal bias, you can have perfect data and a perfect model, but we have an imperfect world…what’s wrong is that ingrained biases in society have led to unequal outcomes in the workplace, and that isn’t something you can fix with an algorithm.”
Taking a more philosophical approach to the bias argument, the concept of fairness is itself subjective. Is it fair for an equal number of men and women to be considered for a position, or is it fair to review the candidates who are qualified and show the highest potential regardless of gender?
It's also important to note that since humans are programming the AI systems, they are able to quantify the content the system is fed, thus reducing bias. Dan Journo, the chief data scientist at Intelligo, outlines a step-by-step process for feeding data into an AI system in a way that minimizes bias and error. First, he reviews the raw data and creates features that translate the relevant information into numerical values. Then his team analyzes the distribution of the data, missing values, and other characteristics. Finally, they add the new data to the model and review its strengths and weaknesses; comparisons at this stage reveal whether the model is susceptible to high variance or high bias. It's an iterative process, so any results that come back with issues can be fixed at their core. Because you can quantify the inputs to the algorithm, you have more control over reducing bias than when these tasks are performed manually.
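To make those steps concrete, here is a rough sketch of what such an iterative feature-and-review loop could look like. The file name, column names, target variable, and model choice are illustrative assumptions on my part, not Intelligo's actual pipeline.

```python
# Illustrative sketch of an iterative "feature -> review -> retrain" loop.
# Column names, data, and the model are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Step 1: review the raw data and translate relevant fields into numerical features.
raw = pd.read_csv("candidates.csv")                       # hypothetical input file
features = pd.DataFrame({
    "years_experience": raw["years_experience"],
    "num_prior_roles": raw["num_prior_roles"],
    "has_flagged_record": raw["flag_notes"].notna().astype(int),
})

# Step 2: analyze distributions and missing values before anything reaches the model.
print(features.describe())
print(features.isna().mean())                             # share of missing values per feature
features = features.fillna(features.median(numeric_only=True))

# Step 3: fit the model, then compare train vs. validation performance.
# A large gap suggests high variance (overfitting); uniformly poor scores suggest high bias.
X_train, X_val, y_train, y_val = train_test_split(
    features, raw["approved"], test_size=0.2, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
# Any issue surfaced here feeds back into Step 1, which is what makes the process iterative.
```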
Argument Two: AI is a Black Box
The second, and quite common, argument made against artificial intelligence is that it cannot and should not be trusted because of the famed "black box" issue. The black box problem refers to the fact that we know what goes into the system and what comes out, but the stage in between remains a mystery. Twitter CEO Jack Dorsey frames it in terms of explainability, “that is trying to understand how to make algorithms explain how they make decisions.”
AI relies on algorithms that use machine learning to analyze and interpret data in ways that are difficult for humans to comprehend. AI makes decisions by simultaneously analyzing many layers of decision variables and finding geometric patterns among them that are nearly impossible for humans to visualize. The controversy here is that we can't trust AI if we really have no idea what it's getting up to.
But the black box banter, too, has promising developments being researched and tested as you read this. The U.S. Department of Defense has invested heavily (over $2 billion) in what it calls XAI, the Explainable Artificial Intelligence program, which aims to produce new machine learning techniques that yield more explainable models while maintaining a high level of prediction accuracy. The Department is also working on human-computer interface techniques capable of translating these models into understandable, useful information for end users.
XAI is only one of many programs in which the Department of Defense has assiduously picked apart the inner workings of AI systems. Other programs are working toward 'third-wave AI systems' in which machines can interpret the context they operate in and thus, over time, produce explanatory models based on the external environments they exist in.
In addition, Carlos Guestrin, a professor at the University of Washington, has worked with his colleagues to produce a method that allows machine learning systems to provide a rationale for their results. In essence, the computer automatically picks a few examples from a data set and offers a short explanation of the results. With image recognition, for example, the system can hint at why it identified an image the way it did by highlighting the features that were most significant in its interpretation.
AI must go hand-in-hand with QA
To rely confidently on the outcomes AI produces, there must be consistent and thorough quality review. This was highlighted at a conference on Neural Information Processing Systems, in a paper introducing Local Interpretable Model-Agnostic Explanations (LIME). The paper used a husky-versus-wolf example to illustrate AI's mode of reasoning.
The AI system's task was to identify whether or not there was a wolf in the picture. The system falsely identified a Siberian husky as a wolf. Researchers then tried to understand why the system erred and found that if a picture contained snow, the animal was classified as a wolf.
The algorithm was using the background of the picture (the snow) rather than the characteristics of the animal itself for classification. Because of this discovery, researchers were able to fix the model to prevent it from interpreting snow as wolf. This example demonstrates the importance of quality review by people who can look over an AI system's findings and make corrections, changes, or improvements when necessary.
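To make the idea concrete, here is a toy, from-scratch sketch of the kind of procedure LIME describes: hide parts of an image at random, see how the model's "wolf" score changes, and fit a simple linear surrogate whose weights reveal which parts mattered. The segment names and the deliberately flawed stand-in classifier are my own illustrative assumptions, not the researchers' code.

```python
# Toy illustration of the LIME idea behind the wolf-vs-husky finding.
# The "classifier" below is a stand-in that secretly keys on the background;
# it is not a real image model.
import numpy as np
from sklearn.linear_model import Ridge

SEGMENTS = ["snout", "ears", "fur", "snow_background"]

def classifier_wolf_score(present):
    # Hypothetical flawed model: the wolf score is driven almost entirely
    # by whether the snowy background is visible.
    snout, ears, fur, snow = present
    return 0.05 * snout + 0.05 * ears + 0.05 * fur + 0.8 * snow

# LIME-style procedure: perturb the image by hiding segments at random,
# record the model's score for each perturbation, then fit a simple linear
# surrogate whose weights explain which segments mattered.
rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(500, len(SEGMENTS)))
scores = np.array([classifier_wolf_score(m) for m in masks])

surrogate = Ridge(alpha=1.0).fit(masks, scores)
for name, weight in sorted(zip(SEGMENTS, surrogate.coef_), key=lambda p: -p[1]):
    print(f"{name:16s} {weight:+.2f}")
# The dominant weight on "snow_background" is exactly the kind of red flag
# that tells a human reviewer the model learned "snow = wolf" rather than
# anything about the animal itself.
```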
Further, circling back to our earlier example of background checks, some companies have analysts review the outputs, understand how they were determined, and retrain the algorithm accordingly.
Argument Three: AI is Killing Jobs
While AI may be transforming the world as we know it one binary digit at a time, you may fear that it could put you out of a job in a few years' time. Enter the next argument against AI. But the mass hysteria about robots replacing people may not have as much merit as people think.
ZipRecruiter, the popular online job board, analyzed more than fifty million job postings and surveyed eleven thousand job seekers and five hundred employers across five transitioning industries in the U.S. It found that "while AI will reduce employment in some industries, it is also creating new products and services, giving rise to new markets, improving productivity, and making work better." More specifically, the researchers observed that AI created three times as many jobs in 2018 as it took away, and eighty-one percent of employer respondents said they would prefer to hire a human over implementing an AI system alone.
Furthermore, a study by McKinsey found that by the year 2030, sixty percent of American jobs will consist of tasks of which at least thirty percent can be automated. However, less than five percent of jobs will be completely automatable, and globally, automation will displace about fifteen percent of the workforce by 2030. A similar study by the Organisation for Economic Co-operation and Development (OECD) reported that, across OECD countries, approximately fourteen percent of jobs are highly automatable, while fifty to seventy percent of jobs will have about one-third of their tasks automated.
A slight digression from AI's replacement of jobs (or, as we've found, the lack thereof) is AI's ability to enhance and improve the tasks employees perform. In a study in which C-level executives were interviewed, forty-eight percent believed that AI can make people more productive at work, and forty-five percent said that AI is freeing up employees' time for higher-level, more valuable work.
John Furneaux, CEO and co-founder of Hive, comments that AI will help employers and employees better understand how they work: "It can tell us just about everything we want to know about teams and collaboration, for example, if men or women get more done in the afternoon, and if summer Fridays are a myth." Using data from 30,000 job tasks, Hive was able to identify significant trends in productivity. For example, the system found patterns highlighting the difference in productivity between men and women: men were much more productive early in the day, while women were more productive in the late afternoon.
Evidently, even though AI will transform the global labor force in a consequential way, we are more likely to see a shift in job tasks and productivity than an outright replacement. So don't look so worried: that robot won't be stealing your job after all.
The Results Are In
Any time a new technology is introduced to the market, there's always back-and-forth controversy about whether it will save or destroy the world. And while AI is no exception, it's important to examine some of the rumors and myths that lie beneath the surface. So while there may be intricacies to AI's perceived bias, black box characteristics, and job-stealing qualities, things may not be as black and white as they seem. In fact, leveraging AI can expand your capabilities at work and enable tasks to be performed faster and more accurately.