There’s no question that artificial intelligence now has a firm footing in strategic decision-making at major firms. When I wrote two blogs on big data back in 2015, fewer than 10% of large firms had adopted AI decision-making. Today, more than 80% of large companies make use of AI. Indeed, major firms are moving far beyond AI process automation into its use to augment decision-making at all levels, including top management.
But many, perhaps most, firms are still struggling with the human filter in AI, both in the creation of AI and in its use as a major augmenting strategy for serious decision-making. A major case in point regarding the creation of AI was the 2018 Reuters report on an AI failure at retail giant Amazon. After multiple attempts to create AI programs to evaluate resumes and identify top candidates, Amazon reportedly abandoned the project because it could not create a program that evaluated candidates for technical positions without disadvantaging women.
A lawsuit brought by the ACLU and others challenged Facebook’s use of ad targeting based on gender, age, zip code, and other demographic information. The resulting settlement removed gender, age, and multiple demographic proxies from Facebook’s ad-targeting options. Yet another case, filed recently, alleges that Facebook continues to discriminate against older and female users by restricting their access to ads for financial services.
Yet it is impossible to imagine our society without artificial intelligence. So one very important need is for professionals to develop a rich understanding of AI. Until firms begin to bring social scientists into the IT party, we will continue to see these human and legal failures.
Social science and big data
Back in October 2015, I wrote a blog about big data and the need to bring social scientists into the party. The strategy I laid out there remains one of the best ways to understand the role of the social scientist. In that blog, “True data about big data,” I proposed that Zero Dark Thirty’s lead character, Maya, was a near-perfect example of the social scientist using AI to augment her decision-making. The unstated subtext was that it’s unwise to turn big data over to the IT group. Sure, you’ll need them, but their discipline differs significantly from what’s needed. And . . . I’ve found plenty of IT execs who understand that reality.
“Big Data Two” argued the case from my experience with the thinking styles and knowledge bases of IT and other business functions. My cognitive approach is one way to get at the issue. But Marchand and Peppard arrive at the same conclusion from a different tack in their HBR article, “Why IT Fumbles Analytics.” They got right to the crux of their argument in the first paragraph:
“In their quest to extract insights from the massive amounts of data now available from internal and external sources, many companies are spending heavily on IT tools and hiring data scientists. Yet most are struggling to achieve a worthwhile return. That’s because they treat their big data and analytic projects the same way they treat all IT projects, not realizing that the two are completely different animals.”
The implication is that the value lies not in the information buried in the big data itself; the data is a resource that people make valuable through their own discipline and for their customers. To go back to the CIA analyst in Kathryn Bigelow’s film, Maya’s task as the most knowledgeable analyst was to question the validity and usefulness of the data. Inevitably, that’s a difficult task because of the sheer volume of data involved.
Strategic approach to big data
Without focused theory development, the data scientist will be overwhelmed by the amount of data available. All the IT processing power in the world won’t help. Maya’s theory, of course, focused upon a strategy to locate Osama Bin Laden. Her fundamental hypothesis was that if she could locate his primary couriers, she could locate him. The film began with torture tactics for gathering courier identities and essentially stayed on that fascinating and difficult track for nearly two hours before actually finding the terrorist. Then she faced the intriguing task of selling her conclusions to the State Department, an issue that was perhaps just as difficult as locating the terrorist. In sum, the first steps after creating a hypothesis are acquiring, harmonizing, and mining the data sources. Of course, in the iterations of this process, you will inevitably find new, exciting insights. That also implies, as David Meer has written, that as you gather these insights, it will be important to be open to new approaches and to challenge sacred cows.
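The iterative loop just described (hypothesize, mine the data, check the evidence, refine or move on) can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the function names, the toy “courier” records, and the support threshold are all mine, not drawn from any real system or library.

```python
# A minimal sketch of hypothesis-driven data mining, under hypothetical names.

def mine(data, predicate):
    """Mining step: keep only the records the current hypothesis flags."""
    return [record for record in data if predicate(record)]

def iterate_hypothesis(data, hypotheses, min_support=2):
    """Try each candidate hypothesis in turn; return the first whose
    evidence clears the support threshold, along with that evidence."""
    for name, predicate in hypotheses:
        evidence = mine(data, predicate)
        if len(evidence) >= min_support:
            return name, evidence   # enough support: stop iterating, sell it
    return None, []                 # no hypothesis survived the data

# Toy records standing in for a harmonized data source.
records = [
    {"role": "courier", "contacts": 5},
    {"role": "driver", "contacts": 1},
    {"role": "courier", "contacts": 7},
]

# Candidate hypotheses, ordered by the analyst's prior judgment.
candidates = [
    ("frequent drivers", lambda r: r["role"] == "driver" and r["contacts"] > 3),
    ("well-connected couriers", lambda r: r["role"] == "courier" and r["contacts"] > 3),
]

best, support = iterate_hypothesis(records, candidates)
```

The point of the sketch is the shape of the loop, not the code itself: the analyst supplies the candidate hypotheses and the threshold of “enough evidence”; the tooling only does the filtering.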
If you’ve ever done a major hypothesis-driven research thesis, the strategic approach to big data is exactly the same iterative process that Zero Dark Thirty portrayed. The distinction between my two analogies (Maya and the hypothesis-driven thesis) and the “big data approach” is that once the hypothesis is in place, IT tools are necessary to mine the data.
At the heart of the initiative
It can’t be emphasized strongly enough that people are at the heart of the initiative, not IT tools, big data initiatives or the IT group. Marchand and Peppard are devastating in their critique of the IT logic:
The IT approach assumes. . . that giving managers more high-quality information more rapidly will improve their decisions and help them solve problems and gain valuable insights. That is a fallacy. It ignores the fact that managers might discard information no matter how good it is, that they have various biases, and that they might not have the cognitive ability to use information effectively.
As I have argued previously, most IT professionals come from computer science, engineering, and math backgrounds, which produces a focus on technology and cognitive approaches different from what big data work requires. My consulting experience in major firms over thirty years reveals that technology people are highly linear in their thinking, yet non-linear thinking is typically necessary to make the best use of AI. Sadly, very few in the tech field can also think in a non-linear, thoroughly strategic fashion.
Thus, a different animal is needed to create and make the best use of information. I have no doubt that Marchand and Peppard are correct in recommending the use of cognitive and behavioral scientists. But that is a personnel expense only the largest of firms are willing to fund. People who have a deep and rich understanding of the business are absolutely necessary for big data success. And then, gradually, as the business comes to understand the opportunities big data provides, it will need to add the cognitive and behavioral scientists who. . . understand how people perceive problems, use information, and analyze data in developing solutions, ideas, and knowledge.
As a general rule, writers refer to six social sciences: sociology, political science, social psychology, anthropology, economics, and history. A neat definition of social science comes from the leading 19th-century English economist Alfred Marshall, who called it “a study of mankind in the ordinary business of life; it examines that part of individual and social action which is most closely connected with the attainment, and with the use of the material requisites of wellbeing.” Nearly all definitions of social science miss one significantly useful discipline: modern rhetoric. Modern rhetoric is especially useful to data scientists because its fundamental expertise is unlocking the value of information. More than that, critical rhetoric is fundamentally epistemic: its role is to create meaning out of data, the very task Maya works at throughout Zero Dark Thirty. In that role, modern rhetoric readily works across all the other social sciences from both a macro and a micro perspective. In addition, rhetorical critics understand, almost uniquely, that situational elements are imprinted upon all information, the central issue that most concerned Maya in her search for the terrorist, bin Laden.
Recent research in AI reveals a surprising finding: individuals make entirely different choices based on identical AI inputs. (A rhetorician would understand that innately, as an issue of information turf.) Furthermore, the study reveals that AI-based decisions in organizations are hybrid forms that rely heavily on human judgment. Eighty-seven percent of the managers in the study believe this hybrid approach will become the dominant form of human-machine collaboration. Yet, since managers often differ significantly in their decision processes, the AI recommendation itself is only half the equation in assessing the quality of AI decision-making in organizations.
The evidence is already clear that data-driven decisions tend to be better decisions. Thus, I’ve attempted to lay out a big-picture view of the fundamental processes that make advanced analytics work for a company. The process, simplified, is to get the right hypothesis from the appropriate people, mine the data iteratively, create the new business information, sell it to the relevant executives, and implement.
Zero Dark Thirty is currently streaming on the Starz channel, through Amazon Prime.