There’s no question that artificial intelligence now has a firm footing in strategic decision-making at major firms. When I wrote two blogs on big data back in 2015, fewer than 10% of large firms had adopted AI for decision-making. Today, more than 80% of large companies make use of AI. Indeed, major firms are moving far beyond process automation and using AI to augment decision-making at every level, including top management.
But many firms, perhaps most, are still struggling with the human filter in AI, both in creating AI and in using it to augment serious decision-making. A major case in point on the creation side was the 2018 Reuters report on an AI failure at retail giant Amazon. After multiple attempts to build AI programs to evaluate resumes and identify top candidates, Amazon reportedly abandoned the project because it could not produce a program that evaluated candidates for technical positions without disadvantaging women.
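To see how that kind of failure can happen, here is a minimal, hypothetical Python sketch, not Amazon's actual system, of how a screening model trained on historically skewed hiring data can learn a gendered proxy. The toy resumes, the labels, and the use of scikit-learn are my assumptions for illustration only.

```python
# Hypothetical sketch (not Amazon's system) of how a resume screener trained on
# historically skewed hiring outcomes can encode a gender proxy.
# Assumes scikit-learn >= 1.0 is installed; all data below is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: past hires (label 1) happen to lack the token "women's";
# past rejections (label 0) happen to contain it.
resumes = [
    "captain of chess club, python, c++",
    "python, machine learning, robotics team",
    "java, distributed systems, hackathon winner",
    "captain of women's chess club, python, c++",
    "python, machine learning, women's robotics team",
    "java, distributed systems, women's hackathon winner",
]
hired = [1, 1, 1, 0, 0, 0]  # biased historical outcomes, not true ability

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect learned weights: the token "women" receives a strongly negative
# coefficient, i.e., the model treats the historical bias as a quality signal.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```

In this toy setup the token "women" ends up with the most negative weight, which is the kind of proxy discrimination the Reuters report described.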
On the use side, a lawsuit brought by the ACLU and others challenged Facebook’s ad targeting based on gender, age, zip code, and other demographic information. The settlement removed gender, age, and multiple demographic proxies from the ad-targeting options. Yet another case was recently filed alleging that Facebook continues to discriminate against older and female users by restricting their access to ads for financial services.
Yet it is impossible to imagine our society without artificial intelligence. So one very important need is for professionals to develop a rich understanding of AI. Until firms begin to bring social scientists into the IT party, these fumbling human and legal failures will continue.
Social science and big data
Back in October 2015, I wrote a blog about big data and the need to bring social scientists into the party. The approach I laid out there remains one of the best ways to understand the social scientist’s role. In that blog, “True data about big data,” I proposed that Zero Dark Thirty’s lead character, Maya, was a near-perfect example of the social scientist using AI to augment her decision-making. The unstated subtext was that it’s unwise to turn big data over to the IT group. Sure, you’ll need them, but their discipline differs significantly from what’s needed. And . . . I’ve found plenty of IT execs who understand that reality.
“Big Data Two” argued the case from my experience with, and knowledge of, the thinking styles and knowledge bases of IT and other business functions. My cognitive approach is one way to get at the issue. But Marchand and Peppard arrive at the same conclusion by a different tack in their HBR article, “Why IT Fumbles Analytics.” They get right to the crux in their first paragraph:
“In their quest to extract insights from the massive amounts of data now available from internal and external sources, many companies are spending heavily on IT tools and hiring data scientists. Yet most are struggling to achieve a worthwhile return. That’s because they treat their big data and analytic projects the same way they treat all IT projects, not realizing that the two are completely different animals.”
What this implies is that rather than emphasizing the information contained in big data, we should treat data as a resource that people themselves make valuable within their discipline and for their customers. To go back to the CIA analyst in Kathryn Bigelow’s film: Maya’s task, as the most knowledgeable analyst, was to question the validity and usefulness of the data. Inevitably, that’s a difficult task because
Continue reading "Why Are Most Firms Still Struggling with Artificial Intelligence?" »