Over the last few months, a surprising number of readers have been finding my post on two related movies, “Moneyball (2011)” and “Trouble with the Curve (2012).” With the majority of my blog readers between 18 and 34, I initially assumed the key to their interest was my early assessment of artificial intelligence. At the time, my readers and I were just getting introduced to artificial intelligence. It also turns out that, nearly ten years later, my conclusion about how artificial intelligence works was correct, though still fairly elementary. But I don't think that's why I'm still getting readership on my blog. What's going on?
Over the past ten years, AI has been getting a terrific amount of attention. With my genius grandson’s MS from Carnegie Mellon in digital science, I’ve periodically tapped his expertise and gained a still better understanding of AI. Recently, there have also been several critiques of AI, including extensive discussions of the troublesome bias built into many AI programs. With Frances Haugen’s headline-grabbing Facebook testimony before Congress, AI is going to be even more front and center. Given my growing AI knowledge, I thought the subject would be good fodder for my much-enlarged readership base.
But first, here’s the relevant stuff from that early post.
Movie Counterpoint
Moneyball and Trouble with the Curve make for superb counterpoint. Both portray human and business values in confrontation, almost demanding a “spiritual” choice. By that I mean that the choice is about a belief system, not merely physical data. But the disguised behaviors and conflicted subtext of Curve matter far more than any mere artifact or piece of hardware. Still, having read Michael Lewis’ Moneyball years ago, I found the movie thoughtful and entertaining, not least because I admire Lewis’ insights into our American culture.

Bennett Miller's adaptation of Michael Lewis' nonfiction bestseller Moneyball stars Brad Pitt as Billy Beane, a one-time phenom who flamed out in the big leagues and now works as the GM for the Oakland Athletics. The Athletics are about to lose their three best players to free agency. Because the team isn't in a financial position to spend as much as the Yankees and the Red Sox, Beane realizes he needs to radically change how he evaluates what players can bring to the squad. After he meets Peter Brand (Jonah Hill), an Ivy League economics major working as an executive assistant for scouting on another team, Beane realizes he's found the man who understands how to subvert the human system of assessing players that's been in place for nearly a century and replace it with a mechanical artifact. Obviously, Brad Pitt and Jonah Hill make for a great movie. But the film (2011) is also a popular introduction to artificial intelligence (though, if I remember correctly, AI is never named as such in the film). And the newness of software insights adds to the audience intrigue. The film, like the book, replaces brainware with software. (revised from J. Hailey)
Surprisingly, however, it was “Trouble with the Curve” that caught me unawares and piqued my thinking.
Curve is predictable, but creatively plotted to test an important hypothesis: high-powered brainware makes the essential connections, not software. At first, because Eastwood and I are of the same generation, I had to ask whether my liking was generational bias. Setting that aside, I found myself ruminating over the reviewers’ comments: “cornball,” “manipulative,” “contrived,” “bland” and “preposterous.” I’d worry about my reputation in writing this interpretive review, except none of the critics recognized the significantly (phenomenally?) important subtext of Curve.
Gus (Clint Eastwood), a retiring baseball scout losing his eyesight and out of touch with the power of the digital, is front and center. Eastwood is his gruff, snarky self. Gus’ boss and friend, Pete (John Goodman), asks Gus’ daughter, Mickey (Amy Adams), to join him on the trip to make sure he’s okay, all against Gus’ wishes. The conflict Gus faces is created by young technocrats relying solely on software (the converse of Moneyball) for their scouting. Together, Gus, who is technologically illiterate, and Mickey scout a top new prospect in North Carolina. Mickey soon recognizes her father’s failing vision, which he has hidden from his bosses. Along the way Gus reconnects with Johnny (Justin Timberlake), who has a friendly history with Gus. Finally, what's delicious about Gus is that he won't be had. Technology bad, scout good, curveball sweet. You can figure out the plot from there. (Revised from Peter Bradshaw.)
Human, not digital processors
But without revealing too much, the movie’s underlying theme is that human intelligence trumps technology. Indeed, data's only value depends upon managers' ability to formulate questions and interpret results. With the constant jabber about companies from Facebook to Cisco, you’d think most people believe the tools are more important than human expertise. Dell’s Jim Stikeleather muses about the issue, reminding us that when we fail to understand that human expertise is more important than the tool, the “tool will be used incorrectly and generate nonsense (logical, properly processed nonsense, but nonsense nevertheless).”
The risks in failing to treat data as part of human-driven discovery and management processes are endless. It was over-automated data tools that had Target’s marketers sending baby coupons to a teenager who hadn’t yet told her parents she was pregnant, and software that triggered the 2010 Flash Crash, in which the Dow Jones Industrial Average plunged nearly 1,000 points.
But Curve reminds us that our frontal lobes need the constant presence of two truths: more data doesn't mean more intelligence, and data’s value relies on human, evaluative intelligence. Choosing and using the data we already have are what’s going to matter most.
What is AI?
AI is essentially an attempt at prediction. But as the Harvard historian Jill Lepore commented, “predictive algorithms start out as historians.” The data gathering on which an AI program is built is based on select data: data scientists select and study historical data for the purpose of detecting patterns. So the very starting place for AI is not scientific but artistic, a historical strategy. Historians are far ahead of computer scientists in recognizing the subjective dimensions of their discipline, so historical critics look closely at a colleague’s factual assumptions up front. In contrast, computer scientists are still in the beginning stages of assessing their discipline’s assumptions.
Determining and selecting historical data, like AI data, is a profoundly difficult task and widely open to error. American history texts, for example, have often omitted Indigenous groups. The student then gains only the information and ideas that reflect the historian’s specific interests and choices: a white, Anglo-Saxon history. If the historian ignores Indigenous data, Indigenous groups, in effect, become non-existent. The Indigenous pay taxes, but the tax rewards go to white Americans. In sum, different interests construct different ways of knowing that influence our perception of the world.
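A toy sketch can make the point concrete. The records below are entirely invented; the only idea being illustrated is that whoever selects the data decides which groups later “exist” at all:

```python
# Invented records, purely for illustration.
records = [
    {"group": "A", "income": 50},
    {"group": "B", "income": 40},
    {"group": "A", "income": 60},
]

# The "historian" selects only group A before any analysis happens.
selected = [r for r in records if r["group"] == "A"]

def average_income(data, group):
    """Average income for a group, or None if the group is absent."""
    rows = [r["income"] for r in data if r["group"] == group]
    return sum(rows) / len(rows) if rows else None

print(average_income(selected, "A"))  # group A is visible
print(average_income(selected, "B"))  # group B was filtered out: None
```

Every later calculation inherits that selection: no formula, however sophisticated, can recover a group that was never recorded.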
In writing algorithms, the “historians,” those who select the data, then become “prophets”: they devise mathematical formulas that explain the pattern, test the formulas against historical data selected and withheld for that purpose, and use the formulas to make predictions about the future. Medical and scientific algorithms that ignore minority data, yet are then applied to minorities, are rightly challenged as inadequate and erroneous. One algorithm widely used in US hospitals to allocate health care to patients has been systematically discriminating against Black people, as revealed by a sweeping analysis of its results.
The study, published in Science on October 24, 2019, concluded that the algorithm was less likely to refer Black people than equally sick white people to programs intended to improve care for patients with complex medical needs. The implications were huge. Both hospitals and insurers use the algorithm and others like it to help manage care for about 200 million people in the United States each year, including millions of Black Americans. In Lepore’s language, the data historians became false prophets, negatively impacting the health of US minorities. The original data excluded minorities, so the minorities had no future. As far as the algorithm was concerned, the minorities did not exist. The health implications for Black and Indigenous people were profound.
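Lepore’s historian-to-prophet sequence (select historical data, fit a formula to the pattern, test it against data withheld for that purpose, then predict) can be sketched in a few lines. The data points here are invented for illustration:

```python
# A sketch of the "historian to prophet" workflow. All numbers are invented.

def fit_line(points):
    """Least-squares fit of y = a*x + b: the 'formula' that explains the pattern."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Step 1 (historian): select the historical data.
history = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1)]

# Step 2: withhold part of it for testing, fit the formula to the rest.
train, held_out = history[:4], history[4:]
a, b = fit_line(train)

# Step 3: test the formula against the withheld data.
for x, y in held_out:
    print(f"year {x}: predicted {a * x + b:.2f}, actual {y}")

# Step 4 (prophet): use the formula to predict the future.
print(f"year 6 forecast: {a * 6 + b:.2f}")
```

The sketch also shows where bias enters: if Step 1 never includes a group, Steps 2 through 4 proceed flawlessly and still produce the “logical, properly processed nonsense” Stikeleather warns about.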
Why so many viewers?
At first, I thought the draw of my post was artificial intelligence itself.