By Jerry Johnson
It appears that the AI bots (or just “AIs” in current parlance) will take over soon, giving us in Orwellian style not only what we need but telling us what we should want. Even the aging Henry Kissinger asks:
If AI “thinks,” what are we? If an AI writes the best screenplay of the year, should it win the Oscar?…If an AI can care for a nursing home resident – remind her to take her medicine, alert paramedics if she falls, and otherwise keep her company – can her family members visit her less? Should they?
- Henry Kissinger, Eric Schmidt, Daniel Huttenlocher, “The Challenge of Being Human in the Age of AI,” Wall Street Journal, 11/2/21
Does this mean people are rethinking the value of AI? Not in the slightest. But it does mean that people are nervously holding up tiny caution flags about what we as humans are doing with the outputs of AI. Upon last checking, it seems that we’re still in charge. So we still have what AI experts call “moral agency” – the ability to stop AI processing at the point at which it makes one or several recommendations, allowing humans to use interpretive judgment as to which of several things seems right to do.
Enter the MIT Media Lab
The MIT Media Lab – a pretty good barometer of the current state of thinking on many scientific problems – has entered the fray, offering a course on “Human-centered AI.” Their introduction to the course says “Don’t put AI in charge.” Pretty clear, eh?
Beyond the course, they are popularizing the notion that our society-wide expectations about the direct outputs of AI have gone a bit over the line and need to be restrained, or at least submitted to common sense. NLP applications of AI, for example, are still riddled with mistakes, including spectacular errors such as the psychiatric chatbot that too quickly recommended suicide to a troubled test patient.1 (That would certainly end the problem, wouldn’t it?) We are still well away from the Turing Standard of simultaneously accurate and meaningful dialogues with machines.
What is a better way?
The MIT Media Lab and other thoughtful people argue for something called an “Appropriate Use” standard, and many global companies are starting to listen. Expedia Group, for example, is experimenting with the use of AI to optimize choices in communicating with their million-plus hospitality partners – the folks who run hotels, guest houses, vacation rentals, etc. They have found that their commissions – the fees they derive from partners when someone books a flight, hotel, car rental, etc. with that particular partner – increase when partners adopt an idea Expedia has shared with them about featuring their properties on the Expedia website.
The problem is that Expedia has literally thousands of such ideas. (In 30 years they have learned something about travel.) In earlier times they simply batted these ideas around the marketing room, choosing to share with partners the ideas that won the informal internal debates among staffers.
Now they do something different. They use an AI-based application called Scenario Analyzer, which places a comparative monetary value on each potential decision (in this case, each message), enabling the user to choose from among the top few profitable ones before launching a campaign – an exercise in moral agency.
1“Medical chatbot using OpenAI’s GPT-3 told a test patient to kill themselves,” AI News, 10/28/2020
The user of the system begins by constraining the AI engine to a few easy-to-understand factors – for example, type of partner, region, timing of the campaign, and so forth. The AI engine adjusts to these constraining factors and returns the top 5 or 6 campaigns projected to generate the most commission revenue for Expedia. The thousands of other potential messages that would produce less are simply not shown to the user. The user is free to choose among the top profitable campaigns, and may of course weigh other factors in the ultimate choice – the difficulty of substantiating the claim, the labor required to mount the campaign, and so forth.
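The workflow described above – constrain by a few factors, rank candidates by estimated value, surface only the top few, and leave the final choice to a human – can be sketched in a few lines. This is a minimal illustration, not Expedia’s actual Scenario Analyzer; the class, function, field names, and dollar figures below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    name: str
    partner_type: str      # hypothetical constraint, e.g. "hotel"
    region: str            # hypothetical constraint, e.g. "EMEA"
    est_commission: float  # model-estimated commission value, in dollars

def top_campaigns(campaigns, partner_type, region, k=5):
    """Filter by the user's constraints, then return the k campaigns with
    the highest estimated commission. Lower-value candidates are never
    shown; the final selection remains a human decision (moral agency)."""
    eligible = [c for c in campaigns
                if c.partner_type == partner_type and c.region == region]
    return sorted(eligible, key=lambda c: c.est_commission, reverse=True)[:k]

# A toy catalog standing in for the thousands of candidate messages.
catalog = [
    Campaign("Feature beachfront photos", "hotel", "EMEA", 120_000.0),
    Campaign("Weekend-deal banner", "hotel", "EMEA", 95_000.0),
    Campaign("Loyalty-points promo", "hotel", "APAC", 150_000.0),
    Campaign("Early-bird discount", "vacation_rental", "EMEA", 80_000.0),
    Campaign("Last-minute inventory push", "hotel", "EMEA", 40_000.0),
]

shortlist = top_campaigns(catalog, partner_type="hotel", region="EMEA", k=3)
for c in shortlist:
    print(f"{c.name}: ${c.est_commission:,.0f}")
```

The design choice worth noticing is that the system ranks and truncates but never launches: the human still picks from the shortlist, which is what keeps this on the “Appropriate Use” side of the line.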
Is there a larger lesson?
The magic in all this is not that one company has adopted the MIT-supported Appropriate Use standard. It’s that the idea may catch on more widely and actually impart some additional wisdom and prudence to the society at large. What if Appropriate Use of AI had been a prevailing standard at the time Covid-19 broke out?
There’s at least a chance that the US GDP might not have lost $2 trillion in value. Appropriate Use of AI would have embraced all available input factors at the time and recommended, perhaps state by state, the government actions that would optimize some desired outcome, as in the Expedia case. This could be, for example, reduced deaths, reduced hospitalizations, reduced number of severe cases, or other desirable outcomes. It would still be left to the moral agency of humans to choose from among the most optimal actions; but at least the decision makers would know which actions did not optimize the desired result, and whether an action they planned, such as total lockdown, was among these suboptimal choices.
Clearly this larger lesson has a political component. Political decision makers are still free to make a bad choice. But they might at least behave differently if they learned that an action they planned to take led to an outcome they didn’t want. This maxim can be applied to any outcome that has comparative values, such as points in public opinion surveys.
The black line in the graph above shows the Real Clear Politics polling averages for approval of President Biden as of June 19, 2022. It is entirely within the realm of current AI technology to show the comparative values of several potential decisions in terms of their ability to inflect the black line upward to the A position (or better), which was the principle used in the Expedia case.
The difficulty here is not in assembling the model variables that contribute to the inflection, though that’s a cumbersome job. The difficulty is in determining whether decision makers will listen. Even if they don’t, the exercise in Appropriate Use of AI is beneficial to the larger society, as it provides a framework in which the decision maker – in the public or private sector – is told whether the decision he/she intends to make will actually lead to outcomes the society wants.
Jerry has built a veteran marketing consulting firm that consistently produces innovative approaches that yield solid results. Stories of remarkable client success have been featured in The New York Times, Advertising Age, Adweek, Media Inc, and Direct, on National Public Radio’s All Things Considered and KOMO TV in Seattle, and on numerous radio talk shows.
Jerry has a particular distinction for translating consumer insights into big wins in the marketplace. His work for HP led to the “Mentor” campaign, which lifted oscilloscope sales 7%; his work for SquareSoft video games led to the “VideoBrat” campaign, which increased sales 12%; and his work for Group Health Cooperative led to the “Seeker” campaign, which boosted sales 18% and won the Kaiser Permanente Best Practices Award.
Jerry is a frequent speaker, lecturer, and radio talk show guest on marketing and brand development topics. He received his B.A. in Government from Harvard University.