Analytics and Data Science
AI and Inverted Values
Automation Should Start at the Top
by Dallas Lynn
The AI boom can be characterized as ranging from fumbling attempts to push half-baked products out the door (e.g., Google's AI summaries; Microsoft's Recall) to the aspirational automation of everything from artists to truck drivers. Job automation so far hasn't paid large dividends, because ethically and pragmatically the efforts are being driven in the wrong direction, viz. bottom-up instead of top-down. Driving a vehicle or handling a customer service interaction might go in any direction at any time, requires a lot of implicit context, and tolerates failure rates and hallucinations poorly, in addition to being generally distasteful and frustrating for customers. Trying to automate those kinds of tasks thus exacerbates the primary weaknesses of the current iterations of LLMs and other forms of "AI." However, there is a class of jobs that is easy, expensive, and consists almost entirely of stringing words together without requiring meaningful reference to what is the case: senior management.
Consider:
Easier to Align Incentives
Median CEO compensation is on the order of $2 million per year, even though study after study has found no correlation between CEO compensation and performance. Most of this comes in the form of stock-based compensation, because the board of directors needs to 'align the incentives' of the CEO with those of the shareholders; which is to say, the CEO would like to waste the company's money flying around to golf courses on the company's private jet, and thus needs to become an owner in order to share the interests of the owners.
AI models, however, don't like to golf, fly private, abuse their expense accounts, get backdated options, or sexually harass anyone, nor do they have any of the other expensive and self-aggrandizing habits by which executives cost the company.
For small and medium-sized companies, senior executive compensation can be a substantial multiple of the lowest-paid workers' pay, and a large company like Google could save $250+ million a year on compensating its CEO--while not a lot relative to cutting staff, their CEO is clearly useless, so it sets a good example that incompetence won't be tolerated.
Neither Senior Leadership nor LLMs Are Connected to the World in a Meaningful Way
AI models only know words as they relate to each other in their training corpus--they have no connection to an embodied world, no world-model to speak of. Senior leadership is also, by institutional necessity, the most ignorant and disconnected group in the org. The information that filters up to them is necessarily adulterated to the benefit of their reports all the way down, tortoise on top of tortoise of damage control and aggrandizement. And even if an unfiltered view of what was happening on the ground could reach them, the volume would quickly become too much.
Communication From Senior Management Is Primarily Form Without Content
Boilerplate, bland non-actionable commonplaces, and class shibboleths are the lifeblood of senior management communication, and generating slightly novel boilerplate for common situations is an AI model's greatest strength, since it has at its disposal all the slight variations on the themes already published by other business leaders.
Here is Daniel Ek, Spotify CEO, announcing layoffs:
Over the last two years, we've put significant emphasis on building Spotify into a truly great and sustainable business--one designed to achieve our goal of being the world's leading audio company and one that will consistently drive profitability and growth into the future. While we've made worthy strides, as I've shared many times, we still have work to do. Economic growth has slowed dramatically and capital has become more expensive. Spotify is not an exception to these realities.
This brings me to a decision that will mean a significant step change for our company. To align Spotify with our future goals and ensure we are right-sized for the challenges ahead, I have made the difficult decision to reduce our total headcount by approximately 17% across the company. I recognize this will impact a number of individuals who have made valuable contributions. To be blunt, many smart, talented and hard-working people will be departing us.
Now compare the same genre of announcement as generated by an LLM:
I hope this message finds you well. I am writing to share some important and difficult news about the future of our company. After careful consideration and thorough analysis of our current financial situation and market conditions, we have made the tough decision to reduce our workforce. This is a step we must take to ensure the long-term sustainability and health of our company.
This decision was not made lightly, and I want to emphasize that it in no way reflects the dedication and hard work of our employees. Each one of you has contributed significantly to our achievements, and we are incredibly grateful for your efforts. However, due to various economic pressures and a need to realign our resources and strategy, we find ourselves in a position where these layoffs are necessary.
As we can see, there is no loss of meaning, intent, or eloquence in the generated text.
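Indeed, the interchangeability of this prose doesn't even require an LLM. As a minimal sketch (every phrase below is invented for illustration, not quoted from any real memo), a handful of stock parts can be shuffled into a serviceable layoff announcement:

```python
import random

# Illustrative stock phrases in the spirit of the memos quoted above.
OPENERS = [
    "I hope this message finds you well.",
    "I am writing to share some important and difficult news.",
]
RATIONALES = [
    "After careful consideration and thorough analysis of market conditions,",
    "To ensure we are right-sized for the challenges ahead,",
]
ACTIONS = [
    "we have made the tough decision to reduce our workforce.",
    "I have made the difficult decision to realign our resources.",
]
REASSURANCES = [
    "This decision in no way reflects the dedication of our employees.",
    "Many smart, talented and hard-working people will be departing us.",
]

def layoff_memo(seed=None):
    """Draw one phrase from each bucket and join them into a memo."""
    rng = random.Random(seed)  # seeded for reproducible vapidity
    return " ".join(
        rng.choice(bucket)
        for bucket in (OPENERS, RATIONALES, ACTIONS, REASSURANCES)
    )
```

Because every part is substitutable for every other, any combination reads about as plausibly as the originals, which is rather the point.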
Executive Roles Have a High Tolerance for Hallucination
The best-known failure mode of AI in 2024 is hallucination: models will happily invent things for you and present them with the exact same confidence as any other text being generated. In a difficult job like sales or support, this can lead to disaster: a customer convincing an AI chatbot to give them deals that will lose the company money; a bot suggesting solutions that range from incoherent to actively dangerous to the customer; or a self-driving car running into a truck it has identified as a cloud.
The tolerance for hallucination in an easier job like senior executive, however, is very high. Everyone around them is used to taking grand pronouncements and translating them into actionable items; nobody expects any customer-facing work product from them, which insulates their failures; and senior management is universally the group highest in sociopathic indicators and in interests unaligned with its workforce, so outright falsehoods in the form of lies and distortions are already expected and accounted for by the rest of the company.
It's unknown how quickly AI can evolve to eliminate these kinds of failure modes, but they are ameliorated when the AI fills a role like CEO or VP, where the equivalent human failure modes are already accounted for.
Executive Roles Already Have Low Approval Ratings
A problem anyone implementing a chatbot has faced is that customers have no confidence in its ability to empathize with their problems or to do what they want it to do; chatbots are often met with immediate hostility from customers attempting to reach a human representative as soon as possible.
Executive roles are already considered unempathetic and unwilling to act in the interests of the employees, however, so replacing them with bots will not result in any lowering of morale. In fact, because the foibles of an AI can't reasonably be attributed to selfishness or malice, and its untruths are entirely unintentional, it may well enjoy a higher approval rating among the rank and file.
Ethically Superior Job Loss
A simple principle of justice, from Jesus to John Rawls, is that decisions should be made based on what has the best outcome for the least advantaged group. Senior executives, especially CEOs, are likely to be independently wealthy already, more advanced in age, and therefore not existentially threatened by a loss of employment. Replacing them first is therefore the only ethical avenue by which to pursue outsourcing knowledge work to AI.
Contact us for consulting services if you'd like help creating an LLM prompt to give responses that closely mimic specific forms of managerial vapidity.