I recently attended the Mind the Product conference in San Francisco, marking my return after six years. My perspective on product management has evolved significantly in that time: in 2017 I was an individual contributor, essentially a Product Manager II still learning the ropes. Now that I’m a senior director of product, I don’t find talks about “what makes a good product manager” that useful, since I have my own strong opinions on the subject. While the conference covered many aspects of product management, two talks particularly stood out to me: Aniket Deosthali’s presentation on the AI Product Arms Race and Janice Fraser’s radical approach to leadership, based on her recent book Farther, Faster, and Far Less Drama. Let’s dive in.
Winning the AI Product Arms Race
While it’s excessively reductionist to dismiss the hype around AI as similar to the dot-com bubble of the early 2000s, the frothiness of the space is undeniable. Many companies are, unfortunately, simply bolting ChatGPT interfaces onto existing offerings without solving a real problem. (I’m not sure, for example, why Sleep Cycle, the app I use to monitor my sleep, needs a “SleepGPT” feature for me to ask it questions about how I’m sleeping. Why wouldn’t a reporting feature suffice?) Yet we do seem to be in an arms race: multiple startups I speak with are being asked by their investors, “what’s your take on AI?” regardless of whether there’s a use case there.
The question we should be asking ourselves, though, to take the arms race metaphor a little too far: do we want to spend our time churning out small arms, or do we want to take the time to develop robust weaponry that will actually last? The former means you may win the battle but lose the war. (I promise I’m done with the odious zero-sum military analogies now.)
“Arms race”, however, is a fitting term if we remember that AI is fundamentally a sustaining rather than disruptive innovation, in Deosthali’s words. In technology, we throw around the word “disruption” a lot, usually to mean something unexpected that upends our understanding of the world (and, if you’re a product manager, your roadmap). Yet we need to remember that true disruptive innovation, in the words of Clayton Christensen and others, is something that shifts the balance of power from incumbents to new entrants. Pay-as-you-go cloud computing, for example, is something I would argue was genuinely disruptive, because it allowed startups to challenge established firms by rapidly iterating on solutions without incurring large, upfront capital expenses to execute their unvalidated hypotheses. But the power balance of AI continues to accrue to the incumbents. Microsoft, for example, has taken advantage of its speculative investment in OpenAI to leapfrog competitors, not only by helping OpenAI develop those models but also by building products on top of them in the form of, for example, Bing Search with chat, GitHub Copilot, and even the Azure OpenAI Service, which opens the technology to other ISVs. None of this would have been possible if Microsoft didn’t already have massive amounts of capital to invest from previous businesses like the Windows and Office franchises.
Of course, other competitors to OpenAI exist, and open-source LLMs have certainly arisen and will eventually catch up, but incumbents like Microsoft have a clear first-mover advantage, especially when it comes to creating ready-to-go AI-powered solutions for various personas.
The rest of Deosthali’s talk (which, by the way, is essentially a version of his paper of the same title on Reforge) covered basic facts about AI technology that I think every PM ought to understand. Key learnings, if you didn’t already know them:
- There are two phases to the use of models: training (feeding a model a set of curated data so that it can “learn”) and inference (using the trained model to answer questions in real time). For example, Grammarly must first employ linguists to feed high-quality natural-language documents to a model to teach it what good grammar looks like; only then can Grammarly make suggestions inside your word processor to make your writing crisper and more grammatically correct.
- Successful AI products sit on a power function curve that balances consideration (the amount of time a user has to make a decision) against context (the volume of abstract concepts the AI needs to know). In other words, the faster you need a response, the larger the model’s parameter count and training data set need to be. This is why self-driving cars are so difficult to build: safety demands a near-instant response, yet the range of potential obstacles and context the car must understand to make good on the user promise is enormous.
- Models will evolve quickly to understand orders of magnitude more data points than they do today, thereby moving the frontier of innovation up and to the right. (GPT-4, for example, is trained on 1,000x more data points than GPT-3.) In other words, self-driving cars will eventually be possible – it’s just anyone’s guess as to when.
- AI’s biggest disadvantage today is truth: it can be spectacularly and confidently wrong, and in fact, some of the randomness deliberately introduced into chat-based AI to make it more believable (the famous “0.8 temperature seems to work best,” from Stephen Wolfram’s excellent paper demystifying ChatGPT) exacerbates the situation. For product managers, this means you must seriously consider the downside risk of the AI being wrong in your feature and mitigate it where necessary. GitHub Copilot, for example, is positioned as an AI-powered pair programmer, and I think that’s the right way to think about it: you still need to know how to program; it’s not going to instantly turn an English Lit major into a React developer. Yet without compensating controls built into your product, you risk more situations like lawyers assuming that AI is automation rather than suggestion.
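The training-versus-inference distinction above can be made concrete with a toy sketch in Python – a simple bigram frequency model, nothing like a real LLM or Grammarly’s actual pipeline, but it shows the same two phases: an offline training step that learns from a curated corpus, and a real-time inference step that makes suggestions.

```python
from collections import Counter, defaultdict

# --- Training phase: learn from a curated corpus (one-time, offline) ---
def train(corpus):
    """Count word-bigram frequencies from curated example sentences."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

# --- Inference phase: use the trained model in real time ---
def suggest_next(model, word):
    """Suggest the most likely next word, as a writing aid might."""
    followers = model.get(word.lower())
    if not followers:
        return None  # the model has never seen this word
    return followers.most_common(1)[0][0]

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the lazy dog sleeps all day",
]
model = train(corpus)
print(suggest_next(model, "lazy"))  # prints: dog
```

Note the asymmetry: training walks the entire corpus once, while inference is a cheap lookup – which is why real products pay the heavy training cost up front and then serve inference at scale.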
Deosthali concluded his talk with a five-step plan for product managers investing in AI:
- Define and prioritize use cases. This is a nice way of saying “identify the user problems, and which of them are most congruent with where AI could help,” and then applying a framework to evaluate which are the most promising to move forward with.
- Formulate a 10x product hypothesis. Related to the first point, how will applying AI create a solution that is 10x better than the status quo?
- Check and manage your risk. In addition to the downside risk of the AI being wrong, there are three other risks to consider: customer risk (does this solve a real problem, back to point #1), business risk (will this actually be profitable, net of the COGS needed to run the service and other factors), and technology risk (is it even feasible in terms of the needed software or hardware – for example, is the data you need to train the model even available, and do you have enough of it to build a product that sits in the right place on the AI survival curve?).
- Rapidly prototype – a.k.a. continuous discovery and agile development – to get to a true MLP (Minimal Lovable Product), which is the bread and butter of product management.
- Launch your MLP, which sometimes means using off-the-shelf models and data corpora like OpenAI’s to get to an end-to-end solution – one that ultimately generates context-specific data you can eventually use to build your own models (thereby creating a moat of technology and data that is hard for others to replicate).
Overall, Deosthali’s talk validated both my skepticism and my optimism about AI: there is real utility in the technology, assuming we don’t all lose our heads and forget how to do product management.
Farther, Faster, and Far Less Drama
I always admire speakers who take a strong, contrarian point of view about fundamental things in the world that they want to see changed. In this case I happen to vigorously agree with Janice Fraser’s perspective on what makes an effective leader, which is very different from what the business world admires today – lionizing, unsurprisingly, the leadership styles of aggressive white men like Elon Musk, Steve Jobs, Jeff Bezos, or Bill Gates. Or Donald Trump, for that matter: 40% of the United States essentially still believes he is an amazing leader.
Unfortunately, for every copy of Farther, Faster, and Far Less Drama sold, I’m sure there are 10x the number of copies of Jack Welch’s Winning flying off the shelves today. However, I do see the tide starting to turn, and Fraser is doing her part in creating that tide. Leadership styles like those of Jack Welch are starting to get discredited (see: The Man Who Broke Capitalism) as both stressful and ineffective at building lasting companies and institutions. It is this thesis which is at the core of Fraser’s book and talk.
To win people over to her point of view, Fraser started her talk by being incredibly vulnerable, describing the trauma she experienced in her family: her mother was suicidal, her brother ultimately turned out to be a felon and a terrorist, and her sister eventually spiraled into drug use and homelessness. Fraser’s point in bringing this up is that many of us already come from incredibly difficult backgrounds, particularly recent immigrants to this country, and we don’t need to show up to work only to be hurt further by autocratic, top-down leadership – or even just the regular bickering, politics, drama, and infighting that seems to befall most modern corporations. There is another way, and she wants to show us how to build effective organizations that don’t require a lot of yelling and screaming.
Fraser argues that most workplace drama comes down to decision rights and accountability (my words, not hers). To be effective, workplaces should be neither autocracies nor democracies. It’s important to create space for the people with the most expertise, or those most affected by a decision, to be heard, even if the ultimate decision doesn’t go their way; to hold decision makers (and, secondarily, implementation teams) accountable for outcomes; to make decisions that create clarity and persist (constantly changing one’s mind post-decision is not actually making a decision); and to build a culture that values quality and candor without people being assholes to one another. At least that’s what I took away from her talk; not having read her book yet, I still see these values reflected in her framework of Leverage the Brains, Value Outcomes, Make Durable Decisions, and Orient Honestly.
I’ve just started reading her book, and I expect that it marries concepts from The No Asshole Rule with Radical Candor. The former introduced a valuable principle to the world but gave leaders an out in the service of business performance (“we’d love to not be assholes, but our competitors are assholes and they are outperforming us!”). The latter rightly taught us that companies are more effective when individuals are direct with each other about the work and their interactions, but then gave assholes an escape to continue ad hominem attacks in the guise of candor. What often gets lost in people’s interpretation of these two books is that high performance arises from a high-trust, psychologically safe environment where individuals are valued first – and if you don’t have that, no techniques will matter.
Wrapping Up
Mind the Product is one of those conferences that, for some reason, few product managers seem to know about. I still think it’s one of the better PM conferences out there – and one of the longest-running, which gives it substantial credibility – and I recommend it to anyone working in PM or design. While I certainly wish it were longer and/or had more tracks (e.g., separate ones for PMs versus leaders of PMs), I recognize that would add cost and complexity to the conference. Either way, the conference’s commitment to carefully curating high-quality content has contributed to its enduring success, and I genuinely hope it continues to thrive for years to come.