Ever since the launch of ChatGPT in November 2022, the ubiquity of words like “inference,” “reasoning” and “training data” is indicative of how much AI has taken over our consciousness. These words, previously heard only in the halls of computer science labs or in big tech company conference rooms, are now overheard at bars and on the subway.
There has been a lot written (and even more that will be written) on how to make AI agents and copilots better decision makers. Yet we sometimes forget that, at least in the near term, AI will augment human decision-making rather than fully replace it. A nice example is the enterprise data corner of the AI world, with players (as of this article’s publication) ranging from ChatGPT to Glean to Perplexity. It’s not hard to conjure up a scenario of a product marketing manager asking her text-to-SQL AI tool, “What customer segments have given us the lowest NPS rating?”, getting the answer she needs, maybe asking a few follow-up questions (“…and what if you segment it by geo?”), then using that insight to tailor her promotions strategy.
This is AI augmenting the human.
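To make the scenario concrete, here is a minimal sketch of the kind of aggregation a text-to-SQL tool might run behind the scenes to answer her question. It is written in Python with pandas, and the table and column names (nps_responses, customer_segment, geo, nps_score) are hypothetical, not taken from any particular product.

```python
# A hypothetical sketch of the aggregation a text-to-SQL tool might run
# behind the scenes. Table and column names are illustrative.
import pandas as pd

def lowest_nps_segments(nps_responses: pd.DataFrame, by_geo: bool = False) -> pd.DataFrame:
    """Rank customer segments by average NPS, lowest (worst) first."""
    group_cols = ["customer_segment", "geo"] if by_geo else ["customer_segment"]
    return (
        nps_responses.groupby(group_cols)["nps_score"]
        .mean()
        .sort_values()  # ascending, so the lowest-rated segments come first
        .reset_index(name="avg_nps")
    )

# The follow-up question ("...and what if you segment it by geo?") is just
# the same aggregation re-run with by_geo=True.
```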
Looking even further out, there will likely come a world where a CEO can say: “Design a promotions strategy for me given the existing data, industry-wide best practices on the matter and what we learned from the last launch,” and the AI will produce one comparable to what a good human product marketing manager would deliver. There may even come a world where the AI is self-directed, decides that a promotions strategy would be a good idea and starts working on it autonomously to share with the CEO — that is, acts as an autonomous CMO.
Overall, it’s safe to say that until artificial general intelligence (AGI) is here, humans will likely be in the loop when it comes to making decisions of significance. While everyone is opining on what AI will change about our professional lives, I wanted to return to what it won’t change (anytime soon): Good human decision making. Imagine your business intelligence team and its bevy of AI agents putting together a piece of analysis for you on a new promotions strategy. How do you leverage that data to make the best possible decision? Here are a few time (and lab) tested ideas that I live by:
Before seeing the data:
- Decide the go/no-go criteria before seeing the data: Humans are notorious for moving the goal posts in the moment. It can sound something like, “We’re so close; I think another year of investment in this will get us the results we want.” This is the kind of thinking that leads executives to keep pursuing projects long after they’ve stopped being viable. A simple behavioral science tip can help: set your decision criteria in advance of seeing the data, then abide by them when the data arrives. It will likely lead to a much wiser decision. For example, decide that “we should pursue the product line if >80% of survey respondents say they would pay $100 for it tomorrow.” At that moment in time, you’re unbiased and can judge like an independent expert. When the data comes in, you know what you’re looking for and will stick to the criteria you set instead of reverse-engineering new ones in the moment based on how the data is looking or the sentiment in the room. For further reading, check out the endowment effect.
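To make the pre-registration idea concrete, here is a hedged sketch in code. The 80% threshold comes from the example above; the function name and the respondent counts are hypothetical.

```python
# Illustrative only: the go/no-go rule is fixed before any data is seen,
# then applied as-is when the survey results arrive.
THRESHOLD = 0.80  # agreed on in advance: >80% willing to pay $100 tomorrow

def go_no_go(willing_to_pay: int, total_respondents: int) -> str:
    share = willing_to_pay / total_respondents
    decision = "GO" if share > THRESHOLD else "NO-GO"
    return f"{share:.0%} of respondents met the bar -> {decision}"

# Hypothetical result: 181 of 240 respondents said yes.
print(go_no_go(181, 240))  # "75% of respondents met the bar -> NO-GO"
```

The point is not the code itself; it is that the rule is written down, in whatever form, before anyone sees a result.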
While looking at the data:
- Have all the decision makers document their opinion before sharing with each other: We’ve all been in rooms where you or another senior person proclaims, “This is looking so great — I can’t wait for us to implement it!” and others nod excitedly in agreement. If someone on the team who is close to the data has serious reservations about what it says, how can they voice those concerns without fear of blowback? Behavioral science suggests a better process: once the data has been presented, allow no discussion beyond clarifying questions. Then have all the decision-makers/experts in the room silently and independently document their thoughts (as structured or unstructured as you like). Only then share each person’s written thoughts with the group and discuss the areas where opinions diverge. This helps ensure you’re truly leveraging the broad expertise of the group, rather than suppressing it because someone (typically with authority) swayed the room and unconsciously disincentivized disagreement upfront. For further reading, check out Asch’s conformity studies.
While making the decision:
- Discuss the “mediating judgements”: Cognitive scientist Daniel Kahneman taught us that any big yes/no decision is actually a series of smaller judgements that, in aggregate, determine the big decision. For example, replacing your L1 customer support with an AI chatbot is a big yes/no decision made up of many smaller ones: “How does the chatbot’s cost compare to humans today, and as we scale?” and “Will the chatbot match or exceed human accuracy?” When we answer the one big question, we’re implicitly answering all the smaller ones. Behavioral science tells us that making these implicit questions explicit can improve decision quality. So be sure to explicitly discuss all the smaller judgements before turning to the big decision, instead of jumping straight to “So, should we move forward here?”
- Document the decision rationale: We all know of bad decisions that accidentally lead to good outcomes and vice-versa. Documenting the rationale behind your decision (e.g., “we expect our costs to drop at least 20% and customer satisfaction to stay flat within nine months of implementation”) allows you to honestly revisit the decision at the next business review and figure out what you got right and wrong. Building this data-driven feedback loop helps you uplevel all the decision makers at your organization and start to separate skill from luck.
- Set your “kill criteria”: Related to documenting decision criteria before seeing the data, determine the criteria that, if still unmet quarters after launch, will indicate the project is not working and should be killed. This could be something like “>50% of customers who interact with our chatbot ask to be routed to a human after spending at least one minute with the bot.” It’s the same goal-post-moving problem: once you’ve green-lit the project, you’ll be “endowed” to it and will start to develop selective blindness to signs that it’s underperforming. If you decide the kill criteria upfront, you’re bound to the intellectual honesty of your past, unbiased self and will make the right call on continuing or killing the project once the results roll in.
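As a minimal sketch of what that kill-criterion check could look like in code, assuming hypothetical session logs with a time-with-bot field and an escalation flag (the field names and the 50% / one-minute thresholds are illustrative):

```python
# Hypothetical kill-criterion check for the chatbot example above.
from dataclasses import dataclass
from typing import List

@dataclass
class ChatSession:
    seconds_with_bot: float
    escalated_to_human: bool

def kill_criterion_met(sessions: List[ChatSession]) -> bool:
    """True if >50% of customers who spent at least one minute with the
    bot still asked to be routed to a human."""
    engaged = [s for s in sessions if s.seconds_with_bot >= 60]
    if not engaged:
        return False  # not enough signal yet to evaluate the criterion
    escalation_rate = sum(s.escalated_to_human for s in engaged) / len(engaged)
    return escalation_rate > 0.5
```

Wire a check like this into the quarterly business review and the continue/kill call becomes a reading of the numbers rather than a debate.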
If at this point you’re thinking, “this sounds like a lot of extra work,” rest assured: this approach quickly becomes second nature to your executive team, and any additional time it incurs is high-ROI. It ensures all the expertise at your organization is expressed, and it sets guardrails so that the downside of the decision is limited and you learn from it whether it goes well or poorly.
As long as there are humans in the loop, working with data and analyses generated by human and AI agents will remain a critically valuable skill set — in particular, navigating the minefields of cognitive biases while working with data.
Sid Rajgarhia is on the investment team at First Round Capital and has spent the last decade working on data-driven decision making at software companies.