
Product Manager Interview: How to Answer Metrics Questions

Product metrics questions are common in PM interviews. In this short post, I will cover how to answer metrics questions in PM interviews and share a sample question and answer.

To answer metrics questions, you can use a common design-thinking approach: go broad, then go narrow.

  1. Ask clarifying questions about scope, product, etc. (common across question types).
  2. Brainstorm relevant metrics: define the goal, map actions to the goal, and pick metrics to measure those actions. Go broad at this stage.
  3. Finalize metrics, balancing across metric types and keeping in mind the limited time in an interview (go narrow). I recommend selecting just three metrics, with one always being a guardrail metric.
  4. Conclude by discussing how to use the metrics (benchmarking, etc.) and what to be careful about (false positives, etc.).

Let's take a sample question and answer using this approach.

Q: What metrics would you track for the ChatGPT launch?

Here's how I would approach recommending metrics for the launch of ChatGPT:

Step 1: Clarifying questions

Let's begin by addressing some clarifying questions:

  • Is this launch specific to the United States or a global rollout? (Interviewer: US only, and in English.)
  • To confirm, we're discussing the initial launch of GPT-3.5 in December 2022? (Interviewer: Yes.)

Step 2: Brainstorm metrics (go broad)

  • Goal: The initial launch wasn't about monetization; there was no pricing model for the chat interface. While the team might have cared about adoption and engagement, I think the more meaningful goal was whether a conversational interface on top of an LLM could deliver value to users. It was mostly an experiment at the time (most interviews with Sam Altman point to this). In doing so, the team didn't want to give offensive responses, allow abuse of the system, etc. In short: AI safety.

  • Actions: To gauge the usefulness of ChatGPT, we should track actions that signal its value, such as users asking follow-up questions, sharing ChatGPT responses, not giving thumbs down, avoiding repeating the same question, and returning with new inquiries. Actions related to AI safety include reporting harmful or offensive content via thumbs down and user comments on the Play Store.

  • Metrics: To measure the actions for the first goal, a wide range of metrics can be used: 7-day retention, percentage of questions with follow-ups, percentage of users sharing responses, percentage of users with no thumbs down or regenerate, and 7-day active users (good word of mouth). To measure AI safety: percentage of queries resulting in thumbs down or a report of harmful/offensive content, answer-quality checks with a human in the loop on a test corpus, automated quality checks on the test corpus, etc.

Step 3: Finalize metrics (go narrow)

Here is the final list of metrics I would use:

  1. Track usefulness:

    • 7-day retention (customers see value)
    • 7-day active users (good word of mouth and referrals, likely resulting from usefulness)
  2. Track AI safety:

    • Answer-quality checks with a human in the loop on a test corpus
    • Percentage of queries resulting in thumbs down or a report of harmful/offensive content
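To make these definitions concrete, here is a minimal sketch of how two of the finalized metrics might be computed from a query-level event log. The field names (`user_id`, `day`, `feedback`) and the sample records are hypothetical, purely for illustration:

```python
from datetime import date, timedelta

# Hypothetical event log: one record per query.
events = [
    {"user_id": "u1", "day": date(2022, 12, 1), "feedback": None},
    {"user_id": "u1", "day": date(2022, 12, 5), "feedback": "thumbs_down"},
    {"user_id": "u2", "day": date(2022, 12, 1), "feedback": None},
    {"user_id": "u3", "day": date(2022, 12, 2), "feedback": "thumbs_up"},
]

def seven_day_retention(events, cohort_day):
    """Share of users active on cohort_day who return within the next 7 days."""
    cohort = {e["user_id"] for e in events if e["day"] == cohort_day}
    window_end = cohort_day + timedelta(days=7)
    returned = {
        e["user_id"]
        for e in events
        if e["user_id"] in cohort and cohort_day < e["day"] <= window_end
    }
    return len(returned) / len(cohort) if cohort else 0.0

def thumbs_down_rate(events):
    """Fraction of queries receiving an explicit thumbs down."""
    downs = sum(1 for e in events if e["feedback"] == "thumbs_down")
    return downs / len(events) if events else 0.0

print(seven_day_retention(events, date(2022, 12, 1)))  # u1 returned, u2 didn't -> 0.5
print(thumbs_down_rate(events))  # 1 of 4 queries -> 0.25
```

In practice these would run as queries over a data warehouse rather than in-memory Python, but the definitions (cohort, return window, rate denominator) are the part worth nailing down in the interview.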

Step 4: Conclude

In evaluating these metrics, one consideration is the novelty effect. As an innovative new product, ChatGPT likely saw a huge initial spike of interest, so we need to contextualize the metrics with that in mind and avoid extrapolating from the early stage. Customers might return frequently and rate their experience higher out of curiosity and undefined expectations.

Monitoring for harmful bot interactions will also be critical as ChatGPT scales, since they could distort these metrics. Additional human-in-the-loop quality checks would be important.

If you found this useful, you might want to check out this course: A Simple Approach to Product Management Interviews.