Visualizing Common Product Metrics Using AI

April 9, 2024

In product management, time is a luxury. A typical week can easily be eaten up by roadmapping sessions, stand-ups, research, marketing meetings, and everything else needed to get a product developed and into production. 

On top of all of that, those decisions usually come with the requirement that they be data-driven or data-informed. That used to mean product managers either did the analysis themselves or pulled in an analyst to dig into the data and produce the metrics from there. Both approaches are time-consuming and resource-intensive.

With AI data visualization tools, though, product managers don’t necessarily need to do that. They can ask questions about their data directly and, with a few thought-out prompts, get to some of their core product metrics quickly and accurately. 

Here are some data visualization examples of common product metrics, and how AI data visualization tools like Patterns can remove the friction between your workflow and the insights needed to keep it going. 

Monthly/Daily Active Users (MAUs, DAUs)

Identifying active users should be one of the first product metrics to consider. Active users are measured as a function of time - how many unique users are using my product within a period of time? Monthly active users (MAUs) and daily active users (DAUs) are the most common groupings when looking at active users.

Understanding your MAUs and DAUs, and their volume and patterns, can unlock many other derivative metrics like retention and engagement. It can also help you identify usage patterns such as seasonality. These product metrics are also good anchors for new product releases, letting you see how each release has been adopted.

MAUs and DAUs can be visualized using line graphs, where the X-axis represents time and the Y-axis represents volume. Before visualizing MAUs, you should ask the following questions about your dataset:

  • Do I have the core action captured to represent a user as “active”? Are they active when they log in? Or do they need to do something first? (See the sketch after this list.)

  • Is each action captured with a unique identifier for the user (a user ID, for example) and a timestamp? Both are necessary to plot activity on a line graph.

  • Does my user base have segments I should consider? Acquisition channels and location are common examples of segmentation, where users will behave differently and may have different MAU patterns.
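
For that first question, “active” often means more than simply logging in. Below is a minimal sketch of how that definition changes the query, assuming a hypothetical “events” table with a user_id, an event_type, and an event_timestamp (this table is not part of the sample dataset used later):

-- Hypothetical events table: a user counts as "active" only when they
-- perform a core action (here, placing an order), not merely logging in.
SELECT
  DATE_TRUNC(DATE(event_timestamp), MONTH) AS month,
  COUNT(DISTINCT user_id) AS monthly_active_users
FROM
  olist.events
WHERE
  event_type = 'order_placed'
GROUP BY
  month
ORDER BY
  month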

Visualizing MAUs in Patterns

Using a basic transaction table, we can quickly get to a useful MAU graph over time. The dataset we used contains an “orders” table with an order_id, a customer_id, and the date of purchase. 

We prompted Patterns by specifying the table and the data visualization we were looking for - in this case, a line graph. The prompt was:

Based on orders, can you give me the monthly active users over all time, represented as a line graph?

Patterns wrote the following SQL to get that information. It utilizes a “GROUP BY” statement on a truncated order_purchase_timestamp, and a “COUNT DISTINCT” on customer_id. 

SELECT
  DATE_TRUNC(DATE(order_purchase_timestamp), MONTH) AS month,
  COUNT(DISTINCT customer_id) AS monthly_active_users
FROM
  olist.orders
GROUP BY
  month
ORDER BY
  month

The data visualization it produced is a simple line chart that shows the growth in the number of unique users ordering over time. 
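
The DAU version of this metric is a small variation on the same query. A minimal sketch, assuming the same orders table, truncates to the day instead of the month:

SELECT
  DATE(order_purchase_timestamp) AS day,
  COUNT(DISTINCT customer_id) AS daily_active_users
FROM
  olist.orders
GROUP BY
  day
ORDER BY
  day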

For orders, the amount spent can be an interesting segment to explore. In this dataset, we also loaded a table called “order_payments” that shows the amount of each order. For simplicity, the sample dataset didn’t have multiple orders per user in a given month. 

We prompted Patterns to segment this data using $50 as the cutoff between the two segments we wanted to look at. When interacting with AI tools like this, you can give the model room to decide on a cutoff itself. However, being specific where possible helps it translate your prompt into the query you actually want.

The prompt was:

Can you segment this monthly active users graph on the amount for each order? Let's use $50 as a cut-off point and segment it on orders above/below that.

The output SQL used a “CASE” statement to assign each order to a segment based on its payment value, a Common Table Expression (CTE) to join the “order_payments” and “orders” tables and attach the segment at the customer_id level, and then ran a query similar to the previous graph’s on that CTE. 

WITH OrderAmounts AS (
  SELECT
    o.customer_id,
    DATE_TRUNC(DATE(o.order_purchase_timestamp), MONTH) AS month,
    CASE
      WHEN p.payment_value > 50 THEN 'Above $50'
      ELSE 'Below $50'
    END AS amount_segment
  FROM
    `olist.orders` o
  JOIN
    `olist.order_payments` p ON o.order_id = p.order_id
)
SELECT
  month,
  amount_segment,
  COUNT(DISTINCT customer_id) AS users
FROM
  OrderAmounts
GROUP BY
  month, amount_segment
ORDER BY
  month,
  amount_segment

The data visualization now shows the two segments and shows that most of the active user growth came from customers ordering more than $50 worth of items in a given order.

Marketing Qualified Leads (MQLs)

Marketing Qualified Leads (MQLs) are leads toward the top of the funnel that are likely to become customers. A lead might become an MQL by taking certain actions or arriving through specific sources, or, if your product is broad enough, an MQL can be anyone targeted by marketing who engages with that content or channel.

For product metrics, MQLs might seem like a metric too removed from product development to be useful. Most marketing organizations tend to run in parallel with product. However, understanding where your MQLs are coming from can inform how you improve your go-to-market strategies for your product. When used in conjunction with metrics further down the funnel like sentiment, MQLs can offer a deeper understanding of the customer journey.

MQLs are ripe for segmentation over time, as most marketing strategies will search for leads in multiple places. Before visualizing these types of metrics, you should ask the following questions about your dataset:

  • Do my leads have unique identifiers? And how are those attributed and tracked, now that more and more people are thinking about privacy online?

  • Are all my marketing channels important to look at? Do they provide the same data, or does my data need transformation before looking at them together? (See the sketch after this list.)

  • How do I tie this back to improving my product?
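
On the transformation question, channels that report leads differently usually need to be normalized before they can be compared. A minimal sketch of that kind of transformation, assuming two hypothetical per-channel tables (“email_leads” and “paid_search_leads”) with differently named columns, neither of which is in the sample dataset:

-- Hypothetical per-channel tables: rename columns to a shared shape,
-- then stack them so all channels can be analyzed together.
SELECT
  lead_id,
  signup_date AS first_contact_date,
  'email' AS origin
FROM
  `olist.email_leads`
UNION ALL
SELECT
  click_id AS lead_id,
  click_date AS first_contact_date,
  'paid_search' AS origin
FROM
  `olist.paid_search_leads`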

Visualizing MQLs in Patterns

Our orders dataset has a table, “marketing_qualified_leads,” that houses a cleaned-up dataset of all MQLs over time by origin (channel) and what their landing pages were.

We’re interested in first understanding what our most popular MQL origins are over time. As a product metric, this can help us understand if our product is gaining traction in places like social media, email, organic search, and the like. That could influence how we develop the product going forward.

Because of how clean this table is, prompting Patterns AI to visualize this metric was simple:

Can you visualize our MQLs by origin, by month?

The output SQL used “DATE_TRUNC” to group the leads by month, counted the MQL IDs within each month, and grouped them by origin as our segment. 

SELECT
  DATE_TRUNC(first_contact_date, MONTH) AS month,
  origin,
  COUNT(mql_id) AS total_mqls
FROM
  `olist.marketing_qualified_leads`
GROUP BY
  month, origin
ORDER BY
  month,
  origin

The data visualization it produced showed all of our origin channels as a grouped line chart, allowing us to compare channel performance against one another over time. 

What stood out to us was how organic search took off in January. It had always outpaced our other origins in traffic, but this spike indicates that something changed in how the product was being ranked in search engines.

We can dive further into this because, for each MQL, we also have a landing page ID. Seeing what our top-performing pages were might indicate what’s resonating with people.

This is also another simple prompt for Patterns AI:

Can you visualize the most popular landing pages that were hit through organic_search?

In the output SQL, we see that Patterns made some decisions for us. It decided to group these not over time, but over the entire dataset. It also decided to rank them descending and show us the top 10 only.

SELECT
  landing_page_id,
  COUNT(mql_id) AS total_hits
FROM
  `olist.marketing_qualified_leads`
WHERE
  origin = 'organic_search'
GROUP BY
  landing_page_id
ORDER BY
  total_hits DESC
LIMIT 10

For our purposes here, this was enough. The data visualization was exactly what we needed: a read on what pages were performing best in organic_search.
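
If we had wanted that ranking as a trend over time instead, a small variation on the same query would also group by month. A minimal sketch:

SELECT
  DATE_TRUNC(first_contact_date, MONTH) AS month,
  landing_page_id,
  COUNT(mql_id) AS total_hits
FROM
  `olist.marketing_qualified_leads`
WHERE
  origin = 'organic_search'
GROUP BY
  month, landing_page_id
ORDER BY
  month, total_hits DESC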

Sentiment and Survey Results (Net Promoter Scores, Likert Scores)

Sometimes the best way to gauge how users are enjoying a product is to just ask them! That’s where surveys come in. A well-designed survey administered at the right point in the product can produce some invaluable insights that go beyond product metrics focused on usage or actions.

You can administer surveys using a variety of platforms. Email and in-product are two common examples. They can also use different scales and types of measurements depending on the methodology. You can administer a Net Promoter Score survey to gauge how likely someone is to recommend a product, or a Likert Scale-based question at the end of a transaction to gather information on how a specific point in a funnel is working. Longer, open-text surveys can also gather a wealth of qualitative data that may yield results you weren’t expecting. 
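
For Net Promoter Score specifically, the score is typically calculated as the percentage of promoters (9-10 on a 0-10 scale) minus the percentage of detractors (0-6). A minimal sketch of that calculation, assuming a hypothetical “nps_responses” table with one 0-10 score per response (this table is not part of the sample dataset):

-- Standard NPS formula: % promoters (9-10) minus % detractors (0-6).
SELECT
  ROUND(
    (SUM(CASE WHEN score >= 9 THEN 1 ELSE 0 END)
     - SUM(CASE WHEN score <= 6 THEN 1 ELSE 0 END)) * 100.0 / COUNT(*),
    1) AS nps
FROM
  `olist.nps_responses`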

Visualizing survey results can vary based on the methodology. It is common to visualize this data as a pie chart or a 100% bar chart. Both charts are easy to understand, even without the methodology present. 

Assuming your methodology fits what you’re trying to measure, and before visualizing survey results, you should ask the following questions about your dataset:

  • What scoring system are we using?

  • Were respondents able to skip the survey, yet appear in the data? And how do we want to treat null responses? (See the sketch after this list.)

  • Are there time bounds to consider, such as when the survey opened/closed?
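
On the null-response question, skipped surveys often show up as rows with no score. A minimal sketch of checking for them before calculating percentages, assuming the same reviews table used below:

SELECT
  COUNT(*) AS total_rows,
  SUM(CASE WHEN review_score IS NULL THEN 1 ELSE 0 END) AS skipped,
  SUM(CASE WHEN review_score IS NOT NULL THEN 1 ELSE 0 END) AS answered
FROM
  `olist.reviews`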

Visualizing Likert Scale Scores in Patterns

In the orders dataset we worked with in the previous examples, there is also a table called “reviews.” This table has a column, “review_score,” that contains a Likert-scale value of 1 to 5, with 1 representing “Very Dissatisfied” and 5 representing “Very Satisfied.”

We prompted Patterns AI to visualize this data as a pie chart of overall reviews. This is a simple, yet effective chart to show us how people are rating their orders.

When we were writing our prompts, we noticed that there was no lookup table mapping each Likert Scale value to a label. Asking the AI to visualize this without that information would give us the 1-5 values without any labeling, so we addressed it by specifying the labels in the prompt itself.

We also specified how to calculate the percentages, as that ensures we’re getting them as a percentage of the entire population of scores.

The prompt was:

In the reviews table, there are 5 different values for "review_score". Let's assign the following values to each score: 1 - Very Dissatisfied, 2 - Somewhat Dissatisfied, 3 - Neither Satisfied nor Dissatisfied, 4 - Somewhat Satisfied, 5 - Very Satisfied. Now let's calculate each score as a percentage of all scores. For example, if there are 20 "1" values, and there are 100 review scores, then 1 - Very Dissatisfied would be 20%. Let's represent this data as a pie chart.

The output SQL utilized a “CASE” statement to specify the review labels as described in the prompt. It also did the score calculation in the “SELECT” statement, and segmented the scores through the “GROUP BY” statement.

SELECT
  CASE review_score
    WHEN 1 THEN 'Very Dissatisfied'
    WHEN 2 THEN 'Somewhat Dissatisfied'
    WHEN 3 THEN 'Neither Satisfied nor Dissatisfied'
    WHEN 4 THEN 'Somewhat Satisfied'
    WHEN 5 THEN 'Very Satisfied'
  END AS review_label,
  ROUND((COUNT(review_score) / (SELECT COUNT(*) FROM `olist.reviews`)) * 100, 2) AS percentage
FROM
  `olist.reviews`
GROUP BY
  review_score
ORDER BY
  review_score

The data visualization was as expected: a simple pie chart showing our Likert Scale scores as a percentage of the entire reviews table.

Analytics in natural language