Chris Stanley


Link to the App discussed in this blog

Lead Scoring: The backdrop for aligning Sales and Marketing

One of a data team’s most frequent and important customers is the Sales team. Ensuring Sales’ time is focused on the right opportunities is one of the highest-leverage investments any Go To Market team can make.

In this post, we will be reviewing:

  • The importance and process of managing lead scoring
  • Lead scoring made easy in Patterns

At Patterns, we firmly believe it is never too early to start your lead scoring process; however, Business and Operations teams are asked to set up lead scoring at several different inflection points. Popular problems that companies encounter as they scale can be solved with an effective lead scoring program. These include:

  • Sales is focusing on low value opportunities or the wrong prospect persona
  • Sales and Marketing goals are difficult to measure independently
  • Expanding a Sales team so that a “green field” approach is no longer manageable

Lead scoring, simply put, is a methodology that a Go To Market team uses to assign a relative priority to leads or prospects, generally a score between 1 and 100. Companies allocate valuable data resources to managing this model because it gives Marketing and Sales teams a clear way to segment, track, and focus on a large number of leads.

Prioritizing the right kinds of leads provides a clear path to increased revenue, even if customer conversion rates drop.  Additionally, an effective lead score can help improve Marketing’s ability to generate the most valuable leads for a Sales team, providing a better prospect flow to a Sales team that is, in turn, prioritizing those leads.

Setting up a lead scoring model for the first time should be a collaborative process. The first thing many teams do is create “buying personas” for each of their products. By talking to the Sales team, checking marketing analytics, and understanding your existing customer mix, you can start to identify data points across four types of customer analytics. These are:

  • Descriptive - like company industry, job title, age
  • Interaction - like opened 3 marketing emails or invited a teammate
  • Behavioral - like registration or purchase history
  • Attitudinal - like preferences or sentiments (NPS scores are a great example!)
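To make the four categories concrete, a single lead record that combines them might look like the following sketch (the field names are illustrative, not a required schema):

```python
# A hypothetical lead record spanning the four types of customer analytics.
lead = {
    # Descriptive - who they are
    "industry": "software",
    "job_title": "Head of Growth",
    # Interaction - how they engage with marketing
    "emails_opened": 3,
    "invited_teammate": True,
    # Behavioral - what they have done
    "registered": True,
    "purchases": 0,
    # Attitudinal - how they feel
    "nps_score": 9,
}

# Each category can then be weighted separately in a scoring model.
descriptive = {k: lead[k] for k in ("industry", "job_title")}
```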

Start off by building a heuristic model

Once you’ve identified data points that are available (through Marketing, Enrichment, or other tools for collecting user information), you can assign sample numeric values to each attribute. Then, looking at your existing customer base, you can toggle those values until your highest-value users bubble to the top. By this point in your work, it’s likely your Sales team has already helped you suss out which attributes should carry the highest weight when comparing leads. They will be your best sounding board for iteration (and will likely want to be involved in the process). Below is an example of the simple heuristic-based model used in the lead scoring app.

from datetime import datetime
from patterns import Stream, Table, Parameter

enriched_leads = Stream("enriched_leads")
high_priority = Stream("high_priority", "w")
scored_leads = Table("scored_leads", "w")

high_priority_threshold = Parameter("high_priority_threshold", type=float, default=15.0)

def calc_score_size(size):
    if size is None:
        return 0
    return {
        "0-10": 0,
        "11-50": 3,
        "51-200": 5,
        "201-500": 10,
        "501-1000": 10,
    }.get(size, 20)  # anything larger scores 20

def calc_score_linkedin(linkedin_id):
    return 0 if linkedin_id is None else 5

def calc_score_country(country):
    # the country weight shown here is illustrative -- tune it with your Sales team
    return (
        10
        if country in ("united states", "australia", "canada", "united kingdom")
        else 0
    )

def score_lead(record):
    # field names come from the enrichment step upstream
    return sum(
        [
            calc_score_size(record.get("company_size")),
            calc_score_linkedin(record.get("linkedin_id")),
            calc_score_country(record.get("country")),
        ]
    )

for record in enriched_leads.consume_records():
    score = score_lead(record)
    record["score"] = score
    record["received_at"] = datetime.utcnow().isoformat()
    scored_leads.append(record)
    if score > high_priority_threshold:
        high_priority.append(record)
Lead Scoring Best Practices

  • Lead scores should be accessible - Lead scoring should impact how Marketing and Sales communicate — to that end, it’s a resource used on the frontlines. Data teams should meet their business partners where they are by pushing scoring data to CRMs and email marketing tools.
  • Lead Scores should be transparent - Not only is this required in some industries like lending, but being able to interpret why a lead is important helps inform next steps for sales to take and informs future iterations of the lead scoring model.
  • Model should be easy to change - Like nearly everything in life, start small and iterate. The first version of your model should be a simple rules-based heuristic, with complexity added as the business warrants it or as opportunities for model improvement become apparent. Don’t get too attached: being able to change your model quickly will save you resources!
  • Simple to replicate - Invariably, you will need to backtest or add another lead score for customers (for different products, promotions, or Sales teams), so start by building a model that you can isolate and duplicate.
  • Assign lead values quickly - For many Sales organizations, “speed to lead” is a determining factor for customer attribution. A lead score should be assigned to a lead as soon as possible so that the appropriate team can begin to work with them.
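Making scores accessible and transparent often comes down to shaping the scored record into whatever fields your CRM expects. A minimal sketch, assuming a Salesforce-style custom field naming convention (the field names here are hypothetical):

```python
def to_crm_payload(record):
    """Map a scored lead onto hypothetical CRM field names.

    Including a human-readable reason keeps the score transparent
    for the Sales rep reading it.
    """
    return {
        "Email": record["email"],
        "Lead_Score__c": record["score"],
        "Score_Reason__c": record.get("reason", ""),
    }

payload = to_crm_payload(
    {"email": "jane@example.com", "score": 18, "reason": "US-based, 51-200 employees"}
)
```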

Lead Scoring with Patterns


Many of our early customers (mostly Growth-oriented users) implemented a version of lead scoring, enrichment, and routing within their initial set-up of Patterns. After seeing this use case become one of the most prevalent among our early users, the Patterns team built an app as a foundation that companies can clone from the marketplace and customize for their needs. You can access it here!

In this app, new leads are:

  1. Imported via Webhook to a stream store
  2. Enriched via a Person API call to People Data Labs (support for Clearbit too)
  3. The enriched leads are then scored with a Python node. The lead score is based on company location, company size, and the availability of a LinkedIn profile; this simple heuristic model is easy to customize and extend.
  4. Leads are summarized in a table, connected to a Chart node for visualization, and instantly synced to a CRM
  5. If the lead score exceeds 15, a Slack message is posted to the Sales team for immediate outreach
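The routing decision in step 5 boils down to a threshold check. A minimal sketch of that logic (the Slack delivery itself is handled by a Patterns node; here it is reduced to a formatted message):

```python
HIGH_PRIORITY_THRESHOLD = 15  # matches the app's default parameter

def route_lead(record, threshold=HIGH_PRIORITY_THRESHOLD):
    """Return a Slack alert message for high-scoring leads, else None."""
    if record["score"] > threshold:
        return f"New high-priority lead: {record['email']} (score {record['score']})"
    return None
```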

This process is entirely customizable, easy to digest, and simple to update and version. More importantly, because this process is kicked off with a webhook, a team’s speed to lead is no longer limited by “Scheduled Syncs” with CRMs. Patterns’ reactive orchestration ensures instant delivery of leads to the right stakeholders.

As discussed previously, the Audit process for a lead scoring model is incredibly important.  Sales folks, Sales leaders, and Marketing leaders generally structure their entire customer outreach strategy around the identification and scoring of leads.  As teams change their process, identify different customer profiles, or want to track attribution - the Patterns app provides a detailed log of all events and model versions.

Lead scoring is a perfect example of a data-intensive project that requires cross-functional alignment between non-technical and technical teams. Fundamental to a Patterns App is the low-code data graph, which provides a common interface for collaboration among stakeholders of all technical abilities. The graph makes it trivial to understand the process behind the lead scoring and routing flow — operations teams no longer need separate diagrams for documenting operations and data flows. Additionally, it is immediately transparent for attribution, which can limit (sometimes tricky) internal attribution conversations.

To get started, just clone the app, grab the webhook URL, and start importing your own leads.  Scoring can be updated with simple tweaks to the Python node to ensure that your team’s definition of the “best leads” is as dynamic as possible.  Use markdown nodes to comment and note areas of improvement.

This Patterns App is another great example of how the Patterns platform is built for collaboration, scale, and simplicity. Thanks to all of our early users who helped identify this process as such a strong use case within Patterns — give it a try today!

Chris Stanley


Example Activity Schema App Modeling Revenue Retention

What is the Activity Schema?

An activity schema is a single time series data structure, modeled as a table in your data warehouse, that describes events occurring within your business. From this single table, nearly all of your business analytics can be computed with ease. The activity schema approach is in contrast to a dimensional data warehouse modeling technique that structures data as objects with relationships to produce fact and dimension tables.

The structure of an activity schema is simple, universal, and intuitive: Who → What → When

Who       What           When
user123   Opened email   10-23-2022 12:30:20
user123   Signed up      10-23-2022 12:31:25
user123   Subscribed     10-23-2022 12:32:15

That is: someone or something, taking an action or producing an event, at a datetime when it happened. You can optionally add columns for more detailed analytics. From this single schema, you can easily answer the most common questions facing your business, such as:

  • Conversion funnel analysis
  • Retention rates
  • Cohort performance
  • Monthly recurring revenue
  • and many many more ...
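To see why the single table is so convenient, here is a minimal sketch of a conversion funnel computed over Who → What → When rows in Python (in practice you would run the equivalent SQL in your warehouse; this version also ignores event ordering by time, for simplicity):

```python
# Activity rows in (who, what, when) form.
activities = [
    ("user123", "Opened email", "10-23-2022 12:30:20"),
    ("user123", "Signed up",    "10-23-2022 12:31:25"),
    ("user123", "Subscribed",   "10-23-2022 12:32:15"),
    ("user456", "Opened email", "10-23-2022 14:02:10"),
    ("user456", "Signed up",    "10-23-2022 14:05:45"),
]

def funnel(activities, steps):
    """Count distinct users reaching each funnel step, in order."""
    eligible = {who for who, what, _ in activities if what == steps[0]}
    counts = [len(eligible)]
    for step in steps[1:]:
        reached = {who for who, what, _ in activities if what == step}
        eligible &= reached  # only users who also completed this step
        counts.append(len(eligible))
    return counts

funnel(activities, ["Opened email", "Signed up", "Subscribed"])  # [2, 2, 1]
```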

Why use an Activity Schema?

  • Faster time-to-value
  • Leverage reusable queries and analyses
  • Easier to integrate, model, maintain, and update
  • Vastly simplified data catalog and lineage

How do I build an Activity Schema?

Step 1 - Identify the data sources required

The best way to build an activity schema is by leveraging the data that you already have. This data typically comes from the following sources:

  • Event collection tools - such as Posthog, Snowplow, Amplitude, Mixpanel, Segment, etc.
  • Databases - your production database (MySQL, Postgres, MongoDB, etc.) will already contain most of the data required to build an activity schema.
  • SaaS APIs - such as Stripe, Intercom, Salesforce, or really any other operational tools that track data about the customer or business operation you wish to measure

Step 2 - Import data into your data warehouse

If this data is not already in a centralized database, you will need to extract it from its source and import it into a data warehouse. If you want your analytics to stay fresh, you will need to automate this job. This is a core feature of the Patterns platform -- we have webhooks for ingesting arbitrary data and managed integrations with databases, APIs, and event collection tools. You can explore all of them in our marketplace, or contact us if you can't find what you need.

Step 3 - Model your raw data into an activity schema

This is the most difficult part of implementing an activity schema for analytics. Once you’ve got all your data in the same place, you need to normalize it into the Who → What → When format. Because your data will come from a number of different sources, likely with bespoke structures (unless it comes from standard SaaS APIs like Stripe or Salesforce), you will need to write custom SQL data pipelines to arrive at an activity schema. Building and automating the execution of data pipelines is another core feature of Patterns.
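In Patterns this normalization would typically be SQL, but the mapping is the same idea in any language. A Python sketch, turning two differently-shaped raw records into Who → What → When rows (the raw field names are made up for illustration):

```python
def from_signup_row(row):
    # hypothetical production-database shape
    return (row["user_id"], "Signed up", row["created_at"])

def from_payment_event(event):
    # hypothetical payment-API shape
    return (event["customer"], "Subscribed", event["charged_at"])

# Each source gets its own small adapter; the outputs all share one schema.
activity_stream = [
    from_signup_row({"user_id": "user123", "created_at": "2022-10-23 12:31:25"}),
    from_payment_event({"customer": "user123", "charged_at": "2022-10-23 12:32:15"}),
]
```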

What can I do once I have an Activity Schema?

Build analytical queries against the activity schema

There are hundreds of analytical questions an activity schema can answer. However, because most businesses have the same objectives and so operate the same way -- acquire, monetize, and retain customers -- most company analytics also look the same:

  • Conversion funnel analysis
  • Modeling revenue, monthly recurring or other
  • Calculating cohort retention and churn by count and by revenue
  • Calculating customer lifetime value (LTV)

Here is a Patterns app that you can clone to play around with an activity schema and investigate the calculations for each of the analytical questions above.