Chris Stanley


Real-time Twitter Sentiment App

Social media sentiment analysis can show data teams what’s being said about a topic, product, service, or even political candidates.

Building and deploying a sentiment analysis application often involves numerous configuration and infrastructure setup steps. With the example app shown here, we show you how to get started with sentiment analysis in a few easy steps. The app is a plug-and-play solution for quickly gathering Tweet sentiment values and metadata on any topic. We’ve used these tools to monitor the sentiment of Tweets in the Arizona, Georgia, and Pennsylvania Senate races.

In this post we will discuss the following:

  • New real-time and recent historical Twitter sentiment components in the Patterns Marketplace.
  • Our election-tracking data app, which we’ve used to analyze the sentiment of Tweets that mention candidates from several high-profile senate races.


We have developed two components to help data teams quickly gather Tweet text, clean it and run sentiment analysis on it.

The two components offer sentiment analysis on real-time Tweets or on a sample of recent historical Tweets. Both can return results for multiple queries against more than one keyword.

When streaming real-time Tweets, the component analyzes and returns any that match a list of keywords, as the Tweets are published.

For recent historical tweets, the component will run a search for each keyword or keyphrase in a list.

Sentiment analysis is performed with the TextBlob package, which returns a tuple of polarity and subjectivity values, alongside Tweet metadata, as a Patterns Table.

High Profile Senate Races

We created an app that tracks the sentiment of Tweets mentioning Democratic and Republican candidates in the Arizona, Georgia and Pennsylvania Senate races.

Using the component that analyzes real-time Tweet sentiment, our data app compares the number of positive and negative tweets mentioning each candidate, as the Tweets are published.

Using the component that analyzes a sample of recent historical tweets, the app also compares Tweet sentiment for each candidate. We did this by finding the sum of Tweet polarity for each candidate, to see if Tweets about them skewed positive or negative.
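The aggregation itself is simple; here is a toy version (the candidate labels and polarity values are made up for illustration):

```python
from collections import defaultdict

# Made-up polarity values for two hypothetical candidates
tweets = [
    {"candidate": "A", "polarity": 0.4},
    {"candidate": "A", "polarity": -0.1},
    {"candidate": "B", "polarity": -0.3},
]

# Sum polarity per candidate to see whether mentions skew positive or negative
polarity_sums = defaultdict(float)
for t in tweets:
    polarity_sums[t["candidate"]] += t["polarity"]

# Candidate A skews positive (0.3); candidate B skews negative (-0.3)
```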

For nearly all the candidates, Tweets skewed positive, with the exception of Herschel Walker, whose Tweets skewed slightly negative.

We also looked at the sum of polarity across all tweets that mentioned either Republican or Democratic candidates. Tweets mentioning Democrats skewed more positive than tweets mentioning Republicans, but both parties’ tweets skewed positive.

Check out our election data app here.

Feel free to fork it to do your own analysis.

As a note: This analysis was run on Oct. 31 to demonstrate the capabilities of the platform. Findings should not be considered conclusive. The Twitter API only draws from Tweets published in the last seven days, when conducting basic Tweet searches, like those in this analysis. Patterns searched for a sample of 50 Tweets for each candidate.

Chris Stanley

Lead Scoring and Routing with Patterns and PDL

Ensuring that a sales team is focused on the “right” opportunities is an essential part of an operations team’s responsibilities. Pursuing leads with a scatter-shot approach often leads to wasted sales and marketing resources. The path to conversion for leads includes various nurturing steps that are traditionally owned by marketing. Then, when certain inflection points are hit and those leads become “Sales Ready”, sales assumes ownership for the remainder of the buying journey.

Lead scoring is a foundational concept that allows marketers to measure prospect engagement and interest in your product. Factors include demographics (e.g., job title, seniority) and firmographics (e.g., company size, industry). This results in a consistent, quantifiable, and testable approach to managing the marketing-to-sales handoff and can lead to higher close rates.

People Data Labs’ Person Enrichment and Company Enrichment products are ideal for providing much of the demographic and firmographic data points necessary to effectively score leads. When paired with Patterns’ data operating system, implementing lead enrichment, scoring, and routing becomes a simple process that combines speed, transparency, and reproducibility within a flow that can be implemented in just a few easy steps.

What is People Data Labs?

People Data Labs builds B2B data for developers, engineers, and data scientists.

They empower their clients to build and scale innovative data-driven products using 3 billion unique, highly accurate B2B records. Every day, their clients use People Data Labs’ data to build person profiles, enrich person records, power predictive modeling, drive artificial intelligence, and build new tools to make their teams more efficient, productive, and successful.

People Data Labs is proud to be the preferred data partner to the data science and engineering teams building the next generation of data-driven products and services. They are the single source of truth in B2B data serving enterprise and startup clients across a range of data-enabled businesses.

Patterns and People Data Labs

Patterns offers pre-built components for hundreds of apps, including our new People Data Labs Integration. This means that you can now easily include features like Person Enrichment and Company Enrichment directly into your workflows.

For example, we can now dynamically deploy person and company enrichment within an existing lead scoring and routing model:


Deploying the Lead Flow App with Person and Company Enrichment

  1. Create accounts for both People Data Labs and Patterns
  2. Retrieve your PDL API key, and create a new connection in Patterns
  3. Clone the Lead Scoring App Template into your organization in Patterns
  4. In the external system that handles lead intake, configure the Patterns webhook as an endpoint to receive data every time a new lead is processed
  5. Configure the PDL component by adding the PDL API Key Secret as a parameter value
  6. Configure the Slack component by adding the incoming webhook URL

This is an example of a simple configuration of lead scoring using a webhook as input and Slack channel as output. This template can be configured differently to ingest data from a database, CRM, or any API that contains your lead data. Additionally, you can also configure Patterns to export enriched and scored leads to your CRM, email marketing tool, or database of your choice.
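For a sense of what a new-lead payload posted to the webhook might look like, here is a hedged sketch (the URL and field names are placeholders, not real endpoints; adapt them to your lead source):

```python
import json
from urllib import request

# Placeholder payload; the field names are assumptions for illustration
lead = {"email": "jane@example.com", "company": "Acme Inc.", "job_title": "VP Marketing"}

req = request.Request(
    "https://api.example.com/your-patterns-webhook",  # placeholder URL
    data=json.dumps(lead).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req)  # left commented out so the sketch has no side effects
```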

Closing Thoughts

Patterns provides an amazing layer for managing data pipelines, empowering users to create automations that ensure the right data is in the right place at the right times. People Data Labs’ products and features allow you to easily enrich your leads directly from our industry-leading “person” and “company” datasets without the complexity of managing a data pipeline.

As seen in this overview, we can easily incorporate a variety of lead sources into a single flow to enrich, score, and route highly actionable leads. This includes the best data available for phone numbers, emails, job titles, locations, demographics, and more, funneled into your CRM and, more importantly, directly into your team’s workflow. There are many other opportunities for teams to incorporate person and company enrichment data into workflows, and Patterns provides the flexibility to deploy specific solutions for narrow use cases. If you have any questions or suggestions about the Patterns/PDL use cases, please get in touch with one of our data consultants today!

Chris Stanley

IDE Autocomplete and Gutter Icons for Patterns' Objects

Patterns' Objects (Table, Stream, Parameter) are essential to the development experience of your Python and SQL nodes. Ensuring nodes are correctly connected is a requirement for building any flow - so we’ve made them obvious to connect and use by implementing autocomplete and gutter icons to access additional configuration.

Template Library

Core to our product thesis are cloneable applications with reusable components that help developers get up and running. We’re happy to announce improvements to that experience with a Template library users see when they create a new app.

Performance Upgrades

We’ve made big performance improvements to our webhooks, which can now process high-volume data and show you their ingestion statistics in the UI. We also shaved one second off the Node run time, allowing you to iterate on your code even faster.

Chris Stanley


Link to the App discussed in this blog

Lead Scoring: The backdrop for aligning Sales and Marketing

One of a data team’s most frequent and important customers is the Sales team. Ensuring their time is focused on the right opportunities is one of the highest-leverage investments any Go To Market team can make.

In this post, we will be reviewing:

  • The importance and process of managing lead scoring
  • Lead scoring made easy in Patterns

At Patterns, we firmly believe it is never too early to start your lead scoring process; however, Business and Operations teams are asked to set up lead scoring at several different inflection points. Common problems that companies encounter as they scale can be solved with an effective lead scoring program. These include:

  • Sales is focusing on low value opportunities or the wrong prospect persona
  • Sales and Marketing goals are difficult to measure independently
  • Expanding a Sales team so that a “green field” approach is no longer manageable

Lead scoring, simply put, is determining a methodology that a Go To Market team can use to assign a relative prioritization value to leads or prospects, generally a score between 1 and 100. Companies allocate valuable data resources to managing this model because it provides a clear way for Marketing and Sales teams to effectively segment, track, and focus on a large number of leads.

Prioritizing the right kinds of leads provides a clear path to increased revenue, even if customer conversion rates drop.  Additionally, an effective lead score can help improve Marketing’s ability to generate the most valuable leads for a Sales team, providing a better prospect flow to a Sales team that is, in turn, prioritizing those leads.

Setting up a lead scoring model for the first time should be a collaborative process. The first thing that many teams do is create “buying personas” for each of their products. By talking to the Sales team, checking marketing analytics, and understanding your existing customer mix, you can start to identify data points across four types of customer analytics. These are:

  • Descriptive - like company industry, job title, age
  • Interaction - like opened 3 marketing emails or invited a teammate
  • Behavioral - like registration or purchase history
  • Attitudinal - like preferences or sentiments (NPS scores are a great example!)

Start off by building a heuristic model

Once you’ve identified the data points that are available (through Marketing, enrichment, or other tools for collecting user information), you can assign sample numeric values to each attribute. Then, looking at your existing customer base, you can toggle those values until your highest-value users rise to the top. By this point in your work, it’s likely your Sales team has already helped you suss out which attributes should carry the highest weight when comparing leads. They will be your best sounding board for iteration (and will likely want to be involved in the process). Below is an example of the simple heuristic-based model used in the lead scoring app.

from datetime import datetime

from patterns import Parameter, Stream, Table

# Note: a few lines of this snippet were truncated in the original post; the
# scoring inputs and write calls below are reconstructed to match the
# surrounding description (company size, LinkedIn profile, country).

enriched_leads = Stream("enriched_leads")
high_priority = Stream("high_priority", "w")
scored_leads = Table("scored_leads", "w")

high_priority_threshold = Parameter("high_priority_threshold", type=float, default=15.0)


def calc_score_size(size):
    if size is None:
        return 0
    return {
        "0-10": 0,
        "11-50": 3,
        "51-200": 5,
        "201-500": 10,
        "501-1000": 10,
    }.get(size, 20)


def calc_score_linkedin(linkedin_id):
    return 0 if linkedin_id is None else 5


def calc_score_country(country):
    return (
        5
        if country in ("united states", "australia", "canada", "united kingdom")
        else 0
    )


def score_lead(record):
    return sum(
        (
            calc_score_size(record.get("company_size")),
            calc_score_linkedin(record.get("linkedin_id")),
            calc_score_country(record.get("country")),
        )
    )


for record in enriched_leads.consume_records():
    score = score_lead(record)
    record["score"] = score
    record["received_at"] = datetime.utcnow().isoformat()
    scored_leads.append(record)
    if score > high_priority_threshold:
        high_priority.append(record)

Lead Scoring Best Practices

  • Lead scores should be accessible - Lead scoring should impact how Marketing and Sales communicate — to that end it’s a resource used on the frontlines. Data teams should meet their business partners where they are by pushing scoring data to CRMs and email marketing tools.
  • Lead Scores should be transparent - Not only is this required in some industries like lending, but being able to interpret why a lead is important helps inform next steps for sales to take and informs future iterations of the lead scoring model.
  • Models should be easy to change - Like nearly everything in life, start small and iterate. The first version of your model should be a simple rules-based heuristic, with complexity increasing as the business warrants or as opportunities for model improvements become apparent. Don’t get too attached: being able to change your model quickly will save you resources!
  • Simple to replicate - Invariably, you will need to backtest or add another lead score for customers (for different products, promotions, or Sales teams), so start by building a model that you can isolate and duplicate.
  • Assign lead values quickly - For many Sales organizations, “speed to lead” is a determining factor for customer attribution. A lead score should be assigned to a lead as soon as possible so that the appropriate team can begin to work with them.

Lead Scoring with Patterns


Many of our early customers (mostly Growth-oriented users) implemented a version of lead scoring, enrichment, and routing within their initial setup of Patterns. After seeing this use case become one of the most prevalent among our early users, the Patterns team built an app as a foundation that companies can clone from the marketplace and customize for their needs. You can access it here!

In this app, new leads are:

  1. Imported via webhook to a stream store
  2. Enriched via a Person API call to People Data Labs (Clearbit is supported too)
  3. Scored with a Python node, based on company location, company size, and the availability of a LinkedIn profile - a simple heuristic model that is easy to customize and extend
  4. Summarized in a table, connected to a Chart node for visualization, and instantly synced to a CRM
  5. Posted to the Sales team’s Slack channel for immediate outreach if the lead score exceeds 15

This process is entirely customizable, easy to digest, and simple to update and version. More importantly, because this process is kicked off with a webhook, a team’s speed to lead is no longer limited by “Scheduled Syncs” with CRMs. Patterns’ reactive orchestration ensures instant delivery of leads to the right stakeholders.

As discussed previously, the Audit process for a lead scoring model is incredibly important.  Sales folks, Sales leaders, and Marketing leaders generally structure their entire customer outreach strategy around the identification and scoring of leads.  As teams change their process, identify different customer profiles, or want to track attribution - the Patterns app provides a detailed log of all events and model versions.

Lead scoring is a perfect example of a data-intensive project that requires cross-functional alignment between non-technical and technical teams. Fundamental to a Patterns App is the low-code data graph, which provides a common interface for collaboration between stakeholders of all technical abilities. The graph makes it trivial to understand the process behind the lead scoring and routing flow - operations teams no longer need separate diagrams for documenting operations and data flows. Additionally, it is immediately transparent for attribution, which can limit (sometimes tricky) internal attribution conversations.

To get started, just clone the app, grab the webhook URL, and start importing your own leads.  Scoring can be updated with simple tweaks to the Python node to ensure that your team’s definition of the “best leads” is as dynamic as possible.  Use markdown nodes to comment and note areas of improvement.

This Patterns App is another great example of how the Patterns platform is built for collaboration, scale, and simplicity.  Thanks to all of our early users who helped identify this process as such a strong use-case within Patterns- give it a try today!

Michael Hathaway

Preview and clone the Apps discussed in this article here:

Over the past few weeks, the Patterns platform has been used to help eCommerce businesses reduce costs by automating and optimizing their operations. In this post, we will discuss how Patterns customers are using our platform to make their businesses more efficient- specifically, by:

  • Providing a clear overview of existing Order Statuses, from placement to delivery or return
  • Automating preemptive customer messaging to limit inquiries and re-market
  • Providing an audit trail for the performance of fulfillment partners - from 3PLs to carriers like USPS or UPS

Track Down Your Orders

Order management is stressful. The complexity of the problem lies in the number of tools that most eCommerce businesses use. Yes, these tools are designed to make life simple, but the sheer number of dashboards needed to monitor all of an eCommerce business’s metrics has never been higher. This is the exact problem that Patterns solves for eCommerce businesses.

As shown, Patterns connects directly to all of your data sources, imports existing order statuses, and provides a clear master view of orders throughout the fulfillment lifecycle. Within Patterns, you can call APIs, use data from webhooks, and run SQL queries (all in the same environment) to help automate the exact tracking dashboard your team is looking for.

Patterns brings big data to every business.  Our data app marketplace allows for data to be ingested from popular sources easily, manages the complicated data infrastructure process, and helps get real-time data in front of the right decision makers and analysts.

In this example, once an order is created, it shows up as a line item in a Patterns table with an expected “next step,” and a message is sent to the right team if action is required. This makes it simple for businesses to reference a single order, a day’s worth of orders, or zoom out to analyze bigger trends across their order history.

This is a prime example of how Patterns simplifies the ingest, transformation, and visualization processes- and all can be set-up in minutes!

For your ETL processes, Patterns also connects to popular messaging services via API or webhook, allowing customers to automatically push notifications to their customers based on an Order’s status.

Delight Your Customers

Limiting customer inquiries is essential to minimizing operational costs and providing a high quality customer experience.

That high-quality customer experience is essential: according to industry analysts, 96% of consumers say customer service is an important factor in their loyalty to a brand. And once an order is placed, 72% of inquiries relate to the order’s current delivery status.

Patterns’ customers are using the platform to power messaging automations, reducing inquiries and improving customer lifetime value, both decreasing OpEx and increasing revenue.

In this use case, the first step was identifying the delays a business experiences in the delivery process. These broke down into:

  • Fulfillment delays
  • Inventory delays
  • Carrier package delivery delays

Each client then set a standard SLA for each step in the delivery process, e.g., the fulfillment step should be completed within 24 hours of an order being placed.

Now into Patterns.  Using a combination of cloned data workflows and custom SQL and Python nodes, either internal or external alerts were sent.  Examples looked like this:

  • If an Order is placed and should be fulfilled “in-house” and is not updated to Shipped within 24 hours, send a Slack alert to the Owner and a text message to the fulfillment team
  • If a package should be delivered within 4 days and is not updated to delivered by the 5th day, send an email to the customer alerting them of a delay and offering a 5% discount on future orders.
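The first alert rule above can be sketched as a simple check (the field names and record shape here are assumptions for illustration, not the actual Patterns node code):

```python
from datetime import datetime, timedelta

def fulfillment_overdue(order, now, sla=timedelta(hours=24)):
    """True if an in-house order has not shipped within its 24-hour SLA.

    Field names ("fulfillment", "status", "placed_at") are hypothetical.
    """
    return (
        order["fulfillment"] == "in-house"
        and order["status"] != "shipped"
        and now - order["placed_at"] > sla
    )

order = {"fulfillment": "in-house", "status": "pending",
         "placed_at": datetime(2022, 10, 1, 9, 0)}
# 25 hours have elapsed, so this order would trigger the Slack and SMS alerts
overdue = fulfillment_overdue(order, now=datetime(2022, 10, 2, 10, 0))
```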

The secret sauce to this step is twofold.

First, multiple platforms typically are responsible for tracking a current order’s status.  For instance, Shopify may be tracking the initial order view but a 3PL is responsible for shipment of that item- so an eCommerce business might be tracking an order both in Shopify and in a platform like ShipBob.  Patterns combines data sources into a single source of truth for your business.

Second, the data for managing a carrier’s delivery status is archaic. By making assumptions about a package’s delivery status update (or lack thereof) and its location, businesses can identify delays and preempt a customer inquiry with a proactive message.  This has proven an excellent opportunity for those businesses to remarket- often providing a discount on a future order for the small inconvenience- increasing LTV.  Patterns powers this entire ingest, transform, and ETL process within a single flow.

Spend and Negotiate Wisely

A huge part of the cost of running a successful eCommerce business is managing fulfillment costs.  Industry analysts estimate that 18 percent of every dollar gained through online sales goes towards fulfillment costs.

In order to help manage these costs, eCommerce businesses need insight into their specific costs for:

  • Shipping
  • Fulfillment labor
  • Warehousing

Using Patterns, customers ingested spend data from their Accounting Software (Brex and Quickbooks), their shipping quote service (Shippo, EasyPost, or within Shopify), and their fulfillment services (ShipHero and ShipBob).  Then, using SQL nodes within the pipeline, customers were able to create independent (and high level) views of 3rd party performance.

Example 1:

Customer A used a 3PL to send out their shipments.  That fulfillment company agreed to ship out their packages within 2 days of receiving the order.  Customer A found through their analysis in Patterns that the promised SLA was not being met over 15% of the time. This data was not available within the 3PL’s platform and Customer A was able to negotiate a lower fee in their next contract.

Example 2:

Customer B paid UPS for 3 Day delivery for 12% of their orders.  They found that the actual performance of UPS 3 Day delivery averaged nearly 4 days.  Customer B immediately reduced their usage of this service and negotiated a lower cost for UPS 3 Day Service in their next contract with UPS.
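The arithmetic behind a check like Customer B’s is straightforward; a toy version (the transit times below are illustrative, not Customer B’s actual data):

```python
# Illustrative sample of actual days in transit for "3 Day" shipments
promised_days = 3
actual_days = [3, 4, 4, 5, 4]

average_days = sum(actual_days) / len(actual_days)           # average transit time
late_rate = sum(d > promised_days for d in actual_days) / len(actual_days)
# With these numbers: average_days == 4.0 and late_rate == 0.8,
# i.e. 80% of "3 Day" shipments arrived late
```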

Wrapping Up

Managing the costs of running an eCommerce business is essential to success, especially Customer Service and Fulfillment costs. By marrying all of your data in a single point of truth, creating an easy-to-digest visualization of that data, and then automating actions based on specific parameters, businesses are able to lower their OpEx dramatically. If you haven’t thought about how your data can save you money, it’s time to start having that conversation with your team (or ours!).

Preview and clone the Apps discussed in this article here:

Chris Stanley

Public beta release

We're excited to announce that after a year of heads-down product development, a beta version of Patterns is open to the public to self-serve.

Create an account here.

This is an important milestone on our mission to make data more accessible. One of the hardest parts about data science is knowing where to get started, what tools to learn, what business problems to study, etc.

I remember in 2014, when I left my job in finance and set out on a journey to reskill myself to join a tech company: I learned SQL through Alan Beaulieu's Learning SQL, learned data science through Harvard's CS109 Data Science course, and eventually landed a job at Square.

During my whiteboard SQL interview there, I wrote my answers in a single run-on line because this was the only way to write SQL in a terminal:

select customer_id, date, sum(amount) as amount from payments group by customer_id, date 

Lucky for me, my boss was understanding of my inexperience and saw my commitment to learning technical subjects. To get this far I had to overcome setting up a local db, configuring my bash profile (I still don't know what I'm doing there), and downloading and running Python (never the right version)... and this is just 1/10th of what's required to set up a bare-bones analytics stack at a company. It. Should. Not. Be. This. Hard.

With Patterns, it's no longer this hard.

A new way to add nodes

We want using our product to be as enjoyable as playing a video game. To that end, we copied a common UX pattern from video games such as Command and Conquer for adding new items to a gameplay canvas. See it in action below.

Linear version history

While it's not quite git-style version control (which we support by managing graphs via our devkit), a linear version history, with the ability to revert back to a prior version, is a powerful feature set for managing development on complex projects. After viewing the state of a prior project, you can easily revert the current app back to that prior state, download the zip file, or clone the version into an entirely new app.

Posthog App

We use Posthog internally for a ton of different analytical and operational use cases. We support receiving events from Posthog via a Posthog App, integrated within their product. We also support extracting data from Posthog's API and have built a component within Patterns for this.

Chris Stanley


Example Activity Schema App Modeling Revenue Retention

What is the Activity Schema?

An activity schema is a single time series data structure, modeled as a table in your data warehouse, that describes events occurring within your business. From this single table, nearly all of your business analytics can be computed with ease. The activity schema approach is in contrast to a dimensional data warehouse modeling technique that structures data as objects with relationships to produce fact and dimension tables.

The structure of an activity schema is simple, universal, and intuitive: Who → What → When

  user123 → Opened email → 10-23-2022 12:30:20
  user123 → Signed up → 10-23-2022 12:31:25
  user123 → Subscribed → 10-23-2022 12:32:15

That is, someone or something taking an action or producing an event, and a datetime for when it happened. You can optionally add columns for more detailed analytics. From this single schema, you can easily analyze the most common questions facing your business, such as:

  • Conversion funnel analysis
  • Retention rates
  • Cohort performance
  • Monthly recurring revenue
  • and many many more ...
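To make this concrete, here is a toy sketch of a conversion-funnel count over activity-schema rows (entities and timestamps are made up; for brevity, it counts step membership and does not enforce ordering by timestamp):

```python
from datetime import datetime

# Toy activity-schema rows in the Who -> What -> When shape
activities = [
    {"who": "user123", "what": "opened_email", "when": datetime(2022, 10, 23, 12, 30, 20)},
    {"who": "user123", "what": "signed_up",    "when": datetime(2022, 10, 23, 12, 31, 25)},
    {"who": "user123", "what": "subscribed",   "when": datetime(2022, 10, 23, 12, 32, 15)},
    {"who": "user456", "what": "opened_email", "when": datetime(2022, 10, 23, 13, 0, 0)},
]

funnel = ["opened_email", "signed_up", "subscribed"]
eligible = {a["who"] for a in activities}
reached = []
for step in funnel:
    # keep only entities that performed this step AND every prior step
    eligible = {a["who"] for a in activities
                if a["what"] == step and a["who"] in eligible}
    reached.append(len(eligible))

# reached == [2, 1, 1]: two users opened the email, one signed up, one subscribed
```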

Why use an Activity Schema?

  • Faster time-to-value
  • Leverage reusable queries and analysis
  • Easier to integrate, model, maintain, and update
  • Vastly simplified data catalog and lineage

How do I build an Activity Schema?

Step 1 - Identify the data sources required

The best way to build an activity schema is by leveraging the data that you already have. This data typically comes from the following sources:

  • Event collection tools - such as Posthog, Snowplow, Amplitude, Mixpanel, Segment, etc.
  • Databases - your production database (MySQL, Postgres, MongoDB, etc.) will already contain most of the data required to build an event schema.
  • SaaS APIs - such as Stripe, Intercom, Salesforce, or really any other operational tools that track data about the customer or business operation you wish to measure

Step 2 - Import data into your data warehouse

If this data is not already in a centralized database, you will need to extract it from its source and import it into a data warehouse. If you want your analytics to be fresh and up to date, you will need to automate this job. This is a core feature of the Patterns platform -- we have webhooks for ingesting arbitrary data and managed integrations with databases, APIs, and event collection tools. You can explore all of them in our marketplace, or contact us if you can't find what you need.

Step 3 - Model your raw data into an activity schema

This is the most difficult part of implementing an activity schema for analytics. Once you’ve got all your data in the same place, you need to normalize it into the Who -> What -> When format. Because your data will come from a number of different sources, likely with bespoke structures (unless it comes from standard SaaS APIs like Stripe or Salesforce), you will need to write custom SQL data pipelines to arrive at an activity schema. Building and automating the execution of data pipelines is another core feature of Patterns.
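The core of that normalization step can be sketched in a few lines (the raw record below is a made-up, Stripe-like charge; real source fields will differ):

```python
def to_activity(raw):
    """Map a raw (hypothetical, Stripe-like) charge record into
    the Who -> What -> When shape of the activity schema."""
    return {
        "who": raw["customer_id"],
        "what": "completed_charge",
        "when": raw["created"],
    }

row = to_activity(
    {"customer_id": "cus_123", "created": "2022-10-23T12:32:15Z", "amount": 4200}
)
```

In practice you would write one such mapping (in SQL or Python) per source, then union the results into the single activity table.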

What can I do once I have an Activity Schema?

Build analytical queries against the activity schema

There are hundreds of analytical questions that the activity schema can answer. However, because most businesses have the same objectives and operate the same way -- acquire, monetize, and retain customers -- most company analytics also look the same:

  • Conversion funnel analysis
  • Modeling revenue, monthly recurring or other
  • Calculating cohort retention and churn by count and by revenue
  • Calculating customer lifetime value (LTV)

Here is a Patterns app that you can clone to play around with an activity schema and investigate the calculations for each of the analytical questions above.

Chris Stanley

The Graph is Alive

User interface and interactivity are one way we differentiate our product. To that end, this month we implemented websockets and animated our graph with events coming from the server, communicating the state of your data graph at any point in time. With a client app, an API server to manage, and cloud infrastructure where the functions actually execute, this cross-functional feature has been in the making for the past few months. Never again be left in the dark about the state of your pipelines.

Introducing Apps

As we continue our development on dashboards, it didn't seem right that we were calling everything a Graph; we needed a new term to encapsulate the idea of a directed graph and dashboard as user interface objects --- we arrived at App.

IDE modal

It's hard to keep an entire system in your head at the same time. Graphically laying out your data flow is incredibly helpful for understanding complex systems. Even more helpful is being able to have multiple IDE windows open at the same time.

New node designs

The graph is an incredibly powerful interface and can help communicate complex ideas if made intuitive enough. In Patterns, the graph can contain Python or SQL functions.

OAuth and Shopify

We laid the foundational work for secure OAuth connections to authenticate access to push and pull data to external systems such as SaaS APIs. The first one we implemented was Shopify.

Ken Van Haren

Businesses across the board are experiencing rising costs and a shift in the consumer environment. To maintain a consistent profit margin, businesses need a flexible approach to pricing their products. A pricing strategy that properly maintains a consistent net profit needs to incorporate:

  • Higher shipping costs
  • Supply chain disruptions (and associated price increases)
  • Rising inventory costs
  • Tightening price competition
  • Customers losing purchasing power
Chris Stanley

Marketplace, dashboards, connections, and our new name... Patterns

  • Marketplace - A core value prop of our product is the ability to share, discover, and clone entire data solutions. This can be either a single component of your product, like a Python script to ingest Stripe Charges, or a whole use case, like an ML model that scores new inbound leads and syncs this data with Salesforce. With our Marketplace now live, anyone can discover, share, and clone components and data apps.
  • Dashboards - Patterns is now great for sharing your analytical work with external stakeholders via dashboards. You can drag and drop views of your markdown, chart, or table nodes into a view-only dashboard and share it via public or private URL.
  • Connections - There are now two ways to authenticate with external services. From day 1 we supported secrets, used in authentication calls to external APIs and services. Now you can also authenticate with OAuth and use the provided token in API calls.
  • Patterns, our new name - After much deliberation and thought, we've decided to rebrand to Patterns. We think this is a better name to communicate our full vision of building a data warehouse operating system, with reusable components and entire data apps (patterns...)