Quick Start

Build your first Patterns App in minutes


Patterns lets you quickly build data pipelines and interactive dashboards on top of virtually any database, API, or file. This guide walks you through building a pipeline that analyzes and charts Amazon product review data. We will:

  1. Import data from a CSV with a Python node
  2. Write a data transformation with a SQL node
  3. Visualize data with a Chart node

Demo

You can also clone a completed copy of this app and use it to follow along.

1. Import data from a CSV with a Python node

Log in to Patterns, then click + Create App. If you don't have an account yet, sign up for free.

Every app begins by importing some data into Patterns. You can do this by linking a database, using a component, querying an API, or loading a CSV. In this example, we will use a Python node to import a CSV file hosted on GitHub.
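If you'd like to inspect the dataset before wiring it into a node, you can load the same CSV in any local Python session with plain pandas. This is just a local sanity check and is not part of the app itself; preview_amazon_reviews.py is a hypothetical local script, not a Patterns node.

preview_amazon_reviews.py
import pandas as pd

# The same hosted CSV used by the Python node below
url = "https://raw.githubusercontent.com/patterns-app/public_data/main/data/amazon_reviews.csv"

df = pd.read_csv(url)
print(df.shape)   # number of rows and columns
print(df.head())  # first few rows, including review_time and overall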

Try it out

  • Click the +Add button and select Python to add a Python node to your graph.
  • Click on the node to open the preview, then click on the expand button to enter the full-screen IDE.
  • Add the following code to your node, configure an output table, press run, and view the results.
import-example-data.py
from patterns import (
    Parameter,
    State,
    Stream,
    Table,
)

import pandas as pd

# Declare your table output
imported_data = Table('imported_data', mode='w')

# https://github.com/patterns-app/public_data
amazon_reviews = "https://raw.githubusercontent.com/patterns-app/public_data/main/data/amazon_reviews.csv"
# Using pandas, import via csv, replace with any of the above links
data_set = pd.read_csv(amazon_reviews)

# Write the imported data_set to the imported_data table in Patterns,
# be sure to configure it as an output
imported_data.write(data_set, replace=True)

On line 11, we declare an output table to write data to. On line 16, we use pandas to import the hosted CSV and store it in a dataframe. On line 20, we write the dataframe to the table imported_data.

Lines 11 and 20 are required to configure your graph and set node dependencies. Now, any downstream node that we build from imported_data will automatically execute when changes are detected.
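To see what such a downstream dependency might look like in code, here is a minimal sketch of a dependent Python node. It assumes the devkit's Table defaults to read mode when mode='w' is not passed and exposes a read() method; check the Patterns SDK reference for the exact API before relying on this.

downstream_example.py
from patterns import Table

# Assumption: a Table declared without mode='w' acts as an input,
# so this node re-runs whenever imported_data is updated upstream.
imported_data = Table("imported_data")

# Assumption: read() returns the table's records
records = imported_data.read()
print(len(records))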

2. Write a SQL data transformation

Now that we have data in our table imported_data, we will write a SQL query against it to demonstrate how to build a SQL data pipeline.

Try it out

  • First, press the + button, select a SQL node, and place it on your graph
  • Click on the SQL node you just placed to open the editor
  • Rename the node from sql to compute daily average review
  • Type the query below into the editor. As you type, you will see a connection form between the table imported_data and the SQL node compute daily average review; these nodes now have a reactive relationship. (A plain-pandas equivalent of this query is sketched after this list.)
compute_daily_average_review.sql
select review_time
, AVG(overall) as average_review
from {{ Table("imported_data") }}
group by review_time
  • Click the output table to open it, then rename it to daily_avg_review
  • Run your node!
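To double-check what the transformation computes, here is a rough plain-pandas equivalent you can run locally. It mirrors the SQL above; sql_check.py, url, and daily are illustrative names, not part of the Patterns API.

sql_check.py
import pandas as pd

url = "https://raw.githubusercontent.com/patterns-app/public_data/main/data/amazon_reviews.csv"
df = pd.read_csv(url)

# Equivalent of: select review_time, AVG(overall) as average_review ... group by review_time
daily = (
    df.groupby("review_time", as_index=False)["overall"]
    .mean()
    .rename(columns={"overall": "average_review"})
)
print(daily.head())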

3. Visualize data with a Chart node

Great, we now have aggregated data in a table. Next, we'll use a Chart node, which uses Vega-Lite syntax, to build a data visualization.

Try it out

  • Add a new chart node to the app
  • Go to the config panel and add daily_avg_review as the data source
  • Add the below Vega-Lite spec
chart.json
{
  "description": "Daily average review over time",
  "mark": "point",
  "encoding": {
    "x": {"field": "review_time", "type": "temporal"},
    "y": {"field": "average_review", "type": "quantitative"}
  }
}
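If you want to preview the same spec outside Patterns, the Altair library renders Vega-Lite from Python. This is an optional local sketch; chart_preview.py is a hypothetical script, and daily is the pandas aggregation from the sketch in the previous step, not anything produced by Patterns.

chart_preview.py
import altair as alt
import pandas as pd

# Rebuild the aggregated data locally (same as the SQL step)
url = "https://raw.githubusercontent.com/patterns-app/public_data/main/data/amazon_reviews.csv"
df = pd.read_csv(url)
daily = (
    df.groupby("review_time", as_index=False)["overall"]
    .mean()
    .rename(columns={"overall": "average_review"})
)

# Same encoding as chart.json: point mark, temporal x, quantitative y
alt.data_transformers.disable_max_rows()  # in case the aggregated table is large
chart = alt.Chart(daily).mark_point().encode(
    x=alt.X("review_time:T"),
    y=alt.Y("average_review:Q"),
)
chart.save("daily_avg_review.html")  # open in a browser to view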