Introduction

Build and deploy operational data apps, quickly

Patterns is a reactive graph architecture with powerful node abstractions — Python, SQL, Table, Chart, Webhook — and structurally typed data interfaces that let you build and deploy data apps, automations, ETL, and pipelines in minutes, whether from scratch or using components and forkable apps from our marketplace. Every Patterns app is fully defined by git-backable code, with two-way versioning between that code and our in-app UI experience.

We’re on a mission to make data more accessible for everyone who is part of the data workflow: data engineers, scientists, and analysts.

Design Principles

  • Composable - you can easily connect and assemble nodes in many combinations. Entire data systems can be nested and cloned.
  • Multi-modal - native support for both record and table operations within streaming and batch environments.
  • Dev == Prod - avoid the issues that arise when moving between development and production environments. Develop in an environment configured identically to the one you will run in production, and move between the two with ease.
  • Your data, your code - all your work in Patterns is backed by a human-readable graph.yml configuration file and node files, making it portable and easy to version control. Your data is backed by its own self-contained database, accessible from other programs via its database address.
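
To make the graph.yml idea concrete, here is a rough sketch of how such a file might wire a webhook, a Python node, and a SQL node together. The field names below are illustrative assumptions for this example, not the documented Patterns schema:

```yaml
# Hypothetical sketch only: field names are illustrative,
# not the documented Patterns graph.yml format.
title: orders-pipeline
functions:
  - webhook: ingest_orders          # streaming ingestion endpoint
  - node_file: clean_orders.py      # Python transformation node
    inputs: ingest_orders
  - node_file: daily_revenue.sql    # SQL aggregation node
    inputs: clean_orders
```

Because the whole app lives in a file like this plus its node files, it can be checked into git, reviewed, and cloned like any other codebase.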

Architecture

(Patterns architecture diagram)

Use Cases

Patterns has a simple but powerful design that makes it easy for you to build solutions to many common data problems.

  • Data extraction and replication from external databases and APIs to a central data warehouse
  • Streaming data ingestion via webhooks
  • Data modeling and pipelines, ETL/ELT, SQL or Python transformation functions
  • Training and deploying machine learning models
  • Reverse ETL / data sync between SaaS apps
  • Data visualization and dashboard building
  • Data lineage, schema mapping, and documentation
  • Metrics definition and standardization
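
For example, the transformation step in a data-modeling use case could be ordinary Python inside a Python node. The function and field names below are hypothetical, chosen for illustration, and this sketch does not use any Patterns-specific API:

```python
# Illustrative sketch: the kind of logic a Python transformation node
# might contain. Names (clean_orders, order_id, etc.) are hypothetical.
from datetime import datetime

def clean_orders(records):
    """Normalize raw order records: drop rows missing an id,
    coerce amounts to floats, and parse ISO-8601 timestamps."""
    cleaned = []
    for rec in records:
        if not rec.get("order_id"):
            continue  # skip malformed rows
        cleaned.append({
            "order_id": rec["order_id"],
            "amount_usd": float(rec.get("amount", 0)),
            "created_at": datetime.fromisoformat(rec["created_at"]),
        })
    return cleaned
```

In a pipeline like the ones listed above, a node such as this would sit between a raw ingestion source (e.g. a webhook) and downstream SQL models or charts.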

What’s the Big Deal?

No longer will you need five different tools, one for each part of your data stack! No switching tabs or managing multiple logins, billing accounts, and user permissions across data products. No checking the state of five different applications to see whether your job ran successfully. If you change a schema, the change propagates through the entire system seamlessly.

This powerful and extensible toolset lets you tackle any kind of data problem, making it great for collaboration across scaling teams of data engineers, scientists, and analysts.