Nash's Operational Intelligence Bot, Satoshi II
December 21, 2023
Nash: Bringing equilibrium to the last-mile delivery market
Nash, a San Francisco-based logistics tech company, empowers businesses of any size to provide dependable local delivery services to their customers. The solution is ideal for businesses such as restaurants, grocery stores, or local dry cleaners, which can implement pickup and drop-off services at times convenient for their customers without having to manage their own delivery fleet.
Established in 2021, Nash gained early traction and joined Y Combinator, followed by a $20 million Series A round led by a16z. Leveraging this investment, Nash has tripled its workforce, demonstrating a strong commitment to growth and service excellence.
The results speak for themselves: the business has expanded quickly, both internationally and into multiple verticals, with corresponding volume growth. This growth underscores Nash's operational efficiency and cements its position as a leader in reliable, scalable delivery solutions for businesses aiming to enhance their customer service experience.
The Challenge: Efficiently Scaling Complex Operations
Navigating the intricacies of the on-demand delivery market, known for its operational complexity and substantial capital requirements, the Nash team continuously seeks innovative solutions to enhance their operational scalability.
At the forefront of this endeavor is Jerry Shen, Nash's Head of Data. Faced with the challenge of making data more accessible to operational teams without expanding the analyst team, Jerry discovered Patterns, a transformative solution that addressed his needs.
Prior to adopting Patterns, Nash utilized Mode Analytics in conjunction with a Fivetran and Snowflake infrastructure. Although Mode Analytics proved effective for creating static dashboards and tracking key performance indicators, it fell short in its ability to anticipate and address the diverse and dynamic queries arising from the various operational teams Jerry oversees.
Recognizing the potential of Large Language Models (LLMs) in developing AI agents capable of automatically responding to queries, Jerry initially considered creating a bespoke solution using LangChain. However, Patterns emerged as a more suitable choice, offering a comprehensive, specialized solution for end-to-end analytical question answering. This strategic shift promises to significantly streamline Nash's data management, ultimately enhancing their operational efficiency in the fast-paced delivery market.
Welcoming Satoshi II - Nash’s AI Bot
Jerry recently introduced the team to Satoshi II, affectionately named after a colleague's dog, Satoshi. Onboarding Satoshi II was akin to welcoming a new team member: over the course of a week, Jerry provided it with context on Nash's business operations and metrics, all the context it needed to do its job.
Below are the tailored instructions Jerry formulated to ensure Satoshi II's quick onboarding and proficiency in understanding Nash's unique business environment.
Creating Satoshi II's Global Context
Global Context is appended to Patterns' built-in LLM context and lets anyone customize the bot's behavior. Jerry broke Satoshi II's Global Context down into three categories: 1) business context, 2) table join examples, and 3) response hints. Read about these below:
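Jerry's actual Global Context document isn't reproduced in this post, but as an illustration, a document along these lines might cover the three categories as follows (all table names, join keys, and hints here are hypothetical, not Nash's):

```text
1) Business context
Nash connects merchants to third-party courier fleets. An "order" is a single
delivery request; a "job" is one fulfillment attempt by a courier provider.

2) Table join examples
orders joins to jobs on orders.id = jobs.order_id; one order can have several
jobs if it is reassigned to another provider.

3) Response hints
Report rates as percentages with one decimal place. Unless the user says
otherwise, default the date range to the last 7 days.
```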
Teaching Satoshi II metrics and calculations
Nash's team relies on Satoshi II to generate correct metrics. To make sure Satoshi II isn't making anything up, Jerry created a metrics document that defines each metric in pseudo-SQL: essentially, a natural-language description of how you would calculate the metric from the given tables. Here is an example of the kind of metrics that drive Nash's business:
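The actual metrics document isn't shown in this post; as a hedged sketch, a pseudo-SQL metric definition might look like the following (the metric, table, and column names are hypothetical, not Nash's):

```sql
-- on_time_dropoff_rate: of all delivered orders, the share whose actual
-- dropoff happened no later than the quoted dropoff time.
-- In words: count delivered orders where actual_dropoff_at is at or before
-- quoted_dropoff_at, divided by the count of all delivered orders.
SELECT
  COUNT(CASE WHEN actual_dropoff_at <= quoted_dropoff_at THEN 1 END) * 1.0
    / COUNT(*) AS on_time_dropoff_rate
FROM orders
WHERE status = 'delivered';
```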
Bottom line
Treat Satoshi II as a junior analyst. Follow these principles when you interact with LLM-powered AI bots, just as you would when working with a junior analyst:
Be articulate and describe your needs accurately. AI analysts handle well-defined data pulls (i.e., pull this metric by this dimension with these filters) much better than vague questions (what's the performance of xyz?).
Clean up your data model and only expose useful tables and columns to the AI analyst. AI analysts make more mistakes when interacting with complex joins; the success rate is much higher when the AI analyst works with cleanly modeled tables.
Start simple and iterate. AI analysts are not guaranteed to do well on ambiguous tasks; the success rate is much higher if you follow these steps (each step is a prompt): pull a simple statistic → repeat the analysis with added filters and dimensions → make visualizations → ask the AI what conclusions it can draw from the analysis. For example, instead of asking a question like "how is changing the pickup window going to impact the drop-off on-time rate?", ask "first, make a plan to analyze how changing the pickup window will impact the drop-off on-time rate", and draw inspiration from the plan.
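Concretely, that step-by-step approach might play out as a sequence of prompts like the following (the metric and dimensions are illustrative, not Nash's actual schema):

```text
Prompt 1: "What was the average dropoff delay last week?"
Prompt 2: "Repeat that, broken down by city, for orders placed on weekends."
Prompt 3: "Plot the result as a bar chart by city."
Prompt 4: "What conclusions can you draw from this analysis?"
```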
Satoshi II’s Performance Review
In its first month of operation, Satoshi II, Nash's AI bot, has demonstrated remarkable efficiency and effectiveness. By autonomously answering 162 questions for Jerry, the tool has proven to be a significant time-saver. Considering that each ad hoc question typically takes about 30 minutes to address, Satoshi II has saved Jerry roughly 80 hours of work in just one month (162 questions × 30 minutes).
This achievement sheds light on a common challenge within operational teams: the reluctance to inundate the data team with queries, despite having numerous questions that could improve decision-making and operations. In many cases, the solution has been to hire an analyst solely to respond to queries and proactively build dashboards. However, this approach is not always feasible or efficient.
Patterns, through its generative bots like Satoshi II, offers an innovative solution to this problem. As demonstrated by Nash's experience, these AI tools can effectively fill the gap, providing timely and accurate data analysis without overburdening the data team. This approach not only optimizes the workflow but also ensures that operational teams have access to the insights they need without hesitation or delay.