Partnering with LangChain: The LLM Application Framework
Harrison, Ankush and their team are building the AI programming framework every engineer needs.
We are at the dawn of a new age of applications, with large language models (LLMs) as the enabling technology. But programming an LLM remains more art than science: the instructions that elicit robust, reliable performance from these non-deterministic models are still poorly understood. When it comes to generative AI, we are still in a wild west of prompt hacking, eyeballed outputs and duct tape.
We have seen this movie play out before. In Software 1.0, frameworks such as React and Next.js have abstracted, modularized and organized the most common building blocks of software engineering, making it fast and reliable to build and test your next mobile app or website.
Generative AI, too, deserves a proper programming framework. Developers are craving a common set of best practices and composable building blocks for their LLM applications, to simplify the complex and to avoid reinventing the wheel. LangChain founders Harrison Chase and Ankush Gola have designed just that: an open-source toolkit for building sophisticated, data-aware applications on top of LLMs.
Engineering teams use LangChain to accomplish myriad tasks: accessing datasets to power retrieval-augmented generation (LangChain boasts over 600 integrations); running “chains,” or composed sequences of LLM and tool calls, to enable more complex applications (some companies run chains with thousands of links or more); giving LLMs access to external tools; and using LLMs as full-fledged “agents” capable of planning and reasoning to accomplish goals.
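For a flavor of what a simple chain looks like in code, here is a minimal sketch using LangChain's expression language. It assumes the langchain-openai integration package is installed and an OPENAI_API_KEY is set in the environment; the ticket-summarization use case is invented for illustration.

```python
# A minimal two-step chain: prompt template -> chat model -> string parser.
# Assumes `pip install langchain-openai` and OPENAI_API_KEY in the environment;
# the support-ticket example is hypothetical.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Summarize this support ticket in one sentence:\n\n{ticket}"
)

# The | operator composes the steps into a single runnable chain.
chain = prompt | ChatOpenAI(model="gpt-4") | StrOutputParser()

print(chain.invoke({"ticket": "My order arrived damaged and I need a refund."}))
```

Production chains layer in retrievers over those 600-plus integrations, tool calls and branching, but they compose out of the same runnable pieces.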
Zooming out, LangChain is a platform that allows companies to build the scaffolding of their applications’ cognitive architectures. It is paving the way for developers to build much more powerful LLM applications, going beyond simple one-shot queries to intelligent, interconnected systems in which these reasoning engines truly shine. These more complex LLM operations are what produce jaw-dropping results, and we at Sequoia believe that LangChain can be a force multiplier on the pace and quality of AI application and agent development.
Harrison was a standout machine learning engineer at our portfolio company Robust Intelligence when he started LangChain as a nights-and-weekends side project, with no plans to build a company. But after he released the first version, it quickly took off, and he soon teamed up with Ankush, a star colleague from Robust. From the start, they have been humble, responsive and fast—all key ingredients for keeping up in the quickly evolving world of generative AI. And above all, they are community-first, spearheading a community of more than 2,000 open-source contributors.
Today, more than 50,000 LLM applications have been built using LangChain, with use cases spanning internal apps, autonomous agents, games, chat automation, security scanners and more. And while the platform is already a mainstay at weekend hackathons, we have also been excited to see many of our portfolio companies moving their LangChain use cases into production. Soon, your delivery order or customer support or gaming experience may be powered by LangChain.
And there is more to come. To continue the Software 1.0 analogy, developers benefit not just from open-source frameworks like React and Next.js, but also from services such as GitHub, Datadog, Vercel and Heroku to deploy their applications to production and monitor them once there. LangChain’s ambitions are similarly expansive.
The company’s latest “big bet” is LangSmith, a product that helps developers debug, test and monitor their LLM applications. As customers move from prototyping into production, LangSmith provides the visibility and tooling they need to make their generative AI applications more performant and reliable. Elastic built its Elastic AI Assistant for security with LangChain and leverages LangSmith for visibility, helping it get to production fast. Rakuten relies on LangSmith for rigorous testing and benchmarking, letting the team make systematic tradeoff decisions about Rakuten AI for Business, its copilot built with LangChain. And Moody’s relies on LangSmith for automated evaluation, easy debugging and experimentation, so it can quickly iterate and innovate.
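For context on how teams like these wire it up, LangSmith tracing is typically enabled through environment variables rather than code changes; the sketch below is a minimal example, with the API key and project name as placeholders.

```python
# Minimal LangSmith setup sketch: tracing is switched on via environment
# variables, so no application code needs to change. The key and project
# name below are placeholders.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "support-copilot"  # hypothetical project name

# Any LangChain chain or agent invoked from here on is traced to LangSmith,
# where each run can be inspected, debugged and added to evaluation datasets.
```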
Since the announcement of LangSmith’s closed beta in July, 70,000 users have signed up and more than 5,000 companies now use LangSmith every month. We look forward to seeing that list grow now that the product is generally available. More recent product launches include LangServe, an easy way to deploy LangChain applications into production, and LangGraph, a library for building stateful, multi-actor applications with LLMs; a minimal sketch of the latter follows below.
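To make “stateful, multi-actor” concrete, here is a minimal LangGraph sketch. The state schema and the revise node are invented for illustration; a real graph would call an LLM inside its nodes and add conditional edges between multiple actors.

```python
# A minimal LangGraph sketch: a one-node state machine that carries explicit
# state between steps. State schema and node are hypothetical; assumes the
# langgraph package is installed.
from typing import TypedDict

from langgraph.graph import END, StateGraph


class State(TypedDict):
    draft: str
    revisions: int


def revise(state: State) -> State:
    # In a real application this node would call an LLM to improve the draft.
    return {"draft": state["draft"] + " (revised)",
            "revisions": state["revisions"] + 1}


graph = StateGraph(State)
graph.add_node("revise", revise)
graph.set_entry_point("revise")
graph.add_edge("revise", END)

app = graph.compile()
print(app.invoke({"draft": "Hello world", "revisions": 0}))
```

Because the state is explicit, multiple nodes (actors) can read and update it in turn, which is what enables the long-running, cyclical workflows that plain one-shot chains cannot express.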
All of this is still just a taste of what LangChain has in the pipeline. The team is building in a rich problem space with plenty to explore, guided by a passionate user community with no shortage of important problems to solve.
We are thrilled to partner with LangChain and lead their Series A.