
Sequoia founder Don Valentine would ask founders two questions: “why now?” and “so what?” At the heart of these questions is the combination of curiosity and rigor that asks what has changed in the world (why now?) and what will this mean (so what?). 

AI has a compelling “why now?” with the development of large language models (LLMs) trained with the Transformer architecture. Transformers are well-suited to GPUs, making it practical to marshal immense amounts of data and compute to train AI models with billions and even trillions of parameters.
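
To make that “why now?” concrete, here is a minimal sketch (in NumPy, with hypothetical toy shapes) of the scaled dot-product attention at the core of the Transformer. Nothing in it is exotic: it is a handful of dense, batched matrix multiplications applied to every token at once, which is exactly the kind of workload GPUs parallelize well.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation, expressed as batched matrix multiplications.

    Q, K, V have shape (batch, seq_len, d_model); every token attends to every
    other token in parallel, so the whole computation is dense linear algebra.
    """
    d_k = Q.shape[-1]
    # (batch, seq_len, seq_len) attention scores: one matmul per sequence
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)
    # softmax over the last axis turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # weighted sum of the values: another batched matmul
    return weights @ V

# Toy example: 2 sequences, 4 tokens each, 8-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(2, 4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 4, 8)
```

Real models add multi-head projections, feed-forward layers and enormous scale, but the workload stays the same: dense linear algebra that maps cleanly onto thousands of GPU cores.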

We also now have a persuasive “so what?” These technologies have enabled a whole new user interface for computers: human language. Just as the graphical user interface made the personal computer accessible to millions of customers in the 1980s, the new natural language interfaces have made AI accessible to hundreds of millions of users worldwide in the past year.

Artificial intelligence has endured many up and down cycles. When AI is in ascent, as it is now, the term is overused to include not only the leading edge of predictive technology but also any software that is in some sense “smart.” During previous AI Winters, however, researchers retreated to safer terms like machine learning.

The AI effect

John McCarthy coined the term AI in the 1950s to distinguish his research from his older rival Norbert Wiener’s cybernetics. Yet McCarthy himself became disenchanted with the term, complaining, “As soon as it works, no one calls it AI anymore.” He called this tendency to relabel past AI efforts with more functional descriptions once they were sufficiently solved the “AI effect,” and it persists to this day. The history of AI is littered with accomplishments that worked well enough that they are no longer considered sufficiently intelligent to earn the aspirational moniker.

As a quick refresher, consider computer vision, which underpins current advances in image generation. For a long time, detecting objects in images or videos was cutting-edge AI; now it’s just one of many technologies that let you order an autonomous vehicle ride from Waymo in San Francisco. We no longer call it AI. Soon we’ll just call it a car. Similarly, object detection on ImageNet was a major breakthrough of deep learning in 2012 and is now on every smartphone. No longer AI.

On the natural language side, there’s a long history before ChatGPT, Claude and Bard burst on the scene. I remember using Dragon Speech-To-Text circa 2002 to type emails when I had a broken arm. What was once called AI is now just “dictation” and is on every phone and computer. Language translation and sentiment analysis, once hard problems in NLP, are now mostly table stakes. Not AI.

It’s also easy to forget how much of what we take for granted on the cloud emerged from previous AI disciplines like recommendation systems (Netflix and Amazon) and path optimization (Google Maps and UPS). The more everyday a thing becomes, the less likely we are to call it AI.

The trough of disillusionment

What is considered AI and what is not matters to founders because, in the long run, it’s always better to underpromise and overdeliver. As Gartner has documented over decades of technology hype cycles, wild enthusiasm is invariably followed by disappointment: the trough of disillusionment.

Founders benefit in the short term from the buzzy marketing, but at a cost. Arthur C. Clarke famously wrote, “Any sufficiently advanced technology is indistinguishable from magic.” But he was a science fiction writer. Machine learning practitioners are scientists and engineers, and yet at first blush their efforts always appear to be magic, until one day they aren’t.

The current AI paradigm is like chasing a carrot on a stick while running on a treadmill. For today’s founders, I think it’s time to break this cycle by understanding what’s really going on.

A more precise vocabulary

There is a linguistic reason we keep making the same mistakes. If we use the Oxford English Dictionary to recursively dissect the term “Artificial Intelligence,” we find:

  • Artificial: made or produced by human beings rather than occurring naturally, especially as a copy of something natural.
  • Intelligence: the ability to acquire and apply knowledge and skills.
    • Knowledge: facts, information, and skills acquired by a person through experience or education; the theoretical or practical understanding of a subject.
    • Skills: the ability to do something well.

Each branch of this recursive definition hinges on either “human beings” or “a person.” So by definition we think of AI as imitative of humans. Think of the Turing test. But as soon as a capacity is firmly in the realm of machines, we lose the human reference point and we cease to think of it as AI.

Part of this is human exceptionalism. Over the centuries we have elevated the aspects of intelligence that seem uniquely human: language, imagination, creativity and logic. We reserve certain words for ourselves. Humans think and reason, computers calculate. Humans make art, computers generate it. Humans swim, boats and submarines do not. And yet “computer” was once a 17th-century job title for a human who calculated, and we employed rooms full of them before we formalized mechanical and electronic computers.

The AI effect is actually part of a larger human phenomenon we call the frontier paradox. Because we ascribe intelligence to whatever lies at the frontier beyond our technological mastery, that frontier will always be ill-defined. Intelligence is not a thing that we can capture but an ever-receding horizon that we turn into useful tools. Technology is the artifice of intelligence forged over millennia of human collaboration and competition.

Back in 2018, I was inspired by a post from Berkeley Statistics and Computer Science professor Michael Jordan. “Whether or not we come to understand ‘intelligence’ any time soon,” he wrote, “we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While this challenge is viewed by some as subservient to the creation of ‘artificial intelligence,’ it can also be viewed more prosaically—but with no less reverence—as the creation of a new branch of engineering.”

This led me to write my own post questioning the usefulness of calling this endeavor AI at all. Five years later, are we any closer to Jordan’s vision of a practical infrastructure for human augmentation? I believe we are, but we need a more precise vocabulary to harness the computational opportunity ahead.

Why now?

The amazing effectiveness of LLMs at generating coherent and believable language has taken almost everyone by surprise. The ability of diffusion models to generate highly detailed and aesthetically appealing images from text descriptions has also surpassed conventional assumptions. And there’s much more on the horizon: further improvements in language and images, generalization to video, and new innovations in robotics, autonomous vehicles, biology, chemistry and medicine.

All of these advances benefit from the infrastructure for distributed computing that the last wave of hyper-scaled tech companies built in the cloud. They also benefit from the sheer scale of data that has accumulated on the internet, particularly thanks to the ubiquity of highly usable mobile devices with their cameras, sensors and ease of data entry.

But calling all of these things AI confuses the public and founders about what really needs to be built and how to bring it all together in safe and moral ways that encourage both experimentation and responsible behavior. 

Given all of this amazing infrastructure, AI as a science project at the intersection of computer science, physics, cognitive science and neuroscience will surely advance along the frontier of understanding. It will continue to contribute useful applications as well, but if we call all of them AI, the term will quickly lose its meaning and its novelty.

So what?

Experts estimate that even with information retrieval, LLMs are accurate roughly 90% of the time. There is still a lot of research and scaling to do to get to 99%. However, once they reach that 99%, they will no longer be AI; they will be “language interfaces” or simply LLMs. You will be able to write code on the fly, communicate with people in other languages, learn or teach anything you are interested in, and more. The impact will be real. But we will not call it AI. These new capabilities will become invisible to us, additional parts of our extended minds along with our search engines and smartphones.
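
As a minimal sketch of what “information retrieval” paired with an LLM looks like in practice (often called retrieval-augmented generation): relevant documents are fetched first and prepended to the prompt, so the model answers from grounded context rather than memory alone. The document store, the word-overlap retriever and the llm_complete() stub below are hypothetical placeholders, not any particular product’s API.

```python
# Hypothetical toy corpus standing in for a real document store.
DOCUMENTS = [
    "Waymo operates autonomous ride-hailing in San Francisco.",
    "The Transformer architecture was introduced in 2017.",
    "Diffusion models generate images from text descriptions.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an actual language-model API."""
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    # Fetch supporting context, then ask the model to answer from it.
    context = "\n".join(retrieve(question, DOCUMENTS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm_complete(prompt)

print(answer("Where does Waymo offer autonomous rides?"))
```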

This is the frontier paradox in action. AI is accelerating so quickly that it will soon simply be technology and a new frontier will be AI. Graduating to technology should be seen as a badge of honor for an idea that previously was on the cutting edge of possible. The frontier paradox means AI will perpetually refer to aspirational approaches, while technology will refer to what can be put to work today. It is our belief that we need both. 

What’s next?

At Sequoia we have tried to become more precise about how we discuss AI internally and with founders. We focus on specific technologies that can be put to work, like transformers for large language models or diffusion for image generation. This makes our ability to evaluate a venture much more explicit, tangible and real.

The entrepreneurial journey starts with language. It is only through language that companies can express the uniqueness of their product and its benefit to customers long before it is ready to ship. The precision of language is the key to category creation, company design and market leadership—the components that make enduring companies.

This precision is even more important as founders surf the rising waves of AI to stay right on the frontier. The founders who can define the language as this frontier turns into everyday technology will have a distinct advantage.


If you’re founding a company pushing the frontier of AI into useful technology, we’d love to hear from you. Ambitious founders can increase their odds of success by applying to Arc, our catalyst for pre-seed and seed stage companies. Applications for Arc America Fall ’23 are now open—apply here.
