How Google Labs is Pushing the Frontier of AI Applications, with Josh Woodward
Training Data: Ep34
As VP of Google Labs, Josh Woodward leads teams exploring the frontiers of AI applications. He shares insights on their rapid development process, why today’s written prompts will become outdated and how AI is transforming everything from video generation to computer control. He reveals that 25% of Google’s code is now written by AI and explains why coding could see major leaps forward this year. He emphasizes the importance of taste, design and human values in building AI tools that will shape how future generations work and create.
Summary
Google Labs VP Josh Woodward shares insights into the innovative work and culture within the lab, emphasizing rapid prototyping, market fit exploration and the evolving landscape of AI-driven products. His experience highlights the intersection of creativity, technology and strategic thinking necessary for AI founders and innovators to succeed in a fast-paced industry. Here are the key insights from the conversation:
Iterating on both product and market is crucial. While many focus on product development, finding the right market fit is equally important to ensure that the product meets the needs and interests of the target audience. This dual iteration process can lead to more successful and sustainable AI products.
The future of AI interaction is moving away from traditional text prompts. Woodward suggests that the context provided to AI models will evolve to include more natural inputs like images, videos and voice, making the interaction more intuitive for end users and opening new avenues for AI application.
Google Labs fosters a culture of rapid experimentation and acceptance of failure. By moving quickly from ideas to user testing, it maintains a startup-like environment within a large corporation, encouraging creativity and innovation. This approach allows small teams to develop and test products that have the potential to scale significantly.
The application layer in AI is where real value is created. While there is significant development in AI models and tools, the integration of these technologies into applications that enhance user workflows and creativity is where substantial impact and opportunities lie.
Focus on amplifying human creativity rather than replacing it. AI continues to advance, particularly in video and agent technologies, and by aligning AI products with the natural progression of models becoming smarter, faster and cheaper, founders can ensure their solutions remain relevant and beneficial in the long term.
Transcript
Chapters
- Writing prompts is old fashioned
- What is Google Labs?
- What projects to take on next?
- Veo 2: solved and unsolved problems in AI video
- Building things that don’t quite work
- Is the pace of progress accelerating?
- The future of video consumption
- Pixel stream vs 3D models?
- Agents and Google Mariner
- The allure of infinite context
- When will computer use be good enough?
- Hiring any other authors?
- Where to build in 2025?
- What’s overhyped in AI right now?
- What’s under the radar?
- Lightning round
Josh Woodward: What I found, too, building products over the years is it’s very common—everyone talks about product-market fit, you’ll know it when you see it and all that, which is true. But at least for me, I’ve always felt in the first part of building products, you iterate a lot on the product, and sometimes you forget to iterate on the market. And finding the right market is also just as important as the right product. And you have to connect those two.
And so I think that in these early stage things with Mariner, that’s where we are. It’s like, is it possible for a computer to—like, an AI model to drive your computer? Yes, that’s a huge new capability. Is it accurate? Sometimes. Is it fast? Not at all yet. Like, that’s kind of where we are in terms of the actual kind of use case or the capabilities. And then it’s about finding the right market.
Writing prompts is old fashioned
Sonya Huang: Josh, thank you so much for joining me and Ravi today. We are excited to hear everything that you’re doing over at Google Labs. Maybe first to start, you mentioned a provocative topic to me on your way in here: Writing prompts is old fashioned. What do you mean by that?
Josh Woodward: Okay, so thanks for having me. I do think it’s old fashioned. We’ll look back at this time from an end user experience and say, “I can’t believe we tried to write paragraph-level prompts into these little boxes.” So I kind of see it splitting a little bit right now. On the one hand, as a developer, an AI engineer—you should see some of the prompts that we’re writing in Labs right now, these beautiful, like, multi-page prompts. But I think for end users, they don’t have time for that. And you have to be almost like some sort of whisperer to be able to unlock the model’s ability. So we’re seeing way more pull and traction—I’ve kind of seen this in other products in the industry, too, right now—around how can you bring your own assets, maybe as a prompt, drag in a PDF or an image, sort of recombine things like that to sort of shortcut this giant paragraph writing. So I think it’s going to kind of divide. I think as engineers, AI engineers, you’ll keep writing long stuff, but for most people in the world, we’re probably in a phase that’ll sort of fade out here pretty soon.
Ravi Gupta: So the form of the context will change, right?
Josh Woodward: That’s right.
Ravi Gupta: You still have to give the model something.
Josh Woodward: Yeah.
Ravi Gupta: But it might be that you can communicate it via a picture or communicate it via, like, just look at this set of documents.
Josh Woodward: Yeah. A voice, a video, any of that. These models love context.
Ravi Gupta: Yeah.
Josh Woodward: So the context is not going to go away, but we’re making a lot of bets right now that the type of context and the way you deliver the context, that’s changing really fast right now.
What is Google Labs?
Sonya Huang: I love it. Okay, we’re going to go deeper into the future of prompts and multimodal models in this episode. Maybe before we do all that, say a word on what is Google Labs? What’s the mission? And tell us a little bit more about where you sit inside Google.
Josh Woodward: Yeah. So Google Labs, if anyone’s heard about it, we had one a long time ago that went dormant for a while, and this version got started about three years ago. It’s really a collection of builders. We’re trying to build new AI products that people love. So they can be consumer products, B2B products, developer products. It’s all zero to one. It tends to attract an interesting mix of people, many people who have been at Google a while, but also a bunch of startup founders and ex-founders.
And so we kind of mix these people together and we basically say, “What’s the future of a certain area going to look like?” Say the future of creativity or software development or entertainment. And they go off in small little teams and they just start building and shipping. And so that’s how it operates. And it sort of sits outside the big traditional Google product areas, but we work a lot together. But there’s kind of an interesting interplay there—and I think that’s been part of what’s been fun about it, is you can kind of dip in and maybe work with Search or Chrome or other parts of Google, but you also kind of have the space to explore and experiment and try to disrupt, too. And that’s kind of what we’re up to.
Ravi Gupta: How do you create the culture inside a lab that you want? If you think about it—there’s got to be a lot more failure, presumably, than there is in other parts. There’s got to be a different metric for success than there is at the sheer scale of Google.
Josh Woodward: Yeah.
Ravi Gupta: So what is the culture you’re trying to create and how do you create it?
Josh Woodward: So we really pride ourselves in trying to be really fast moving as a culture. So we’ll go from an idea to end users’ hands in 50 to 100 days, and we do all kinds of things to try to make that happen. So speed matters a lot, especially in kind of an AI platform shift moment.
The other thing we think a lot about is that big things start small. And if you’re in a place like Google, you’re surrounded by some products that have billions of people using them. And people forget that all these things started with solving usually for one user and one pain point. And so for us, we get really excited if we get, like, 10,000 weekly active users. [laughs] It’s like, you know, we’ll celebrate that. That’s a big moment when we’re starting a new project. And for a lot of our other kind of groups inside Google, their dashboards don’t count that low, right? So there’s kind of this moment where the size of what we’re trying to do is very small. It probably looks a lot like companies you all work with, honestly, from that perspective.
And I think the other thing we’re trying to do is because we sit outside the big groups at Google, we kind of have one foot in the outside world. We do a lot of building and kind of co-creating with startups and others, but also one foot inside Google DeepMind. And so we’ve got kind of a view of where the research frontier is, and more importantly where it’s going. And so we’re often trying to take some of those capabilities in. So we take a lot of pride in sort of finding people who are very creative, people who almost, like, see themselves as underdogs. They have kind of a hustle to them. And so we have this whole doc called “Labs in a Nutshell.” And my favorite section in the doc is called “Who Thrives in Labs?” And there’s, like, 16 or 17 bullets that just list them out. And that’s kind of how we try to build the culture. But you do have to normalize things like failure. You have to think about things differently around promotion, compensation, all these things that you kind of would do in a company, too.
Sonya Huang: You mentioned the DeepMind links. I think that is super cool.
Josh Woodward: Yeah.
Sonya Huang: What have you found is the kind of ideal kind of product builder persona inside Labs? Is it somebody with a research background? Is it somebody who comes from a successful consumer product background? Is there the magical unicorn that’s great at both research and products?
Josh Woodward: Yeah, we take as many unicorns as we can find. [laughs] And we actually have found some, which is great. You do look for that kind of deep model expertise as well as kind of like a consumer sensibility in terms of …
Sonya Huang: And those people exist.
Josh Woodward: They exist. They’re great, too, if you can find them. And we also kind of have found ways to kind of train or develop people. So that’s another thing we think a lot about is, like, how do you bring in often people that might not be the normal talent that you look for? So, like, we’re always in the interesting kind of zone of like who’s undervalued, who’s kind of like really interesting, but maybe not on paper, but when you interact with them, you look at their GitHub history, I mean, there’s all these different signals you can look at. But yeah, that’s kind of how we would think about it.
What projects to take on next?
Sonya Huang: Really cool. How do you decide what projects to take on next? Is it bottom up? Top down? How does that work?
Josh Woodward: Yeah, great question. We kind of do a little bit of a blend, actually. So on the top down side, we’re looking at what are the areas that are kind of on mission for Google, that are strategic to Google. Because we sit inside it, we’re thinking about ourselves in that broader context.
So that may be for example, like, what would the future of software development look like? There’s tens of thousands of software developers at Google, and obviously this is an area that AI is clearly going to make a big change in. So we’ll be thinking about, could we build things for other Googlers, but also externally how do we build things like that? So we take that kind of top down view. Think of it as almost—I’m from Oklahoma, we like to fish a lot in the summer. But, like, you’re trying to figure out what’s the right pond to fish in. So we put a lot of thought into those, like, ponds to fish in.
Sonya Huang: Okay.
Josh Woodward: But then we let a lot of these teams—often they’re four- or five-person teams—come up with the right user problems to go try to solve. And that’s where we kind of meet in the middle, and I think for a lot of other teams, they might look at what we do as a little chaotic. You know, we don’t have, like, multi-quarter roadmaps. Like, we’re trying to survive to the next, whatever, 10,000-user milestone and then try to grow it. But I would say it’s kind of that sort of blend.
Ravi Gupta: What’s one of the products that you guys have built that you’re excited about now?
Josh Woodward: Oh, yeah. So I guess if you’ve ever used the Gemini API or AI Studio or Notebook LM or any of Veo, any of these things, these are products that we’ve kind of worked on from Labs. I mean, maybe I’ll talk about one that’s better known and one that’s coming up. So very excited about where Notebook LM is going. I think we’ve hit on something where you can bring your own sources into it, and the AI really digs into that stuff and then you’re able to kind of create things. So a lot of people maybe have heard the podcast that came out last year. There’s so much coming that follows this pattern, so watch this space.
Ravi Gupta: [laughs]
Josh Woodward: There’s just a lot you can do with that pattern. And I think what’s really interesting is it gives people a lot of control. They feel like they’re steering the AI. We have this term on the team, and actually one of the marketing people came up with was like an AI joystick, that you’re kind of controlling it. So that’s interesting.
I would say there’s a lot of stuff coming right now. We’re very excited about Veo, Google’s video model, and Imagen, the image model, and where those kind of come together. So we’ve got really interesting products coming along in this space. I think maybe we can talk about that some at some point. But I think generative video has kind of moved from this moment of almost possible to possible. And I think…
Veo 2: solved and unsolved problems in AI video
Ravi Gupta: Well, let’s talk about it now. Tell us.
Josh Woodward: Yeah. Yeah. Well, I think it’s interesting because these models are still huge. To run, like, Veo 2 takes hundreds of computers, right? So the cost is very high. But just like we’ve seen with the text-based models like Gemini, and even the ones from OpenAI and Anthropic, the cost has reduced, like, 97 times in the last year. So if you kind of assume cost curves like that, that’s what you’re going to see with these Veo models. What’s kind of brand new, say with Veo 2, is it’s really cracked really high quality and physics. So the motion, the scenes. If you talk to a lot of these AI filmmakers, they talk about what’s your cherry pick rate? Which is a term for, like, how many times do you have to run it to pick out the thing that’s really good. And what we’re seeing with something like Veo is the cherry pick rate is going down to, like, one time, got what I want. And so the instruction following, the ability for the model to kind of adhere to what you want is really cool. So I think when you put that in tools, you’re now able to convey ideas in a whole different way.
Sonya Huang: What do you think are the solved problems and the unsolved problems in AI video generation? Because I remember last year, it was like—even last year there was so much talk about generative video is a physics simulator, for example.
Josh Woodward: Right. Right.
Sonya Huang: It can kind of emulate physics. And it’s like, that’s amazing. Is the physics stuff solved, do you think? Like, what else is—you know, what’s done and then what’s to be solved?
Josh Woodward: Yeah. I would say physics is a hard thing to solve forever, but it’s close. I would say it’s close enough. Yeah, but six months ago, a year ago, few years ago, you had Will Smith eating, you know, pasta. It was a disaster. And then even last year you had kind of these videos of, like, knives cutting off fingers, and there were six fingers. You know, it was like, that’s where we were.
Sonya Huang: Yeah.
Josh Woodward: So I think physics? Tons of progress. The ability to do photorealistic quality, very huge progress. The ability to kind of do jump scenes and jump cuts and different sort of camera controls, that’s really coming into almost solved. There’s paths to solve all this stuff. Still got to solve the efficiency and serving cost, I would say, and probably still have to figure out a little bit more of, like, the application layer of this. Because I think this is another big opportunity, as we’ve seen with a lot of other modalities with AI. You get kind of the model layer, you get kind of the tool layer, and then the real value, we think, is in this application layer. And so I think it’s really interesting to rethink workflows around video. And I think that’s pretty wide open right now.
Sonya Huang: Do you think that models are capable of even having video that is malleable at the application layer? So for example, if I want to have character consistency between scenes, are the models even capable of that? Or I imagine you want model steerability in order to be able to kind of work with it at the application level. Like, what is model readiness, and what’s required in order to be able to do magic at the application level?
Josh Woodward: Yeah. So I was talking to a couple of AI filmmakers this week, and what they’re really interested in is exactly what you’re saying: character consistency, scene consistency, camera control. It’s almost like we need to build an AI camera. You think of some of the cameras that are kind of filming us right now—this is decades of technology that’s kind of been perfected for a certain sort of input/output. And I think we’re on the verge of kind of needing to create a new AI camera. And when you do that, you can generate an infinite number of scenes. You can generate, like, oh, you’re wearing a red sweater; now make it blue. And not just in that scene, but in, like, a whole two-hour film.
So there’s all kinds of ways that we’re starting to see these prototypes that we’re working on, too, internally, where this is here. Like, it’s coming. Where, I think, things that used to either be too expensive or too time-consuming, or required a certain skill level—we kind of talk internally on the team about how do you kind of lower the bar and raise the ceiling? And what we think about when we’re building products is how do you make something more accessible, or how do you let the pros take it and just blow the quality out of the water and make incredible stuff. So that’s what we’re seeing with video. It’s kind of right at that point where both are happening.
Building things that don’t quite work
Ravi Gupta: There was an interesting tweet from—or post from Paul Graham recently on this idea, I think, of based on the pace of progress he’s like, you sort of want to be building things that kind of don’t quite work.
Josh Woodward: Yes.
Ravi Gupta: And are way too expensive.
Josh Woodward: Yes.
Ravi Gupta: Right?
Josh Woodward: Yes.
Ravi Gupta: Because they’re going to work and their cost is going to come way down. And so I would imagine that has applicability for you guys, too, particularly in video.
Josh Woodward: That’s exactly how we do it. Yeah. I mean, right now, I don’t know off the top of my head, but each eight-second video clip generated is obscenely expensive. But we’re basically building for a world where this is going to be like, you’re going to generate five at a time, not even think about it. One of the actual principles I’ve kind of learned just over the last few years working on all this AI stuff is make sure your product is aligned to the models getting smarter, cheaper, faster. And if your core product value prop can benefit from those tailwinds, you’re in a good spot. If any of those are not right, question your existence. That would be my summary takeaway on that.
Ravi Gupta: I like that.
Sonya Huang: How far do you think we are from having economics of video generation that are right side up, where it costs less to generate the thing than the economic value of generating it?
Josh Woodward: Yeah. Oh wow, this is tough. This is a prediction you’re never really sure about. I don’t know, but I would say one thing we’re seeing just as we’re modeling out a lot of costs because we’re starting to put Veo into some of our own tools that are coming out, is we’re probably going to need innovation on the business model side in addition to just the product and the application layer. And what I mean by that is you could—our first thought was oh, let’s just make a subscription and then just charge per usage on top. That might be a way to do it. Another way to do it is, when you talk to some of these creatives, whether they’re in Hollywood or even these AI filmmakers that are popping up, they’re kind of like, “Okay, I want this output and I’m willing to pay this much.” And it’s kind of a pay-per-output, which you’ve seen in other cases—AI companies are starting to do some of this too, but for sort of film and video, that’s a little bit how you’d think of doing a project if you’re a producer.
Ravi Gupta: Yeah.
Josh Woodward: But now you’re kind of imagining at, like, the individual creative level, which is kind of interesting. So that’s more of, like, almost like an auction-type model, potentially. So I think there’s a lot to explore. I think we’re probably, though, you know, the pace that things are moving, it’s on the scale of like quarters, I think, where it starts to get interesting, as opposed to, like, many, many years. So that’s—yeah, I think there’s a path.
Is the pace of progress accelerating?
Ravi Gupta: You talked about the pace of progress a couple of times.
Josh Woodward: Yeah.
Ravi Gupta: Do you think it’s accelerating? You have a unique view into DeepMind, and let’s use that as a harbinger for some of the others.
Josh Woodward: Yeah. Yeah, as a proxy. Yeah. Yeah.
Ravi Gupta: Where are we at? Are we accelerating? Are we, you know, on a crazy trajectory and maintaining the same one? Like, I’m interested.
Josh Woodward: Yeah. Yeah. I keep thinking it will slow down, and it’s never slowed down in the last three years. So, you know, you think, oh, pre-training might be plateauing. Inference time compute, a whole ‘nother horizon opens up. And I think there’s so much—there’s an author on the team we actually hired. His name’s Steven Johnson. He co-founded Notebook LM when we first brought him on. And he talks about this notion of, like, there’s adjacent possibles. He has this really interesting book on the history of innovation. And I feel like right now it’s like you walk into this room, and there’s all these doors that are opening up into these adjacent possibles. And there’s not just, like, one room and one door. It’s like one room with, like—like, it feels like 30 doors that you can go explore. So I think that’s what it feels like on the inside.
Ravi Gupta: I love that visual of the rooms and then the adjacent possibles.
Josh Woodward: Yeah.
Ravi Gupta: I’m going to steal that and maybe take it and call it my own.
Josh Woodward: [laughs]
Sonya Huang: Classic VC over here.
Ravi Gupta: [laughs]
The future of video consumption
Sonya Huang: What do you think the future of video consumption looks like for us as consumers? Like, am I still looking at Hollywood-style feature films that are created by Hollywood studios, just done a lot more cost efficiently? Am I looking at a piece of content that’s dynamically generated to what you know about me and it’s only for me to watch? Like, what do you think the future of consumption is as a consumer?
Josh Woodward: So this is one of those that could go and spider in many different ways, I would say. I’d say some of the things we’re excited about and what we see—so I think the future of entertainment is way more steerable. So right now you think about you sit on your couch like this, and you maybe scroll through something or whatever, you cast it on, you bring it up on the TV. So it’s going to be way more steerable where you can kind of interject if you want and maybe take it certain ways. We think that’s one area.
We think another is personalization, like you said. If you think today about YouTube, TikTok, any of these algorithms that can kind of figure out this is what you’re interested in, imagine that, I think, way more extreme, that could be kind of fine tuned to sort of what you want to share with the model. I think the other bit is a lot of this, I think, is going to be generated on the fly. So another theory we have is that just like there was a rise of kind of a creator class, couple—whatever, 10, 15 years ago that powered YouTube and the rest, there’s going to be a shift or maybe it’s a different set of people that we think of as, like, curators, where you curate stuff and you work with the model to maybe create things.
And I think another loop in that is how you can remix all this. And so that’s another big part of what we see in the future of entertainment is that there will be like, “Oh, I kind of like that, but then make it more like this.” And if you think, you know, at some level, the cost, the time, the skills required of this is literally maybe just like tapping a button or just describing it and you get kind of different versions, that’s kind of where we see some of this going.
It will be really interesting to see if, like, some of these same percentages hold. Like, we know today that a lot of times a certain percentage, like 90, 95 percent, just consume from platforms, and you have a very small creator class. Like, will that balance change? But I see totally different ways you could think about content platforms that have some of these native controls. Like, for example, will we expect UIs that have a ‘join’ button where, you know, today our UIs maybe have ‘play,’ ‘pause,’ ‘save,’ ‘bookmark,’ ‘star,’ ‘heart this.’ Like, will there be new things where you join? And they’re like, “Oh, hey, Sonya, Ravi. What do you want to talk about?” Do you know what I mean? And I think, like, that’s totally possible. We’re building that into Notebook LM today. So you can imagine, playing it forward, you’ve got avatars or human-like characters or not, with lip reanimation, voice cloning, all of which can come together in sort of new ways, I think.
Sonya Huang: Do you think movies and games start to blur?
Josh Woodward: Yeah, I think that’s a real possibility. Yeah. There’s a whole interesting intersection that’s happening right now between movies or video content, games and sort of world building and 3D. And it’s really unclear to us right now where that’s going to go, but there’s so many areas right now where we’re seeing learnings from each. And even down to some of the training techniques, we’re finding things like that.
Pixel stream vs 3D models?
Sonya Huang: So actually, that was one of my questions. Like, if you look at all the companies building generative video models right now, some people are kind of going straight from the pixel stream, so to speak. And some people are going from the 3D angle with the idea that, you know, to really do video right, you need to get 3D.
Josh Woodward: Yeah.
Sonya Huang: Do you have an opinion on that?
Josh Woodward: Yeah, we actually got bets on both sides right now. [laughs] I don’t know. I don’t know. [crosstalk] So on the 3D side, we have this project we got started where we basically said, like, take six pictures of a sneaker and create a 3D spin of it. And we put that on Search. It’s been really great, and it’s amazing how it fills in the details. But I think what’s interesting is we’ve been going down that path, and then something like Veo 2 shows up. Now you don’t need six photos anymore, you need, like, two or three. And you can basically do, like, an entire product catalog—like, every product that’s ever been indexed at Google—just overnight it can sort of create it.
So now you’ve got a 3D object basically of any object—bookshelf, chair, whatever—from any angle that you can pan, tilt, zoom, relight. And now that’s like an object that you can drop in anywhere. So that’s kind of the 3D angle. From the video angle, it’s interesting in kind of the world building. We had this little prototype we built. We were like, “Wouldn’t it be cool if you could recreate landing on the moon for, like, every classroom and, like, give teachers a tool where they could put the kids in the, like, you know, lunar module as it’s coming down.” So we built this thing. It’s kind of terrifying, actually, because we also built a little side panel where you can inject problems where it’s like, “Oh, no! Something’s on fire in the back!”
Ravi Gupta: [laughs]
Josh Woodward: They, like, simulate things. We had a little fun with it. But that was interesting because the models, you could say, like, “Look right,” and it would actually fill in the details. And so you start to get this—that’s where it feels like it’s kind of blurring, and I guess why we’re hedging on both sides right now. Yeah, we’re not sure.
Ravi Gupta: 2025, everyone’s talking about agents.
Josh Woodward: Yes. Yeah.
Ravi Gupta: Computer agents.
Josh Woodward: Yeah, you just said it three times. [laughs]
Ravi Gupta: Yeah, exactly.
Sonya Huang: Probably being a VC again.
Agents and Google Mariner
Ravi Gupta: Exactly. I’ve been called a VC twice today. This is a very big insult. Can you talk to us about Google Mariner?
Josh Woodward: Yeah. Yeah, so Mariner is one we put out in December last year. This is a fun one, actually, because we started seeing this capability developing in the model. We’re trying to understand if you could let these models control your computer or your browser, what would happen—good and bad. And so that was a good example of a project where we went from “Hey, this capability is kind of showing up, let’s put it out”—right now it’s a Chrome extension, just because it was quick to build—to the idea in people’s hands in 84 days. Very fast, very fun. A lot of memories made on that.
But I think what’s interesting is you’re seeing Anthropic, OpenAI, obviously Google and a bunch of other startups in the space all hitting on kind of the same idea: that models are not just about knowledge and information and synthesis and writing, they can do things.
Ravi Gupta: Right.
Josh Woodward: And they can scroll, they can type, they can click. They can not only do this in one browser in one session, but, like, an infinite number in the background. So I think with Mariner, what we’re really trying to pursue is, like, of course there’s the near term thing of, like, can it complete tasks in your browser? But the bigger thing is, what does the future of human-computer interaction look like when you have not just one of these things, but basically an infinite number kind of at your disposal? And so that’s what we’re chasing with that project.
Sonya Huang: What do you think the ideal use cases are, maybe even in the near term, for Mariner? Because I think all the demo videos I see, not necessarily from Mariner specifically, but with computer use more broadly are, you know, here, have this agent go book a flight for me or go order a pizza on DoorDash for me.
Josh Woodward: Right.
Sonya Huang: Like that’s nice but, like, I like doing those things.
Josh Woodward: Yeah, yeah. Yeah, you’re pretty good on those fronts.
Sonya Huang: Booking a flight is one of my delights in life. And so what do you think are the killer kind of consumer use cases?
Josh Woodward: Yeah, well that’s what’s interesting. It may not be consumer, it may be enterprise. And one of the things we’re seeing when we do all the user research right now on Mariner—because we have a trusted tester program and people are playing with it and giving a lot of feedback—is it’s really these high-toil activities. ‘Toil’ is kind of an old-fashioned word that doesn’t get used a lot. But when people talk about it, it’s like, “This is what makes me grumpy, and this thing is helping me solve it.”
But what’s interesting is a lot more of those are showing up on the enterprise side. Just to give you a couple examples from yesterday: we were hearing from one of the teams, and they basically have this co-browse use case. So imagine you’re in, like, a call center somewhere, and some customer calls in. Right now they have this very complicated way the agent in the call center can, like, remotely take over your machine that’s not working, browse through things and do something for you. They were like, “We would love to have Mariner do this.” That’s one. Another one we heard, which was kind of interesting, was people who are, like, part of a sales team or something. They take a customer call, then they’ve got all these next steps they need to do.
Sonya Huang: Yeah.
Josh Woodward: And they just want to fan that out. And it’s often updating different systems that are all probably, I don’t know, some SaaS subscriptions they’re paying everywhere. And they’re just like, “The UI is clunky. It takes a long time. I just want to see Mariner do all this.” So these are the kinds of things that are kind of interesting that are just naturally coming up. On the consumer side, I don’t know. Have you found one yet in your mind that you like? Because we’ve got a few, but it’s—I’m curious.
Sonya Huang: I’m trying to think what the toil I have in my everyday life.
Josh Woodward: Yeah.
Sonya Huang: Talking to Ravi.
Ravi Gupta: [laughs]
Sonya Huang: I’m kidding. I’m kidding. Talking to Ravi’s the best part of my day.
Ravi Gupta: I appreciate that. But I like the framework. Even if we don’t have the exact use, the framework of, like, what are the things that are the heavy lifting that you don’t enjoy throughout the day that take up time away. And I do think that that was actually the same logic that yielded things like DoorDash or Instacart.
Josh Woodward: Right. Right.
Ravi Gupta: You see how I had to get Instacart in there? I was making sure that that was there. On the enterprise side, when you think about it, how are you testing that? Are you testing that with existing customers? Are you testing that with Google Cloud customers? Who are the enterprises that you guys will actually test things with?
Josh Woodward: Yeah. So in that case, we kind of go across big and small. So there will be some cloud customers. We have a lot of cloud customers who always want the latest and greatest.
Ravi Gupta: Yep.
Josh Woodward: Just give us that. They have, like, Lab’s equivalents inside their companies, right? So those are awesome test beds. We also work with a lot of startups.
Ravi Gupta: Yeah.
Josh Woodward: And I mean, if there’s others listening to this that are interested, like, DM me, let me know. Because we’re always trying to learn kind of from different sides of the market. What I’ve found, too, building products over the years, is it’s very common—everyone talks about product-market fit, you’ll know it when you see it and all that, which is true. But at least for me, I’ve always felt in the first part of building products, you iterate a lot on the product and sometimes you forget to iterate on the market. And finding the right market is just as important as finding the right product. And you have to connect those two.
Ravi Gupta: Yeah.
Josh Woodward: And so I think that in these early stage things with Mariner, that’s where we are. It’s like, is it possible for a computer to—like, an AI model to drive your computer? Yes. That’s a huge new capability. Is it accurate? Sometimes.
Ravi Gupta: Yeah.
Josh Woodward: Is it fast? Not at all yet. Like, that’s kind of where we are in terms of the actual use case or the capabilities. And then it’s about finding the right market. But yeah, the short answer is, in these early days we do lots of stuff really quickly. And what I coach our product managers and other people on the team on—because we have engineers and UXers, and they all go to these sessions—is, like, don’t look at the dashboards.
Ravi Gupta: Yeah.
Josh Woodward: The numbers are too small right now.
Ravi Gupta: Yeah.
Josh Woodward: Look at their eyes. Like, look at the customer’s eyes. And when you show them stuff, do they light up or not? You know what I mean? And, like, that’s kind of the signal you’re following. It’s way more art than science at this stage.
The allure of infinite context
Ravi Gupta: Can we go back for a second just to the context point? Because I was thinking about this vis-à-vis, like, you working at Google, right? And you talked about bringing your own, you know? Is there a world where someone can just opt in of, like, Google knows a lot about me, right? Already. You know my searches, my Gmail, my calendar. Is there a world where you can just sort of opt in and be like, “I don’t want to bring it all now. I just kind of want you to use what you got and make magic.” Right? Is that something that could happen? Because Google’s uniquely suited to be able to do something like that—probably more so than anybody.
Josh Woodward: Yeah. Yeah.
Ravi Gupta: Is that something that you guys can play with in Labs or have a possibility for, or is that not possible?
Josh Woodward: We do some of that more internally with some of our own, like, data on the team, where I’ve opted into a lot of things, which is like, take it all. Like, let’s make good stuff. But I think you’ll see some of that come through in the Gemini app, too, where you can link different things. But I think it’s actually an area that’s, like, actively being explored, too—what types of data are the most interesting and the most useful. And of course, also the right controls, so people don’t feel like they’re just giving it away.
Ravi Gupta: For sure.
Josh Woodward: Yeah. So I think that is an area, though, that we do experiment on some, but I’d say right now a lot of the experiments are more on our own stuff as we’re trying to figure out.
Ravi Gupta: You’re gonna have to tell us separately some of the things that you could have done now, now that they know everything about you, you know? Like, what is the magic that can be created for you?
Josh Woodward: Yeah, I think certain things that immediately come to mind that are pretty powerful is you can see things. Like, in my own data, I feel like I have a second brain.
Ravi Gupta: Yeah.
Josh Woodward: That is a true—like, there’s always been this vision of a second brain and tools for thought and all this stuff.
Ravi Gupta: Yeah.
Josh Woodward: And I feel like you can get pretty close to that. And I think the Gemini model specifically is really good at long context, the ability to have, like, this impressive short-term memory. And so Gemini 2.0, that’s an area we’re really trying to exploit right now, like how to use that.
Sonya Huang: And for Mariner, similar question to the ones I asked about Veo.
Josh Woodward: Yeah.
When will computer use be good enough?
Sonya Huang: When do you think we’ll have computer use that is accurate enough and is fast enough to do some of these use cases you talked about?
Josh Woodward: Yeah, that’s another one. It’s kind of hard to tell at the pace right now, though. I mean, not just inside Google, but what you’re seeing from some of the other labs, too. They’re on, like, about an every-month-or-two rev. [laughs] So you can imagine just this year we’re going to see four, five, six revs of each of these things, right? Again, that’s just what we know is happening. I think the area that’s a little bit trickier or harder right now is how the computer, like, finely or precisely navigates—the XY coordinates, almost. You almost want a lat-long of your screen, and there are still some really interesting jagged edges on that, I would say.
The other big area I would say is like this—it’s more of a human thing. Like, when do you want the human involved or not? When do they want to be involved or not? And kind of creating the right construct, almost. It’s like “Hey, I’m about to buy something. Oh no, I want to know about that.” Or, “I’m okay for $5 but nothing more than that.” Do you know what I mean? And so there’s a whole bunch of almost, like, hardcore, like, HCI research and, like, really going deep on the empathy of, like, how you set those controls.
Sonya Huang: Yeah.
Josh Woodward: That I don’t think any of them, including the Google Mariner one right now, have. I mean, we do certain very blunt things like, “Don’t buy anything. Don’t consent to any ToS.” You know, there are sort of crude things right now that you can do, but I think people are going to want a more fine-grained way. So these are some of the things that I consider more unsolved. Again, that principle: just bank on the model getting smarter, faster, cheaper, and you’re going to get, like, four, five, six or seven revs this year.
Sonya Huang: Okay, I have a meta question.
Josh Woodward: Yeah.
Sonya Huang: How come all of the research labs converged on computer use at, like, as far as I can tell, the same exact point in time? Was that an accident? Was that just all the technology happened to converge at the same time? Like, what happened there?
Josh Woodward: It’s a good question. I mean, I don’t know the specifics there of each of the other labs, but I would say, you know, when you read about the history of innovation—and there’s, like, all kinds of things on this—it’s not uncommon that discoveries kind of happen around the same time. And I think there’s kind of a new paradigm now with these models, and I think lots of people are seeing the potential in certain ways. And I’m sure there’s also, I don’t know, people changing labs and other things that are cross-pollinating all these ideas, too. But it does feel like it’s one of those—that’s kind of how I’m interpreting it. I think it’s similar with coding, right? There’s already—even the agent stuff right now, there’s lots of this stuff kind of bubbling, which makes it really fun, but also keeps you on your toes, right? Because this is kind of the underdog mindset here.
Hiring any other authors?
Ravi Gupta: Are you going to hire any other authors? The reason I ask is I was thinking about—I think Matt Ridley is the one who’s written about some of these things about, like, adjacent innovations. And you have Steven Johnson. Maybe why did you hire Steven Johnson? How did that happen?
Josh Woodward: Yeah.
Ravi Gupta: And are you going to think about other people that don’t have obvious backgrounds that you would bring into labs?
Josh Woodward: Yeah, yeah. So the quick story on Steven was the guy who kind of restarted Google Labs was a guy named Clay Bavor.
Ravi Gupta: Our mutual friend.
Josh Woodward: Mutual friend, exactly. And he and I are big fans. We’ve basically read everything Steven had written. And Steven was a very interesting guy because for, like, decades he’s been in search of the perfect tool for thought. And so Clay, Clay cold emailed him. We were both subscribers to his Substack. We kind of messaged him and we’re like, “We love you. Will you come work with us? We can build the tool you’ve been wanting to build.” That’s where it started, actually.
And this was, like, summer 2022. So, like, before any of the, you know, ChatGPT moment or anything. And Steven picked up the phone, and he was like, “Yeah, let’s do it.” So he came in as a visiting scholar. The job ladder didn’t exist—I had to go figure out with our HR person how to create a role that he could take on. It was very unconventional in that way. And then the rest is kind of history, obviously.
I’ve read a bunch of Matt’s books. I don’t know Matt. He’d be awesome, so if he’s listening, please, like …
[crosstalk]
Josh Woodward: I would say we’ve done this quite a bit. So we’ve actually brought in musicians. Actually really we’re trying to figure out right now, like, a visiting filmmaker.
Ravi Gupta: That’s cool.
Josh Woodward: So it’s kind of a model. Steven kind of pioneered it. He was the first one that it’s like how to bring in—it’s a big value in labs. How do we co-create?
Ravi Gupta: Yeah.
Josh Woodward: We don’t want to just make stuff and throw it out there. We actually want to co-create it with the people that are in the industry. And what we find when we do that is you actually get way beyond the, like, “Oh, that’s a cool toy AI feature,” you get into the workflow. And if you’re working with someone like Steven Johnson, who’s written, you know, a dozen-plus books, there’s a certain way he thinks about and almost like a respect for, like, the sources.
Ravi Gupta: Yeah.
Josh Woodward: And the citations. All that stuff comes through in NotebookLM.
Ravi Gupta: Yeah.
Josh Woodward: And we’re doing similar stuff with music and video and …
Ravi Gupta: That’s awesome!
Josh Woodward: And other stuff. Yeah.
Sonya Huang: Is the goal to create net new products that you can take from one to a hundred to a billion standalone? Or is the goal to, you know, find product-market fit with things like NotebookLM, and then really fold them into the Google mothership, so to speak?
Josh Woodward: Yeah, it’s interesting. So when we first started, I would say it was all about build something, graduate it. So kind of a traditional incubator sort of model. It’s been interesting as it’s gone along. We’ve done that in some cases, like AI Studio and the Gemini API, we graduated and it’s now in DeepMind and they’re kind of running with it. Something like NotebookLM we were just gonna keep in Labs right now for the foreseeable future. Because it’s kind of a different creature. Like, it’s only possible with AI, and a lot of the stuff we’re working on now—I mean, we’ll have to see how many of these we can put together that actually can kind of get escape velocity. But we’re really interested in turning them into businesses and making them sustainable, and kind of, you know, that’s been a lot of the focus actually, is like, take big swings. And that gets back to your point: A lot of these won’t work.
Ravi Gupta: Yeah.
Josh Woodward: Because if they’re all working, you’re not swinging big enough. So it’s like trying to find that balance. But that’s definitely—we start with kind of could we make this a business? Work backwards from that. And if we end up graduating it, that’s still a good outcome for us. Another good outcome is we stop it and it was like cut the losses, we did our hundred-day sprint or whatever. Move on to the next thing. Yeah.
Sonya Huang: You mentioned at the top of the episode that you try to do some top-down thinking of, you know, what are the most interesting ponds for us to be building in?
Josh Woodward: Yeah, yeah.
Sonya Huang: What are your predictions on the most interesting ponds to be building in for 2025? Like, where are you hiring talent? Like, where are you sniffing around? Where are you co-creating with the DeepMind folks?
Where to build in 2025?
Josh Woodward: Yeah, yeah. There’s a lot happening with agents, there’s a lot happening with video. Some of the things we’ve talked about with computer use.
Sonya Huang: Yeah.
Josh Woodward: But I think about those ponds a little bit differently. I think about them—we have this doc called “Labs is a Collection of Futures.” And it’s 82 predictions about the future, and it’s always dangerous to make one prediction about the future, let alone 82. But the thought experiment on the team where we got to this was: Imagine you’re in a room like this. The ceiling just opens up and this little capsule comes down. We all jump in it and it slings us into the future. It’s 2028. You can get out, you get five minutes, look around, write down everything, and you’re brought back to the present. And then write what you saw. And that’s what this doc is. So what does the future of knowledge look like? What’s the future of…?
Ravi Gupta: Even though prompts are old fashioned, that’s a pretty good prompt you gave to the team. I was gonna tell you right now.
Josh Woodward: Yeah. So that’s—you know, we think about it at that level, at kind of a high level. So take something like: what’s the future of knowledge going to look like? One piece of that prediction, one of the 82, is that it’s infinitely remixable, and anything that comes in can be transformed and become anything on the way out. If you believe that, then you take certain bets and you build products kind of with that future in mind. So that might be one of them. But going back to maybe some of the ones that a lot of people listening might be building in: I do think we’re kind of at the moment for video, we’re at the moment for very interesting agent stuff with the thinking and reasoning models. And I think there’s also maybe something kind of under the radar right now a little bit. I still think coding has major leaps we’re going to see this year. And so those would be some of the ones that are top of mind for us.
Ravi Gupta: Are you guys doing work on coding out of Labs, too?
Josh Woodward: Yeah, we are.
Ravi Gupta: Okay.
Josh Woodward: We are. So right now at Google, 25 percent of all the code’s written by AI.
Ravi Gupta: Yeah, I saw that.
[crosstalk]
Josh Woodward: Yeah, that’s right. That’s right. And that’s up a lot, in the sense of just how fast the progress is. This is an area, though, where I think there are kind of two approaches you could think about. Again, think lower the bar, raise the ceiling, right? How do you make coding available for people who could never write code before? Massive opportunity.
Ravi Gupta: Like Sonya. You know, like, I’ve been coding my whole life. I mean, Sonya …
Sonya Huang: [laughs]
Josh Woodward: Well, it’s kind of interesting because some of the most interesting stuff happening here—I don’t know if any of you have played with, like, Replit’s agent stuff.
Sonya Huang: Yeah.
Josh Woodward: Really interesting, right?
Sonya Huang: I agree.
Josh Woodward: A couple of weekends ago, I’m with my fourth grade son. We are struggling right now in our household to implement chores. We created a chore tracking app. 28 minutes, 45 cents.
Ravi Gupta: Wow!
Josh Woodward: Done. We’re daily active users. And so it’s a way to kind of get into software, and a world of kind of software abundance, that’s really interesting. So we’ve got some stuff in that area. We’re also interested in how do you take a professionally trained SWE and make them, like, 10x to 100x?
Ravi Gupta: Yeah.
Josh Woodward: And there’s kind of, I think, interesting bets on both sides of that.
What’s overhyped in AI right now?
Sonya Huang: What do you think is overhyped in AI right now?
Josh Woodward: Oh, that’s an interesting question. I wish we’d move beyond the chatbot interface a bit. [laughs] Like, that’s one area that feels like we’re kind of reusing that in a lot of places—Google included. I’m also not sure—there’s still a lot, I think, of people jamming AI into stuff. Like, AI itself is a bit overhyped. I wish we were a little more precise about how disruptive or where to apply it.
Sonya Huang: Yeah.
Josh Woodward: And so I think again, we’re trying to think a lot about workflows, not just take an existing product and bolt on AI. So I think that’s maybe a little—there’s a race. You’re seeing the first generation of AI: put it in. And it reminds me a lot of—actually, when I first started at Google, it was, like, right as the iPhone moment was kind of just happening and taking hold. When Steve Jobs walked on stage in 2007 and said, “This is the iPhone”—if you look at the App Store three years later, which is roughly where we are in this AI revolution, the App Store in 2009-ish was—I went back and checked—websites that had been shrunk down to fit on your phone, flashlight apps and fart apps. [laughs] Those were, like, the top-downloaded things that were happening. So I think we’re kind of in this stage where the real stuff is going to start to come out kind of this year, next year, the year after. That’s when you start to see the Ubers, the Airbnbs, the Instacarts, the things that really change kind of how you do stuff. And so that’s kind of my thought on it.
What’s under the radar?
Ravi Gupta: All right then, Sonya asked you the overhyped question. I’ll ask you the under the radar, underhyped question. What are some areas that deserve more attention within AI?
Josh Woodward: Yeah. We talked about coding a little bit. Maybe just one other thought on that is I think if you could get code models that can kind of write code and self correct and self heal and migrate and do all this stuff, it just makes—you think the pace is fast now? That totally changes the curve. So I think that’s a huge—I still think it’s underhyped.
Ravi Gupta: Yeah.
Josh Woodward: Like, it’s hyped a lot, by the way, but I think as hyped as it is, it could be hyped more. That’s one. I don’t think we fully internalized the notion of, like, what does long context or, like, infinite context mean? It gets to some of your personalization questions potentially, but it also gets at some of the stuff we were talking about around how can you make things like a Mariner literally just keep going?
Ravi Gupta: Yes. Yes.
Josh Woodward: And so that whole notion of long context, I mean, you see a lot from Google, but we’re investing a lot in that because we think that’s a strategic lever that’s important, especially as you get more agentic, chained together kind of workflows. Maybe another one, I think there’s not enough talk about taste.
Ravi Gupta: Yeah.
Josh Woodward: Like, I think if you believe the value is going to be in the application layer, if you believe there’s going to be some percentage of AI slop, you can just see a few of these trends.
Ravi Gupta: Yep.
Josh Woodward: And I think there’s going to be a value in good taste and good design. And it doesn’t mean it has to be human created necessarily, although I think there’s going to be a high value on that too as, like, human-crafted content becomes more artisan. But I think that’s another one, I would say. I think maybe related to that is, like veracity and truth.
Ravi Gupta: Yeah.
Josh Woodward: And sort of what is real. Like, these are things that I think are going to become way more important than they already are today.
Ravi Gupta: The context point in there, your infinite context point, is one I, like, really firmly agree with on what can happen. Because if you think about it, the relationship in your life where you have the most shared context is probably with your spouse.
Josh Woodward: Yeah.
Ravi Gupta: Right? And if you think about that, what ends up happening is you can communicate with your spouse literally with just like, the flick of an eye, right? And all of a sudden they know exactly what you mean. They know it’s time to leave the party, whatever it might be.
Josh Woodward: Yeah, that’s right.
Ravi Gupta: And you think about it, that’s the aspiration for what can happen with infinite shared context.
Josh Woodward: We know that’s the ceiling.
Ravi Gupta: Exactly. Think about, your—like, think about how far away that is from now.
Josh Woodward: Yeah.
Ravi Gupta: Where you’re, like, typing things in.
Josh Woodward: Yeah.
Ravi Gupta: About what it is. And your point of, like, well, hold on. There’s all these different ways you can communicate it and they can get to know you better if it has memory.
Josh Woodward: That’s right.
Ravi Gupta: And so I think there’s so much gold in there of it just being able to keep going.
Josh Woodward: Yeah.
Ravi Gupta: Right?
Josh Woodward: Yeah.
Ravi Gupta: But giving it the right context. And whenever it needs.
Josh Woodward: Think of any company that you all back—or even Google. Like, what’s one of the most painful things? When a long-term employee leaves, because all that context walks out the door. So I think that’s exactly right. Whether it’s a personal relationship or a work relationship. Yeah.
Lightning round
Sonya Huang: Okay, we’re going to wrap with a rapid-fire round.
Josh Woodward: All right. Yeah, sounds good.
Sonya Huang: Okay. Favorite new AI app.
Josh Woodward: Oh, I mentioned it earlier. I’m having a lot of fun with Replit.
Sonya Huang: Love it.
Josh Woodward: The new agent thing, and on the phone. I think they’re doing some really interesting stuff there.
Sonya Huang: You know, one of our partners, Andrew Reed, is known for, like, creating these amazing memes and sending them around.
Josh Woodward: Uh-huh.
Sonya Huang: It’s now so easy to create an app. He just creates these all the time and sends them to me. They’re really good.
Josh Woodward: Yeah. We have this concept of, like, disposable software.
Sonya Huang: Yeah.
Ravi Gupta: Oh, that’s interesting.
Josh Woodward: You use it once, and you kind of throw it out after you’re done with it. Yeah.
Sonya Huang: Okay. What application or application category do you think is going to really break out this year?
Josh Woodward: Video.
Sonya Huang: Okay. Recommended piece of content or reading for AI people.
Josh Woodward: Ooh, that’s an interesting one. You know, this one’s not a traditional AI pick, because I think probably a …
Sonya Huang: Good, because we have too much of that. [laughs]
Josh Woodward: I was gonna say, over the break—I read a lot—one of the books I picked up was actually The Lego Story, and it’s the history of Lego.
Ravi Gupta: That’s awesome.
Josh Woodward: And it’s on its third generation of family ownership. I’d recommend that one. It’s really interesting. Yeah. Here’s why, though. There’s a pivotal moment in the company’s history where they had 260 products. And maybe for a lot of founders that are listening, you can imagine your company could go in, like, all these different ways, you’re trying to figure it out. And the grandfather, the CEO at the time, basically identified, like, the little building blocks. This is it. And he bet the company on it. And he bought these incredibly expensive machines. And so I think it’s, like, an incredible—I like to read biographies a lot, and this was one that really stood out.
Ravi Gupta: Josh has incredible taste in books, and he has this wonderful reading list that he’s been kind enough to share with me.
Sonya Huang: Oh, no way!
Ravi Gupta: That’s really wonderfully curated. It has this very good formatting as to when it’s something you really gotta read versus not. And so, to all the listeners, you should take Josh’s suggestions seriously.
Sonya Huang: I actually really want a great AI reading app. That’s like my wish list app.
Josh Woodward: What would it do for you?
Sonya Huang: In part, because I have terrible memory, but out of everything I’ve ever read or listened to, which I think is a different set of things than all the books on the planet.
Josh Woodward: Yeah.
Sonya Huang: Like, there’s all these things that are kind of on the tip of my tongue and ideas that connect.
Josh Woodward: Yeah.
Sonya Huang: But, you know, they’re all kind of in an abyss, and they’re all pretty inaccessible to me. And so something that surfaces, some of those thoughts and ideas that I’ve had, things that I’ve read, you know, that next layer of thought I have from reflecting on two different things that I’ve read.
Josh Woodward: And the connections probably across them.
Sonya Huang: Yeah.
Josh Woodward: Hmm. It’s a good idea.
Ravi Gupta: I think even within that, like, just the hard copy version, the Kindle version and the audiobook version being, like, you know, seamlessly intertwined, like you mentioned, at the most basic level.
Sonya Huang: Yes.
Ravi Gupta: You know, so that you can continuously pay attention to something that you like. And then we can get to the version you said.
Josh Woodward: Yeah.
Sonya Huang: Request for startup. Okay. Pre-training hitting a wall. Agree or disagree?
Josh Woodward: Maybe lean agree. I think there’s still stuff to squeeze out there, but I think a lot of the focus has shifted.
Sonya Huang: Yeah.
Josh Woodward: Yeah.
Sonya Huang: Nvidia: Long or short?
Josh Woodward: I don’t give stock advice. [laughs] Index fund.
Ravi Gupta: Do you ever sit with Demis and be like, “Look, as someone—between us, we won a Nobel Prize.” Do you ever start with that? You know, because, you know, that feels like something that’s true. You know, between the two of you, there’s one Nobel Prize.
Josh Woodward: It’s all one directional. Demis and John Jumper. Those are the people that won the Nobel Prize, not Joshua Woodward.
Ravi Gupta: [laughs]
Sonya Huang: Okay. Any other contrarian takes in AI?
Josh Woodward: Any other contrarian takes? I guess maybe I’ll leave it with this. One thing is, like, what a time to be alive and building. Because I feel like there’s this window where there are so many adjacent possibles opening up. I think the second would just be, like, I’d encourage people listening to really think about—of course, there’s the models and who’s winning and the back and forth—but, like, what are the values you’re building into your company? Because I think this is one of those moments where there are going to be, like, tools created that shape follow-on generations. I think it’s really important for people to think about that. And, like, are you trying to replace and eliminate people, or are you trying to amplify human creativity?
I mean, that’s one that, you know, immediately comes to mind when I’m thinking of video, for example. I’m on the side of wanting to amplify human creativity. But there are these moments that happen in our valley here where, like, things change, and they change often for generations. And they can change for good or bad. And so I would just encourage people that are in spots where you’re building, and you have this incredible technology that’s only getting smarter and faster and cheaper, to put it to good use and think about the consequences downstream.
Sonya Huang: Thank you so much, Josh, for joining us today. We love this conversation.
Josh Woodward: Yeah. Thanks again.