Waymo’s Vincent Vanhoucke on Embedding AI into Physical Systems

Flight Path

Flight Path is a new Wing series for AI startup founders that draws on candid conversations with C-suite executives at leading corporations and unpacks what it takes to become a trusted AI partner across a variety of industries.

How do you scale AI in physical systems and legacy infrastructure — not just in theory, but in practice? In this Q&A, Vincent Vanhoucke, a Google veteran and now a distinguished engineer at Waymo, Alphabet’s autonomous driving subsidiary, unpacks the engineering and organizational challenges of embedding AI into complex systems. Vanhoucke speaks directly to the technologists building the next wave of applied AI, offering a candid, deeply technical view of what it takes to fuse cutting-edge AI with physical systems, legacy infrastructure, and organizational culture.

Q. You’ve worked in two major areas within Alphabet — at Google Research and now as a distinguished engineer at Waymo. How are you seeing AI usage play out? It’s easy to imagine that there’s a use for it in every aspect of a tech giant’s business, but maybe that’s not true.

In my role at Google Research, I took it as a given that AI was the solution to a problem, and then looked at how far we could take it. It’s something you have the luxury of doing when you’re in a research setting: You basically get to put a premise on your work and then go from there.

Now, in a real-world environment, you can’t do that. You have to understand where AI can be used and where it isn’t beneficial.

I find it harder and harder to find places where AI isn’t beneficial in some way. Often the challenge isn’t “AI or not,” it’s “AI and…”: AI plus all the scaffolding that enables you to operate safely at scale. The hard part is making AI work smoothly with systems that aren’t AI-based — being able to have that deep interface between the two is often where the problem lies.

Q. How would you guide startup founders, who are also agents of change in this space, whether they’re working in AI generally or in robotics, in how to think about AI?

There are two different problem spaces you could apply AI to. One: where the nominal performance of your system is what drives value. The other: where the exceptions to that nominal behavior are what drive all of the problems. For example, you have an AI that helps you write code. If you use it as an assistant that helps a human write code, the nominal behavior of that system drives the value. If you can improve the system — say you have a system that gives you the right answer 80% of the time and you increase it to 90% — you win, right?

But if you’re using this AI to write code in a completely automated fashion (so, same technology but a different problem setting, where you’re not an assistant to a human anymore but are actually writing code that directly affects production), suddenly climbing from 80% to 90% performance is not the problem to solve. The problem is what happens when things go wrong. It’s the non-nominal situation. It’s the exceptional situation.
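To make that distinction concrete, here is a minimal Python sketch of the two settings; the function names (generate, validate, rollback, escalate) are hypothetical and not drawn from the interview. In the assistant setting a human reviews the output, so raising nominal accuracy is the lever; in the autonomous setting most of the code around the model exists to handle the non-nominal path.

```python
# Hypothetical sketch of the two problem settings described above.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Patch:
    """A code change proposed by the model (illustrative only)."""
    diff: str


def assistant_mode(generate: Callable[[str], Patch], task: str) -> Patch:
    # Assistant setting: a human reviews every suggestion, so raising nominal
    # accuracy (say from 80% to 90%) directly raises the value delivered.
    return generate(task)


def autonomous_mode(
    generate: Callable[[str], Patch],
    validate: Callable[[Patch], bool],
    rollback: Callable[[], None],
    escalate: Callable[[str, Patch], None],
    task: str,
) -> Optional[Patch]:
    # Automated setting: no human in the loop, so most of the engineering
    # effort lives on the non-nominal path: detecting, containing, and
    # escalating failures rather than squeezing out more nominal accuracy.
    patch = generate(task)
    if validate(patch):      # e.g. tests, static checks, a canary rollout
        return patch         # nominal path: the change can ship
    rollback()               # exceptional path: undo any partial effects
    escalate(task, patch)    # and hand off to a defined recovery process
    return None
```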

Q. But isn’t it OK to want to improve your systems? Can you say more about that?

I see a lot of people focusing on nominal behavior: improving their system and climbing up that hill of performance. They’re not looking hard enough at their real problem and what will happen when things go wrong. 

That’s why a lot of AI systems today are built to feel like assistants to a human: having a human in the loop gives you protection, so you don’t really need to perform at 100%.

I think a lot of the value of AI should be in this area where you want full autonomy and you understand exactly what happens when something goes wrong so you can protect against it. Building that infrastructure and getting the validation to enable this is very difficult, but on the flip side, it can be very valuable: it can bring to market new AI technologies that otherwise won’t see the light of day under the more traditional hill-climbing approach.

Q. Many companies are talking about building an AI-first culture. What kind of leaders bring about that kind of culture, and how can founders both nurture it in their own companies and help the companies they work with encourage it from within?

That’s always been very fascinating to me, the way to do this. It’s not a purely technical problem, it’s a sociotechnical problem. You’re essentially driving disruption from within without a lot of authority to make sweeping changes. That’s because the domain expert is usually the person on the receiving end of your disruption. Enabling people to do that has been a core component of Google’s culture over the years, and it’s why Google has been so nimble in its evolution in AI.

This is where culture really matters in innovation. It’s about enabling people to see things from a data-driven perspective, and it’s also about implicitly trusting and respecting the peers who are coming in to try to change the status quo — spreading the message that everybody’s in it together, working toward the same goals. I’ve seen it work extremely well at Google. It’s not been a success in every organization that has tried to make this AI transition. And I think the big gap here is not technical; it’s really organizational and cultural.

A lot of companies try to strictly separate the roles of technical lead and manager. And it’s true — those roles demand different skills. But if you can find someone who can do both, your organization becomes much stronger. 
