Whatever the future role of computers in society, Jeff Dean will have a powerful hand in the outcome. As the leader of Google’s sprawling artificial intelligence research group, he steers work that contributes to everything from self-driving cars to domestic robots to Google’s juggernaut online ad business.
WIRED talked with Dean in Vancouver at the world’s leading AI conference, NeurIPS, about his team’s latest explorations—and how Google is trying to put ethical limits on them.
WIRED: You gave a research talk about building new kinds of computers to power machine learning. What new ideas is Google testing?
Jeff Dean: One is using machine learning for the placement and routing of circuits on chips. After you've designed a bunch of new circuitry, you have to put it on the chip in an efficient way, optimizing for area, power usage, and lots of other parameters. Normally, human experts do that over many weeks.
You can have a machine learning model essentially learn to play the game of chip placement, and do so pretty effectively. We can get results on par with, or better than, human experts. We've been playing with a bunch of different internal Google chips, things like TPUs [Google's custom machine learning chips].
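To make the "game of chip placement" framing concrete, here is a minimal sketch of the idea as a reinforcement-learning problem: a policy places circuit blocks one at a time on a grid and is rewarded with the negative estimated wirelength. The grid size, netlist, tabular policy, and reward are all illustrative assumptions for this sketch, not Google's actual system.

```python
# A minimal sketch of chip placement framed as a reinforcement-learning
# "game," in the spirit of the approach Dean describes. Everything here
# (grid size, netlist, tabular policy, wirelength reward) is an
# illustrative assumption, not Google's actual system.
import numpy as np

rng = np.random.default_rng(0)

GRID = 4                    # 4x4 grid of candidate slots
SLOTS = GRID * GRID
BLOCKS = 4                  # circuit blocks, placed one per step
NETS = [(0, 1), (1, 2), (2, 3), (0, 3)]  # hypothetical netlist: wired block pairs

def wirelength(placed):
    """Total Manhattan distance over all nets (lower is better)."""
    coords = [divmod(s, GRID) for s in placed]
    return sum(abs(coords[a][0] - coords[b][0]) + abs(coords[a][1] - coords[b][1])
               for a, b in NETS)

theta = np.zeros((BLOCKS, SLOTS))  # tabular policy: one logit per (block, slot)
lr, baseline = 0.1, 0.0

for episode in range(3000):
    used = np.zeros(SLOTS, dtype=bool)
    placed, grads = [], []
    for b in range(BLOCKS):
        logits = np.where(used, -np.inf, theta[b])  # mask occupied slots
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        a = rng.choice(SLOTS, p=probs)
        used[a] = True
        placed.append(a)
        onehot = np.zeros(SLOTS)
        onehot[a] = 1.0
        grads.append((b, onehot - probs))  # grad of log-prob w.r.t. logits
    reward = -wirelength(placed)                # shorter wiring = higher reward
    baseline += 0.05 * (reward - baseline)      # running baseline cuts variance
    for b, g in grads:                          # REINFORCE update
        theta[b] += lr * (reward - baseline) * g

print("final wirelength:", wirelength(placed))
```

A production placer would swap the tabular policy for a neural network and a far richer reward (congestion, timing, power), but the sequential-decision structure is the same.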
W: More powerful chips have been central to much recent progress in AI. But Facebook’s head of AI recently said this strategy will soon hit a wall. And one of your top researchers this week urged the field to explore new ideas.
JD: There’s still a lot of potential to build more efficient and larger scale computing systems, particularly ones tailored for machine learning. And I think the basic research that has been done in the last five or six years still has a lot of room to be applied in all the ways that it should be. We’ll collaborate with our Google product colleagues to get a lot of these things out into real-world uses.
But we're also looking at the next major problems on the horizon, given what we can do today and what we can't do. We want to build systems that can generalize to a new task. Being able to do things with much less data and with much less computation is going to be interesting and important.
W: Another challenge getting attention at NeurIPS is the ethical questions raised by some AI applications. Google announced a set of AI ethics principles 18 months ago, after protests over a Pentagon AI project called Maven. How has AI work at Google changed since then?
JD: I think there's much better understanding across all of Google of how we go about putting these principles into effect. We have a process by which product teams thinking of using machine learning in some way can get early opinions before they have designed the entire system, such as how to collect data to ensure that it isn't biased, or things like that.