Kai-Fu Lee, AI researcher turned VC


Fei-Fei Li, AI researcher and activist

October 2018.

Plunkett + Kuhr Designers

In 1990, Kai-Fu Lee packed his bags and left Carnegie Mellon University, where he had been teaching artificial intelligence and speech recognition. He headed west to his first Silicon Valley job, running a new group trying to build speech interface technologies at Apple. Eight years later, Lee was hired by Microsoft with a specific mission: to go to China, start a research group, and develop a technology hub—and talent.

Today, China’s prowess in artificial intelligence can trace many of its roots back to that research group. (Among the people Lee hired are the current president of Baidu, the chair for technology at Alibaba, and Microsoft’s head of AI and research.) In a now-infamous move, Lee left Microsoft and—after prevailing against the company when it sued him for violating a noncompete agreement—went to Google in 2005 to lead Google China.

In 2009, after more than 25 years working (largely) in AI, Lee started his own venture firm, Sinovation, which now focuses on entrepreneurs using AI. Lee talked to WIRED executive editor Maria Streshinsky about China, AI, and Fei-Fei Li, the Stanford University professor and researcher in artificial intelligence.

What was it like, starting Google in China in 2005?

Google was always very concerned about entering China. So it went in with several important caveats. Some of them were about Google’s concern over personal privacy and censorship and the like.

I ran Google China like an almost independent company. That was good for some things, like getting results, and not so good for other things, because people at Google felt we were too … independent, and maybe not sufficiently following the Googly way of doing things.

The first year was very difficult because we had to hire new people, connect them to headquarters, understand how to run things in China. But once we built up a critical mass, the second and third years we did extremely well. We went from 9 percent market share up to 24 percent in search. In revenue we went from zero to probably half a billion at the time I left.

The last year, things became quite tough. I think there was just an increasing lack of alignment between what the Chinese government’s rules were and what Google was willing to put up with. I saw that tension. I saw that Google was going to lose market share. The corporate brand was not powerful enough in the country. It was penetrating the white-collar, well-educated population in the top cities, but it was not everywhere. So I did a lot of things to try to persuade Google that there were things it needed to do to win.

Like singing and dancing and doing magic tricks on TV?

We needed to get exposure, and exposure at the time came not through the internet or mobile internet but through TV. Now, at Google headquarters, doing TV ads is considered outrageous. But to prove that TV in China was useful, I took my team on the number-one entertainment show in the country.

I was originally going to cook on the show. I was going to make a wonderful dish. But then the CCTV tower caught on fire, and the government said no more cooking on the shows. I have no talent! I can’t sing, I can’t dance. So I figured, OK, I can do magic! I invented a card trick [laughs]. It was a mind-reading trick.

The rest of the team sang and danced. And then we embedded Google products in our show. The next day, Google servers almost broke. And we did it without paying anything.

But we needed to sustain a marketing campaign after that, and we still didn’t get any funding—despite our demonstration. I saw the writing on the wall.

So what did you do?

Mobile internet was going to be the next big thing. Being at Google was helpful—we could see the progress Android was making. And we knew that would be the answer in China. So when I left Google—this was nine years ago—I started an investment company specifically for mobile internet, mostly Android-based. This was Sinovation Ventures. We invested in social networks, education, entertainment. We got very good in these areas before AI.

Kai-Fu Lee

First computer hack: “In 1980 I wrote a password guesser and got most of my friends’ passwords, then I used their accounts to post silly messages on bulletin boards.”

Secret rabbit-hole obsession:
“I was once addicted to Dance Dance Revolution and was really good.”

First AI project:
“Natural language. In 1980, I wrote an Eliza that mimicked my professor.”

But Sinovation would become a major investor in AI companies.

Yes, that began four years ago, with our investment in Megvii’s Face++. They’re a computer vision company that began with face recognition. There are interesting applications, such as using it as a badge replacement at your office, using it to enter a country, using it to unlock your phone or beautify your selfies. Also, in China, when a mobile payment system is unsure that you are who you say you are, facial recognition can take multiple photos of your face to prove it. At the time AI wasn’t a hot area, but we thought the team at Face++ was excellent. Now they’re building product lines that could be heavily monetized, and they’re also expanding beyond pure computer vision of faces. They could recognize gait, gestures, emotion, and all that could be fed into education applications, e-commerce applications, retail applications.

Imagine you go to a store, pick something up, smile—and then you put it back. Facial recognition could figure out you were tempted. Maybe you didn’t buy it because of the price. If you picked it up and looked disgusted, then it might draw a different conclusion. Computer vision can be used to link each person’s behavior, intent, and emotion with respect to a commercial product—even more accurately than your online behavior. Online you would click on stuff, but here your face is being captured, and it’s even more useful. After Face++, we saw that the day of AI would come.

There are obviously worrisome elements to such a tool. You’ve talked publicly about your worries surrounding AI development, mostly about the loss of jobs.

Yes. We’re already seeing it. Citi recently warned that big layoffs could be coming based on automation-related replacements. Entrepreneurs are trying to build things that save cost. There’s no way you can stop that. So yes, this is a big concern. For specific domains, AI will take over in a couple of years.

The first concern is what I call low-compassion, low-creativity jobs—probably half the jobs that humans have. These are for sure going to be taken by AI over the next 15 years. Maybe not a full job, maybe 60 percent, or 40 percent. And some economists say, oh, if you only take over 40 percent of a job, that doesn’t count. I think it does. If you have a pool of paralegals, and 40 percent of the job is gone, you’d lay off 40 percent of your pool, right? Or you’d pay them 40 percent less. That’s not acceptable. I think it’s a big social problem, and a lot of AI companies are not yet acknowledging it and starting to see what they can do.

You’ve talked about Fei-Fei Li at Stanford, and how we should listen to what she is saying about AI. Why?

I met her in 2016 when I took our entrepreneurs to the Bay Area. She was very inspirational. She talked about the future of AI and wanting it to be a lot more than just, you know, simple replacements of humans.

She talks about a symbiotic human-AI relationship, about interactive technologies that make human-AI interaction more productive and valuable. And an AI system that can improve itself, adapt to human capabilities, do more of what humans are not good at, and help humans amplify their own thinking and capabilities.

Humans will shine where it’s difficult for AI to replace them. Think about teachers. If an AI system shows that a kid doesn’t know multiplication, it can drill multiplication before moving on to division. The teacher would step in to find ways to encourage the child, to help find their curiosity. AI as the core—but humans as the delivery.

Kai-Fu Lee

Android or Apple?: “I have about 20 Apple devices in my home.”

First computer prank:
“In 1983, I wrote a blackjack program that stacked the cards. I always beat my wife. She never found out.”

Favorite April Fools’ prank:
“1993: At Apple, we put a Mac on top of an elevator, connected the speech recognition to the elevator controls, and put a sign ‘Talk to Me’ on top of the buttons. When people said ‘give me five,’ it would go to the fifth floor.”

So AI as a partner? Is that coming?

You could imagine a lot of really useful domains for AI, but there might not be enough economic incentive at the moment to go after them. These kinds of things—teachers, care workers—don’t necessarily make the best investments for a large company. They’re not immediately going to make money. And that’s why this is hard.

For example, a VC would probably never fund an elderly care company. VCs fund companies that have exponential economic returns, like Uber or something. I’m making this up, but you could imagine putting sensors on human elderly caregivers, so the machines learned about giving baths, cleaning the beds, that kind of stuff. But then how do we build an AI that’s able to do some of those jobs? And reduce hazards and deaths in those cases? There’s not much money in things like that.

With all this taken together—the capabilities of AI, the capabilities of humans, the way we invest now, the coming loss of jobs—what should we be doing?

Maybe we can start to change some human perception and beliefs. Maybe certain types of people don’t have to work as many hours. Maybe work isn’t going to be as important as today. If we feel elderly caretaking is an important thing to do, a responsible thing to do, we can make it high-paying.

How would we do that?

If you had a conglomerate that’s large enough, it could, within itself, make those decisions. I haven’t studied this enough to see if someone’s perhaps already doing that, but what’s going to be needed is for people’s pay to be based on a kind of hybrid of economic value and maybe social value, or moral value.

There would need to be some sort of system, some kind of a stipend. A government could say, for example, that your future social security is contingent either on learning new skills—skills that AI cannot do—or doing something of clear social value, like volunteering. And if you don’t do any of those things, then you only get subsistence-level food stamps and living-quarter assistance.

Do you imagine such an idea would ever be taken seriously?

I do. I think we have to. Otherwise, the 50 percent of people losing work will cause enormous turmoil for society.

And you think we’ll get there?

Yes, except people haven’t really optimized for that yet. It’s because there’s so much low-hanging fruit today for AI applications. Areas like loans, credit card fraud, e-commerce. Then there’s insurance adjustment, customer service, robotics, factory applications.

Do you think Fei-Fei will help shape the future?

Yes. I think of her as the conscience of AI. Most AI researchers are nerdy. They want to write papers, show results, and then go back to their labs. Very few would stand up and call for things that are important for the future of humankind. It is refreshing. She has a big heart. —Maria Streshinsky

Fei-Fei Li Is Bringing Humanity to AI

In 2012, Fei-Fei Li was thinking about two seemingly unrelated but troubling issues. She was on maternity leave from Stanford University and reflecting on her experience of being one of the only women on the faculty at the AI lab. At the same time, she grew concerned about some of the stereotypes about AI. “There was already a little bit of rumbling about how AI could be dangerous,” she says. It clicked that these concerns were connected. “If everybody thinks we’re building Terminators, of course we’re going to miss many people”—including women—who might otherwise be interested in AI but would be turned off by its aggressively negative image, Li adds. “The less we talk about the human mission, the less diversity we’ll have, and the less diversity we have, the more likely the technology will be bad” for humans.

This was particularly upsetting to Li because she had played a foundational role in the contemporary emergence of the field. In 2007, as an assistant computer science professor at Princeton, Li had embarked on a project to teach computers to read pictures. It was an endeavor so ludicrous, laborious, and expensive that Li had trouble getting funding. The project required people to tag millions of images; for more than a year, it was the largest employer on Amazon’s Mechanical Turk. The resulting database, ImageNet, became the key tool for training machines to recognize images; it’s part of the reason Facebook can tag you in a photo and Waymo’s self-driving cars can recognize street signs.

As long as she has studied computer science, Li has advocated for working across disciplines to make artificial intelligence more useful. At Stanford, she worked with medical school researchers to improve hospital hygiene. When she left Stanford for a two-year stint as chief scientist for AI in Google’s Cloud division, she helped lead the rollout of developer tools that let anyone create machine-learning algorithms.

This fall, Li returns to Stanford as a professor of computer science, though she continues to advise Google, and will help launch an initiative combining AI and the humanities. Her field, she says, needs to work with researchers in neuroscience, psychology, and other disciplines to create algorithms with more human sensitivity. This also means working with government institutions and businesses to ensure that AI helps people do their jobs rather than replace them. Li believes AI has the potential to free us from more mundane tasks, so we can focus on things that require creativity, critical thinking, and connection. A nurse, for example, might be freed from managing medical equipment so he can spend more time with a patient. “If you look at the technology’s potential,” she says, “it’s unbounded.” But only, she notes, if you put humans at the center. —Jessi Hempel

This article appears in the October issue.

