Kai-Fu Lee is the chairman and CEO of Sinovation Ventures, a venture capital fund that invests in early-stage Chinese and U.S. companies. Before founding Sinovation in 2009, Lee was the president of Google China. He also held executive positions at Microsoft, SGI, and Apple. He received his bachelor’s degree in computer science from Columbia University and a PhD from Carnegie Mellon University. Lee is the author of eight books, most recently “AI Superpowers: China, Silicon Valley, and the New World Order”.
Kai-Fu Lee spoke with YCW Beijing in September 2018.
Young China Watchers (YCW): How did you become interested in artificial intelligence (AI), and what prompted you to devote your career to it?
Kai-Fu Lee (KFL): It seemed like the natural geeky thing to do at the time. As scientists, we tried to understand how humans think by building machines that act like people. Between the prospect of scientific progress and the science fiction movies we had watched, it seemed like an exciting thing to do. Of course, I was all wrong about those assumptions. What we ended up building is a fantastic pattern-recognizer. AI today is no more than a glorified version of the machine-learning work that was invented in the 1960s and 1970s. The main difference today is that the algorithms have gotten better, we’re better at building them, and, most importantly, we have far more data. We didn’t realize at the time that the amount of data would make the biggest difference.
YCW: What are the biggest misconceptions Americans have about AI and technology in China?
KFL: I think there is a belief that the U.S. is way ahead and that China is way behind, and that’s only true in a qualified way. The top 100 or top 10,000 researchers are mostly American, but if you look at the top 100,000 or top 1 million, you’ll see that a large percentage are Chinese, and they’re climbing the pyramid.
Back in the 1980s, the AI community started to use the same data and the same conditions for testing, so when anyone made improvements, everyone gained. There was no notion of national superiority in academics. Today the U.S. gives out more and other countries have more to gain; over time, as Chinese researchers get better, they’ll give more. It’s a mistaken belief, perhaps exacerbated by Putin’s comment that whoever dominates AI dominates the world. He’s wrong. It’s inaccurate to cast AI in a cold-war framework when AI is naturally an open community. I think the whole world of AI wants to make progress together, and these efforts to define it nationalistically are misguided.
The other misconception is the notion that China just has copycats. The top rising AI startups in China are phenomenal. If you look at Tencent and Alibaba, they have some of the most powerful datasets anywhere. Take mobile payments: These two companies easily have 100 times more data than PayPal, and probably 10 times more than Mastercard or Visa, and neither Mastercard nor Visa knows how to use AI in the same way.
YCW: You write in your book, “AI Superpowers: China, Silicon Valley, and the New World Order,” that the battle for supremacy won’t be fought in the U.S. or in China but in secondary markets and developing countries. Who do you predict will win that battle and why?
KFL: Both will be successful. English-speaking countries and Western Europe will continue to use American technologies. It’s unlikely that the Chinese will make any inroads there because of the economic bonds and the strong entrenchment of Google, Facebook, and Amazon. Chinese companies have been very clever in building partnerships with Southeast Asian countries, the Middle East, and Africa. They do not expect to go in and implement a full platform under a Chinese brand. Instead, they partner with a local company and provide the technology so the local company can offer a service equal to what American companies offer, but tailored to local consumers. Those regions will use more Chinese technology, but through local products.
YCW: Data collection has come under fire in the U.S. and Europe with the Cambridge Analytica story. Chinese consumers rejected Baidu CEO Robin Li’s assessment that they valued convenience more than data privacy. How do you think consumer attitudes toward their data being used will impact the advancement of AI, and do you expect governments to limit data collection moving forward?
KFL: Companies like Facebook, Baidu, or Google currently collect data to use within their applications in ways that comply with local laws and deliver user benefits, and I think there’s nothing wrong with that. A fundamental question to answer, though, is: Can an internet company collect your usage data to make the product better and help the company earn more money, and if that is okay, what are the limits on what it cannot do? Europe has the General Data Protection Regulation (GDPR) to help address this. I think there will be GDPR-like regulation in the U.S. and in China. Most likely the U.S. version will be a little weaker than Europe’s, and China’s a little weaker still, but protecting consumers from bad behavior by companies is a fair thing to do, and it’s going to be up to each country to set its own standards.
A more serious issue is the transfer of private information: Once you give data to a third party, it’s hard to control, and that is a clear case of irresponsible behavior. That’s what happened at Facebook. In this respect, China already has laws stricter than the U.S.’s, as selling data is a criminal offense in China.
YCW: You argue that in the future, wealth will be concentrated in the hands of a few behemoth companies that will need to disperse that wealth among the general population. Companies today are well-known for their ability to skirt taxes and store their money in offshore accounts. Why would this change in the future?
KFL: In the next 10 or 15 years, it will have to change. The wealthiest 1 percent of people in the U.S. have already accumulated more wealth than the bottom 50 percent, and they’re on their way to having more wealth than the bottom 90 percent in the not-too-distant future. That has not happened before. We’re reaching historic levels, and authors are starting to paint the future in a very bleak way. I’m not talking about robot overlords but about the use of the word “useless” to classify people, and I think it’s dangerous for us to drift toward a societal view where the majority of people are branded useless. A lot of our resources will need to be spent on retraining and the redistribution of wealth; otherwise, I don’t see how the world can remain stable.
No matter the form of government, there is a public responsibility to ensure that citizens have the means to lead a self-respecting way of life moving forward. That is the first priority, whether a government chooses to use the power of AI to give people more things to look forward to or to redistribute wealth in a society that is self-organizing and perhaps more self-regulating. One way or another, the core issue isn’t blaming AI, and it isn’t restricting government’s ability to use AI. We, as inhabitants of Earth, have to focus on the things we can control, and I’d like to believe that all governments want their people to do better. No government wants the creation of a useless class, so energy should be put toward win-win scenarios where governments will see why redistribution is needed.
YCW: Where would you recommend young people today put their time and energy?
KFL: First and foremost, learning how to learn is the most important skill, because change is going to come faster than ever, and whatever you do learn will become obsolete faster than ever. The hottest jobs of today didn’t exist 10 years ago, and that will remain true moving forward. Whatever profession you pick, find the areas where AI tools are emerging, then learn and embrace them. By mastering them, you can amplify your capabilities. Avoid professions that AI is clearly going to take over, like stock trading. My last piece of advice is to go into a field with human-to-human interaction, because it will become more important than ever; that’s something machines won’t be able to do. People aren’t going to accept having a robot doctor, nanny, or teacher.
— Interview by Jordyn Dahl