Interview with Dr. Milind Tambe

Dr. Milind Tambe is the Gordon McKay Professor of Computer Science and Director of the Center for Research on Computation and Society at Harvard University. He is also the Director of AI for Social Good for the recently announced Google Research labs in Bangalore, and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI). His research focuses on advancing AI and multi-agent systems for social good. He has worked on AI for public health, safety, and the protection of endangered species. He also focuses on fundamental problems in computational game theory, machine learning, and multi-agent systems. The English Press Club interviewed Dr. Tambe about his research and his views on AI. The following is a transcript of the interview, edited for length and clarity.

About Dr. Milind’s Work

How was your experience in BITS and Carnegie Mellon University? 

I did a four-year MSc. in Computer Science here (at BITS) during 1982–87. Coming to such a remote place and making new friends were great experiences. It was very formative, and the university provided a huge boost to my career. I went straight to CMU for my PhD after that. The PhD was intimidating in the beginning, as the transition from BITS to CMU was quite the culture shock. But I got to work with many great professors, like Allen Newell, from whom I learned a lot. Overall, it was a wondrous experience.

You have some very niche interests within AI. How did your interest in these areas develop?

I had initially thought that I would go into social welfare at the tail end of my career. But I've been very fortunate to pursue theoretical research and apply AI, finding a way to use it to advance the cause of social justice. As for my passion for social work, I grew up in Mumbai, and seeing many calamities and terrorist attacks inspired me to work in these fields. Subsequently, I've worked in the domain of security, applying AI to optimize security force allocation for counterterrorism purposes. We have deployed this with the TSA and the Coast Guard in the US. Similarly, in LA, one sees homeless youth on a daily basis, and I felt compelled to help them somehow. Instead of thinking, ‘I’d love to work on this but I have to focus on lab work’, I thought of combining the two.

Google mentioned two pillars that the Bengaluru labs will be working on: fundamental AI research, and applications in agriculture, healthcare, etc. Will you be working on both or just the latter?

I’ll focus on both. Thus far, I’ve usually oriented my research towards producing theoretical breakthroughs via application. The gap between the applied and the theoretical is not that huge; people often make the mistake of thinking they’re opposed to one another, but this is not the case. At the Bangalore research center, we’ll focus on applying AI for sections of the populace that haven’t benefited from it yet, people who stand to benefit tremendously from using AI in their work and lives. I’ll continue to push for using AI to assist wildlife conservation and public health. As the center expands, we’ll also focus on education, agriculture, and many other topics.

You have been hailed as a unique academic for your ability to combine “serious societal impact” with “rigorous academic research”. How have you been able to do so?

It’s a very difficult thing to do in general. One has to be really interested in the practical side of research to bridge this gap, and making that application a primary outcome of theoretical research is uncommon. It requires a lot of effort. I wish more people would take this approach, given its ability to minimize the delay between an AI breakthrough happening in the lab and it helping solve real-world issues. Part of the problem is that academic publications have incentives structured in such a way that the application of research is not really rewarded or judged on a separate basis. We’ve been trying to change the paper review criteria in academic journals so that societal outcome is given more weight. At the AAAI conference, I’ve been leading a track called “AI for societal impact”, wherein submitted papers get reviewed differently: we don’t just look at whether there’s been an algorithmic improvement, we also look at the societal impact of the paper. Since publications are an integral part of an academic’s career, influencing that will have a domino effect on how people get tenure and how they get reviewed, among other things. For example, my students who are professors now often lament how hard it is to publish in this domain, since a lower number of publications reduces their chance of moving up the ladder.

My theoretical focus has been computational game theory. Whenever you have multi-agent systems interacting with each other, the agents can be adversarial or have differing interests; these are the zero-sum and non-zero-sum parts of game theory, which involves figuring out how multiple agents interact with each other in a strategic fashion. For example, optimizing the use of police resources to determine where to place security checkpoints in a large area. We deployed this model at the Los Angeles airport. Even though some terminals get a majority of the traffic, placing checkpoints only at those terminals doesn’t make sense, as terrorists would then attack the terminals with less traffic. The allocations thus have to be randomized, and game theory and decision theory (which is where reinforcement learning originates from) give us methods to figure out how an adversary may react to changes in their environment. We have also used this to determine where park rangers should patrol to protect against trophy hunters in conservation parks.
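The randomized allocation Dr. Tambe describes can be sketched in miniature. In a highly simplified zero-sum security game with one patrol resource spread probabilistically over several targets, the defender’s optimal mixed strategy equalizes the attacker’s expected payoff across every target worth covering, so no single target is the obvious best attack. The function name, target values, and water-filling-by-binary-search approach below are illustrative assumptions for this sketch, not the actual deployed system:

```python
def maximin_coverage(values, resources=1.0):
    """Coverage probabilities c_i (summing to at most `resources`) that
    minimize the attacker's best expected payoff, max_i (1 - c_i) * v_i,
    for positive target values v_i. Zero-sum, one divisible resource."""
    # Binary search on k, the attacker's equalized payoff: covering
    # target i down to payoff k requires coverage c_i = 1 - k / v_i.
    lo, hi = 0.0, max(values)
    for _ in range(100):
        k = (lo + hi) / 2
        needed = sum(max(0.0, 1 - k / v) for v in values)
        if needed > resources:
            lo = k   # this payoff level needs more coverage than we have
        else:
            hi = k   # feasible; try to push the attacker's payoff lower
    k = (lo + hi) / 2
    return [max(0.0, 1 - k / v) for v in values]

# A busy terminal worth 8 and a quiet one worth 2 still both get covered:
coverage = maximin_coverage([8, 2], resources=1.0)
print(coverage)  # roughly [0.8, 0.2] -- attacker is indifferent
```

Note that the quieter target still receives nonzero coverage, which is exactly the point made above: deterministic coverage of only the busy terminals would make the quiet ones a sure bet for the adversary.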

On AI

Recently, a majority of research in AI has shifted to Deep Learning. Do you think that this has negative effects on the diversity of research in AI? Should research in other paradigms such as unsupervised learning be abandoned completely in favor of DL?

Yes, I do think so. Personally, my theoretical focus has been on computational game theory, for example. I’ve witnessed cycles in AI where one particular technology suddenly takes over, and DL has certainly carried its weight with its wide-ranging applications. As we get closer to the limits of these approaches, it’ll become evident that other facets of AI need to be explored. In our own research, we have worked with homeless shelters, with wildlife conservationists, and in other settings where there isn’t a large dataset to begin with. The kinds of approaches that become relevant in such conditions generally stray away from DL. There’s an entirely different paradigm called symbolic reasoning, which was the dominant paradigm in AI until about 20 years ago. This approach may see a resurgence as we inch towards computational systems capable of higher-order thinking. In my opinion, these approaches should be taught and practiced, so that research in these domains is kept alive.

What is the strangest problem you’ve encountered while conducting research?

This isn’t exactly strange, but a quirk I’ve observed is that people are often hesitant about applying AI to security work. One of my students was using campus police data to determine where crimes might occur in and around campus. When we presented this at a conference, people were generally taken aback by our findings. This alerted me to the fact that people are quite concerned about the predictions that AI may make, and one has to be careful when dealing with vulnerable sections of society and ensure there aren’t any inherent biases in the model.

How adversely do you think the advent of AI will affect the job market? 

During my time at Pilani, there was a huge concern in India about computers taking away a large chunk of jobs. I remember a bankers’ union restricting the deployment of computers in banking offices for fear of having their jobs automated. In fact, some IBM engineers told me that they had to install desktops in factory offices covertly so that the workers wouldn’t riot. Cut to the present, and computers are everywhere. Computerization has helped the country immensely. While people saw all the jobs that would go away, they couldn’t even fathom the new ones that would come up. I’ve been to numerous meetings with economists and researchers on both sides of this issue.

In my opinion, there needs to be a rehabilitation of sorts for the classes of society that will face the brunt of job loss, like truck drivers when it comes to self-driving tech. While many people will face some crises in this transition, I believe it’ll be a good change in the end. In that sense, I’m more optimistic about AI’s effect on jobs.

What are the next milestones in AI research that you anticipate or are most excited about?

In general, we’ve seen ML and its applications become more abundant. So far, the domain of algorithms and the ML domain have been quite different. I’m interested to see how the research community addresses the need to integrate other facets of AI (like computational game theory) to influence ML models. Inserting other aspects of AI instead of just doing Deep Learning end-to-end is a challenging task, but it will be an interesting trend to see in the near future.

Working In AI

What advice would you give to someone looking to work at the Google Research labs on ML/AI?

We need lots of people who excel in ML/AI. I would advise those interested to develop working expertise in these fields and keep themselves up to date with the state of the art. The people I would want to hire are those who have a genuine concern for the cause and can work in an interdisciplinary manner to address critical issues. Google India Research as a whole will be looking to hire researchers from many domains, not just those working on the social impact of AI.

How can students who don’t necessarily have access to data/computing power that Google or Harvard provide, harness AI to make an impact?

This is an important concern, not just for researchers but for the actual deployment of AI models. You can’t expect a forest ranger in Uganda to be perennially connected to a high-performance computer. Addressing the situation of low computing power is a research problem in and of itself, and new algorithms are required to work within such restrictions. As for college students, having BITS students work in the Bangalore labs for their PS-II projects is certainly something that can be worked on.

Do you have any advice for the average BITSian?

I would advise students to start working on real-life projects as soon as possible. They could, for example, do a project at CEERI, or work on some cutting-edge project with other organizations and NGOs to solve some real problems. Research in AI is moving so fast that even the newest textbooks get outdated in a couple of years, so practical experience with the newest tools and frameworks is crucial.

BITS is a wonderful place with a lot to offer, and making the maximum use of the available opportunities is very important for anyone.