Uri Feige popped in for a brief visit to the Microsoft Research Theory group. After lunch on Thursday, Yossi Azar, Nikhil Devanur, Uri, and I stood in the hallway of Building 99 chatting. Here is my recollection of our conversation.
Claire: Why do you do research?
Uri: Because I get paid for it.
Yossi: Come on, we all know the real answer to that question. We do research because we enjoy it, of course. It's fun.
Claire: How do you pick research problems?
Uri: There are two ways. Some problems I pick because I think that they are good problems, that the questions are important, that they have lasting value. The other way is opportunistic. There are problems that I work on because I can. They come up, and I know how to solve them.
Claire: Can you give an example of a paper of yours that was opportunistic?
Uri: I have many papers that are opportunistic. It would be a better question to ask me which papers are not opportunistic.
Yossi: For example, your result on the hardness of approximation of set cover.
Uri: My own contribution to that paper was opportunistic. It's not one of my favorite papers, actually, in terms of my personal investment in it. I had the right skills at the right time, for a problem that begged to be solved, so, for me, it was a matter of applying the techniques that I already had available at that time. At the time there was a different question I really cared about: parallel repetition. That was quite important to me even before it started having impact. I was interested in understanding it and willing to invest a lot of time into it.
Claire: Do you wish you were Ran Raz?
Uri: I am happy being myself, thank you.
Claire: Tell us more about questions that have lasting value.
Uri: I am not so sure what people will think, in one hundred years, about the distinction between polynomial and exponential time. Maybe they will think it was an exotic topic that researchers in the late 20th century and early 21st century got stuck in, some odd topic temporarily in fashion.
Yossi: I'm not sure I agree.
Uri: Also, our evaluation of running time is based on asymptotics. How can you be sure that asymptotics have lasting value? It all depends on how large n has to be. If n is more than something like 2^50, or, let's say, 2^100 (just to be on the safe side), then no one cares whether an algorithm takes polynomial or exponential time.
Claire: No one cares about anything at all when n is 2^100.
Uri: No, that's not true. 2^100 may come up, for example, if you are manipulating implicitly defined subsets of objects. Then, what you want is a sublinear algorithm.
Nikhil: Approximation, I think, is more of a weak point in our models. As for why we care about approximation ratios, I am not sure it's so fundamental. Between approximation ratios of log(n) and log(n)^2, I don't know that it really matters.
Uri: Also, consider the area of algorithmic game theory. There, many questions come up that have to do with people's behavior. How good the questions are depends on how faithfully our models capture the behavior of human beings. But it is not clear whether our questions are the right questions, whether that's the way in which people think. It's not like physics.
Nikhil: There is nothing similar to the laws of thermodynamics.
Uri: So we don't really know whether what we do has any lasting value.
Nikhil: What do you prefer, algorithms or lower bounds?
Uri: I prefer algorithms. Then there are some problems, for example some "hat" problems, that are unrelated to algorithms or to lower bounds but that I personally care about, although solving them won't make a STOC or FOCS paper. As for computation, I am not sure how fundamental or long-lasting it is. Some integrality gap results have lasting value, more than the corresponding NP-hardness proofs. The hardness result may matter more at present, but in the future, gap results might have a more lasting impact: they do not depend on any Turing machine or model of computation.