Sunday, October 9, 2011

Algorithms: a success story (Part 3/2)

To recap:

1- an applied problem was modeled as a standard combinatorial optimization graph problem (prize-collecting Steiner tree; see the sketch just after this list).
2- an algorithm was designed for it, and the authors write that this was critical to the project's success. It is based on a general-purpose algorithmic design paradigm that is just now starting to receive a little bit of attention in TCS.
3- the algorithm was implemented.
4- the program was used for the input arising from the applied problem. That input was neither random nor grid-like nor anything so well structured (in which cases some ad hoc heuristic might have been better suited than general-purpose algorithmic techniques), but seemed like a pretty arbitrary graph.
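
For concreteness, here is the objective of the prize-collecting Steiner tree problem in its standard penalty formulation, as a minimal Python sketch (the paper's exact variant may differ): a candidate tree pays for the edges it uses, plus the prizes of the vertices it leaves out.

    # Minimal sketch of the standard prize-collecting Steiner tree objective
    # (penalty formulation); not code from the paper under discussion.
    def pcst_cost(tree_edges, tree_nodes, edge_cost, prize, all_nodes):
        """Cost of a candidate tree: edge costs paid plus prizes forfeited."""
        edges_paid = sum(edge_cost[e] for e in tree_edges)
        prizes_forfeited = sum(prize[v] for v in all_nodes if v not in tree_nodes)
        return edges_paid + prizes_forfeited

The algorithm's job is to find the tree that minimizes this quantity.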

Is this about algorithms? Yes.
Did this work have impact? Yes.
Is this good work? I believe so.
Would it have found its place at SODA? No.

I never see any paper of that type at SODA. It would not fit in, because it does not have a theorem. When I go to Operations Research conferences, I see some presentations of the type: "Here is a problem that came up in my company (for example, a scheduling problem). Here is how I modeled it, and the heuristic I used to solve it. Here are my experimental results after implementation. We beat the previous results by x%." But I never see that kind of presentation at SODA. Isn't that regrettable? Even if such presentations are sometimes not so exciting theoretically, wouldn't they serve an important role in keeping us grounded, connected to the real world, and in helping guide our taste?

The ALENEX workshop might serve that purpose, but in reality I don't think it does. As far as I can tell, it's mostly about implementing algorithms - taking the proof-of-concept that a high-level algorithm really is, and translating it into a real program, dealing along the way with a host of issues that come up. That is a necessary step for sure, to prevent sophisticated algorithms from becoming disconnected from programs, but it's only about linking steps 2 and 3 in the above list, not about the context given by steps 1 and 4.

The thing that I find really challenging is step 4. Step 1 is easier. We do, once in a while, read papers from other fields and try to make up a theoretical model capturing what seem to us to be the interesting features. But then we go off and merrily design and analyze algorithms for that model, without ever getting to step 4 (actually, we usually don't even get to step 3). All it gives us is a good motivational story for the introduction, and a way to feel good, reassuring ourselves that what we do potentially matters and is hypothetically relevant to something. But that hypothesis is never put to the test.

If that were just my own research modus operandi, it would not be a big deal: I concentrate on the part of the work where my skills lie, I do my part there, others will do their parts as well, and altogether the field can advance smoothly. But when it is the entire field that seems to share my lack of concern for demonstrated ultimate practical impact, then I think that we might have a problem.

17 comments:

  1. There are at least a few papers like that at SODA occasionally. The SODA charter explicitly asks for applied algorithms work, and such papers do sometimes get published.

  2. Neal, that's a great example. But SODA accepts roughly 130 papers per year: how many are of that type? Was 1998 the last time a paper like that was accepted? I don't know if they even get submitted any more.

  3. I don't think anyone's saying that such papers never get into SODA, or that they couldn't. But they're quite rare.

    Here's one way I always try to look at it: how many papers in SODA include performance results? It would be interesting to see a plot of such papers over time (and I admit I haven't checked carefully in a while). Obviously one wouldn't expect an actual implementation and results in every paper, but given that this is a symposium on Algorithms, it's always remarkable to me how rare an event it is.

  4. I have no idea how many SODA papers include performance results.
    It seems that this cannot be computed automatically: we would need a human for that, and it would take some number of hours. It would be an interesting statistic, but who's going to take the time to do it? The weather outside is so nice...

  5. So far as I can tell, most of the algorithms community seems not to attach much value to such papers. As a result, there is little incentive to write them.

  6. It certainly seems that much of the algorithms community does not believe that SODA is the right venue for such papers. (Whether the community values these papers is harder to judge.) Perhaps it's more common to see such papers appearing in applied venues like WWW, which accepts theoretical papers in tracks like Internet Monetization, Social Networks, etc.

    Coming back to SODA, Cliff Stein (on behalf of the ALENEX steering committee) recently proposed merging ALENEX with SODA, but the idea was shot down. It's not clear that ALENEX papers go through the full range of steps 1-4, but Cliff's proposal would likely have meant at least more papers with some performance results, and perhaps an evolution towards more comprehensive papers that cover all four steps.

    Over the last year at Google, I've begun working in this manner, from modeling problems to algorithm design to implementation and testing on 'real' instances. The last step is obviously crucial if one wants an algorithm to be actually used anywhere. The process is surprisingly enjoyable, and the performance on real instances often reveals interesting things about the algorithm. I'm sure people at MSR, Yahoo! Research, and others have similar experiences.

  7. Nitish: that work at Google sounds great. I believe that this is one reason why industrial research labs play a critical role for academic research in the US.

  8. There is also a point of view that maybe SODA is not the ideal venue for such papers. Assuming papers are submitted to a single conference, it seems to me that science is better served by the paper appearing in a conference where computational biologists would read it.
    Additionally, does our community really understand enough about (in this case) computational biology to judge how interesting, novel, or important the question studied is? Maybe it does in this case, but an equally impactful comp bio work could easily be about something less popular than proteins.
    I think this is a problem with interdisciplinary work in general: neither community may fully understand it. Maybe what this post and some commenters are trying to say is that the communities should work more closely together and that we should have comp bio people on the committee. That is a valid point and would solve part of the problem.
    But given the wide applicability of algorithms, there will be areas where algorithm design helps that are not represented on the committees. Eventually, the community needs to find other ways than SODA papers to incentivize such work. Maybe have a session or two on invited papers from other conferences to be re-presented. Or better still, highlight such works on blogs as examples of research that the community appreciates. :)

  9. There was in fact a SODA paper on applications of the prize-collecting Steiner tree to network design that contained implementation results (as well as new theorems, most of them correct...): "The Prize Collecting Steiner Tree Problem: Theory and Practice," Johnson, Minkoff, and Phillips, 11th SODA (2000), 760-769. This paper studied the effectiveness of the Goemans-Williamson primal-dual algorithm for the PCST, using the same idea of performing runs with different scaling factors. Did the "belief propagation" paper compare its results to those for Goemans-Williamson?
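
    To make the scaling-factor idea concrete, here is a rough sketch (not the code from either paper): rerun a PCST solver with the node prizes multiplied by different factors, and keep the tree that is cheapest under the original, unscaled objective. The solver itself is passed in as a parameter; only the outer loop is illustrated.

        # Rough sketch of the "runs with different scaling factors" idea.
        # `solve_pcst` stands in for any PCST heuristic (e.g. a
        # Goemans-Williamson style primal-dual solver); it is assumed to
        # return the chosen tree as a pair (edges, nodes).
        def best_over_prize_scalings(solve_pcst, nodes, edge_cost, prize, factors):
            best_tree, best_cost = None, float("inf")
            for f in factors:
                scaled_prize = {v: f * p for v, p in prize.items()}
                tree_edges, tree_nodes = solve_pcst(nodes, edge_cost, scaled_prize)
                # Evaluate under the original, unscaled objective.
                cost = sum(edge_cost[e] for e in tree_edges)
                cost += sum(prize[v] for v in nodes if v not in tree_nodes)
                if cost < best_cost:
                    best_tree, best_cost = (tree_edges, tree_nodes), cost
            return best_tree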

  10. Oh, right, I remember that paper!

    The "belief propagation" paper is accessible at
    http://www.pnas.org/content/suppl/2010/12/22/1004751108.DCSupplemental/Appendix.pdf
    They say: "we have run a systematic check of the performance of the algorithm over the SteinLib database, a renowned benchmark repository of minimum Steiner Tree problems [5] (http://steinlib.zib.de)"

    And then later:
    "A comparison on a subset of these instances was also done with the algorithm DHEA from [6] linked to the library CPLEX. "

    I don't know if [6] is based on the Goemans-Williamson algorithm, but they do cite both GW and your paper with Minkoff and Phillips.

    [6] Ljubić, I., Weiskircher, R., Pferschy, U., Klau, G., Mutzel, P., & Fischetti, M. (2006) An algorithmic framework for the exact solution of the prize-collecting Steiner tree problem. Mathematical Programming 105, 427–449.

  11. I would venture that this particular paper had more appeal to you because of belief propagation rather than because of the applied aspect. If it had been the same application but the algorithm had been based on a cutting-plane approach or a bounded-treewidth approach (just to give two examples of standard techniques), it might not have been as appealing.

  12. Chandra: Indeed, if the algorithm were based on a well-understood technique such as bounded treewidth, I would not get so excited. One reason for my enthusiasm is certainly the use of a poorly understood method, which signals that there is something for us to explore and that we are missing out on something significant.

  13. The paper that references Johnson-Minkoff-Phillips is the Bailly-Bechet-Braunstein-Zecchina paper available at http://www.normalesup.org/~mbailly/MBaillyBechetLNBI2009.pdf
    which preceded the paper Claire initially cited. Our paper is only cited for the problem definition, not for the algorithms, and I didn't spot a Goemans-Williamson citation. They clearly did not test the GW algorithm, which doesn't involve calls to an LP solver like CPLEX. For all we know, GW could have done just as well as or better than belief propagation for their application. If I had been a referee for their paper, I would have insisted that they compare their results to the current state of the art (GW), or at least provide a good explanation for why they chose to ignore it and invent a new algorithm.

  14. In electronic design automation (EDA), most of the optimization problems are NP-hard, but they need to be "solved" because designers need EDA tools to design their chips. As a result, the EDA conferences and journals are full of papers of the type you mention above. Some top conferences in this area are the Design Automation Conference and the International Conference on Computer-Aided Design. Both IEEE and ACM also have journals on design automation, e.g., ACM Transactions on Design Automation.

  15. Thanks to David Johnson for his comment.
    We did not interpret the GW algorithm or its derivatives as being the state of the art in the recent literature.
    We decided to focus our comparison mainly on LP-based solvers, toward which the literature is heavily biased (e.g., in the SteinLib repository, admittedly only minimum Steiner tree and not general PCST, the best bounds on open problems come from such solvers).
    Our PhD student (Indaco Biazzo) has undertaken this comparison (both on the biological networks and on random graphs), and we must say that although the GW+Pruning+MST algorithm is not the winner, it gives very good performance.
    We observe that on the biological networks (yeast PPI), the cost is about 2% worse than the BP or LP result (which are very close to each other). On the random graphs we tested, this difference could grow to as much as 5%.
    Regarding time complexity, GW is quadratic in the number of nodes, while we expect BP to be better (the exact asymptotics are not easy to compute because the number of iterations needed for convergence is variable, but empirically it is about linear in the number of nodes for a sparse graph).

    The rationale for designing a new algorithm was that the final goal would be to redo these analyses on human protein networks, which are much larger, and we did observe the LP-based algorithms to scale badly with system size. We will surely keep the GW algorithm in mind in future work; it might be relevant for very large networks.
    We briefly mention that our method is not simply BP but a generalization of BP which improves convergence.


    Alfredo Braunstein and Riccardo Zecchina

  16. Ali Dasdan: thanks for dropping by. I looked at the web page of one of those conferences; the topics look interesting. I plan to take a closer look on the hypothetical day when I have more time...

