Wednesday, February 2, 2011

How program committees shape the field

Today I got a STOC submission rejected. It's a technical result on which I (along with my coauthors) spent many months building a fragile construction of techniques that had to be tweaked to fit together in just the right way. I had been working on it off and on since 2007, if not earlier. From a purely technical viewpoint, it may be my most intricate paper ever. So, what to make of this rejection?

Maybe they didn't like the paper because of all of its technicalities. In which case, in order to get papers accepted in the future, I should stick to less complicated projects.

Maybe they couldn't follow the arguments. In which case, in order to get papers accepted in the future, I should stick to less complicated projects.

Maybe they didn't appreciate the difficulties and thought it was poorly written. In which case, in order to get papers accepted in the future, I should stick to less complicated projects.

Maybe they didn't care for the result. In which case perhaps my time was not well invested - huge amounts of time spent solving a problem that people apparently don't care about. In which case, in order to get papers accepted in the future, I should stick to projects with a more immediate appeal.

Maybe the paper really was poorly written. Yet we spent months just on the writing, which was almost beyond us. Am I up to spending another couple of months on this proof to see if it can be streamlined? I don't think so. In which case, in order to get papers accepted in the future, I should stick to less complicated projects, where I master the big picture.

Maybe they were not impressed. Maybe I only imagine it's difficult because my brain doesn't work as fast as it used to when I was a student, and, actually, the result is not that hard. Hmm... I don't like that idea. I will need more evidence before I'm ready to entertain that possibility!

I had been planning to devote a good part of my time to follow-up projects, digging deeper into even more technical developments. Maybe I should reconsider.

Or maybe I don't care. After all, what's tenure good for, if not to pursue our research interests even when the wind of fashion (or the majority's taste) blows in a different direction?

But maybe - gasp - a reviewer found a mistake in the proof! Since that proof is a bit too big to fit in the cache in my brain, I was only able to check it one part at a time, so an error is a real possibility. That's a worrisome thought... what if the result of our efforts fell apart?

Then, there is also the problem of advising students: maybe this rejection suggests that this research direction is not a good one to steer them towards, because it's high-effort, low-reward, and would not be good for their future job applications. That's another concern.

And that is how program committees shape the field.


  1. Likewise (w.r.t. a STOC submission). I guess I'll wait for the feedback before writing more, but "high effort, low reward" seems to be a defining feature of theoretical CS.

  2. I got the exact same feeling some days ago. I got a paper rejected (despite some good reviews) from a prestigious conference, after spending months on it.

    It seems that the "best" approach is not to take too many risks. Look for the little, easy-to-get improvements. And, yes, publish as many of those papers as you can! Quantity matters. Quality does not.

  3. Funny what everyone writes.
    I got the exact same feeling, but standing on the "other" side of the road:

    When I was an M1 student (first year of a Master's in Europe), I had two internships to do: one of 3 months, the other of 2 months.

    At the end of both internships, I felt like my contribution had been close to null. Yet both my advisors were happy with what I did and wanted to submit it to a conference.

    The first one has already been accepted; I'm still waiting to hear about the second.

    I am actually a bit ashamed to have my name on it, and I fear the possible consequences in the future.
    I really felt like the people reading those papers did not pay attention to their real difficulty (even if the result in the accepted paper was interesting because of its "novelty", there was not a lot of work in it).

    I did not really understand why my advisors (who are both well-known professors in their fields) needed to publish something like this instead of waiting and working on these a bit more.

    Now, could I have said that I did not want my name on those papers (given that I did most of the work)?

  4. Why don't you submit your paper to a journal?
    The reviewers will have more time to understand the technical details, and if they think they have found a mistake, they will tell you where, and you will get a chance to correct it.
    That's the way things work in all other scientific fields (and even, I believe, in CS journals).

  5. This comment has been removed by the author.

  6. Acceptance and rejection are somewhat random. A professional poker player once gave the following advice: during a run of bad luck, many players question their playing strategies, try to adjust them based on the recent experience, and end up with a worse strategy. As long as your strategy is good in the first place, you should stick with it, especially during a run of bad luck.

  7. (Please note this is meant tongue-in-cheek, and I'm not actually angry at any reviewers. Some review comments were helpful, and some of the reviewers are probably my colleagues and friends. :) )

    Dear STOC Program Committee:

    Here is my review of the reviews for my recently rejected STOC submission. I regret to inform the reviewers that I must recommend rejecting these reviews from consideration for the following reasons:

    - One review makes the false claim that we achieve a suboptimal quantitative bound of S, when in fact we achieve the optimal bound of O, as we plainly state in the abstract, introduction, and theorem. The review even calls out the fact that S is suboptimal and O would have been better, so this was not merely a typo. I encourage the reviewers to carefully check the abstract (and, if they have another 5 minutes free, the introduction) before re-submitting their review to another conference.

    - One theorem answers an open question from a STOC 2002 paper (by completely different authors), and was characterized as "unmotivated" by a reviewer and "uninteresting" by others. None of the reviews mention the fact that this was a STOC open question.

    - One review claims we use "no new techniques" as we answer the STOC 2002 open question and make the first progress in 10 years on a STOC 2001 open question. I suggest the reviewers carefully check the details of our proofs to help them notice the new techniques we required to make progress on these questions, even if we didn't make up ridiculous names for our techniques to puff them up.

    - Finally, the reviews introduce no new techniques. The above techniques were explicitly used to reject a good paper from STOC 2005, and were implicit in a review from a FOCS 1987 rejection that later went on to win best paper at STOC.

  8. I received the detailed reviews, and although they were interesting, they did not shed much light on the reason for rejection. Oh well.

    Pascal, journals are good, but they get a lot less exposure than conferences, so a result that is only published in a journal takes a lot longer to get disseminated in the wider research community.

  9. @Claire: I am not sure that conferences get more exposure than journals. I think (without any data, and from personal experience only) that journals are subscribed to by many more people than attend the conferences. That said, I may be wrong when it comes to CSE.

    CSE is a unique field inasmuch as it is more conference-oriented than other fields (EE, biology, etc.). That is why the reviewing/acceptance/rejection cycle for conferences becomes a significant event in CSE. Moreover, these decisions for conferences are generally much faster than the corresponding iterative reviewing process for journals (except PLoS).

    This may, unfortunately, lead to summary rejexcutions (sic) and, in the process, make CSE, as you suggest in the post, a tad more volatile than other fields.

  10. Claire --

    In an attempt to be controversial, let me suggest an alternative view of the problem -- why do we give PCs so much power anyway?

    I'll be at a workshop focused on information theory next week. They invite people to come present; the papers will show up in the IEEE online system. It's essentially a 5 day conference, with no reviews. (Inf. theory people send extended versions of important stuff to journals.) The Allerton workshop for years worked in a similar way. Their flagship conference, ISIT, does have a PC but has about a 50%+ acceptance rate; attendance is usually in the 700+ range.

    There are pros and cons to both systems, to be sure. But I'm always surprised by the negative reaction TCS people have to this approach (when I've brought it up). It's not as if there weren't plenty of good TCS papers; why not have a large-scale, less "prestige-oriented" conference, and cut back a bit on our rejection-oriented culture?

    (At the other extreme, of course, there was just the SIGCOMM deadline; I wonder if it will have the hyper-competitive 10% acceptance rate again...)

  11. Mike, in my case the problem was solved by a good night's sleep, so it doesn't seem like such a big deal.

    I am keeping this in mind for my next suggestion of research project for PhD students, though.

    As for changing the way the field works, I suppose that those of us who have tenured jobs can do it gradually in our choices of how we spend our time (reviewing journal submissions or conference submissions? Accepting PC tasks or journal editor tasks? Attending selective conferences or workshops such as the one you're mentioning?) and of what we do with the result of our research.
