I am getting tired of rejecting submissions to the SIAM Journal on Computing. I keep getting referee reports along the lines of "interesting work, but not quite up to the very high standards of SiComp". What does that mean? It's not like a conference, where we see all the submissions simultaneously and can easily put them side by side and decide which ones we like better. Instead, it's some kind of absolute threshold that apparently exists only in people's heads. Because the mandates of SiComp specify that the journal must be extremely selective, I have been following referees' advice and rejecting papers that are fine papers, albeit clearly not contenders for best paper awards at STOC or FOCS. But I'm not sure I believe in those hypothetical thresholds.
When I worked for Algorithmica, I normally did not accept a submission unless at least one referee stated with confidence: "this is a clear accept". But for SiComp, hardly anyone seems willing to be so affirmative. The journal is supposed to publish the "most significant work taking place", and that is supposed to translate into "significantly more selective than STOC/FOCS/SODA", but it's depressing to reject papers that are clearly worth publishing somewhere.
So I am looking for more objective criteria. For example, between the time when the result appeared on arXiv or at a conference and the time when it was submitted to the journal, has the paper been read? Is the result known to people in the field? If so, that's a sign of impact, and maybe it should count as compelling evidence to accept the paper. Could I ask referees to compare the paper to similar results previously published in SiComp or in JACM? That would be extra work for them. Could we ask authors to provide a short list of comparable papers previously published there? Then at least I would have a benchmark, and would not be working in that murky area where the "good, but not good enough" assessments sound so arbitrary.