1- an applied problem was modeled as a standard combinatorial optimization graph problem (prize-collecting Steiner tree).
2- an algorithm was designed for it, and the authors write that this was critical to the project's success. It is based on a general-purpose algorithmic design paradigm that is just now starting to receive a little bit of attention in TCS.
3- the algorithm was implemented.
4- the program was used for the input arising from the applied problem. That input was neither random nor grid-like nor anything so well structured (in which cases some ad hoc heuristic might have been better suited than general-purpose algorithmic techniques), but seemed like a pretty arbitrary graph.
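To make step 1 concrete: the prize-collecting Steiner tree problem asks for a connected subgraph that balances the cost of the edges it uses against the prizes of the vertices it leaves out. The sketch below is not the algorithm from the work described above (which used a far more sophisticated technique); it is just a brute-force solver for tiny instances, with a made-up input format, to pin down the objective being optimized.

```python
import itertools

def mst_on(chosen, edges):
    """Kruskal's algorithm restricted to the vertex set `chosen`.
    Returns the MST cost, or None if `chosen` is not connected."""
    parent = {v: v for v in chosen}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    cost, merged = 0, 0
    for (u, v), w in sorted(edges.items(), key=lambda kv: kv[1]):
        if u in chosen and v in chosen:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                cost += w
                merged += 1
    return cost if merged == len(chosen) - 1 else None

def prize_collecting_steiner_tree(nodes, edges, prize, root):
    """Brute-force PCST: over all vertex subsets containing `root`,
    minimize (tree edge cost) + (sum of prizes of excluded vertices).
    Exponential time -- only for illustrating the objective."""
    best_cost, best_set = float("inf"), None
    others = [v for v in nodes if v != root]
    for r in range(len(others) + 1):
        for subset in itertools.combinations(others, r):
            chosen = {root, *subset}
            tree_cost = mst_on(chosen, edges)
            if tree_cost is None:
                continue  # this subset does not induce a connected subgraph
            penalty = sum(prize[v] for v in nodes if v not in chosen)
            if tree_cost + penalty < best_cost:
                best_cost, best_set = tree_cost + penalty, chosen
    return best_cost, best_set

# Tiny example (hypothetical data): connecting 'a' costs 1 but saves its
# prize of 10; 'b' is too expensive to reach for its prize of 2.
nodes = ["r", "a", "b"]
edges = {("r", "a"): 1, ("a", "b"): 5}
prize = {"r": 0, "a": 10, "b": 2}
cost, tree = prize_collecting_steiner_tree(nodes, edges, prize, "r")
```

On this instance the optimum keeps {r, a} (edge cost 1 plus the forfeited prize of 2 for b, total 3); real inputs of the kind described in step 4 are exactly where such enumeration breaks down and general-purpose algorithmic techniques earn their keep.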
Is this about algorithms? Yes.
Did this work have impact? Yes.
Is this good work? I believe so.
Would it have found its place at SODA? No.
I never see any paper of that type at SODA. It would not fit in, because it does not have a theorem. When I go to Operations Research conferences, I see some presentations of the type: "Here is a problem that came up in my company (for example, a scheduling problem). Here is how I modeled it, and the heuristic I used to solve it. Here are my experimental results after implementation. We beat the previous results by x%." But I never see that kind of presentation at SODA. Isn't that regrettable? Even if such presentations are sometimes not so exciting theoretically, wouldn't they play an important role in keeping us grounded, connected to the real world, and in helping guide our taste?
The ALENEX workshop might serve that purpose, but in reality I don't think it does. As far as I can tell, it's mostly about implementing algorithms - taking the proof-of-concept that a high-level algorithm really is, and translating it into a real program, dealing along the way with a host of issues that come up. A necessary step for sure, to prevent sophisticated algorithms from becoming disconnected from programs, but it's only about linking steps 2 and 3 in the above list, not about the context given by steps 1 and 4.
The thing that I find really challenging is step 4. Step 1 is easier. We do, once in a while, read papers from other fields and try to make up a theoretical model to capture what seems to us to be the interesting features. But then we go off and merrily design and analyze algorithms for that model, without ever going to step 4 (actually, we usually don't even get to step 3). All it gives us is a good motivational story for the introduction, and a way to feel good, reassuring ourselves that what we do potentially matters and is hypothetically relevant to something. But that hypothesis is never put to the test.
If that were just my personal research modus operandi, it would not be a big deal: I concentrate on the section of the work where my skills lie, I do my part there and others will do their part as well, and altogether the field can advance smoothly. But when it is the entire field that seems to share my lack of concern for demonstrated practical impact, then I think that we might have a problem.