## Tuesday, February 28, 2012

My adult life so far has been split almost exactly 50-50 between France and the US. Now things are changing once again: I am shortly going to move back to France. While I plan to still spend a couple of months per year at Brown, for the foreseeable future I will be based primarily at the École Normale Supérieure, in the Latin Quarter of Paris.

Should anyone wish to come to Paris for a work visit, it would be a pleasure to welcome foreign visitors, collaborators, and interns.

Blogging will be light for a while, as I am occupied with the various tedious tasks related to my move.

## Wednesday, February 15, 2012

### The year of the TSP

When I was at ITCS last month, I talked to Hyung-Chan An, who told me about his recent work with David Shmoys and Bobby Kleinberg (to appear in STOC 2012).

They study the s-t path variant of TSP, that asks for the shortest path going from a pre-specified start vertex s to a pre-specified end vertex t while visiting all other vertices along the way. What is the motivation for this problem? Well, imagine that you are a postman delivering letters. You pick up letters at the post office, deliver them, and when you're done, you go home: you have just built an s-t path, with the post office as s and your home as t. Or, imagine that your building is on fire, it is early in the morning, and the fire alarm is not working. You want to leave your office, visit all offices in the building as quickly as possible to tell anyone who might be there already that they must leave, and then you want to get out. No way you want to go back to your starting point in the end!
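To make the problem concrete, here is a tiny brute-force baseline (the function name and the instance are mine, not from the paper): it simply tries all orders of the intermediate vertices.

```python
from itertools import permutations

def shortest_st_path(dist, s, t):
    """Cost of the cheapest path from s to t visiting every vertex once.

    dist is a symmetric matrix of metric distances; brute force, so this
    is only usable on tiny instances."""
    n = len(dist)
    middle = [v for v in range(n) if v not in (s, t)]
    best = float("inf")
    for order in permutations(middle):
        tour = [s, *order, t]
        best = min(best, sum(dist[a][b] for a, b in zip(tour, tour[1:])))
    return best

# Four points on a line, unit spacing: the best s-t path sweeps left to right.
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
print(shortest_st_path(dist, 0, 3))  # 3
```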

Christofides' algorithm, adapted to that variant, is not a 3/2 but a 5/3 approximation in the metric setting. An, Kleinberg and Shmoys have a very simple algorithm that improves the 5/3=1.66… to 1.61… The funny thing is that I did not know about the 5/3 approximation until An told me that they had improved upon that twenty-year-old bound! For that reason I am hesitant to call it a breakthrough, and also because it comes along with several recent papers making progress on special cases of TSP. As a friend recently suggested, this is shaping up to be "the year of the TSP"! But I looked at their result in more detail, and it is a clean, more or less self-contained, clever application of modern techniques. That's what makes it appealing.

What's the algorithm?

Nothing special, really:

(1). Solve the natural variant of the Held-Karp linear programming lower bound for TSP

(2). Express the solution as a convex combination of spanning trees (that's known to be possible by polyhedral combinatorics)

(3). Pick a random such tree T* according to the distribution given in (2), and

(4). use it in an execution of the natural variant of the Christofides heuristic.

The fact that it is "nothing special", in my view, is a good thing. It means that it can be implemented at little additional cost. (The difficulty is in the analysis.)

What's the "natural variant" of the Held-Karp constraints? Some constraints are different: every s-t cut must be crossed by a minimum of just one edge (instead of a minimum of two edges, in the usual version).

What's the "natural variant" of the Christofides heuristic? Instead of finding a perfect matching of the odd degree vertices of T*, it finds a perfect matching (T-join) of the "off-degree" vertices of T*, that is, the vertices whose degree has the wrong parity: the difference with the usual setup is that now, vertex s has the wrong parity if its degree in T* is even, and similarly vertex t has the wrong parity if its degree in T* is even.
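As a sanity check on that parity rule, here is a small sketch (the function name and the instance are mine) that computes the off-degree vertices of a given tree:

```python
from collections import Counter

def off_degree_vertices(tree_edges, s, t, vertices):
    """Vertices whose degree in the tree has the wrong parity:
    odd degree for internal vertices, even degree for s and t."""
    deg = Counter()
    for u, v in tree_edges:
        deg[u] += 1
        deg[v] += 1
    off = set()
    for v in vertices:
        if v in (s, t):
            wrong = deg[v] % 2 == 0  # the endpoints want odd degree
        else:
            wrong = deg[v] % 2 == 1  # internal vertices want even degree
        if wrong:
            off.add(v)
    return off

# A star centered at 1, with s=0 and t=3: the center has odd degree and the
# internal leaf 2 has odd degree, so both are off; s and t are fine.
print(off_degree_vertices([(1, 0), (1, 2), (1, 3)], 0, 3, range(4)))  # {1, 2}
```

Note that the off-degree set always has even size (flipping the parity condition at the two endpoints preserves the evenness of the number of odd-degree vertices), so a T-join on it always exists; and an actual s-t Hamiltonian path has an empty off-degree set.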

So, that's the algorithm.

What's the analysis?

First, they present an elegant proof of the 5/3 approximation result, then modify it. Because of that, we first have to understand why the algorithm is a 5/3 approximation. The tree T* from step 3 has expected cost bounded by OPT, so the sole focus is on bounding the cost of the T-join. That is achieved by bounding the value of a particular fractional T-join. What's a fractional T-join? It's a feasible solution to the following LP:

* for every edge e, variable y(e) must be between 0 and 1

* for every proper cut (S, V-S), if S contains an odd number of "off-degree" vertices, then at least one edge must cross the cut; in other words, fractionally, the sum of y(e) over all edges e crossing the cut must be at least 1.
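On a tiny instance, those two constraint families can be checked by brute-force enumeration of the proper cuts. The sketch below (names are mine; it is exponential in the number of vertices) is just the LP's feasibility test spelled out:

```python
from itertools import combinations

def is_fractional_t_join(y, off, vertices):
    """y maps each edge (u, v) to a value; off is the set of off-degree
    vertices. Checks feasibility by enumerating all proper cuts, so this
    is exponential and only for tiny instances."""
    if any(not 0 <= val <= 1 for val in y.values()):
        return False
    vs = list(vertices)
    for r in range(1, len(vs)):              # all proper cuts (S, V-S)
        for S in map(set, combinations(vs, r)):
            if len(S & off) % 2 == 1:        # constraint applies to odd cuts
                crossing = sum(val for (u, v), val in y.items()
                               if (u in S) != (v in S))
                if crossing < 1:
                    return False
    return True

# Triangle on {0, 1, 2} with off = {0, 1}: weight 1/2 on every edge is
# feasible (every odd cut is crossed by two half-edges); weight 0.4 is not.
half = {(0, 1): 0.5, (1, 2): 0.5, (0, 2): 0.5}
print(is_fractional_t_join(half, {0, 1}, range(3)))  # True
```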

As it happens, that polytope has integer vertices, so the optimal (integer) T-join is also an optimal fractional T-join, and so the cost of the T-join built by the algorithm in step (4) can be bounded by the cost of any fractional T-join. The proof exploits this freedom to carefully choose a particular fractional T-join whose expected cost is less than 2/3 OPT. How can it prove that the cost is less than 2/3 OPT? By proving that it's less than 2/3 times the Held-Karp lower bound. So the proof has now neatly moved to the realm of polyhedral analysis, and the rest is a matter of manipulating vectors y=(y(e)).

The 5/3 result is proved by choosing, for our fractional T-join, a scaled version of an average of two solutions that each achieve the Held-Karp lower bound, namely: the fractional solution x* obtained in step (1), and the tree T* obtained in step (3). Indeed, x* satisfies all the fractional T-join constraints, but in addition, observe that there are some constraints where it has slack: for some cuts, the T-join constraint requires a weight of 1 across the cut, but x* guarantees a weight of 2 across the cut. The tree T* also satisfies all the fractional T-join constraints, and it too has slack in some constraints: for some cuts, the T-join constraint requires a weight of 1 across the cut, but T* guarantees a weight of 2 across the cut. Really? Well, yes: if S separates s from t and contains an odd number of "off-degree" vertices, then a straightforward counting argument shows that at least 2 edges of the tree must come out of S. Now, the funny thing is that T* has slack precisely for every constraint where x* does *not* have slack! So if we take the average of x* and of T*, each constraint of the fractional T-join has at least (1+2)/2=3/2 edges crossing, and so, if we scale by 2/3, it is still feasible! That shows that y=(2/3)(x*/2+T*/2) is a feasible fractional T-join. To recap, its cost is at most 2/3 times the average cost of x* and of T*, which is at most 2/3 OPT. The T-join built in step (4) costs less than that, and that is how they get a 1+2/3=5/3 approximation.

How do they improve on this analysis? By fiddling with the factors cleverly. To get a fractional T-join with cost slightly better than 2/3 OPT, instead of x*/3+T*/3 they pick y=(1/3-2c)x* + (1/3+c)T* + (a correction vector for the fractional T-join constraints that are now violated), and hope that the cost of the correction vector is less than c OPT. Realizing that hope requires a careful analysis of the violated constraints. They observe that

* even if constraints are violated, they are *not violated by much*: the sum across the cut is still at least 1-O(c); the deficiency is O(c);

* given that T* is drawn at random in step (3), those violated cuts are *rare*: an elementary argument using Markov's inequality shows that their probability is O(c).

So the correction vector z will simply have z(e) = deficiency times Pr(e in a cut where the constraint is violated) times x*(e) = O(c^2) x*(e).

Then the cost of the fractional T-join y is (2/3 - c + O(c^2)) OPT, which is less than (2/3) OPT for a small enough constant c.
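The last step is pure arithmetic: writing the constant hidden in the O(c^2) term as some K (the value below is a stand-in, not from the paper), the bound 2/3 - c + Kc^2 dips below 2/3 for any 0 < c < 1/K.

```python
# K stands in for the constant hidden in the O(c^2) term (hypothetical value).
K = 10.0
c = 0.05                       # any 0 < c < 1/K works
bound = 2/3 - c + K * c**2     # cost of the fractional T-join, in units of OPT
print(bound < 2/3)             # True: about 0.6417 < 0.6667
```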

There's an additional technicality: the above reasoning breaks when the violated constraints correspond to non-disjoint cuts, but they show that the cuts can be made disjoint at the price of another small factor 1-O(c). And with that, they beat the 5/3 approximation. (There's more to the analysis to refine it and get further improvements, but that's beyond my attention span...)

What's the take-home message from that proof? Perhaps it is a hymn to the power and flexibility of LP feasibility. The technique seems general. They pushed it through for the s-t path TSP problem, but why not use it for other problems, whenever there's an integer polytope behind the scenes?

## Tuesday, February 14, 2012

### 2=1, 1=2

Two people in love: when in a romantic mood, one is tempted to think of them as two persons united by love into a single being. 2=1: a small miracle!

A woman giving birth: for those of us who don't have enough imagination to think of a child in the womb as a completely real person, what we see is that, one moment there was one person, and the next moment there were two. 1=2: another small miracle!

## Saturday, February 11, 2012

### The Murphy's law approach to grading

Say that the assignment that you are grading is ambiguous, so that you are not sure what the students meant. One interpretation leads to a correct solution, the other one, to an incorrect solution. What to do then?

The Murphy's law approach: if something can go wrong, then it will go wrong. The student kept the writing ambiguous because he or she was not sure which interpretation was correct; so, it should be counted as wrong.

But doesn't that contradict the mustard seed approach? Which approach will be taken then depends on ... on the grader's philosophy? on the grader's mood? on what he or she had for lunch? on some random coin flip, but decided consistently across all graders?

## Friday, February 10, 2012

### The mustard seed approach to grading

Today I learned a new thing about grading.

What do you do if a student presents an algorithm in two ways, at a high-level and with pseudo-code, and if the two versions are different, one correct and one incorrect? Should one count the answer as correct or incorrect?

The "mustard seed" response: there's some good in there, so focus on what's good and count it as correct.

## Monday, February 6, 2012

### The fate of retired professors in Greece

rue89.com has an article about the impact of the Euro crisis on the daily life of people in Greece. A student interviewed 10 people and wrote a paragraph about each of them. One is a recently retired university professor.

Nikos, 65, retired university professor

Having retired this past August, Nikos received no pension at the beginning of January this year. He says he may be able to receive a pension in six months - but: "From month to month, we do not spend that much because we do not know if we will receive our pension next month. We, who belonged to the wealthy middle class - my wife is a retired professor - we are becoming poor. Before, we left on vacation every Easter, every Christmas, to go to Norway or Egypt. Today that is impossible; we buy only what is needed: no more clothes, no more trips. Only food."

In the midst of a great depression, it may seem ironic to sympathize with people whose biggest problem is that they can no longer travel for vacation, but this is a retired professor: in other words, one of our colleagues, so it hits close to home. Barring a war, it is hard to imagine that a day might come when financial problems would cause me to go hungry, but who knows what the future has in store?

## Friday, February 3, 2012

### Hiding clues in lectures

I find a singular pleasure in occasionally hiding clues in lectures, when I have time to think about it in advance. Using 5773 in a numerical example at Rosh Hashanah, quoting some book without attribution, making a pictorial reference to my favorite children's picture book, etc. Then, if a student catches on, it is as if we were sharing a secret in plain daylight. We know the hidden meaning the others are unaware of. We may exchange a glance, and then we know: I know that he or she understood me, and they know that it wasn't chance, and that I know that they know. We understand one another.

What is the point?

There is no point as far as I can tell. It's just nonsense. Most of the time it has no pedagogical value. But it is delightful.

## Thursday, February 2, 2012

### Courses on video: all good.

This semester for the first time I'm having my course videotaped.

First consequence: I hear myself on tape. What a shock! What a strong accent I have! I never knew it! Can I fix it?

Second consequence: two students already told me that they had watched the first lecture on video; this is shopping period and it's a convenient way for them to test several courses being taught at the same time.

Third consequence: some of my TAs are going to watch parts of lectures, for the more challenging parts of the course or when they need a refresher (for example: the log* n analysis of union-find at the end of lecture this past Tuesday).

So far so good.

Another developing trend: some students had difficulty with a homework exercise using recurrences (which I did not cover in class). Instead of pointing them to a textbook, or to lecture notes on the web, the TAs pointed them to a video of a lecture on the subject at MIT, from a number of years ago: so, these videos are becoming resources for learning, competing with textbooks.

All good developments!

## Wednesday, February 1, 2012

### Writing searchable emails

A colleague of mine saw my post on how to write emails and commented that I missed the main feature. His rule is:

Emails should be phrased with appropriate keywords so as to be easily searchable.

In other words, instead of expecting your application to adapt to your needs when searching your mailboxes for an email that you wish to recover, you are the one who ought to adapt to the application and provide the words that make search easy.

My email typically has one intended recipient. Should I also keep in mind, as I am writing it, the search program, and imagine that it is looking over my shoulder as I type, giving me advice on my choice of words? Doesn't this strike anyone as a bit twisted?
