
TOPIC:

Learning Segmentation with Random Walks

4
sameer agarwal
10-11-2001
02:04 PM ET (US)
To answer Hsin-Hao's second question.

The original formulation of the segmentation problem is discrete in the sense that the indicator vector, which tells us which of the two segments a pixel belongs to, can only take two values. In the case of normalized cut these values were 1 and -b (where b is a particular constant related to the volume of the graph).

By allowing the vector to be continuous, we let these component values range over the entire set of reals, i.e. they can take any real value.

The key to understanding how spectral methods work in this case lies in the following three observations:

1. If the indicator vector is indeed an eigenvector, then the value of the normalized cut is equal to the value of the associated eigenvalue.

2. The constraint yD1 = 0 is an orthogonality constraint with respect to the first eigenvector of that matrix.

3. The Rayleigh quotient theorem, which relates the minimization of the Rayleigh quotient (or the Ncut expression) to the eigenvectors.
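The three observations above can be checked numerically. Below is a minimal sketch (my own toy example, not the authors' code) of the relaxed normalized-cut computation: build a small affinity matrix for two weakly connected cliques, solve the generalized eigenproblem (D - W) y = lambda D y, and threshold the second eigenvector.

```python
import numpy as np
from scipy.linalg import eigh

# Toy affinity matrix W (an illustrative assumption): two 3-node
# cliques joined by a single weak edge of weight 0.1.
W = np.array([
    [0.0, 1.0, 1.0, 0.1, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 1.0, 0.0],
])

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # graph Laplacian

# Relaxed Ncut: generalized eigenproblem (D - W) y = lambda D y.
# The smallest eigenvalue is 0 with a constant eigenvector; the
# constraint yD1 = 0 holds automatically for the second one.
vals, vecs = eigh(L, D)
y = vecs[:, 1]               # second smallest generalized eigenvector

# Threshold the continuous vector to recover a discrete segmentation:
# nodes 0-2 end up in one segment and nodes 3-5 in the other.
labels = (y > 0).astype(int)
print(labels)
```

The sign pattern of the second eigenvector separates the two cliques, which is the discrete-to-continuous relaxation Sameer describes.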

hope this helps
sameer
3
Wu Junwen
10-11-2001
02:53 AM ET (US)
I think how to choose the similarity matrix S is a key problem, and choosing S involves two aspects. First, how do we choose the model for S? Second, once a good model for S has been chosen (as said in "A Random Walks View of Spectral Segmentation", some positive symmetric matrix), how do we decide the parameters of that model? In "Learning Segmentation by Random Walks", the authors show a training method for deciding S's parameters in the supervised case, but what about the unsupervised case? Intuitively, S should have a great influence on all the following steps, so I want to know how it is dealt with in real applications.
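To make the parameter question concrete, here is a sketch of one common (assumed here, not specified by the papers) model for S: a Gaussian kernel on pixel feature differences, where the scale sigma is exactly the kind of free parameter that must be set by hand or learned.

```python
import numpy as np

def gaussian_similarity(features, sigma):
    """S_ij = exp(-||f_i - f_j||^2 / sigma^2): symmetric and positive."""
    f = np.asarray(features, dtype=float)
    d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma ** 2)

# Example: 1-D intensity features for five "pixels" forming two groups.
S = gaussian_similarity([[0.1], [0.15], [0.2], [0.8], [0.85]], sigma=0.2)
print(np.round(S, 2))
# With a small sigma, similarity concentrates within the two intensity
# groups; changing sigma changes S drastically, which is the concern above.
```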

To Hsin-Hao:
I can't agree with your idea that there is no difference between segmentation and clustering. Clustering is just one technique for dealing with the segmentation problem. Some segmentation techniques, for example thresholding and masking, edge/boundary detection, and region growing, have nothing to do with clustering.
Edited 10-11-2001 02:54 AM
2
Hsin-Hao Yu
10-10-2001
11:46 PM ET (US)
Here are my uninsightful contributions. I guess you can easily tell from my questions that I don't understand this paper.

1. What's the difference between segmentation and clustering? I thought segmentation is just clustering with an affinity metric that is related to image properties. Is this correct, or are there more profound differences? It seems that the algorithms we talked about can be applied directly to non-image data (such as a linguistic corpus).

2. I don't really understand how spectral methods can be understood as continuous approximations of discrete graph problems. What does "continuous" mean in this sense?

3. I think Proposition 2 is neat in that it gives a more complete description of the eigenvectors. However, the idea of an aggregated Markov chain is quite abstract to me, and I don't have any intuition about what it means in terms of the image. It seems to say that "the eigenvectors are piecewise constant when the image can be segmented into k partitions (e.g. k Markov processes)", but isn't that circular?

4. I don't understand how the KL divergence comes into play in the algorithm (probably because I don't know much about Markov chains). It seems to suggest that the whole algorithm can be interpreted in an information-theoretic framework.

5. Not related to this paper: what are the useful features if we want to segment 3D images (e.g. MRI)?
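On question 3, a quick numerical check (my own toy example, not from the paper) may build some intuition for the piecewise-constant claim: when the transition matrix P = D^-1 W of the random walk is nearly block structured, its leading eigenvectors are nearly constant within each block.

```python
import numpy as np

# Two blocks of mutually similar "pixels" with weak background similarity
# (all numbers here are illustrative assumptions).
W = np.full((6, 6), 0.01)
W[:3, :3] = 1.0              # block 1: pixels 0-2
W[3:, 3:] = 1.0              # block 2: pixels 3-5

# Row-stochastic transition matrix of the random walk.
P = W / W.sum(axis=1, keepdims=True)

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
top2 = vecs[:, order[:2]].real

# The leading eigenvalue is 1; the top two eigenvectors are piecewise
# constant over the two blocks, so thresholding recovers the segments.
print(np.round(top2, 3))
```

This is only a sanity check on an idealized block matrix, not a proof, so it doesn't resolve the circularity worry; it just shows what "piecewise constant over the segments" looks like concretely.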
1
Dave Kauchak
10-10-2001
07:54 PM ET (US)
First, I'd like to say that I thought it was very interesting that by posing the segmentation problem as a Markov random walk problem they were able to formulate a framework where they could perform some sort of supervised learning. I would have liked to see a bit more on the effectiveness of this learning and what sort of segmentation "features" could be found.

Another thing that has become apparent to me from the last few papers is that it would be very nice to develop some sort of standardized test set for examining the performance of image segmentation techniques. Does anyone know of such a set? I partially understand why one doesn't exist, given that segmentation quality may be a subjective metric: given an image, there may not be one right way to segment it. I think the papers are beginning to do a good job of showing theoretically why these methods work, but an experimental analysis would give insight into how these methods might work in a practical application.
