

Normalized cuts and image segmentation

Joe Drish
02:11 PM ET (US)
I also liked normalized cuts and segmentation. I think it would be interesting to see this work on more difficult input data, and to see comparisons with other similar but distinct techniques.
Gyozo Gidofalvi (Victor)
02:00 PM ET (US)
I really had a difficult time following every single detail of the derivation on the third page of the Shi and Malik paper, but I really liked the idea of normalized cuts.
As the authors emphasize in the paper, one of the best properties of this measure is that minimizing it naturally leads to maximizing the normalized association within sub-groups.
The results presented were promising, but it was not intuitive why the segmentation algorithm selected certain parts (of the zebra) and neglected others that were clearly different from their environment.
Ian Fasel
01:43 PM ET (US)
I have two questions:

1) I was wondering whether other similarity measures have been used for this problem (as well as in the Pilu paper). The algorithms presented do a remarkable job with these simple similarity measures, but they are not perfect. Certainly, as argued in the paper, a higher level could combine the results of these algorithms into a single object (e.g., assembling the zebra pieces into a zebra), but is that a better approach than using a better (though more complex) similarity measure at the lower level? So the first question is: how about a very complex similarity measure, such as the distance between Gabor jets (vectors of coefficients from dot products of the image at each location with a rosette of Gabor wavelets), or, more complex still, a set of ICA-derived filters instead of Gabors?

Also, this leads me to the second question:
2) Do recognition algorithms that first segment and then recognize perform better than algorithms that don't include segmentation as an explicit step? Where else has segmentation followed by some other algorithm proven more effective than the best a non-segmentation algorithm can do?
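On the first question, the place where a richer measure would plug in is the affinity itself. Below is a minimal sketch of the kind of intensity-and-position Gaussian affinity Shi and Malik use; the feature-distance term is exactly the piece one would swap out for a Gabor-jet or ICA-filter distance. (The function name and parameter values are illustrative, not taken from the paper.)

```python
import numpy as np

def gaussian_affinity(features, positions, sigma_f=0.1, sigma_x=4.0, radius=5.0):
    """Pairwise affinities w_ij = exp(-||F_i - F_j||^2 / sigma_f^2)
    * exp(-||X_i - X_j||^2 / sigma_x^2) for pixels within `radius` of each
    other, and 0 otherwise (the form used by Shi and Malik). Replacing the
    feature distance `df` with a Gabor-jet distance is the change discussed
    above. Parameter values here are made up for illustration."""
    f = np.asarray(features, dtype=float)   # (n, k) feature vector per pixel
    x = np.asarray(positions, dtype=float)  # (n, 2) pixel coordinates
    df = np.sum((f[:, None, :] - f[None, :, :]) ** 2, axis=-1)
    dx = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    W = np.exp(-df / sigma_f**2) * np.exp(-dx / sigma_x**2)
    W[dx > radius**2] = 0.0                 # zero out distant pixel pairs
    return W
```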
Markus Herrgard
01:33 PM ET (US)
The two papers by Meila and Shi that I'm going to be presenting on Thursday give some insight into how and why the normalized cut method works by studying the problem as a random walk on a graph. These papers extend the study by Weiss and provide some interesting connections between spectral segmentation and concepts from the theory of Markov chains such as aggregation, conductance and mixing time.

The reason for _not_ using the first eigenvector is that it is always equal to 1 (unless the smallest eigenvalue = 0 is degenerate). This can be seen both in the original NCut formulation and especially through the random walk interpretation. The reason for using the other k-1 first eigenvectors is a bit more complicated, so I'd better leave it for my presentation.
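That claim is easy to check numerically. The sketch below (a small made-up random graph, purely illustrative) solves the generalized problem (D - W) y = lambda D y via the symmetrized form and confirms the smallest eigenvalue is 0 with a constant eigenvector:

```python
import numpy as np

# Toy symmetric affinity matrix (illustrative, not from an image)
rng = np.random.default_rng(0)
W = rng.random((6, 6))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

d = W.sum(axis=1)                        # node degrees
D_isqrt = np.diag(1.0 / np.sqrt(d))

# (D - W) y = lambda D y, symmetrized by the substitution z = D^{1/2} y
L_sym = D_isqrt @ (np.diag(d) - W) @ D_isqrt
vals, vecs = np.linalg.eigh(L_sym)       # eigenvalues in ascending order
y0 = D_isqrt @ vecs[:, 0]                # smallest generalized eigenvector

# vals[0] is (numerically) zero and y0 has identical components,
# so the first eigenvector carries no partition information.
```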

The generalized eigenvector used for segmentation will have one component for each pixel of the image. Figures 6-9 all show particular eigenvectors plotted as the value of the particular component at the corresponding position of the pixel in the image. The cross-section is just a particular subset of components of the eigenvector obtained e.g. by slicing through the image in the x direction at a particular value of y (the value of y is not mentioned anywhere). The image is like a map where lighter shade indicates high elevation and darker shade low elevation and the cross-section is just an elevation profile along a particular line in the map.
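In code, the map/elevation-profile analogy above is just a reshape and a row slice (the image dimensions and the choice of row here are made up):

```python
import numpy as np

h, w = 4, 5                               # made-up image dimensions
eigvec = np.arange(h * w, dtype=float)    # stand-in for a real eigenvector,
                                          # one component per pixel
eig_image = eigvec.reshape(h, w)          # the plotted-as-image view (Figs. 6-9)
cross_section = eig_image[2, :]           # elevation profile along x at row y = 2
```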
Edited 10-09-2001 01:38 PM
Junwen WU
12:25 PM ET (US)
Question: What is the meaning of "cross-section"? In Weiss's paper, p. 7, Fig. 7c, Fig. 9b, and Fig. 9d mention it, but I don't know what kind of indicator it is.
andrew cosand
06:05 AM ET (US)
One question I had reading the Shi and Malik paper was: Is there an intuitive explanation for why the _second_ smallest eigenvector is the one which segments the graph? It doesn't seem quite logical to me for it to work that way.
Dave Kauchak
03:00 AM ET (US)
I thought it was great that we read these two papers at the same time. One of the things that I didn't think the Shi and Malik paper did a very good job of was explaining why and how the method worked.

The Weiss paper was a start, but I still think there are some more interesting things that could be investigated. Weiss showed why the algorithms tend to work well and how the various eigenvector problems are similar. What I would have liked to see better explained in the papers is why the method works the way it does. Why do those specific portions of the zebra get partitioned instead of others?

I think an interesting study would be to further analyze the algorithm(s), as the Weiss paper did, with simplistic examples, so that exactly what is happening can be seen clearly.
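A sketch of one such simplistic example (the graph below is made up): two tight three-node clusters joined by a single weak edge, where thresholding the second smallest generalized eigenvector at zero recovers the two clusters.

```python
import numpy as np

# Two triangles of unit-weight edges, bridged by one weak (0.01) edge.
W = np.array([
    [0,    1, 1, 0.01, 0, 0],
    [1,    0, 1, 0,    0, 0],
    [1,    1, 0, 0,    0, 0],
    [0.01, 0, 0, 0,    1, 1],
    [0,    0, 0, 1,    0, 1],
    [0,    0, 0, 1,    1, 0],
], dtype=float)

d = W.sum(axis=1)
D_isqrt = np.diag(1.0 / np.sqrt(d))
L_sym = D_isqrt @ (np.diag(d) - W) @ D_isqrt
vals, vecs = np.linalg.eigh(L_sym)
y1 = D_isqrt @ vecs[:, 1]                 # second smallest generalized eigenvector
labels = (y1 > 0).astype(int)             # threshold at zero, as in the paper
# labels groups {0, 1, 2} against {3, 4, 5}: the weak bridge is the cut.
```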
