normalized cuts

Tomasz Malisiewicz
09:58 PM ET (US)
I just read a very interesting paper by a fellow CMU roboticist -- Dave Tolliver -- titled "Multilevel Spectral Partitioning for Efficient Image Segmentation and Tracking."

In this paper, the lattice geometry of images is exploited to define a set of coarsened graph-partitioning problems. A coarse solution is propagated to increasingly finer resolutions and refined using subspace iterations. This hierarchical approach yields a 10x to 100x speedup over other Ncut methods that rely on clever sampling schemes.

If you're into spectral-based image segmentation, then check it out.
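The coarsen-solve-prolong-refine loop described above can be sketched roughly as follows. This is a toy illustration on a tiny graph, not Tolliver's actual multilevel scheme; the pairing matrix `P`, the eigenvalue shift, and the refinement count are my own choices:

```python
import numpy as np

def ncut_vector(W):
    """Second generalized eigenvector of (D - W) x = lam * D x, computed
    via the equivalent symmetric problem on D^{-1/2} W D^{-1/2}."""
    s = 1.0 / np.sqrt(W.sum(1))
    _, vecs = np.linalg.eigh(W * s[:, None] * s[None, :])
    return s * vecs[:, -2]        # skip the trivial top eigenvector

def coarse_to_fine(W_fine, P, refine_steps=10):
    """Solve the Ncut eigenproblem on the coarsened graph P^T W P, prolong
    the solution to the fine graph, and refine it with a few deflated
    power iterations (a one-vector subspace iteration)."""
    x = P @ ncut_vector(P.T @ W_fine @ P)          # prolonged coarse solution
    d = W_fine.sum(1)
    s = 1.0 / np.sqrt(d)
    A = W_fine * s[:, None] * s[None, :]
    B = A + np.eye(len(d))            # shift so all eigenvalues are >= 0
    top = np.sqrt(d) / np.linalg.norm(np.sqrt(d))  # known top eigenvector of A
    y = x * np.sqrt(d)
    for _ in range(refine_steps):
        y = B @ y
        y -= (y @ top) * top          # deflate the trivial direction
        y /= np.linalg.norm(y)
    return s * y

# fine graph: two well-separated clusters of four nodes each
pts = np.array([0.0, 0.1, 0.2, 0.3, 5.0, 5.1, 5.2, 5.3])
W = np.exp(-(pts[:, None] - pts[None, :]) ** 2)
P = np.zeros((8, 4))
P[np.arange(8), np.arange(8) // 2] = 1.0   # merge consecutive node pairs
x = coarse_to_fine(W, P)                   # sign pattern splits the clusters
```

The payoff in the real algorithm is that the expensive eigensolve happens on the small coarse graph, and only cheap matrix-vector refinement touches the fine one.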
Tomasz Malisiewicz
09:39 AM ET (US)
An algorithm for Spectral Clustering that I really like is presented in Andrew Ng's "On Spectral Clustering: Analysis and an Algorithm."

The basic idea is to form a matrix whose columns are the top K eigenvectors, normalize each row to unit norm, and then run k-means on the rows (points on the K-dimensional sphere) to obtain K clusters.
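Those steps fit in a few lines of numpy. A minimal sketch (the Gaussian affinity, the plain Lloyd's k-means, and all names here are my own choices, not the paper's code):

```python
import numpy as np

def spectral_clusters(W, k, iters=50, seed=0):
    """Cluster the nodes of affinity matrix W into k groups, Ng-Jordan-Weiss
    style: normalize the affinity, take the top-k eigenvectors as columns,
    unit-normalize the rows, then k-means the rows."""
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^{-1/2} W D^{-1/2}
    _, vecs = np.linalg.eigh(L)                        # ascending eigenvalues
    X = vecs[:, -k:]                                   # top-k eigenvectors
    Y = X / np.linalg.norm(X, axis=1, keepdims=True)   # rows on the k-sphere

    # plain k-means (Lloyd's algorithm) on the rows of Y
    rng = np.random.default_rng(seed)
    centers = Y[rng.choice(len(Y), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Y[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Y[labels == j].mean(axis=0)
    return labels

# two well-separated 1-D blobs -> block-structured affinity
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
W = np.exp(-(pts[:, None] - pts[None, :]) ** 2)
labels = spectral_clusters(W, 2)
```

The row normalization is the step that distinguishes this from naive spectral embedding: it maps each block of the (near) block-diagonal affinity to a tight cluster on the sphere, which is what makes the final k-means easy.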
David Lee
12:03 AM ET (US)
"seed the higher resolution image segmentation"
What exactly does this mean?
One thing I can think of is processing the high-resolution image only along the edges obtained from the low-resolution result, for better edge localization. Is that what it means?
Mohit Gupta
10:46 PM ET (US)
"....Treating every pixel as a node in a graph sounded scary at first..."

Li et al. handle this issue in 'Lazy Snapping, SIGGRAPH 2004' by pre-segmenting the image (unsupervised) into small color-coherent regions and constructing the graph on the resulting regions instead of pixels. This simple idea reduces the graph complexity by orders of magnitude and lets their segmentation run at near-interactive rates (around 0.2 seconds per image).
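The region-graph construction can be sketched like this. A hypothetical illustration, assuming the pre-segmentation is already given as a label image; Lazy Snapping's actual affinities and solver differ:

```python
import numpy as np

def region_graph(labels, image, sigma=200.0):
    """Build a region-level affinity matrix from a pre-segmentation.

    labels: HxW integer array of region ids 0..R-1 (the over-segmentation).
    image:  HxWx3 float array of colors.
    Adjacent regions get weight exp(-||mean_i - mean_j||^2 / sigma^2);
    non-adjacent regions get 0, so the graph has R nodes instead of H*W.
    """
    R = labels.max() + 1
    means = np.zeros((R, 3))
    for r in range(R):                       # mean color per region
        means[r] = image[labels == r].mean(axis=0)

    adj = np.zeros((R, R), dtype=bool)       # 4-connected region adjacency
    a, b = labels[:, :-1], labels[:, 1:]
    adj[a[a != b], b[a != b]] = True
    a, b = labels[:-1, :], labels[1:, :]
    adj[a[a != b], b[a != b]] = True
    adj |= adj.T

    W = np.zeros((R, R))
    d2 = ((means[:, None] - means[None]) ** 2).sum(-1)
    W[adj] = np.exp(-d2[adj] / sigma**2)
    return W

# 2x3 toy image, three regions: a 6-node pixel graph becomes a 3-node graph
labels = np.array([[0, 0, 1],
                   [2, 2, 1]])
image = np.array([[[255, 0, 0], [255, 0, 0], [0, 255, 0]],
                  [[0, 0, 255], [0, 0, 255], [0, 255, 0]]], dtype=float)
W = region_graph(labels, image)
```

Since the number of regions is typically hundreds while the number of pixels is hundreds of thousands, the downstream graph cut sees the orders-of-magnitude reduction the post mentions.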
Joseph Djugash
09:34 PM ET (US)
"Multiresolution implementation" is where you first perform image segmentation on a low-resolution image and then use the result to seed the higher-resolution segmentation. This confines high-resolution processing to a small fraction of the image, resulting in faster convergence.
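A minimal sketch of that seeding step, with hypothetical helper names (a real implementation would rerun the fine-scale cut only inside the returned band):

```python
import numpy as np

def upsample_labels(coarse_labels, factor=2):
    """Nearest-neighbor upsample a coarse label map to seed the fine scale."""
    return np.repeat(np.repeat(coarse_labels, factor, axis=0), factor, axis=1)

def boundary_band(labels, width=1):
    """Mask of pixels within `width` of a label boundary -- the only pixels
    the fine-scale segmentation needs to revisit."""
    edge = np.zeros(labels.shape, dtype=bool)
    edge[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    edge[:, 1:] |= labels[:, :-1] != labels[:, 1:]
    edge[:-1, :] |= labels[:-1, :] != labels[1:, :]
    edge[1:, :] |= labels[:-1, :] != labels[1:, :]
    band = edge.copy()
    for _ in range(width):                 # dilate with a cross-shaped element
        grown = band.copy()
        grown[1:, :] |= band[:-1, :]
        grown[:-1, :] |= band[1:, :]
        grown[:, 1:] |= band[:, :-1]
        grown[:, :-1] |= band[:, 1:]
        band = grown
    return band

coarse = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
seed = upsample_labels(coarse)     # 4x8 label map seeding the fine scale
band = boundary_band(seed)         # fine-scale work is confined to this band
```

Away from the band the coarse labels are simply kept, which is where the speedup comes from.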
David Lee
06:15 PM ET (US)
Does anybody know what "multiresolution implementation" mentioned in section 4.1 is?

Treating every pixel as a node in a graph sounded scary at first, but they seem to have managed it pretty well.
Pete Barnum
12:57 AM ET (US)
I really like the mathematical simplicity of the work in Weiss '99, and I think it could be an interesting foundation for a practical algorithm. But I don't like how they seem to imply that it works great all by itself, showing pictures of baseball players neatly segmented out of their background. I'm suspicious of something so mathematically simple corresponding so closely to qualitative human perception of an object. I don't know if this was intentional, but I think they should have spent more time emphasizing what you can do with such segmentation results. I haven't seen the results on motion, but that seems like it could be good by itself. Also, using this segmentation in a super-pixel fashion could give some cool results for other algorithms.
Gunhee Kim
11:24 AM ET (US)
As a counterpoint to NCut, I'm going to briefly introduce the following paper:

"Blobworld: Image segmentation using Expectation-Maximization and its application to image querying," C. Carson, S. Belongie, H. Greenspan, and J. Malik, PAMI 2002.

It's a simple approach to segmentation in feature space based on mixture models.
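The mixture-model core of that approach can be sketched with a toy 1-D EM. This is a stand-in for illustration only (Blobworld actually runs EM on multi-dimensional color/texture/position features per pixel); the initialization and names are my own:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """EM for a 1-D Gaussian mixture: alternate soft assignments (E-step)
    with re-estimation of weights, means, and variances (M-step)."""
    mu = np.quantile(x, [(j + 0.5) / k for j in range(k)])  # spread-out init
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        p = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
            / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return r.argmax(axis=1), mu

# 1-D "color" feature with two well-separated blobs of 50 pixels each
x = np.concatenate([0.01 * np.arange(50), 5.0 + 0.01 * np.arange(50)])
labels, mu = em_gmm_1d(x)
```

Segmenting in feature space like this sidesteps the big pixel graph entirely, which is exactly why it makes a nice foil to NCut.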
Dave Bradley
05:33 PM ET (US)
Please post your thoughts on Normalized cuts and segmentation using eigenvectors here.
