

Tomasz Malisiewicz
02-15-2006
09:58 PM ET (US)

I just read a very interesting paper by a fellow CMU roboticist, Dave Tolliver, titled "Multilevel Spectral Partitioning for Efficient Image Segmentation and Tracking."
http://www.cs.cmu.edu/~rcollins/Papers/tolliverwacv05.pdf
In this paper, the lattice geometry of images is exploited to define a set of coarsened graph-partitioning problems. A coarse solution is propagated to increasingly finer resolutions and refined using subspace iterations. This hierarchical approach gives a 10x to 100x speedup over other Ncut methods that rely on clever sampling schemes.
If you're into spectral-based image segmentation, then check it out.
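To make the refinement step concrete: starting from coarse eigenvectors prolonged to the finer graph, subspace iteration just alternates a matrix multiply with re-orthonormalization. A minimal sketch, assuming a dense affinity matrix (the function name and this plain iteration scheme are my own illustration, not Tolliver's actual method):

```python
# Illustrative subspace iteration for refining prolonged eigenvectors.
import numpy as np

def refine_eigvecs(W, V0, iters=50):
    """Refine an initial estimate V0 (e.g. upsampled coarse eigenvectors)
    toward the leading eigenvectors of D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = W * dinv[:, None] * dinv[None, :]   # normalized affinity
    V, _ = np.linalg.qr(V0)                 # orthonormal starting subspace
    for _ in range(iters):
        V, _ = np.linalg.qr(L @ V)          # multiply, then re-orthonormalize
    return V
```

Convergence is geometric in the eigenvalue gap, so a good coarse initialization needs only a few iterations at each resolution.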

Tomasz Malisiewicz
02-15-2006
09:39 AM ET (US)

An algorithm for Spectral Clustering that I really like is presented in Andrew Ng's "On Spectral Clustering: Analysis and an Algorithm." (http://ai.stanford.edu/~ang/papers/nips01spectral.pdf)
The basic idea is to form a matrix of the top K eigenvectors, normalize the rows to unit norm, and then run k-means on the rows (points on the K-dimensional sphere) to get k clusters.
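A minimal sketch of that procedure (the function name, the Gaussian affinity bandwidth `sigma`, and the simple farthest-point-initialized Lloyd's k-means below are my own illustrative choices, not the paper's exact recipe):

```python
# Sketch of the Ng-Jordan-Weiss spectral clustering recipe.
import numpy as np

def spectral_cluster(points, k, sigma=1.0, iters=100):
    n = len(points)
    # 1. Gaussian affinity matrix with zeroed diagonal.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    # 2. Normalized affinity L = D^{-1/2} A D^{-1/2}.
    d = A.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = A * dinv[:, None] * dinv[None, :]
    # 3. Matrix of the top-K eigenvectors, rows renormalized to unit norm.
    _, V = np.linalg.eigh(L)                # eigenvalues ascending
    X = V[:, -k:]
    Y = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    # 4. k-means (Lloyd's iterations, farthest-point init) on the rows of Y,
    #    which lie on the K-dimensional unit sphere.
    centers = [Y[0]]
    for _ in range(1, k):
        dist = np.min([((Y - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(Y[dist.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Y[labels == j].mean(axis=0)
    return labels
```

For well-separated groups the renormalized rows collapse to k nearly orthogonal directions, which is why the final k-means step is easy.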

David Lee
02-15-2006
12:03 AM ET (US)

"seed the higher resolution image segmentation" What does this exactly mean? One thing I could think of is to process high resolution just along the edges obtained from the low resolution image for better edge localization. Is this what it means?

Mohit Gupta
02-14-2006
10:46 PM ET (US)

"....Treating every pixel as a node in a graph sounded scary at first..."
Li et al. handle this issue in "Lazy Snapping," SIGGRAPH 2004, by pre-segmenting the image (unsupervised) into small color-coherent regions and constructing the graph on the resulting regions instead of on pixels. This simple idea reduces the graph complexity by orders of magnitude and lets their segmentation run at nearly interactive rates (about 0.2 seconds per image).
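A rough sketch of that region-graph idea. Lazy Snapping actually pre-segments with watershed; here I substitute color-quantized connected components as the unsupervised pre-segmentation, and `region_graph` and its parameters are hypothetical names for illustration:

```python
# Build a small region-adjacency graph instead of a pixel graph.
import numpy as np
from collections import deque

def region_graph(img, levels=8):
    """Quantize colors, flood-fill color-coherent connected regions,
    and return per-pixel region labels plus weighted region edges.
    Assumes img values lie in [0, 1]."""
    h, w = img.shape[:2]
    q = np.floor(img * levels).clip(0, levels - 1).astype(int)
    key = q if q.ndim == 2 else (q * (levels ** np.arange(q.shape[2]))).sum(-1)
    label = -np.ones((h, w), dtype=int)
    means, nreg = [], 0
    for i in range(h):
        for j in range(w):
            if label[i, j] >= 0:
                continue
            # BFS flood fill over same-quantized-color neighbors.
            dq, pix = deque([(i, j)]), []
            label[i, j] = nreg
            while dq:
                y, x = dq.popleft()
                pix.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and label[yy, xx] < 0 \
                            and key[yy, xx] == key[i, j]:
                        label[yy, xx] = nreg
                        dq.append((yy, xx))
            means.append(img[tuple(np.array(pix).T)].mean(0))
            nreg += 1
    # Region-adjacency edges weighted by mean-color similarity.
    edges = {}
    for i in range(h):
        for j in range(w):
            for yy, xx in ((i + 1, j), (i, j + 1)):
                if yy < h and xx < w and label[yy, xx] != label[i, j]:
                    a, b = sorted((label[i, j], label[yy, xx]))
                    d = np.linalg.norm(np.atleast_1d(means[a]) - np.atleast_1d(means[b]))
                    edges[(a, b)] = np.exp(-d)
    return label, edges
```

Any graph-cut or spectral solver then operates on the handful of region nodes rather than on hundreds of thousands of pixels.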

Joseph Djugash
02-14-2006
09:34 PM ET (US)

"Multiresolution implementation" is where you would first perform image segmentation on a low resolution image and then use this result to seed the higher resolution image segmentation. This approach basically confines high resolution processing to a small fraction of the image, thus resulting in faster convergence.

David Lee
02-14-2006
06:15 PM ET (US)

Does anybody know what the "multiresolution implementation" mentioned in section 4.1 is?
Treating every pixel as a node in a graph sounded scary at first, but they seem to have managed it pretty well.

Pete Barnum
02-13-2006
12:57 AM ET (US)

I really like the mathematical simplicity of the work in Weiss '99, and I think it could be an interesting foundation for a practical algorithm. I don't like how they seem to imply that it works great all by itself by showing pictures of baseball players neatly segmented from their backgrounds. I'm suspicious of something so mathematically simple corresponding so closely to the qualitative human perception of an object. I don't know if this was on purpose, but I think they should have spent more time emphasizing what you can do with such segmented results. I haven't seen the results on motion, but that seems like it could be good on its own. Also, I think using this segmentation in a superpixel-like way could give some cool results for other algorithms.

Gunhee Kim
02-10-2006
11:24 AM ET (US)

As an opponent to NCut, I'm going to briefly introduce the following paper:
"Blobworld: Image segmentation using Expectation-Maximization and its application to image querying," C. Carson, S. Belongie, H. Greenspan, and J. Malik, PAMI 2002.
It's a simple approach to segmentation in feature space based on mixture models.
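The core idea — fit a mixture model to per-pixel features with EM and label each pixel by its most likely component — fits in a few lines. This toy version, assuming a single 1-D feature and plain Gaussian components (Blobworld's full model jointly uses color and texture features; the function name and initialization are my own):

```python
# Minimal EM for a k-component 1-D Gaussian mixture over pixel features.
import numpy as np

def em_gmm_segment(feats, k=2, iters=50):
    x = np.asarray(feats, dtype=float)
    # Spread-out initialization via quantiles of the data.
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under the current Gaussians.
        r = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and variances.
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return r.argmax(axis=1)   # hard segment label per pixel
```

Unlike NCut, there is no graph at all: segmentation happens entirely in feature space, with spatial coherence recovered afterward by grouping connected pixels that share a component.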

Dave Bradley
02-05-2006
05:33 PM ET (US)

Please post your thoughts on Normalized Cuts and segmentation using eigenvectors here.
