Pete Barnum
12:06 AM ET (US)
Yeah, at first I thought the results looked decent, but I noticed that they only discuss some of them. When you look at the pictures, it looks like the model does well on challenging objects like trees and motorbikes. But by the time you get to the confusion matrix, those have disappeared, without much explanation. Still, I agree that the general idea of the model is well thought out.
David Thompson
10:04 PM ET (US)
True... and inference in a generative graphical model seems like a really elegant way to fuse bottom-up and top-down segmentation. This also helps us abstract away from the particular approximation mechanism and instead focus on which model parameterizations best balance learnability and performance.
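As a rough illustration (my own sketch, not the authors' code), the fusion described above boils down to Bayes' rule applied per pixel: a top-down class shape prior multiplied by bottom-up color likelihoods. All names here are illustrative:

```python
# Illustrative sketch (not from the LOCUS paper): fusing a bottom-up
# per-pixel color likelihood with a top-down class shape prior via
# Bayes' rule, the kind of combination inference in a generative
# segmentation model performs.
import numpy as np

def posterior_foreground(color_lik_fg, color_lik_bg, shape_prior):
    """All inputs are (H, W) arrays.
    color_lik_fg / color_lik_bg: p(color | foreground) and p(color | background).
    shape_prior: p(foreground) per pixel, from the top-down class shape model.
    Returns per-pixel posterior p(foreground | color)."""
    num = color_lik_fg * shape_prior                 # prior x evidence
    den = num + color_lik_bg * (1.0 - shape_prior)   # normalizing constant
    return num / den
```

With a flat prior of 0.5 the posterior reduces to the normalized likelihood ratio, so a stronger shape prior pulls ambiguous pixels toward the class-typical mask.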
Edited 04-09-2006 10:07 PM
Tomasz Malisiewicz
09:27 PM ET (US)
I agree with Dave. The authors assume that object shape is consistent, the variability of color/texture within a single instance of an object is limited, and that objects are moderately large (about 15-30% of the image). Even though the objects of interest aren't in the exact center of the image, foreground/background separation isn't very difficult. Other limitations are: the authors flip asymmetric objects to face a consistent direction and images contain only one object of interest.

On page 7 of the paper the authors report the 88.6% segmentation accuracy for LOCUS with no class model applied to horses (they simply learn a mask and color model for each image so as to minimize the visual entropy within each region of the scene). I think that if they want to show the strengths of an unsupervised learning technique, they need to analyze a more difficult data set where segmentation accuracy without using class information is low (around 50%).
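For concreteness, here is a minimal sketch (entirely hypothetical, not the paper's implementation) of what that per-image, no-class-model baseline amounts to: fit two color models to a single image and assign each pixel to the one that explains it best, EM-style, so each region's colors stay as predictable as possible:

```python
# Hypothetical sketch of a per-image two-region segmenter: alternate
# between assigning pixels to the nearest of two color models and
# re-estimating each model's mean color (hard-assignment EM / k-means).
# All names are illustrative; this is not the LOCUS code.
import numpy as np

def segment_two_regions(pixels, n_iters=20, seed=0):
    """pixels: (N, 3) float array of RGB values in [0, 1].
    Returns an (N,) array of 0/1 region labels."""
    rng = np.random.default_rng(seed)
    # Initialize the two color means at randomly chosen pixels.
    means = pixels[rng.choice(len(pixels), 2, replace=False)]
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(n_iters):
        # Assignment step: each pixel goes to its nearest color mean.
        d = ((pixels[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Update step: re-estimate each region's mean color.
        for k in range(2):
            if (labels == k).any():
                means[k] = pixels[labels == k].mean(axis=0)
    return labels
```

On horses against grass such a color-only split already does well, which is exactly why the 88.6% baseline makes the dataset look too easy for showcasing the class model.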

On a more positive note, this paper does a good job of demonstrating the recent 'being Bayesian about segmentation' trend. The authors have a generative model for images of an object class, and by being Bayesian they segment images automatically using "all images together."

In summary, I'm not impressed by the results; however, I like the Bayesian approach of LOCUS.
David Thompson
07:16 PM ET (US)
I enjoyed this paper. However, the unsupervised categorization would have been more compelling for me if they'd used a dataset other than Caltech 101. Why? Caltech includes exemplar object types posed against neutral backgrounds. This encourages a strong prior on those objects' edges, essentially solving the figure-ground problem for the whole class of objects. By posing the object against a neutral background, a photographer is "segmenting" the foreground for the learning algorithm; isn't there still a pseudo-labeling process going on here? The same could be said of the objects' common sizes and positions.

My point, I guess, is that they're actually solving a much more restricted learning problem than their super-flexible, zillion-parameter generative model might lead us to believe.
Edited 04-09-2006 07:25 PM
Dave Bradley
08:40 AM ET (US)
Please post your thoughts on LOCUS here.
