

Using Multiple Segmentations to Discover Objects and their Extent in Image ...

04:43 PM ET (US)
If someone could briefly explain pLSA and LDA in simpler terms, I'd appreciate it.
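For what it's worth, here is my rough understanding of the pLSA EM updates as a toy numpy sketch. Everything here (the toy counts, variable names, number of topics) is illustrative and not from the paper; the idea is just that "documents" are images, "words" are visual words, and EM alternates between estimating topic responsibilities and re-estimating the topic distributions:

```python
import numpy as np

# Toy pLSA via EM on a tiny document-word count matrix.
# Rows = "documents" (images), columns = "words" (visual words).
# The data and dimensions are made up purely for illustration.
rng = np.random.default_rng(0)
n_docs, n_words, n_topics = 6, 8, 2
counts = rng.integers(1, 5, size=(n_docs, n_words)).astype(float)

# Random initialization of P(w|z) and P(z|d), normalized to distributions.
p_w_z = rng.random((n_topics, n_words))
p_w_z /= p_w_z.sum(axis=1, keepdims=True)
p_z_d = rng.random((n_docs, n_topics))
p_z_d /= p_z_d.sum(axis=1, keepdims=True)

for _ in range(50):
    # E-step: responsibilities P(z|d,w) proportional to P(z|d) * P(w|z)
    joint = p_z_d[:, :, None] * p_w_z[None, :, :]   # shape (d, z, w)
    p_z_dw = joint / joint.sum(axis=1, keepdims=True)
    # M-step: re-estimate both distributions from expected counts
    expected = counts[:, None, :] * p_z_dw          # shape (d, z, w)
    p_w_z = expected.sum(axis=0)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = expected.sum(axis=2)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)

print(np.round(p_z_d, 2))  # per-document topic mixture after EM
```

LDA differs mainly in placing Dirichlet priors on these distributions, so topic mixtures for unseen documents are handled in a principled way rather than refit from scratch.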
Tom Duerig
04:40 PM ET (US)
Alright, so I get a really easy comment this time. I HIGHLY recommend reading Ian Fasel's thesis at http://mplab.ucsd.edu/~ianfasel/IanThesis.pdf Similar content, VERY different approach and viewpoint.
04:25 PM ET (US)
The paper talks a lot about large sets of images. Most large image sets today are found on the internet (and that set is growing), and the vast majority of this image data comes accompanied by some form of human labeling. As a follow-up to this paper, I think text labels should be incorporated into the process of learning object classes. Clearly, if images on the web were perfectly labeled, there would be no need for this kind of research. But the fact that text accompanies images in almost every case (thanks to the web) cannot be ignored.
02:19 PM ET (US)
The paper chose Normalized Cuts as its segmentation algorithm because the sizes of the segments it produces are similar to the sizes of plausible objects. Could you give a quick overview of Normalized Cuts and mention some top competitors that could have been used instead?
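My rough understanding of the core computation in Shi and Malik's Normalized Cuts, sketched on a toy affinity graph (the graph and weights below are made up for illustration and have nothing to do with the paper's actual image graphs): one builds a weight matrix W and degree matrix D, solves the generalized eigenproblem (D - W) y = λ D y, and thresholds the eigenvector for the second-smallest eigenvalue to bipartition the graph.

```python
import numpy as np
from scipy.linalg import eigh

# Toy graph: two tight clusters (nodes 0-2 and 3-5) with strong
# within-cluster affinity and weak between-cluster affinity.
W = np.array([
    [0, 9, 8, 1, 0, 0],
    [9, 0, 9, 0, 1, 0],
    [8, 9, 0, 0, 0, 1],
    [1, 0, 0, 0, 9, 8],
    [0, 1, 0, 9, 0, 9],
    [0, 0, 1, 8, 9, 0],
], dtype=float)
D = np.diag(W.sum(axis=1))  # degree matrix

# Solve (D - W) y = lambda * D y; the eigenvector for the
# second-smallest eigenvalue (the "Fiedler vector") relaxes the
# discrete normalized-cut objective.
vals, vecs = eigh(D - W, D)
fiedler = vecs[:, 1]
labels = (fiedler > 0).astype(int)  # sign gives the bipartition
print(labels)
```

On images, W is typically built from pixel similarity (color/texture) weighted by spatial proximity, and the bipartition is applied recursively to get multiple segments.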
02:38 AM ET (US)
A main point of the paper is its use of multiple segmentations, yet the results don't seem to show that the multiple segmentations provide much of a benefit. Why does the single-segmentation approach outperform on the top 20 returned images (though not on the top 500)?
01:48 AM ET (US)
This paper is a clear rip-off of the seminal work by Ben-Haim et al. (http://www.cs.ucsd.edu/~sjb/slam06.pdf).

In all seriousness though, I think it would be really cool if this type of approach was applied to image search engines.
01:26 AM ET (US)
matt: I was looking at their website and they seem to have given images for all the topics at http://people.csail.mit.edu/brussell/resea...iscovery/index.html
Edited 10-12-2006 01:31 AM
01:00 AM ET (US)
I am a bit confused by the results they show on cars (Figure 6, top-left), in which a black car and a white car fall under the same topic. This should not be possible using visual words if the descriptor encodes any sort of color or intensity information.

By the way, do they use color images in their dataset or b/w?
09:05 PM ET (US)
In figures 5-7, I'm somewhat troubled by the fact that they've selected a handful of the topics out of the set of algorithm-discovered topics. It makes me wonder about the topics they didn't select.

I also wonder whether the algorithm could be adapted to partly labeled data, where some of the contents of a picture are labeled but no information about where they occur in the image is provided. Since this method doesn't seem likely to scale well to data sets that aren't already set up to contain a small selection of objects, it seems like such labels would already be available. (There isn't as much of a correlate in the text-mining community for this kind of approach, but...)
