
TOPIC:

Unsupervised Improvement of Visual Detectors using Co-Training

5
Ben Laxton
10-25-2005
04:24 PM ET (US)
My questions follow the theme of emphasizing that (1) this is a learning framework and (2) combining the resulting classifiers after co-training is better than training one classifier with all the features.

My first question is how the classifiers are combined after training.

Secondly, I was thinking about this process and wonder if it couldn't be done with a tree-like cascaded classifier. For example, first run some basic classifiers on both images in one cascade level, then split and run on gray-level and on backsub separately, but at the same level, then feed the resulting outputs of each into the distinct parallel classifiers, or something like this.
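For reference, the two-view co-training loop being discussed can be sketched roughly as follows. This is a toy numpy version: least-squares linear scorers stand in for the paper's boosted detectors, the two random feature views stand in for the gray-level and background-subtraction views, and the "sum the two margins" combination at the end is one illustrative fusion choice, not the paper's stated method.

```python
import numpy as np

def fit_linear(X, y):
    # Least-squares linear scorer on +/-1 labels (a stand-in for the
    # confidence-rated boosted detectors used in the paper).
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def margin(w, X):
    # Signed score; |margin| is used as the confidence.
    return np.hstack([X, np.ones((len(X), 1))]) @ w

def co_train(Xa, Xb, y, Ua, Ub, rounds=5, k=2):
    """Two-view co-training sketch. Xa/Xb are two feature views of the
    labeled set; Ua/Ub are the same unlabeled pool under each view;
    y is in {-1, +1}."""
    Xa, Xb, ya, yb = Xa.copy(), Xb.copy(), y.copy(), y.copy()
    wa, wb = fit_linear(Xa, ya), fit_linear(Xb, yb)
    for _ in range(rounds):
        if len(Ua) < 2 * k:
            break
        # Each view picks its k most confident unlabeled examples and
        # hands them, with its predicted labels, to the *other* view.
        pa = np.argsort(np.abs(margin(wa, Ua)))[-k:]
        pb = np.argsort(np.abs(margin(wb, Ub)))[-k:]
        Xb = np.vstack([Xb, Ub[pa]])
        yb = np.concatenate([yb, np.sign(margin(wa, Ua[pa]))])
        Xa = np.vstack([Xa, Ua[pb]])
        ya = np.concatenate([ya, np.sign(margin(wb, Ub[pb]))])
        # Remove the newly labeled examples from the unlabeled pool.
        keep = np.setdiff1d(np.arange(len(Ua)), np.union1d(pa, pb))
        Ua, Ub = Ua[keep], Ub[keep]
        wa, wb = fit_linear(Xa, ya), fit_linear(Xb, yb)
    # One simple way to combine afterwards: sum the two margins.
    return lambda xa, xb: margin(wa, xa) + margin(wb, xb)
```

The point of the sketch is the information flow: confident labels cross between the two views, which is only useful if the views' errors are not strongly correlated.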
4
Brendan Morris
10-25-2005
02:33 PM ET (US)
I'm a bit confused about a couple of things. I don't quite understand the subsampling of the negative unlabeled examples.
I also notice in Fig. 5 that the initial classifiers have multiple detections at many scales but the co-trained classifiers have far fewer (though they aren't completely removed). This probably relates to Robin's question about the peaks in the scoring function, but why does the gray-level classifier remove most of these while the backsub classifier does not?
3
Anup Doshi
10-25-2005
04:13 AM ET (US)
Very interesting paper. I am still not quite convinced that two co-trained classifiers together would work better than a single combined classifier trained by itself (using both feature sets). In the paper it seems like they only compare the individual classifiers with and without co-training. But I guess I should look into the proof that they cite.

They mention that it might be possible to extend this co-training idea to multiple classifiers. I would be interested to see how well this might work and how statistically independent the classifiers would need to be.
2
Robin Hewitt
10-25-2005
03:12 AM ET (US)
The single biggest assumption in this seems to be the one at the end of Section 2.1: "the margins of the two classifiers are only weakly related." Would one be able to evaluate that, given the limited information available from initial training? It seemed to me the inequality they refer to only measures whether a margin exists (assuming class distributions are the same between training and test). I don't see anywhere that it gives assurances about being able to evaluate what that margin looks like. Did I miss something here?

I also have questions about the peakedness requirement at the end of Section 5.3. Peaks is plural. Does that mean that *each* stage of the cascade must be a local maximum? What's the domain for peakedness...is it (x,y,scale) space? Finally, I didn't understand this bit in the last paragraph of 5.3: "...sum of the weights above the selection threshold and are nearby the peak." Can you elucidate? Thanks!

This is all very interesting. I'm looking forward to your seminar!
1
Erik Murphy-Chutorian
10-25-2005
01:42 AM ET (US)
As the paper mentions, it is undoubtedly useful for a recognition system to improve its performance without the need for many costly training examples. The co-training method described in the paper is a good demonstration of feasibility, but the resulting system still possesses all of the weaknesses of the original Viola-Jones system, including the inability to build a decent cascade for multiple objects, or for substantially different poses of an object.
