
TOPIC:

Active Learning for Visual Object Recognition

  Messages 10-6 deleted by author between 05-17-2008 10:16 AM and 07-21-2006 09:02 AM
5
David
11-01-2005
04:43 PM ET (US)
I've run across this reference before. I haven't looked at it, but it supposedly does some analysis concerning illumination-invariant features.

Y. Moses and S. Ullman, "Limitations of Non-Model-Based Recognition Schemes," Proc. European Conf. on Computer Vision, pp. 820-828, 1992.
4
Ben Laxton
11-01-2005
03:59 PM ET (US)
I will go into the illumination-independent features and what I think are some of their weaknesses. As for Robin's comment, I agree: the authors don't do much to formally prove that this is a valid thing to do, but rather rely on the results. I think it makes intuitive sense, though, and it would be very valuable if it were proven.
3
Erik Murphy-Chutorian
11-01-2005
03:26 PM ET (US)
The authors only very briefly describe the illumination-independent features that they use. Would you be able to describe them in more detail during the presentation?
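
For readers who want something concrete before the presentation, here is a minimal sketch of one common family of illumination-independent features: pixel-order comparisons between two sets of control points. The function name, the control-point representation, and the +1/0/-1 output convention are assumptions for illustration only; the paper's exact construction may differ, which is presumably what Ben will cover.

    import numpy as np

    def pixel_order_feature(patch, set_a, set_b):
        """Compare two sets of control points in a grayscale patch.

        Returns +1 if every pixel in set_a is brighter than every pixel
        in set_b, -1 for the reverse, and 0 otherwise. Only the ordering
        of intensities matters, so the output is unchanged by any
        monotonic (order-preserving) brightness or contrast change
        applied to the whole patch -- the sense in which such features
        are illumination independent.
        """
        a = np.array([patch[r, c] for r, c in set_a], dtype=float)
        b = np.array([patch[r, c] for r, c in set_b], dtype=float)
        if a.min() > b.max():
            return 1
        if a.max() < b.min():
            return -1
        return 0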
2
Anup Doshi
11-01-2005
02:58 PM ET (US)
They say that an image is no longer considered a positive example once more than 20% of the person is occluded. But in Figure 3, there seem to be some pretty clear views of almost whole people that are counted as negative examples. I wonder how they came up with this 20% threshold and how they enforce it in practice.
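
For what it's worth, here is one way a ">20% occluded" rule could be enforced in practice if ground-truth occluder boxes were annotated. The box representation and the helper name are assumptions for illustration, not something stated in the paper.

    import numpy as np

    def occluded_fraction(person_box, occluder_boxes):
        """Fraction of the person's bounding box covered by occluders.

        Boxes are (x1, y1, x2, y2) in integer pixel coordinates.
        """
        x1, y1, x2, y2 = person_box
        covered = np.zeros((y2 - y1, x2 - x1), dtype=bool)
        for ox1, oy1, ox2, oy2 in occluder_boxes:
            # Clip each occluder to the person box and mark its pixels.
            cx1, cy1 = max(ox1, x1), max(oy1, y1)
            cx2, cy2 = min(ox2, x2), min(oy2, y2)
            if cx1 < cx2 and cy1 < cy2:
                covered[cy1 - y1:cy2 - y1, cx1 - x1:cx2 - x1] = True
        return covered.mean()

    # Keep an example as a positive only if at most 20% is occluded.
    is_positive = occluded_fraction((10, 10, 60, 130),
                                    [(40, 10, 80, 130)]) <= 0.20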

Also, regarding the features they use: I presume their performance might degrade if there is some point-wise noise, like salt-and-pepper noise.
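
To make that concern testable, below is a rough sketch of how one might corrupt image crops with salt-and-pepper noise and then compare a feature's responses before and after. The function and the 5% corruption rate are illustrative assumptions, not something from the paper.

    import numpy as np

    def add_salt_and_pepper(img, amount=0.05, rng=None):
        """Set a fraction `amount` of pixels to pure black or white.

        `img` is a 2-D uint8 grayscale array; returns a noisy copy.
        """
        rng = np.random.default_rng() if rng is None else rng
        noisy = img.copy()
        n = int(amount * img.size)
        ys = rng.integers(0, img.shape[0], size=n)
        xs = rng.integers(0, img.shape[1], size=n)
        noisy[ys, xs] = rng.choice(np.array([0, 255], dtype=img.dtype), size=n)
        return noisy

    # Compare a feature's output on clean vs. noisy versions of the same
    # crop to estimate how often point-wise noise flips its response.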
Edited 11-01-2005 03:07 PM
1
Robin Hewitt
11-01-2005
12:29 AM ET (US)
I'm still wondering how we can know, a priori, when and whether a particular co-training approach will work. Specifically, the Schapire proof requires random sampling from the same distribution, and neither this approach nor that of Levin et al. does that. In this paper, the authors do at least directly confront that fact. The problem I have here, though, is that I don't see how their argument for effectiveness (in Section 4) can be considered convincing.

It seems to hinge on, "This...is true when the initial unbiased sample is sufficient to restrict the set of good classifiers to a unique approximation of the optimal classifier." They then apparently claim that the fact that it worked in this particular case constitutes a general proof of the method itself. I think calling that a proof is going too far. Rather, it strikes me that they've simply reformulated the effectiveness question as, "How can one know when an initial sample is sufficient to approximate the optimal classifier closely enough that this will work?" But I don't think they've answered it.

Unless I've missed or misunderstood something here, this could be an interesting line of analysis. There does seem to be reasonable evidence that these methods work. Schapire's proof doesn't cover whether or when they work, but it may give some hints about how. If someone were to fill in the rest of the details and produce strong criteria for evaluating these methods, that would be a nice contribution!
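
To make the object of discussion concrete, here is a minimal, generic co-training loop in the Blum-Mitchell style: two classifiers trained on different "views" confidently label unlabeled examples for each other. This is an illustrative sketch, not the authors' algorithm; the classifier choice and selection rule are assumptions. Note that nothing in it guarantees the unlabeled pool is an i.i.d. sample from the target distribution, which is exactly the assumption Robin points out.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def co_train(X1_l, X2_l, y_l, X1_u, X2_u, rounds=5, k=10):
        """Generic co-training sketch over two feature views.

        X1_l/X2_l, y_l: labeled data in view 1 and view 2.
        X1_u/X2_u: the unlabeled pool in both views.
        Each round, both classifiers are retrained, and the k most
        confidently predicted unlabeled examples are given their
        predicted labels and moved into the labeled set.
        """
        for _ in range(rounds):
            c1 = LogisticRegression(max_iter=1000).fit(X1_l, y_l)
            c2 = LogisticRegression(max_iter=1000).fit(X2_l, y_l)
            if len(X1_u) == 0:
                break
            p1, p2 = c1.predict_proba(X1_u), c2.predict_proba(X2_u)
            conf = np.maximum(p1.max(axis=1), p2.max(axis=1))
            pick = np.argsort(conf)[-k:]
            lab1 = c1.classes_[p1[pick].argmax(axis=1)]
            lab2 = c2.classes_[p2[pick].argmax(axis=1)]
            new_y = np.where(p1[pick].max(axis=1) >= p2[pick].max(axis=1),
                             lab1, lab2)
            X1_l = np.vstack([X1_l, X1_u[pick]])
            X2_l = np.vstack([X2_l, X2_u[pick]])
            y_l = np.concatenate([y_l, new_y])
            keep = np.setdiff1d(np.arange(len(X1_u)), pick)
            X1_u, X2_u = X1_u[keep], X2_u[keep]
        return c1, c2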
