
TOPIC:

Dimensionality Reduction by Learning an Invariant Mapping

7
Carolina Galleguillos
11-16-2006
04:37 PM ET (US)
What is the impact of prior knowledge when we need to classify more sophisticated classes?
6
Deborah
11-16-2006
04:19 PM ET (US)
Will you please mention what a kernel matrix is? Thank you! =)
....

Okay, so (correct me where I'm wrong) I think it is a type of similarity matrix; ref: http://www.cbrc.jp/~kato/kerncomp1/kerncomp1-e.html

Also, it is a kernel matrix for high-dim data that lies on/near a low-dim manifold, which:
1. implicitly maps the data into a nonlinear feature space
2. is constructed by maximizing the variance of the feature space, subject to local constraints that preserve angles and distances between nearest neighbors.

(.. so is it a conformal mapping in neighborhoods?)

ref http://delivery.acm.org/10.1145/1020000/10...35&CFTOKEN=64122404
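To make "kernel matrix" concrete, here is a tiny numpy sketch of my own (not from either paper) of a Gaussian kernel / Gram matrix, where each entry is a similarity between two data points; the bandwidth sigma is just something I picked for illustration:

    import numpy as np

    def gaussian_kernel_matrix(X, sigma=1.0):
        # Kernel (Gram) matrix K[i, j] = k(x_i, x_j) for a Gaussian kernel;
        # each entry measures the similarity of points i and j.
        sq = np.sum(X**2, axis=1)
        sq_dists = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-sq_dists / (2.0 * sigma**2))

    # 5 random points in 10-d; K is 5x5, symmetric, with ones on the diagonal
    X = np.random.randn(5, 10)
    K = gaussian_kernel_matrix(X, sigma=2.0)

The semidefinite-programming method in the ACM reference learns the entries of the kernel matrix directly rather than fixing a kernel function like this, so treat this only as an illustration of what the matrix itself is.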
Edited 11-16-2006 04:37 PM
5
Adam
11-16-2006
01:42 PM ET (US)
Are the manifolds they generate here useful for classification? The visualizations are cool, but it seems they define their neighborhoods relatively simply. It would be interesting to see actual performance numbers.
4
Anton
11-16-2006
01:11 PM ET (US)
If there's time, could you give a quick overview of what loss functions are and how they're obtained? thanks!
3
Boris
11-16-2006
12:44 PM ET (US)
If anyone is interested in learning pair similarities, Greg Shakhnarovich's thesis (http://people.csail.mit.edu/gregory/thesis/thesis.html) describes a boosting approach to a similar problem. This paper presents a method which seems more powerful, though.
2
Matt
11-15-2006
10:02 PM ET (US)
Since the papers really don't go into any detail on convolutional networks, I figured I'd take a small stab at what they are, from what I've picked up. Basically, they're multilayer neural networks that get full translation invariance and partial scale and rotation invariance. They do so by sharing weights. Normally, to get a network that detects an object anywhere in an image, you'd need training data of that object in each location. Here they use shared weights across locations (and scales?), so that when the weights are changed for one location they're changed everywhere, giving the network the power of convolution without massive amounts of additional training. (Take all this with a grain of salt, since it's not something I know much about.)
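Here's a rough numpy sketch of what I mean by weight sharing (my own toy example, not the paper's architecture): one small set of weights gets applied at every image location, so updating those weights changes the detector everywhere at once.

    import numpy as np

    def conv2d_valid(image, weights):
        # Slide one shared weight kernel over the image ('valid' convolution).
        # The same `weights` array is reused at every location -- that is the
        # weight sharing; there is no separate detector per position.
        H, W = image.shape
        kh, kw = weights.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * weights)
        return out

    image = np.random.randn(28, 28)    # toy input
    weights = np.random.randn(5, 5)    # one shared 5x5 feature detector
    feature_map = conv2d_valid(image, weights)   # 24x24 response map

In a real convolutional net there are several such kernels per layer plus subsampling layers, which is where the partial scale/shift tolerance comes from, but the sharing idea is the same.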

It seems like the principle here, of pulling neighbors together and pushing non-neighbors apart to achieve dimensionality reduction, is quite powerful - and also used elsewhere.
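If it helps, my reading of that pull/push idea as an actual loss looks something like the sketch below (a contrastive-style loss; the convention of y = 0 for similar pairs and the margin value are my guesses, so check the paper for the exact form):

    import numpy as np

    def contrastive_loss(g1, g2, y, margin=1.0):
        # g1, g2: the two points after the learned mapping.
        # y = 0 for a 'neighbor' pair, y = 1 for a non-neighbor pair
        # (my convention here -- the paper's may differ).
        d = np.linalg.norm(g1 - g2)
        pull = 0.5 * d**2                      # similar pairs: penalize distance
        push = 0.5 * max(0.0, margin - d)**2   # dissimilar pairs: penalize only
                                               # if they fall inside the margin
        return (1 - y) * pull + y * push

The margin is what keeps everything from collapsing to a single point; only non-neighbors that are closer than the margin feel any push.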

I'm reminded of Kohonen networks, another neural network sometimes used for dimensionality reduction, but with the differences (as I understand it) of:
a) generally only attracting neighbors (and not repelling non-neighbors)
b) tending to work in unsupervised environments
c) having 'neighbors' defined by the structure of the manifold you're trying to fit to the data, rather than by labels
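For comparison, here's a toy sketch of one Kohonen/SOM update step as I understand it (the grid size, learning rate, and neighborhood width are made up): the winning unit and its grid neighbors get pulled toward the input, and nothing is ever pushed away.

    import numpy as np

    def som_update(weights, grid, x, lr=0.1, sigma=1.0):
        # One self-organizing-map step: find the best-matching unit (BMU),
        # then move it and its grid neighbors toward the input x.
        # 'Neighborhood' comes from the fixed low-dim grid, not from labels.
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        grid_dist = np.linalg.norm(grid - grid[bmu], axis=1)
        h = np.exp(-grid_dist**2 / (2 * sigma**2))   # attraction falls off on the grid
        weights += lr * h[:, None] * (x - weights)
        return weights

    # 10x10 map of units, each with a 3-d weight vector
    grid = np.array([[i, j] for i in range(10) for j in range(10)], dtype=float)
    weights = np.random.rand(100, 3)
    weights = som_update(weights, grid, x=np.random.rand(3))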
1
Nadav
11-09-2006
07:27 PM ET (US)
http://www.cs.toronto.edu/~hinton/csc2535/readings/chopra-05.pdf

The link above is a paper that goes into more detail about some of the aspects that are not so clear in the paper I'm presenting. It is from the previous CVPR.
