I think the underlying math is pretty involved (that's a sophisticated way of saying that I don't understand quite a lot of it!). This is what I got from the paper. We have a training sequence of images, which we call Z*. We have another set of images on which we intend to run the algorithm, called Z. Every image can be thought of as a geometric deformation of an "ideal" image; the set of ideal images is X. The basic idea of the paper is to generate X from Z* (the set of training images), and then estimate Z using a transformation of X. Assuming this interpretation of the algorithm, I have some questions which I will raise in class.
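To make my reading concrete, here is a tiny sketch of the generative picture I described: an observed image z is a geometric deformation of an "ideal" latent image x. I've used a simple 2-D translation (with wrap-around) as the deformation, purely for illustration; the paper's actual family of transformations may be richer.

```python
import numpy as np

def T_alpha(x, alpha):
    """Apply a translation alpha = (dy, dx) to image x (with wrap-around).

    A stand-in for the paper's geometric deformation; only illustrative.
    """
    dy, dx = alpha
    return np.roll(np.roll(x, dy, axis=0), dx, axis=1)

# A hypothetical "ideal" image x: a bright 2x2 square in one corner.
x = np.zeros((5, 5))
x[0:2, 0:2] = 1.0

# An observed image z is x moved by some transformation alpha.
alpha = (2, 3)
z = T_alpha(x, alpha)
```

Under my interpretation, training would estimate the ideal images X from Z*, and running the algorithm on Z would amount to inverting this mapping.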

1

Kristin Branson

11-21-2002

02:13 PM ET (US)

I'm having trouble understanding the main algorithmic idea in this paper. Is the idea that there are some observed samples {z}, each a geometric transformation of some latent data {x}, i.e. z = T_alpha x? Somehow, k exemplars are chosen from these unobserved {x} as centers. Then, conditional probability distributions P((alpha_t, k_t) | (alpha_{t-1}, k_{t-1})) are estimated, analogous to Kalman filters. If this is what the idea is, then my main confusion is how you know x and alpha given z. It seems like you need these for both the training set and the test set.
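To pin down what I think the generative model is (the inference direction is what confuses me), here is a sketch in the forward direction: at each time t, a joint state (k_t, alpha_t) evolves under a Markov transition, and the observation is z_t = T_{alpha_t} x_{k_t}. The exemplars, the transformation family (1-D shifts), and the uniform transition matrix below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical 1-D "exemplar" latent images x_k.
exemplars = [np.array([1.0, 0.0, 0.0, 0.0]),
             np.array([0.0, 0.0, 1.0, 0.0])]
shifts = [0, 1, 2, 3]  # illustrative transformation family: shifts alpha

# Joint state s encodes (k, alpha); a uniform transition for simplicity,
# standing in for P((alpha_t, k_t) | (alpha_{t-1}, k_{t-1})).
n_states = len(exemplars) * len(shifts)
trans = np.full((n_states, n_states), 1.0 / n_states)

def sample_sequence(T):
    """Sample (k_t, alpha_t, z_t) for t = 0..T-1 from the sketch model."""
    s = rng.integers(n_states)
    seq = []
    for _ in range(T):
        k, a = divmod(s, len(shifts))
        z = np.roll(exemplars[k], shifts[a])   # z_t = T_{alpha_t} x_{k_t}
        seq.append((k, shifts[a], z))
        s = rng.choice(n_states, p=trans[s])   # Markov step on (k, alpha)
    return seq

seq = sample_sequence(5)
```

My question is about going the other way: given only the z_t, how are x and alpha recovered, for both training and test data?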