Apart from the fact that the paper makes for fairly heavy reading, I have just one comment and one question.

The idea that we can unify various kinds of regression through an appropriate choice of regularization functional is very elegant indeed. I especially like the characterization of the solution in terms of the basis function G and its null space, which brings up my question: what exactly is a seminorm? It is also interesting that the authors discuss radial basis functions whose basis function induces a proper norm instead of a seminorm, so that the null space contains only the zero element. But this comes at the cost of adding another adjustable parameter, beta. I am curious whether it is a general phenomenon (the authors do not address this) that basis functions which induce a norm always force us to add one or more adjustable parameters to the choice of functions. And how much of a saving is it anyway, given that they propose choosing an appropriate beta by a technique like cross-validation? That is surprising, since the whole point of the exercise is to have a theoretical basis for choosing good regression estimators, rather than relying on empirical techniques like cross-validation. Also, isn't the choice of the form of the prior P[f], which explicitly uses the smoothness functional in its expression, a bit forced?

Edited 10/04/2001 02:03 PM
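To make my point about beta concrete: once the Gaussian width has to be picked empirically, the procedure looks like any other cross-validation loop. Here is a minimal sketch (not the paper's actual construction; the function names, the toy data, and the candidate beta grid are my own) of selecting beta for a Gaussian radial-basis-function fit by k-fold cross-validation:

```python
import numpy as np

def gaussian_design(x, centers, beta):
    """Design matrix G[i, j] = exp(-beta * (x_i - c_j)^2)."""
    return np.exp(-beta * (x[:, None] - centers[None, :]) ** 2)

def fit_rbf(x, y, beta, lam=1e-6):
    """Least-squares RBF coefficients, with a tiny ridge term for stability."""
    G = gaussian_design(x, x, beta)
    return np.linalg.solve(G.T @ G + lam * np.eye(len(x)), G.T @ y)

def cv_error(x, y, beta, k=5):
    """Mean squared k-fold cross-validation error for a given beta."""
    idx = np.arange(len(x))
    err = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        c = fit_rbf(x[train], y[train], beta)
        pred = gaussian_design(x[fold], x[train], beta) @ c
        err += np.sum((pred - y[fold]) ** 2)
    return err / len(x)

# Toy data: noisy samples of a smooth function.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 40))
y = np.sin(3 * x) + 0.05 * rng.normal(size=40)

# The "theoretical" framework still leaves beta to be tuned empirically.
betas = [0.1, 1.0, 10.0, 100.0]
errors = {b: cv_error(x, y, b) for b in betas}
best_beta = min(errors, key=errors.get)
```

So beta ends up being chosen exactly the way a purely empirical method would choose it, which is what prompts my question above.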
