

Empirical Evaluation of Dissimilarity Measures for Color and Texture

Markus Herrgard
01:52 PM ET (US)
In principle one could penalize the likelihood in the EM algorithm by whatever term seems reasonable for the target application, but my understanding is that the EM algorithm is then no longer guaranteed to converge. This will of course depend on the exact kind of penalization one uses and on how strong the contribution of the penalty term is. An example I have in mind is one where the likelihood is penalized by a Markov random field type term that forces EM to preferentially group neighboring pixels together. This type of penalization leads to problems with convergence if the penalty term is too large (i.e. the Gibbs field strength is too large).
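To make the trade-off concrete, here is a minimal sketch (my own illustration, not from the paper or the thread) of a Potts-style MRF penalty added to a Gaussian log-likelihood over pixel labels. The names (`beta` for the Gibbs field strength, the 1-D neighbor structure, unit variance) are all assumptions for illustration:

```python
import numpy as np

# Illustrative sketch: an MRF-penalized objective for pixel clustering.
# `beta` plays the role of the Gibbs field strength discussed above.

def gaussian_loglik(x, labels, means, var=1.0):
    """Per-pixel Gaussian log-likelihood under each pixel's assigned component."""
    mu = means[labels]
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var))

def potts_penalty(labels, beta):
    """Reward (beta > 0) label agreement between neighboring pixels on a 1-D grid."""
    return beta * np.sum(labels[:-1] == labels[1:])

def penalized_objective(x, labels, means, beta):
    # The MRF term couples neighboring pixels, so the E-step no longer
    # factorizes over pixels -- the source of the convergence trouble
    # when beta is made large.
    return gaussian_loglik(x, labels, means) + potts_penalty(labels, beta)

x = np.array([0.1, -0.2, 0.05, 3.9, 4.1, 4.0])
means = np.array([0.0, 4.0])
smooth = np.array([0, 0, 0, 1, 1, 1])  # spatially coherent labeling
noisy = np.array([0, 1, 0, 1, 0, 1])   # spatially incoherent labeling
```

With `beta = 0` the data term alone decides the objective; as `beta` grows the smoothness term increasingly dominates, which is exactly the regime where monotone convergence of EM becomes questionable.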
Edited 10-30-2001 01:56 PM
Gyozo Gidofalvi
01:40 PM ET (US)
I felt that the paper was too brief in its discussion of the individual dissimilarity measures. For a person who has been in the field of computer vision for several years, this is probably an excellent paper that summarizes and compares measures for different tasks using a reasonable common ground for comparison. I wonder if I'm the only one who felt this way?
sameer agarwal
12:47 PM ET (US)
I agree with JunWen; I am not entirely certain why one would want to use an asymmetric dissimilarity measure.

On Anand's idea, I would like to point out that in the normalized cut, the similarity and dissimilarity between clusters turned out to be symmetric, and maximizing one was equivalent to minimizing the other.

For the average cut and the average association cut, one of the two measures dominates, and you get a clustering that is correspondingly biased.

So my guess is that if you can find a similarity measure that is linearly related to the dissimilarity measure, you will save yourself the trouble of selecting a regularization term (the penalty scaling factor) and get good EM performance too.
andrew cosand
06:09 AM ET (US)
Hrm. This paper is somewhat difficult to read, a situation which is somewhat exacerbated by the fact that part of the PDF blinks. Does anyone else have this problem? (Perhaps I should sleep some and try again when it's light outside.)
Edited 10-30-2001 06:10 AM
Junwen Wu
03:29 AM ET (US)
1. What is the disadvantage of a nonsymmetric dissimilarity measure?

2. Will the feature space selected for the dissimilarity measure influence the performance? If not, that is really good; but if yes, then I think when using such measures in a real application, such as Anand's idea about combining them with the EM algorithm, the key becomes the feature selection part, which is also an important issue in clustering.
I think one of the main contributions of the paper is its systematic methodology for benchmark comparison.
Edited 10-30-2001 03:29 AM
01:07 AM ET (US)
I wish to know whether the dissimilarity measures mentioned in the paper can be used to improve the performance of existing clustering algorithms, like the EM algorithm, through regularization. It is well known that EM tries to maximize the likelihood function. We could modify the likelihood to obtain a penalized likelihood where the penalty term is a function of the dissimilarity measure between clusters. By maximizing this modified objective function, we might obtain clustering algorithms that provide better segmentation.
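As a rough sketch of what I have in mind (my own illustration; the penalty weight `lam` and the choice of the chi-squared measure are assumptions, and any of the paper's measures could be plugged in instead):

```python
import numpy as np

# Illustrative sketch: a likelihood penalized by between-cluster
# dissimilarity, so that well-separated clusters are preferred.

def chi2_dissimilarity(h1, h2, eps=1e-12):
    """Chi-squared distance between two normalized histograms (symmetric)."""
    m = 0.5 * (h1 + h2)
    return 0.5 * np.sum((h1 - h2) ** 2 / (m + eps))

def penalized_loglik(loglik, cluster_hists, lam):
    """Log-likelihood plus lam times the summed pairwise cluster dissimilarity."""
    k = len(cluster_hists)
    sep = sum(chi2_dissimilarity(cluster_hists[i], cluster_hists[j])
              for i in range(k) for j in range(i + 1, k))
    return loglik + lam * sep

# At equal likelihood, two well-separated cluster histograms score
# higher than two overlapping ones:
apart = [np.array([0.9, 0.1, 0.0]), np.array([0.0, 0.1, 0.9])]
close = [np.array([0.5, 0.3, 0.2]), np.array([0.4, 0.3, 0.3])]
```

Note that a symmetric measure like chi-squared keeps the penalty well defined regardless of cluster ordering, which touches on the asymmetry question raised elsewhere in this thread.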

Any thoughts on this ?
Edited 10-30-2001 01:08 AM
