Maksym
02-28-2017
01:08 AM ET (US)

[1] "Sepal.Length == small & Sepal.Width == average & Petal.Length == tiny & Petal.Width == tiny"
Does anybody know how to pass this character string of conditions to the subset function, so that it does subset(data, Sepal.Length == small & Sepal.Width == average & Petal.Length == tiny & Petal.Width == tiny)?
Is there a smart way to do this in R?
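One common approach (a sketch of my own, not from the thread) is to parse the condition string into an expression and let subset() evaluate it. Note the values inside the string must be quoted ('small', not small) if the columns hold character data; the data frame below is entirely hypothetical.

```r
# Hypothetical data frame mimicking the discretized iris data in the question
data <- data.frame(
  Sepal.Length = c("small", "large"),
  Sepal.Width  = c("average", "average"),
  Petal.Length = c("tiny", "long"),
  Petal.Width  = c("tiny", "wide"),
  stringsAsFactors = FALSE
)

# The values are quoted inside the string, unlike in the original question,
# so they are compared as strings rather than looked up as variables.
cond <- "Sepal.Length == 'small' & Sepal.Width == 'average' &
         Petal.Length == 'tiny' & Petal.Width == 'tiny'"

# parse() turns the string into an unevaluated expression;
# subset() then evaluates it in the context of the data frame's columns.
result <- subset(data, eval(parse(text = cond)))
```

Here `result` should contain only the first row of the toy data frame.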

Ilya Kuzovkin
04-19-2016
03:49 AM ET (US)

The correct PDF is up now, thanks!

Kristjan Jansons
04-18-2016
12:41 PM ET (US)

For a quick fix, I think last year's slides are the same: courses.cs.ut.ee/MTAT.03.227/2014_spring/uploads/Main/lecture10.pdf

Anti
04-18-2016
12:22 PM ET (US)

Both lecture 10 and lecture 11 link to the lecture 11 slides on the course website. There are no lecture 10 slides.


Deleted by author 03-24-2016 09:50 AM

Sulev
03-21-2016
11:54 AM ET (US)

If anyone is struggling with plot.neuralnet not printing anything in R Markdown but instead opening the graph in a separate window, you should redefine the function as shown here: https://groups.google.com/forum/#!topic/rropen/qS7Fki9pj8k


Deleted by author 03-15-2016 06:14 AM

Sulev Reisberg
03-14-2016
06:47 PM ET (US)

I think you should use "less than or equal" in the formula:
classification.result * data$y <= 0
In that case 0 also counts as an incorrect classification. That is what I did in that exercise.
Sulev
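As a small sketch of that check (hypothetical scores and labels, not the actual homework data), counting a margin of exactly 0 as an error:

```r
# Hypothetical classifier scores and true labels in {-1, +1}
classification.result <- c(1.5, -0.2, 0, 2.1)
y <- c(1, 1, -1, -1)

# "<= 0" rather than "< 0": a product of exactly 0
# is also counted as an incorrect classification
misclassified <- sum(classification.result * y <= 0)
```

With these toy values the products are 1.5, -0.2, 0, and -2.1, so three points are counted as misclassified.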

Andrii
03-14-2016
06:14 PM ET (US)

Can anyone confirm or refute the following? In Ex. 11-12 we start minimizing our cost function from the point (0, 0). At this point the function value equals 0. Since the cost function is bounded below by 0 (slide 9 of the lecture), does that mean we are at a minimum and don't need any iterations, except perhaps to show that the gradient is indeed the 0 vector (as the function is at a minimum)?

Andrii
03-14-2016
04:54 PM ET (US)

@Markus
Thank you! Actually, it makes sense to handle nonpositive cases as misclassifications.

Markus
03-14-2016
04:47 PM ET (US)

In exercises 11-12 we are supposed to first prove that an element is misclassified iff its functional margin is negative, and right after that we have an exercise where we initialize the weights to zero, thus making all functional margins zero.
I believe we are supposed to count every nonpositive margin as misclassified, and that is also suggested in Introduction to Statistical Learning: http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Sixth%20Printing.pdf
I might just be easily confused, but the ordering of these exercises did confuse me. Maybe you could add this as a hint to the homework?
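A minimal sketch of that convention in a perceptron loop (toy data and an assumed learning rate of 1, not the homework setup): with w initialized to zero, all functional margins are 0 and therefore count as misclassified, so the first update still happens.

```r
# Toy linearly separable data; labels in {-1, +1}
X <- matrix(c( 1,  2,
               2,  1,
              -1, -2,
              -2, -1), ncol = 2, byrow = TRUE)
y <- c(1, 1, -1, -1)

w   <- c(0, 0)  # zero initialization, as in the exercise
eta <- 1        # assumed learning rate

for (iter in 1:100) {
  margins <- (X %*% w) * y     # functional margins y_i * <w, x_i>
  mis <- which(margins <= 0)   # nonpositive margin => misclassified,
                               # so the all-zero start is NOT a fixed point
  if (length(mis) == 0) break  # all margins strictly positive: done
  i <- mis[1]
  w <- w + eta * y[i] * X[i, ] # standard perceptron update
}
```

On the first pass every margin is 0, the update fires on the first point, and the loop terminates once every margin is strictly positive.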

Andrii
03-14-2016
04:43 PM ET (US)

Could anyone clarify Ex. 11-12? It seems that we don't have a bias term w_0 in the perceptron. So, if we start the algorithm from the point (0, 0), the functional margin will always be 0, and hence the error is also 0. The gradient is also 0, since we have no misclassified points. That means our initial point will never update. Where am I wrong?

Dmytro
03-08-2016
09:11 AM ET (US)

@Anti and @Kristjan Jansons
Thank you a lot, you made my half an hour before the deadline less stressful!

Kristjan Jansons
03-08-2016
08:34 AM ET (US)

@Dmytro Projection is something like this: http://www.euclideanspace.com/maths/geomet.../projectLine300.gif
On the second question: I think in tasks 26-28 you can use the same stoch_df as in task 23, but with the constraint as specified, so if the point gets out of the allowed range, do a projection. In tasks 29-30 it's something like sum(min(d1^3, d2^3)), if I understand correctly.

Anti
03-08-2016
08:29 AM ET (US)

@Dmytro
1. We project the point (w1, w2) orthogonally back into the region that satisfies the given constraint. I did it so that if (w1, w2) went outside the region, I projected it orthogonally onto the line that bounds the region where the constraint is satisfied.
2. Yes, in the last exercises we have a function with multiple arguments.
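For intuition, projection onto a half-plane constraint of the form sum(a * w) <= b (my notation, not necessarily the one used in the homework) can be sketched as follows: if the point violates the constraint, move it to the nearest point on the boundary line.

```r
# Orthogonally project w back onto the set {w : sum(a * w) <= b}.
# If w already satisfies the constraint it is returned unchanged;
# otherwise it is moved along the normal vector a onto the boundary.
project <- function(w, a, b) {
  violation <- sum(a * w) - b
  if (violation > 0) {
    w - violation * a / sum(a^2)  # step back by the (scaled) normal
  } else {
    w
  }
}
```

For example, with the constraint w1 + w2 <= 2, the point (2, 2) is projected to (1, 1), while (0, 0) already satisfies the constraint and stays put.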

Dmytro
03-08-2016
07:56 AM ET (US)

I am slightly stuck on home assignments 26-30. I would like to ask what exactly is meant by "project": what do we project, and where?
The second question: do I understand correctly that we will have a function of two arguments, f(w1, w2) = sum[ min(l(w1), l(w2)) ]?
