Department of Cybersecurity
202020670
Park Jihee
A loss function takes the scores and tells us quantitatively how bad a given W is. This lets us quantify, for any value of W, how good or bad it is.
We then need an efficient procedure for searching through the space of all possible Ws and finding the value of W that is least bad; this process is called optimization.
A first concrete example of a loss function is the multi-class SVM loss.
s_j : score of an incorrect (false) class
s_yi : score of the correct (true) class
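Putting this notation together, the multi-class SVM loss for a single example (x_i, y_i) can be written in the standard hinge-loss form, with the safety margin fixed at 1 as described below:

```latex
% Multi-class SVM (hinge) loss for one example (x_i, y_i),
% where s = f(x_i, W) is the vector of class scores.
L_i = \sum_{j \neq y_i} \max\left(0,\; s_j - s_{y_i} + 1\right)
```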
We sum over all the incorrect categories and compare the score of the correct category with the score of each incorrect category. If the score for the correct category is greater than the score of the incorrect category by at least some safety margin, which we set to one, that term contributes zero loss. So if the score for the true category is much larger than the scores of all the false categories, the total loss is zero. A sketch of this computation is shown below.
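As a minimal NumPy sketch of this computation (the function name and inputs are hypothetical: `scores` is the vector of class scores for one example, `y` is the index of the correct class):

```python
import numpy as np

def svm_loss_single(scores, y, margin=1.0):
    """Multi-class SVM loss for one example.

    scores : 1D array of class scores s = f(x_i, W)
    y      : index of the correct class
    margin : the safety margin, set to 1 as in the notes
    """
    # Hinge term for every class measured against the correct-class score.
    margins = np.maximum(0.0, scores - scores[y] + margin)
    margins[y] = 0.0  # do not count the correct class itself
    return margins.sum()

# Example: the correct class (index 0) beats the others by more than the
# margin, so the loss is zero.
print(svm_loss_single(np.array([3.2, 1.1, -0.5]), y=0))  # 0.0
```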