Methods for multi-class cost-sensitive learning based on iterative
example-weighting schemes reduce the multi-class cost-sensitive problem
to binary classification, which is solved by a component binary
classification algorithm. One such method proceeds as follows. The
original data set is first expanded by replacing each instance with one
labeled pair (x, y) for every possible label y, so that each instance
contributes as many data points as there are labels. In each iteration,
the method performs weighted sampling from this expanded data set,
assigning each labeled pair (x, y) a weight equal to the difference
between the average cost incurred on instance x by the averaged
hypotheses from the iterations so far and the misclassification cost
associated with label y. It then calls the component classification
algorithm on a modified binary classification problem in which each
example is the labeled pair itself, and its (meta) label is 1 or 0
according to whether the pair's weight is positive or negative,
respectively. Finally, it outputs as its hypothesis the average of the
hypotheses produced in the respective iterations.
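
The scheme above can be sketched in code. This is a minimal,
hypothetical illustration, not the authors' implementation: a weighted
least-squares scorer on (instance, label) interaction features stands in
for the component binary classification algorithm, and the round-1
baseline cost (before any hypotheses exist) is taken to be the uniform
average cost over labels, an assumption the text does not fix. C[i, y]
denotes the misclassification cost of predicting label y on instance i.

```python
import numpy as np

def expand(X, k):
    """Expanded data set: one row per (instance, candidate label) pair.

    Features are the interaction onehot(y) (x) x plus a per-label bias,
    so a linear scorer can rank labels per instance. Row i*k + y is the
    pair (X[i], y)."""
    n, d = X.shape
    F = np.zeros((n * k, d * k + k))
    for i in range(n):
        for y in range(k):
            r = i * k + y
            F[r, y * d:(y + 1) * d] = X[i]
            F[r, d * k + y] = 1.0
    return F

def fit_binary(F, z, lam=1e-3):
    """Stand-in component learner: ridge regression on meta-labels z."""
    A = F.T @ F + lam * np.eye(F.shape[1])
    return np.linalg.solve(A, F.T @ z)

def train(X, C, T=5, seed=0):
    rng = np.random.default_rng(seed)
    n, k = C.shape
    F = expand(X, k)
    coefs = []
    for t in range(T):
        if coefs:
            # Average cost on each instance under the hypotheses so far:
            # each hypothesis predicts argmax_y score(x, y), incurring
            # the cost of its predicted label.
            per_hyp = []
            for c in coefs:
                pred = (F @ c).reshape(n, k).argmax(axis=1)
                per_hyp.append(C[np.arange(n), pred])
            avg_cost = np.mean(per_hyp, axis=0)
        else:
            avg_cost = C.mean(axis=1)  # assumed round-1 baseline
        # Weight of pair (x, y): average cost so far minus cost of y.
        w = (avg_cost[:, None] - C).ravel()
        z = (w > 0).astype(float)          # meta-label: sign of weight
        p = np.abs(w)
        p = p / p.sum() if p.sum() > 0 else np.full(len(w), 1.0 / len(w))
        # Weighted sampling from the expanded data set.
        idx = rng.choice(len(w), size=len(w), p=p)
        coefs.append(fit_binary(F[idx], z[idx]))
    return coefs

def predict(coefs, X, k):
    """Final classifier: argmax over the averaged hypothesis scores."""
    n = X.shape[0]
    F = expand(X, k)
    scores = np.mean([(F @ c).reshape(n, k) for c in coefs], axis=0)
    return scores.argmax(axis=1)

# Toy usage on three well-separated classes with 0/1 costs.
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
y_true = np.repeat(np.arange(3), 30)
X = centers[y_true] + 0.4 * rng.standard_normal((90, 2))
C = np.ones((90, 3))
C[np.arange(90), y_true] = 0.0
pred = predict(train(X, C, T=5, seed=0), X, 3)
```

Note that with 0/1 costs the pair weights reduce to boosting-style
reweighting toward instances the current averaged ensemble still gets
wrong; general cost matrices simply scale that signal by the cost gap.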