A modified large margin perceptron learning algorithm (LMPLA) uses asymmetric margin variables for relevant training documents (positive examples) and non-relevant training documents (negative examples) to accommodate biased training sets. In addition, initialization with the positive examples forces at least one update to the initial weight vector. A noise parameter is also introduced to force convergence of the algorithm.
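The ideas above can be illustrated with a minimal sketch of a margin perceptron with asymmetric margins. This is an assumed reconstruction, not the published LMPLA: the parameter names (`tau_pos`, `tau_neg`, `eta`), the zero initialization, and the epoch cap are illustrative choices. The zero initial vector guarantees that the first positive example violates its margin and triggers an update, and the noise parameter `eta` relaxes the margin test so the stopping condition is easier to satisfy on noisy data; the epoch cap is a practical termination guard.

```python
import numpy as np

def lmpla_train(X, y, tau_pos=1.0, tau_neg=0.1, eta=0.01, max_epochs=100):
    """Margin perceptron with asymmetric margins (illustrative sketch).

    X: (n, d) array of training documents; y: labels in {+1, -1}.
    tau_pos / tau_neg: margins required of positive / negative examples,
    allowing the two classes to be treated asymmetrically.
    eta: noise parameter that loosens the margin test.
    """
    n, d = X.shape
    # Zero initialization: the first positive example necessarily has
    # score 0 < tau_pos - eta, so at least one update is forced.
    w = np.zeros(d)
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(X, y):
            tau = tau_pos if yi == 1 else tau_neg  # asymmetric margin
            # Update whenever the (noise-relaxed) margin is violated.
            if yi * (w @ xi) < tau - eta:
                w = w + yi * xi
                updated = True
        if not updated:      # a full pass with no violations: converged
            break
    return w
```

On a small separable set such as `X = [[2,1],[1,2],[-1,-1],[-2,-1]]` with labels `[+1,+1,-1,-1]`, the returned `w` separates the two classes after a couple of passes.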