B****x Posts: 17 | 1 The ACM Paris Kanellakis Theory and Practice Award honors theoretical
discoveries that have had a significant impact on the practice of computing. http://awards.acm.org/kanellakis/
Paris Kanellakis Theory and Practice Award, 2008
Corinna Cortes and Vladimir Vapnik
(Vladimir Vapnik: Fellow of NEC Laboratories / Columbia University)
Citation
For the development of Support Vector Machines, a highly effective
algorithm for classification and related machine learning problems.
Full Citation
In their 1995 paper "Support-Vector Networks," Cortes and Vapnik
introduced support vector machines (SVMs).
In a typical computational learning problem, the designer first provides
training data, labeling each example as positive or negative. The data can
be, for instance, profiles of typical customers; each example is modeled
as a point in a high-dimensional geometric space, where each dimension
corresponds to a user attribute such as age.
The label for each data point can record, for instance, whether the
corresponding customer likes or dislikes a particular product. The
learning algorithm then computes a decision boundary that separates
positive examples from the negative, so that in the future, this
boundary can be used to predict whether a new customer will like or
dislike the product based on her profile. Prior to the publication of
this paper, this method could be applied only when a decision boundary
specifiable by a hyperplane (that is, a linear constraint over attribute
variables) was guaranteed to exist for the training data.
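In symbols (standard notation, added here for concreteness rather than taken from the citation), such a hyperplane boundary is a classifier of the form

f(\mathbf{x}) = \operatorname{sign}(\mathbf{w} \cdot \mathbf{x} + b),

where the weight vector \mathbf{w} and offset b are fit to the training data, and a new point is labeled positive or negative according to the sign.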
Unfortunately, this is rarely the case for real-world training data, which
typically contains many errors. The paper by Cortes and Vapnik revolutionized the
field by providing for the first time an accurate algorithm that
converges in polynomial time even when the training data is not
separable by a hyperplane.
This is achieved by mapping training data points to a higher-dimensional
space, and then determining a hyperplane that minimizes the error on the
training set, while maximizing the distance, or the margin, between the
positive and negative examples. The margin-maximizing solution is
obtained by solving a quadratic optimization problem, and works well in
practice.
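For concreteness, this is the standard soft-margin quadratic program (the notation below is the usual textbook one and is an addition here, not part of the citation):

\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \;\; \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{subject to} \quad y_i\,(\mathbf{w} \cdot \phi(\mathbf{x}_i) + b) \ge 1 - \xi_i, \quad \xi_i \ge 0,

where \phi maps the data points to the higher-dimensional space, the slack variables \xi_i absorb training errors, and C trades margin width against training error.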
With this algorithm, the machine learning community was armed with a
general technique that could be used in noisy real-world scenarios. The
theoretical foundations articulated in this paper have inspired
extensive research, making SVMs one of the most frequently used algorithms
in the machine learning literature. The algorithm and its variants have
been used successfully in a wide variety of practical applications such
as handwriting recognition, speech synthesis, medical diagnosis, protein
structure prediction, face detection, weather forecasting, intrusion
detection, and text mining.
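As a minimal illustration of the technique (a sketch only: the citation names no software, and scikit-learn, the toy dataset, and all parameter values below are my own choices), here is a soft-margin SVM with an RBF kernel trained on data that no hyperplane in the input space can separate:

# Sketch: soft-margin SVM with an RBF kernel via scikit-learn.
# Dataset and hyperparameters are illustrative, not from the citation.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy two-class data that is NOT linearly separable in the input space.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The RBF kernel implicitly maps points to a higher-dimensional space;
# C controls the trade-off between margin width and training error.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))

Lowering C widens the margin at the cost of more training errors; this is exactly the trade-off in the quadratic program above.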
| p*********g Posts: 226 | 2 Essentially, Vapnik laid the foundations first; then
Corinna Cortes extended the approach to kernels (Wahba's work really can't count).
Bennett and Mangasarian proposed the soft-margin loss in 1992 (though not in the SVM setting).
Earlier still, RKHSs were invented by Aronszajn and coworkers in the 1950s.
Only after kernel + soft-margin came together did the roughly 10-15 years of glory
arrive, though these have now given way to sparse/low-rank methods.