As mentioned in my previous post, PAC theorems are general results about the generalization ability of learning algorithms in a particular setting. “In a particular setting” usually involves assumptions on the hypothesis class H (equivalently, the concept class C), and nearly always involves the assumption that H contains an error-free classifier – the underlying decision boundary/formula/algorithm.

In the last post I mentioned a brief lemma used in generating PAC-Bayesian theorems, including one which bounds the average generalization error of a subset of hypotheses. These bounds, by the way, don’t offer much improvement over Vapnik’s uniform convergence theorem: for a hypothesis class with VC-dimension $d$, with probability at least $1-\delta$, every $h \in H$ satisfies (up to constants)

$$\mathrm{err}(h) \le \widehat{\mathrm{err}}(h) + O\!\left(\sqrt{\frac{d\ln(m/d) + \ln(1/\delta)}{m}}\right).$$
However, if your VC-dimension is very large, the PAC-Bayesian bounds give an alternate view of why learning is feasible: if a large component (measured by your prior) of your hypothesis space gives low classification error, then you’re likely to generalize well (as opposed to, say, a small collection of hypotheses overfit to your data – this is the $1/P(U)$ term).
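Modulo constants and the exact tail term, which vary by author, the subset-average bound has the schematic shape:

```latex
\Pr_{S \sim D^m}\left[\ \mathrm{err}(U) \le \widehat{\mathrm{err}}_S(U)
  + \sqrt{\frac{\ln\frac{1}{P(U)} + \ln\frac{1}{\delta}}{2m}}\ \right] \ge 1-\delta
```

where $\mathrm{err}(U)$ and $\widehat{\mathrm{err}}_S(U)$ denote the $P$-average true and empirical errors over the subset $U$; the $\ln\frac{1}{P(U)}$ term is the $1/P(U)$ penalty mentioned above.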

Luckiness gives a related but slightly different view on generalizability. Whereas PAC theorems (and in particular Vapnik’s convergence theorem) provide uniform bounds over an entire hypothesis class, and PAC-Bayesian theorems provide an average bound over a prior-influenced subclass of hypotheses, luckiness provides nonuniform bounds over a hierarchy of the hypothesis class, and in particular provides better bounds for hypotheses where the data (distribution) is serendipitously aligned with the class.

**Motivating Example: Kernel-SVM with a Gaussian RBF**

I recall hearing (or reading) an exchange about Gaussian-kernel SVMs which teased apart two views of the subject. In the first view, because radial-basis SVMs can shatter any finite set of distinct points, they have infinite VC-dimension. Obviously, because of this, the uniform convergence theorem above (Vapnik’s) doesn’t apply. But they really do work well, so why should that be?
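The first view – that a narrow-bandwidth RBF kernel can realize any labeling of a finite set of distinct points – can be sketched in a few lines of NumPy (my own illustration; the choice of `gamma` and the interpolation-by-solve shortcut are conveniences of this sketch, not anyone’s canonical code):

```python
import numpy as np

# With a sufficiently narrow Gaussian RBF, the kernel matrix of any finite
# set of distinct points is nearly the identity, so a kernel expansion can
# reproduce *any* labeling -- i.e. the class shatters the set.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))                     # 8 distinct points in the plane
y = np.array([1, -1, 1, 1, -1, -1, 1, -1])      # an arbitrary labeling

gamma = 1000.0                                  # narrow bandwidth
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)                         # kernel matrix, ~ identity

alpha = np.linalg.solve(K, y)                   # interpolate the labels exactly
pred = np.sign(K @ alpha)
print((pred == y).all())                        # the arbitrary labeling is realized
```

Since this works for every labeling of every finite sample, no finite set fails to be shattered, which is exactly the infinite-VC-dimension claim.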

Enter the second view. SVMs in general work really well when there is a large margin, regardless of the VC dimension of the underlying hypothesis class. This view is the one taken by Vapnik for his generalization bounds for SVMs (both those involving the number of support vectors, and those contrasting the margin with the span of the data).

The second view is striking. The whole hypothesis class of RBF-kernel SVMs does not have an error bound, because of its infinite VC-dimension. However, in the event that, *serendipitously*, the hypothesis class contains a subclass which, on the training data, has a small number of support vectors, or a large margin, we can say quite a bit about that subclass.

This approach, exploiting a serendipitous congruence between the data and the hypothesis class, falls into the framework of Luckiness functions. From this point of view, model and variable selection can be thought of as methods which attempt to maximize the degree of luckiness of the learning process.

**What is Luck?**

> For many, luck is defined by unpredictability more than anything else. In Norse culture, the case is quite the opposite in that luck had nothing to do with what we would refer to as coincidence or chance. On the contrary, luck was a quality inherent in the man and his lineage, a part of his personality similar to his strength, intelligence, or skill with weapons, at once both the cause and the expression of the success, wealth, and power of a family. Luck expressed itself partially in skills, beauty, and other desirable characteristics, but also in events shaping themselves according to the wishes of the lucky man. One might have luck in specified areas but not in others, such as fishingluck or weatherluck for example. But the so-called “man of luck” was the man who possessed luck generally, not just in one specific area. People possessed luck in different measure and one was helpless against an opponent who had greater luck. Kings especially were great men of luck to the degree that they were able to send forth their luck to assist others. Luck was not a thing to be sought or found by coincidence; one had the luck that one was accorded by fate. Yet, in certain cases luck could be diminished or lost.

– Sommer, *The Norse Concept of Luck*

This particular view of luck (which brings back fond memories of reading Egil’s Saga) is actually pretty close to what is meant in machine learning. Just change “man” to “hypothesis class”, and “Kings” to “analysts” (or data scientists, or whatever you want to call yourself; it’s good to be the King). Some classes are “lucky” for certain kinds of distributions on the input space, while others can be “unlucky” on the same kinds of distributions.

Specifically, a luckiness function is a mapping

$$L : X^m \times H \to \mathbb{R}^{+};$$

that’s all it is. Similarly you might define an unluckiness function in exactly the same way. Given a labeled sample $\mathbf{x} \in X^m$ with labels in $\{-1,1\}^m$, the hypothesis class can be ordered according to luckiness or unluckiness: a level $\ell_L(\mathbf{x}, h)$ is associated with each hypothesis (and parametrized by $L$):

$$\ell_L(\mathbf{x}, h) = \big|\{\, (h'(x_1), \ldots, h'(x_m)) : h' \in H,\ L(\mathbf{x}, h') \ge L(\mathbf{x}, h) \,\}\big|$$

that is, it’s the number of alternate classifications of $\mathbf{x}$ for which there’s some hypothesis which, given $\mathbf{x}$, has at least as much luck as $h$ does on the input classification. For classes equipped with a luckiness function, if the level is low, the degree of serendipity is high, and one might expect better-than-average performance. Put another way: let $H$ be the concept class of binary decision trees, and let the unluckiness $U(\mathbf{x}, h)$ be the number of leaves in $h$. Let’s say you’ve trained a tree $h$ with few leaves. What then is the probability that, given some alternate classification of $\mathbf{x}$ chosen uniformly at random, I’d have generated such a small tree? It’s exactly $\ell_U(\mathbf{x}, h)/2^m$. Thus the level of a hypothesis (with respect to a luckiness function $L$) can be thought of as a kind of probability over a uniform distribution of classifications of $\mathbf{x}$. (Same with unluckiness.) Those with a statistics background might recognize this as the probability resulting from a permutation test.
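A brute-force sketch of the level computation, on a toy class of 1-D threshold classifiers with margin as the luckiness function (my own example; the class, sample, and `luck` definition are all invented for illustration):

```python
# Hypotheses are 1-D thresholds h_t(x) = sign(x - t); luckiness is the
# margin, i.e. the distance from the threshold t to the nearest sample point.
xs = [0.0, 1.0, 2.0, 5.0]                   # the (unlabeled) sample
thresholds = [0.5, 1.5, 3.5, 6.0]           # a small, finite hypothesis "class"

def classify(t, x):
    return 1 if x > t else -1

def luck(t):
    # margin of threshold t on the sample xs
    return min(abs(x - t) for x in xs)

def level(t):
    # number of labelings of xs realized by hypotheses at least as lucky as t
    labelings = {tuple(classify(s, x) for x in xs)
                 for s in thresholds if luck(s) >= luck(t)}
    return len(labelings)

for t in thresholds:
    print(t, luck(t), level(t))
```

The threshold 3.5, sitting in the widest gap, is the luckiest and has level 1, i.e. probability $1/2^4$ under uniform relabeling; the narrow-margin thresholds 0.5 and 1.5 have level 4.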

**PAC-Bayes as Luckiness**

Notice that the function $L$ is not constrained to use information about the labels (or even about $\mathbf{x}$, for that matter). In that sense, the probability prior, as a mapping from $H$ to $[0,1]$, can be viewed as a luckiness function. This interpretation reveals something a little bit amiss with the level of the hypothesis class: a bad prior means nothing is lucky, which could implicate bad performance, when in fact the performance can be good in spite of the prior. Basically the complaint is that with such freedom in the choice of $L$, there needs to be some kind of constraint in the analysis of $L$ to really define what luckiness entails.

Enter probable smoothness. I’m not sure of the reason behind the nomenclature; I’d prefer something like “lucky *with respect to*,” but we’ll stick with the canonical term “probably smooth”. If a luckiness function is probably smooth, it means that we can say something about how lucky we expect to be after doubling the number of data points. In particular, we can, with high confidence, bound the growth rate of the hypothesis level and, by extension, the generalization error.

Probable smoothness is defined with respect to two functions, $\eta$ and $\phi$. The latter, $\phi$, is the bound on the growth of the hypothesis level; while the former, $\eta$, is a kind of fudge factor (i.e. we can’t necessarily bound the hypothesis level of the *whole* double-sized sample, but we can bound the hypothesis level of all subsets of a certain [large] size). Slightly more mathematically: given a hypothesis with luckiness $L(\mathbf{x}, h)$ on the first half of a sample (i.e. the first $m$ points of a $2m$-sample $\mathbf{x}\mathbf{x}'$), with high confidence ($1-\delta$), for most (all but a fraction $\eta$) of the points in the whole sample, the level of the hypothesis can’t be more than $\phi(m, L(\mathbf{x}, h), \delta)$. Completely symbolically:

$$P^{2m}\big\{\, \mathbf{x}\mathbf{x}' : \exists h \in H\ \ \forall \hat{\mathbf{z}} \subseteq_\eta \mathbf{x}\mathbf{x}',\ \ \ell(\hat{\mathbf{z}}, h) > \phi(m, L(\mathbf{x}, h), \delta) \,\big\} \le \delta,$$

where the notation $\hat{\mathbf{z}} \subseteq_\eta \mathbf{x}\mathbf{x}'$ means that $\hat{\mathbf{z}}$ is a subvector of $\mathbf{x}\mathbf{x}'$ missing at most a fraction $\eta$ of its coordinates.

So the solution to the view of PAC-Bayes priors as a kind of luckiness function is to require that not only must you find a mapping from the instance space and hypothesis space to $\mathbb{R}^{+}$, but that it be probably smooth on the instance and hypothesis space. It’s probable smoothness, rather than luckiness, that encodes serendipitous congruence between data and hypothesis; and so, by observing that a Bayesian prior on the hypothesis space is a kind of luckiness function, asking that it be probably smooth is effectively asking that it encode (by magical intuition of the prior’s author) something about the situations in which the hypothesis class performs well.

The standard example is one of a decomposable class hierarchy

$$H_1 \subseteq H_2 \subseteq \cdots \subseteq H_i \subseteq \cdots,$$

where the VC-dimension of $H_i$ is $i$. One could, for instance, construct a prior so that $P(H_i) = 2^{-i}$. In such a case, a hypothesis whose smallest containing class is $H_i$ pays an extra $\ln\frac{1}{P(H_i)} = i \ln 2$ in its bound, schematically (up to constants)

$$\mathrm{err}(h) \le \widehat{\mathrm{err}}(h) + O\!\left(\sqrt{\frac{i\ln(m/i) + \ln\frac{1}{P(H_i)} + \ln\frac{1}{\delta}}{m}}\right) \quad \text{for } h \in H_i.$$

Similarly, we can define an unluckiness function

$$U(\mathbf{x}, h) = \min\{\, i : h \in H_i \,\}.$$

This unluckiness function is probably smooth with respect to $\eta = 0$ and, by Sauer’s lemma, $\phi(m, U, \delta) = \left(\frac{2em}{U}\right)^{U}$. This gives a similar bound, schematically (up to constants)

$$\mathrm{err}(h) \le \widehat{\mathrm{err}}(h) + O\!\left(\sqrt{\frac{U(\mathbf{x}, h)\ln\frac{2em}{U(\mathbf{x}, h)} + \ln\frac{1}{\delta}}{m}}\right).$$

Thus the structure of the hypothesis class is captured by the unluckiness function $U$ and the function $\phi$ with respect to which it is probably smooth.

**Some idle speculation**

As mentioned in my previous post, an increasingly hot topic in machine learning is the use of data geometry to improve classifier and regression performance. The way in which manifold methods are used is effectively a transformation of the instance space (typically selecting the least significant eigenvectors of the graph Laplacian). Thus this kind of structure can’t be directly incorporated into a luckiness framework (but can be viewed, along with model/variable selection, as a method for trying to maximize luckiness). Nevertheless, if the transformation could be incorporated into the framework, one could put bounds on the generalization error given, say, the curvature of the underlying manifold and the number of labeled *and unlabeled* points available.

Edit: also here’s a lecture video I found
