(Update 6/2013 – I’ve edited and extended this old post from 10/2012. I had begun writing a new related post, and decided the material was better placed within this one as an extension.)

Two recent observations set me down a dark and lonely road, and they are unsurprisingly related. They both have to do with sampling, and with the kinds of confounding biases that result from uneven sampling of a dichotomous trait (or even of a quantitative one). The first is the observation that genetic studies (typically) have differential power to find protective and risk variants, an observation which may explain the lack of druggable targets, and perhaps even the ghost of “missing heritability” (although I really doubt that this alone could explain the latter). The second is the observation that correcting for stratification wreaks havoc on rare variant tests.

I think neither of these things is surprising on its own, but I feel it’s worth fleshing out the underlying root of both problems. It all boils down to study design, and more specifically to sampling: covariation between a dichotomous trait and sample group, or inflation of a group’s representation above its natural proportion (that is, inflating or deflating group proportions).

**Digression: Deriving the Score Statistic**

So it turns out I’m really bad at remembering formulae. I blame Wikipedia. And while I wouldn’t pull out limits of Riemann sums to derive integrals of the form

$$\int_a^b f(x)\,dx = \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i^*)\,\Delta x,$$

understanding the Score (and also: Wald and LRT) test is important enough that it’s worth knowing the derivation. This is taught in every introductory graduate statistics course, but is present in the literature in such a kaleidoscopic number of forms that it’s almost necessary to know the derivation in order not to be confused, for instance, when a paper writes “it can be shown[1] that the variance of this statistic is” (some formula that doesn’t look quite right). The derivation is a rather fun exercise. The key thing to remember is that “Score” is a terrible name for this test. I think the econometricians got it right: it’s a Lagrange Multiplier Test.

So you have some parametric distribution (typically a GLMM, but it will work for any parametrized distribution) that has a log-likelihood function $\ell(\theta; X)$, which is a function of your parameter vector $\theta \in \mathbb{R}^p$ and your sample $X$. “Fitting” your data is the process:

$$\hat{\theta} = \operatorname*{argmax}_{\theta} \; \ell(\theta; X)$$

which, by definition, and so long as $\ell$ is smooth in $\theta$, is guaranteed to have the property

$$\nabla_\theta\, \ell(\theta; X)\,\big|_{\theta = \hat{\theta}} = 0$$

noting that this is the zero *vector*. From the standpoint of optimization, what is the hypothesis that $\theta_j = 0$? That is to say, if I asked for the maximum likelihood (and corresponding $\theta$) for which $\theta_j = 0$, what would we do? We’d try to restrict the domain of $\theta$. We’d just do a constrained optimization! That is:

$$\tilde{\theta} = \operatorname*{argmax}_{\theta \,:\, \theta_j = 0} \; \ell(\theta; X)$$

Indeed we can expand to arbitrary constraints (such as: $\theta$ lies in the unit ball) quite easily, by taking an arbitrary vector-valued constraint function $g : \mathbb{R}^p \to \mathbb{R}^q$ and requiring

$$g(\theta) = 0$$

You might imagine breaking $\theta$ into constrained components and unconstrained components, but this turns out not to be too helpful. We need to make use of the standard asymptotic result for maximum likelihood: namely that

$$\sqrt{n}\left(\hat{\theta} - \theta^*\right) \;\xrightarrow{d}\; N\!\left(0,\; I(\theta^*)^{-1}\right)$$

where $I(\theta)$ is the Fisher information matrix.

Which means that the fitted (unconstrained) parameters vary jointly, and thus the variance of any subset of them will necessarily involve unconstrained components. We constrain the likelihood by a function $g$, and thus our Lagrangian is

$$\Lambda(\theta, \lambda) = \ell(\theta; X) - \lambda^T g(\theta)$$

which gives first-order conditions

$$\nabla_\theta\, \ell(\tilde{\theta}) = \left[\nabla_\theta\, g(\tilde{\theta})\right]^T \lambda, \qquad g(\tilde{\theta}) = 0$$

where $\tilde{\theta}$ is the maximum of the constrained optimization. By contrast, we’ll call $\hat{\theta}$ the unconstrained maximum, and $\theta^*$ the true value of $\theta$.

A very typical form for $g$ is $g(\theta) = (\theta_{k+1}, \dots, \theta_p)^T$, which constrains this set of parameters to 0. In this case the first-order conditions have a particularly nice form: $\partial \ell / \partial \theta_j = \lambda_j$ for $j > k$; and in particular if $\lambda$ is a scalar, this gives the equation $\partial \ell / \partial \theta_p = \lambda$, which gives rise to the standard nomenclature: the partial derivative of the likelihood with respect to a given parameter is the “score” for that parameter.
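To make that nomenclature concrete, here is a quick numeric check (a sketch of my own, not from the original post; all names and numbers are illustrative): for logistic regression with a fixed intercept, the analytic score for $\beta$ at $\beta = 0$, namely $\sum_i g_i(y_i - \mu)$, agrees with a finite-difference derivative of the log-likelihood.

```python
import math
import random

# Illustrative check: the "score" for a parameter is just the partial
# derivative of the log-likelihood. For logistic regression with fixed
# intercept b0, d(loglik)/d(beta) at beta = 0 is sum_i g_i*(y_i - mu),
# where mu = 1/(1 + exp(-b0)).
random.seed(1)
b0 = -1.0
mu = 1.0 / (1.0 + math.exp(-b0))
g = [random.randint(0, 2) for _ in range(500)]          # genotypes 0/1/2
y = [1 if random.random() < mu else 0 for _ in g]       # outcomes under the null

def loglik(beta):
    """Log-likelihood of the logistic model at (b0, beta)."""
    ll = 0.0
    for gi, yi in zip(g, y):
        p = 1.0 / (1.0 + math.exp(-(b0 + beta * gi)))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

analytic = sum(gi * (yi - mu) for gi, yi in zip(g, y))  # the score at beta = 0
eps = 1e-6
numeric = (loglik(eps) - loglik(-eps)) / (2 * eps)      # central difference
print(abs(analytic - numeric) < 1e-3)
```

The two derivatives agree to well within the finite-difference error.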

At any rate, we want to get around to testing that score, which means somehow we need a distribution for $\nabla_\theta\, \ell(\tilde{\theta})$. So we’ll Taylor expand around $\hat{\theta}$:

$$\nabla_\theta\, \ell(\tilde{\theta}) \;\approx\; \nabla_\theta\, \ell(\hat{\theta}) + \nabla_\theta^2\, \ell(\hat{\theta})\left(\tilde{\theta} - \hat{\theta}\right) \;=\; \nabla_\theta^2\, \ell(\hat{\theta})\left(\tilde{\theta} - \hat{\theta}\right)$$

Now while we don’t know much about $\tilde{\theta} - \hat{\theta}$, we do know the distribution of $\hat{\theta} - \theta^*$, so we examine another expansion:

$$\nabla_\theta\, \ell(\tilde{\theta}) \;\approx\; \nabla_\theta\, \ell(\theta^*) + \nabla_\theta^2\, \ell(\theta^*)\left(\tilde{\theta} - \theta^*\right) \;\approx\; \nabla_\theta\, \ell(\theta^*) - n I(\theta^*)\left(\tilde{\theta} - \theta^*\right)$$

So if $g$ is such that $g(\theta) = \theta_C$ for some subset $C$ of the coordinates, then we’re effectively done, because under the null hypothesis the constraints do not bind: the score of the binding components, $\nabla_{\theta_C}\, \ell(\tilde{\theta})$, will be asymptotically normal with mean zero, while the score of the non-binding components, $\nabla_{\theta_U}\, \ell(\tilde{\theta})$, will be exactly $0$, since we maximized over them. The one last step is to derive the *conditional* distribution of the constrained scores *given* the unconstrained ones, so let’s write $\theta$ as $(\theta_U, \theta_C)$, where now $\theta_C$ are constrained to 0, and $\theta_U$ are not. Then, having fit $\tilde{\theta}_U$:

$$\nabla_{\theta_C}\, \ell(\tilde{\theta}) \;\sim\; N\!\left(0,\; n\left[I_{CC} - I_{CU}\, I_{UU}^{-1}\, I_{UC}\right]\right)$$

Evaluated at $(\tilde{\theta}_U, 0)$. However if $g$ is not of such a nice form, we cannot directly appeal to the null distribution of $\nabla_\theta\, \ell(\tilde{\theta})$ as we did here, because we relied on the fact that $\nabla_{\theta_U}\, \ell(\tilde{\theta}) = 0$. In the general case we need to know these values, so we play the Taylor expansion trick again:

$$0 = g(\tilde{\theta}) \;\approx\; g(\theta^*) + \nabla_\theta\, g(\theta^*)\left(\tilde{\theta} - \theta^*\right)$$

Now $g(\tilde{\theta}) = 0$ by construction, and $g(\theta^*) = 0$ under the null, as that is our constraint. This yields a relation for the Jacobian:

$$\nabla_\theta\, g(\theta^*)\left(\tilde{\theta} - \theta^*\right) \;\approx\; 0$$

And from above we have a relation for the likelihood derivatives (because by definition $\nabla_\theta\, \ell(\tilde{\theta}) = \left[\nabla_\theta\, g(\tilde{\theta})\right]^T \lambda$):

$$\tilde{\theta} - \theta^* \;\approx\; \left[n I(\theta^*)\right]^{-1}\left(\nabla_\theta\, \ell(\theta^*) - \left[\nabla_\theta\, g(\theta^*)\right]^T \lambda\right)$$

We can combine these two equations, yielding

$$\nabla_\theta\, g(\theta^*)\left[n I(\theta^*)\right]^{-1}\left(\nabla_\theta\, \ell(\theta^*) - \left[\nabla_\theta\, g(\theta^*)\right]^T \lambda\right) \;\approx\; 0$$

But we care about $\lambda$. Remember that we have $g(\theta^*) = 0$, and write $G = \nabla_\theta\, g(\theta^*)$ for short. Then:

$$G\left[n I(\theta^*)\right]^{-1} G^T \lambda \;\approx\; G\left[n I(\theta^*)\right]^{-1} \nabla_\theta\, \ell(\theta^*)$$

which, because $G\left[n I\right]^{-1} G^T$ is a symmetric inner product which will be invertible (provided the constraints are not redundant), gives

$$\lambda \;\approx\; \left(G\left[n I\right]^{-1} G^T\right)^{-1} G\left[n I\right]^{-1} \nabla_\theta\, \ell(\theta^*)$$

And we can use the Taylor expansion way above for $\nabla_\theta\, \ell(\theta^*)$, combined with the fact that we know how $\hat{\theta} - \theta^*$ is distributed, to calculate that (under $H_0$, where $g(\theta^*) = 0$) $\nabla_\theta\, \ell(\theta^*) \sim N\!\left(0,\, n I(\theta^*)\right)$, and so:

$$\lambda \;\sim\; N\!\left(0,\; \left(G\left[n I\right]^{-1} G^T\right)^{-1}\right)$$

and therefore

$$\nabla_\theta\, \ell(\tilde{\theta})^T \left[n I(\tilde{\theta})\right]^{-1} \nabla_\theta\, \ell(\tilde{\theta}) \;=\; \lambda^T G\left[n I\right]^{-1} G^T \lambda \;\sim\; \chi^2_q$$

And that’s the Lagrange Multiplier (alt. Rao’s Score) test.
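As a sanity check on the result, here is a minimal sketch of my own (assuming the standard efficient-score form for logistic regression with a nuisance intercept, which the next section derives) showing that the statistic behaves like a $\chi^2_1$ under the null:

```python
import math
import random

# Sketch of Rao's score (Lagrange multiplier) test for H0: beta = 0 in
# logistic regression  logit P(y=1) = b0 + beta*g, with the intercept as a
# nuisance parameter. Under H0 the constrained MLE sets b0 = logit(ybar);
# the efficient score is U = sum_i g_i (y_i - ybar), with variance (after
# profiling out the intercept) V = ybar(1-ybar) * sum_i (g_i - gbar)^2,
# so that T = U^2 / V ~ chi^2_1 under the null.
def score_test(g, y):
    n = len(y)
    ybar = sum(y) / n
    gbar = sum(g) / n
    U = sum(gi * (yi - ybar) for gi, yi in zip(g, y))
    V = ybar * (1 - ybar) * sum((gi - gbar) ** 2 for gi in g)
    return U * U / V

# Under the null, T should average about 1 (the mean of a chi^2_1).
random.seed(2)
stats = []
for _ in range(2000):
    g = [random.randint(0, 2) for _ in range(300)]
    y = [1 if random.random() < 0.15 else 0 for _ in range(300)]
    stats.append(score_test(g, y))
print(sum(stats) / len(stats))
```

The simulated mean lands near 1, as the asymptotic $\chi^2_1$ distribution predicts.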

[1] Obscure, archaic textbook from 1972

**Part 1: Oversampling**

As a bit of motivation, and because the mathematics will be largely the same, we review the impact of oversampling on a linear model. For binary outcomes, this amounts to inflating or deflating the rate of outcome in the experiment as compared to the natural rate; for example, sampling 50% cases and 50% controls for a disease with only a 15% prevalence in the underlying population. The basic idea behind oversampling is to note that the tables

are substantively different (in terms of, say, Fisher’s Exact Test), despite the fact that each shows a large enrichment for cases. A philosophical question that’d be worthwhile for the biological world to answer (but that we won’t get into here): if table 2 is the result of a replication experiment, is the signal in table 1 replicated, or not? While it may seem strange that a frequency of 8% in cases should, in an orthogonal experiment, come back at a frequency of 2.4% in cases, it only takes a tiny bit of structure.

Anyway, it’s somewhat useful to quantify the degree to which oversampling can reduce power in this way. Following the standard latent-variable setup, we let $y_i = \mathbf{1}\left[\beta_0 + \beta g_i + \epsilon_i > 0\right]$, where $\epsilon_i$ has some cumulative distribution $F$. Let $\pi$ be the rate of sampling cases, so the number of cases is given by $N_{\mathrm{case}} = \pi N$. Also set the frequency of $g$ in cases and controls to $f_a$ and $f_u$ respectively. A score test for the effect of $g$ will be

$$U = \sum_{\mathrm{case}} g_i \left(1 - \hat{\mu}\right) + \sum_{\mathrm{control}} g_i \left(0 - \hat{\mu}\right) = \sum_i g_i \left(y_i - \hat{\mu}\right)$$

(Recall that the log-likelihood is just the sum of $\log \Pr(y_i \mid g_i)$), where the sums are over the case and control sets.

The variance of this statistic, as derived above, is

$$V = I_{\beta\beta} - I_{\beta\beta_0}\, I_{\beta_0\beta_0}^{-1}\, I_{\beta_0\beta}$$

and relies on

$$I_{\beta\beta} = \sum_i g_i^2\, v_i, \qquad I_{\beta\beta_0} = I_{\beta_0\beta} = \sum_i g_i\, v_i, \qquad I_{\beta_0\beta_0} = \sum_i v_i$$

where $v_i = \operatorname{Var}(y_i \mid g_i)$.

All of these are evaluated under the null hypothesis $\beta = 0$. For every (symmetric) distribution $F$ of $\epsilon$, we have that $\Pr(y_i = 1 \mid g_i) = F(\beta_0 + \beta g_i)$, by definition. The values of $v_i$ vary with the particulars. For logistic regression,

$$F(x) = \frac{1}{1 + e^{-x}}$$

(note: $F'(x) = F(x)\left(1 - F(x)\right)$, so $v_i = \hat{\mu}\left(1 - \hat{\mu}\right)$ under the null). Finally $\hat{\mu} = \bar{y} = \pi$. Therefore:

$$V = \pi(1-\pi)\left[\sum_i g_i^2 - \frac{1}{N}\Big(\sum_i g_i\Big)^2\right]$$

So then the score test will be

$$T = \frac{U}{\sqrt{V}} = \frac{\sum_i g_i\left(y_i - \pi\right)}{\sqrt{\pi(1-\pi)\left[\sum_i g_i^2 - \frac{1}{N}\left(\sum_i g_i\right)^2\right]}}$$

Sums can be simplified using $\sum_{\mathrm{case}} g_i = 2N\pi f_a$ and $\sum_{\mathrm{control}} g_i = 2N(1-\pi) f_u$:

$$U = (1-\pi) \cdot 2N\pi f_a - \pi \cdot 2N(1-\pi) f_u = 2N\pi(1-\pi)\left(f_a - f_u\right)$$

and $\sum_i g_i = 2N\left[\pi f_a + (1-\pi) f_u\right]$. Note, however, that this is just to get a sense of $T$ at the expected allele counts in cases and controls. This does not speak to the *power* of the test, nor even to the *mean value* of $T$ (as we’ve evaluated $T(\mathbb{E}[g])$, which is different from $\mathbb{E}[T(g)]$, due to the nonlinearity of $T$).
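To see where the expected-count statistic peaks as a function of the sampling fraction, here is a small sketch of my own (illustrative frequencies of my choosing, with $\sum_i (g_i - \bar g)^2$ approximated by its Hardy–Weinberg expectation $2N\bar f(1-\bar f)$, an approximation that nudges the peak slightly off $\pi = 1/2$):

```python
import math

# Evaluate the score statistic at the *expected* allele counts as a
# function of the case-sampling fraction pi. f_case and f_ctrl are
# illustrative case/control allele frequencies; sum_i (g_i - gbar)^2 is
# approximated by its Hardy-Weinberg expectation N * 2 * fbar * (1 - fbar).
def t_expected(pi, f_case=0.012, f_ctrl=0.008, N=10000):
    fbar = pi * f_case + (1 - pi) * f_ctrl
    U = 2 * N * pi * (1 - pi) * (f_case - f_ctrl)   # sum g_i (y_i - pi) at expectation
    V = pi * (1 - pi) * N * 2 * fbar * (1 - fbar)   # pi(1-pi) * sum (g_i - gbar)^2
    return U / math.sqrt(V)

grid = [i / 100 for i in range(1, 100)]
best = max(grid, key=t_expected)
print(best)  # lands near (slightly below) balanced sampling
```

Under this approximation the maximizing fraction sits close to, though not exactly at, $\pi = 1/2$.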

This can be extended to weighted group tests (e.g. burden tests). Let the weighting vector for the variants in a gene be $w$, and let $g_i$ now be a vector of genotypes for sample $i$. Define $b_i = w^T g_i$, and the score test becomes

$$T = \frac{\sum_i b_i\left(y_i - \pi\right)}{\sqrt{\pi(1-\pi)\left[\sum_i b_i^2 - \frac{1}{N}\left(\sum_i b_i\right)^2\right]}}$$

In both cases, you find that the test statistic evaluated at the expected genotypes (given $f_a$ and $f_u$) is maximized for evenly matched case-control sampling, $\pi = 1/2$. But maximizing $T$ at the expected counts is not necessarily the same as maximizing power. One easy way to demonstrate that is to compare the behavior of $T(\mathbb{E}[g])$ to that of the power itself.

**Calculating Power**

Examining the formula for the test statistic $T$, we notice that only four quantities are needed: the allele count in cases, the allele count in controls, the total number of alleles, and the total number of homozygotes (for $\sum_i g_i^2$). This leads to the following obvious means of simulating a draw of genotypes from a population.

Given $f_a$ and $f_u$, the frequency of the variant in cases and controls (which can be calculated from the odds ratio and the overall frequency in the population), draw

$$A_a \sim \mathrm{Bin}\!\left(2N_a, f_a\right), \quad A_u \sim \mathrm{Bin}\!\left(2N_u, f_u\right), \quad H_a \sim \mathrm{Bin}\!\left(N_a, f_a^2\right), \quad H_u \sim \mathrm{Bin}\!\left(N_u, f_u^2\right)$$

and if by chance $2H > A$ in either group, set $A = 2H$. Then $T$ can be calculated. A draw of 10,000 realizations for a site of MAF=1% with OR=1 and 5000 samples is very well calibrated:

So the quantiles of the normal and uniform distributions line up very nicely. There is (as you would expect) some quantization near 0, due to the fact that allele counts are discrete. The power can be estimated from these simulations by, for a given nominal p-value $\alpha$, counting the number of simulations for which $|T| > \Phi^{-1}\!\left(1 - \alpha/2\right)$ (where $\Phi^{-1}$ is the inverse normal cdf).
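The simulation scheme above can be sketched in a few lines (a hedged illustration of my own: `case_ctrl_freqs` and `power` are hypothetical helper names, the intercept uses a rare-allele approximation, and the statistic uses the balanced-sampling, rare-variant simplification $T = (A_a - A_u)/\sqrt{A_a + A_u}$):

```python
import math
import random
from statistics import NormalDist

# Power-by-simulation sketch: a 1% allele in a balanced 5000/5000 study of
# an 8% prevalence disease, comparing a risk allele (OR = 2) against an
# equally strong protective allele (OR = 0.5). Case/control frequencies
# come from Bayes' rule under a logistic model.
def case_ctrl_freqs(f, log_or, K):
    sigma = lambda x: 1.0 / (1.0 + math.exp(-x))
    b0 = math.log(K / (1 - K))      # rare-allele approximation to the intercept
    p1 = sigma(b0 + log_or)         # P(case | allele copy)
    return f * p1 / K, f * (1 - p1) / (1 - K)

def power(log_or, f=0.01, K=0.08, n=5000, alpha=1e-6, sims=300):
    rng = random.Random(3)
    fa, fu = case_ctrl_freqs(f, log_or, K)
    thresh = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(sims):
        a_case = sum(rng.random() < fa for _ in range(2 * n))  # case allele count
        a_ctrl = sum(rng.random() < fu for _ in range(2 * n))  # control allele count
        if a_case + a_ctrl == 0:
            continue  # zero-inflation: nothing observed, nothing detected
        t = (a_case - a_ctrl) / math.sqrt(a_case + a_ctrl)
        hits += abs(t) > thresh
    return hits / sims

p_risk = power(math.log(2.0))
p_prot = power(math.log(0.5))
print(p_risk, p_prot)
```

Even with identical effect sizes on the log-odds scale, the risk allele comes out with noticeably higher power than the protective one.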

One notices that while $T$ is nicely symmetric, the power seems to be asymmetrically distributed.

The figures here take a 10,000-sample balanced (5000 case, 5000 control) study, where the prevalence of the disease is 8%. Notice that the distribution of $T$ (left plot) is perfectly symmetric, whereas the *power* is biased towards *risk-increasing* alleles (middle plot; right plot, logscale). So what’s happening to break the symmetry? Consider what case/control frequencies correspond to an OR=1.5, f=0.05 allele. Using Bayes’ rule:

$$f_a = \Pr(g \mid \mathrm{case}) = \frac{\Pr(\mathrm{case} \mid g)\Pr(g)}{\Pr(\mathrm{case})} = \frac{f\,\sigma(\beta_0 + \beta)}{K}, \qquad f_u = \frac{f\left(1 - \sigma(\beta_0 + \beta)\right)}{1 - K}$$

where $K$ is the prevalence, $\beta$ is the log-OR, and $\sigma$ is just the logistic function. What this means is that for a 1% frequency variant

The case allele frequency (red) increases drastically when the allele confers risk (shaded circles), but decreases drastically when the allele is protective (triangles). At the same time, the control allele frequency (blue) does not significantly change in either case. One potential explanation for the difference in power is that the variance of observed allele counts is *higher* when the allele is risky: the binomial variance $2Nf(1-f)$ is increasing for $f < 1/2$, and it could be that this variance of observation dominates any gain from oversampling cases. There’s actually quite a lot in the figure. For diseases even of moderate prevalence (8%, 10%), the control frequency does not change all that much with the odds ratio, either for protective or risk alleles, because the disproportionate number of controls in the population serves as a buffer. So effectively, only the case frequency is changing (and this is particularly true of rare diseases with a prevalence of 1% or less!). Risk alleles increase the case frequency, and the case allele count can increase with it. By contrast, protective alleles lower the case frequency, but the case count can’t decrease with the frequency in the same way, because of (solidifying yet another parallel between genetics and economics) the *zero lower bound*. You can’t get negative case counts, so in 2500 case samples, the difference between a 0.1% allele and a 0.05% allele is masked by the fact that the realized allele count distribution becomes zero-inflated. What this means is that while a risk allele can increase a 10:5 to 30:5 to 90:5 without any limit, a protective allele can decrease a 5:10 to 0:10 **and no lower**.
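The Bayes-rule calculation above is easy to reproduce (a sketch of my own; `freqs` is a hypothetical helper, and the intercept uses the rare-allele approximation $\beta_0 = \operatorname{logit}(K)$, so the numbers are illustrative rather than the post's exact figure):

```python
import math

# Case/control frequencies for an allele of population frequency f,
# odds ratio or_, in a disease of prevalence K:
#   f_case = f * sigma(b0 + log OR) / K
#   f_ctrl = f * (1 - sigma(b0 + log OR)) / (1 - K)
def freqs(f, or_, K):
    sigma = lambda x: 1.0 / (1.0 + math.exp(-x))
    b0 = math.log(K / (1 - K))      # rare-allele approximation to the intercept
    p1 = sigma(b0 + math.log(or_))  # P(case | allele copy)
    return f * p1 / K, f * (1 - p1) / (1 - K)

f, K = 0.01, 0.08
for or_ in (0.25, 0.5, 1.0, 2.0, 4.0):
    fa, fu = freqs(f, or_, K)
    print(f"OR={or_:>4}: case={fa:.4f} ctrl={fu:.4f}")
```

Sweeping the odds ratio shows the asymmetry in the text: the case frequency swings over an order of magnitude while the control frequency barely moves off 1%.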

Nevertheless, oversampling does increase power, but it does so differentially on risk versus protective alleles; and it is always the risk alleles that come out the most empowered.

Here are the sampling-ratio/power profiles for 1% variants with a risk/protective odds ratio of 2: on the left, the disease prevalence is 8%, while it is set to 30% on the right. As the disease becomes more prevalent, the optimal ratio seems to shift away from balanced samples, and in the latter case it is possible to oversample cases to the point where you are better powered to detect protective alleles.

**Coming up: Part 2 – Frequency Confounding, Part 3 – Population Stratification (PCA-confounds)**

