heavytailed

June 10, 2013

On meta-analysis: Rescuing the power of multiethnic studies


Last year, I wrote about the effect that oversampling (in a case-control study) has on the power of a study (using the Score test). I recently updated that post with a bit more information on the difference between evaluating the test statistic under various choices of the input space and calculating power. In this post we move from the analysis of a single study to the analysis of multiple studies: meta-analysis. This has particular application to multiethnic genetic studies, as one typical approach to testing for genetic association is to test for association within each population and then meta-analyze variants across the populations in which they appear. By virtue of genetic drift, selection, and isolation-by-distance, many variants will be unique to a single population, and many will show large frequency differences between populations. We will find that this structure significantly impacts the power of standard meta-analysis approaches, identify the cause, show why alternate methods work, and propose a novel method that performs slightly better in this particular context.

Meta-Analysis

The need for meta-analysis typically arises when several studies have examined the same phenomenon and (more importantly) asked the same question of data sampled in potentially slightly different ways. “The same” here means, statistically, having tested the same null hypothesis against the same alternate hypothesis.[1] While p-values can be interpreted as probabilities, they are in fact random variables, and one can get into extreme danger by treating them as fixed probabilities: it is almost never the case that p_{meta} = 1-(1-p_1)(1-p_2)\dots(1-p_n). Instead, one can treat \vec p, the p-values for the studies, as a draw from a multivariate distribution. Along the same lines, one might treat the test statistics \vec T and their variances \vec V as draws from multivariate distributions. One can then ask for the probability of these draws under a suitably defined null distribution (in particular, p_i \sim U(0,1) and T_i \sim N(0,V_i)).
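To see the danger concretely, here is a minimal sketch in R (the naive combination rule is exactly the one dismissed above):

## Under the null each study's p-value is U(0,1), but the "intuitive"
## combination 1-(1-p_1)...(1-p_n) is not itself a valid p-value.
set.seed(1)
p.naive <- replicate(1e5, {
  p <- runif(4)        # four studies' null p-values: independent U(0,1)
  1 - prod(1 - p)      # the naive combination
})
mean(p.naive < 0.05)   # essentially 0, far from the nominal 0.05

A valid meta p-value would itself be uniform under the null; this one is not even close.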

Methods for Meta-Analysis in Genetics

Let’s say you have N studies, producing p-values p_i and test statistics T_i with variances V_i. Within each study, the variant of interest has frequency f_i, which can change between study populations. Then (a code sketch of these methods follows the list):

Fisher’s Method:

X \leftarrow -2\sum\log(p_i)

X \sim \chi^2(2N)

Sum of Uniform:

U \leftarrow \sum p_i

U \sim N(N/2,N/12)

Inverse Variance Weighting:

W_i \leftarrow 1/V_i

Z \leftarrow \frac{\sum W_iT_i}{\sum W_i}

Z \sim N(0,1/\sum W_i)

Sample Size Weighting:

W_i \leftarrow \sqrt{N_i}

Z_i \leftarrow T_i/\sqrt{V_i}

Z \leftarrow \frac{\sum W_i Z_i}{\sqrt{\sum W_i^2}}

Z \sim N(0,1)

Inverse Covariance:

X \leftarrow T^\dagger \mathrm{diag}(V)^{-1} T

X \sim \chi^2(N)

Random Effects:

Model the between-study mean and variance as (\mu,\tau), then estimate iteratively

\mu_{n+1} = \frac{\sum \frac{T_i}{V_i + \tau_{n}}}{\sum \frac{1}{V_i + \tau_{n}}}

\tau_{n+1} = \frac{\sum \frac{(T_i-\mu_{n+1})^2 - V_i}{(V_i + \tau_{n})^2}}{\sum\frac{1}{(V_i + \tau_{n})^2}}

And formulate a likelihood ratio X as

X \leftarrow \sum \log \frac{V_i}{V_i+\tau} + \sum \frac{T_i^2}{V_i} - \sum \frac{(T_i - \mu)^2}{V_i + \tau}

X \sim 0.5 \chi^2(1) + 0.5 \chi^2(2)
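For concreteness, here is a minimal R sketch of the statistics above. The function and argument names are mine (`stat` holds the T_i, `v` the V_i, `p` the two-sided p-values, `n` the sample sizes), and a real implementation would add convergence checks to the random-effects iteration.

## Minimal sketches of the meta-statistics listed above; each returns a p-value.
fisher.meta <- function(p) {
  X <- -2 * sum(log(p))
  pchisq(X, df = 2 * length(p), lower.tail = FALSE)
}

sum.uniform.meta <- function(p) {
  N <- length(p)                       # normal approximation to Irwin-Hall
  pnorm(sum(p), mean = N / 2, sd = sqrt(N / 12))  # small sums are significant
}

ivw.meta <- function(stat, v) {
  W <- 1 / v
  Z <- sum(W * stat) / sqrt(sum(W))    # standardized IVW Z-score
  2 * pnorm(abs(Z), lower.tail = FALSE)
}

sample.size.meta <- function(stat, v, n) {
  W <- sqrt(n)
  Z <- sum(W * stat / sqrt(v)) / sqrt(sum(W^2))
  2 * pnorm(abs(Z), lower.tail = FALSE)
}

inv.cov.meta <- function(stat, v) {
  X <- sum(stat^2 / v)                 # T' diag(V)^{-1} T
  pchisq(X, df = length(stat), lower.tail = FALSE)
}

random.effects.meta <- function(stat, v, n.iter = 100) {
  mu <- 0; tau <- 0
  for (i in seq_len(n.iter)) {         # the fixed-point iteration above
    mu  <- sum(stat / (v + tau)) / sum(1 / (v + tau))
    tau <- max(0, sum(((stat - mu)^2 - v) / (v + tau)^2) /
                  sum(1 / (v + tau)^2))               # truncated at zero
  }
  X <- sum(log(v / (v + tau))) + sum(stat^2 / v) - sum((stat - mu)^2 / (v + tau))
  0.5 * pchisq(X, 1, lower.tail = FALSE) + 0.5 * pchisq(X, 2, lower.tail = FALSE)
}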

Comment:

One needs to be careful when applying Fisher, Sum of Uniform, or Inverse Covariance, as these are effectively two-tailed meta-analyses. In particular, for Fisher and Sum of Uniform, if all the p-values were generated by two-tailed tests, then the Fisher and SU meta-statistics are also (akin to) two-tailed, i.e. there’s no check that the statistics from the individual studies are directionally consistent. The Inverse Covariance approach, being an inner product, also has this property: a large negative statistic will not “cancel” a large positive statistic. Nevertheless, even if you expect directional consistency, finding two large statistics of opposite signs does suggest something’s going on, even if it’s only an artifact.

Ideally the meta-statistics will be 1) Calibrated or conservative under the null, and 2) Sensitive to the alternate hypothesis. Not much more to it than that.

[1] Likelihood ratio tests are a common way to violate the second part of this condition: it’s easy to include different explanatory variables, which means the models generating the LR may differ between the two studies.

Evaluating the Power of Meta-analyses

Following the approach in the previous post, we can calculate score statistics within two studies (or two populations) where the variant of interest potentially has different frequencies. Each of these statistics can then be combined into a meta-statistic and meta-p-value, and this process can be repeated to find the power of the meta-analysis at a particular false-positive level (in this case, alpha is 5\times 10^{-6}). Doing this with Inverse Variance Weighting reveals a startling phenomenon:

\begin{array}{c|c|c|c} f_1 & f_2 & \mathrm{OR} & \mathrm{Power} \\ \hline 0.03 & 0 & 2.4 & 92.4\% \\ \hline 0.03 & 0.005 & 2.4 & 42.5\% \\ \hline 0.03 & 0.03 & 2.4 & 100\% \end{array}

That’s right. Seeing the variant more times (in another population) kills your power if the variant is of a lower frequency in the second population. What?!
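Here is roughly how such a number can be reproduced. This is my reconstruction, not the post’s actual code: it assumes balanced case/control sampling per population, additive genotype coding, the rare-disease approximation f_case = f*OR/(f*OR + 1 - f) for the case allele frequency, and the closed-form score statistic for a single-covariate logistic regression.

## Rough reconstruction of the two-population IVW power simulation.
score.stat <- function(g, y) {
  u <- sum(g * (y - mean(y)))                          # score at beta = 0
  v <- mean(y) * (1 - mean(y)) * sum((g - mean(g))^2)  # its variance
  c(u, v)
}

ivw.power <- function(f, or, n = 2000, reps = 2000, alpha = 5e-6) {
  mean(replicate(reps, {
    stat <- c(); w <- c()
    for (fk in f) {
      f.case <- fk * or / (fk * or + 1 - fk)   # rare-disease approximation
      g <- c(rbinom(n / 2, 2, f.case),         # case genotypes
             rbinom(n / 2, 2, fk))             # control genotypes
      y <- rep(1:0, each = n / 2)
      s <- score.stat(g, y)
      if (s[2] > 0) {                          # drop monomorphic populations
        stat <- c(stat, s[1]); w <- c(w, 1 / s[2])
      }
    }
    Z <- sum(w * stat) / sqrt(sum(w))          # standardized IVW meta-score
    abs(Z) > qnorm(1 - alpha)
  }))
}

ivw.power(c(0.03, 0.005), or = 2.4)   # compare with the middle row above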

Okay, so what is going on here? Let’s take a careful look at the realized distributions (since we’re generating simulated genotypes in order to estimate power anyway, these are just sitting in system memory). First we plot the test statistics (under the alternate hypothesis) within each of the populations, low-frequency in red, high-frequency in blue. Adjacent, we plot in green the meta-statistic that results from 1) multiplying the statistics in the first plot by \sqrt{\mathrm{variance}} [this recovers the un-normalized test statistic], 2) weighting the results by 1/var, and 3) normalizing by the root of the sum of 1/var (in other words, the IVW meta-analysis statistic).

[Figures: per-population test statistics (TwoPop_Assoc_OR2.4) and the resulting IVW meta-statistic (TwoPop_Assoc_OR2.4_META)]

The vertical dotted line is the Z-value associated with our alpha level (basically \texttt{qnorm(1-5e-6)}). By averaging the two distributions on the left in this way, we will (obviously) get a distribution “between” the two: one that has more power than the red distribution, but less than the blue. Thus, weighting like this kills your power in multiethnic studies, or in any meta-analysis where the variant frequency differs between the studies you’re analyzing.

This motivates a search for potentially “better” meta-analyses, ones that might outperform inverse-variance weighting. Ideally, these methods should perform about as well as or better than min(\vec p), while remaining calibrated under the null hypothesis. Given the above, inverse-variance weighting is clearly going to perform worse than min(\vec p). We might first identify well-calibrated statistics by simulating under the null. Let there be N populations (or studies). For each one i, draw a V_i from a chi-squared distribution, and then draw a T_i from N(0,V_i). We consider the cases of 5 and 10 populations, under a high- and a low-variance setting.
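A sketch of that null simulation, reusing the combiner functions sketched earlier (the chi-squared degrees of freedom governing the variance spread are my guess at the high/low heterogeneity settings):

## Null calibration: V_i ~ chi-squared, T_i ~ N(0, V_i); a well-calibrated
## combiner should then emit p-values that are U(0,1).
null.calibration <- function(n.pop = 5, df.var = 2, n.sims = 1e4) {
  t(replicate(n.sims, {
    v <- rchisq(n.pop, df = df.var)       # per-study variances
    s <- rnorm(n.pop, 0, sqrt(v))         # null statistics T_i
    p <- 2 * pnorm(abs(s) / sqrt(v), lower.tail = FALSE)
    c(fisher = fisher.meta(p),
      ivw    = ivw.meta(s, v),
      invcov = inv.cov.meta(s, v))
  }))
}
colMeans(null.calibration(5) < 0.05)      # each entry should be near 0.05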

The realized distributions of the variances under the “high” and “low” heterogeneity settings look like: [Figures: highvariance_realized, lowvariance_realized]

And the distributions of the meta-statistics under these scenarios look like (for 5 populations):

[Figures: MetaDist_NullSim_HighVar_05Pop, MetaDist_NullSim_LowVar_05Pop]

And for 10 populations:

[Figures: MetaDist_NullSim_HighVar_10Pop, MetaDist_NullSim_LowVar_10Pop]

Inverse-variance weighting seems undercalibrated compared to all the other statistics. Fisher’s method is overly aggressive when the variances are allowed to change drastically between populations; the sum-of-uniform has this property as well, but becomes better calibrated as the number of populations increases. Sample-size weighting (assuming 2000 samples, with no relation to the variance) is uniformly over-aggressive. By contrast, the random effects model is only slightly conservative (it was made for heterogeneity, after all), and the Inverse Covariance method is well calibrated throughout.

However: when running a meta-analysis, you are not provided with the actual variances of the distributions from which the test statistics were drawn; you are provided with an estimate. In addition, this estimate will covary with the test statistic itself: in the case of the score test, as the (observed) frequency increases, both the computed value of the statistic and its variance increase.
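The coupling is easy to see by simulation (a sketch reusing score.stat from above; the sample size and frequency are my choices):

## Under the null, the score test's estimated variance tracks the observed
## frequency: more copies of an uncommon allele means a larger
## sum((g - mean(g))^2), hence a larger variance estimate.
set.seed(2)
sims <- t(replicate(5000, {
  g <- rbinom(2000, 2, 0.05)      # null genotypes at frequency 5%
  y <- rep(1:0, each = 1000)      # balanced case/control labels
  s <- score.stat(g, y)
  c(v = s[2], f.obs = mean(g) / 2)
}))
cor(sims[, "v"], sims[, "f.obs"]) # strongly positive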

Performing a simulated meta-analysis with OR=1 (so no association) generates the following null calibrations: [Figures: meta_approaches_OR1.0_highVar, meta_approaches_OR1.0_lowVar]

Here, the “high variance” environment (left) has frequencies of (20%, 0.5%) in the two populations, while the low-variance environment (right) has frequencies of (5%, 0.5%). All of a sudden, the meta-analysis statistics all (except for random effects) become reasonably calibrated under the null hypothesis, strikingly even the sample-size-weighted meta-analysis! The black lines at the top and bottom are “min(P)” and “max(P)” respectively; falling outside these bounds suggests a statistic may be overly conservative or overly aggressive, at least under the null. Of course, power is all about the alternate, so what happens if we raise the odds ratio:

[Figures: meta_approaches_OR1.3_highVar, meta_approaches_OR2.0_lowVar]

On the left we’ve set OR=1.3, and OR=2.0 on the right. I’ve plotted “Min(P)” on the x-axis (the minimum p-value in the two populations) as a kind of baseline target for the sensitivity of the meta-analysis. So what do these plots tell us? First, we find confirmation that inverse-variance weighting is just not as sensitive to the alternate hypothesis as the other approaches, and that its overconservativeness is compounded when the variance is high. Second, we find that sample-size weighting performs far better and is comparable to the random-effects meta-analysis, though it underperforms at variants with large frequency differences. Finally, we find that both Inverse Covariance and Fisher’s method perform comparably to min(P), while remaining calibrated under the null hypothesis.

Take-home messages: don’t inverse-variance weight in this circumstance. Even sample-size weighting is preferable, though random effects performs just as well. However, it’s even more advisable to be “old school” and use Fisher’s method or Inverse Covariance. The last observation is that frequency information buys a lot: the red curve (“Fixed (SF)”) is sample-size weighting that takes frequency into account, with W_i = \sqrt{2N f_i(1-f_i)}. Simply providing that information (granted, it’s the *actual* frequency, not the *observed* frequency) to the weighting drops the p-values by orders of magnitude, drastically increasing power under the alternate hypothesis.
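As a sketch, the frequency-aware weighting is a one-line change to the sample-size combiner above (here f is taken as the true frequency, per the caveat):

## Sample-frequency weighting: W_i = sqrt(2 * N_i * f_i * (1 - f_i)).
sample.freq.meta <- function(stat, v, n, f) {
  W <- sqrt(2 * n * f * (1 - f))
  Z <- sum(W * stat / sqrt(v)) / sqrt(sum(W^2))
  2 * pnorm(abs(Z), lower.tail = FALSE)
}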

Note: Inverse Inverse Variance Weighting

Most meta-analyses use the Wald statistic to calculate the final meta-statistic, whereas above we have used strictly the Score statistic. The Score statistic has the advantage that, for a single-parameter logistic regression, it can be calculated in closed form; by contrast, the Wald statistic requires that both the intercept and the variable coefficients be computed under the alternate hypothesis, and this fit must be done iteratively. The two also relate in the following way: we know from the derivation in my prior post

(\tau^*-\tau_t) \sim N(0,-[\nabla^2 \mathcal{L}]^{-1}) (Wald)

(\lambda - \tau_t) \sim N(0,-\nabla^2\mathcal{L}) (Score)

In other words, the variances of the Score and Wald tests are inverses of each other. This suggests that Inverse Inverse Variance Weighting may perform appropriately for the Score test as a meta-analytic statistic.

Inverse Inverse Variance

W_i \leftarrow V_i

Z \leftarrow \frac{\sum W_iZ_i}{\sqrt{\sum W_i^2}}

Z \sim N(0,1)
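As a sketch, mirroring the combiners above:

## IIVW: weight the standardized scores by the score-test variances.
iivw.meta <- function(stat, v) {
  W <- v                                        # W_i = V_i
  Z <- sum(W * stat / sqrt(v)) / sqrt(sum(W^2))
  2 * pnorm(abs(Z), lower.tail = FALSE)
}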

Replacing IVW with IIVW does indeed have the desired effect, and it performs comparably to Fisher and Inverse Covariance.

[Figures: meta_approaches_OR1.0_highVar_IIVW, meta_approaches_OR1.3_highVar_IIVW]

I suppose this is a whole lot of hubbub over the fact that the Score and Wald tests have inverse variances. I guess, except that one can find regimes where IVW is terrible but IIVW works alright, and the reverse. For example, while IIVW works well in the above case (frequencies: 20%, 0.5%; 2000 samples, balanced case/control), it works miserably in the case below (frequencies: 5%, 3%, 1%, 0.5%; case/control counts: 1500/500, 750/750, 600/200, 1000/1000):

[Figure: IIVW miscalibration in the four-population setting (IIVW_stinker)]

Here, IVW is the blue curve, while the orange-red is IIVW. Given the inverse relationship between Wald and Score, this implies that there will also be regimes where inverse-variance weighting of the Wald statistic performs poorly. One thing to note is that sample-frequency weighting, as well as Inverse Covariance (and Fisher’s method), performs well throughout.
