More Than Two Groups

We just showed how testing two or more DVs with two groups is an extension of the t test. In fact, if you read some old statistics books (such as the previous edition of this book), you will see that they refer to this test as Hotelling's T², so called because it was invented by Hotelling, who took the equation for the t test, squared it, and used some fancy matrix algebra to allow for more than one DV (not all naming conventions in statistics are totally arbitrary). As with many statistical procedures, having one test for a simple situation and a different one for a more complicated version was a result of having to do things by hand and being on the lookout for shortcuts. Now that computers do the work for us, the distinctions don't matter, and we won't find anything called "Hotelling's T²" in any of the statistical packages. (We will find something called "Hotelling's trace," but that's something else entirely.)

However, having more than two groups does change how we conceptualize what we're doing. So instead of using the t test as a model, we have to use ANOVA and expand it to handle the situation. In ANOVA, we use an F ratio to determine if the groups are significantly different from each other, and the formula for this test is:

F = Mean Square (treatments) / Mean Square (error)

That is, we are testing whether there is greater variability between the groups (ie, the "treatments" effect) than within the groups (ie, the "error").
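For readers who like to see the arithmetic, here is a minimal sketch in Python (the data are invented, purely for illustration) that computes the treatments and error mean squares by hand and checks the resulting F against a canned routine:

```python
import numpy as np
from scipy import stats

# Three groups of invented scores on a single DV
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0]),
          np.array([1.0, 2.0, 3.0])]

grand_mean = np.mean(np.concatenate(groups))
k = len(groups)                          # number of groups
n_total = sum(len(g) for g in groups)    # total sample size

# Between-groups ("treatments") sum of squares and mean square
ss_treat = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_treat = ss_treat / (k - 1)

# Within-groups ("error") sum of squares and mean square
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_error = ss_error / (n_total - k)

print(ms_treat / ms_error)                # F computed by hand: 27.0
print(stats.f_oneway(*groups).statistic)  # the same F from scipy
```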

In the case of MANOVA, though, we do not have just one term for the treatments effect and one for the error. Rather, we have a number of treatment effects and a number of error terms. This is because in an ANOVA, each group mean can be represented by a single number, whereas in a MANOVA, each group has a vector of means, one mean for each variable. So when we examine how much each group differs from the "mean" of all the groups combined, we are really comparing the centroid of each group to the grand centroid. Similarly, the within-group variability has to be computed for each of the DVs.
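To make centroids concrete, here is a small sketch with made-up data for two DVs and three groups; each group's centroid is just its vector of means, one per DV, and the grand centroid is the same thing computed over everybody combined:

```python
import numpy as np

# Invented data: each row is a subject, each column one of two DVs
group_a = np.array([[4.0, 10.0], [5.0, 12.0], [6.0, 11.0]])
group_b = np.array([[7.0, 14.0], [8.0, 15.0], [9.0, 16.0]])
group_c = np.array([[1.0,  8.0], [2.0,  7.0], [3.0,  9.0]])

# A group's centroid is its vector of means, one mean per DV
for name, g in [("A", group_a), ("B", group_b), ("C", group_c)]:
    print(name, g.mean(axis=0))

# The grand centroid: the vector of means over everyone combined
print("grand", np.vstack([group_a, group_b, group_c]).mean(axis=0))
```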

Again, we calculate a ratio, but in this case, we have to divide one matrix of numbers by another. The techniques for doing this are referred to as matrix algebra and are far beyond the scope of this book. Fortunately, though, there are many computer programs that can do this scut work for us.
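For the curious, here is a rough sketch of what that scut work involves, using the same invented data as above. The "treatments" and "error" terms each become a matrix of sums of squares and cross-products, conventionally labeled H and E:

```python
import numpy as np

# Same invented two-DV, three-group data as above
groups = [np.array([[4.0, 10.0], [5.0, 12.0], [6.0, 11.0]]),
          np.array([[7.0, 14.0], [8.0, 15.0], [9.0, 16.0]]),
          np.array([[1.0,  8.0], [2.0,  7.0], [3.0,  9.0]])]

grand = np.vstack(groups).mean(axis=0)   # the grand centroid

H = np.zeros((2, 2))   # between-groups ("treatments") SSCP matrix
E = np.zeros((2, 2))   # within-groups ("error") SSCP matrix
for g in groups:
    diff = g.mean(axis=0) - grand        # group centroid vs grand centroid
    H += len(g) * np.outer(diff, diff)
    centered = g - g.mean(axis=0)        # each subject vs own group centroid
    E += centered.T @ centered

print("H =\n", H)
print("E =\n", E)
```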

However, there are two points that make our life somewhat difficult. First, we have gotten used to the idea that the larger the result of a statistical test, the more significant the findings. For some reason that surpasseth human understanding, this has been reversed in the case of MANOVA. Instead of calculating the treatment mean square divided by the error mean square, MANOVA computes the equivalent of the error term divided by the sum of the error and treatment terms. Hence, the smaller this value is, the more significant the results.
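A toy example makes the point. Wilks' lambda (which we meet in a moment) amounts to the determinant of E divided by the determinant of H + E, so the bigger the treatment effect, the closer the value sinks toward 0. The matrices below are invented 2 × 2 examples, not real data:

```python
import numpy as np

E = np.array([[6.0, 2.0],
              [2.0, 5.0]])               # invented "error" matrix

for scale in [0.0, 1.0, 10.0]:           # no effect, modest, whopping
    H = scale * np.array([[4.0, 1.0],    # invented "treatments" matrix
                          [1.0, 3.0]])
    lam = np.linalg.det(E) / np.linalg.det(H + E)
    print(f"effect x{scale:>4}: lambda = {lam:.3f}")
```

With no treatment effect at all, lambda is exactly 1; as the effect grows, it shrinks toward 0.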

The second problem, which is common to many multivariate procedures, is that we are blessed (or cursed, depending on your viewpoint) with a multitude of different ways to accomplish the same end. In MANOVA, there are many test statistics used to determine whether or not the results are significant. The most widely used is Wilks' lambda, but there are many other procedures that can be used. Arguments about which is the best one to use make the debates among the medieval Scholastics look tame and restrained. Each method has its believers, and each has its detractors. In most instances, the statistics yield equivalent results. However, if the data are only marginally significant, one test statistic may say one thing and the others the opposite; which one to believe then almost becomes a toss-up. As with Hotelling's T² test, a significant finding tells us only that a difference exists somewhere in the data, but it doesn't tell us where. For that, we follow up with a simpler ANOVA on each variable.
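Here is a sketch of that follow-up step, running an ordinary one-way ANOVA on each DV of the same invented data to see where the difference lives:

```python
import numpy as np
from scipy import stats

# Same invented two-DV, three-group data as before
groups = [np.array([[4.0, 10.0], [5.0, 12.0], [6.0, 11.0]]),
          np.array([[7.0, 14.0], [8.0, 15.0], [9.0, 16.0]]),
          np.array([[1.0,  8.0], [2.0,  7.0], [3.0,  9.0]])]

# One ordinary one-way ANOVA per DV, to locate the difference
for dv in range(2):
    res = stats.f_oneway(*[g[:, dv] for g in groups])
    print(f"DV {dv + 1}: F = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```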

Because MANOVA is simply an extension of ANOVA, we would expect that there would be a multivariate version of analysis of covariance (ANCOVA). Not surprisingly, there is one; equally unsurprisingly, it's called multivariate analysis of covariance (MANCOVA). In an analogous way, there isn't just one MANCOVA, but a family of them (eg, 2 × 2 factorial, repeated measures), each one corresponding to a univariate ANCOVA.
