Introduction

Like the two-sample t-test, ANOVA lets us test hypotheses about the mean of a dependent variable across different groups. While the t-test is used to compare means between two groups, ANOVA is used to compare means between three or more groups. The factors are the independent variables, each of which must be measured on a categorical scale - that is, levels of the independent variable must define separate groups. For example, we might look at average test scores for students exposed to one of three teaching techniques (three levels of a single independent variable). The null hypothesis is that the average is the same for all groups; the alternative or research hypothesis is that the average is not the same for all groups. If we reject the null hypothesis, we can conclude that the average of the dependent variable is not the same for all groups.
Null hypothesis for a one-way ANOVA
This module will continue the discussion of hypothesis testing, where a specific statement or hypothesis is generated about a population parameter, and sample statistics are used to assess the likelihood that the hypothesis is true. The hypothesis is based on available information and the investigator's belief about the population parameters.
The specific test considered here is called analysis of variance (ANOVA) and is a test of hypothesis that is appropriate to compare means of a continuous variable in two or more independent comparison groups. For example, in some clinical trials there are more than two comparison groups.
In a clinical trial to evaluate a new medication for asthma, investigators might compare an experimental medication to a placebo and to a standard treatment. In an observational study such as the Framingham Heart Study, it might be of interest to compare mean blood pressure or mean cholesterol levels in persons who are underweight, normal weight, overweight and obese.
The technique to test for a difference in more than two independent means is an extension of the two independent samples procedure discussed previously which applies when there are exactly two independent comparison groups. The ANOVA procedure is used to compare the means of the comparison groups and is conducted using the same five step approach used in the scenarios discussed in previous sections. Because there are more than two groups, however, the computation of the test statistic is more involved.
The test statistic must take into account the sample sizes, sample means and sample standard deviations in each of the comparison groups. If one is examining the means observed among, say, three groups, it might be tempting to perform three separate group-to-group comparisons, but this approach is incorrect because each of these comparisons fails to take into account the total data, and it increases the likelihood of incorrectly concluding that there are statistically significant differences, since each comparison adds to the probability of a type I error.
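The inflation of the type I error rate can be made concrete with a quick calculation. The sketch below assumes, for simplicity, that the comparisons are independent (pairwise tests on the same data are not, so this is only an illustration); the function name is ours:

```python
def familywise_error(alpha, m):
    """Probability of at least one type I error across m comparisons,
    assuming each is performed at level alpha and the comparisons are
    independent (a simplifying assumption for illustration)."""
    return 1 - (1 - alpha) ** m

# Three pairwise comparisons among three groups, each at alpha = 0.05:
print(round(familywise_error(0.05, 3), 3))  # 0.143 -- well above 0.05
```

Even with only three comparisons, the chance of at least one false positive is nearly triple the nominal 0.05 level, which is the problem the global ANOVA test avoids.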
Analysis of variance avoids these problems by asking a more global question, i.e., whether there are any significant differences among the group means taken as a whole. The fundamental strategy of ANOVA is to systematically examine variability within groups being compared and also examine variability among the groups being compared.
Consider an example with four independent groups and a continuous outcome measure. The independent groups might be defined by a particular characteristic of the participants such as BMI (e.g., underweight, normal weight, overweight, obese). Suppose that the outcome is systolic blood pressure, and we wish to test whether there is a statistically significant difference in mean systolic blood pressures among the four groups. The sample data are organized by comparison group. The research or alternative hypothesis is always that the means are not all equal and is usually written in words rather than in mathematical symbols.
The research hypothesis captures any difference in means and includes, for example, the situation where all four means are unequal, where one is different from the other three, where two are different, and so on.
The alternative hypothesis, as shown above, captures all possible situations other than equality of all the means specified in the null hypothesis. The test statistic for testing H0: μ1 = μ2 = ... = μk is F = MSB / MSE, the ratio of the between-treatment mean square to the error mean square, which is compared against a critical value from the F distribution. The table of critical values can be found in "Other Resources" on the left side of the pages.
Note that N does not refer to a population size, but instead to the total sample size in the analysis (the sum of the sample sizes in the comparison groups, e.g., N = n1 + n2 + n3 + n4 when there are four comparison groups). The test statistic is complicated because it incorporates all of the sample data. While it is not easy to see the extension, the F statistic shown above is a generalization of the test statistic used for testing the equality of exactly two means.
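For k = 2 the connection to the two-sample test can be checked directly: the one-way F statistic equals the square of the pooled two-sample t statistic. A small pure-Python sketch (hypothetical data; the function names are ours):

```python
from statistics import mean, variance

def pooled_t(x, y):
    """Two independent samples t statistic using the pooled variance Sp^2."""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * variance(x) + (n2 - 1) * variance(y)) / (n1 + n2 - 2)
    return (mean(x) - mean(y)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

def one_way_f(groups):
    """One-way ANOVA F statistic: MSB / MSE."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / n
    msb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups) / (k - 1)
    mse = sum((x - mean(g)) ** 2 for g in groups for x in g) / (n - k)
    return msb / mse

a, b = [3, 5, 7], [6, 8, 10]          # hypothetical data
print(round(one_way_f([a, b]), 3))    # 3.375
print(round(pooled_t(a, b) ** 2, 3))  # 3.375 -- F = t^2 when k = 2
```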
The test statistic F assumes equal variability in the k populations (i.e., that the population variances are equal). This means that the outcome is equally variable in each of the comparison populations. This assumption is the same as that assumed for appropriate use of the test statistic to test equality of two independent means. It is possible to assess the likelihood that the assumption of equal variances is true, and the test can be conducted in most statistical computing packages.
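The equal-variance assumption can also be screened informally before running ANOVA. The sketch below uses hypothetical data and a common rule of thumb (comparing the largest and smallest sample standard deviations), not a formal test such as those built into statistical packages; the function name is ours:

```python
from statistics import stdev

def max_sd_ratio(groups):
    """Ratio of the largest to the smallest sample standard deviation."""
    sds = [stdev(g) for g in groups]
    return max(sds) / min(sds)

# Hypothetical samples from three comparison groups:
groups = [[4, 6, 5, 7], [10, 12, 11, 13], [20, 23, 21, 22]]
print(round(max_sd_ratio(groups), 2))  # 1.0 -- very similar spreads
```

One informal rule of thumb treats the equal-variance assumption as reasonable when this ratio is below roughly 2; larger ratios suggest using the alternative techniques mentioned below.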
If the variability in the k comparison groups is not similar, then alternative techniques must be used. The F statistic is computed by taking the ratio of what is called the "between treatment" variability to the "residual or error" variability.
This is where the name of the procedure originates. In analysis of variance we are testing for a difference in means (H0: the means are all equal) by evaluating variability in the data. The numerator captures between-treatment variability, i.e., differences among the sample means, while the denominator reflects variability within the treatment groups. The test statistic is a measure that allows us to assess whether the differences among the sample means (numerator) are more than would be expected by chance if the null hypothesis is true.
Recall that in the two independent sample test, the test statistic was computed by taking the ratio of the difference in sample means (numerator) to the variability in the outcome (estimated by Sp). The decision rule again depends on the level of significance and the degrees of freedom. The F statistic has two degrees of freedom. These are denoted df1 and df2, and called the numerator and denominator degrees of freedom, respectively. The degrees of freedom are defined as follows: df1 = k - 1 and df2 = N - k, where k is the number of comparison groups and N is the total sample size. If the null hypothesis is true, the between-treatment variation (numerator) will not exceed the residual or error variation (denominator) and the F statistic will be small.
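The degrees of freedom depend only on the number of groups and their sizes, which the short sketch below computes (the function name is ours):

```python
def anova_df(group_sizes):
    """Numerator and denominator degrees of freedom for one-way ANOVA."""
    k = len(group_sizes)          # number of comparison groups
    n_total = sum(group_sizes)    # total sample size N
    return k - 1, n_total - k     # df1 = k - 1, df2 = N - k

# Four groups of five participants each (as in the weight-loss example below):
print(anova_df([5, 5, 5, 5]))  # (3, 16)
```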
If the null hypothesis is false, then the F statistic will be large. The rejection region for the F test is always in the upper right-hand tail of the distribution as shown below. For the scenario depicted here, the decision rule is: reject H0 if F is greater than or equal to the critical value from the F distribution with df1 and df2 degrees of freedom. Because the computation of the test statistic is involved, the computations are often organized in an ANOVA table.
The ANOVA table breaks down the components of variation in the data into variation between treatments and error or residual variation. The between-treatment sums of squares is SSB = Σ nj (X̄j - X̄)², where X̄j is the sample mean of group j and X̄ is the overall mean; the squared differences are weighted by the sample sizes per group, nj. The error sums of squares is SSE = ΣΣ (X - X̄j)².
The double summation (ΣΣ) indicates summation of the squared differences within each treatment and then summation of these totals across treatments to produce a single value. This will be illustrated in the following examples. The total sums of squares is SST = ΣΣ (X - X̄)². If all of the data were pooled into a single sample, SST would reflect the numerator of the sample variance computed on the pooled or total sample.
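The three sums of squares, and the identity SST = SSB + SSE that links them, can be sketched in a few lines (made-up data; the function name is ours):

```python
from statistics import mean

def sums_of_squares(groups):
    """Return (SSB, SSE, SST) for a one-way layout."""
    n = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / n               # overall mean
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)  # between treatments
    sse = sum((x - mean(g)) ** 2 for g in groups for x in g)    # error (within)
    sst = sum((x - grand) ** 2 for g in groups for x in g)      # total
    return ssb, sse, sst

# Hypothetical data for three comparison groups:
ssb, sse, sst = sums_of_squares([[2, 4, 6], [5, 7, 9], [8, 10, 12]])
print(ssb, sse, sst)                  # 54.0 24.0 78.0
print(abs(sst - (ssb + sse)) < 1e-9)  # True: SST = SSB + SSE
```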
SST does not figure into the F statistic directly; however, SST = SSB + SSE, so SSE can be obtained by subtraction.

A clinical trial is run to compare weight loss programs. Participants are randomly assigned to one of the comparison programs and are counseled on the details of the assigned program. Participants follow the assigned program for 8 weeks. The outcome of interest is weight loss, defined as the difference in weight measured at the start of the study (baseline) and weight measured at the end of the study (8 weeks), measured in pounds.
Three popular weight loss programs are considered. The first is a low calorie diet. The second is a low fat diet and the third is a low carbohydrate diet. For comparison purposes, a fourth group is considered as a control group. Participants in the fourth group are told that they are participating in a study of healthy behaviors with weight loss only one component of interest.
The control group is included here to assess the placebo effect i. A total of twenty patients agree to participate in the study and are randomly assigned to one of the four diet groups. Weights are measured at baseline and patients are counseled on the proper implementation of the assigned diet with the exception of the control group.
After 8 weeks, each patient's weight is again measured and the difference in weights is computed by subtracting the 8 week weight from the baseline weight. Positive differences indicate weight losses and negative differences indicate weight gains. For interpretation purposes, we refer to the differences in weights as weight losses and the observed weight losses are shown below.
Is there a statistically significant difference in the mean weight loss among the four diets? The appropriate critical value can be found in a table of probabilities for the F distribution (see "Other Resources"). With df1 = 3 and df2 = 16, the critical value at the 0.05 level of significance is 3.24. In order to compute the sums of squares we must first compute the sample means for each group and the overall mean based on the total sample.
SSE requires computing the squared differences between each observation and its group mean. We will compute SSE in parts, beginning with the participants in the low calorie diet and repeating the calculation for each group. We reject H0 because the computed F statistic exceeds the critical value of 3.24. ANOVA is a test that provides a global assessment of a statistical difference in more than two independent means.
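To make the computation concrete, here is a sketch with illustrative made-up weight losses (not the study's actual data) for four diet groups of five participants each; the function name is ours:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: MSB / MSE."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / n
    msb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups) / (k - 1)
    mse = sum((x - mean(g)) ** 2 for g in groups for x in g) / (n - k)
    return msb / mse

# Illustrative (made-up) weight losses in pounds, four groups of five:
low_cal  = [8, 9, 6, 7, 3]
low_fat  = [2, 4, 3, 5, 1]
low_carb = [3, 5, 4, 2, 3]
control  = [2, 2, -1, 0, 3]

f = one_way_anova_f([low_cal, low_fat, low_carb, control])
print(round(f, 2))  # 8.56
print(f > 3.24)     # True -- exceeds the 0.05 critical value for (3, 16) df
```

With data like these, the F statistic far exceeds 3.24, so H0 would be rejected.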
In this example, we find that there is a statistically significant difference in mean weight loss among the four diets considered. In addition to reporting the results of the statistical test of hypothesis (i.e., that there is a statistically significant difference in mean weight losses among the diets), investigators should also report the observed sample means. In this example, participants in the low calorie diet lost the most weight on average, while participants in the control group lost the least.
Are the observed weight losses clinically meaningful?

Calcium is an essential mineral that regulates the heart, is important for blood clotting, and is needed for building healthy bones. While calcium is contained in some foods, most adults do not get enough calcium in their diets and take supplements.
Unfortunately some of the supplements have side effects such as gastric distress, making them difficult for some patients to take on a regular basis.
A study is designed to test whether there is a difference in mean daily calcium intake in adults with normal bone density, adults with osteopenia (a low bone density which may lead to osteoporosis), and adults with osteoporosis. Adults 60 years of age with normal bone density, osteopenia, and osteoporosis are selected at random from hospital records and invited to participate in the study.
Each participant's daily calcium intake is measured based on reported food intake and supplements. The data are shown below. Is there a statistically significant difference in mean calcium intake in patients with normal bone density as compared to patients with osteopenia and osteoporosis? In order to compute the sums of squares we must first compute the sample means for each group and the overall mean.
For the participants with normal bone density, the squared deviations from the group mean are computed and summed, and the same calculation is repeated for the other two groups. We do not reject H0 because the computed F statistic does not exceed the critical value. Are the differences in mean calcium intake clinically meaningful? If so, what might account for the lack of statistical significance? The video below by Mike Marin demonstrates how to perform analysis of variance in R. It also covers some other statistical issues, but the initial part of the video will be useful to you.
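The do-not-reject case can also be sketched in Python. The data below are hypothetical calcium intakes (not the study's actual data), chosen so that the group means differ somewhat but within-group variability is large; the function name is ours:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: MSB / MSE."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / n
    msb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups) / (k - 1)
    mse = sum((x - mean(g)) ** 2 for g in groups for x in g) / (n - k)
    return msb / mse

# Hypothetical daily calcium intakes (mg), six adults per group:
normal       = [1200, 1000, 980, 900, 750, 800]
osteopenia   = [1000, 1100, 700, 800, 500, 700]
osteoporosis = [890, 650, 1100, 900, 400, 350]

f = one_way_anova_f([normal, osteopenia, osteoporosis])
print(round(f, 2))  # 1.39
print(f < 3.68)     # True -- below the 0.05 critical value for (2, 15) df
```

Here the between-group differences are modest relative to the large within-group spread, so the F statistic stays well below the critical value and H0 is not rejected.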
The factor might represent different diets, different classifications of risk for disease (e.g., low versus high risk), and so on. There are situations where it may be of interest to compare means of a continuous outcome across two or more factors. For example, suppose a clinical trial is designed to compare five different treatments for joint pain in patients with osteoarthritis.