
These WINKS statistics tutorials explain the use and interpretation of standard statistical analysis techniques for medical, pharmaceutical, clinical trials, marketing, or scientific research. The examples include how-to instructions for the WINKS SDA Version 6.0 software. Download an evaluation copy of WINKS.

 

Parametric and Nonparametric Statistics

    When analyzing data for a research project, you’re often confronted with a decision about what kind of statistical analysis to perform. There are literally hundreds of tests from which to choose, and you have to be careful to select the one that is most appropriate for your data. If you select an inappropriate test, you may make an incorrect interpretation of your data, and your manuscript will likely be rejected during the journal review process. Although it is impossible to give a definitive method for selecting appropriate tests in a brief article such as this, one aspect of statistical tests that is often confusing will be discussed – the difference between parametric and nonparametric statistical tests.

    When you gather scientific data, one of the first statistics you’ll typically calculate is the mean. This statistic indicates the average value of a population or sample. If the mean is combined with another common statistic called the standard deviation, the pair of numbers tells the researcher both the central tendency of the group of numbers and their spread. A large standard deviation reflects a large spread in the data – the numbers are diverse and far apart. A small standard deviation reflects a tightness of the data – the numbers are close together. However, before you can really depend on these statistics to give you accurate information about the data, you must assume that the data are normally distributed – that is, if you were to plot the data in a histogram, it would create a graph that looks like the well-known bell-shaped curve. When data behave in this way you can make some simple statements about the data. For example, the mean plus or minus one standard deviation contains about 68% of the data, and the mean plus or minus two standard deviations contains about 95% of the data. This information is often used to create a range of values in which you might expect future sampled data to appear.
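    To make the arithmetic concrete, here is a minimal sketch in Python with NumPy – entirely outside WINKS, with invented measurements – that computes the mean, the standard deviation, and the one- and two-standard-deviation ranges described above:

```python
import numpy as np

# Hypothetical sample measurements (invented for this example)
data = np.array([23.1, 25.4, 24.8, 22.9, 26.0, 25.1, 24.3, 23.7, 25.6, 24.9])

mean = data.mean()
sd = data.std(ddof=1)  # sample standard deviation

print(f"Mean: {mean:.2f}")
print(f"Standard deviation: {sd:.2f}")

# If the data are roughly normal, about 68% of values fall within
# one standard deviation of the mean and about 95% within two.
print(f"~68% range: {mean - sd:.2f} to {mean + sd:.2f}")
print(f"~95% range: {mean - 2*sd:.2f} to {mean + 2*sd:.2f}")
```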

    When statistics are calculated under the assumption that the data follow some common distribution such as the normal distribution, we call them parametric statistics. It follows that statistical tests based on these parametric statistics are called parametric statistical tests. Thus, when the data are normal, we can use a host of well-known parametric statistical tests to analyze our data – such as t-tests, analysis of variance, linear regression, and others.
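    Purely as an illustration of what one of these parametric tests looks like outside WINKS, here is a minimal sketch of an independent-group t-test using SciPy; the group values are invented for the example:

```python
from scipy import stats

# Invented measurements for two independent groups (e.g., two drugs)
group_a = [12.1, 14.3, 13.8, 15.0, 12.9, 14.6, 13.2]
group_b = [15.9, 17.2, 16.4, 18.1, 16.8, 17.5, 15.7]

# Independent-group t-test: assumes roughly normal data in each group
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# A small p-value (e.g., p < 0.05) suggests the group means differ.
```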

    However, what happens when your data are not normally distributed? Suppose you create a histogram of your data and it doesn’t look like the bell-shaped curve. Suppose it has two humps, or it has most of its data at one end of the distribution with some of the data trailing off into a long tail. Now what can you do? There are several ways to approach non-normal data, but we’ll only discuss one in this article – using a nonparametric test in lieu of a standard parametric test. Nonparametric tests are also called distribution-free tests since they do not make the assumption that the data follow some distribution.

    For example, suppose you have two independent groups (corresponding to two drugs) on which some measurement has been made – for example, the length of time until relief of pain. You want to determine if one drug has a better overall (shorter) time to relief than the other drug. However, when you examine the data it’s obvious that the distribution of the data is not normal. (You can test for normality of the data using a statistical test.) If the data had been normally distributed, you would have performed a standard independent-group t-test on these data. But since the assumption of normality cannot be made, what can you do? Fortunately, for almost every parametric test in the statistical toolbox, there is a corresponding nonparametric test. In this case, the corresponding nonparametric test is the Mann-Whitney test. Using the Mann-Whitney test you can calculate a significance level to help you answer your research question – are the values of the observations from one group significantly lower than the observations from the other group? (Notice that we’re not comparing means.)
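    Again as an outside-of-WINKS illustration, the sketch below uses SciPy with invented time-to-relief values: it first checks normality with a Shapiro-Wilk test (one common choice of normality test) and then applies the Mann-Whitney test to ask whether one group's values are significantly lower than the other's:

```python
from scipy import stats

# Invented time-to-relief values (hours) for two drugs; drug_a is skewed
drug_a = [1.2, 1.5, 1.4, 1.6, 1.3, 4.8, 6.2, 1.7, 1.5, 5.9]
drug_b = [2.8, 3.1, 2.9, 3.4, 3.0, 3.2, 2.7, 3.3, 3.5, 2.6]

# Shapiro-Wilk normality test: a small p-value suggests non-normal data
for name, sample in [("drug_a", drug_a), ("drug_b", drug_b)]:
    w, p = stats.shapiro(sample)
    print(f"{name}: Shapiro-Wilk p = {p:.4f}")

# Mann-Whitney test: are drug_a's times significantly lower than drug_b's?
u_stat, p_value = stats.mannwhitneyu(drug_a, drug_b, alternative="less")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```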

    Other standard parametric tests also have corresponding nonparametric counterparts. The Wilcoxon signed-rank test can be used in place of the paired t-test, the Kruskal-Wallis test can be used in place of a one-way independent-group analysis of variance, and so on.
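    For completeness, here is a brief SciPy sketch (again with invented numbers) showing these two counterparts – the Wilcoxon signed-rank test for paired data and the Kruskal-Wallis test for three or more independent groups:

```python
from scipy import stats

# Paired measurements (e.g., before and after treatment) - invented values
before = [8.2, 7.9, 9.1, 8.5, 7.6, 9.4, 8.8, 8.0]
after  = [7.1, 7.5, 8.0, 7.9, 7.2, 8.1, 7.8, 7.4]

# Wilcoxon signed-rank test: nonparametric counterpart of the paired t-test
w_stat, p_paired = stats.wilcoxon(before, after)
print(f"Wilcoxon: W = {w_stat:.1f}, p = {p_paired:.4f}")

# Three independent groups - invented values
g1 = [4.1, 5.2, 4.8, 5.5, 4.6]
g2 = [6.3, 7.1, 6.8, 7.4, 6.5]
g3 = [5.0, 5.9, 5.4, 6.2, 5.7]

# Kruskal-Wallis test: nonparametric counterpart of one-way ANOVA
h_stat, p_kw = stats.kruskal(g1, g2, g3)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
```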

    Why not just always use nonparametric tests? Since nonparametric tests do not make an assumption about the distribution of the data, they have less information to use to determine significance. Thus, they are less powerful than their parametric counterparts – that is, they have a more difficult time finding statistical significance.

    Therefore, if a parametric test is appropriate, it should be used because it gives you a better chance of finding significant differences when they exist. If a parametric test is not appropriate, then a nonparametric test is a reasonable substitute.

    When using WINKS, you may refer to the diagrams in Appendix B (in the printed manual) to help you determine which parametric or nonparametric test is appropriate for your data.

 


© Copyright TexaSoft, 2007