Assistive Technology Research Matters: A Research Primer

Introduction to research methods or approaches

NCTI and ATIA have teamed up to develop a research primer for AT developers, manufacturers, and vendors. Both an online guide and a series of webinars with AT industry leaders, these resources will get you up to speed on research in 2011.

CHOOSING A STATISTICAL TEST

FAQ# 1790, Last Modified 23-March-2012. This
is chapter 37 of the first edition of Intuitive Biostatistics by
Harvey Motulsky. Copyright © 1995 by Oxford University Press Inc. Chapter 45
of the second edition of
Intuitive Biostatistics is an expanded version of this
material.

REVIEW OF AVAILABLE STATISTICAL TESTS

This
book has discussed many different statistical tests. To select the right
test, ask yourself two questions: What kind of data have you collected? What
is your goal? Then refer to Table 37.1.
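For example, the choice between the parametric and nonparametric columns of such a table amounts to calling a different function. A minimal sketch using SciPy (assumed available; the data here are simulated purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)  # simulated Gaussian sample
group_b = rng.normal(loc=12.0, scale=2.0, size=30)

# Parametric: unpaired t test (assumes Gaussian populations)
t_res = stats.ttest_ind(group_a, group_b)

# Nonparametric: Mann-Whitney test (analyzes the ranks of the pooled values)
u_res = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# One-sided t test with the direction (a < b) specified in advance:
# when the data go in that direction, its P value is half the two-sided one.
one_sided = stats.ttest_ind(group_a, group_b, alternative="less")

print(f"t test (two-sided) P = {t_res.pvalue:.4g}")
print(f"Mann-Whitney P       = {u_res.pvalue:.4g}")
print(f"t test (one-sided) P = {one_sided.pvalue:.4g}")
```

With samples drawn from Gaussian populations, the two families of tests give comparable answers; the difference between them matters most in the cases discussed below.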
REVIEW OF NONPARAMETRIC TESTS

Choosing
the right test to compare measurements is a bit tricky, as you must choose
between two families of tests: parametric and nonparametric. Many
statistical tests are based upon the assumption that the data are sampled
from a Gaussian distribution. These tests are referred to as parametric
tests. Commonly used parametric tests are listed in the first column of the
table and include the t test and analysis of variance. Tests
that do not make assumptions about the population distribution are referred
to as nonparametric tests. You've already learned a bit about nonparametric
tests in previous chapters. All commonly used nonparametric tests rank the
outcome variable from low to high and then analyze the ranks. These tests are
listed in the second column of the table and include the Wilcoxon, Mann-Whitney, and Kruskal-Wallis tests. These tests are also called distribution-free tests.

CHOOSING BETWEEN PARAMETRIC AND NONPARAMETRIC TESTS: THE EASY CASES

Choosing
between parametric and nonparametric tests is sometimes easy. You should
definitely choose a parametric test if you are sure that your data are
sampled from a population that follows a Gaussian distribution (at least
approximately). You should definitely select a nonparametric test in three situations:

· The outcome is a rank or a score and the population is clearly not Gaussian (for example, class ranking or a visual analogue score).
· Some values are "off the scale," too high or too low to measure; assign them arbitrary very low or very high values and analyze the ranks.
· The data are measurements, and you are sure that the population is far from Gaussian.
CHOOSING BETWEEN PARAMETRIC AND NONPARAMETRIC TESTS: THE HARD CASES

It is not always easy to decide whether a sample comes from a Gaussian population. The assumption concerns the population from which the data were sampled, not just the sample itself, and with small samples it is hard to check: formal normality tests have little power to detect departures from a Gaussian distribution, and inspecting a handful of points rarely settles the question.
CHOOSING BETWEEN PARAMETRIC AND NONPARAMETRIC TESTS: DOES IT MATTER?

Does
it matter whether you choose a parametric or nonparametric test? The answer
depends on sample size. There are four cases to think about:

· Large sample, parametric test: parametric tests are robust to deviations from a Gaussian distribution when samples are large, a consequence of the central limit theorem.
· Large sample, nonparametric test: with large samples, nonparametric tests are almost as powerful as parametric tests.
· Small sample, parametric test: with small samples, parametric tests are not very robust when the population is not Gaussian.
· Small sample, nonparametric test: with small samples, nonparametric tests have little power to detect a real difference.

Thus,
large data sets present no problems. It is usually easy to tell if the data
come from a Gaussian population, but it doesn't really matter because the
nonparametric tests are so powerful and the parametric tests are so robust.
Small data sets present a dilemma. It is difficult to tell if the data come
from a Gaussian population, but it matters a lot. The nonparametric tests are
not powerful and the parametric tests are not robust.

ONE- OR TWO-SIDED P VALUE?

With
many tests, you must choose whether you wish to calculate a one- or two-sided
P value (same as one- or two-tailed P value). The difference between one- and
two-sided P values was discussed in Chapter 10. Let's review the difference
in the context of a t test. The P value is calculated for the null hypothesis
that the two population means are equal, and any discrepancy between the two
sample means is due to chance. If this null hypothesis is true, the one-sided
P value is the probability that two sample means would differ as much as was
observed (or further) in the direction specified by the hypothesis just by
chance, even though the means of the overall populations are actually equal.
The two-sided P value also includes the probability that the sample means
would differ that much in the opposite direction (i.e., the other group has
the larger mean). The two-sided P value is twice the one-sided P value. A
one-sided P value is appropriate when you can state with certainty (and
before collecting any data) that there either will be no difference between
the means or that the difference will go in a direction you can specify in
advance (i.e., you have specified which group will have the larger mean). If
you cannot specify the direction of any difference before collecting data,
then a two-sided P value is more appropriate. If in doubt, select a two-sided
P value. If
you select a one-sided test, you should do so before collecting any data and
you need to state the direction of your experimental hypothesis. If the data
go the other way, you must be willing to attribute that difference (or
association or correlation) to chance, no matter how striking the data. If
you would be intrigued, even a little, by data that goes in the
"wrong" direction, then you should use a two-sided P value. For
reasons discussed in Chapter 10, I recommend that you always calculate a
two-sided P value.

PAIRED OR UNPAIRED TEST?

When
comparing two groups, you need to decide whether to use a paired test. When
comparing three or more groups, the term paired is not apt and the term
repeated measures is used instead. Use
an unpaired test to compare groups when the individual values are not paired
or matched with one another. Select a paired or repeated-measures test when
values represent repeated measurements on one subject (before and after an
intervention) or measurements on matched subjects. The paired or
repeated-measures tests are also appropriate for repeated laboratory
experiments run at different times, each with its own control. You
should select a paired test when values in one group are more closely
correlated with a specific value in the other group than with random values
in the other group. It is only appropriate to select a paired test when the
subjects were matched or paired before the data were collected. You cannot
base the pairing on the data you are analyzing.

FISHER'S TEST OR THE CHI-SQUARE TEST?

When
analyzing contingency tables with two rows and two columns, you can use
either Fisher's exact test or the chi-square test. Fisher's test is the
best choice as it always gives the exact P value. The chi-square test is
simpler to calculate but yields only an approximate P value. If a computer is
doing the calculations, you should choose Fisher's test unless you prefer the
familiarity of the chi-square test. You should definitely avoid the
chi-square test when the numbers in the contingency table are very small (any
number less than about six). When the numbers are larger, the P values
reported by the chi-square and Fisher's test will be very similar. The
chi-square test calculates approximate P values, and the Yates' continuity
correction is designed to make the approximation better. Without the Yates'
correction, the P values are too low. However, the correction goes too far,
and the resulting P value is too high. Statisticians give different
recommendations regarding Yates' correction. With large sample sizes, the
Yates' correction makes little difference. If you select Fisher's test, the P
value is exact and Yates' correction is not needed and is not available.

REGRESSION OR CORRELATION?

Linear
regression and correlation are similar and easily confused. In some
situations it makes sense to perform both calculations. Calculate linear
correlation if you measured both X and Y in each subject and wish to quantify
how well they are associated. Select the Pearson (parametric) correlation
coefficient if you can assume that both X and Y are sampled from Gaussian
populations. Otherwise choose the Spearman nonparametric correlation
coefficient. Don't calculate the correlation coefficient (or its confidence
interval) if you manipulated the X variable. Calculate
linear regressions only if one of the variables (X) is likely to precede or
cause the other variable (Y). Definitely choose linear regression if you
manipulated the X variable. It makes a big difference which variable is
called X and which is called Y, as linear regression calculations are not
symmetrical with respect to X and Y. If you swap the two variables, you will
obtain a different regression line. In contrast, linear correlation
calculations are symmetrical with respect to X and Y. If you swap the labels
X and Y, you will still get the same correlation coefficient.
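These distinctions can be seen numerically. A minimal sketch using SciPy (assumed available; the data are simulated, with X manipulated and Y depending on it):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=50)           # predictor / manipulated variable
y = 3.0 * x + rng.normal(scale=2.0, size=50)  # response that depends on x

# Correlation is symmetric: swapping X and Y gives the same coefficient
r_xy, _ = stats.pearsonr(x, y)
r_yx, _ = stats.pearsonr(y, x)

# Spearman (nonparametric) correlation, if Gaussian sampling cannot be assumed
rho, _ = stats.spearmanr(x, y)

# Regression is not symmetric: swapping X and Y gives a different line
slope_xy = stats.linregress(x, y).slope  # regress Y on X
slope_yx = stats.linregress(y, x).slope  # regress X on Y

print(f"Pearson r (x,y) = {r_xy:.3f}, (y,x) = {r_yx:.3f}")
print(f"Spearman rho    = {rho:.3f}")
print(f"slope of y on x = {slope_xy:.3f}, slope of x on y = {slope_yx:.3f}")
```

The two regression slopes differ, but their product equals the square of the correlation coefficient, which is one way to see how the two calculations are related.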
SINGLE SUBJECT RESEARCH

Single
subject research is a study which aims to examine whether an intervention has
the intended effect on an individual, or on many individuals viewed as one
group.
The two most common single subject research designs are the A-B-A-B design,
and the multiple baseline design. Each of these designs has two main
components: (1) a focus on the individual and (2) a design in which each
individual is used as his or her own control observation. The focus on the
individual differs from other research designs, such as experimental and
quasi-experimental designs, which look at the average effect of an
intervention within or between groups of people. In single subject research,
researchers often use more than one individual, but results are examined by
using each individual as his or her own control, rather than averaging
results of different groups. Comparisons are made between an individual's behavior at one point in time and that same individual's behavior at another. Single
subject research has an important role to play in identifying and documenting
solutions for individuals with disabilities. The field needs much more
evidence on what works for whom, under what conditions, for which tasks, etc.
Although individuals with disabilities—even those with the same
diagnosis—often experience unique needs, solutions may be adaptable in
different environments, and knowledge sharing can inform others working on
assistive solutions.
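As an illustration of the A-B-A-B logic described above, a brief sketch (the data are hypothetical, and summarizing each phase by its mean is just one common approach):

```python
# Hypothetical A-B-A-B data for one individual: the outcome is measured at
# each session, and conditions alternate baseline (A) and intervention (B).
phases = {
    "A1": [12, 14, 13, 15],  # first baseline
    "B1": [8, 7, 6, 7],      # first intervention
    "A2": [13, 12, 14, 13],  # withdrawal (return to baseline)
    "B2": [6, 5, 6, 5],      # reintroduced intervention
}

def phase_mean(values):
    """Mean level of the outcome within one phase."""
    return sum(values) / len(values)

# The individual serves as his or her own control: the effect is judged by
# whether the level shifts each time the condition changes (A->B and B->A).
for phase, values in phases.items():
    print(f"{phase}: mean level = {phase_mean(values):.2f}")
```

If the level changes in the expected direction at every phase change, that pattern within one individual is the evidence of an intervention effect.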
Elements of single subject research
Examples and additional resources
MARKET RESEARCH

Market
research is aimed at determining the needs and expectations of consumers (end
users or purchasers) in different marketplaces. A critical component of
conducting business, market research is meant to guide what businesses
develop and how they market their products. There are two types of market
research: primary research and secondary research. Primary research consists
of organizations conducting first-hand research to solve specific problems,
determine consumer needs, or discover specific opportunities. Primary
research is conducted by the organization or is contracted out to a market
research firm. Secondary research consists of an organization reviewing
pre-existing data and/or information which may help the organization
understand specific problems, determine consumer needs, or discover specific
opportunities.
Elements of market research
Examples and additional resources
USABILITY STUDIES

Usability
studies are aimed at determining the ease of use of a particular device,
software, or technology. They are conducted in order to inform product developers of user-interface barriers and design errors. Usability studies
take place in controlled conditions or natural settings, where researchers
can observe subjects attempting to use a device or technology for its
intended purpose. Many factors are taken into account when measuring usability,
including: the ease with which novice users can accomplish basic tasks, the
length of time it takes users to accomplish basic tasks, the types of
mistakes users make and how frequently they make them, and the attitudes
users take towards the technology. The technology’s compatibility with
various assistive devices, features, and software programs is also a
consideration.

· What are the benefits of conducting a usability study?
· When should I conduct a usability study?
· What are the resources needed to conduct a usability study?

Elements of a usability study
· Study participant recruitment
· Observation data
· Survey data/questionnaires

Examples and additional resources
· Further resources
· A real world example

CASE STUDY

A
case study is a detailed investigation of a single individual or group. Case
studies can be qualitative or quantitative in nature, and often combine
elements of both. The defining feature of a case study is its holistic
approach: it aims to capture all of the details of a particular individual or group (a small group, a classroom, or even a school) that are relevant to the purpose of the study, within a real-life context.[1]
To do this, case studies rely on multiple sources of data, including
interviews, direct observation, video and audio tapes, internal documents,
and artifacts. The final report or write-up is a narrative with thick, rich
descriptions. Increasingly, case studies are being presented as multimedia
packages, such as a documentary, to showcase the uniqueness and complexities
of the context. Case
studies can be used for descriptive, explanatory, or exploratory purposes
(Yin, 1993).[2]
For any of these purposes, there are two distinct case study designs:
single-case study design and multiple-case study design. Single-case studies
are just that: an examination of one individual or group. In choosing a case,
researchers may purposely select atypical, or outlier, cases. An outlier case
tends to yield more information than average cases. Multiple-case studies use
replication, which is the deliberate process of choosing cases that are
likely to show similar results. This helps to examine how generalizable the
findings may be (see section on validity).
Elements of a case study
Examples and additional resources
WHAT STATISTICAL ANALYSIS SHOULD I USE?

The following table shows general
guidelines for choosing a statistical analysis. We emphasize that these are
general guidelines and should not be construed as hard and fast rules.
Usually your data could be analyzed in multiple ways, each of which could
yield legitimate answers. The table below covers a number of common analyses
and helps you choose among them based on the number of dependent variables (sometimes referred to as outcome variables) and the nature of your independent variables (sometimes referred to as predictors). You also want to consider
the nature of your dependent variable, namely whether it is an interval, ordinal, or categorical variable, and whether it is normally
distributed (see What
is the difference between categorical, ordinal and interval variables?
for more information on this). The table then shows one or more statistical
tests commonly used given these types of variables (but not necessarily the
only type of test that could be used) and links showing how to do such tests
using SAS, Stata and SPSS.
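For instance, the categorical-dependent, categorical-independent case leads to a contingency-table test. A minimal SciPy sketch (assumed available; the counts are made up for illustration) computing both the chi-square and Fisher's exact P values for a 2x2 table:

```python
import numpy as np
from scipy import stats

# 2x2 contingency table: rows are groups, columns are outcome counts
table = np.array([[12, 5],
                  [6, 14]])

# Chi-square test: approximate P value (Yates' correction is applied by
# default for 2x2 tables in SciPy)
chi2, chi2_p, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test: exact P value, no correction needed
odds_ratio, fisher_p = stats.fisher_exact(table)

print(f"chi-square P     = {chi2_p:.4f} (dof = {dof})")
print(f"Fisher's exact P = {fisher_p:.4f}")
```

With counts this small, the earlier advice applies: prefer the exact Fisher P value over the chi-square approximation.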
This page was adapted from Choosing the Correct
Statistic developed by James D. Leeper, Ph.D. We thank Professor Leeper
for permission to adapt and distribute this page from our site.
These concepts can be combined to make a simple model for choosing the correct statistical test.

                               Dependent Variable
                               Categorical       Continuous
Independent    Categorical     Chi Square        t-test, ANOVA
Variable       Continuous      LDA, QDA          Regression

Whichever cell applies, the chosen test is then used to reject or accept the null hypothesis.
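The model above can also be written down as a small lookup table in code; a minimal sketch (the table and function names are illustrative, not from any library):

```python
# Illustrative lookup table: map the types of the dependent and
# independent variables to the commonly used analysis shown above.
SUGGESTED_TEST = {
    ("categorical", "categorical"): "chi-square test",
    ("continuous", "categorical"): "t-test / ANOVA",
    ("categorical", "continuous"): "LDA / QDA",
    ("continuous", "continuous"): "linear regression",
}

def suggest(dependent: str, independent: str) -> str:
    """Return a commonly used analysis for the given variable types.

    A starting point only: distribution, pairing, and sample size
    (discussed earlier) also affect the choice.
    """
    return SUGGESTED_TEST[(dependent, independent)]

print(suggest("continuous", "categorical"))  # t-test / ANOVA
```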