Unlike a descriptive study, an experiment is a study in which a treatment, procedure, or program is intentionally introduced and a result or outcome is observed. The American Heritage Dictionary of the English Language defines an experiment as "A test under controlled conditions that is made to demonstrate a known truth, to examine the validity of a hypothesis, or to determine the efficacy of something previously untried."
This means that every participant, no matter who they are, has an equal chance of being placed in any of the groups or treatments in an experiment. This process helps ensure that the groups or treatments are similar at the beginning of the study, so that there is more confidence that the manipulation [group or treatment] "caused" the outcome. More information about random assignment may be found in section
Definition: An experiment is a study in which a treatment, procedure, or program is intentionally introduced and a result or outcome is observed.
Case Example for Experimental Study
Experimental Studies — Example 1
An investigator wants to evaluate whether a new technique to teach math to elementary school students is more effective than the standard teaching method. Using an experimental design, the investigator divides the class randomly [by chance] into two groups and calls them "Group A" and "Group B." The students cannot choose their own group. The random assignment process results in two groups that should share equal characteristics at the beginning of the experiment.
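The random division described above can be sketched in code. This is a minimal illustration, not part of the original study; the class size and group labels are assumptions.

```python
import random

def randomly_assign(students, seed=None):
    """Randomly split a roster into two equal-sized groups, A and B.

    Every student has the same chance of landing in either group,
    which is the core idea of random assignment.
    """
    rng = random.Random(seed)
    roster = list(students)
    rng.shuffle(roster)            # chance alone decides the ordering
    half = len(roster) // 2
    return {"Group A": roster[:half], "Group B": roster[half:]}

# A hypothetical class of 20 students, identified by number
groups = randomly_assign(range(1, 21), seed=42)
print(groups["Group A"])
print(groups["Group B"])
```

Because the shuffle is driven by chance rather than by the students' choices, the two groups should share similar characteristics at the start of the experiment.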
Experimental Studies — Example 2
A fitness instructor wants to test the effectiveness of a performance-enhancing herbal supplement on students in her exercise class. To create experimental groups that are similar at the beginning of the study, the students are assigned to two groups at random [they cannot choose which group they are in]. Students in both groups are given a pill to take every day, but they do not know whether the pill is a placebo [sugar pill] or the herbal supplement. The instructor gives Group A the herbal supplement and Group B the placebo. The students' fitness levels are compared before and after six weeks of taking the supplement or the placebo. No differences in performance were found between the two groups, suggesting that the herbal supplement was not effective.
Research designs are platforms for exploring new knowledge in order to better understand phenomena, clarify explanations, and identify causative factors. Although there are no firm rules for choosing a design, one must understand the consequences of choosing one design over another. Choose the design that best addresses the conceptual issues presented.
Some questions that would help you decide which quantitative design is most appropriate for your study include:
- How much do you know about the variables of interest?
- Are you manipulating the levels of the independent variable?
- How many independent variables are being tested?
- How many levels does each independent variable have, and are these levels experimental or control?
- How many groups of subjects are being tested?
- How will subjects be selected?
- Can subjects be randomly assigned to groups?
- Is pretest data being collected?
- Are you interested in examining differences or similarities between groups?
- How often will observations of responses be made?
- What is the temporal [time] sequence of interventions and measurements?
- Manipulation. What the researcher actively directs or manages. Usually refers to giving or withholding a treatment [or giving one of several levels of a treatment] to a group of subjects, and to assigning groups to the various levels of the independent variable[s].
- Control. The extent to which the researcher can manage extraneous sources that might affect a study and lead to incorrect scientific conclusions.
- Random selection. Randomly drawing research subjects from the target population.
- Random assignment. Allocating subjects to treatment and control conditions in a nonsystematic way, using a method that is known to be random.
- Probability. [as related to research findings] The likelihood that research findings have low uncertainty and error; that the findings are trustworthy and believable.
- Bias. The difference between the true and the observed. Undesirable influences on outcomes in research.
- Causality. Determining the cause-and-effect relationship[s] that exist between variables.
The purpose of an experimental design is to provide a structure for evaluating the cause-and-effect relationship between a set of independent and dependent variables.
Some of the elements of an experimental design:
- Manipulation - the researcher manipulates the levels of the independent variable. This usually means looking at the effect of some treatment on one group of subjects [the treatment group] and comparing it to another group of subjects who do not receive the treatment [the control group]. The type of treatment is the independent variable that gets "manipulated": the researcher "manipulates" by assigning some subjects to the treatment group and the others to the control group. The researcher does not have to use a control group; the design may instead compare two or more "treatments", or various levels of the same treatment. It is the effect of this "manipulation" that is measured to determine the result of the experimental treatment. Another variation on the control group is the "attention control group", which receives some "neutral" experimental attention/treatment but not the treatment variable being studied. This allows the researcher to compare three groups [experimental, attention control, and "silent" control] and better control for the Hawthorne effect.
- Control - the researcher incorporates elements of control so that the evidence supporting a causal relationship can be interpreted with confidence. Using a control group is only one aspect of control. Control is acquired through manipulation, randomization, the use of control groups, and methods to handle extraneous variables. [more on control below].
- Randomization - subjects are randomly assigned to at least two comparison groups.
- With randomization, extraneous variables are evenly distributed between or among groups; even variables you have not thought of yet may be controlled through randomization.
- Randomization [or random assignment] is not the same thing as random sampling or random selection.
- Selective control - the use of randomization [see above]. For instance, if gender is an extraneous variable, then by randomly assigning subjects to groups, the numbers of males and females in each group should be roughly equal, and the variable gender should not affect the outcome for one group versus the other.
- Physical control - control of an extraneous variable by making it a constant. For instance, if gender is an extraneous variable, then study only females. This way the variable gender cannot affect the outcome of one group versus the other. The disadvantage of this approach is that it limits your ability to generalize your results: if you study only females, you do not know how the treatment will affect males.
- Statistical control - include the extraneous variable in the design. For instance, add gender as another independent variable in your study. This can be a very powerful way to control the effect of an extraneous variable like gender: you will actually analyze the effect of the variable and know how gender affects the outcome of your study.
- In order to postulate a cause and effect relationship, the following conditions must be met:
- the causal variable and the effect variable must be associated with each other
- the cause must precede the effect
- the relationship/association must not be explainable by another [extraneous] variable
- The validity of the conclusion of cause and effect depends on the control of extraneous variables.
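The claim that randomization evenly distributes extraneous variables can be illustrated with a small simulation. This is a sketch with made-up data: the sample size, seed, and the use of gender as the extraneous variable are assumptions for illustration only.

```python
import random

# Simulate 1,000 subjects with a recorded extraneous variable (gender),
# then randomly assign them to treatment and control groups.
rng = random.Random(0)
subjects = [{"id": i, "gender": rng.choice(["F", "M"])} for i in range(1000)]

rng.shuffle(subjects)                       # random assignment
treatment, control = subjects[:500], subjects[500:]

def pct_female(group):
    """Proportion of the group whose recorded gender is 'F'."""
    return sum(s["gender"] == "F" for s in group) / len(group)

# With random assignment, the two proportions should be close, so gender
# is unlikely to explain a difference in outcomes between the groups.
print(round(pct_female(treatment), 2), round(pct_female(control), 2))
```

The two printed proportions come out nearly equal, which is the sense in which randomization "controls" an extraneous variable without the researcher having to measure or model it.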
Advantages of experimental designs:
- can test cause-and-effect relationships
- provide better control of the experiment and minimize threats to validity
Disadvantages of experimental designs:
- in many studies, not all extraneous variables can be identified or controlled
- not all variables can be experimentally manipulated
- some experimental procedures may not be practical or ethical in a clinical setting
- the Hawthorne effect may be strong in an experimental situation, but not in an ex post facto study
True experimental design: includes random selection [random sampling], a pretest/posttest, random assignment, manipulation of the levels of the independent variable[s], and a control group.
Posttest-only [after-only] design: you must assume that randomization assures pre-experimental group equivalence.
Solomon four-group design: combines the true experimental and the posttest-only designs. Allows you to evaluate the effect of the pretest on the posttest scores, and any interaction between the test and the experimental condition.
Factorial designs: Allow the researcher to examine the effects of one or more interventions across the factors or levels of other variables in the study. Used for statistical control. Tend to increase sample size because you need enough subjects in each "cell" of the design.
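The "cells" of a factorial design are simply the combinations of levels of the independent variables, which can be enumerated mechanically. The variables and levels below are invented for illustration.

```python
from itertools import product

# A hypothetical 2x3 factorial design: two independent variables,
# where each "cell" is one combination of levels.
treatment_levels = ["placebo", "supplement"]
exercise_levels = ["none", "moderate", "intense"]

cells = list(product(treatment_levels, exercise_levels))
for cell in cells:
    print(cell)

# 2 levels x 3 levels = 6 cells; each cell needs enough subjects,
# which is why factorial designs tend to increase total sample size.
print(len(cells))
```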
Counterbalanced [crossover] designs: used when more than one intervention [treatment] is given, and you want to know the effect of manipulating the order in which the treatments are given.
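The treatment orders that a counterbalanced design must cover can be generated as permutations. The three treatment labels here are hypothetical.

```python
from itertools import permutations

# In a counterbalanced (crossover) design, every subject receives every
# treatment, but subjects differ in the ORDER of the treatments, so the
# effect of order itself can be examined.
treatments = ["A", "B", "C"]
orders = list(permutations(treatments))

# 3 treatments -> 3! = 6 possible orders; subjects are divided among them.
for order in orders:
    print(" -> ".join(order))
```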
DePoy gives three criteria that need to be met to determine that a true experimental design is appropriate:
- there is sufficient development of theory to warrant the proposition of causality
- you have asked a causal quantitative question
- conditions exist, and legal and ethical issues allow for the random selection of subjects [random sampling], random assignment to groups, use of a pretest/posttest, and manipulation of the levels of the independent variable[s], including a control group
QUASI EXPERIMENTAL DESIGNS
It is not always possible to implement a design that meets the three criteria of a true experimental study [manipulation, control and randomization]. Quasi-experimental designs differ from experimental designs because either there is no control group or randomization cannot occur.
Types of quasi experimental designs:
1. Nonequivalent control group design:
- One of the most frequently used designs.
- It is the same as the true experimental design, except that subjects are not randomly assigned to groups.
- Advantages -
- in spite of the lack of randomization, the use of a control group and posttest increases the strength of the design
- good pretest data allows for improvement of analysis results
- Disadvantages -
- Threats to internal and external validity, e.g., a selection threat, because subjects are not randomly assigned.
- There are limits to the extent to which one can infer causality.
2. After-only nonequivalent control group design:
- Similar to the after-only experimental design, but randomization is not used to assign subjects to groups.
- This design assumes the groups were equivalent before the independent variable was introduced.
- This type of design can be used to study the effects of natural or man-made catastrophes such as hurricanes, floods, or plane crashes.
- Use of good demographic data to further describe the subjects in each group strengthens this type of study design.
- Advantage - the design is simple and quick.
- Disadvantage - cannot be used to assess causality; internal validity is reduced.
3. Time series design:
- Useful for determining trends when it is not possible to have a control group.
- No randomization occurs, as only one group is available.
- Taking repeated measures over time significantly increases the strength of the study.
- Advantages -
- Repeated pretreatment observations help to control for maturation.
- Repeated posttreatment measures allow one to determine whether change is maintained over time.
- Disadvantage - history may threaten internal validity.
Pretest-posttest design: no randomization, not much control. Like a one-shot case study with a pretest.
Static group comparison: still no randomization, but perhaps a little more control. You have a "control group", but its subjects are non-equivalent, and there is no pretest to show how equivalent they might have been.

NONEXPERIMENTAL DESIGNS
These designs are used in situations where manipulation of an independent variable, control, or randomization is not involved. They focus on describing and measuring independent and dependent variables, and are sometimes called descriptive research designs. Nonexperimental research does not establish causality; the goal is to describe phenomena and to explore and explain relationships between variables.