What kind of research allows researchers to manipulate or control a situation?


Unlike a descriptive study, an experiment is a study in which a treatment, procedure, or program is intentionally introduced and a result or outcome is observed. The American Heritage Dictionary of the English Language defines an experiment as "A test under controlled conditions that is made to demonstrate a known truth, to examine the validity of a hypothesis, or to determine the efficacy of something previously untried."

True experiments have four elements: manipulation, control, random assignment, and random selection. The most important of these elements are manipulation and control. Manipulation means that something is purposefully changed by the researcher in the environment. Control is used to prevent outside factors from influencing the study outcome. When something is manipulated and controlled and the outcome then happens, we can be more confident that the manipulation "caused" the outcome. In addition, experiments involve highly controlled and systematic procedures in an effort to minimize error and bias, which also increases our confidence that the manipulation "caused" the outcome.

 

Another key element of a true experiment is random assignment. Random assignment means that if there are groups or treatments in the experiment, participants are assigned to these groups or treatments randomly [like the flip of a coin].

This means that no matter who the participant is, he/she has an equal chance of getting into all of the groups or treatments in an experiment. This process helps to ensure that the groups or treatments are similar at the beginning of the study, so that there is more confidence that the manipulation [group or treatment] "caused" the outcome.
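The coin-flip idea above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed procedure; the function name and participant labels are invented:

```python
import random

def randomly_assign(participants, groups=("treatment", "control"), seed=None):
    """Assign each participant to a group purely by chance.

    Shuffling before dealing gives every participant an equal chance
    of landing in any group, which is the defining property of
    random assignment.
    """
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    # Deal participants out round-robin so group sizes stay balanced.
    assignment = {g: [] for g in groups}
    for i, person in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

groups = randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
print(groups)  # each participant lands in exactly one group
```

Because the shuffle happens before any group label is attached, no characteristic of a participant can influence which group he or she ends up in.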

Definition: An experiment is a study in which a treatment, procedure, or program is intentionally introduced and a result or outcome is observed.

 

Case Example for Experimental Study

Experimental Studies — Example 1

An investigator wants to evaluate whether a new technique to teach math to elementary school students is more effective than the standard teaching method. Using an experimental design, the investigator divides the class randomly [by chance] into two groups and calls them "Group A" and "Group B." The students cannot choose their own group. The random assignment process results in two groups that should share equal characteristics at the beginning of the experiment.

In Group A, the teacher uses a new teaching method to teach the math lesson. In Group B, the teacher uses a standard teaching method to teach the math lesson. The investigator compares test scores at the end of the semester to evaluate the success of the new teaching method compared to the standard teaching method. At the end of the study, the results indicated that the students in the new teaching method group scored significantly higher on their final exam than the students in the standard teaching group.

Experimental Studies — Example 2

A fitness instructor wants to test the effectiveness of a performance-enhancing herbal supplement on students in her exercise class. To create experimental groups that are similar at the beginning of the study, the students are assigned into two groups at random [they cannot choose which group they are in]. Students in both groups are given a pill to take every day, but they do not know whether the pill is a placebo [sugar pill] or the herbal supplement. The instructor gives Group A the herbal supplement and Group B receives the placebo [sugar pill]. The students' fitness level is compared before and after six weeks of consuming the supplement or the sugar pill. No differences in performance ability were found between the two groups, suggesting that the herbal supplement was not effective.
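The comparison logic of Example 2, averaging each group's change from pretest to posttest and comparing the two averages, can be sketched as follows. All of the scores below are invented purely for illustration:

```python
from statistics import mean

# Hypothetical fitness scores, invented for illustration only.
# Each pair is (score before, score after six weeks).
supplement = [(20, 24), (22, 25), (19, 23), (21, 24)]
placebo    = [(20, 23), (23, 26), (18, 22), (22, 26)]

def mean_change(pairs):
    """Average improvement from pretest to posttest."""
    return mean(after - before for before, after in pairs)

diff = mean_change(supplement) - mean_change(placebo)
print(f"supplement gain: {mean_change(supplement):.2f}")
print(f"placebo gain:    {mean_change(placebo):.2f}")
print(f"difference:      {diff:.2f}")
# A between-group difference near zero, as in these invented numbers,
# is consistent with the study's finding of no supplement effect.
```

In a real study the instructor would also apply a significance test to the difference, since a small gap between group means can arise by chance alone.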

Research designs are platforms used to explore new knowledge in order to better understand phenomena, clarify explanations, and identify causative factors. Although there are no hard rules for choosing a design, one must realize the consequences of choosing one design over another. One should choose the design that best addresses the conceptual issues presented.

Some questions that would help you decide which quantitative design is most appropriate for your study include:

  1. How much do you know about the variables of interest?
  2. Are you manipulating the levels of the independent variable?
  3. How many independent variables are being tested?
  4. How many levels does each independent variable have, and are these levels experimental or control?
  5. How many groups of subjects are being tested?
  6. How will subjects be selected?
  7. Can subjects be randomly assigned to groups?
  8. Is pretest data being collected?
  9. Are you interested in examining differences or similarities between groups?
  10. How often will observations of responses be made?
  11. What is the temporal [time] sequence of interventions and measurements?
The research design gives a study its backbone structure. There are several concepts to understand when choosing a research design; these are summarized below.

Concepts to Consider when Designing a Research Study
  • Manipulation. What is actively directed or managed by the researcher.  Usually refers to giving a treatment or not giving a treatment [or giving one of many levels of treatment] to a group of subjects.  Assigning groups to various levels of the independent variable[s].
  • Control. The extent to which the researcher can manage extraneous sources that might affect a study and lead to incorrect scientific conclusions.
  • Random selection. Randomly drawing research subjects from the target population.
  • Random assignment. Allocating subjects to treatment and control conditions in a nonsystematic way, using a method that is known to be random.
  • Probability. [as related to research findings] The likelihood that research findings are low in uncertainty and error; that the findings are trustworthy and believable.
  • Bias. The difference between the true and the observed.  Undesirable influences on outcomes in research.
  • Causality. Determining the cause-and-effect relationship[s] that exist between variables.
QUANTITATIVE RESEARCH DESIGNS: Experimental vs. Nonexperimental

EXPERIMENTAL DESIGN
  A. Manipulate variables to bring about an effect
  B. All relevant variables have been defined so that they can be manipulated, controlled, and studied
  C. Random selection and random assignment occur to improve control

NONEXPERIMENTAL DESIGN
  A. Observe variables and effects
  B. Used to identify/measure/describe variables and/or determine relationships for further [experimental] study
  C. Manipulation, control, and randomization are lacking

EXPERIMENTAL DESIGN

The purpose of an experimental design is to provide a structure for evaluating the cause-and-effect relationship between a set of independent and dependent variables.

Some of the elements of an experimental design:

  1. Manipulation - the researcher manipulates the levels of the independent variable. This usually means that you are looking at the effect of some treatment on one group of subjects [the treatment group] and comparing that to another group of subjects who do not receive the treatment [the control group]. The type of treatment here is the independent variable that gets "manipulated". The idea of manipulation is that the researcher "manipulates" by assigning some subjects to the treatment group and the other subjects to the control group. The researcher does not have to use a control group; the design may incorporate two or more "treatments", or various levels of the same treatment, that are compared. It is the effect of this "manipulation" that is measured to determine the result of the experimental treatment. Another variation on the control group is the "attention control group".  This group would get some "neutral" experimental attention/treatment, but not the treatment variable being studied.  This allows the researcher to look at three groups [experimental, attention control, and "silent" control] and better control for the Hawthorne effect.
  2. Control - the researcher incorporates elements of control so that the evidence supporting a causal relationship can be interpreted with confidence. Using a control group is only one aspect of control. Control is acquired through manipulation, randomization, the use of control groups, and methods to handle extraneous variables. [more on control below].
  3. Randomization - subjects are randomly assigned to at least two comparison groups.
      • With randomization, extraneous variables are evenly distributed between or among groups; even variables you have not thought of yet may be controlled through randomization.
      • Randomization [or random assignment] is not the same thing as random sampling or random selection.
An extraneous variable is a variable that you did not initially intend to include in your design, but this variable might have an influence on your study in a way that would invalidate the results of the study. The researcher attempts to exert control over these extraneous variables in one of three ways:
  1. Selective control - the use of randomization [see above]. For instance: if gender is an extraneous variable, then by randomly assigning subjects to groups, the number of males and females in each group should be evenly distributed, and the variable gender should not affect the outcome for one group versus the other.
  2. Physical control - control of an extraneous variable by making it a constant.  For instance: if gender is an extraneous variable, then only study females. This way the variable gender will not affect the outcome of one group versus the other. The disadvantage of this approach is that it limits your ability to generalize your results. If you only study females, you do not know how this treatment will affect males.
  3. Statistical control - include the extraneous variables in the design. For instance: add gender as another independent variable in your study. This can be a very powerful way to control the effect of an extraneous variable like gender. You will actually analyze the effect of the variable and know how gender affects the outcome of your study.
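The claim behind selective control, that random assignment tends to distribute an extraneous variable such as gender evenly across groups, can be checked with a small simulation. The pool size and labels below are invented for illustration:

```python
import random

rng = random.Random(0)
pool = ["F"] * 50 + ["M"] * 50  # 100 subjects, half female, half male

# Repeat the random assignment many times and record how many
# females end up in Group A each time.
females_in_group_a = []
for _ in range(1000):
    shuffled = pool[:]
    rng.shuffle(shuffled)
    group_a = shuffled[:50]  # first half -> Group A, rest -> Group B
    females_in_group_a.append(group_a.count("F"))

avg = sum(females_in_group_a) / len(females_in_group_a)
print(f"average females in Group A over 1000 assignments: {avg:.1f}")
# The average hovers around 25 of 50, i.e., gender splits roughly
# evenly between the groups without the researcher doing anything.
```

Any single assignment can be somewhat lopsided, which is why randomization guards against bias on average rather than guaranteeing perfectly balanced groups in one study.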
Cause and Effect
  • In order to postulate a cause and effect relationship, the following conditions must be met:
  1. the causal variable and the effect variable must be associated with each other
  2. the cause must precede the effect
  3. the relationship/association must not be explainable by another [extraneous] variable
  • The validity of the conclusion of cause and effect depends on the control of extraneous variables.
Advantages of an experimental design:
  • can test cause and effect relationships
  • provides better control of the experiment and minimizes threats to validity.
Disadvantages of an experimental design:
  • for many studies, all extraneous variables cannot be identified or controlled
  • not all variables can be experimentally manipulated
  • some experimental procedures may not be practical or ethical in a clinical setting
  • the Hawthorne effect may be strong in an experimental situation, and not so in an ex post facto study.
Be able to compare and contrast the following types of experimental designs:
    True experimental design: including random selection [random sampling], pretest/posttest, random assignment, manipulation of the levels of the independent variable[s], including a control group.
    Posttest-only [after-only] design: you must assume that randomization assures pre-experimental group equivalence.
    Solomon four-group design: combines the true experimental and the posttest-only designs.  Allows you to evaluate the effect of the pretest on the posttest scores, and any interaction between the test and experimental condition.
    Factorial designs:  Allows the researcher to examine the effects of one or more interventions on different factors or levels of variables in the study.  Used for statistical control.  Tends to increase sample size because you want to have enough subjects in each "cell" of the design.
    Counterbalanced [crossover] designs:  when more than one intervention [treatment] is used, and you want to know the effect of manipulating the order in which the treatments are given.
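A counterbalanced design manipulates the order in which treatments are given so that order effects cancel out across subjects. A sketch of generating and distributing those orders, with invented treatment and subject labels:

```python
from itertools import permutations

treatments = ["A", "B", "C"]  # three hypothetical interventions
orders = list(permutations(treatments))  # all 6 possible sequences

subjects = [f"S{i}" for i in range(1, 13)]
# Cycle subjects through the orders so each sequence is used equally
# often, which is what counterbalances order effects.
schedule = {s: orders[i % len(orders)] for i, s in enumerate(subjects)}

for subject, order in schedule.items():
    print(subject, "->", " then ".join(order))
```

With three treatments there are six possible orders, so a sample size that is a multiple of six keeps the counterbalancing complete; with many treatments, researchers often fall back on a Latin square subset rather than all permutations.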

DePoy gives three criteria that need to be met to determine that a true experimental design is appropriate:

  1. there is sufficient development of theory to warrant the proposition of causality
  2. you have asked a causal quantitative question
  3. conditions exist, and legal and ethical issues allow for the random selection of subjects [random sampling], random assignment to groups, use of a pretest/posttest, and manipulation of the levels of the independent variable[s], including a control group
The basic characteristics distinguishing a true experimental design from a quasi-experimental or nonexperimental design are randomization and comparison of groups. If you are trying to distinguish between experimental and quasi-experimental, look to see whether there is random assignment of subjects to groups and a comparison of these groups. Comparison: is there some experimental group getting an experimental treatment, and another group getting a different experimental treatment, or a group getting no experimental treatment [control group]?
QUASI-EXPERIMENTAL DESIGNS

It is not always possible to implement a design that meets the three criteria of a true experimental study [manipulation, control and randomization]. Quasi-experimental designs differ from experimental designs because either there is no control group or randomization cannot occur.

Types of quasi-experimental designs:

1. Nonequivalent control group design:

  • One of the most frequently used designs.
  • It is the same as experimental except control group subjects are not randomly assigned.
  • Advantages -
    • in spite of the lack of randomization, the use of a control group and posttest increases the strength of the design
    • good pretest data allows for improvement of analysis results
  • Disadvantages -
    • Threats to internal and external validity, e.g., a selection threat, since subjects are not randomly assigned.
    • There are limits to the extent to which one can infer causality.
2. After-only nonequivalent control group design:
  • Similar to the after-only experimental design, but randomization is not used to assign subjects to groups.
  • This design assumes the groups were equivalent before the independent variable was introduced.
  • This type of design can be used to study the effects of natural or man-made catastrophes such as hurricanes, floods, or plane crashes.
  • Use of good demographic data to further describe the subjects in each group would strengthen this type of study design.
  • Advantage - Design is simple and quick.
  • Disadvantage - Cannot be used to assess causality. Reduces internal validity.
3. Time series design:
  • This is useful for determining trends where it is not possible to have a control group.
  • No randomization occurs when only one group is available.
  • Time series design significantly increases the strength of the study.
  • Advantages -
    • Repeated pretreatment observations help to control for maturation.
    • Repeated post treatment measures allow one to determine if change is maintained over time.
  • Disadvantage - History may threaten internal validity.
PRE-EXPERIMENTAL DESIGNS

One-shot case study: there is manipulation in that a "treatment" [independent variable] is given, and the dependent variable is then measured, but there is no randomization and essentially no control.
Pretest-posttest design:  no randomization, not much control.  Like a one-shot case study with a pretest.
Static group comparison: still no randomization, but maybe a little more control.  You have a "control group", but those subjects are non-equivalent, and there is no pretest to see how equivalent they might have been.

NONEXPERIMENTAL DESIGNS

These designs are used in situations where manipulation of an independent variable, control, or randomization is not involved. These designs focus on describing and measuring independent and dependent variables; they are sometimes called descriptive research designs. Nonexperimental research does not establish causality. The goal is to describe phenomena and to explore and explain relationships between variables.

What research design allows a researcher to control or manipulate the situation or its subject?

Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, "What causes something to occur?" It permits the researcher to identify cause-and-effect relationships between variables and to distinguish placebo effects from treatment effects.

What is control and manipulation in research?

Manipulation means that something is purposefully changed by the researcher in the environment. Control is used to prevent outside factors from influencing the study outcome. When something is manipulated and controlled and then the outcome happens, it makes us more confident that the manipulation "caused" the outcome.

What type of research allows a researcher to control a variable?

Scientists use controlled experiments because they allow for precise control of extraneous and independent variables. This allows a cause and effect relationship to be established.

What is a manipulation research?

Experimental manipulation describes the process by which researchers purposefully change, alter, or influence the independent variables [IVs], which are also called treatment variables or factors, in an experimental research design.
