Describe two ways that researchers attempt to control extraneous variables

Extraneous variables are any variables that you are not intentionally studying in your experiment or test. When you run an experiment, you’re looking to see if one variable (the independent variable) has an effect on another variable (the dependent variable). In an ideal world you’d run the experiment, check the results, and voila! Unfortunately…like many things in life…it’s a little more complicated than that. Other variables, perhaps ones that never crossed your mind, might influence the outcome of an experiment. These undesirable variables are called extraneous variables.


A simple example: you want to know if online learning increases student understanding of statistics. One group uses an online knowledge base to study, the other group uses a traditional text. Extraneous variables could include prior knowledge of statistics; you would have to make sure that group A roughly matched group B in prior knowledge before starting the study. Other extraneous variables could include the amount of support in the home, socioeconomic status, or the temperature of the testing room.

Types of Extraneous Variables

  1. Demand characteristics: environmental cues that tell the participant how to behave, such as features of the surroundings or the researcher’s non-verbal behavior.
  2. Experimenter / Investigator Effects: where the researcher unintentionally affects the outcome by giving clues to the participants about how they should behave.
  3. Participant variables, like prior knowledge, health status or any other individual characteristic that could affect the outcome.
  4. Situational variables, like noise, lighting or temperature in the environment.

Confounding Extraneous Variable

One type of extraneous variable is called a confounding variable. Confounding variables directly affect how the independent variable acts on the dependent variable. They can muddle your results, leading you to think that there is cause and effect when in fact there is not. In the above example, a confounding variable could be introduced if the researcher gave the textbook to students in a low-income school and assigned online learning to students in a higher-income school. As students in higher-income schools typically take more challenging coursework than students in lower-income schools, prior knowledge becomes a confounding extraneous variable.

Extraneous variables should be controlled if possible. One way to control extraneous variables is with random sampling. Random sampling does not eliminate an extraneous variable; it only ensures that it is roughly equal across all groups. If random sampling isn’t used, the effect that an extraneous variable can have on the study results becomes much more of a concern.
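The snippet below is a minimal simulation of this point, using made-up numbers for the online-learning example above: when group assignment depends on prior knowledge (the extraneous variable), the estimated effect is biased; when assignment is randomized, prior knowledge is equal between groups in expectation and the estimate stays close to the assumed true effect. All variable names and effect sizes are hypothetical.

```python
# Minimal simulation (hypothetical numbers) of the online-learning example:
# prior knowledge is an extraneous variable that biases the estimated effect when
# group assignment depends on it, but not when assignment is randomized.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
prior_knowledge = rng.normal(50, 10, n)          # extraneous variable
true_effect = 5.0                                # assumed benefit of online learning

def test_score(online, prior):
    return 40 + 0.5 * prior + true_effect * online + rng.normal(0, 5, len(prior))

# Confounded assignment: students with more prior knowledge get the online course.
online_conf = (prior_knowledge > np.median(prior_knowledge)).astype(float)
scores_conf = test_score(online_conf, prior_knowledge)
biased_diff = scores_conf[online_conf == 1].mean() - scores_conf[online_conf == 0].mean()

# Random assignment: prior knowledge is (in expectation) equal across groups.
online_rand = rng.permutation(online_conf)
scores_rand = test_score(online_rand, prior_knowledge)
unbiased_diff = scores_rand[online_rand == 1].mean() - scores_rand[online_rand == 0].mean()

print(f"confounded estimate: {biased_diff:.1f}, randomized estimate: {unbiased_diff:.1f}")
# The confounded estimate absorbs the prior-knowledge difference; the randomized
# estimate stays close to the assumed true effect of 5.
```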

In contrast to control by elimination, researchers can include the suspected extraneous variables in an experiment. If researchers suspect the gender of the therapist is an extraneous variable, they can include the gender of the therapist as an additional independent variable. Specifically, participants can be assigned to one of four experimental conditions: a treatment with a male therapist, a treatment with a female therapist, a placebo control with a male therapist, and a placebo control with a female therapist. This experimental design enables consideration of the effect of the treatment, the effect of the therapist's gender, and the interaction of both independent variables.
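As a rough illustration of this design, the sketch below simulates the four conditions and fits a two-way model with an interaction term. The data, cell sizes, and effect sizes are invented for the example; the model is fit with the statsmodels formula API.

```python
# Sketch of the 2x2 design described above (treatment vs. placebo crossed with
# therapist gender), with simulated outcome data; effect sizes are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
conditions = [(t, g) for t in ("treatment", "placebo") for g in ("male", "female")]
rows = []
for treat, gender in conditions:
    effect = (3.0 if treat == "treatment" else 0.0) + (0.5 if gender == "female" else 0.0)
    for _ in range(25):                     # 25 participants per cell (assumed)
        rows.append({"treat": treat, "gender": gender,
                     "outcome": effect + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Two-way model: main effect of treatment, main effect of therapist gender,
# and their interaction.
model = smf.ols("outcome ~ C(treat) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))      # main effects and interaction
```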


URL: https://www.sciencedirect.com/science/article/pii/B0123693985003273

Experimental Design: Overview

A.M. Dean, in International Encyclopedia of the Social & Behavioral Sciences, 2001

9.1 Nested or Hierarchical Designs

It is not unusual for extraneous variables to be ‘nested.’ For example, if subjects are recruited and tested separately at different testing centers, the subjects are ‘nested within testing center.’ If the subjects are animals such as mice or piglets, then the subjects are naturally nested within litters, which are nested within parents, which may be nested within laboratories. The nesting information can be used in matched designs, since the nesting forms natural groupings of like subjects. For within-subjects designs, the nesting information can be used during the analysis for examining the different sources of extraneous variation (e.g., Hierarchical Models: Random and Fixed Effects). Designs in which different levels of nesting are assigned different treatment factors are called ‘split-plot designs’ (see Sect. 9.2).
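The sketch below shows one common way to handle this kind of nesting in the analysis: a mixed-effects model with a random intercept for testing center, so that center-to-center differences are treated as a source of extraneous variation. The data, number of centers, and effect sizes are simulated assumptions, not values from the chapter.

```python
# Minimal sketch of handling a nested extraneous variable: subjects are nested
# within testing centers, so a random intercept per center absorbs
# center-to-center variation. All numbers are assumed for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for center in range(8):                       # 8 testing centers (assumed)
    center_shift = rng.normal(0, 2)           # extraneous center-level variation
    for _ in range(20):                       # 20 subjects per center (assumed)
        treatment = rng.integers(0, 2)
        y = 10 + 1.5 * treatment + center_shift + rng.normal(0, 1)
        rows.append({"center": center, "treatment": treatment, "y": y})
df = pd.DataFrame(rows)

# Mixed-effects model: fixed treatment effect, random intercept for center.
fit = smf.mixedlm("y ~ treatment", data=df, groups=df["center"]).fit()
print(fit.summary())
```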

A second type of nesting is a nesting structure within the treatment factors being examined. Examples given by Myers (1979) include memorization of words within grammatical class; time taken to complete problems within difficulty levels. Models and analyses used in such experiments must reflect the nested treatment structure.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767004174

Automated Inference Techniques to Assist With the Construction of Self-Adaptive Software

S. Malek, ... N. Esfahani, in Managing Trade-Offs in Adaptable Software Architectures, 2017

6.5.4.1 Extraneous and confounding variables

Two important risks to knowledge inferred through machine learning are extraneous and confounding variables [40]. Extraneous variables are factors other than the features that may also have an effect on the behavior of the system. An example of an extraneous variable alluded to earlier is the system’s workload, which may impact some of the system’s quality attributes, such as response time. A confounding variable is a special type of extraneous variable that correlates positively or negatively with both the dependent and independent variables. Unlike extraneous variables, which introduce error into the model, a confounding variable could result in identifying incorrect relationships. There are several possible approaches to deal with such problems. One technique is to include factors other than the features (e.g., workload) that may influence the behavior of the software in the learning process as additional independent variables. Additionally, there are several known techniques [41] for testing the causality of the learned models that deserve further research.
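A minimal sketch of the first approach mentioned above, under assumed data: response time is generated from one adaptation feature plus workload, and including workload as an additional independent variable noticeably improves the learned model. The feature names and coefficients are hypothetical.

```python
# Sketch: include the suspected extraneous factor (here, workload) as an
# additional independent variable when learning a model of response time.
# Data and coefficients are simulated for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 500
cache_size = rng.uniform(0, 1, n)            # a hypothetical adaptation feature
workload = rng.uniform(0, 1, n)              # extraneous variable
response_time = 100 - 30 * cache_size + 50 * workload + rng.normal(0, 5, n)

# Model learned from the feature alone: workload shows up as unexplained error.
m1 = LinearRegression().fit(cache_size.reshape(-1, 1), response_time)
# Model that also includes workload as an independent variable.
m2 = LinearRegression().fit(np.column_stack([cache_size, workload]), response_time)

print("R^2 without workload:", m1.score(cache_size.reshape(-1, 1), response_time))
print("R^2 with workload:   ", m2.score(np.column_stack([cache_size, workload]), response_time))
```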


URL: https://www.sciencedirect.com/science/article/pii/B978012802855100006X

The Das–Naglieri Cognitive Assessment System in Theory and Practice

J.P. DAS, JACK A. NAGLIERI, in Handbook of Psychoeducational Assessment, 2001

Subtest Administration Order

It is important to administer the CAS subtests in the prescribed order, to retain the integrity of the test and reduce the influence of extraneous variables on the child's performance. For example, the Planning subtests are administered first, because they give the child flexibility to solve the items in any manner. In contrast, the Attention subtests must be completed in the prescribed order (i.e., left to right and top to bottom). Administering the Planning subtests before the Attention subtests means that the amount of constraint increases over time. If the Attention subtests were administered before the Planning ones, the rigid instructions for the Attention subtests might inhibit the child's performance on the subsequent Planning subtests.


URL: https://www.sciencedirect.com/science/article/pii/B9780120585700500045

Laboratory Experiment: Methodology

J. Bredenkamp, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2.1 Internal Validity

Controls to ensure internal validity promote an unequivocal causal interpretation of the relationship between the independent and dependent variables. In order to avoid the confounding of a known extraneous variable with the independent variable, the control techniques of ‘elimination’ and ‘constancy of conditions’ are employed. An internal error exists, for example, if a female experimenter measures the attitude of the subjects under the experimental condition, while a male experimenter measures the attitude under the control condition. The sex of the experimenter may have an effect on the dependent variable. To control this error, a single experimenter could collect all the data (constancy of conditions). An extraneous variable is eliminated, for example, if background noise that might reduce the audibility of speech is removed.

Unknown extraneous variables can be controlled by randomization. Randomization ensures that the expected values of the extraneous variables are identical under different conditions. Specific instructions exist concerning the random assignment of the subjects to the experimental conditions (e.g., Keppel 1973; see Random Assignment: Implementation in Complex Field Settings).

Despite these controls, there remains the possibility that a factor is present that jeopardizes the internal validity of the experiment. Thus, for example, simply watching a film—regardless of its content—may have an effect on social attitudes. An experimenter who compares attitudes under the conditions film/no film will overlook this possible error, even if the method of randomization was employed to control internal errors. There should therefore be at least one further condition included under which subjects view a film that is neutral with regard to its attitude toward Jews.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767007269

Statistics, Nonparametric

Joseph W. McKean, Simon J. Sheather, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

V.C Analysis of Covariance

In experimental design, we attempt to control all variables other than the factors and the response variable. Often this is impossible, so along with the response we record these extraneous variables which are called covariates or concomitant variables. Hopefully these variables help explain some of the noise in the data. The traditional analysis of such data is called analysis of covariance.

As an example, consider the one-way model (93) with $k$ levels and suppose we have a single covariate, say, $x_{ij}$. A first-order model is $Y_{ij} = \mu_i + \beta x_{ij} + e_{ij}$. This model, however, assumes that the covariate behaves the same within each treatment combination. A more general model is

$$y_{ij} = \mu_i + \beta x_{ij} + \gamma_i x_{ij} + e_{ij}, \qquad j = 1, \ldots, n_i, \; i = 1, \ldots, k. \tag{110}$$

Hence the slope at the $i$th level is $\beta_i = \beta + \gamma_i$ and, thus, each treatment combination has its own linear model. There are two natural hypotheses for this model: $H_{0C}\colon \beta_1 = \cdots = \beta_k$ and $H_{0L}\colon \mu_1 = \cdots = \mu_k$. If $H_{0C}$ is true, then the differences between the levels of Factor A are just the differences in the location parameters $\mu_i$ for a given value of the covariate. In this case, contrasts in these parameters are often of interest, as well as the hypothesis $H_{0L}$. If $H_{0C}$ is not true, then the covariate and the treatment combinations interact. For example, whether one treatment combination is better than another may depend on where in factor space the responses are measured. Thus, as in crossed factorial designs, the interpretation of main-effect hypotheses may not be clear. The previous example is easily generalized to more than one covariate.

The Wilcoxon fit of the full model (110) proceeds as described in Section II. Model estimates of the parameters and their standard errors can be used to form confidence intervals and regions, and multiple comparison procedures can be used for simultaneous inference. Reduced models appropriate for the hypotheses of interest can be obtained, and the values of the test statistic $F_W$ can be used to test them.
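The sketch below fits the full model (110) to simulated data and tests the interaction ($H_{0C}$) and the level effect ($H_{0L}$). Note that it substitutes an ordinary least-squares fit for the rank-based Wilcoxon fit described in the chapter; the data and parameter values are assumptions made for illustration.

```python
# Minimal sketch of the analysis-of-covariance model (110) with simulated data.
# The chapter describes a rank-based Wilcoxon fit; this sketch uses ordinary
# least squares instead, which is an assumption made for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
k, n_i = 3, 40                                        # levels and per-level size (assumed)
rows = []
for i in range(k):
    x = rng.uniform(0, 10, n_i)                       # covariate x_ij
    mu_i, slope_i = 5 + i, 1.0 + 0.3 * i              # level-specific intercept and slope
    y = mu_i + slope_i * x + rng.normal(0, 1, n_i)
    rows += [{"level": i, "x": xv, "y": yv} for xv, yv in zip(x, y)]
df = pd.DataFrame(rows)

# Full model: a separate slope per level (y ~ level * x).
full = smf.ols("y ~ C(level) * x", data=df).fit()
# H0C (equal slopes) corresponds to dropping the interaction; H0L (equal mu_i)
# corresponds to dropping the level main effect.
print(sm.stats.anova_lm(full, typ=2))
```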


URL: https://www.sciencedirect.com/science/article/pii/B0122274105007328

Control Variable in Research

P.D. Mehta, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1.1.3 Control variable: validity and differential treatment effectiveness

The use of control variables for statistical adjustment is motivated primarily by a desire to increase the internal validity of the study (see Internal Validity). An alternative way of eliminating confounding due to extraneous variables is to include only those individuals at a specific level of the confounding variable. For example, if ethnicity and gender are related to the treatment assignment and to outcome, the researcher may choose to include only white males in the study. Such control by exclusion limits the generalizability of the findings to the population actually included in the study. In contrast, the regression based approach allows generalization across all levels of the controlled covariate present in the sample if the treatment effects are the same within each level of the covariate (see External Validity). In other words, the possibility of differential treatment effects across levels of the covariate must be ruled out before the results can be generalized. For example, a specific intervention for treating childhood aggression may be more effective at higher levels of initial aggression. In this case, the initial level of aggression could be included in the analysis along with an interaction term with the manipulated variable to test if the magnitude of the treatment effect depends on the level of the observed covariate. When the interaction between the independent variable and a covariate is significant, the covariate is said to moderate the effect of the independent variable on the outcome (see Moderator Variable: Methodology; External Validity). In this situation, the arbitrary distinction between an explanatory and a control variable begins to blur. The researcher must now explore the mechanism of differential treatment effectiveness and ascribe proper causal status to both the covariate and the explanatory variable.
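A short sketch of this moderation test, with simulated data: the interaction between the manipulated variable and the covariate (initial aggression) estimates whether the treatment effect depends on the covariate. Variable names and the data-generating model are assumptions for illustration only.

```python
# Hedged sketch of a moderation test: the interaction between the manipulated
# variable and the observed covariate (initial aggression) tests whether the
# treatment effect depends on the covariate. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 300
initial_aggression = rng.normal(0, 1, n)              # observed covariate
treatment = rng.integers(0, 2, n)                     # randomized manipulation
# Assumed data-generating model: treatment is more effective at higher initial levels.
outcome = initial_aggression - treatment * (1 + 0.8 * initial_aggression) + rng.normal(0, 1, n)
df = pd.DataFrame({"aggression0": initial_aggression, "treat": treatment, "outcome": outcome})

fit = smf.ols("outcome ~ treat * aggression0", data=df).fit()
print(fit.params)     # the treat:aggression0 coefficient estimates the moderation
```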


URL: https://www.sciencedirect.com/science/article/pii/B0080430767007348

Elaboration

Carol S. Aneshensel, in Encyclopedia of Social Measurement, 2005

Types of Test Factors

To establish that an association between two variables is indicative of an asymmetrical relationship, six types of test factors are introduced into analysis: extraneous, component, intervening, antecedent, suppressor, and distorter variables. An extraneous variable reveals that an apparently asymmetrical relationship is instead symmetrical because the introduction of the test factor into the analysis diminishes the observed association. This result occurs because the test factor is associated with both the independent and the dependent variables. When a relationship is extraneous, there is no causal connection between independent and dependent variables. The most common case is spuriousness: independent and dependent variables appear to be associated with each other because both depend on a common cause. Figure 1 illustrates an instance of spuriousness in which the joint dependency of the independent variable and the dependent variable on the test factor explains some or all of the original empirical association between these variables. Spuriousness occurs because changes in the “third variable” produce changes in both the focal independent variable and the focal dependent variable; because the latter two change in unison, they appear to be related to each other, whereas they are actually related to the “third variable.” Component variables are subconcepts of global or complex concepts. From the perspective of the elaboration model, the purpose of these test factors is to determine which of the components is responsible for the observed effect on a dependent variable.


Figure 1. Spuriousness between independent and dependent variables due to a test factor. Reprinted from Aneshensel (2002), with permission.
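The following small simulation illustrates the spuriousness pattern of Fig. 1 under assumed coefficients: a test factor Z causes both X and Y, so X and Y are associated in the bivariate analysis, and the association essentially disappears once Z is introduced.

```python
# Small simulation (assumed coefficients) of spuriousness: a test factor Z
# causes both X and Y, so X and Y correlate even though neither affects the
# other; adjusting for Z makes the X-Y association vanish.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 2000
z = rng.normal(0, 1, n)                   # test factor (common cause)
x = 0.8 * z + rng.normal(0, 1, n)         # "independent" variable
y = 0.8 * z + rng.normal(0, 1, n)         # "dependent" variable
df = pd.DataFrame({"x": x, "y": y, "z": z})

print(smf.ols("y ~ x", data=df).fit().params["x"])      # sizeable spurious association
print(smf.ols("y ~ x + z", data=df).fit().params["x"])  # near zero once Z is controlled
```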

The intervening variable is a consequence of the independent variable and a determinant of the dependent variable, as shown in Fig. 2. A test factor that is an intervening variable requires three asymmetrical relationships: the original relationship between the independent and dependent variables, a relationship between the independent variable and the test factor (acting as a dependent variable), and a relationship between the test factor (acting as an independent variable) and the dependent variable. If the test factor is indeed an intervening variable, then its introduction into the analysis accounts for some or all of the original relationship between the other variables.


Figure 2. The test factor as an intervening variable between an independent variable and a dependent variable. Reprinted from Aneshensel (2002), with permission.

According to Rosenberg, the antecedent variable logically precedes the relationship between an independent variable and a dependent variable. Its introduction into the analysis does not explain the relationship, but clarifies the influences that precede the relationship. As shown in Fig. 3, the test factor acting as an antecedent variable is assumed to be directly responsible for the independent variable, which, in turn, influences the dependent variable; the independent variable now acts as an intervening variable. Thus, analysis of antecedent variables is derivative of intervening variable analysis. For antecedent variables, the causal chain is carried as far back in the process as is theoretically meaningful.


Figure 3. The test factor as an antecedent variable to an independent variable and a dependent variable. Reprinted from Aneshensel (2002), with permission.

A suppressor variable conceals a true relationship or makes it appear weaker than it is in fact: the full extent of an association emerges only when the suppressor variable is taken into consideration. In this instance, negative findings are misleading because the real association is concealed at first by the suppressor variable; the absence of the bivariate association is spurious. The suppressor variable is a threat to validity because it attenuates the full extent of a true relationship. A distorter variable, however, produces a relationship that is the reverse of that originally observed: this reversal becomes apparent only when the distorter variable is included in the analysis.
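A brief simulated sketch of a suppressor variable (coefficients are assumed): the bivariate estimate of the X-Y relationship is attenuated because the suppressor pushes Y in the opposite direction, and the full relationship emerges once the suppressor is controlled.

```python
# Hedged sketch (simulated data) of a suppressor variable: the true X->Y effect
# is hidden in the bivariate analysis and emerges once the suppressor is controlled.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
s = rng.normal(0, 1, n)                       # suppressor
x = 0.9 * s + rng.normal(0, 1, n)             # independent variable, tied to s
y = 0.6 * x - 0.9 * s + rng.normal(0, 1, n)   # s pushes y in the opposite direction
df = pd.DataFrame({"x": x, "y": y, "s": s})

print(smf.ols("y ~ x", data=df).fit().params["x"])      # attenuated (suppressed) estimate
print(smf.ols("y ~ x + s", data=df).fit().params["x"])  # close to the assumed true 0.6
```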

The elaboration model, according to Rosenberg (1968), is designed to deal with two dangers in drawing conclusions from two-variable relationships: accepting a false hypothesis as true, and rejecting a true hypothesis as false. Extraneous, suppressor, and distorter variables are designed to reduce the likelihood of making these mistakes. In addition, the elaboration model enables the analyst to explicate a more precise and specific understanding of a two-variable model. Component, intervening, and antecedent variables serve this purpose.


URL: https://www.sciencedirect.com/science/article/pii/B0123693985005612

Business, Social Science Methods Used in

Gayle R. Jennings, in Encyclopedia of Social Measurement, 2005

Experimental and Quasi-experimental Methods

Experiments enable researchers to determine causal relationships between variables in controlled settings (laboratories). Researchers generally manipulate the independent variable in order to determine the impact on a dependent variable. Such manipulations are also called treatments. In experiments, researchers attempt to control confounding variables and extraneous variables. Confounding variables may mask the impact of another variable. Extraneous variables may influence the dependent variable in addition to the independent variable. Advantages of experiments include the ability to control variables in an artificial environment. Disadvantages include the mismatch between reality and laboratory settings and the focus on a narrow range of variables at any one time. Laboratory experiments enable researchers to control experiments to a greater degree than experiments conducted in simulated or real businesses or business-related environments. Experiments in the field (business and business-related environments) may prove to be challenging due to issues related to gaining access and ethical approval. However, field experiments (natural experiments) allow the measurement of the influence of the independent variable on the dependent variable within a real-world context, although not all extraneous variables are controllable. The classical experimental method involves independent and dependent variables, random sampling, control groups, and pre- and posttests. Quasi-experiments omit aspects of the classical experimental method (such as omission of a control group or absence of a pretest).


URL: https://www.sciencedirect.com/science/article/pii/B012369398500270X

Language Acquisition

Allyssa McCabe, in Encyclopedia of Social Measurement, 2005

Control versus Generalizability of Results

Psychologists have long stressed the importance of standardizing the procedure of a study, or arranging for as many circumstances to be the same for all participants as possible. Through the exercise of such scientific control, experimenters believe that they can attribute outcomes to the independent variable of interest to them rather than some other, extraneous variable. The difficulty is that such control comes at the inevitable expense of generalizability (the extent to which findings can be applied to other situations outside the laboratory). For example, an experimenter might adopt the method Ebbinghaus used in the 1880s to study the acquisition of words. Ebbinghaus used consonant-vowel-consonant trigrams—nonsense syllables—in an effort to avoid contaminating the experimental procedure by the intrusion of meaning on the laboratory experience. He then measured precisely how many repetitions of “JUM” or “PID” were required for subjects to memorize those nonsense syllables. Researchers eventually discovered that such procedures told them very little about how people learn words in the real world; in other words, generalizability had all but completely been sacrificed for the sake of control. Moreover, unbeknownst to researchers, subjects often turned nonsense syllables into meaningful ones (e.g., “JUM” became “JUMP” or “CHUM”) to ease memorization.

On the other hand, simply observing language in the real world, which maximizes generalizability, would not tell us much about which of the many aspects of some particular situation were responsible for triggering the language observed. Once again, multiple methods of assessment are required.

What are two ways that researchers attempt to control extraneous variables?

Methods to control extraneous variables:
1) Randomization: in this approach, treatments are randomly assigned to the experimental groups.
2) Matching: another important technique is to match the different groups on the known confounding variables (both techniques are sketched below).
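A minimal sketch of both techniques on hypothetical participant data: randomization shuffles participants into groups, while matching first pairs participants on the confounding variable (here, prior knowledge) and then splits each pair between groups.

```python
# Sketch of the two techniques listed above, using hypothetical participant data:
# (1) randomization assigns treatments at random; (2) matching pairs participants
# on a confounding variable (here, prior knowledge) before splitting the pairs.
import numpy as np

rng = np.random.default_rng(8)
prior_knowledge = rng.normal(50, 10, 40)      # hypothetical confounder

# 1) Randomization: shuffle participant indices and split them in half.
order = rng.permutation(len(prior_knowledge))
group_a, group_b = order[:20], order[20:]

# 2) Matching: sort by the confounder, form pairs of neighbours, and randomly
#    send one member of each pair to each group.
ranked = np.argsort(prior_knowledge)
matched_a, matched_b = [], []
for i, j in ranked.reshape(-1, 2):
    first, second = (i, j) if rng.random() < 0.5 else (j, i)
    matched_a.append(first)
    matched_b.append(second)

print("matched group means:",
      prior_knowledge[matched_a].mean(), prior_knowledge[matched_b].mean())
```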

What method is the best for controlling extraneous variables?

As discussed previously, random sampling is often the best approach to obtain a representative sample. Random sampling not only controls for several extraneous variables but also allows us to generalize to a given population (increasing external validity).

Which of the following techniques are used to control extraneous variables in research?

Hence, Randomization, Matching, and Elimination are the correct answers.

What are the two types of extraneous variables?

What are the types of extraneous variables? There are 4 main types of extraneous variables:
  1. Demand characteristics: environmental cues that encourage participants to conform to researchers' expectations.
  2. Experimenter effects: unintentional actions by researchers that influence study outcomes.
  3. Participant variables: individual characteristics such as prior knowledge or health status that could affect the outcome.
  4. Situational variables: features of the environment, such as noise, lighting, or temperature.