10.1. Introduction

In eHealth evaluation, comparative studies aim to find out whether group differences in eHealth system adoption make a difference in important outcomes. These groups may differ in their composition, the type of system in use, and the setting where they work over a given time duration. The comparisons determine whether significant differences exist for some predefined measures between these groups, while controlling for as many of the conditions as possible, such as the composition, system, setting and duration. According to the typology by Friedman and Wyatt (2006), comparative studies take on an objective view where events such as the use and effect of an eHealth system can be defined, measured and compared through a set of variables to prove or disprove a hypothesis. For comparative studies, the design options are experimental versus observational and prospective versus retrospective. The quality of eHealth comparative studies depends on such aspects of methodological design as the choice of variables, sample size, sources of bias, confounders, and adherence to quality and reporting guidelines. In this chapter we focus on experimental studies as one type of comparative study and the methodological considerations that have been reported in the eHealth literature. Also included are three case examples to show how these studies are done.

10.2. Types of Comparative Studies

Experimental studies are one type of comparative study in which a sample of participants is identified and assigned to different conditions for a given time duration, then compared for differences. An example is a hospital with two care units, where one is assigned a computerized provider order entry (CPOE) system to process medication orders electronically while the other continues its usual practice without CPOE.
The participants in the unit assigned to the CPOE system are called the intervention group and those assigned to usual practice are the control group. The comparison can be performance or outcome focused, such as the ratio of correct orders processed or the occurrence of adverse drug events in the two groups during the given time period. Experimental studies can take on a randomized or non-randomized design. These are described below.

10.2.1. Randomized Experiments

In a randomized design, the participants are randomly assigned to two or more groups using a known randomization technique such as a random number table. The design is prospective in nature since the groups are assigned concurrently, after which the intervention is applied, then measured and compared. Three types of experimental designs seen in eHealth evaluation are described below (Friedman & Wyatt, 2006; Zwarenstein & Treweek, 2009).
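The random allocation step described above can be sketched in Python. This is a minimal illustration of simple (unstratified) randomization with a seeded generator; the participant identifiers and function name are hypothetical, and real trials typically use block or stratified randomization to keep group sizes and prognostic factors balanced.

```python
import random

def randomize(participants, seed=None):
    """Randomly allocate participants into two equal groups.

    A minimal sketch of simple randomization (hypothetical helper);
    with a fixed seed the allocation is reproducible for auditing.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)          # random order replaces a random number table
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participant identifiers
ids = [f"P{i:02d}" for i in range(1, 21)]
intervention, control = randomize(ids, seed=42)
print(len(intervention), len(control))  # 10 10
```

With a recorded seed, the same allocation list can be regenerated later, which is one simple way to document how groups were formed.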
10.2.2. Non-randomized Experiments

A non-randomized design is used when it is not feasible or ethical to randomize participants into groups for comparison. It is sometimes referred to as a quasi-experimental design. The design can involve the use of prospective or retrospective data from the same or different participants as the control group. Three types of non-randomized designs are described below (Harris et al., 2006).
10.3. Methodological Considerations

The quality of comparative studies depends on their internal and external validity. Internal validity refers to the extent to which conclusions can be drawn correctly from the study setting, participants, intervention, measures, analysis and interpretations. External validity refers to the extent to which the conclusions can be generalized to other settings. The major factors that influence validity are described below.

10.3.1. Choice of Variables

Variables are specific measurable features that can influence validity. In comparative studies, the choice of dependent and independent variables, and whether they are categorical and/or continuous in value, can affect the type of questions, study design and analysis to be considered. These are described below (Friedman & Wyatt, 2006).
10.3.2. Sample Size

Sample size is the number of participants to include in a study. It can refer to patients, providers or organizations, depending on how the unit of allocation is defined. There are four parts to calculating sample size. They are described below (Noordzij et al., 2010).
Table 10.1. Sample Size Equations for Comparing Two Groups with Continuous and Categorical Outcome Variables.

An example of sample size calculation for an RCT to examine the effect of CDS on improving systolic blood pressure of hypertensive patients is provided in the Appendix. Refer to the Biomath website from Columbia University (n.d.) for a simple Web-based sample size / power calculator.

10.3.3. Sources of Bias

There are five common sources of bias in comparative studies: selection, performance, detection, attrition and reporting biases (Higgins & Green, 2011). These biases, and the ways to minimize them, are described below (Vervloet et al., 2012).
10.3.4. Confounders

Confounders are factors other than the intervention of interest that can distort the effect because they are associated with both the intervention and the outcome. For instance, in a study to demonstrate whether the adoption of a medication order entry system led to lower medication costs, there can be a number of potential confounders that affect the outcome. These may include the severity of illness of the patients, provider knowledge of and experience with the system, and hospital policy on prescribing medications (Harris et al., 2006). Another example is the evaluation of the effect of an antibiotic reminder system on the rate of post-operative deep venous thromboses (DVTs). The confounders can be general improvements in clinical practice during the study, such as prescribing patterns and post-operative care, that are not related to the reminders (Friedman & Wyatt, 2006). To control for confounding effects, one may consider the use of matching, stratification and modelling. Matching involves the selection of similar groups with respect to their composition and behaviours. Stratification involves the division of participants into subgroups by selected variables, such as a comorbidity index to control for severity of illness. Modelling involves the use of statistical techniques such as multiple regression to adjust for the effects of specific variables such as age, sex and/or severity of illness (Higgins & Green, 2011).

10.3.5. Guidelines on Quality and Reporting

There are guidelines on the quality and reporting of comparative studies. The GRADE (Grading of Recommendations Assessment, Development and Evaluation) guidelines provide explicit criteria for rating the quality of studies in randomized trials and observational studies (Guyatt et al., 2011).
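The stratification approach to confounder control described in section 10.3.4 can be sketched in Python. This is a minimal illustration, not a full analysis: the effect within each stratum of the confounder (e.g., severity of illness) is computed separately and then combined as a size-weighted average, so that differences in stratum composition between groups do not distort the overall estimate. The data, field layout, and function name are all hypothetical.

```python
from collections import defaultdict

def stratified_mean_difference(records):
    """Size-weighted average of within-stratum differences in mean outcome
    (intervention minus control). A minimal sketch of stratification as a
    confounder control; `records` holds (stratum, group, outcome) tuples.
    """
    strata = defaultdict(lambda: {"intervention": [], "control": []})
    for stratum, group, outcome in records:
        strata[stratum][group].append(outcome)

    total = sum(len(g["intervention"]) + len(g["control"])
                for g in strata.values())
    effect = 0.0
    for g in strata.values():
        n = len(g["intervention"]) + len(g["control"])
        diff = (sum(g["intervention"]) / len(g["intervention"])
                - sum(g["control"]) / len(g["control"]))
        effect += diff * n / total   # weight each stratum by its size
    return effect

# Hypothetical data: (severity stratum, group, medication cost)
data = [
    ("mild", "intervention", 100), ("mild", "control", 120),
    ("severe", "intervention", 300), ("severe", "control", 340),
]
print(round(stratified_mean_difference(data), 1))  # -30.0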
The extended CONSORT (Consolidated Standards of Reporting Trials) Statements for non-pharmacologic trials (Boutron, Moher, Altman, Schulz, & Ravaud, 2008), pragmatic trials (Zwarenstein et al., 2008), and eHealth interventions (Baker et al., 2010) provide reporting guidelines for randomized trials. The GRADE guidelines offer a system of rating the quality of evidence in systematic reviews and guidelines. In this approach, RCTs start as high-quality evidence and observational studies as low-quality evidence in support of estimates of intervention effects. For each outcome in a study, five factors may rate down the quality of evidence, and the final quality of evidence for each outcome falls into one of four levels: high, moderate, low, or very low. These factors are listed below (for more details on the rating system, refer to Guyatt et al., 2011).
The original CONSORT Statement has 22 checklist items for reporting RCTs. For non-pharmacologic trials, extensions have been made to 11 items; for pragmatic trials, extensions have been made to eight items. These items are listed below. For further details, readers can refer to Boutron and colleagues (2008) and the CONSORT website (CONSORT, n.d.).
The CONSORT Statement for eHealth interventions describes the relevance of the CONSORT recommendations to the design and reporting of eHealth studies, with an emphasis on Internet-based interventions for direct use by patients, such as online health information resources, decision aids and PHRs. Of particular importance is the need to clearly define the intervention components, their role in the overall care process, target population, implementation process, primary and secondary outcomes, denominators for outcome analyses, and real-world potential (for details refer to Baker et al., 2010).

10.4. Case Examples

10.4.1. Pragmatic RCT in Vascular Risk Decision Support

Holbrook and colleagues (2011) conducted a pragmatic RCT to examine the effects of a CDS intervention on vascular care and outcomes for older adults. The study is summarized below.
10.4.2. Non-randomized Experiment in Antibiotic Prescribing in Primary Care

Mainous, Lambourne, and Nietert (2013) conducted a prospective non-randomized trial to examine the impact of a CDS system on antibiotic prescribing for acute respiratory infections (ARIs) in primary care. The study is summarized below.
10.4.3. Interrupted Time Series on EHR Impact in Nursing Care

Dowding, Turley, and Garrido (2012) conducted a prospective interrupted time series (ITS) study to examine the impact of EHR implementation on nursing care processes and outcomes. The study is summarized below.
10.5. Summary

In this chapter we introduced randomized and non-randomized experimental designs as two types of comparative studies used in eHealth evaluation. Randomization is the highest-quality design as it reduces bias, but it is not always feasible. The methodological issues addressed include choice of variables, sample size, sources of bias, confounders, and adherence to reporting guidelines. Three case examples were included to show how eHealth comparative studies are done.
Appendix. Example of Sample Size Calculation

This is an example of sample size calculation for an RCT that examines the effect of a CDS system on reducing systolic blood pressure in hypertensive patients. The case is adapted from the example described in the publication by Noordzij et al. (2010).

(a) Systolic blood pressure as a continuous outcome measured in mmHg

Based on similar studies in the literature with similar patients, the systolic blood pressure values from the comparison groups are expected to be normally distributed with a standard deviation of 20 mmHg. The evaluator wishes to detect a clinically relevant difference of 15 mmHg in systolic blood pressure between the intervention group with CDS and the control group without CDS. Assuming a significance level (alpha) of 0.05 for a two-tailed t-test and power of 0.80, the corresponding multipliers¹ are 1.96 and 0.842, respectively. Using the sample size equation for a continuous outcome below, we can calculate the sample size needed for the above study.

n = 2[(a + b)²σ²] / (μ1 − μ2)²

where
n = sample size for each group
μ1 = population mean of systolic blood pressure in the intervention group
μ2 = population mean of systolic blood pressure in the control group
μ1 − μ2 = desired difference in mean systolic blood pressure between groups
σ = population standard deviation
a = multiplier for significance level (alpha)
b = multiplier for power (1 − beta)

Substituting the values into the equation gives a sample size (n) of 28 per group:

n = 2[(1.96 + 0.842)²(20²)] / 15² ≈ 28 samples per group

(b) Systolic blood pressure as a categorical outcome measured as below or above 140 mmHg (i.e., hypertension yes/no)

In this example a systolic blood pressure above 140 mmHg is considered an event, that is, a patient with hypertension. Based on published literature, the proportion of patients in the general population with hypertension is 30%.
The evaluator wishes to detect a clinically relevant difference of 10% in the proportion of hypertensive patients between the intervention group with CDS and the control group without CDS. This means the expected proportion of patients with hypertension is 20% (p1 = 0.2) in the intervention group and 30% (p2 = 0.3) in the control group. Assuming a significance level (alpha) of 0.05 for a two-tailed test and power of 0.80, the corresponding multipliers are 1.96 and 0.842, respectively. Using the sample size equation for a categorical outcome below, we can calculate the sample size needed for the above study.

n = [(a + b)²(p1q1 + p2q2)] / (p1 − p2)²

where
n = sample size for each group
p1 = proportion of patients with hypertension in the intervention group
q1 = proportion of patients without hypertension in the intervention group (1 − p1)
p2 = proportion of patients with hypertension in the control group
q2 = proportion of patients without hypertension in the control group (1 − p2)
p1 − p2 = desired difference in the proportion of hypertensive patients between the two groups
a = multiplier for significance level (alpha)
b = multiplier for power (1 − beta)

Substituting the values into the equation gives a sample size (n) of 291 per group:

n = [(1.96 + 0.842)²((0.2)(0.8) + (0.3)(0.7))] / (0.1)² ≈ 291 samples per group

Footnotes

1. From Table 3 on p. 1392 of Noordzij et al. (2010).
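The two worked examples in this appendix can be reproduced with a short Python sketch. The function names are illustrative; the multipliers 1.96 (two-tailed alpha = 0.05) and 0.842 (power = 0.80) are those given in the text, and the results are rounded up to whole participants.

```python
import math

Z_ALPHA = 1.96   # multiplier a: two-tailed significance level of 0.05
Z_BETA = 0.842   # multiplier b: power of 0.80 (1 - beta)

def n_continuous(sigma, diff, a=Z_ALPHA, b=Z_BETA):
    """Per-group sample size for a continuous outcome:
    n = 2[(a + b)^2 * sigma^2] / diff^2
    where sigma is the SD and diff the detectable mean difference."""
    return math.ceil(2 * (a + b) ** 2 * sigma ** 2 / diff ** 2)

def n_categorical(p1, p2, a=Z_ALPHA, b=Z_BETA):
    """Per-group sample size for a categorical (yes/no) outcome:
    n = (a + b)^2 * (p1*q1 + p2*q2) / (p1 - p2)^2"""
    q1, q2 = 1 - p1, 1 - p2
    return math.ceil((a + b) ** 2 * (p1 * q1 + p2 * q2) / (p1 - p2) ** 2)

# (a) SD 20 mmHg, detectable difference 15 mmHg
print(n_continuous(sigma=20, diff=15))   # 28
# (b) hypertension proportions 20% vs. 30%
print(n_categorical(p1=0.2, p2=0.3))     # 291
```

Both results match the hand calculations above (28 and 291 participants per group).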