What is the term for a research study in which neither the researchers nor the subjects know who is in the experimental group and who is in the control group?

When people read about a research study, they may not pay attention to how the study was designed. But to understand the quality of the findings, it’s important to know a bit about study design.

According to the widely accepted hierarchy of evidence, the most reliable evidence comes from systematic reviews, followed by evidence from randomized controlled trials, then cohort studies and case control studies.

The latter three are research studies that fall into one of two main categories: observational studies or experimental studies.

Observational studies

Observational studies are ones where researchers observe the effect of a risk factor, diagnostic test, treatment or other intervention without trying to change who is or isn’t exposed to it. Cohort studies and case control studies are two types of observational studies. 

Cohort study: For research purposes, a cohort is any group of people who are linked in some way. For instance, a birth cohort includes all people born within a given time frame. Researchers compare what happens to members of the cohort who have been exposed to a particular variable with what happens to the other members who have not been exposed.

Case control study: Here researchers identify people with an existing health problem (“cases”) and a similar group without the problem (“controls”) and then compare them with respect to an exposure or exposures.

Experimental studies

Experimental studies are ones where researchers introduce an intervention and study the effects. Experimental studies are usually randomized, meaning the subjects are grouped by chance.

Randomized controlled trial (RCT): Eligible people are randomly assigned to one of two or more groups. One group receives the intervention (such as a new drug) while the control group receives nothing or an inactive placebo. The researchers then study what happens to people in each group. Any difference in outcomes can then be linked to the intervention.
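Random assignment is conceptually simple. As a hypothetical illustration (the participant IDs and group names below are invented, and real trials use concealed, centrally generated allocation sequences rather than an ad hoc script), a short Python sketch:

```python
import random

def randomize(participants, groups=("intervention", "control"), seed=None):
    """Assign participants to groups by chance, keeping group sizes
    as equal as possible (a simple shuffled-list allocation)."""
    rng = random.Random(seed)  # seeded only so the example is reproducible
    pool = list(participants)
    rng.shuffle(pool)
    # Walk the shuffled list, dealing participants out to the groups in turn.
    return {person: groups[i % len(groups)] for i, person in enumerate(pool)}

assignment = randomize([f"P{i:02d}" for i in range(1, 9)], seed=42)
print(assignment)
```

Because the list is shuffled before groups are dealt out, which participants land in which arm is left entirely to chance, while the arms stay equal in size.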

Strengths and weaknesses

The strengths and weaknesses of a study design should be seen in light of the kind of question the study sets out to answer. Sometimes, observational studies are the only way researchers can explore certain questions. For example, it would be unethical to design a randomized controlled trial deliberately exposing workers to a potentially harmful situation. If a health problem is a rare condition, a case control study (which begins with the existing cases) may be the most efficient way to identify potential causes. Or, if little is known about how a problem develops over time, a cohort study may be the best design.

However, the results of observational studies are, by their nature, open to dispute. They run the risk of containing confounding biases. Example: A cohort study might find that people who meditated regularly were less prone to heart disease than those who didn’t. But the link may be explained by the fact that people who meditate also exercise more and follow healthier diets. In other words, although a cohort is defined by one common characteristic or exposure, its members may also share other characteristics that affect the outcome.
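This kind of spurious association can be demonstrated with a toy simulation. The probabilities below are invented purely for illustration: in the simulated population, meditation has no effect at all on disease, yet meditators still show a lower disease rate because exercise drives both behaviours:

```python
import random

random.seed(0)

# Hypothetical numbers: exercise (the confounder) independently raises the
# chance of meditating AND lowers the chance of heart disease. Meditation
# itself has NO direct effect on disease in this simulation.
people = []
for _ in range(10_000):
    exercises = random.random() < 0.5
    meditates = random.random() < (0.7 if exercises else 0.2)
    disease = random.random() < (0.05 if exercises else 0.20)
    people.append((meditates, disease))

def rate(group):
    """Fraction of a group that developed the disease."""
    return sum(d for _, d in group) / len(group)

meditators = [p for p in people if p[0]]
others = [p for p in people if not p[0]]
print(f"disease rate, meditators:     {rate(meditators):.3f}")
print(f"disease rate, non-meditators: {rate(others):.3f}")
# Meditators show less disease even though meditation does nothing here:
# the association is created entirely by the exercise confounder.
```

Stratifying by exercise status (comparing meditators with non-meditators only within the exercising group, and again within the non-exercising group) would make the apparent benefit of meditation vanish.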

The RCT is still considered the “gold standard” for producing reliable evidence because its design leaves little room for bias. But there’s a growing realization that such research is not perfect, and that many questions simply can’t be studied using this approach. Such research is time-consuming and expensive — it may take years before results are available. Also, intervention research is often restricted by how many participants researchers can manage or how long participants can be expected to live in controlled conditions. As a result, an RCT would not be the right kind of study to pick up on outcomes that take a long time to appear or that are expected to affect only a very small number of people.

Source: At Work, Issue 83, Winter 2016: Institute for Work & Health, Toronto [This column updates a previous column describing the same term, which was originally published in 2005.]

Can J Surg. 2010 Oct;53(5):345–348. PMCID: PMC2947122

Paul J. Karanicolas,*† Forough Farrokhyar,†‡ Mohit Bhandari†§

*Department of Surgery, University of Western Ontario, London, and the †Departments of Clinical Epidemiology and Biostatistics, ‡Surgery, and the §Division of Orthopaedic Surgery, McMaster University, Ont.

Blinding refers to the concealment of group allocation from one or more individuals involved in a clinical research study, most commonly a randomized controlled trial (RCT). Although randomization minimizes differences between treatment groups at the outset of the trial, it does nothing to prevent differential treatment of the groups later in the trial or the differential assessment of outcomes, either of which may result in biased estimates of treatment effects. The optimal strategy to minimize the likelihood of differential treatment or assessments of outcomes is to blind as many individuals as possible in a trial.

Randomized controlled trials of surgical interventions are frequently more difficult to blind than RCTs of medications, which typically achieve blinding with placebos. However, imaginative techniques may make blinding more feasible in surgical trials than is commonly believed by many researchers. In this article we discuss the importance of blinding and provide practical suggestions to researchers who wish to incorporate blinding into their surgical studies.

By the end of this article, the reader will be able to appreciate the significance and rationale of blinding, recognize which individuals to blind, learn strategies for blinding in difficult situations and develop approaches for managing situations in which blinding is impossible. The following article is divided into 4 sections: Why should I blind? Who should I blind? How can I blind individuals in surgical trials? and What should I do if I can’t blind?

Rigorous, well-conducted RCTs provide the best estimates of the impact of surgical interventions.1 However, if RCTs are difficult to conduct rigorously in an area, the methodology is more likely to be faulty, and the results may be misleading. Moreover, rather than performing a critical appraisal of the available literature, clinicians’ decisions may be influenced by the fact that an RCT design was used, and erroneous conclusions may guide clinical practice.

Blinding is a critical methodologic feature of RCTs. Although randomization minimizes the selection bias and confounding that plague cohort and case–control studies2 and therefore minimizes the likelihood of prognostic differences between intervention groups, its use does not prevent subsequent differential cointerventions or biased assessment of outcomes. Note that allocation concealment is completely different from blinding. The former seeks to eliminate selection bias during the process of recruitment and randomization, whereas the latter seeks to reduce performance and ascertainment bias after randomization.3 Furthermore, if bias is introduced during a trial because of differential treatment of groups or biased assessment of outcomes, no analytical techniques can correct for this limitation. Thus, surgeons must interpret the results from unblinded trials with caution.

Few would question in principle the reduction in bias that blinding can achieve, and empirical evidence confirms that blinding in trials does indeed make a difference. In a systematic review of 250 RCTs identified from 33 meta-analyses, researchers observed a significant difference in the size of the estimated treatment effect between trials that reported “double-blinding” compared with those that did not (p = 0.01), with an overall odds ratio 17% larger in studies that did not report blinding.4 Other studies have confirmed this finding.5,6 Therefore, trialists should make every effort to incorporate blinding into their trial designs, and readers should look for descriptions in the published reports of which investigators were blinded.

Differential treatment or assessment of participants potentially resulting in bias may occur at any phase of a trial. If possible, trialists should blind 5 groups of individuals involved in trials: participants, clinicians (surgeons), data collectors, outcome adjudicators and data analysts.

If participants are not blinded, knowledge of group assignment may affect their behaviour in the trial and their responses to subjective outcome measures. For example, a participant who is aware that he or she is not receiving active treatment may be less likely to comply with the trial protocol, more likely to seek additional treatment outside of the trial and more likely to leave the trial without providing outcome data. Those aware that they are receiving or not receiving therapy are more likely to provide biased assessments of the effectiveness of the intervention — most likely in opposite directions — than blinded participants.7 Similarly, blinded clinicians are much less likely to transfer their attitudes to participants or to provide differential treatment to the active and placebo groups than are unblinded clinicians.7

Blinding of data collectors and outcome adjudicators (sometimes the same individuals) is crucial to ensure unbiased ascertainment of outcomes. For example, in a randomized controlled trial of cyclophosphamide and plasma exchange in patients with multiple sclerosis, neither active treatment regimen was superior to placebo when assessed by blinded neurologists, but there was an apparent benefit of treatment with cyclophosphamide, plasma exchange and prednisone when unblinded neurologists performed the assessments.8 Although subjective outcomes are most at risk of ascertainment bias, seemingly objective outcomes often require some degree of subjectivity and therefore are at risk of bias as well.

Bias may also be introduced during the statistical analysis of the trial through the selective use and reporting of statistical tests. This may be a subconscious process spurred by investigators eager to see a positive result, but the consequences are profound. The best method to avoid this potential bias is blinding of the data analyst until the entire analysis has been completed.

This rationale strongly suggests that the blinding of as many individuals as is practically possible limits bias in clinical trials. In the past, many researchers have referred to trials that blinded several groups of individuals as “double-blind.” This term is ambiguous, inconsistently applied, and has different meanings to different individuals.9 Blinding is not an all-or-nothing phenomenon; researchers may blind any of the involved groups. Furthermore, even within one of the groups (such as outcome adjudicators), some individuals may be blinded while others are aware of group allocation. Thus, it is far preferable for researchers to explicitly state which individuals in the trial were blinded, how they achieved blinding and whether they tested the successfulness of blinding.

Blinding is unequivocally more difficult to incorporate in trials of surgical interventions than in trials of medical therapies.10–12 Whereas medical trials usually incorporate placebo medications to achieve blinding, surgical treatments often result in incisions and scars that may differ between groups. Furthermore, if a trial compares surgical therapy with nonoperative management, it will often be impossible to conceal group allocation from at least some of the individuals involved in the trial (such as the patients and surgeons).

Researchers should consider methods to blind each individual involved in a trial separately and search for the simplest, least invasive technique of achieving blinding. Determining the feasibility of blinding patients is usually simple. If the trial involves 2 similar procedures (such as a comparison of division versus nondivision of the short gastric vessels during laparoscopic Nissen fundoplication13), trialists may incorporate blinding by simply not informing patients of their treatment allocation. If, however, researchers are comparing surgical therapy to nonoperative management (such as a comparison of surgery versus surveillance for small aneurysms14), patients can only be blinded with ethically questionable methods like sham surgery.15

Although surgeons can rarely be blinded, it may be possible for researchers to blind other members of the treatment team and thus limit the potential for differential treatment. For example, whereas surgeons would clearly need to know whether patients were assigned to the division or nondivision group of the fundoplication study,13 the nurses, dieticians and other practitioners administering postoperative care could feasibly have been blinded by simply not informing them of the group allocation. In some cases, this might require more creative but feasible blinding techniques such as covering different incisions with large dressings.

Similarly, the individuals collecting data or adjudicating outcomes may often be blinded by use of relatively simple techniques. In a systematic review of all trials in orthopaedic trauma over 10 years, researchers determined that over 85% of trials could have blinded at least some of the individuals assessing outcomes.16 In contrast, fewer than 10% of trials actually incorporated blinding of outcome assessors. The reviewers considered 3 techniques of blinding that could have been incorporated into these trials: using an independent individual unaware of the treatment allocation; concealing incisions or scars; and digitally altering radiographs to mask the type of implant (Fig. 1).
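As a rough sketch of the third idea (masking part of an image), here is a toy Python function that blanks a rectangular region of a grayscale image stored as nested lists. This is not the authors' actual method, which involved carefully altering radiographs without degrading their diagnostic quality;18 it only illustrates the principle of hiding the implant region from the assessor:

```python
def mask_region(image, top, left, height, width, fill=0):
    """Blank out a rectangular region (e.g., the implant) in a 2-D
    grayscale image given as a list of pixel rows."""
    masked = [row[:] for row in image]  # copy so the original survives
    for r in range(top, top + height):
        for c in range(left, left + width):
            masked[r][c] = fill
    return masked

# A 4x6 image of uniform intensity, with a 2x3 patch masked out.
img = [[9] * 6 for _ in range(4)]
out = mask_region(img, top=1, left=2, height=2, width=3)
for row in out:
    print(row)
```

In practice the masked region must be chosen so that the outcome being graded (e.g., fracture healing) remains fully assessable, which is exactly the validation concern the next paragraph raises.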

Whereas researchers should search for creative methods such as these to blind individuals in their trials, if they choose to incorporate a novel technique (such as manipulation of radiographs), they must ensure that the blinding process itself does not introduce bias by impairing the ability to accurately assess the outcome. Ideally, trialists will also test the successfulness of the blinding, although this should be undertaken before initiating a trial because there are dangers to testing the success of blinding once a trial has been completed.17 Researchers should look for 3 qualities in a novel blinding technique: it must successfully conceal the group allocation; it must not impair the ability to accurately assess outcomes; and it must be acceptable to the individuals that will be assessing outcomes.18

Finally, researchers can always blind the individuals performing the statistical analysis by simply labelling the groups with nonidentifying terms (such as A and B). Although this seems intuitive, surprisingly few researchers actually report blinding the data analysts in trials.16
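A minimal sketch of that relabelling step, assuming a simple participant-to-group table (the function name and sample data are invented for illustration):

```python
import random

def blind_for_analysis(allocations, seed=None):
    """Replace real group names with neutral labels ("A", "B", ...).

    Returns the blinded allocation table plus a key mapping each label
    back to the true group; the key stays sealed until the analysis
    is complete.
    """
    rng = random.Random(seed)
    groups = sorted(set(allocations.values()))
    labels = [chr(ord("A") + i) for i in range(len(groups))]
    rng.shuffle(labels)  # so the analyst cannot guess which label is which
    group_to_label = dict(zip(groups, labels))
    blinded = {pid: group_to_label[g] for pid, g in allocations.items()}
    sealed_key = {label: group for group, label in group_to_label.items()}
    return blinded, sealed_key

alloc = {"P01": "intervention", "P02": "control", "P03": "intervention"}
blinded, key = blind_for_analysis(alloc, seed=7)
print(blinded)  # the analyst sees only labels "A" and "B"
```

The analyst works entirely from the blinded table; only after the full analysis is locked is the sealed key used to name the groups in the report.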

Despite careful consideration of methods to blind individuals in trials, situations will invariably arise when some or all groups of individuals simply cannot ethically be blinded. Surgical researchers must accept this reality and incorporate other strategies to minimize bias when blinding is not possible. When patients or clinicians cannot be blinded, trialists should ensure that the 2 (or more) allocation groups are, apart from the intervention, treated as equally as possible. This may involve standardizing the care of participants, such as cointerventions, frequency of follow-up and management of complications. Alternatively, researchers may choose to use an expertise-based trial design, in which patients are randomly assigned to different surgeons who each perform only one of the interventions.19 This type of RCT obviates the need for practitioner blinding because each clinician performs only the intervention in which he or she has expertise and is invested. Unfortunately, expertise-based trials do not address the potential biases that may be introduced by the lack of participant blinding and may not be appropriate for all research questions.

When data collectors or outcome adjudicators cannot be blinded, researchers should ensure that the outcomes being measured are as objective as possible. Furthermore, the outcomes should be reliable (although reliable outcomes are preferable whether or not the assessors are blinded). Finally, researchers should consider using duplicate assessment of outcomes and reporting the level of agreement achieved by the assessors.
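Agreement between duplicate assessors is often summarized with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A self-contained sketch with invented ratings:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two outcome assessors."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of items the assessors rated identically.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement: chance overlap given each assessor's marginal rates.
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two assessors independently grading the same 10 outcomes:
a = ["healed", "healed", "healed", "not", "not",
     "healed", "not", "healed", "healed", "not"]
b = ["healed", "healed", "not", "not", "not",
     "healed", "not", "healed", "healed", "healed"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # → kappa = 0.58
```

Here the assessors agree on 8 of 10 cases (80%), but because both rate "healed" often, about half that agreement would occur by chance, so kappa lands well below the raw agreement. Reporting kappa alongside the raw figure gives readers a more honest picture of assessment reliability.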

Even if researchers incorporate these methodologic precautions, they should acknowledge the limitations and potential biases introduced by the lack of blinding in the discussion section of the publication.

Blinding is an important methodologic feature of RCTs to minimize bias and maximize the validity of the results. Researchers should strive to blind participants, surgeons, other practitioners, data collectors, outcome adjudicators, data analysts and any other individuals involved in the trial. Useful tips for surgical researchers are provided in Box 1. Although few surgical trials currently incorporate blinding, it may be possible to achieve blinding using novel, creative techniques. If blinding is not possible, researchers should incorporate other methodologic safeguards but should understand and acknowledge the limitations of these strategies.

Blind as many individuals as possible in the trial
  • Participants (patients)

  • Practitioners (surgeons, nurses, dieticians, etc.)

  • Data collectors

  • Outcome adjudicators

  • Data analysts

Blinding may often be possible using simple techniques
  • If possible, do not inform patients of what group they are in

  • Conceal incisions and scars

  • Use independent outcome assessors

  • Alter digital radiographs or images

If blinding is not possible
  • Standardize the treatment of the groups (apart from the intervention)

  • Consider an expertise-based trial design

  • Use objective, reliable outcomes if possible

  • Consider duplicate assessment

  • Acknowledge the limitations

Competing interests: No funding was received in preparation of this paper. Dr. Bhandari was funded, in part, by a Canada Research Chair, McMaster University.

1. Doing more good than harm: the evaluation of health care interventions. Conference proceedings; New York (NY); 1993 Mar. 22–25. Ann N Y Acad Sci. 1993:1–341.

2. Sackett DL. Bias in analytic research. J Chronic Dis. 1979;32:51–63.

3. Altman DG, Schulz KF. Statistics notes: concealing treatment allocation in randomised trials. BMJ. 2001;323:446–7.

4. Schulz KF, Chalmers I, Hayes RJ, et al. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273:408–12.

5. Balk EM, Bonis PA, Moskowitz H, et al. Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA. 2002;287:2973–82.

6. Juni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ. 2001;323:42–6.

7. Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. Lancet. 2002;359:696–700.

8. Noseworthy JH, Ebers GC, Vandervoort MK, et al. The impact of blinding on the results of a randomized, placebo-controlled multiple sclerosis clinical trial. Neurology. 1994;44:16–20.

9. Devereaux PJ, Manns BJ, Ghali WA, et al. Physician interpretations and textbook definitions of blinding terminology in randomized controlled trials. JAMA. 2001;285:2000–3.

10. Kao LS, Aaron BC, Dellinger EP. Trials and tribulations. Current challenges in conducting clinical trials. Arch Surg. 2003;138:59–62.

11. Lilford R, Braunholtz D, Harris J, et al. Trials in surgery. Br J Surg. 2004;91:6–16.

12. McCulloch P, Taylor I, Sasako M, et al. Randomised trials in surgery: problems and possible solutions. BMJ. 2002;324:1448–51.

13. Yang H, Watson D, Lally C, et al. Randomized trial of division versus nondivision of the short gastric vessels during laparoscopic Nissen fundoplication: 10-year outcomes. Ann Surg. 2008;247:38–42.

14. Powell JT, Brown LC, Forbes JF, et al. Final 12-year follow-up of surgery versus surveillance in the UK Small Aneurysm Trial. Br J Surg. 2007;94:702–8.

15. Macklin R. The ethical problems with sham surgery in clinical research. N Engl J Med. 1999;341:992–6.

16. Karanicolas PJ, Bhandari M, Taromi B, et al. Blinding of outcomes in trials of orthopaedic trauma: an opportunity to enhance the validity of clinical trials. J Bone Joint Surg Am. 2008;90:1026–33.

17. Sackett DL. Measuring the success of blinding in RCTs: Don’t, must, can’t or needn’t. Int J Epidemiol. 2007;36:664–5.

18. Karanicolas PJ, Bhandari M, Walter SD, et al. Radiographs of hip fractures were digitally altered to mask surgeons to the type of implant without compromising the reliability of quality ratings or making the rating process more difficult. J Clin Epidemiol. 2009;62:214–23.

19. Devereaux PJ, Bhandari M, Clarke M, et al. Need for expertise based randomised controlled trials. BMJ. 2005;330:88.