Evidence Strength and Quality
Stakeholders’ perceptions of the quality and validity of evidence supporting the belief that the intervention will have desired outcomes.
Domain: Intervention Characteristics
Sources of evidence may include published literature, guidelines, anecdotal stories from colleagues, information from a competitor, patient experiences, results from a local pilot, and other sources [1][2]. Though there is no agreed-upon measure of “strong evidence,” there is empirical support for a positive association between evidence strength and dissemination [3]. Support for the role of evidence strength and quality in implementation is mixed, however [3]: though strong evidence is important, it is not always dominant in individual decisions to adopt, nor is it ever sufficient on its own [4].

Evidence supporting the use of an intervention may come from external sources (e.g., peer-reviewed literature) or internally from other sources that appear credible [2]. The PARiHS model lists three sources of evidence as key for research uptake: research studies, clinical experience, and patient experience [1]. External and internal evidence, including experience gained through piloting (see Trialability), may be combined to build a case for implementing an intervention [2]; the more sources of evidence used, the more likely it is that innovations will be taken up [5][6]. Credibility of the developers of the evidence, transparency of the process used to develop it (see Engaging), and intentionally mapping out the implementation (see Planning) to counterbalance negative and positive perceptions of the intervention by potential users are all important for effective implementation [7].
Inclusion Criteria
Include statements regarding awareness of evidence and the strength and quality of that evidence, as well as the absence of evidence or a desire for different types of evidence (e.g., pilot results rather than evidence from the literature).
- “We reviewed the literature and decided that things were kind of ambiguous.”
- “The CDC guidelines gave this an A1 recommendation.”
- “Patients lost a lot of weight when we did this.”
- “Our IV nurse went to a conference and heard Dr. X present and she came back all fired up.”
Exclusion Criteria
Exclude statements regarding the receipt of evidence as an engagement strategy, or double code them to Engaging: Key Stakeholders.
Exclude descriptions of the use of results from local or regional pilots, or double code them to Trialability.
Measures
Check out SIRC’s Instrument Review project, which has cataloged over 400 implementation-related measures, and its published systematic review protocol.
References
1. Rycroft-Malone J, Harvey G, Kitson A, McCormack B, Seers K, Titchen A: Getting evidence into practice: ingredients for change. Nurs Stand 2002, 16:38-43.
2. Stetler CB: Updating the Stetler Model of research utilization to facilitate evidence-based practice. Nurs Outlook 2001, 49:272-279.
3. Dopson S, FitzGerald L, Ferlie E, Gabbay J, Locock L: No magic targets! Changing clinical practice to become more evidence based. Health Care Manage Rev 2002, 27:35-47.
4. Fitzgerald L, Dopson S: Knowledge, credible evidence, and utilization. In Knowledge to Action? Evidence-Based Health Care in Context. Edited by Dopson S, Fitzgerald L. Oxford, UK: Oxford University Press; 2006:223.
5. Kitson A, Harvey G, McCormack B: Enabling the implementation of evidence based practice: a conceptual framework. Qual Health Care 1998, 7:149-158.
6. Rycroft-Malone J, Kitson A, Harvey G, McCormack B, Seers K, Titchen A, Estabrooks C: Ingredients for change: revisiting a conceptual framework. Qual Saf Health Care 2002, 11:174-180.
7. Graham ID, Logan J: Innovations in knowledge transfer and continuity of care. Can J Nurs Res 2004, 36:89-103.