The degree to which an intervention can be adapted, tailored, refined, or reinvented to meet local needs.

Adaptability relies on distinguishing the ‘core components’ (the essential and indispensable elements of the intervention itself) from the ‘adaptable periphery’ (adaptable elements, structures, and systems related to the intervention and to the organization into which it is being implemented) [1][2]. A component analysis can be performed to identify the core versus adaptable periphery components [3], but often the distinction can only be discerned through trial and error over time, as the intervention is disseminated more widely and adapted for a variety of contexts [4]. There is a real tension between achieving full and consistent implementation across multiple contexts and giving local sites the flexibility to adapt the intervention as needed; balancing the two is no small challenge [5].

Information about the core components and adaptable periphery can be used to assess “fidelity” [6]. The core components may be defined by a research protocol or “black-box” packaging, while the adaptable periphery may consist of factors that vary from site to site. For example, a computerized report system has a fundamental core that users cannot change, but it might be accessed from different launch points depending on the workflows of individual organizations. Greenhalgh et al. describe aspects of adaptability under “fuzzy boundaries” and “potential for reinvention” [1, pp. 596–597]. An intervention that can easily be modified to fit the setting is positively associated with implementation [7][8][9].

Inclusion Criteria

Include statements regarding the ability or inability to adapt the innovation to the local context, e.g., complaints about the rigidity of the protocol. Suggestions for improvement can be captured under this code but should not be included in the rating process, unless it is clear that the participant feels the change is needed but the program cannot be adapted.

Exclusion Criteria

Exclude, or double code to Compatibility, statements that the innovation did not need to be adapted.

Check out SIRC’s Instrument Review project and published systematic review protocol, which has cataloged over 400 implementation-related measures.

Note: As we become aware of measures, we will post them here. Please contact us with updates.

  1. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004, 82:581-629.
  2. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F: Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute; 2005.
  3. Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S: A conceptual framework for implementation fidelity. Implement Sci 2007, 2:40.
  4. Mendel P, Meredith LS, Schoenbaum M, Sherbourne CD, Wells KB: Interventions in organizational and community context: a framework for building evidence on dissemination and implementation in health services research. Adm Policy Ment Health 2008, 35:21-37.
  5. Perrin KM, Burke SG, O’Connor D, Walby G, Shippey C, Pitt S, McDermott RJ, Forthofer MS: Factors contributing to intervention fidelity in a multi-site chronic disease self-management program. Implement Sci 2006, 1:26.
  6. Denis JL, Hebert Y, Langley A, Lozeau D, Trottier LH: Explaining diffusion patterns for complex health care innovations. Health Care Manage Rev 2002, 27:60-73.
  7. Gustafson DH, Sainfort F, Eichler M, Adams L, Bisognano M, Steudel H: Developing and testing a model to predict outcomes of organizational change. Health Serv Res 2003, 38:751-776.
  8. Rogers E: Diffusion of Innovations. 5th edn. New York, NY: Free Press; 2003.
  9. Leeman J, Baernholdt M, Sandelowski M: Developing a theory-based taxonomy of methods for implementing change in practice. J Adv Nurs 2007, 58:191-200.