Consolidated Framework for Implementation Research

Overview

The CFIR is a framework for assessing context in terms of existing or potential barriers and facilitators to successful implementation. How the CFIR is used will depend on the type of evaluation you are conducting.

Stetler et al. (2006) outline the different types of evaluation that can be conducted while implementing a new program or process. The entries below organize the types of evaluation by:

  • Typology: Classification of the different types of evaluation
  • Descriptive Label (shown in parentheses after each typology name): Lay description of purpose and phase
  • Purpose: Purpose of each type of evaluation and the components of analysis
  • Examples: Published research using the CFIR in a similar evaluation type
Stetler et al. Evaluation Typology

Formative Research (Pre-Implementation Assessment)
  Purpose: Obtain diagnostic system-level information prior to the development of a future implementation study.
  Examples*: Damschroder & Lowery, 2013; Connell et al., 2014 (quantitative measures for two domains)

Formative Evaluation: Developmental Evaluation (Pre-Implementation Assessment and Adaptation)
  Purpose: Enhance the likelihood of implementation success through a diagnostic analysis of:
    1. The actual degree of less-than-best practice
    2. Determinants of current practice
    3. Potential barriers and facilitators to practice change and to implementation of the adoption strategy
    4. Strategy feasibility, including the perceived utility of the project

Formative Evaluation: Implementation-Focused Evaluation (Concurrent Implementation Assessment and Adaptation)
  Purpose: Optimize the likelihood of effecting change by resolving actionable barriers, enhancing identified levers of change, and refining components of the implementation intervention through an analysis of:
    1. Discrepancies between the implementation plan and its operationalization
    2. Influences not anticipated through developmental evaluation
  Examples*: Prior et al., 2014 (study protocol)

Formative Evaluation: Progress-Focused Evaluation (Concurrent Implementation Progress)
  Purpose: Optimize the intervention and/or reinforce progress via positive feedback to key players through an analysis of:
    1. Impacts and indicators of progress towards goals
  Examples*: Prior et al., 2014 (study protocol)

Formative Evaluation: Interpretive Evaluation (Post-Implementation Retrospective Evaluation of Implementation)
  Purpose: Provide working hypotheses to explain success or failure through an analysis of:
    1. Results from previous formative evaluation stages (above)
    2. Data collected at the end of the project on key stakeholder experiences
  Examples*: English et al., 2011; Zulman et al., 2013; Green et al., 2014

Summative Evaluation (Post-Implementation)
  Purpose: Determine the degree of success, effectiveness, or goal achievement of an implementation program through an analysis of:
    1. Data on impacts, outputs, products, or outcomes hypothesized in a study

*As we become aware of papers, we will add examples to this list.

Research questions must be fully articulated in order to determine which of the techniques listed below will be appropriate. In addition, budgetary and time constraints, sources of available data, and other factors may affect the applicability of these techniques.

The CFIR comprises 39 constructs organized into five domains. Assessing all 39 constructs in a single study is often impractical, so evaluations may focus on a subset of CFIR constructs.

In light of the research question and evaluation objectives, each construct can be evaluated for its likelihood of: 1) being a potential barrier (or facilitator) to implementation; or 2) having sufficient variation across the units of analysis (e.g., organizations). Regarding the second consideration, in our evaluation of the MOVE! Program the Peer Pressure construct within the Outer Setting domain lacked variation because all of our study sites were publicly funded medical centers and faced no market competition.
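As a rough sketch of this prioritization step, the snippet below scores candidate constructs on the two considerations above. The construct names are from the CFIR, but the 1-5 scales, the cutoff, and all scores are hypothetical choices for illustration.

```python
# Illustrative prioritization of CFIR constructs (all scales assumed).
# Each candidate is scored on: (1) likelihood of acting as a barrier or
# facilitator, and (2) expected variation across the units of analysis.
candidates = {
    # construct name: (likelihood 1-5, variation 1-5) -- hypothetical
    "Evidence Strength & Quality": (5, 4),
    "Leadership Engagement": (5, 5),
    "Complexity": (3, 4),
    "Peer Pressure": (4, 1),  # influential, but uniform across sites
}

CUTOFF = 3  # assumed minimum score on both considerations

focus = [
    name
    for name, (likelihood, variation) in candidates.items()
    if likelihood >= CUTOFF and variation >= CUTOFF
]
print("Constructs to assess:", focus)
# Peer Pressure drops out: as in the MOVE! example, a construct with no
# variation across sites cannot help explain differences between them.
```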

It is important to note that the CFIR does not need to be used to collect data; some researchers use open-ended data collection techniques and apply the CFIR only during analysis. In that case, the evaluation would not focus on any particular CFIR constructs prior to the analysis phase. If you do choose to focus your evaluation on a smaller subset of constructs, use sufficiently open-ended questions or other means to leave room for other constructs and influences to surface.

Three general approaches have been used to select constructs:

  • Group Deliberations: Each construct is evaluated with respect to its likely influence on implementation based on knowledge of the context. Consensus decision-making is used to prioritize constructs based on the considerations described above.
  • Stakeholder Survey: In a study of specialty care initiatives within Veterans Affairs Medical Centers (VAMCs), a survey of local stakeholders was used to identify important constructs. Users may adapt this approach for their own evaluation (a sketch of the aggregation step follows this list).
  • Theoretical Model or Framework: If you are using a specific theoretical model or framework, you may focus on constructs defined in that model or framework. A case example based on another published study of the MOVE! Program illustrates how this might be done.
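For the Stakeholder Survey approach, the sketch below aggregates hypothetical respondent ratings into a ranked list of constructs. The constructs shown, the 1-5 rating scale, and the ranking rule are assumptions for illustration, not the instrument used in the VAMC study.

```python
# Hypothetical stakeholder-survey aggregation: each respondent rates
# how important each CFIR construct is locally (assumed 1-5 scale);
# constructs are then ranked by mean rating to pick the focus set.
from statistics import mean

responses = [
    {"Leadership Engagement": 5, "Available Resources": 4, "Complexity": 2},
    {"Leadership Engagement": 4, "Available Resources": 5, "Complexity": 3},
    {"Leadership Engagement": 5, "Available Resources": 3, "Complexity": 2},
]

ranked = sorted(
    ((construct, mean(r[construct] for r in responses))
     for construct in responses[0]),
    key=lambda item: item[1],
    reverse=True,
)
for construct, score in ranked:
    print(f"{construct}: {score:.2f}")
```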

The first step in developing a sampling strategy is to define the unit of analysis. For example, the focus of your evaluation may be at the individual level, unit level (within an organization), or organizational level. Depending on the objectives of the evaluation, the sample may consist of individuals within a single organization or dozens of organizations.

The CFIR assesses potential barriers and facilitators within five domains. One domain focuses on Characteristics of Individuals (e.g., Knowledge and Beliefs about the Innovation); in this case, the unit of analysis may be the individual. Note: If this is your domain of focus, consider augmenting the CFIR with the Theoretical Domains Framework, which provides additional individual-level constructs to consider.

The remaining four CFIR domains rely on information elicited from individuals (and other sources such as policy documents, meeting notes, and site observations) that is aggregated into a collective perception about, e.g., the Evidence Strength & Quality for the innovation. Thus, the unit of analysis may be the organization or other collective-level entity (e.g., unit, team).
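One way to operationalize this roll-up is sketched below. The -2 (strong barrier) to +2 (strong facilitator) valence scale follows a convention used in some CFIR studies (e.g., Damschroder & Lowery, 2013); the data layout and the use of a mean as the summary rule are assumptions for illustration.

```python
# Sketch of aggregating individual-level ratings to the organizational
# unit of analysis using an assumed -2..+2 valence convention.
from collections import defaultdict
from statistics import mean

# (site, construct, rating) from individual interviews or documents
ratings = [
    ("Site A", "Evidence Strength & Quality", 2),
    ("Site A", "Evidence Strength & Quality", 1),
    ("Site B", "Evidence Strength & Quality", -1),
    ("Site B", "Evidence Strength & Quality", -2),
]

by_unit = defaultdict(list)
for site, construct, rating in ratings:
    by_unit[(site, construct)].append(rating)

# Report one collective valence per (site, construct) pair
for (site, construct), values in sorted(by_unit.items()):
    print(f"{site} | {construct}: mean valence {mean(values):+.1f}")
```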

Here are a few example criteria on which the selection of units to include in your sample could be based. At the individual level:

  • Organizational Role
  • Tenure
  • Profession

Analogous criteria (e.g., structural characteristics such as size) can be defined at the organizational or unit level.

An example sampling matrix can make these criteria concrete.
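A minimal sketch of such a matrix is below; it simply crosses one individual-level criterion (Organizational Role) with sites. All site names, roles, and interview counts are hypothetical.

```python
# Hypothetical sampling matrix: planned interviews crossing role with
# site, so that every cell of the sampling design is represented.
import pandas as pd

plan = pd.DataFrame(
    [
        ("Site A", "Physician", 3),
        ("Site A", "Nurse", 3),
        ("Site A", "Administrator", 2),
        ("Site B", "Physician", 3),
        ("Site B", "Nurse", 3),
        ("Site B", "Administrator", 2),
    ],
    columns=["site", "role", "planned_interviews"],
)

matrix = plan.pivot(index="role", columns="site", values="planned_interviews")
print(matrix)
```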

It is important to explore ways to share findings with stakeholders to inform refinements to current efforts or help design future implementation strategies. When possible, provide rapid feedback after each data collection point by conducting an abbreviated analysis before full analysis is complete. Feedback often consists of barriers and facilitators to implementation, as well as strategies to overcome barriers.
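A sketch of what such an abbreviated rapid-feedback analysis might compute is below: each site's constructs, ordered from most negative (barriers) to most positive (facilitators), before full analysis is complete. The coded-data layout and -2..+2 valences are assumptions carried over from the aggregation sketch above.

```python
# Abbreviated rapid-feedback summary after one data collection point:
# rank each site's constructs by mean valence to surface top barriers.
from collections import defaultdict
from statistics import mean

coded = [  # (site, construct, valence from -2 to +2) -- hypothetical
    ("Site A", "Available Resources", -2),
    ("Site A", "Available Resources", -1),
    ("Site A", "Leadership Engagement", 2),
    ("Site A", "Complexity", -1),
]

by_construct = defaultdict(list)
for site, construct, valence in coded:
    by_construct[(site, construct)].append(valence)

# Most negative first: these are the actionable barriers to feed back
for (site, construct), vals in sorted(
    by_construct.items(), key=lambda kv: mean(kv[1])
):
    print(f"{site}: {construct} ({mean(vals):+.1f})")
```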