The CFIR is a framework for assessing context in terms of existing or potential barriers and facilitators to successful implementation; how the CFIR is used will depend on the type of evaluation you are conducting.

Stetler et al. (2006) outline the different types of evaluation that can be conducted while implementing a new program or process. The table below organizes the types of evaluation by:

  • Typology: Classification of different types of evaluation
  • Descriptive Label: Layman’s description of purpose and phase
  • Purpose: Purpose of each type of evaluation and the components of analysis
  • Examples: Published research using the CFIR in a similar evaluation type

| Stetler et al. Evaluation Typology | Sub-typology | Descriptive Label | Purpose of the Research or Evaluation | Example Published Study* |
| --- | --- | --- | --- | --- |
| Formative Research | N/A | Pre-implementation Assessment | Obtain diagnostic system-level information prior to development of a future implementation study | Damschroder & Lowery, 2013; Connell et al., 2014 (quantitative measures for two domains) |
| Formative Evaluation (FE) | Developmental Evaluation | Pre-implementation Assessment and Adaptation | Enhance the likelihood of success of implementation through a diagnostic analysis of: actual degree of less-than-best practice; determinants of current practice; potential barriers and facilitators to practice change and to implementation of the adoption strategy; and strategy feasibility, including perceived utility of the project | |
| FE | Implementation-Focused Evaluation | Concurrent Implementation Assessment and Adaptation | Optimize the likelihood of effecting change by resolving actionable barriers, enhancing identified levers of change, and refining components of the implementation intervention through an analysis of: discrepancies between the implementation plan and its operationalization; and influences not anticipated through developmental evaluation | Prior et al., 2014 (study protocol) |
| FE | Progress-Focused Evaluation | Concurrent Implementation Progress | Optimize the intervention and/or reinforce progress via positive feedback to key players through an analysis of: impacts and indicators of progress towards goals | Prior et al., 2014 (study protocol) |
| FE | Interpretive Evaluation | Post-implementation Retrospective Evaluation of Implementation | Provide working hypotheses to explain success or failure through an analysis of: results from previous formative evaluation stages (above); and data collected at the end of the project on key stakeholder experiences | English et al., 2011; Zulman et al., 2013; Green et al., 2014 |
| Summative Evaluation | N/A | Post-implementation | Determine the degree of success, effectiveness, or goal achievement of an implementation program through analysis of: data on impacts, outputs, products, or outcomes hypothesized in a study | |

*As we become aware of papers, we will add examples to this table.

Research questions must be fully articulated in order to determine which of the techniques listed below will be appropriate. In addition, budgetary and time constraints, sources of available data, and other factors may affect the applicability of these techniques.

The CFIR comprises 39 constructs organized into five domains. It is often not practical to assess all 39 constructs in a single study; therefore, evaluations may focus on a subset of CFIR constructs.

While considering the research question and evaluation objectives, each construct can be evaluated for its likelihood of: 1) being a potential barrier (or facilitator) to implementation; or 2) having sufficient variation across the units of analysis (e.g., organizations). Regarding the second consideration, in our evaluation of the MOVE! Program, the Peer Pressure construct within the Outer Setting domain lacked variation because all of our study sites were publicly funded medical centers without market competition.

It is important to note that the CFIR does not need to be used to collect data; some researchers use open data collection techniques and only use the CFIR in analysis. In this case, the evaluation would not focus on any particular CFIR constructs prior to the analysis phase. If you do choose to focus your evaluation on a smaller subset of constructs, sufficiently open-ended questions or other means should be used to explore the possibility of other constructs and influences.

Three general approaches have been used to select constructs:

  • Group Deliberations: Each construct is evaluated with respect to its likely influence on implementation based on knowledge of the context. Consensus decision-making is used to prioritize constructs based on the considerations described above.
  • Stakeholder Survey: A survey of local stakeholders was used in a study of specialty care initiatives within Veterans Affairs Medical Centers (VAMCs) to identify important constructs. Users may adapt this approach for their own evaluation.
  • Theoretical Model or Framework: If you are using a specific theoretical model or framework, you may focus on constructs defined in that model or framework. Click here to see how this might be done using a case example based on another published study of the MOVE! Program.
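A group-deliberation prioritization like the first approach above can be sketched as a simple scoring exercise. The construct names below are real CFIR constructs, but the raters, the 0–2 rating scale, and the scoring rule are hypothetical; this is a minimal sketch, assuming each team member rates every candidate construct on its likely influence on implementation and its expected variation across units of analysis.

```python
# Hypothetical consensus-scoring sketch for narrowing the CFIR to a subset
# of constructs. Each rater scores every construct on two criteria:
#   influence: likelihood of being a barrier or facilitator (0-2)
#   variation: expected variation across units of analysis (0-2)
ratings = {
    "Evidence Strength & Quality": [(2, 2), (2, 1), (1, 2)],  # (influence, variation) per rater
    "Peer Pressure":               [(1, 0), (0, 0), (1, 0)],
    "Leadership Engagement":       [(2, 2), (2, 2), (1, 1)],
    "Complexity":                  [(1, 1), (2, 1), (1, 1)],
}

def mean(xs):
    return sum(xs) / len(xs)

def prioritize(ratings, top_n=2):
    """Rank constructs by mean influence plus mean variation; keep the top_n."""
    scored = {
        construct: mean([i for i, _ in rs]) + mean([v for _, v in rs])
        for construct, rs in ratings.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

print(prioritize(ratings))
# Peer Pressure drops out, mirroring the MOVE! example above: low expected
# variation makes a construct a weak candidate for inclusion.
```

In practice the scores would come from structured group discussion rather than a formula, but the two criteria being traded off are the same ones described above.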

The first step in developing a sampling strategy is to define the unit of analysis. For example, the focus of your evaluation may be at the individual level, unit level (within an organization), or organizational level. Depending on the objectives of the evaluation, the sample may consist of individuals within a single organization or dozens of organizations.

The CFIR assesses potential barriers and facilitators within five domains. One of the domains focuses on Characteristics of Individuals (e.g., Knowledge and Beliefs about the Innovation); in this case, the unit of analysis may be the individual. Note: if this is your domain of focus, consider augmenting the CFIR with the Theoretical Domains Framework, which provides additional constructs to consider.

The remaining four CFIR domains rely on information elicited from individuals (and other sources such as policy documents, meeting notes, and site observations) that is aggregated into a collective perception about, e.g., the Evidence Strength & Quality for the innovation. Thus, the unit of analysis may be the organization or other collective-level entity (e.g., unit, team).
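The roll-up described above can be sketched as follows. The -2 to +2 valence scale (strong barrier to strong facilitator), the sites, and all ratings are hypothetical illustrations of aggregating individual-level perceptions into an organization-level rating for one construct.

```python
# Hypothetical sketch: aggregate individual interviewees' ratings of a single
# construct (e.g., Evidence Strength & Quality) into one rating per site,
# making the organization the unit of analysis. All data are illustrative.
site_ratings = {
    "Site A": [2, 1, 2],    # three interviewees at Site A
    "Site B": [-1, -2, 0],  # three interviewees at Site B
}

def site_level(ratings):
    """Collapse individual ratings into a mean rating per site."""
    return {site: round(sum(r) / len(r), 1) for site, r in ratings.items()}

print(site_level(site_ratings))  # {'Site A': 1.7, 'Site B': -1.0}
```

A real analysis would typically reconcile divergent individual accounts qualitatively (and triangulate with documents and observations) rather than simply averaging, but the direction of aggregation is the same: individual data in, one collective-level rating out.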

Here are a few example criteria for selecting the units to include in your sample:

At the organizational or unit level:

At the individual level:

  • Organizational Role
  • Tenure
  • Profession

Click here for an example sampling matrix.
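A sampling matrix like the one linked above can be sketched as a cross-tabulation of units against individual-level criteria. The sites, roles, and per-cell targets below are hypothetical placeholders, not part of any published matrix.

```python
# Hypothetical sampling-matrix sketch: planned number of interviewees per
# (site, organizational role) cell. Sites, roles, and counts are illustrative.
sites = ["Site A", "Site B", "Site C"]
roles = ["Program coordinator", "Physician", "Front-line staff"]

# Target two interviewees per cell so each role is represented at each site.
matrix = {site: {role: 2 for role in roles} for site in sites}

def total_sample(matrix):
    """Total planned interviews across all cells of the matrix."""
    return sum(n for row in matrix.values() for n in row.values())

print(total_sample(matrix))  # 3 sites x 3 roles x 2 = 18
```

Filling the matrix cell by cell during recruitment makes it easy to see which combinations of site and role (or tenure, or profession) are still under-represented.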

It is important to explore ways to share findings with stakeholders to inform refinements to current efforts or help design future implementation strategies. When possible, provide rapid feedback after each data collection point by conducting an abbreviated analysis before full analysis is complete. Feedback often consists of barriers and facilitators to implementation, as well as strategies to overcome barriers.