CFIR & Implementation Research

The five steps below guide users through an entire project using CFIR, from the design of the study through dissemination of findings. These steps are based on the manuscript The Consolidated Framework for Implementation Research (CFIR) User Guide: A five-step guide for conducting implementation research using the framework. To ensure successful use of CFIR, we recommend involving a qualitative methodologist and/or analyst with experience in implementation science methods and/or CFIR.

Step 1: Study Design

1A: Define Research Question and Implementation Outcome

Users must first define their research question; CFIR is a determinant framework that can be used prospectively to assess determinants of anticipated implementation outcomes (outcomes that have not yet occurred) and/or retrospectively to assess determinants of actual implementation outcomes (outcomes that have occurred) (Damschroder, Reardon, Opra Widerquist, et al. 2022). Some projects use CFIR both prospectively and retrospectively, i.e., both looking forward to predict future outcomes and looking back to explain current outcomes. Consequently, users must define their research question to appropriately collect and analyze data using CFIR.

The CFIR outcomes addendum broadly conceptualizes implementation outcomes as measuring the success or failure of implementation, i.e., the innovation being implemented and delivered as intended in the Inner Setting. Anticipated implementation outcomes are based on perceptions or measures of the likelihood of future implementation success or failure. These outcomes are prospective; constellations of CFIR determinants across domains may predict these outcomes. Actual implementation outcomes are based on perceptions or measures of current (or past) implementation success or failure. These outcomes are retrospective; constellations of CFIR determinants across domains may explain these outcomes (Damschroder, Reardon, Opra Widerquist, et al. 2022). 

Prospective and Retrospective Example Research Questions and Implementation Outcomes

Prospective Assessment (i.e., before adoption, implementation, or sustainment occurs)
  • Research Question: What barriers and facilitators influence anticipated implementation outcomes?
  • CFIR Implementation Determinant Domains: Barriers and facilitators related to the Innovation, Outer Setting, Inner Setting, Individuals, and Implementation Process
  • Implementation Outcome(s): Adoptability, Implementability, Sustainability

Retrospective Assessment (i.e., after adoption, implementation, or sustainment has occurred)
  • Research Question: What barriers and facilitators influenced actual implementation outcomes?
  • CFIR Implementation Determinant Domains: Barriers and facilitators related to the Innovation, Outer Setting, Inner Setting, Individuals, and Implementation Process
  • Implementation Outcome(s): Adoption, Implementation, Sustainment

Implementation (and innovation) outcomes (and how they are measured) are project-specific and therefore outside the scope of this guidance (see FAQ: What is the most appropriate implementation outcome for my project? and FAQ: What is the most appropriate innovation outcome for my project?). However, it is critical that each project identify an appropriate implementation outcome. This allows users to identify constructs that distinguish between implementation success and failure – constructs that are “difference-makers” – highlighting the most important barriers to be addressed by future implementation strategies (see FAQ: How do I use CFIR to select implementation strategies?) or explaining how implementation strategies and constructs interact (see FAQ: How do I use CFIR to compare the effectiveness of different implementation strategies?). Finally, implementation is not successful unless it is equitable. Ensuring equitable implementation success requires use of equity-focused implementation process models and measurement frameworks (Nilsen and Birken 2020; P. Gustafson et al. 2023; Bradley et al. 2024; Allen et al. 2021).

1B. Define CFIR (Implementation Determinant) Domains

CFIR implementation determinants capture barriers and facilitators across five broad domains: 1) Innovation; 2) Outer Setting; 3) Inner Setting; 4) Individuals: Roles & Characteristics; and 5) Implementation Process (Damschroder, Reardon, Widerquist, et al. 2022). Updated guidance urges users to clearly define each domain as well as the boundaries between domains specific to each project. This allows users to make accurate attributions to implementation outcomes (Pinnock et al. 2017) and thus identify appropriate areas for future intervention. For example, if the boundary between the innovation and implementation process is not clearly defined and implementation fails, it will be impossible to know if implementation failed due to characteristics of the innovation (i.e., there was something wrong with the innovation) or the implementation strategy(ies) (i.e., there was something wrong with the implementation strategy(ies)).

CFIR Implementation Determinant Domains

Innovation
  • What is the innovation being implemented and evaluated? What are its components and features (Campbell et al. 2018; Hoffmann et al. 2014)?
  • What is the boundary between the innovation and the process or strategy being used to implement the innovation?
  • What is the (intended) innovation outcome for Innovation Recipients, Innovation Deliverers, and High-level Leaders/Key Decision-Makers?

Individuals: Roles & Characteristics
  • Who are the individuals involved with implementing, delivering, and/or receiving the innovation? What are their roles? What are their characteristics?

Inner Setting & Outer Setting
  • Where is implementation and delivery of the innovation occurring? What is the boundary between the Inner Setting (the unit of analysis and location where the innovation is being implemented) and the Outer Setting (the area outside of the Inner Setting)?

Implementation Process
  • What is the implementation process? Is implementation being guided by a specific implementation strategy or process model (Nilsen and Birken 2020), e.g., the Knowledge to Action Framework (Field et al. 2014), Getting To Outcomes (Chinman et al. 2018), or Getting To Implementation (Rogal et al. 2020)? What are its components and features? What is the boundary between the innovation and the process or strategy being used to implement the innovation?

Step 2: Data Collection

2A: Determine Data Collection Approach

Both qualitative and quantitative methods can be used to collect data on CFIR determinants, and projects often integrate the two in mixed-methods designs. While data collection on CFIR determinants often relies on qualitative methods, such as semi-structured interviews or focus groups, quantitatively focused surveys with open-ended text boxes have been developed more recently to complement interview methods (Rosenblum et al. 2023; Robinson and Damschroder 2022; Fernandez et al. 2018). There are pros and cons to every data collection approach. For example, CFIR surveys may be less resource-intensive for the project team and decrease participant burden, potentially allowing for wider participation; however, they yield little or no qualitative data, and instruments have yet to be widely validated. In addition, survey questions rely on a priori questions and assumptions, whereas qualitative methods allow interviewers to ask new questions in direct response to answers.

2B: Develop Data Collection Instruments

We do not recommend including a question about every CFIR construct in data collection instruments. In addition to increasing the length of the instrument, which adds burden for participants, not all constructs are relevant for every project. After defining the research question, each construct should be assessed for its likelihood of 1) being a potential barrier or facilitator to the innovation being implemented and delivered or 2) having sufficient variation across the units of analysis (i.e., the Inner Settings). Relevant constructs may be identified by:

In addition to CFIR-based questions, open-ended, non-construct-specific questions must be included to explore the possibility of other determinants or influences not captured in CFIR, e.g., “Why is [Inner Setting] implementing [Innovation]?” CFIR Construct Example Questions along with broader implementation questions are available; these questions must be customized to meet the needs of the project and can then be used in data collection instruments.

Following development of your data collection instrument, we recommend piloting the instrument with project team members, operational partners, and/or individuals with direct knowledge of the innovation and/or implementing setting, and when using qualitative methods, iteratively updating instruments as data collection progresses. 

It is important to note that CFIR does not always need to be used to design data collection instruments. Many researchers use open data collection techniques and apply CFIR during data analysis and/or interpretation phases of the project; however, this may increase the risk of missing important determinants (Kirk et al. 2015).

2C: Develop Sampling Strategy

Although CFIR is used to collect data from individuals, information from individual respondents is aggregated to understand constructs at the Inner Setting (i.e., unit of analysis) level. As a result, the first step in developing a sampling strategy is guided by how the Inner Setting is defined for the project. Depending on the objectives of the project, the sample may consist of individuals from a single Inner Setting or dozens of Inner Settings. For example, if you are conducting a quality improvement project to improve implementation in a single Inner Setting, the sample would only include individuals from that location. In contrast, if you are conducting a study to compare determinants across different Inner Settings and/or implementation strategies (see FAQ: How do I use CFIR to compare the effectiveness of different implementation strategies?), the sample may include individuals from dozens of locations. The following attributes may be useful to develop a purposeful sample (Palinkas et al. 2015) at the Inner Setting level:

After selecting the Inner Settings to be assessed, CFIR should be used to collect data from individuals who have influence and/or power related to implementation and/or delivery of the innovation in the Inner Setting; purposeful sampling (Palinkas et al. 2015) will often include the key decision-makers and individuals implementing and/or delivering the innovation in the Inner Setting, though individuals in external roles, e.g., national level leaders, may sometimes be able to speak to implementation determinants in the Inner Setting (Damschroder, Reardon, Opra Widerquist, et al. 2022; Damschroder, Reardon, Widerquist, et al. 2022). Innovation recipients, e.g., patients or students, are only appropriate to include in the sample of an implementation research study if they have insights into barriers or facilitators to implementation of the innovation in the Inner Setting (see FAQ: Should I use CFIR to collect data from innovation recipients (e.g., patients, students)?). Snowball sampling (i.e., asking current respondents for the names of other relevant individuals to collect data from) (Palinkas et al. 2015) may help identify all the appropriate individuals. The following attributes may be useful to develop a diverse sample at the individual level:

2D: Conduct Data Collection

It is outside the scope of this guidance to offer specific direction around collecting data, and there are many high-quality sources on conducting interviews (Brinkmann 2013; Adler et al. 1995) and focus groups (Acocella and Cataldi 2021), completing observations (Mays and Pope 1995; D. Oswald et al. 2014; Fix et al. 2022) and ethnographies (Palinkas and Zatzick 2019; Haines et al. 2022), obtaining periodic reflections (Finley et al. 2018), gathering archival data (Grant 2022), and administering surveys (Rea and Parker 2014). 

The in-depth qualitative CFIR approach is the most resource-intensive, yet yields the most detailed data, and may be best for use in theory building. The rapid qualitative CFIR approach is less resource-intensive, requiring fewer analyst hours and eliminating the cost of transcription, but requires experienced analysts to simultaneously conduct interviews and write and align (“code”) notes with CFIR constructs. This rapid approach yields bigger picture (i.e., less detailed) data compared to more in-depth qualitative analysis (Nevedal et al. 2021; Reardon and Nevedal 2021). Qualitative data from open-ended text boxes from surveys can be analyzed similar to interview data (Nevedal et al. 2020; Reardon et al. 2023), but likewise typically offers fewer in-depth insights.


Step 3: Data Analysis

3A: Determine Data Analysis Approach

Using CFIR often relies on in-depth qualitative analysis methods: completing deductive (codes derived from CFIR constructs) and inductive (codes derived from the data) coding of transcripts using qualitative software, aggregating coded data by construct in detailed Inner Setting memos, and rating each construct as -2 (strong barrier) to +2 (strong facilitator) to implementation (Damschroder et al. 2011; Damschroder, Reardon, Sperber, et al. 2017; Damschroder, Reardon, AuYoung, et al. 2017; Cannon et al. 2019). An Inner Setting Memo Template is available. However, newer approaches have evolved, including rapid qualitative analysis of interview data (Nevedal et al. 2021; Reardon and Nevedal 2021) and open-ended survey data (Nevedal et al. 2020; Reardon et al. 2023; Nevedal et al. 2024) to help reduce time and effort needed for coding and analyses. Many projects use mixed methods, employing both qualitative and quantitative approaches. 

It is also possible to analyze CFIR data from surveys quantitatively (Rosenblum et al. 2023).

Data Collection & Analysis: Trade-offs based on approach*

Approach 1 (Qualitative)
  • Data Collection: Interviews
  • Data Analysis: In-Depth Qualitative Analysis
  • Coding: CFIR-based deductive and inductive coding of interview transcripts using qualitative software
  • Data Aggregation: Inner Setting memos containing full data set
  • Rating: Strength and valence assessments based on construct summaries in Inner Setting memos

Approach 2 (Qualitative)
  • Data Collection: Interviews
  • Data Analysis: Rapid Qualitative Analysis
  • Coding: CFIR-based deductive and inductive coding of detailed interview note summaries and audio recordings
  • Data Aggregation: Inner Setting matrix column containing summarized data set
  • Rating: Strength and valence assessments based on construct summaries in matrix

Approach 3 (Quantitative)
  • Data Collection: Surveys
  • Data Analysis: Quantitative Analysis** (descriptive and inferential statistics examining associations with Inner Setting characteristics)

Tradeoffs across approaches
  • Participant Burden: High for Approaches 1 and 2 (time to complete interview); Low for Approach 3 (time to complete survey)
  • Analyst Hours: High (Approach 1); Medium (Approach 2); Low-Medium (Approach 3)
  • Analyst CFIR Expertise: Medium-High (Approach 1); High (Approach 2: simultaneous data collection and coding with no transcript); Low (Approach 3)
  • Transcription Delay + Cost: Yes (Approach 1); No (Approach 2); N/A (Approach 3)
  • Level of Detail: High (Approach 1: transcript + recording; lengthy quotations); Medium-High (Approach 2: recording only; short quotations); No or limited qualitative data (Approach 3)
  • Rigor: High (Approaches 1 and 2); Medium (Approach 3: survey not validated against interview)

3B: Conduct Data Analysis

CFIR provides the initial structure for a qualitative codebook, and detailed CFIR Construct Coding Guidelines are available. These guidelines should be operationalized for each project and further developed throughout the coding process by adding new inductively identified constructs and subconstructs as needed. In addition to coding individual CFIR constructs, analysts can employ causation coding (Saldaña 2015) and relationship coding (Kerins et al. 2020; Nevedal et al. 2020) to capture how constructs interact within a project. Causation coding helps identify potential causal links between constructs, while relationship coding captures both unidirectional and bidirectional relationships between constructs.

We recommend having at least two analysts depending on the scope and intensity of the project. Analysts use a consensus-based and iterative process that involves group and independent coding and resolving discrepancies through discussion (C. E. Hill et al. 1997; 2005). If project resources preclude two independent coders for the entire data set, analysts may be able to code independently after achieving consensus on a smaller training dataset (e.g., 10% of transcripts). 

After coding, data should be aggregated by unit of analysis (i.e., Inner Setting) and CFIR construct. Queries can be developed in qualitative software to aggregate data, and the Inner Setting Memo Template can help with summarizing data. Note: If conducting rapid qualitative analysis, data is aggregated during coding in the CFIR Construct x Inner Setting Matrix Template via a building approach as interviews progress. See a previous publication (Nevedal et al. 2021) and presentation (Reardon and Nevedal 2021) for more detail on completing rapid qualitative analysis using CFIR (and how it compares to the in-depth qualitative approach).
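The aggregation step can be sketched with a small, hypothetical example. The site names, construct labels, and excerpts below are invented for illustration; a real project would export coded excerpts from its qualitative software rather than hand-enter them:

```python
# Illustrative only: aggregate coded interview excerpts by Inner Setting and
# CFIR construct, mimicking the queries a qualitative software package runs.
import pandas as pd

excerpts = pd.DataFrame([
    {"inner_setting": "Site A", "construct": "Relative Advantage",
     "excerpt": "Staff saw clear benefits over the old program."},
    {"inner_setting": "Site A", "construct": "Relative Advantage",
     "excerpt": "Leadership compared it favorably to usual care."},
    {"inner_setting": "Site B", "construct": "Leadership Engagement",
     "excerpt": "The chief never attended implementation meetings."},
])

# Collect all excerpts for each Inner Setting x construct pair; the result can
# feed an Inner Setting memo (or a matrix column, for rapid analysis).
memo_input = (excerpts
              .groupby(["inner_setting", "construct"])["excerpt"]
              .apply(list)
              .reset_index())
print(memo_input)
```

The same grouping logic applies whether the rows come from transcripts, interview note summaries, or open-ended survey responses.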

Aggregating data facilitates summarizing and rating data for each construct; ratings are especially useful when there are at least three Inner Settings and there is interest in comparing constructs across Inner Settings based on implementation outcomes. Detailed CFIR Construct Rating Guidelines are available. These guidelines should be operationalized for each project to ensure consistency across ratings for each construct and Inner Setting. As with coding, we recommend using a consensus-based approach to finalize ratings. Depending on the project, it may not be helpful to rate the data, or users may wish to collapse ratings into a binary, e.g., barrier vs. facilitator, and only complete the valence (+ vs -) component of rating.
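As a minimal sketch of the rating step, the snippet below stores -2 to +2 ratings per Inner Setting and construct and collapses them to the binary valence described above. All site names, constructs, and values are hypothetical:

```python
# Illustrative CFIR construct ratings: -2 (strong barrier) to +2 (strong
# facilitator), keyed by (Inner Setting, construct). Data are hypothetical.
ratings = {
    ("Site A", "Relative Advantage"): 2,
    ("Site A", "Leadership Engagement"): -1,
    ("Site B", "Relative Advantage"): -2,
    ("Site B", "Leadership Engagement"): 1,
}

def valence(rating):
    """Collapse a -2..+2 rating to its valence: barrier vs. facilitator."""
    if rating < 0:
        return "barrier"
    if rating > 0:
        return "facilitator"
    return "neutral"

collapsed = {key: valence(r) for key, r in ratings.items()}
print(collapsed[("Site A", "Relative Advantage")])  # facilitator
```

Keeping the full -2..+2 values preserves strength information; collapsing to valence is a deliberate simplification, as noted above.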

It is outside the scope of this guidance to offer specific guidance around analysis of quantitative CFIR data (e.g., from Likert items).


Step 4: Data Interpretation

4A: Align Implementation Determinants & Outcomes

In order to identify constructs that distinguish between Inner Settings (i.e., unit of analysis) with high and low implementation success – constructs that are “difference-makers” – it is necessary to integrate data on implementation determinants and outcomes. The CFIR Construct x Inner Setting Matrix Template is designed to compare construct ratings (with short summaries of the data and supporting rationale) within and across each Inner Setting in a project. Ratings along with supporting qualitative data can be added for each time point (e.g., pre-implementation, post-implementation) and for each data source (e.g., interviews, surveys, observations), creating a matrix that aggregates the entire data set. This process is similar to a matrixed multiple case study approach (Kim et al. 2020).
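A minimal, hypothetical sketch of building such a matrix from long-format ratings (the sites, constructs, time points, and values below are all invented):

```python
# Illustrative Construct x Inner Setting matrix built from long-format ratings
# (one row per site, construct, and time point). Real matrices would also carry
# short data summaries and rationale alongside each rating.
import pandas as pd

long_ratings = pd.DataFrame([
    {"construct": "Relative Advantage",    "site": "Site A", "time": "pre",  "rating": 1},
    {"construct": "Relative Advantage",    "site": "Site A", "time": "post", "rating": 2},
    {"construct": "Relative Advantage",    "site": "Site B", "time": "pre",  "rating": -2},
    {"construct": "Leadership Engagement", "site": "Site A", "time": "pre",  "rating": 0},
    {"construct": "Leadership Engagement", "site": "Site B", "time": "pre",  "rating": -1},
])

# Rows = constructs, columns = (Inner Setting, time point), mirroring the
# matrix template's layout.
matrix = long_ratings.pivot_table(index="construct",
                                  columns=["site", "time"],
                                  values="rating")
print(matrix)
```

Additional data sources (e.g., surveys, observations) can be added as further column levels in the same way.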

The precise method for consolidating qualitative data and ratings across time points will vary, depending on your research aims. However, aggregate ratings are not a simple average of existing ratings. There is a danger of oversimplifying complex, dynamic descriptions of implementation processes and contexts when applying ratings. We strongly encourage reliance on the underlying qualitative data in addition to the aggregate ratings. We recommend using a consensus-based process, with at least two analysts aggregating ratings and resolving discrepancies through discussion (C. E. Hill et al. 1997; 2005). These discussions provide rich rationale for the ratings; therefore, it is important to clearly document the considerations and final rationale.

4B: Determine Data Interpretation Approach

Visual Comparison

With a small sample size, analysts can identify distinguishing constructs visually by sorting the matrix by implementation outcome. For example, in an implementation research study of the VA’s MOVE! Weight Management Program (Damschroder and Lowery 2013), the pattern of ratings (-2, +1, +1, +2, +2) for Relative Advantage appeared to be different between the lower and higher implementation facilities, highlighting that implementation strategies for MOVE! should include effective communication about the Relative Advantage of the program.
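This kind of visual sort can be sketched as follows; the site names, ratings, and outcome values are hypothetical, chosen only to mirror a pattern like the one above:

```python
# Illustrative visual comparison: re-order Inner Settings from lowest to
# highest implementation outcome, then eyeball which construct ratings
# distinguish low from high implementers. All data are hypothetical.
import pandas as pd

matrix = pd.DataFrame(
    {"Site 1": [-2, 1], "Site 2": [1, 1], "Site 3": [1, -1],
     "Site 4": [2, 0], "Site 5": [2, 1]},
    index=["Relative Advantage", "Leadership Engagement"],
)
outcome = pd.Series({"Site 1": 3, "Site 2": 10, "Site 3": 12,
                     "Site 4": 25, "Site 5": 30})  # e.g., program reach

# Columns ordered by implementation outcome, low to high.
ordered = matrix[outcome.sort_values().index]
print(ordered.loc["Relative Advantage"].tolist())  # [-2, 1, 1, 2, 2]
```

Here Relative Advantage rises with the outcome while Leadership Engagement shows no clear gradient, so only the former would be flagged for follow-up.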

Correlational Analysis or Regression Modeling

With sufficient sample size, analysts can identify distinguishing constructs by calculating the correlation between construct ratings and implementation outcomes. For example, in an implementation research study of the VA’s Telephone Lifestyle Coaching (TLC) program, distinguishing constructs were identified based on correlational analyses with a priori determined cut-offs (Damschroder, Reardon, Sperber, et al. 2017). The presence of enthusiastic and capable TLC program Implementation Leaders (r = .65; p = .03) and effective strategies for Engaging: Key Stakeholders (PCPs and other staff) (r = .66; p = .03) were strongly correlated with implementation success. In addition, with enough statistical power, analysts can use multivariable regression or other more advanced modeling methods to assess the associations between constructs and outcomes, especially if quantitative measures are used to collect the data. In particular, data reduction methods (e.g., principal component analysis) or tree-based approaches (e.g., XGBoost) may help with wide dataset analysis.
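A hypothetical version of this correlational screen is sketched below with invented ratings and outcomes; a real analysis would use statistical software (e.g., scipy.stats.pearsonr or R) to obtain p-values alongside r:

```python
# Illustrative screen for "difference-maker" constructs: correlate each
# construct's ratings with an implementation outcome across Inner Settings
# and flag constructs above an a priori cut-off. All data are hypothetical.

def pearson_r(x, y):
    """Plain-Python Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

outcome = [3, 10, 12, 25, 30]  # one implementation outcome value per site
ratings = {
    "Relative Advantage":    [-2, 1, 1, 2, 2],
    "Leadership Engagement": [1, 1, -1, 0, 1],
}

R_CUTOFF = 0.5  # a priori threshold, set to suit the study design
distinguishing = {c: pearson_r(v, outcome) for c, v in ratings.items()
                  if abs(pearson_r(v, outcome)) >= R_CUTOFF}
print(distinguishing)  # Relative Advantage only
```

Note that with only a handful of Inner Settings such correlations are fragile, which is why the underlying qualitative data should always accompany the numbers.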

Configurational Comparative Methods (CCMs)

With sufficient sample size, analysts can identify “paths” or “recipes” of distinguishing constructs using configurational comparative methods (CCMs), e.g., Coincidence Analysis (CNA), Qualitative Comparative Analysis (QCA) (Haesebrouck and Thomann 2022; Baumgartner and Falk 2023). CCMs consider “equifinality,” meaning that more than one combination of positively (or negatively) rated CFIR constructs may lead to success, as well as causal complexity, where constructs combine in unique ways to produce or not produce an outcome (Cragun et al. 2016). For example, in an implementation research study on VA access-related projects, coincidence analysis found that two CFIR constructs, engagement with External High-Level Leaders (i.e., national VA operations) and commitment from Internal High-Level Leaders (local facility leadership), were “difference-makers”; the presence of either (not both) of these constructs consistently led to full or partial implementation of an access-related project (Dodge et al. 2023).
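The toy example below illustrates equifinality only; it is not a substitute for dedicated CCM software (e.g., the R cna or QCA packages), and the cases and condition names are invented:

```python
# Toy illustration of equifinality, NOT an actual CNA/QCA analysis: check
# whether the "recipe" (external leader engagement OR internal leader
# commitment) accounts for the outcome in a set of hypothetical cases.
cases = [
    {"external": True,  "internal": False, "implemented": True},
    {"external": False, "internal": True,  "implemented": True},
    {"external": True,  "internal": True,  "implemented": True},
    {"external": False, "internal": False, "implemented": False},
]

def solution(case):
    # Two alternative "paths" to the outcome: either condition suffices.
    return case["external"] or case["internal"]

consistent = all(solution(c) == c["implemented"] for c in cases)
print(consistent)  # True: the OR recipe accounts for every case
```

Real CCM software additionally searches the space of candidate recipes and reports consistency and coverage statistics rather than checking a single hand-picked solution.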


Step 5: Knowledge Dissemination

5A: Plan Knowledge Dissemination

Planning dissemination early can help ensure that you collect data that is meaningful to the audience of interest. Visualization approaches may include a traditional narrative that includes descriptions of the findings and representative quotes, a matrix of key barriers and facilitators with exemplar quotes, a table of frequencies of various barriers and facilitators, a “joint display” in which the visual combination of the qualitative and quantitative results draws out new insights (Guetterman et al. 2015), or an implementation research logic model that highlights key barriers and their associations with outcomes and strategies (J. D. Smith et al. 2020). Case reports are also sometimes used, with one summary for each Inner Setting in the project. Regardless of the approach chosen, we recommend summarizing barriers and facilitators that influence success and any recommendations for next steps or approaches to address barriers and leverage facilitators.

5B: Disseminate Knowledge

It is outside the scope of this guidance to offer specific direction around disseminating knowledge, and there are many high-quality sources on responsible (Ravinetto and Singh 2023), effective (Ashcraft et al. 2020), and innovative (Ross-Hellauer et al. 2020) knowledge dissemination.
