Qualitative Data
Interview Guide
Qualitative data can be collected through semi-structured interviews with stakeholders, using an interview guide based on CFIR constructs. The online interview guide tool allows you to select constructs and questions from a menu of options and then produces an editable interview guide.
Observation Template
Site visits often generate qualitative data that you can analyze using the CFIR. The Microsoft Excel observation template facilitates taking notes organized by CFIR construct, which can be an efficient way to provide rapid feedback to stakeholders.
Meeting Notes Template
If regular meetings with stakeholders generate meeting notes, those notes can be analyzed as data. The Microsoft Excel meeting notes template facilitates taking notes organized by CFIR construct; like the observation template, it offers an efficient way to provide rapid feedback to stakeholders.
Coding Data
During the analysis phase of research, analysts should be blinded to implementation outcomes to avoid bias. A Microsoft Word codebook template that is pre-populated with CFIR definitions and coding guidelines can help with coding qualitative data. Please contact us with any additional guidance, challenges, or questions regarding the CFIR Construct Definitions so we can continue to improve the CFIR.
An NVivo project template, pre-populated with CFIR construct codes, is available; please contact us to obtain a copy. Note that the NVivo file includes two additional constructs that several projects have found helpful: Engaging: Key Stakeholders (e.g., staff) and Engaging: Innovation Participants (e.g., patients), both under the Process domain. The code list also includes codes for three organizations; these codes organize the data and facilitate queries that can be used to develop case memos. The NVivo queries aggregate coded data by CFIR construct and organization. To accommodate more than three organizations, additional organization codes will need to be added and the queries updated. If you use other qualitative analysis software (e.g., MAXQDA, ATLAS.ti) and are willing to share, please send us a template and we will post it to the site.
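For teams working outside NVivo, the sketch below illustrates in Python the kind of aggregation these queries perform: grouping coded excerpts by CFIR construct and organization so each case memo section has its supporting data in one place. The records and field names are hypothetical stand-ins for whatever your software exports.

```python
# A minimal sketch of the aggregation performed by the NVivo queries.
# The excerpt records and field names below are hypothetical; in practice
# they would come from your qualitative software's export.
from collections import defaultdict

coded_excerpts = [
    {"organization": "Site A", "construct": "Relative Advantage", "excerpt": "..."},
    {"organization": "Site A", "construct": "Leadership Engagement", "excerpt": "..."},
    {"organization": "Site B", "construct": "Relative Advantage", "excerpt": "..."},
]

# Group coded excerpts by (CFIR construct, organization), mirroring the
# per-construct, per-organization queries used to build case memos.
grouped = defaultdict(list)
for row in coded_excerpts:
    grouped[(row["construct"], row["organization"])].append(row["excerpt"])

for (construct, org), excerpts in grouped.items():
    print(f"{construct} @ {org}: {len(excerpts)} excerpt(s)")
```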
Aggregating Data
After data have been coded, data aggregation queries (see Coding Data above) can be used to create case memos. The Microsoft Word memo template facilitates aggregating, summarizing, and rating data. One memo is typically created for each unit of analysis (e.g., organization) at each time point (e.g., pre-implementation, post-implementation). The memo is organized by CFIR construct and includes the following elements:
- Space for each analyst (a minimum of two is recommended) to apply ratings based on the qualitative data from individual transcripts and the organization as a whole. See the Rating Rules PDF for guidance on assigning ratings.
- Space to summarize all of the data for each construct, e.g., how it manifests in the organization.
- Space to write a rationale for the given rating.
- Space to copy text data directly from the NVivo queries.
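For analysts who prefer to work programmatically, the memo elements above can also be represented as a simple data structure. This Python sketch is illustrative only; the field names are ours, not part of the Word template.

```python
# Illustrative sketch of the memo elements as a data structure; field
# names are hypothetical, not taken from the Word memo template itself.
from dataclasses import dataclass, field

@dataclass
class ConstructMemo:
    construct: str                     # CFIR construct name
    unit: str                          # unit of analysis, e.g., organization
    time_point: str                    # e.g., "pre-implementation"
    ratings: dict[str, int] = field(default_factory=dict)   # analyst -> rating
    summary: str = ""                  # how the construct manifests
    rationale: str = ""                # rationale for the given rating
    quotes: list[str] = field(default_factory=list)          # text copied from queries
```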
Rating Data
Once summary statements and supporting data have been documented in your memo (see Aggregating Data above), you can rate the construct. Review the Rating Rules PDF for guidance. Ratings are useful for identifying constructs that appear to influence implementation in your study. In addition, rating the data adds significant value to the project because the results can be added to a repository, which allows researchers to synthesize findings more reliably across studies, building an urgently needed knowledge base.
The Microsoft Excel matrix template is designed to compare ratings (with short summaries of the supporting rationale from your analyses) within and across your units of analysis. We take ratings from our memos and enter them into the Excel matrix. Ratings and data are added for each time point (e.g., pre-implementation, post-implementation) and each data source (e.g., semi-structured interviews, meeting notes, site observations), ultimately creating a matrix that aggregates the entire data set.
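As an illustration, the same kind of matrix can be built programmatically. The sketch below uses pandas (our choice for illustration; the template itself is Excel) with hypothetical organizations, ratings, and column names.

```python
# Sketch of the ratings matrix: one row per construct, one column per
# organization / time point / data source. All values are hypothetical.
import pandas as pd

ratings = pd.DataFrame([
    ("Site A", "Relative Advantage", "pre",  "interviews",   2),
    ("Site A", "Relative Advantage", "post", "observations", 1),
    ("Site B", "Relative Advantage", "pre",  "interviews",  -1),
], columns=["organization", "construct", "time_point", "data_source", "rating"])

matrix = ratings.pivot_table(
    index="construct",
    columns=["organization", "time_point", "data_source"],
    values="rating",
)
print(matrix)
```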
The precise method for consolidating qualitative data and ratings across time points will vary depending on your evaluation aims. However, aggregate ratings are not a simple average of existing ratings. We have used a process similar to consensus-based coding: analysts apply a summary rating, taking all of the individual ratings and the supporting qualitative summaries and rationales into consideration, and then discuss the ratings to reach consensus. These discussions provide rich rationale for the ratings, so it is important to clearly document the considerations and final rationale.
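The sketch below illustrates one way to support this process programmatically: rather than averaging, it simply flags constructs whose analyst ratings diverge, so the team knows where discussion to consensus is needed. The function name, tolerance, and ratings are hypothetical.

```python
# Sketch of consensus support: surface disagreements for discussion rather
# than computing an average. Names and values here are hypothetical.
def needs_discussion(analyst_ratings, tolerance=0):
    """Return True when analyst ratings differ by more than `tolerance` points."""
    values = list(analyst_ratings.values())
    return max(values) - min(values) > tolerance

ratings = {"analyst_1": 2, "analyst_2": -1}
if needs_discussion(ratings):
    print("Discuss to consensus and document the rationale:", ratings)
```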
There is a danger of oversimplifying complex, dynamic descriptions of implementation processes and contexts when applying ratings. We strongly encourage reliance on the underlying qualitative data in addition to the aggregate ratings.
A common evaluation objective is to identify constructs that appear to distinguish between organizations with high and low implementation success, which provides insights into the key constructs that influence implementation. This information can be used to design implementation strategies for individual or larger-scale implementation efforts.
After applying ratings at the appropriate levels (see Rating Data above), analysts can be unblinded to implementation outcomes in order to identify distinguishing constructs.
Analysts can identify patterns by sorting sites by implementation outcome. This approach is particularly useful with small samples. For example, in our evaluation of the MOVE! weight management program, the pattern of ratings (-2, +1, +1, +2, +2) for Relative Advantage (within the Innovation Characteristics domain) appeared to differ between the lower- and higher-implementation facilities (see table below). This suggests that implementation strategies for MOVE! should include effective communication about its Relative Advantage compared to other available options.
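A minimal sketch of this sort-and-scan approach follows, using the Relative Advantage ratings reported above; the site names and outcome values are hypothetical.

```python
# Sort sites by implementation outcome, then scan the construct ratings
# for a pattern. Site names and outcome values are hypothetical; the
# Relative Advantage ratings match the pattern reported in the text.
sites = [
    ("Facility C", 0.9, +1),   # (site, outcome measure, Relative Advantage rating)
    ("Facility A", 0.2, -2),
    ("Facility E", 2.1, +2),
    ("Facility B", 0.8, +1),
    ("Facility D", 1.7, +2),
]

sites.sort(key=lambda s: s[1])  # order sites from lowest to highest outcome
for name, outcome, rating in sites:
    print(f"{name}: outcome={outcome}, Relative Advantage={rating:+d}")
```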

If sample size is sufficient, analysts can conduct simple correlation analyses between construct ratings and outcomes (e.g., referral rates) by organization. For example, in our evaluation of a telephone-based lifestyle coaching program, we identified distinguishing constructs based on correlational analyses with a priori cut-offs. See table below.
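A minimal sketch of such a correlation analysis follows; the ratings, referral rates, and 0.5 cut-off are hypothetical illustrations, not values from the study.

```python
# Correlate construct ratings with an outcome across organizations and
# apply an a priori cut-off. All numbers here are hypothetical.
import numpy as np

construct_ratings = np.array([-2, -1, 0, 1, 2, 2])    # one rating per organization
referral_rates    = np.array([3.0, 4.1, 5.2, 7.8, 9.0, 9.5])

r = np.corrcoef(construct_ratings, referral_rates)[0, 1]
CUTOFF = 0.5  # a priori threshold for calling a construct "distinguishing"
print(f"r = {r:.2f}", "-> distinguishing" if abs(r) >= CUTOFF else "-> not distinguishing")
```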

Analysts can use qualitative comparative analysis (QCA) to provide in-depth information about clusters of constructs that contribute to the success or failure of implementation. QCA allows for equifinality, meaning that more than one combination of positively (or negatively) rated constructs may lead to success (Kahwati et al., 2011; Kane et al., 2014; Cragun et al., 2016), and can be a powerful approach for building knowledge. We are currently using QCA methods to analyze ratings across six studies that all used similar evaluation approaches.
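For intuition, the toy sketch below shows only the truth-table step of crisp-set QCA: ratings are dichotomized and cases are grouped by their combination of conditions, so distinct combinations leading to success illustrate equifinality. The data are hypothetical, and real QCA (including formal minimization of the truth table) requires dedicated software.

```python
# Toy truth-table step from crisp-set QCA. Dichotomize construct ratings
# (positive vs. not) and group cases by their combination of conditions.
# Cases and ratings are hypothetical.
from collections import defaultdict

cases = [
    # (site, {construct: rating}, implemented_successfully)
    ("Site A", {"Relative Advantage": 2,  "Leadership Engagement": 1},  True),
    ("Site B", {"Relative Advantage": -1, "Leadership Engagement": 2},  True),
    ("Site C", {"Relative Advantage": -2, "Leadership Engagement": -1}, False),
]

truth_table = defaultdict(list)
for site, ratings, success in cases:
    row = tuple(sorted((c, int(r > 0)) for c, r in ratings.items()))
    truth_table[row].append((site, success))

# Two different successful rows (Site A vs. Site B) illustrate equifinality.
for row, members in truth_table.items():
    print(row, "->", members)
```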
If sample size is sufficient, analysts can use regression or other more sophisticated models, especially if quantitative measures are used.
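As an illustration, a simple least-squares regression of an outcome on construct ratings might look like the following sketch; the ratings and outcome values are hypothetical, and a real analysis would use a properly specified model for your design.

```python
# Sketch of regressing an outcome on ratings for two constructs using
# ordinary least squares via numpy. All data are hypothetical.
import numpy as np

X = np.array([[-2, 1], [0, 2], [1, 0], [2, 2], [2, 1]], dtype=float)  # ratings for two constructs
y = np.array([3.1, 6.0, 5.5, 9.2, 8.4])                               # outcome, e.g., referral rate

X1 = np.column_stack([np.ones(len(X)), X])          # add an intercept column
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("intercept and construct coefficients:", coefs)
```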