Consolidated Framework for Implementation Research

Ideas for the Future: Learning From Others

This part of the website offers ideas for the future, and we very much want your feedback.

Imagine if findings from numerous evaluations that all used the CFIR were collected in a repository with these features:

  • Qualitative data were available to describe how each construct manifested during the course of an implementation;
  • Quantitative ratings or measures were available for each construct for each entity; and
  • Information were available about which techniques are recommended, or which techniques were associated with improvements in ratings, for each construct.
[Image: example matrix of site-by-construct ratings]

What is all this data?

  • Row 1: Study ID
  • Row 2: Number of sites included in each study
  • Row 3: Designation of site as L(ow), M(edium), or H(igh) implementation effectiveness. Each study had its own defined implementation outcome and sites were rated relative to one another. This is particularly salient when sites were sampled to maximize variation with respect to outcomes (e.g., lowest and highest sites in a system).
  • Row 5+: Ratings by site and by CFIR construct

Some important things to note about this matrix of data (one possible representation is sketched after this list):

  • Some constructs for some sites do not have a rating;
  • Some constructs are rated zero across all sites; and
  • Some constructs have non-numeric ratings (e.g., “E”), which signifies variation in how the rating process was operationalized.
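To make this concrete, here is one way such a matrix might be represented. The site names, constructs, and ratings below are entirely hypothetical, and the -2 to +2 rating scale and pandas layout are our assumptions, not a prescribed format:

```python
import numpy as np
import pandas as pd

# Hypothetical ratings matrix: rows are sites, columns are CFIR constructs.
# We assume a -2..+2 rating scale; NaN marks a construct that was not rated
# at a site, and "E" flags variation in how the rating was operationalized.
ratings = pd.DataFrame(
    {
        "Adaptability":            [2, 1, -1, np.nan],
        "Leadership Engagement":   [1, -2, 0, 2],
        "Reflecting & Evaluating": [2, 2, -1, "E"],
    },
    index=["Site A", "Site B", "Site C", "Site D"],
)
# Per-study designation of implementation effectiveness: L(ow)/M(edium)/H(igh).
ratings["Effectiveness"] = ["H", "H", "L", "M"]
print(ratings)
```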

Nonetheless, there is much we can learn from this type of data. For example, a Qualitative Comparative Analysis (QCA) could be conducted to explore the following questions:

  • Which constructs are necessary for success? A construct is necessary for success if it is always rated positively in successful (i.e., “High” implementation effectiveness) sites.
  • Which constructs are sufficient for success? A construct is sufficient for success if a site is always successful when its rating for the construct is positive. (A minimal computational sketch of both measures follows.)
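Here is the sketch referenced above, computing both proportions from hypothetical binary ratings (the data and column names are assumptions for illustration only):

```python
import pandas as pd

# Hypothetical binary data: 1 = positive rating / successful site, 0 = not.
df = pd.DataFrame({
    "reflecting_evaluating": [1, 1, 1, 1, 0, 1],
    "high_implementation":   [1, 1, 0, 1, 1, 0],
})

success = df["high_implementation"] == 1
positive = df["reflecting_evaluating"] == 1

# Necessity: share of successful sites that had a positive rating.
necessity = (success & positive).sum() / success.sum()

# Sufficiency: share of positively rated sites that were successful.
sufficiency = (success & positive).sum() / positive.sum()

print(f"Necessity:   {necessity:.0%}")   # 75% for this toy data
print(f"Sufficiency: {sufficiency:.0%}") # 60% for this toy data
```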

The following constructs may be found to be necessary or sufficient at varying rates:

High Implementation

Construct                    Necessity   Sufficiency
Adaptability                    67%          56%
Design Quality & Packaging      57%          75%
Networks & Communications       67%          70%
Compatibility                   71%          68%
Leadership Engagement           62%          72%
Available Resources             43%          60%
Reflecting & Evaluating         76%          89%

For example, 76% of sites with high implementation effectiveness had positive ratings for the Reflecting & Evaluating construct (necessity). Conversely, 89% of sites with a positive rating for Reflecting & Evaluating had high implementation effectiveness (sufficiency). This finding suggests that it is especially important to ensure positive ratings for Reflecting & Evaluating: implementation strategies should include techniques that establish or improve ratings related to this construct. When the dataset is of sufficient size, QCA can also be used to identify multiple pathways to success.
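To make the "multiple pathways" idea concrete, here is a minimal sketch of the truth-table step at the core of crisp-set QCA. All construct names and binary data are hypothetical, and a real analysis would use purpose-built QCA software rather than this toy grouping:

```python
import pandas as pd

# Hypothetical binary conditions (1 = positive rating) and outcome.
sites = pd.DataFrame({
    "leadership": [1, 1, 0, 0, 1, 0],
    "reflecting": [1, 0, 1, 1, 1, 0],
    "resources":  [0, 1, 1, 0, 1, 0],
    "success":    [1, 1, 1, 0, 1, 0],
})

# Truth table: group sites by their configuration of conditions and
# compute each configuration's consistency with the outcome.
truth_table = (
    sites.groupby(["leadership", "reflecting", "resources"])["success"]
    .agg(n_sites="size", consistency="mean")
    .reset_index()
)

# Configurations with consistency 1.0 are candidate pathways to success;
# a full QCA would then minimize these into simpler Boolean expressions.
print(truth_table[truth_table["consistency"] == 1.0])
```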

Now imagine that qualitative descriptions are available for each construct along with recommendations from real-world scenarios. For example, if you clicked on a cell related to Reflecting & Evaluating, you might learn this from one of the studies:


Manifestation in Low Implementation Sites:

Neither facility had time or space in which to reflect on successes or issues related to MOVE!, nor were the programs actively evaluated. However, monthly regional phone calls with the other facility and the regional coordinators do provide limited time and opportunity for reflection:

…nobody is directly communicating with me from leadership so as far as I know, they pretty much let us go and do our own thing. On a national level, I know that they are asking for feedback and numbers and reports…I am unaware of anyone on a local level asking for similar information about the program. [MOVE! Coord; 500]

Manifestation in High Implementation Sites:

Staff at both facilities take time to reflect and evaluate in team meetings. To varying degrees, they knew the data and reflected on how the program could be improved or expanded:

…on Tuesday after our sessions, every other Tuesday we do have a meeting with all the members of our group to discuss successes and other things that need to be and this nurse brings us a printout of the patients who were seen and how long they’re seen and what their weight is and which way they’re heading ...— improving or worse in their weight loss and things like that…we know exactly how many visits the patients had and the success and everything like that… everyone participates…we carve out like 20 minutes to 30 minutes maximum to meet, to discuss obstacles, to discuss problems, to discuss you know, things that need to be discussed for us to be able to run this program properly. [MOVE Coord; 400]

Facilities actively elicit feedback from patients and make changes based on that feedback.

Recommendations based on empirical data:

  • Arrange for regular team meetings to review progress in terms of process (e.g., enrollment) and outcomes (e.g., weight loss).
  • Budget time to purposively reflect on and review progress and brainstorm improvements.
How useful is this information? How would you use it? Please share your thoughts.

Now imagine a visual display of “profiles” for each site based on ratings of CFIR constructs.

[Image: synthesis table of site profiles by CFIR construct ratings]

You could “match” your profile with another entity, or a small sample of entities, in the repository and then drill down to any available qualitative data for insights that may help inform your own implementation strategy.
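As a rough sketch of how such matching might work, assuming profiles are stored as numeric rating vectors (the site names, ratings, and nearest-neighbor distance measure are all illustrative assumptions):

```python
import numpy as np
import pandas as pd

# Hypothetical construct-rating profiles for repository sites and your own site.
repository = pd.DataFrame(
    {
        "Adaptability":            [2, -1, 1],
        "Leadership Engagement":   [1, -2, 2],
        "Reflecting & Evaluating": [2, -1, 0],
    },
    index=["Site A", "Site B", "Site C"],
)
my_profile = pd.Series(
    {"Adaptability": 1, "Leadership Engagement": 2, "Reflecting & Evaluating": 1}
)

# Euclidean distance between your profile and each repository site;
# the closest sites are candidates for drilling down into qualitative data.
distances = np.sqrt(((repository - my_profile) ** 2).sum(axis=1))
print(distances.sort_values().head(2))  # the two most similar sites
```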

How would you foresee using these data in your own work? How useful would this kind of tool be? Please share your thoughts with us.
