Consolidated Framework for Implementation Research
Ideas for the Future: Learning From Others
Imagine a repository containing findings from numerous evaluations that all used the CFIR, with these features:
What is all this data?
Some important things to note about this matrix of data:
Nonetheless, there is much we can learn from this type of data. For example, a Qualitative Comparative Analysis (QCA) could be conducted to explore the following questions:
The following constructs may be found to be necessary or sufficient at varying rates:
|Construct|% of high-implementation sites with a positive rating|% of positively rated sites with high implementation|
|---|---|---|
|Design Quality & Packaging|57%|75%|
|Networks & Communications|67%|70%|
|Reflecting & Evaluating|76%|89%|
For example, 76% of sites with high implementation effectiveness had positive ratings for the Reflecting & Evaluating construct. In the other direction, 89% of sites with a positive rating for Reflecting & Evaluating had high implementation effectiveness. Together, these findings suggest that a positive rating for Reflecting & Evaluating is strongly associated with implementation success, so implementation strategies should include techniques to establish or improve ratings related to this construct. QCA can also be used to identify multiple pathways to success when the dataset is of sufficient size.
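As a minimal sketch (using made-up binary ratings, not data from the repository described here), the two kinds of percentages above can be computed from site-level ratings like this:

```python
# Hypothetical data: one tuple per site.
sites = [
    # (Reflecting & Evaluating rated positive?, high implementation effectiveness?)
    (True, True), (True, True), (False, True), (True, False),
    (False, False), (True, True), (False, False), (True, True),
]

high_sites = [s for s in sites if s[1]]
positive_sites = [s for s in sites if s[0]]

# Necessity-style percentage: share of high-implementation sites
# that had a positive rating for the construct.
necessity = sum(1 for s in high_sites if s[0]) / len(high_sites)

# Sufficiency-style percentage: share of positively rated sites
# that achieved high implementation effectiveness.
sufficiency = sum(1 for s in positive_sites if s[1]) / len(positive_sites)

print(f"necessity = {necessity:.0%}, sufficiency = {sufficiency:.0%}")
```

With a full repository, the same two proportions would be tabulated per construct to fill in a table like the one above; formal QCA tools would then look for combinations of constructs that are jointly necessary or sufficient.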
Now imagine that qualitative descriptions are available for each construct along with recommendations from real-world scenarios. For example, if you clicked on a cell related to Reflecting & Evaluating, you might learn this from one of the studies:
How useful is this information? How would you use it?
Manifestation in Low Implementation Sites

Neither facility had time or space in which to reflect on successes or issues related to MOVE!, nor were the programs actively evaluated. However, monthly regional phone calls with other facilities and the regional coordinators do provide limited time and opportunity for reflection:

…nobody is directly communicating with me from leadership so as far as I know, they pretty much let us go and do our own thing. On a national level, I know that they are asking for feedback and numbers and reports…I am unaware of anyone on a local level asking for similar information about the program. [MOVE! Coord; 500]

Manifestation in High Implementation Sites

Staff at both facilities take time to reflect and evaluate in team meetings. To varying degrees, they knew the data and reflected on how the program could improve:

…on Tuesday after our sessions, every other Tuesday we do have a meeting with all the members of our group to discuss successes and other things that need to be and this nurse brings us a printout of the patients who were seen and how long they’re seen and what their weight is and which way they’re heading ...— improving or worse in their weight loss and things like that…we know exactly how many visits the patients had and the success and everything like that… everyone participates…we carve out like 20 minutes to 30 minutes maximum to meet, to discuss obstacles, to discuss problems, to discuss you know, things that need to be discussed for us to be able to run this program properly. [MOVE Coord; 400]

Facilities actively elicit feedback from patients and make changes based on that feedback.

Recommendations based on empirical data

Arrange for regular team meetings to review progress in terms of process (e.g., enrollment) and outcomes (e.g., weight loss).

Budget time to purposively reflect on and review progress and brainstorm improvements.
Now imagine a visual display of “profiles” for each site based on ratings of CFIR constructs.
You could “match” your profile with another entity or a small sample of entities in the repository, then drill down to any available qualitative data for insights that may help inform your own implementation strategy.
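A minimal sketch of what such profile matching could look like, assuming sites are represented as dictionaries of binary construct ratings (all names and ratings below are hypothetical, not drawn from any actual repository):

```python
def profile_similarity(a, b):
    """Fraction of shared constructs on which two profiles agree."""
    shared = set(a) & set(b)
    return sum(a[c] == b[c] for c in shared) / len(shared)

# Hypothetical profile for "your" site: 1 = positive rating, 0 = negative.
my_profile = {"Networks & Communications": 1,
              "Reflecting & Evaluating": 0,
              "Design Quality & Packaging": 1}

# Hypothetical repository of rated sites.
repository = {
    "Site A": {"Networks & Communications": 1,
               "Reflecting & Evaluating": 0,
               "Design Quality & Packaging": 0},
    "Site B": {"Networks & Communications": 0,
               "Reflecting & Evaluating": 1,
               "Design Quality & Packaging": 1},
}

# Find the repository site whose profile most resembles yours.
best = max(repository, key=lambda s: profile_similarity(my_profile, repository[s]))
print(best)  # → Site A (agrees on 2 of 3 constructs)
```

A real tool would use the full set of CFIR constructs and likely a graded (not binary) rating scale, but the matching idea is the same: rank repository sites by similarity, then surface their qualitative data.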
How would you foresee using these data? How useful would this kind of tool be in your work? Please share your thoughts with us.