Innovation Trialability

The innovation can be tested or piloted on a small scale and undone.

The original CFIR elaborated further on this construct, stating that the ability to test the innovation on a small scale (Greenhalgh et al., 2004) and to reverse course (undo implementation) if warranted (Feldstein & Glasgow, 2008) are important potential determinants of implementation outcomes. The ability to trial is a key feature of the plan-do-study-act quality improvement cycle, which allows users to find ways to increase coordination to manage interdependence (Leeman et al., 2007; Rabin et al., 2008). Piloting gives individuals and groups experience with the innovation and time to reflect on and test it (Rycroft-Malone et al., 2002). Usability testing (with deliverers and recipients) promotes successful adaptation of the innovation (Feldstein & Glasgow, 2008). (See Implementation Process: Engaging and Doing.)

Inclusion Criteria

Include statements related to whether the site piloted the innovation in the past or has plans to in the future, and comments about whether they believe it is (im)possible to conduct a pilot. Include descriptions of smaller trials of the innovation before widespread implementation or use of information from local or regional pilots.

  • “There is no way we can pilot something like that.”
  • “I think it would take me a while to get there, but knowing we are able to try it, I think I would certainly be willing to try with some help, trying a few patients on it as long as I know they’ve got follow-up care.”

Exclusion Criteria

Exclude, or double code to Innovation Evidence-Base, descriptions of the use of results from local or regional pilots.

Regarding quantitative measurement of this construct: In a systematic review of quantitative measures related to implementation, Lewis et al. identified six measures (Lewis, Mettert, & Lyon, 2021). Using PAPERS measurement quality criteria with an aggregate scale ranging from -9 to +36 (Lewis, Mettert, Stanick, et al., 2021), the highest score was 3, indicating the need for continued development of high-quality measures.

Note: As we become aware of measures, we will post them here. Please contact us with updates.

Feldstein, A. C., & Glasgow, R. E. (2008). A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Jt Comm J Qual Patient Saf, 34(4), 228–243.

Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Q, 82(4), 581–629.

Leeman, J., Baernholdt, M., & Sandelowski, M. (2007). Developing a theory-based taxonomy of methods for implementing change in practice. J Adv Nurs, 58(2), 191–200.

Lewis, C. C., Mettert, K. D., Stanick, C. F., Halko, H. M., Nolen, E. A., Powell, B. J., & Weiner, B. J. (2021). The psychometric and pragmatic evidence rating scale (PAPERS) for measure development and evaluation. Implementation Research and Practice, 2, 263348952110373.

Lewis, C. C., Mettert, K., & Lyon, A. R. (2021). Determining the influence of intervention characteristics on implementation success requires reliable and valid measures: Results from a systematic review. Implementation Research and Practice, 2, 263348952199419.

Rabin, B. A., Brownson, R. C., Haire-Joshu, D., Kreuter, M. W., & Weaver, N. L. (2008). A glossary for dissemination and implementation research in health. J Public Health Manag Pract, 14(2), 117–123.

Rycroft-Malone, J., Kitson, A., Harvey, G., McCormack, B., Seers, K., Titchen, A., & Estabrooks, C. (2002). Ingredients for change: Revisiting a conceptual framework. Quality and Safety in Health Care, 11(2), 174–180.