New publication highlights application of RE-AIM framework to evaluate effects of community engagement work
A collaboration between Community Engagement and Evaluation & Improvement cores helps demonstrate impact in community settings.
Bidirectional community-engaged work and research are critical to the mission of hubs around the country funded by Clinical and Translational Science Awards. However, these activities and interventions still require rigorous evaluation to determine their efficacy and real-world impact. A collaborative effort between two core groups at the Southern California Clinical and Translational Science Institute (SC CTSI) took on that challenge.
The Community Engagement (CE) and Evaluation & Improvement (EI) cores used the RE-AIM framework to evaluate their community engagement work. RE-AIM, which stands for Reach, Effectiveness, Adoption, Implementation, and Maintenance, is a public health framework used to plan, implement, and evaluate programs.
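As a rough illustration of how a program might organize its metrics along those five dimensions, the sketch below groups hypothetical workshop indicators by RE-AIM category. The field names and example values are assumptions made for illustration, not the SC CTSI team's actual instrument.

```python
# Illustrative only: hypothetical indicators grouped by RE-AIM dimension.
# These names are assumptions, not the SC CTSI team's actual measures.
from dataclasses import dataclass, field


@dataclass
class ReAimRecord:
    """One workshop's evaluation data, organized by RE-AIM dimension."""
    reach: dict = field(default_factory=dict)           # e.g., attendance, demographics
    effectiveness: dict = field(default_factory=dict)   # e.g., satisfaction, knowledge gain
    adoption: dict = field(default_factory=dict)        # e.g., number of partner sites
    implementation: dict = field(default_factory=dict)  # e.g., fidelity, delivery costs
    maintenance: dict = field(default_factory=dict)     # e.g., sessions sustained over time


# Example record for a single (hypothetical) workshop
workshop = ReAimRecord(
    reach={"attendees": 42, "language": "Spanish"},
    effectiveness={"mean_satisfaction": 4.6},
)
```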

The team applied RE-AIM to their community-based health education workshops, delivered in English and Spanish across Los Angeles, drawing on participant surveys and facilitator feedback data. Ultimately, they found that the Spanish-language workshops drew higher attendance than the English-language workshops and that their attendees reported greater satisfaction. The Spanish workshops also ranked higher in short-term effectiveness.
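To make that kind of comparison concrete, a minimal sketch of tabulating attendance and satisfaction by workshop language might look like the following. The column names and values are hypothetical and do not come from the published dataset.

```python
# Hypothetical sketch of a language comparison like the one described above;
# the column names and data values are assumptions, not the published data.
import pandas as pd

surveys = pd.DataFrame({
    "workshop_id":  [1, 1, 2, 2, 3],
    "language":     ["Spanish", "Spanish", "English", "English", "Spanish"],
    "satisfaction": [5, 4, 4, 3, 5],  # e.g., a 1-5 rating from participant surveys
})

# Summarize attendance (survey count) and mean satisfaction per language
summary = surveys.groupby("language").agg(
    attendees=("workshop_id", "size"),
    mean_satisfaction=("satisfaction", "mean"),
)
print(summary)
```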
Their publication appears in the Journal of Clinical and Translational Science. The work was led by Brian Do-Golden, MPH, CHES®, Research and Evaluation Analyst for the CE core at SC CTSI. He developed and operationalized the RE-AIM approach, analyzed data, prepared visualizations, and served as first author of the manuscript. It was important to Do-Golden and the CE team that their method reflect real-world needs and perspectives and be reusable for other activities.
Do-Golden acknowledges that evaluating community-engaged work is often challenging and complex.
“[Evaluation] is context dependent, utilizes a mix of data types, and is difficult to make meaningful comparisons,” he added. “Our team wanted to develop a structured, transparent, and replicable approach that could help us, and potentially other programs, more clearly demonstrate impact, compare findings across contexts, identify areas for improvement, and make data informed decisions. There was a need for an evaluation process that was rigorous but still adaptable to realities of working with and in the community.”
During the collaboration, the team encountered a surprising aspect: how much nuance emerged when they applied the approach to real-world community data. The CE core had a great deal of data from long-running, well-established programs, so once they applied the RE-AIM framework, they could see ways to improve program adaptation for the community workshops and to apply the framework in other contexts.
Do-Golden credits the team’s daily community work (showing up, building trust, creating opportunities, and gathering the data that enabled them to test and refine the method) as the shared vision that made the project and publication possible.
“The most rewarding part was seeing a complex idea transform into a practical tool that our team can use regularly. It was especially meaningful to lead a project based on the hard work our dedicated team is doing out in the community that not only strengthened our internal evaluation capacity but also contributed something useful to the broader community engagement and clinical and translational science fields,” he added.
Going forward, the teams will build on this work by refining their approach based on lessons from the early implementation. They plan to expand the approach to other initiatives and to create a dashboard for real-time monitoring. They would also like to develop a toolkit so that other programs and CTSA hubs can use and adapt the approach at their own sites.