Introduction
A throughline of TrestleLink services is understanding and replicating what works. If science outreach and engagement efforts are not held to a high standard, we risk wasting time, squandering resources, and doing real harm to relationships and reputations, even if unintentionally.
Problem
Many science communication and policy initiatives have long resisted the notion that programmatic activities can, and should, be rigorously evaluated. As a result, it is difficult to establish whether funded efforts are achieving their intended impact. Although our academies routinely expect quantifiable metrics of success, evaluation remains far too infrequent because many practitioners are uncertain about how to measure processes and outcomes.
Solution
Our partners benefit from working with consultants experienced in developing a robust data backbone that enables routine assessment of project progress. This includes creating systems for tracking capacity development, planning and executing outreach, and evaluating engagement strategies. Common evaluation activities include tracking legislative activity for impact markers, monitoring engagement, and deploying field experiments that test what works in your own unique context.
Budget
Simple field experiments can be conducted on a relatively modest budget, while more complex evaluation techniques can be built into ongoing tracking activities. The scope of work for evaluation therefore ranges widely. Larger scopes reflect the frequency and number of evaluation activities, the depth of outcome indicators, and the overall effort required to manage data analysis. The starting point for any scope of work is thus determined by the activities it supports, such as engaging policymakers, building research capacity, or disseminating fact sheets.