Data and Evaluation: Sifting Through the Static and the Sound (ECCP Mini-Grant Highlight, Part 1 of 2)
by Tim Walter, Arapahoe County Early Childhood Council
Big Data, Data Analytics, Data-Driven Analysis – do these “buzzwords” sound familiar? We have entered the age of “Data” (…and whatever else you’d like to attach to the beginning or end of the word…), and there is no going back. Data can be overwhelming – it is hard to determine what is important and what is just plain confusing. But data is not all bad; in fact, it can be what drives and guides program design, implementation, and evaluation efforts.
Program Evaluation for non-profit agencies and organizations is possibly more important now than ever before. Every year, non-profit agencies and organizations prepare to write and apply for new and renewing grants alike – but what makes your non-profit’s work so special? Can you show the good work your organization does (and whether it is producing effective results)?
You might be asking: Why are data and evaluation of my program important, and what do they have to do with my work?
“Program Evaluation” and data analysis will continue to be the standard by which organizations are judged. For many years, science-driven professions (Engineering, Accounting, Medicine) have been required to quantify their results and show accountability when presenting them. Only more recently have the social sciences (Public Health and Social Work) been asked to do the same. It is now, more than ever, necessary for organizations to “Show Their Good Work,” as grant funders want to see accountability when awarding dollars to organizations.
The ECCP Mini-Grant supporting the Arapahoe County Early Childhood Council kicked off an inter-agency re-assessment of how we tracked our program outcomes in past years, and we began to ask what “truly matters” for tracking as we move into 2017-2018 and beyond. Along the way, we have developed Logic Model templates (modeled after Results Based Accountability theory) to identify program outcome goals and database design needs. Our ultimate goal is to create a data tracking system capable of producing images and visual reports that will show our good work. This information will have a variety of other uses as well – and the process of designing a robust evaluation methodology turned out to be far more manageable than we had previously thought.
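For readers curious what such a template might look like in practice, here is a minimal sketch (in Python, purely illustrative – the program name, field names, and sample measures are assumptions for demonstration, not the Council’s actual template) of a Results Based Accountability-style logic model captured as structured data, with the three core RBA questions driving the outcome section:

```python
# Illustrative logic model template inspired by Results Based
# Accountability (RBA). All names and values below are hypothetical
# examples, not the Arapahoe County Council's actual data.

logic_model = {
    "program": "Example Family Outreach Program",  # hypothetical program
    "inputs": ["staff time", "mini-grant funding"],
    "activities": ["parent workshops", "home visits"],
    "outputs": ["# of workshops held", "# of families served"],
    "outcomes": {
        # RBA's three core questions: How much did we do?
        # How well did we do it? Is anyone better off?
        "how_much": "120 families served per year",
        "how_well": "85% of participants complete the workshop series",
        "better_off": "parents report increased confidence",
    },
}

def summarize(model):
    """Return a short text summary of the outcome measures,
    suitable for pasting into a grant report."""
    lines = [f"Program: {model['program']}"]
    for question, measure in model["outcomes"].items():
        lines.append(f"  {question.replace('_', ' ').title()}: {measure}")
    return "\n".join(lines)

print(summarize(logic_model))
```

A structure like this is one possible starting point for the database design mentioned above: each list or outcome field maps naturally to a table column, which is what makes later reporting and visualization straightforward.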
Program Evaluation may seem daunting – but my hope is to share a few easy-to-use ideas, templates, and design tips that will help others bring to life the good work being done.
Read more in part 2 of this blog post, coming later this month!