Across the globe, Wikimedia organizations and volunteers are engaging in online and offline activities to get more editors to contribute to Wikimedia projects. There are expansive efforts to attract new editors and to mobilize existing editors who can contribute diverse and high-quality content. With so much activity and energy, it is important to take a deep breath and reflect:
- What are the programs expected to achieve (i.e., what are the program goals)?
- What does it mean for a program to have “impact”?
- How much “impact” equals success?
- How might our programs achieve the most impact?
These are the big questions the Program Evaluation members of the Learning and Evaluation team in the WMF Grantmaking department have begun to explore along with the community. This past month, we completed a beta version of evaluation reports that begins to put systematic numbers behind a handful of popular programs.
The picture is clear: Wikimedia volunteers do incredible work to create free knowledge and to promote the free knowledge movement. But this picture is incomplete without the data to help tell the story. Putting numbers behind our stories and activities helps the community and the public better understand what is actually happening on the ground and how our movement programs are making an impact. The evaluation reports measure programs systematically against shared goals, helping us see which programs drive impact toward various movement goals. From here, we can reflect on what existing programs are doing and what remains to be done in our strategies to nurture and grow a community of editors and advocates around free knowledge.
A grand total of 119 implementations of seven programs were analyzed from over 30 countries!
For the first round of reports, data were reviewed from 119 implementations of seven popular Wikimedia programs: edit-a-thons, editing workshops, on-wiki writing contests, the Wikipedia Education Program, GLAM content partnerships, Wiki Loves Monuments, and other photo initiatives. The data represented more than 60 program leaders (individual volunteers or organizations) and program implementations in over 30 countries. These reports provide a basic sketch and a pilot of high-level analysis of how these programs are influencing the movement. They also paint a picture of what these programs aim to achieve and help surface the gaps in data and metrics. Here are just a few highlights:
Edit-a-thons are very popular, and each edit-a-thon produces an average of 16 pages of text.
Editing workshops typically aim to teach the public how to edit Wikimedia projects in order to increase the number of new editors; however, editor retention is not yet evident in the small number of workshops reported.
GLAM partnerships generate a large quantity of media via content release partnerships. Most GLAM program leaders believe their current partnerships will continue, and just under half are confident that new partnerships will develop as a result of them.
On-wiki writing contests engage experienced editors; the average contest creates or improves 131 articles, producing 28 good articles and 10 featured articles.
Wiki Loves Monuments and other photo contests (like Wiki Loves Earth) produce an average of about 5,600 and 2,000 photos, respectively. While photo use is relatively high, most of these events do not yet move photos into the quality rating process.
The Wikipedia Education Program focuses on increasing the quantity and quality of content by retaining instructors rather than retaining students. On average, each student participant in the Wikipedia Education Program produces about a quarter page of content each week.
So, what’s next?
- Examining additional programs! In FY 2014/2015, the goal is to expand the data related to these seven programs and to examine three additional programs: Hackathons, Conferences, and Wikimedian-in-Residence. Through these reports, the evaluation portal, and other pathways, we will continue conversations with the global community to work toward a shared view of program “impact” throughout the movement.
- Help us improve the reports! If you are running a Wikimedia program, start tracking it using the Reporting and Tracking toolkit. You will not only learn a lot about your own programs; by sharing your data with us, you will also enable stronger analysis of popular Wikimedia programs, so we can learn from one another and improve our programs.
- Have you recently implemented a Wikimedia program? Tell us about your program or publish any tips you may have to share in the Learning Pattern Library!
- Questions? Comments? Reach out to us in the comments below or at firstname.lastname@example.org. You can also find us on the Evaluation Portal!
Edward Galvez, Program Evaluation Associate