Posts Tagged ‘program evaluation’

Digging for Data: How to Research Beyond Wikimetrics

The next virtual meet-up will highlight research tools. Join us!

For Learning & Evaluation, Wikimetrics is a powerful tool for pulling data about cohorts of wiki users, such as their edit counts, pages created, and bytes added or removed. However, you may still have a variety of other questions, for instance:

How many members of WikiProject Medicine have edited a medicine-related article in the past three months?
How many new editors have played The Wikipedia Adventure?
What are the most-viewed and most-edited articles about Women Scientists?

Questions like these, and many others regarding the content of Wikimedia projects and the activities of editors and readers, can be answered using tools developed by Wikimedians all over the world. These tools draw on publicly available data through databases and Application Programming Interfaces (APIs), and they are maintained by volunteers and staff within our movement.
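As a small taste of what that direct access looks like, here is a minimal sketch, in Python with the requests library, of how one might rank a handful of articles by recent edit activity using the MediaWiki Action API. The article shortlist and the date are hypothetical, and continuation handling is omitted, so counts cap at the API's per-request limit of 500 revisions.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"  # MediaWiki Action API

def edits_since(title, since="2014-04-01T00:00:00Z"):
    """Count revisions to one article since a given timestamp.

    Continuation is omitted for brevity, so the count caps at 500;
    a full tool would follow the API's 'continue' parameter.
    """
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp",
        "rvend": since,      # revisions list newest-first, so 'rvend' is the older bound
        "rvlimit": "max",
        "format": "json",
    }
    pages = requests.get(API, params=params).json()["query"]["pages"]
    page = next(iter(pages.values()))
    return len(page.get("revisions", []))

# Hypothetical shortlist; a real tool might pull titles from a category.
titles = ["Marie Curie", "Ada Lovelace", "Rosalind Franklin"]
counts = {t: edits_since(t) for t in titles}
for title, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{n:4d} edits  {title}")
```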

On July 16, Jonathan Morgan, research strategist for the Learning and Evaluation team and wiki-research veteran, will begin a three-part series, Beyond Wikimetrics, exploring some of the different routes to accessing Wikimedia data. Building on several recent workshops, including the Wiki Research Hackathon and a series of Community Data Science Workshops developed at the University of Washington, Jonathan will show participants how to expand their wiki-research capabilities by accessing data directly through these tools.


Asking the right questions: Resources for your survey strategies

Wikimedia program leaders can use surveys for many reasons:

I’m curious what people learned from my class on editing Wikipedia. —Let’s write a survey!

Wiki Loves Monuments went very well. I wonder what motivated everyone to participate. —Let’s send out a questionnaire!

I wonder which workshops were particularly useful for the conference attendees. —Let’s create a feedback form!

Surveys are an excellent strategy for measuring a group’s interests, behaviors, and learning, and for gathering other feedback. They are windows into the mind of the movement, giving us insight we would not otherwise have. Because these insights come from questionnaires, asking questions the right way becomes very important.

Writing a good questionnaire takes time. Drafting the questions and asking colleagues or fellow volunteers for feedback are important steps. At each step, we might ask ourselves: Are we asking the right questions for our survey goals? Are we asking enough questions to give us the full picture? Are we asking so many questions that respondents might give up before finishing the questionnaire?

A few months back, the Program Evaluation & Design team created a learning module called Designing Effective Questions, which explains the theory behind how surveys work and gives examples of how to improve the quality of questions. In addition to the learning module, we are pleased to announce upcoming events and resources for anyone interested in writing surveys to measure their program outcomes.


A Collaborative Definition of Impact: Building Metrics Together

Voting wall at metrics brainstorming session, Berlin 2014.

What do metrics not tell us?

As part of the Wikimedia Conference in Berlin, on Thursday, April 10, members of the Learning and Evaluation team in the WMF Grantmaking department held a brainstorming session on metrics with chapter representatives from around the world. The aim of the session was to start a conversation about what the evaluation metrics piloted in the (beta) Evaluation Reports do and do not tell us about the impact of our current programs.

Sharing evaluation information across the movement helps program leaders all over the world benefit from each other’s know-how and strategies for program design. Evaluation metrics are important tools for making decisions such as how much time and how many resources to invest. Every program has at least one purpose or goal behind it, and having a systematic way to measure progress toward those goals helps program leaders better tell the story of their programs: what worked, what didn’t, and why.

During the brainstorming session, we worked in two groups, one focused on programs based on image uploads and the other on text-centered programs, to start answering three big questions:

  • What outcomes and story do the pilot metrics bring forward?
  • Where are there gaps in the story, or what outcomes do the pilot metrics not measure?
  • How else might we measure the outcomes that are not yet included in the story?


Beginning to Understand What Works: Measuring the Impact of Wikimedia Programs

Radio Edit-a-thon in Argentina, on April 5, 2014.

Across the globe, Wikimedia organizations and volunteers are engaging in online and offline activities to get more editors to contribute to Wikimedia projects. There are expansive efforts to attract new editors and to mobilize existing editors who can contribute diverse and high-quality content. With so much activity and energy, it is important to take a deep breath and reflect:

  • What are the programs expected to achieve (i.e., what are the program goals)?
  • What does it mean for a program to have “impact”?
  • How much “impact” equals success?
  • How might our programs achieve the most impact?

These are the big questions the Program Evaluation members of the Learning and Evaluation team in the WMF Grantmaking department have begun to explore along with the community. This past month, we completed a beta version of evaluation reports that has begun to put systematic numbers behind a handful of popular programs.

The picture is clear: Wikimedia volunteers do incredible work to create free knowledge and to promote the free knowledge movement. But this picture is incomplete without the data to help tell the story. Putting numbers behind our stories and activities helps the community and the public better understand what is actually happening on the ground and how our movement programs are making an impact. The evaluation reports measure programs systematically against shared goals, helping us see which programs drive impact toward various movement goals. From here, we can reflect on what existing programs are doing and what remains to be done in our strategies to nurture and grow a community of editors and advocates around free knowledge.

A grand total of 119 implementations of 7 programs were analyzed from over 30 countries!

For the first round of reports, data were reviewed from 119 implementations of seven popular Wikimedia programs: edit-a-thons, editing workshops, on-wiki writing contests, the Wikipedia Education Program, GLAM content partnerships, Wiki Loves Monuments, and other photo initiatives. The data represented more than 60 program leaders (individual volunteers or organizations) and program implementations in over 30 countries. These reports provide a basic sketch and a pilot high-level analysis of how these programs are influencing the movement. They also paint a picture of what these programs aim to achieve and help surface the gaps in data and metrics. Here are just a few highlights:

  • Edit-a-thons seem to be very popular; each edit-a-thon produces an average of 16 pages of text.
  • Editing workshops typically aim to teach the public how to edit Wikimedia projects in order to increase the number of new editors; however, editor retention is not yet evident in the small number of workshops reported.
  • GLAM partnerships generate a large quantity of media via content-release partnerships. Most GLAM program leaders believe their partnerships will continue, and just under half are confident that new partnerships will develop as a result of their current ones.
  • On-wiki writing contests engage experienced editors; the average contest creates or improves 131 articles, producing 28 good articles and 10 featured articles.
  • Wiki Loves Monuments and other photo contests (like Wiki Loves Earth) produce an average of about 5,600 and 2,000 photos, respectively. While photo use is relatively high, most of these events do not yet move photos into the quality-rating process.
  • The Wikipedia Education Program focuses on increasing the quantity and quality of content by retaining instructors rather than students. On average, each student participant produces about a quarter page of content per week.

So, what’s next?

  • Examining additional programs! In FY 2014/2015, the goal is to expand the data related to these seven programs and to examine three additional programs: Hackathons, Conferences, and Wikimedian-in-Residence. Through these reports, the evaluation portal, and other pathways, we will continue conversations with the global community to work toward a shared view of program “impact” throughout the movement.

  • Help us improve the reports! If you are running a Wikimedia program, start tracking it using the Reporting and Tracking toolkit (see the sketch below). You will not only learn a lot about your own programs; by sharing your data with us, you will also enable stronger analyses of popular Wikimedia programs, so we can learn from one another and improve them.
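As a hint of what that shared analysis can look like, here is a minimal sketch of how per-implementation numbers might be rolled up into the kind of program-level averages quoted above. The CSV file and its column names are hypothetical stand-ins for whatever your tracking spreadsheet exports, not the toolkit's actual schema.

```python
import csv
from collections import defaultdict
from statistics import mean

# Hypothetical export of a program-tracking spreadsheet; the column
# names ('program_type', 'pages_created') are illustrative only.
per_program = defaultdict(list)
with open("program_tracking.csv", newline="") as f:
    for row in csv.DictReader(f):
        per_program[row["program_type"]].append(float(row["pages_created"]))

for program, values in sorted(per_program.items()):
    print(f"{program}: {mean(values):.1f} pages per implementation (n={len(values)})")
```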

Have you recently implemented a Wikimedia program? Tell us about your program or publish any tips you may have to share in the Learning Pattern Library!

Questions? Comments? Reach out to us in the comments below or at eval@wikimedia.org. You can also find us on the Evaluation Portal!

Edward Galvez, Program Evaluation Associate

Survey shows interest in evaluation in Wikimedia movement, with room to grow

The Wikimedia Foundation’s Program Evaluation & Design team recently completed a survey about the evaluation of organized activities within the Wikimedia community. Program evaluation lets the Wikimedia community see whether the programs and projects it runs, often intended to inspire and engage people to participate in the Wikimedia movement, actually work. It is important to find out whether the programs we spend so many hours of work and so much energy on, and may invest money in, can be made more efficient, more effective, and more impactful. Program evaluation allows us to do that, and the Program Evaluation & Design team is here to support the community along the way.

The survey was completed in August, after being sent to over 100 program leaders around the world. Its goal was to get a high-level view of how program leaders within the Wikimedia movement have been evaluating programs such as edit-a-thons, workshops, the Wikipedia Education Program, on-wiki contests, Wiki Loves Monuments, WikiExpeditions, other “Wiki Loves” events, and GLAM programs. We wanted to know what type of data was being gathered by those planning and executing such programs across the movement. The results show that people who run programs track a variety of different data points, which is encouraging. We know how busy volunteers and chapter/affiliate staff are, so it’s wonderful to see them build evaluation into their often already overwhelming workflows. We’re excited to share some results with you and to explain our next steps.

Evaluation Capacity Survey

We had a great response rate – 69 of the 114 invited program leaders completed the survey! Respondents represented 32 Wikimedia chapters, three affiliated clubs and organizations, and eight individual community members. Thank you to everyone who responded! Some of the highlights from the survey include:

  • Half of the respondents reported having received grants from the Wikimedia Foundation.
  • Edit-a-thons and workshops, photo upload competitions, and the Wikipedia Education Program were the kinds of programs most frequently organized in the movement in the past year.


Drafting a strategy plan with the community

This post was authored by User:Barcelona from Amical Wikimedia.

Amical member classifying community proposals during Amical’s 2014-2018 strategy plan taskforce meeting

Four of the five task force members at the meeting

At Amical Wikimedia we have started the process of thinking about, and writing down, our 2014–2018 strategic plan, as well as our 2013–2014 annual activity plan (our activities follow the academic calendar rather than the calendar year). We are trying to apply both the principles and values of our association (shared with the global Wikimedia movement) and the lessons learned at the Program Evaluation June workshop in Budapest, which our GLAM ambassador Kippelboy attended. The key words defining this whole process could be collaboration, efficiency and self-reflection.

First step: including all the voices

We made a collaborative effort to include all the voices within the association and the community of editors in the main plan. First, we opened a public proposals page to collect ideas from any interested person about the long-term evolution of Amical Wikimedia. Second, we hosted an online chat meeting via IRC to gather our members’ thoughts on the plan’s goals, their questions about priorities, and their suggestions on which social sectors our activities should address first. Later, we held a few in-depth interviews with users who are especially committed to Amical.

Second step: task force

Then we created a special task force of five members with different profiles to begin the actual drafting of the plan. After several offline and online meetings and some preliminary working documents, we held an all-day session, hosted by the University of Girona, where the task force members met and applied the methodology learned in Budapest: working as a community and building evaluation strategies into our programme activities. We shared and evaluated our thoughts and proposals before starting to write the final document.

Third step: internal evaluation

Our strategy plan is now ready and is being shared on our internal wiki, so all Amical members can add, remove, change, discuss or challenge its assumptions. The final result, which will be published in September, will surely reflect the collaborative spirit of our community and all our reflections on what we should become.

Outcomes

This graph shows the relationship between our key words and paths

Although the process is not finished yet, we can present some partial results: Amical’s Strategy Schema and the main intended work tracks. You can see the intersection of goals and actions of our plan in this graphic (still under construction, as is the whole wiki world). On top are the five key words that summarize the association’s focus for the coming years:

  • Cohesion: We want to keep the community bonds and strong personal ties that make it possible to work together.
  • Discourse: We would like to spread the word about what we do and think, and to take part in the global debate.
  • Content: Obviously the central point is to help add content to Wikimedia projects; that should be the main goal for all chapters and associations.
  • Territory: We want to be active in all the Catalan-speaking countries.
  • Readers: We want to explain the world in Catalan and encourage people to read it.

Below are the three paths to attain these goals: Activities (programmed projects for the coming years), Internal (the sometimes invisible but crucial work of keeping the community alive and growing) and External (how we relate to other users, Wikimedia groups and the non-wiki world).

As for the Activity Plan, several initiatives were suggested by the community: continue the work with museums, extend our presence at universities and schools, promote Wikimedia sister projects, prepare a multilingual contest on Wikipedia focused on Catalan culture and history, increase the number of libraries we collaborate with, explore new possibilities to attract editors… We are really excited to see what comes next, because at Amical we surely have plenty of work to do!

User:Barcelona, Amical Wikimedia

Improving program performance: first evaluation workshop in Budapest

Participants from 15 countries attended the first Program Evaluation & Design Workshop

In the Wikimedia movement, there are many organized activities seeking to contribute to the Wikimedia vision and strategic goals. But how do you determine which of these programs work and which don’t? And how can you further improve the performance of programs? To tackle these difficult questions, 26 international participants came together in June 2013 for the first Program Evaluation & Design Workshop in Budapest, Hungary. The event was held by the Wikimedia Foundation in partnership with Wikimedia Magyarország, the local chapter.

With record-high temperatures in Budapest, participants kept cool in the heart of the city, engaging in an intensive two-day workshop that presented the basics of program evaluation. The workshop focused on creating a shared understanding of what program evaluation is and why it is important, and on providing attendees with basic skills and a logic-modeling tool for mapping out their programs, so they could begin incorporating program evaluation into their program work.

The workshop brought together 21 Wikimedians from 15 countries. The participants, all with a track record of doing program work, represented five different program types.

Topics of the workshop

Day one opened with a welcome by Frank Schulenburg, Senior Director of Programs at the Wikimedia Foundation, and a long-time Wikipedian. He gave a brief background on why Wikimedia is investing in Program Evaluation and what it is. Schulenburg stressed three points about the current evaluation initiative:

  • self-evaluation: program leaders evaluate their own programs
  • collaborative: we’re all in this together and we will learn together
  • focused on capacity building: our goal is to equip program leaders in the movement with the necessary skills to use program evaluation and design practices

Dr. Jaime Anstee, Program Evaluation & Design Specialist for the Wikimedia Foundation, then led the group through the basics of program evaluation: the different types of evaluation and the roles of everyone involved in it. She also stressed that the current evaluation initiative aims to be empowering and participatory while maintaining a focus on utilization. The morning ended with a visioning exercise on the positive and negative results the movement could experience with program evaluation, and with lightning talks by participants about the programs they have run.


Call for participants: Program Evaluation and Design workshop in Budapest

Over the next couple of years, the Wikimedia Foundation will be building capacity among program leaders around evaluation and program design. A better understanding of how to increase impact through better planning, execution and evaluation of programs and activities will help us move a step closer to achieving our mission of offering a free, high-quality encyclopedia to our readers around the world.

With this in mind, we are pleased to announce the first Program Evaluation and Design Workshop, on 22-23 June 2013 in Budapest, Hungary.

We have only 20 slots available for this workshop, and applications close on May 17. This two-day event will be followed by a pre-conference workshop at Wikimania 2013; ideally, applicants will commit to attending both events.

The first Program Evaluation & Design workshop will be held in the shadow of Buda Castle in Budapest, Hungary

Our long-term goals for the workshop are:

  • Participants will gain a basic shared understanding of program evaluation
  • Participants will work collaboratively to map and prioritize measurable outcomes, beginning with a focus on the most common programs and activities
  • Participants will gain increased fluency in the common language of evaluation (i.e. goals versus objectives, inputs and outputs versus outcomes and impact)
  • Participants will learn and practice how to extract and report data using the UserMetrics API
  • Participants will commit to working as a community of evaluation leaders who will implement evaluation strategies in their programs and activities and report back at the pre-conference workshop at Wikimania 2013
  • …and participants will have a lot of fun and enjoy networking with other program leaders!

We will publish a detailed agenda for the event in Budapest soon on Meta-Wiki.

During the workshop in Budapest, we will have only a limited amount of time. Therefore, we will focus on some of the more common programs and activities:

  • Wikipedia editing workshops, where participants learn how to edit or actively edit together (e.g. edit-a-thons, wiki parties, hands-on Wikipedia workshops)
  • Content donations through partnerships with galleries, libraries, archives and museums (GLAMs) and related organizations
  • Wiki Takes/Expeditions, where volunteers participate in day-long or weekend events to photograph site-specific content
  • Wiki Loves Monuments, which takes place in September
  • Education program and classroom editing, where volunteers support educators who have students editing Wikipedia in the classroom
  • Writing competitions, which generally take place online in the form of contests such as the WikiCup and other challenges, often engaging experienced editors to improve content

Contributors in the Wikimedia community who play an active role in planning and executing programs and activities like those described above are highly encouraged to apply. Your experience and knowledge will make this workshop a success!

Hotels, flights and other transportation costs will be the responsibility of your chapter; the Wikimedia Foundation will provide the venue, handouts, breakfasts, light lunches, and a dinner for all participants on Saturday. If you’re not affiliated with a chapter and cannot afford to attend the event, please email me after you apply – we have a small amount of money set aside for those cases.

Remember, applications are open until May 17. You can apply via this Google Form.

Thanks for your interest, and I look forward to a great group of participants!

Sarah Stierch, Program Evaluation and Design Community Coordinator, Wikimedia Foundation

Let’s start talking about program evaluation

Most Wikipedians I know – myself included – care deeply about our mission. We are highly passionate about our vision to provide free knowledge to every single person on the planet. And many of us invest an incredible amount of our free time into making this vision come true. Even though I have been editing Wikipedia since 2005, I’m still amazed when I look at the daily stream of Wikipedia’s recent changes. And I have a deep respect for the volunteers who invest their precious time and energy into never letting this stream of small and big improvements run dry.

For many years now, Wikipedians have not only worked on increasing the amount of free content available on the web and improving its quality. We have also launched a wide variety of programmatic activities that aim to strengthen free knowledge: raising public awareness of Wikipedia through exhibitions and presentations, recruiting new editors through Wikipedia workshops, and starting programs like “Wiki Loves Monuments”, the popular Wikimedia Commons photo competition.

We have not, however, been very strong at measuring the impact of those programmatic activities. “Measuring impact” in this case refers to quantifying the long-term effects those programmatic activities have on Wikipedia and its sister projects. In practice, this means, for example, analyzing how many of the people who attended a specific Wikipedia workshop actually turned into Wikipedians. How many of them embarked on editing articles as a result of the workshop and how many of them continued doing so on a regular basis?
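To make this concrete, here is a minimal sketch of how such a retention question might be checked against the public MediaWiki API. The attendee usernames and dates are hypothetical, and continuation handling is omitted, so per-user counts cap at the API's 500-edit limit per request.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"  # MediaWiki Action API

def edit_count(user, start, end):
    """Count a user's edits between two ISO timestamps."""
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": user,
        "ucstart": end,    # contributions list newest-first, so 'ucstart'
        "ucend": start,    # is the newer bound and 'ucend' the older one
        "uclimit": "max",
        "format": "json",
    }
    resp = requests.get(API, params=params).json()
    return len(resp["query"]["usercontribs"])

# Hypothetical workshop attendees and follow-up windows.
attendees = ["ExampleUserA", "ExampleUserB", "ExampleUserC"]
workshop_day = "2013-03-01T00:00:00Z"
one_month    = "2013-04-01T00:00:00Z"
three_months = "2013-06-01T00:00:00Z"

activated = [u for u in attendees if edit_count(u, workshop_day, one_month) > 0]
retained  = [u for u in activated if edit_count(u, one_month, three_months) > 0]
print(f"{len(activated)}/{len(attendees)} edited in the month after the workshop; "
      f"{len(retained)} of them were still editing two months later.")
```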

Here’s where program evaluation comes into play.

  • If you’re supporting programmatic work as a volunteer, you most likely want to know whether your activities are worth the effort. Whether you help run photo competitions or sign up as a speaker at workshops for Wikipedia beginners, you want to know whether you’re making a real difference with what you’re doing. Program evaluation will help you answer this question. By measuring the outcome of the programmatic activity you signed up for, you will know whether your photo competition was successful and whether the people who participated in your workshop really learned the skills they need to write great articles on Wikipedia. This knowledge will make your work as a volunteer more fulfilling.
  • If you’re helping to improve existing programs, you’re most likely eager to find out which changes will make your program more effective and efficient. Imagine you could achieve the same result with fewer volunteer hours being spent. And what if you could double the number of people who actually start contributing to Wikipedia after your workshop by making it more engaging and fun? Improving an existing program requires that you measure its effectiveness. Here’s where integrating evaluation into your program design can make a difference.
  • If you’re thinking about starting a new program, you will want to have some evidence that your new program is working. How else would you convince others to participate in your program? And how else would you convince a funder to provide you with a grant so you can eventually execute your program and grow it over time? Program evaluation will help you to make a strong case for your idea. And it will also prevent you from embarking on activities that have no impact.
  • If you’re serving on the board of a Wikimedia chapter or a thematic organization, you might want to know which kinds of programmatic activities produce the “biggest bang for the buck”. You might ask whether it makes more sense to put money and effort into in-person workshops or to spend resources on creating online training. How many hours of volunteer or staff time will you need in order to produce a specific result? Are the in-person workshops going to be more effective than the online training? Which of the two options will be more efficient? And which one is going to have the bigger long-term impact? In this sense, program evaluation can be a decision-making tool that helps you determine which programmatic activities to embark on.

Finally, with the Funds Dissemination Committee (FDC) in place since last year, there’s another reason why program evaluation will be more important than ever: after the first round of funding for 2012/2013, the FDC requested more information about program impact so that it has a better foundation for making recommendations on what to fund in the future. This means that, from now on, funding decisions will rely heavily on grantees’ ability to demonstrate the impact of their programmatic activities. Grantees who plan to apply for movement funds through the FDC process will therefore have to start thinking about program evaluation.

I’ve started a series of documents on Meta (“Program evaluation basics”) aimed at providing program leaders with an introduction to the main terms and concepts. Three documents are available so far.

I invite you to take a look at the documents and to share your thoughts with me. I will also be available for an IRC office hour on Thursday, March 21, at 17:00 UTC. Let’s start talking about program evaluation…

Frank Schulenburg
Senior Director of Programs, Wikimedia Foundation