
Article Feedback: New research and next steps

This feedback form engages readers to contribute to Wikipedia

How can we engage readers to contribute productively to Wikipedia?

Our recent work on Article Feedback v5 (AFTv5) provides new insights into that question. In this post, we’d like to share what we’ve learned by analyzing feedback and moderation activity — as well as give a quick update on our next steps for this project, which is being developed by the Wikimedia Foundation’s editor engagement team.

Article Feedback v5 aims to increase participation on Wikipedia (a strategic goal of the Wikimedia movement): this tool asks readers to suggest improvements to articles, then invites them to sign up and become editors. Another goal of this project is to help current editors improve article quality based on reader suggestions. (Learn more about Article Feedback.)

Last summer, we deployed this new tool on 10 percent of the English Wikipedia, and we have been testing it extensively to evaluate its impact on readers and editors. Here’s what we have found so far.

Slides from the AFTv5 report (2012-Q4)

Key findings

These highlights of our latest research on Article Feedback are based on feedback and moderation data collected from September 7 to November 7, 2012 (more details can be found in these slides):

Many readers use this feature

Readers are already posting a lot of feedback with this tool — an average of 4,100 posts per day from 2,800 daily unique readers, on just 10 percent of the English Wikipedia. At this rate, we project up to 900,000 feedback posts with comments per month once Article Feedback is deployed to 100 percent of the English Wikipedia in 2013. About 98 percent of this feedback comes from anonymous users who are not currently participating on Wikipedia. About 70 percent of the readers we surveyed liked this feedback tool after using it. Their responses suggest that the tool makes it easy to get involved and that they enjoy using it.
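For the curious, here is a minimal sketch of the arithmetic behind that projection. The share of posts that include a comment is our assumption, back-solved so the result lands near the published figure:

```python
# Back-of-the-envelope projection; not production code.
posts_per_day_at_10_percent = 4_100  # observed daily average (Sep-Nov 2012)
scale_to_full_deployment = 10        # from 10% to 100% of English Wikipedia
share_with_comments = 0.73           # assumed, back-solved from the ~900k figure
days_per_month = 30

monthly_posts_with_comments = (posts_per_day_at_10_percent
                               * scale_to_full_deployment
                               * share_with_comments
                               * days_per_month)
print(f"{monthly_posts_with_comments:,.0f} posts/month")  # ~898,000
```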

This tool is converting readers into new editors

Article Feedback appears effective in getting new users to contribute to Wikipedia. For example, 2.7 percent of readers who post feedback go on to create a new account after being invited to sign up. And 3 percent of these new users go on to edit articles within 24 hours of signing up. At this rate, we project several hundred thousand new registrations per year on the English Wikipedia — resulting in many new contributors, which we hope can help reverse the current editor decline.
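As a rough illustration, here is the funnel arithmetic behind that projection; scaling the 10 percent sample linearly to full deployment is an assumption on our part:

```python
# Hedged sketch of the reader-to-editor funnel, using the rates above.
daily_feedback_posters = 2_800 * 10  # unique daily posters at full deployment
signup_rate = 0.027                  # posters who go on to create an account
early_edit_rate = 0.03               # new accounts that edit within 24 hours

signups_per_year = daily_feedback_posters * signup_rate * 365
early_editors_per_year = signups_per_year * early_edit_rate
print(f"{signups_per_year:,.0f} sign-ups per year")    # ~276,000
print(f"{early_editors_per_year:,.0f} early editors")  # ~8,300
```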

Useful feedback is buried under a lot of noise

In our first feedback evaluation study last spring, we asked 20 Wikipedia editors to blindly assess 900 random feedback posts for usefulness. About 40 percent of the feedback was found useful by at least two evaluators. This finding is consistent with this moderation data, which shows more negative than positive evaluations by community moderators. We also found that most of the moderation takes place on high-traffic articles, which attract lower-quality feedback. (To get a sense of what comments look like on a recently created article, check out the feedback page for the Sandy Hook Elementary School shooting article.)
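To make the decision rule concrete, here is a toy sketch of the “useful to at least two evaluators” threshold; the data layout is hypothetical:

```python
# Each feedback post maps to the usefulness votes it received.
votes_by_post = {
    "post-001": [True, True, False],    # useful: two evaluators agree
    "post-002": [True, False, False],   # not useful: only one vote
    "post-003": [False, False, False],  # not useful
}

useful = [post for post, votes in votes_by_post.items() if sum(votes) >= 2]
print(len(useful) / len(votes_by_post))  # share of feedback found useful
```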

Most feedback never gets moderated

Editor moderation activity is very low compared to the volume of feedback posted every day. Less than 10 percent of all posts on a sample of the 100 most-visited articles are moderated by registered editors within a month. In fact, less than 10 percent of the feedback posted on any given day receives any kind of moderation within a month, whether by readers or registered editors. In the current version of the tool, many editors can’t easily find comments for articles they edit, an issue that could be addressed by making feedback more visible to editors on article pages.

More results can be found in these slides, which summarize our analysis of feedback and moderation data. Live moderation data can also be viewed on this dashboard.

Next steps

Mockup of new moderation tools and filters under consideration.

Based on these findings, we are now developing a final version of Article Feedback v5, which we plan to release in early 2013. Our goals for this release are to:

  • surface more good feedback
  • reduce the editor workload
  • improve software performance

Here are some of the key features we are working on:

Better feedback filters

We are improving our filtering tools to automatically surface more good feedback and remove irrelevant comments. For example, the new feedback page will only feature comments that have been marked as helpful — and feedback that has not yet been moderated will be listed in a separate filter. We are also creating more abuse filters to prevent users from posting inappropriate comments.
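As a rough sketch of that filtering logic (field and function names are illustrative, not the extension’s actual API):

```python
from dataclasses import dataclass

@dataclass
class FeedbackPost:
    comment: str
    marked_helpful: bool  # flagged as helpful by a moderator
    moderated: bool       # has received any moderation action

def default_filter(posts):
    """The new feedback page features only helpful-marked comments."""
    return [p for p in posts if p.marked_helpful]

def unmoderated_filter(posts):
    """Feedback awaiting review is listed under a separate filter."""
    return [p for p in posts if not p.moderated]
```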

Simpler moderation tools

To reduce the editor workload, we are simplifying the feedback moderation tools. These new tools will enable moderators to quickly sort feedback into different groups, so that editors can focus on useful suggestions for improvement without being distracted by comments that are not usable. We also plan to make useful feedback more visible to editors through a special link on article pages.

Improved performance

We are refactoring our code to make this tool more scalable, so it can support millions of comments with better database performance. This backend engineering work has taken longer than anticipated, partly because we are building a solution that other projects can reuse.

Once these features have been developed, we plan to test them on 10 percent of the English Wikipedia, then release them to 100 percent in the first quarter of 2013. We expect more projects to deploy Article Feedback after this full release. For example, the German Wikipedia has already started a pilot to evaluate this tool on their site, and a similar pilot is under discussion on the French Wikipedia. For now, we invite you to try out Article Feedback for yourself on this sample article.

We would like to thank all the Wikipedians who have helped us design and develop Article Feedback this year. We look forward to deploying it widely early next year, to encourage more participation from readers. We hope this engagement tool can help sign up new contributors, reverse the editor decline and provide new ways for users to improve Wikipedia together.

Happy holidays!

Fabrice Florin, Product Manager
Dario Taraborelli, Senior Research Analyst
Oliver Keyes, Community Liaison
Wikimedia Foundation’s Editor Engagement Team

Article feedback v5 starts wider deployment on Wikipedia

I am happy to announce that Wikimedia’s editor engagement team has started a wider deployment of Article Feedback version 5 on the English Wikipedia.

This new version of Article Feedback provides a new way for readers to contribute productively on Wikipedia. It engages them to make suggestions about articles they are reading — and invites editors to improve these articles based on this feedback. Our research also suggests that this new tool can help readers become editors over time.

We are currently testing this new tool on 3% of the English Wikipedia, and plan to gradually increase its reach to 10% by the end of July 2012. After final testing and debugging in the coming weeks, we expect to release this new version on all English Wikipedia articles in early fall — then to other projects in the following months. (Note that we will then retire the earlier Article Feedback version 4, which was based solely on user ratings, without comments.)

Based on our research, we now project a significant amount of feedback when the tool is fully deployed on the English Wikipedia. As a result, we believe that this new tool can enable a new form of participation for Wikipedia readers, most of whom do not currently edit the encyclopedia.

We invite you to try out Article Feedback now on one of the articles where it is enabled.

New feedback form, shown at the bottom of some Wikipedia articles.


Please let us know what you think of this new tool. We welcome your questions and suggestions in the comments below — or on the Article Feedback Talk page.

We would like to take this opportunity to thank all the Wikipedia community members who helped create Article Feedback. Over the past nine months, we worked closely with many experienced editors to design features that serve the needs of readers and editors alike. This deployment is an important milestone for us all, and we look forward to more collaborations on future editor engagement projects.

We hope you find this new feature useful. We can’t wait to see it used more widely on Wikipedia!

Fabrice Florin, Product Manager

Wikimedia Foundation’s Editor Engagement Team


P.S.: If you plan to attend Wikimania 2012 this week, we invite you to join this talk on Article Feedback.

Converting readers into editors: New results from Article Feedback v5

An invitation to “edit this page” is shown after users post feedback on Wikipedia (‘Call to Action 1’)

Since December 2011, the Wikimedia Foundation has been testing a new version of the Article Feedback Tool, a feature first introduced on the English Wikipedia in 2010. The goal of version 5 (AFTv5) is to engage Wikipedia readers to become more active contributors by inviting them to provide feedback on articles they read and encouraging them to become editors over time.

Early tests of AFTv5 helped us answer the question of which design of the tool produces a desirable balance between the volume and usefulness of the feedback collected. In this post, we report results from two additional experiments designed to answer the following questions:

  1. Does a prominent invitation to use the tool affect the usefulness of submitted feedback?
  2. How does an invitation to leave feedback affect the conversion of readers into editors?

Our findings suggest that a prominent invitation to post feedback converts a significant number of readers into editors. These new editors appear less productive than other first-time Wikipedians, but their feedback appears just as useful, as we show below. These findings suggest that article feedback can increase the number of new editors on Wikipedia and can also help existing editors improve the encyclopedia based on reader feedback.


Helping readers improve Wikipedia: First results from Article Feedback v5

Figure 1. One of the feedback forms tested in the AFTv5 experiments (Option 1).


The Wikimedia Foundation, in collaboration with editors of the English Wikipedia, is developing a tool to enable readers to contribute productively to building the encyclopedia. To that end, we started development of a new version of the Article Feedback Tool (known as AFTv5) in October 2011. The original version of the tool, which allows readers to rate articles based on a star system, launched in 2010. The new version invites readers to write comments that might help editors improve Wikipedia articles. We hope that this tool will contribute to the Wikimedia movement’s strategic goals of increasing participation and improving quality.

Testing new feedback forms

On December 22, 2011, we started testing three different designs for the AFTv5 feedback forms:

  • Option 1: Did you find what you were looking for? (shown above)
  • Option 2: Make a suggestion, give praise, report a problem or ask a question
  • Option 3: Rate this article

The purpose of this first experiment was to measure the type, usefulness and volume of feedback posted with these feedback forms. For example, does asking readers to describe what they were looking for (Option 1) produce more actionable feedback than asking them to make a suggestion (Option 2)?

We enabled AFTv5 on a small, randomly selected set (0.6%) of articles on the English Wikipedia, as well as on a second set of high-traffic or semi-protected articles. A feedback form, randomly selected from the three options above, was placed at the bottom of each page. The form was also accessible via a link docked at the bottom-right corner of the page. The resulting comments were then analyzed along a number of dimensions.
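In code terms, the setup looked roughly like the following sketch (a simplification; this is not the production sampling code):

```python
import random

FORMS = ["option-1", "option-2", "option-3"]

def pick_test_articles(all_articles, fraction=0.006, seed=2011):
    """Draw the small random sample of articles (0.6% of the wiki)."""
    rng = random.Random(seed)
    return rng.sample(all_articles, int(len(all_articles) * fraction))

def assign_feedback_forms(articles, seed=2011):
    """Attach one of the three feedback forms to each article at random."""
    rng = random.Random(seed)
    return {article: rng.choice(FORMS) for article in articles}
```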


Getting ready for when the freeze is done

When you look at the “sprint backlog” in Mingle (guest, guest), you may notice that even though we have been slowed down by the “slush” (the feature freeze ahead of the imminent MediaWiki release), we are not sitting on our hands. Documentation, testing, code review and outreach are on our agenda.

Because of the way we plan, it is apparent how much code review actually gets done. This sprint, we added a review of the ArticleFeedback extension for its internationalization and localization aspects. This is a logical step considering that, with 280+ languages, we are not developing for just one language. Our objective for this job is: “As a user I can use the functionality of the ArticleFeedbackv5 so that nothing looks odd in my language from an internationalization and localization perspective”. Reviews like this have been performed informally in the past by translatewiki.net staff. This review, however, will be done during Wikimedia hours and reported through Wikimedia channels.

One old open bug is about EasyTimeline. It was filed in 2005 and is finally getting the attention it deserves. The bug describes the lack of support for languages like Arabic, Hebrew and Farsi, which are written from right to left. The software depends on Ploticus, and for a long time we were waiting for a version of that software with right-to-left support. We are not waiting any longer, and you can read in our story 230 about the complexities involved.

You could say that implementing a translation memory for page translation is a bit more adventurous; it is, however, debatable whether that functionality is new: a translation memory has long been operational at translatewiki.net. It is also very much a feature that makes people more productive. Our team has always had the goal of making life easy and productive for our editors and translators.

The “grammar” functionality for JavaScript is part and parcel of the i18n tooling for our developers. It was not ready before the “slush”, and not having it available in the code makes our lives difficult. When you are building tests for “gender” and “plural”, it is natural to create them for “grammar” as well. In this sprint, “grammar” will be included in the code for all these good reasons.
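For readers unfamiliar with the feature: “grammar” applies language-specific inflection to a word such as a site name. Here is a much-simplified illustration in Python; the Finnish rules shown are toy examples, not MediaWiki’s actual rule set:

```python
# Toy grammatical-case rules; real languages need far richer logic.
GRAMMAR_RULES = {
    "fi": {
        "genitive": lambda word: word + "n",   # Wikipedia -> Wikipedian
        "elative": lambda word: word + "sta",  # Wikipedia -> Wikipediasta
    },
}

def grammar(lang, case, word):
    rule = GRAMMAR_RULES.get(lang, {}).get(case)
    return rule(word) if rule else word

print(grammar("fi", "genitive", "Wikipedia"))  # Wikipedian
```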

This is the first time that there is a story for outreach. We are reaching out to all the Wikipedia language communities to encourage each to form its own language support team. It will make a difference when all our language communities have been asked to provide their expertise. We have already found that many people show an interest, and issues do get raised as a result.

Thanks,
Gerard Meijssen
Internationalization / Localization outreach consultant


Expanded Use of Article Feedback Tool

Today on the English Wikipedia, we rolled out the Article Feedback Tool – previously featured on 3,000 articles – to a larger set of 100,000 articles. This initial expansion is intended to further assess both the tool’s value and its performance characteristics, with an eye to a full deployment on Wikipedia and potentially other projects.


The intent of the tool is two-fold:

  • to gain aggregate quality assessments of Wikimedia content by readers and editors;
  • to serve as an entry vector for other forms of engagement.

To assess its value in both categories, we’ve already undertaken a significant amount of qualitative and quantitative research. You can read an extensive summary of our work so far here.

The high-level summary, based on the data we’ve seen so far: we believe user ratings can be a valuable way to predict high- and low-quality content on Wikimedia projects, and we’re especially interested in engaging raters beyond the initial act of assessing an article. Through our trials to date, we’ve seen very good conversion rates on the calls to action that follow a rating, suggesting that this could be a powerful engagement tool as well.

Beyond continuing our own research and these engagement experiments, our goal is to regularly make available anonymized data from the tool, and to supply editors with a dashboard tool for surfacing trends in the rating data. We’re looking forward to sharing wider findings from the use of the tool soon.

Please use the talk page or comment below for feedback, questions and suggestions.

Erik Moeller, Deputy Director

Article Feedback Pilot: Next Version

On March 14, we launched v2.0 of the Article Feedback Tool. Version 2.0 represents a continuation of the work we started last September. To quickly recap, the tool was originally launched as part of the Public Policy Initiative. In November, the feature was added to about 50–60 articles on the English Wikipedia, in addition to the Public Policy articles. The purpose of adding the tool to these additional pages was to provide us with additional data to help understand the quality of the ratings themselves: do these ratings represent a reasonable measurement of article quality?

Since then, we’ve been evaluating the tool using both qualitative and quantitative research. We conducted user research on the Article Feedback Tool both to see how users actually used the tool and to better understand the motivations behind rating an article. Readers liked the feature’s interactivity, its ease of use, and the ability to easily provide feedback on an article. On the other hand, some of the labels (e.g., “neutral”) were difficult to understand. A detailed summary of the user research has been posted here.

We also did some quantitative research on the ratings data. Though the ratings do appear to show some correlation with changes in the content of an article, there is ample room for improvement (see the discussion of GFAJ-1). It also appears that articles of different lengths show different rating distributions. For example, there appears to be a correlation between the Well-Sourced and Completeness ratings and article length for articles under 50kb, but for articles over 50kb the correlation becomes far weaker (see Factors Affecting Ratings).
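Here is a sketch of that length-band check, assuming per-article mean ratings and sizes in a pandas DataFrame; the column names are illustrative:

```python
import pandas as pd

def correlation_by_length_band(df, rating_col):
    """Pearson correlation of a rating with article length, per size band."""
    bands = pd.cut(df["length_kb"], bins=[0, 50, float("inf")],
                   labels=["under 50kb", "50kb and over"])
    return df.groupby(bands, observed=True).apply(
        lambda g: g[rating_col].corr(g["length_kb"]))

# e.g. correlation_by_length_band(ratings, "well_sourced")
```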

Based in part on the results from the first version, v2.0 of this feature was designed with two main goals in mind.

  • First, we wanted to see if we could improve the correlation between ratings and changes in article quality by segmenting ratings based on the rater’s knowledge of a topic. We introduced a question that asks users whether they are “highly knowledgeable” about the topic. The answers to this question will enable us to compare ratings from users who self-identify as highly knowledgeable with ratings from those who don’t.
  • Second, we wanted to see if rating an article could lead to further participation: does rating an article provide an easy way to contribute, leading to additional participation like editing? We wanted to test this hypothesis in light of the recent participation data. We don’t know whether this will actually be the case, but we wanted to get some data. In v2.0, a mechanism shows users a message (e.g., “Did you know you can edit this article?”) after they submit a rating. We will measure how well these messages perform. (These messages are dismissible by clicking a “Maybe later” link.)

We also made some UI changes based on feedback from the user study. For example, “Neutral” was changed to “Objective” (as were some other labels), and the submit button was made more visually obvious. A number of other improvements can be found on the design page.

Finally, in an effort to get a wider variety of articles to research, we increased the number of articles carrying the tool. We knew from our early analysis that articles in different length bands received different rating distributions, so we created length buckets (e.g., 25–50kb) and selected a random set of articles within each bucket. User:Kaldari wrote a bot that takes the list of articles and places the tool on each article in the list. As of March 24, the tool is active on approximately 3,000 articles. We may expand this list if we can do so without impacting the performance of the site.
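A hypothetical sketch of that sampling procedure (bucket boundaries and per-bucket sample sizes are illustrative):

```python
import random

LENGTH_BUCKETS_KB = [(0, 25), (25, 50), (50, 100), (100, float("inf"))]

def stratified_sample(articles, per_bucket=750, seed=14):
    """articles: iterable of (title, size_kb) pairs; returns sampled titles."""
    rng = random.Random(seed)
    chosen = []
    for low, high in LENGTH_BUCKETS_KB:
        bucket = [title for title, kb in articles if low <= kb < high]
        chosen.extend(rng.sample(bucket, min(per_bucket, len(bucket))))
    return chosen
```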

We’ll be publishing our analysis of v2.0 in the coming weeks. In the meantime, please let us know what you think on the workgroup page. Or better yet, join the workgroup to help develop this feature!

Article feedback pilot goes live

As recently announced on the tech blog and in the Signpost, we’re launching an experimental new tool today to capture article feedback from readers as part of the Public Policy Initiative. We’re also inviting the user community to help determine its future by joining a workgroup tasked with evaluating it.

The “Article Feedback Tool” allows any reader to quickly and easily assess the sourcing, completeness, neutrality, and readability of a Wikipedia article on a five-point scale. It will be one of several tools used by the Public Policy Initiative to assess the quality of articles. We also hope it will be a way to increase reader engagement by seeking feedback from them on how they view the article, and where it needs improvement.

The tool is currently enabled on about 400 articles related to US public policy. You can see it in action at the bottom of articles such as United States Constitution, Don’t ask, don’t tell or Brown v. Board of Education.

Another goal of this pilot is to try to find a way to collaborate with the community to build tools and features. As the main users of the software, Wikimedians are in a unique position to evaluate how a feature performs and what its strengths and limitations are. The Article Feedback Tool is still very much in a prototype state; we’re hoping the user community can help us determine whether resources should be allocated to improve it (and if so, how), or whether it doesn’t meet users’ needs and should be shelved or completely rethought.

More information about the tool is available on our Questions & Answers page.

If you want to try the tool to assess an article, pick a subject you’re familiar with from the full list and rate it! If you’d like to participate in the evaluation of the tool itself and what becomes of it, please join the workgroup. If you’re interested in article assessment in general, please also join the Public Policy Initiative’s Assessment Team.

Thank you,

Guillaume Paumier,
on behalf of the Features Engineering team

Article Feedback Pilot: Edit this Feature!

As recently announced on this blog and in the Signpost, we’re planning to roll out a new experimental tool to capture article feedback from readers, as part of the Public Policy Initiative. It will be an opportunity for the user community to directly participate in the assessment and development of the feature if the test proves successful.

The “Article Feedback Tool” will allow any reader of an article to quickly and easily assess the sourcing, completeness, neutrality, and readability of a Wikipedia article on a five-point scale. In addition to being a way to measure article quality for the Public Policy Initiative, we hope the Article Feedback Tool could be a way to increase reader engagement by seeking feedback from readers on how they view the article. The tool should also give editors an easy way to see where an article needs improvement.

Improving article quality has long been a major topic for the Wikimedia community. However, scalable and reliable tools to address this issue are few and far between. The Public Policy Initiative provides a useful framework to run a pilot for quality assessment of articles. The goal of the project is to improve the quality of public policy-related articles. Comparing standard community self-assessment processes, opinions sought from external experts, and article feedback from readers (with the test tool) will help provide the community with a better understanding of the strengths and limits of each system.

We hope that the Public Policy project will provide us with feedback that will help in the future development of the feature. From this perspective, the goal of this pilot is to provide the community with a “draft” version of the tool. We want to make deeper efforts to include the community of users in helping build new features, so we hope to assess this “draft” together and “edit it” for improvements. If the test tool is considered useful, the Wikimedia Foundation will allocate resources to improve it with your help and build a stable feature from it. In general, the pilot of this tool also reflects a shift in the Wikimedia Foundation’s development processes toward more systematic and iterative experimentation and trials with new technology, in collaboration with the community and other stakeholders. In the future, we plan to formalize this approach with an official “Wikimedia Labs” entity.

We’re still trying to figure out how best to accomplish this collaboration between the makers and users of this tool. For now, we’re considering a “task force” (similar to those used in the Strategy project) composed of users interested in assessing the feature’s strengths and weaknesses, working together with developers and designers. We’re open to other suggestions.

The experimental feature is planned for deployment on September 22 on a handful of pages with limited visibility. We will give you another update at launch. In the meantime, you’re warmly invited to look at the preliminary design and join the workgroup to participate in the evaluation of the tool.

Thanks!

– Alolita Sharma
Features Engineering Program Manager, Wikimedia Foundation