Online crowdsourcing contests: Good timing brings better contestants

Pei-yu Chen, professor of information systems, has found that contest duration, the amount of time you give contestants to submit their offering, affects the number and quality of the contestants you attract on crowdsourcing contest platforms.

By Betsy Loeff

There’s no doubt about it: Crowdsourcing has already touched your life. If you’ve looked up a definition in Wikipedia, you’ve tapped crowdsourced knowledge. If you’ve eaten Planters peanuts, you’ve snacked on food with a crowdsourced logo. It was designed by a 14-year-old boy who won a contest in 1919.

Today, task-oriented online contest platforms link organizations with a global supply of people like the boy who saw “Mr. Peanut” with a walking stick, people willing to draw a logo, share an idea, or pitch out a solution with no promise of pay. Through crowdsourcing contest platforms, people vie for the chance to win your job. Given that, you’ll want to make the pay — the winner’s prize — in your contest as high as possible to attract the best contestants, right?

Well, not necessarily. While the prize matters, so does the contest timing. Pei-yu Chen, professor of information systems, has found that contest duration, the amount of time you give contestants to submit their offering, impacts the number and quality of the contestants you attract on crowdsourcing contest platforms.

Pieces of work

Crowdsourcing, a term that plays on the word “outsourcing,” refers to the process of getting work done by a crowd of people, usually online. Some of the more famous crowdsourcing contests are big-ticket items. Netflix, for instance, paid $1 million to the developer team that won its contest seeking an algorithm to increase the accuracy of the company’s recommendation engine.

Chen examined contests with much simpler work and smaller prizes in the recent research she conducted with three colleagues. The contest platform she focused on is designed to help employers with limited resources get quick help without the time, expense, and obligation of hiring new staff. Such platforms host competitions for less complex projects that have short durations, often lasting only a few days or weeks.

“An organization can procure a logo via a contest at DesignCrowd for a couple hundred dollars or get a webpage/app design done by running a contest on 99Design for as low as $599,” Chen and her research colleagues noted in a write-up of their study. Likewise, they pointed to sites that can get companies machine learning algorithms or test a software system for a few thousand dollars.

Organizations that use these contest sites aren’t looking to leverage the wisdom of the crowd. They’re looking for the best solution for their particular need.

“These platforms reduce costs for the business community,” Chen says. “For example, if you wanted a simple logo or website design and had to contract with a company locally, the price is usually a lot higher than you can get it online because online, the labor comes from all over the world.”

Along with finding talent in India, China, or some other country where wages are lower than they are in the U.S., Chen says you may also identify talent in the U.S. without paying a premium, because those workers may place more value on the opportunity to practice their skills or the freedom to choose their own projects. Additionally, contest platforms help reduce lead time for getting projects done, often cutting timelines from months to weeks.

In addition, you only pay for the solution you pick, the work you like. “That significantly reduces risk,” Chen adds, noting that if you hire a firm in town, you’ll probably still pay for the work performed even if you’re not completely satisfied with it.

Luck of the draw

Whether platform operators know it or not, online crowdsourcing contests – and most scholars who studied them before Chen – operate from assumptions found in a statistical construct called “extreme value theory.” Chen explains it this way: Suppose you’re playing a game in which someone has put little pieces of paper numbered one through 100 into a box, and whoever draws out the highest number wins the game. “The more times you draw from the box, the more likely you are to get a higher number,” she says.

That’s the idea behind online contest platforms: The more people you have sending you solutions – whether they’re logo design ideas, algorithms, or some other thing you need – the more likely you’ll get a great product, something of extreme value.
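The box-drawing intuition is easy to check with a quick simulation. The sketch below (my own illustration, not from the study) estimates the average highest number drawn when pulling n slips from a box numbered 1 through 100, drawing with replacement so each pull is an independent contestant:

```python
import random

def best_of_n_draws(n, trials=10_000, box_size=100):
    """Estimate the expected maximum when drawing n numbers
    (uniformly, with replacement) from 1..box_size."""
    total = 0
    for _ in range(trials):
        total += max(random.randint(1, box_size) for _ in range(n))
    return total / trials

# More draws push the expected maximum higher,
# with diminishing returns as n grows.
for n in (1, 5, 20, 100):
    print(n, round(best_of_n_draws(n), 1))
```

A single draw averages about 50; twenty draws push the expected best into the mid-90s. That diminishing-returns curve is exactly why prior researchers equated more contestants with better outcomes.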

Other researchers who looked at these contests before Chen, therefore, considered the number of contestants as a proxy for contest performance. The more people who entered the contest, the more successful it was.

Chen questioned that assumption: How successful would a contest be if it only attracted mediocre contestants, she wondered. And did something beyond the prize money – which most prior research had evaluated – impact the quality of contestants who participated? Chen and her team thought the duration of the contest might affect contestant quality, too, and it did.

To evaluate this, Chen and her colleagues collected data from TaskCN, a large Chinese crowdsourcing platform. They chose this site for several reasons. First, the researchers had complete data covering contests from 2008 and 2009, a period when the site had more than 2.8 million registered contestants and some 20,000 contests. These contests covered a wide range of tasks requiring different skill levels and, because the site operates only in China, the researchers didn’t need to worry about factors like different languages or cultural backgrounds affecting the study.

To evaluate the quality of the contestants themselves, the researchers used the amount of prize money a contestant had already won as a measure of that contestant’s expertise and skill level.

Finding Mr. or Ms. Right

Another thing this research team looked at that hadn’t been evaluated before was competition for contestants. “When people go into the platform, there may be 10 or 20 contests to choose from,” Chen explains. Previous researchers looked at contests as stand-alone events and mostly focused on the prize structure as motivation for contestants to join.

“When we saw that the marketplace has several contests to choose from, we thought contestants would look at the prize, the duration, how many other people are in the contest – all of this information is visible,” Chen says.

Chen’s statistical evaluation of the data showed that higher prizes do attract more contestants, both high- and low-quality ones, but shorter contest durations attract, on average, higher-quality contestants. She says this is because higher-quality contestants don’t need as much time to finish tasks as lower-quality ones, and a longer contest means a longer wait for a paycheck. Some contests make solutions visible, so there also is the risk that people who enter a contest later will copy the ideas of contestants who submitted early on, she adds. This, too, might discourage the faster, more competent contestants.

So, how should companies run contests that attract high-quality contestants on these platforms designed for small- to medium-sized projects? A good prize should always be your priority, Chen says. If you come in lower than competing contests, it will hurt your response.

Next, evaluate how long a skilled worker would need to complete the task, and limit the contest duration to one that meets the needs of experts, not neophytes. You’re looking for quality, not quantity, because the more solutions you get, the more time you’ll spend reviewing submissions. “When you have too many contestants offering solutions, there is a huge evaluation cost,” Chen says.

For simple logos, she thinks a couple of weeks should be a suitable contest duration. Simple coding, such as the task of building a web crawler to capture online data, might take only a week or two, while a sophisticated website might need a month-long contest duration.

“The most important thing is to schedule the contest so that an expert would be able to finish the task within the timeframe,” she says.

Very likely, if you lengthen the contest, you will get more solutions, but they will be of lower quality.
