Rethinking R&D: Running contests to find solutions
When search giant Google Inc. wanted ideas for making the world better, it conducted an online contest with a $10 million prize. When DVD-rental service Netflix Inc. wanted ways to improve its predictions of users’ movie ratings, it ran an online contest with a $1 million prize. Those big-money innovation contests made headlines, but every day thousands of smaller contests are conducted to help organizations come up with everything from new product names to website designs and software development.
The rise in “open innovation” contests has helped companies broaden their research and development while reducing their cost and risk of failure. Such contests easily reach large numbers of external problem solvers with a variety of backgrounds, potentially leading to faster, cheaper and better solutions. The contests also have piqued researcher Pei-yu Chen’s interest in how to make them work more effectively.
Chen, associate professor of information systems at the W. P. Carey School of Business, has co-authored three papers looking at how innovation seekers can better design contests, evaluate entries and encourage high-quality solutions. Her work also gives problem solvers clues for improving their chances of winning. She got interested in the topic of online markets for talent when she was a Ph.D. student, painfully doing her own coding to coax results from mounds of data.
Online markets such as eLance and Rent-a-coder were just starting, and students like Chen began using them to hire high-quality programmers for projects at reasonable prices. Now she sees organizations big and small using such markets, and has watched as the concept expanded into open innovation and contests. “Open-innovation contests, especially online, are all very new marketplaces that companies participate in,” Chen said. “So in order to design strategy, we need to have a better understanding of things that would affect the quality of the solutions.” The contests work like this: A “seeker” launches a contest, defining the project and the contest parameters.
Contestants, or “solvers,” can register and submit solutions at any time during the contest. Many seekers evaluate submissions as they come in and give feedback on at least some of them, allowing solvers to take their ideas back to the drawing board and submit improved versions. When the contest ends, the seeker picks a winner and awards the prize. Economists have long studied traditional contests: some argue that a big pool of contestants brings in higher-quality and more diverse ideas, while others argue that a big pool lowers each contestant’s odds of winning and so reduces effort.
Chen and colleagues believe that the feedback process common in online contests counteracts the negative effects. They say seekers should try to attract more contestants and give feedback to the most promising solutions, because seekers get the benefit of more diverse ideas and solvers can get feedback that improves their odds of winning.
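To make that lifecycle concrete, here is a minimal sketch in Python of the flow just described. It is purely illustrative: the class and method names (Contest, submit, give_feedback, pick_winner) are hypothetical and are not the API of TaskCN or any real platform.

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    solver: str
    content: str
    feedback: str | None = None  # seeker's optional feedback on this entry

@dataclass
class Contest:
    description: str    # project definition the seeker posts
    prize: float        # award for the winning solution
    duration_days: int  # how long the contest stays open
    submissions: list[Submission] = field(default_factory=list)

    def submit(self, solver: str, content: str) -> Submission:
        # Solvers may register and submit at any time while the contest runs.
        entry = Submission(solver, content)
        self.submissions.append(entry)
        return entry

    def give_feedback(self, entry: Submission, note: str) -> None:
        # Seekers review entries as they arrive and comment on promising ones,
        # letting solvers revise and resubmit improved versions.
        entry.feedback = note

    def pick_winner(self, score) -> Submission:
        # When the contest closes, the seeker ranks entries and awards the prize.
        return max(self.submissions, key=score)
```

In this sketch, a seeker would create a Contest, collect Submission entries as they arrive, send notes to the most promising ones, and call pick_winner with whatever scoring function reflects the project’s goals.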
Better contest design, better solutions
To fill gaps in existing research, Chen looked beyond the much-studied topic of prizes and found that design parameters, project characteristics and the market environment also affect the quality of solutions, and therefore contest performance. The first factor, design parameters, covers the prize amount, the length of the contest description and the contest’s duration. Chen and her co-authors analyzed nearly 2,000 contests conducted between September 2008 and September 2009 on TaskCN.com, a China-based open-innovation platform that is one of the largest in the world. Among their findings:
- Above-average prize amounts attract more contestants for idea-based projects. But they have little or no effect on the number of contestants in expertise-based projects, where contestants’ time is scarce.
- For idea-based projects, shorter descriptions attract more contestants. The brevity seems to allow for more creativity, Chen says.
- For expertise-based projects, longer descriptions attract more contestants. The details give contestants a better sense of what the seeker wants.
- A contest of longer duration attracts more contestants, but the number of new entrants declines as the contest continues. Seekers should weigh the benefit of gaining entrants against the cost of maintaining the contest.
The second factor seekers should consider is the project’s characteristics, especially its complexity. Chen suggests that seekers with complex projects consider a modular or two-step approach to attract more contestants. “We know that some people are really creative, but they might not be very good in implementation,” she said. “So it is always a good idea to break the project into two parts – the idea part and then the execution part.”
This allows a seeker to better match contest design to project characteristics – a high prize and short description in the idea phase, for example, and a more detailed description for the expertise phase. The overall result: more contestants and better results. The third factor for seekers to consider when launching a contest is the market environment. Similar contests might run simultaneously, and expertise-based projects, in particular, attract more solvers when there are fewer contests competing for experts’ time and effort.
Feedback, open evaluation are big helps
Chen also helped break new ground by looking at the important role feedback plays in open innovation. Feedback allows seekers to give their thoughts, opinions or preferences on submissions. Contestants who receive feedback perceive a greater chance of winning, leading them to put in more effort and increasing the match between their solution and the seeker’s goals. “The way feedback works, you are able to send an informative and effective signal to a contestant whose ideas or solutions you like,” Chen said. “By giving this signal, you can actually increase the incentive of this person to devote more effort to revise the solution in the direction you want.”
About 70 percent of seekers conducting contests on the innovation platforms Zhubajie and TaskCN use a feedback system, one of Chen’s studies found. Feedback might seem like extra work, but the same study indicated that seekers gain even when they give feedback to relatively few preferred solvers. On average, sending feedback to just 8.9 percent of solutions significantly increased the effort contestants put in, and winning solutions were ultimately chosen from the small set that received feedback.
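As a back-of-the-envelope illustration of that finding (a sketch, not the study’s actual procedure), a seeker could score incoming submissions however it likes and reserve feedback for roughly the top 9 percent. The function name and scoring interface here are hypothetical.

```python
def feedback_targets(submissions, score, fraction=0.089):
    """Select the most promising submissions to receive feedback.

    `score` is any seeker-defined quality estimate; the default fraction
    mirrors the average 8.9 percent of solutions that got feedback in the
    study, though the right cutoff will vary from contest to contest.
    """
    ranked = sorted(submissions, key=score, reverse=True)
    k = max(1, round(len(ranked) * fraction))  # always flag at least one entry
    return ranked[:k]
```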
“The most important thing is that the feedback needs to be effective,” Chen said. “It’s actually mutually beneficial, because for seekers, you want to have a better solution that meets what you want, and you want to make sure the feedback is very informative so that the solver knows, first of all, what to do, and secondly, whether they want to do it.” Feedback is one part of a contest’s evaluation phase. But a company’s small team of internal evaluators can be overwhelmed if entries flood in, or it might have a narrow bias when picking a winning solution.
To make evaluations more efficient and less costly, Chen’s research suggests the use of “open evaluation.” Like open innovation, open evaluation offers a prize to external evaluators who help a seeker find the best solution. The process can be as simple as having evaluators vote on which solution they think the seeker’s internal evaluators will like best. For seekers, the process can confirm the internal team’s choice or open its eyes to other good solutions.
Chen found that a strong open-evaluation system depends on offering a higher prize, which brings in more evaluators, and on giving the evaluators criteria that help them vote in the seeker’s best interest. However, when there are many ideas to evaluate, evaluators tend to put in less effort and vote with the herd, which gives extra weight to a small group of solutions. Chen suggests reducing the herding effect by making evaluators’ votes private, not public.
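A minimal sketch of that design choice, with hypothetical names throughout: collect votes without exposing a running tally, and reveal counts only after voting closes, so later evaluators cannot see, and follow, the current leader.

```python
from collections import Counter

class PrivateBallot:
    """Collect evaluators' votes without exposing a running tally.

    Hiding intermediate counts removes the herding signal: each evaluator
    has to judge solutions against the seeker's stated criteria rather
    than simply backing whichever entry is currently ahead.
    """

    def __init__(self):
        self._votes = Counter()
        self._voted = set()
        self._closed = False

    def vote(self, evaluator: str, solution_id: str) -> None:
        if self._closed:
            raise RuntimeError("voting has closed")
        if evaluator in self._voted:
            raise ValueError("each evaluator votes once")
        self._voted.add(evaluator)
        self._votes[solution_id] += 1  # nothing is revealed to other voters

    def close(self) -> dict[str, int]:
        # Counts become visible only once all votes are in.
        self._closed = True
        return dict(self._votes)
```

Publishing the tally in real time would invert this: every vote would update a visible leaderboard that later evaluators could anchor on.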
Tips for solvers
Chen’s research shows that solvers’ expertise and strategy influence their chances of winning. As expected, solvers with more expertise in a contest’s subject matter are more likely to win. But strategy also plays a role. Solvers who submit ideas early or late in a contest are more likely to win, because both groups tend to be high-quality contestants. With early solvers, “they don’t mind so much that others are entering because they are very good … And they can get feedback early and react better and more quickly,” Chen said.
With late solvers, “this might be strategic waiting, because they don’t want to give a free ride. They also know they have a very good solution and they don’t want to release it … till the last minute.” Open contests, in which contestants can view what others submit, call for strategic decisions. Entering early can reduce redundant effort and discourage lower-quality ideas, but it also allows other contestants to learn from the early ideas and submit improvements later, Chen said. In expertise-based contests, the research found that the longer the time between entering a contest and submitting a solution, the more likely a solver is to win. The solver might be putting in more effort, or might be waiting strategically, the researchers say.
Bottom line:
Chen offers these suggestions for improving online open-innovation contests:
- To attract the most contestants, seekers should match contest design parameters to the type of solution they are seeking: high prizes and short descriptions for idea-based projects, for example, and more detailed descriptions for expertise-based projects. If the project is complex, reduce the complexity, for instance by breaking the project into modules or a series of smaller projects, to attract more contestants and get better results. It can be especially useful to split a project into an idea phase and an execution phase run as two separate contests. Give feedback to good ideas, especially early on, and consider using “open evaluation” with private voting to bolster internal evaluations.
- To increase the chances of winning, solvers should know that expertise matters. So does timing: early entry gives them a chance to get feedback and improve their solution, while late entry means they are more likely to devise a unique solution. Submitting mid-contest gets them neither benefit.
- Contest operators should design a feedback system that is easy for seekers to use. Educate seekers on the benefits of giving feedback and on ways to make feedback useful.