29 Questions with the NASA CoECI Team: InnoCentive/NASA January 14, 2014 Webinar

Posted by jartese on Jan 21, 2014 10:21:48 PM

Q: How do you manage IP on the solutions?

Steve Rader: It somewhat depends on the platform. IP management is one of the services that comes with the platform, and they can arrange to transfer IP as needed. For Innocentive challenges, there are a variety of options depending on the design of the challenge. For NASA Tournament Lab challenges, we typically develop solutions under open source licensing (Apache 2.0).

Q: What is the screening process for the solutions on the internal platform?

Steve Rader: For NASA@work (internal collaboration platform), we run the submission process more as a discussion thread, so there is dialog between the challenge owner and solvers during the process.  That said, ultimately, the challenge owner reviews the submissions to find the winner based on the solution criteria they specified in the challenge content.   Note that for the internal platform, we don't really have to worry about screening bogus or superfluous submissions like you might on a public platform.

Q: Is expertise seeking also targeted internally only?

Steve Rader: We typically use the internal NASA@work platform for expertise seeking, but not necessarily exclusively. In other words, we have not used our external platforms for that to date, but we have not ruled out that possibility.

Q: What resources are required to deal with high-volume submissions, and what is the related process flow to get to a solution?

Steve Rader: This really depends on the challenge and how it is formulated (and what platform it is on). First off, the platform vendor (Innocentive, TopCoder, etc.) usually can filter out a lot of the submissions that do not meet minimum requirements. You can minimize the resources required by taking the time to really define the minimum requirements for submissions and by making sure that you provide a list of solutions/ideas that you have already considered. Sometimes there are ways to score submissions (speed, accuracy, etc.). The more explicitly your judging criteria are defined and spelled out, the better the platform folks can help you, and the less of your time is spent reviewing submissions.
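
To make the idea of filtering against minimum requirements concrete, here is a minimal sketch in Python. It assumes a simple list of submission records; the field names, thresholds, and data are hypothetical and do not reflect any platform's actual API.

```python
# Hypothetical sketch: screening challenge submissions against stated
# minimum requirements before human review. All names and thresholds
# here are illustrative, not any platform's real interface.

MAX_RUNTIME_S = 10.0   # e.g., an algorithm must finish within 10 seconds
MIN_ACCURACY = 0.80    # e.g., at least 80% accuracy on the test set

def meets_minimums(submission: dict) -> bool:
    """Return True only if a submission clears every stated minimum."""
    return (
        submission.get("has_documentation", False)
        and submission.get("runtime_s", float("inf")) <= MAX_RUNTIME_S
        and submission.get("accuracy", 0.0) >= MIN_ACCURACY
    )

submissions = [
    {"id": "A", "has_documentation": True,  "runtime_s": 4.2,  "accuracy": 0.91},
    {"id": "B", "has_documentation": False, "runtime_s": 3.0,  "accuracy": 0.95},
    {"id": "C", "has_documentation": True,  "runtime_s": 22.0, "accuracy": 0.88},
]

# Only submissions that clear the minimums reach the judges.
for_review = [s for s in submissions if meets_minimums(s)]
print([s["id"] for s in for_review])  # -> ['A']
```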

Q: What is the process for arriving at a ranked list of candidate solutions?

Steve Rader: Your best bet is to define, before you begin, the minimum requirements and judging criteria for submissions, and to make sure that you provide a list of solutions/ideas that you have already considered. This should include ways to score submissions (speed, accuracy, etc.). These give you good criteria for judging (and for having the platform vendor filter out and rank solutions for you before final judging). There are several criteria like "completeness of solution", "supporting information/documentation", and "maturity" (is this just a concept or a proven solution). Then, you can also provide partial awards, split awards, or have ranked awards (1st, 2nd, etc.).
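
As an illustration of how explicit criteria can produce a ranked list, here is a minimal sketch that weights and sums the criteria mentioned above. The weights, the 0-10 scoring scale, and the solver data are assumptions for illustration only.

```python
# Hypothetical sketch: turning explicit judging criteria into a ranked
# list of candidate solutions. Weights and scores are invented.

WEIGHTS = {"completeness": 0.4, "documentation": 0.3, "maturity": 0.3}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10 scale) into one weighted total."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

candidates = {
    "solver_1": {"completeness": 9, "documentation": 7, "maturity": 4},
    "solver_2": {"completeness": 7, "documentation": 9, "maturity": 8},
    "solver_3": {"completeness": 5, "documentation": 6, "maturity": 9},
}

# Rank candidates from highest to lowest weighted score.
ranked = sorted(candidates.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for rank, (name, scores) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {weighted_score(scores):.1f}")
# -> 1. solver_2: 7.9 / 2. solver_1: 6.9 / 3. solver_3: 6.5
```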

Q: Can you go into some detail on who is testing and executing on those 60 challenges?

Steve Rader: For solutions that require testing, the solvers in the community typically are responsible for testing their solution. If it is an algorithm or software solution that can be tested against benchmarks or criteria, that typically is done on the platform. Regardless, at the end of the challenge there are staff assigned to evaluate and score solutions. There are many different options and approaches here depending on the challenge, the solution we are looking for, and the platform. I will say we have been very impressed with some of these techniques and methods.
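
Where a solution is an algorithm that can be tested automatically, benchmark scoring on the platform might look something like the sketch below. The toy task, the composite scoring formula, and all names are assumptions for illustration, not any platform's actual test harness.

```python
# Hypothetical sketch of scoring an algorithm submission against a fixed
# benchmark, as a platform might do automatically during a challenge.
import time

def benchmark(solution_fn, cases):
    """Run a submitted function over benchmark cases, measuring accuracy and speed."""
    start = time.perf_counter()
    correct = sum(1 for arg, expected in cases if solution_fn(arg) == expected)
    elapsed = time.perf_counter() - start
    accuracy = correct / len(cases)
    # Invented composite score: reward accuracy, penalize runtime.
    return accuracy * 100 - elapsed * 10, accuracy, elapsed

# Toy challenge: given n, return n squared. One submitted solution:
def submitted_solution(n):
    return n * n

cases = [(n, n * n) for n in range(1000)]
score, accuracy, elapsed = benchmark(submitted_solution, cases)
print(f"accuracy={accuracy:.2%}  time={elapsed:.4f}s  score={score:.1f}")
```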

Q: How widely are the invitations to submit proposals disseminated? Who decides to whom the invitations are sent?

Steve Rader: Since we are utilizing open innovation platforms (Innocentive, TopCoder, etc.), we rely on them to manage their communities (which number in the hundreds of thousands of people). However, all of the platforms that we use are open and free for the public to join, and so we publicize our challenges on sites like challenge.gov so that members of the public who are interested in participating in these challenges know about them and can participate via those communities. Within the communities, the platform communicates with its members about the challenges. Sometimes that is done selectively, but most of the time it is done on a broad basis across their community. Many times, platforms will also publicize the challenges externally (via press releases, blogs, social media) to try and draw in participants from the broader public. Now, I've been talking about regular participation in challenges. Another way to read your question is how we solicit proposals for our upcoming BAA. That will be announced publicly (we don't invite specific vendors). Those interested can monitor government/NASA procurement web sites: http://www.hq.nasa.gov/office/procurement/index.html or https://prod.nais.nasa.gov/cgibin/nais/index.cgi

Q: When you set up a budget, what are the big drivers of cost?

Steve Rader: The cost driver will depend on the challenge and its definition/complexity. Some challenges require a lot of up-front work to set up and pose, while others require a large prize amount, which ends up being the driver. For small challenges, sometimes fees and overhead end up being the driver/bulk cost (comparatively). There have even been times when the research component has been our largest cost. This also depends on which platform you use, since they often have different cost structures.

Q: What systems are used to collect the challenges to run?

Steve Rader: For NASA@work challenges, there is a mechanism for employees to directly submit challenges for consideration.  Once submitted, Kathryn Keeton works with the Challenge Owner to refine their challenge and then launch it.   For our larger challenges, we are typically working directly with organizations and so we help them fill out a challenge worksheet that the CoECI uses to then determine appropriate platform, funding, etc.   We have not yet implemented a larger scale system (but I like where you are going).

Q: Great presentation. What were the most important barriers to remove inside NASA to adopt the model?

Steve Rader: Thank you. I touched on it a bit during the Q&A, but probably the biggest hurdles are cultural. NASA is made up of very talented scientists and engineers who are very focused on solving problems (themselves). The idea that innovative ideas that can help solve those problems might come from others who are not the experts (like they are) is a very hard concept to embrace (both logically and from a human nature standpoint). The second barrier is simply perception about "crowdsourcing" and what it is and can be used for. This is easier to deal with because we can show pilots and case studies that are very compelling. In fact, these case studies also help to break down the first barrier, because education about the cases starts to show people the possibilities. One other barrier is simply infusion into the large organization: how do you 1) get the message to all of the implementers/researchers/developers and their managers, and 2) provide them easy ways to "test drive" the methods? All of these assume that you have adequately lowered your procurement and legal barriers (which, at least in the government, will stop you before you even begin if you don't address them).

Q: Does NASA consider this "social learning" inside the organization?

Steve Rader: We are beginning to talk more about "social learning" and how we cultivate/connect community using some of these methods. To date, NASA@work has really started to be a platform that begins to address this. We are working on a better "share" feature that will allow users to forward challenges to people in their networks who they know have knowledge about the challenge. The winner of our "Reed-Solomon Encoder Tutorial" challenge actually came from that kind of social networking, so we recognize the power... we just need to get the implementation going (and there is of course much more to "social learning" that we want to address... i.e., some of what IBM has been working on).

Q: How do you set decision criteria for selection of winning submissions?

Steve Rader: Before we start a challenge, we work to fully define the minimum requirements and judging criteria for submissions, and make sure that we provide a list of solutions/ideas that we have already considered. For some challenges, this includes ways to score submissions (speed, accuracy, etc.). These give us good criteria for judging (and for having the platform staff filter out and rank solutions for us before final judging). There are several criteria like "completeness of solution", "supporting information/documentation", and "maturity" (is this just a concept or a proven solution). We also provide partial awards, split awards, or ranked awards (1st, 2nd, etc.).

Q: Does NASA/Innocentive use the rich database of solutions to try and further the progression of the generic problem-solving process?

Steve Rader: This is a good question. I will let Innocentive answer from their perspective, but from our NASA CoECI perspective, we will many times use the NASA@work platform to run an initial survey to see if, or what, solutions exist for the problem. To date, we have not tried to implement a database because they have a very high maintenance cost (and we have not had great luck with our users making use of them).

Q: Very interesting data that 75% of solutions have come from people in other domains.

Steve Rader: Thank you.  Yes, there is some very interesting data from Karim Lakhani (http://www.hbs.edu/faculty/Pages/profile.aspx?facId=240491&facInfo=pub) at Harvard Business School.   A great read that includes a lot of his work and uses Innocentive as a case study extensively is a book by Jeff Howe called "Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business"  http://www.amazon.com/Crowdsourcing-Power-Driving-Future-Business/dp/0307396215 that I highly recommend as a primer on this new Open Innovation world.   Additionally, I believe Innocentive has some nice case studies on their web site that talk about that particular study.

Q: How do you decide which platform to utilize (you mentioned you had 5 different platforms), and can you help explain the portfolio management process you use?

Steve Rader: So right now we have:

1) Centennial Challenges (note that NASA runs these, but not as part of CoECI): These are very large prize challenges run by NASA where participants are expected to build and demonstrate something (like a robot or aircraft).

2) NASA@work: These are internal challenges (meaning that NASA employees are the solvers) where we use Innocentive's "@work" platform to host challenges. We sometimes select this platform to run a difficult challenge in order to better understand or tune the challenge before we post it to a larger/public platform.

3) Innocentive's NASA Pavilion platform is used when we are facing a really difficult problem (usually technical) for which we are hoping to find a very innovative solution (and can leverage the large crowd of diverse technical folks that Innocentive has).

4) The NASA Tournament Lab (NTL), which is a contract with Harvard and a subcontract to TopCoder, is used mainly for software- and algorithm-focused challenges, since their 600,000 community members are mostly experts in those areas.

5) Yet2.com is very specifically for technology searches, where the seeker is trying to access an existing technology or discipline expert that may not be available or visible via normal searches.

There are quite a few other factors that go into our decision making based on our various contracts with those vendors.

Q: How do participants get included in the internal or external network?

Steve Rader: Our "internal" network (NASA@work) is only for NASA employees (civil servants and contractors)… so to be included, one must get a job with NASA or its contractors.  The "External" network is really handled by our platform vendors (Innocentive, TopCoder, etc.).   Those platforms are open to the public to participate in, so anyone wishing to be included in a challenge only needs to sign up with the platform that is running the challenge.

Q: Do you always have to have an award to draw external ideas? Can the opportunity to work with a company on a joint development effort be a reward in itself?

Steve Rader: This is an interesting question. In crowdsourcing, there is often discussion about what incentivizes individuals to contribute, and those incentives are typically described as the 4 G's: "Gold, Guts, Glory, and/or Good". My observation is that successful communities attempt to provide multiple incentives. The opportunity to work with a company on a joint development effort could definitely be an award (similar to our contest that was seeking ideas for Centennial Challenges, where the award was funding to work on the challenge). There is lots of research that Karim Lakhani at Harvard Business School (http://www.hbs.edu/faculty/Pages/profile.aspx?facId=240491&facInfo=pub) is doing with the NTL to really understand these incentives and how they work.

Q: How do you deal with Intellectual Property concerns? How big of a problem is this for NASA?

Steve Rader: It somewhat depends on the platform. For Innocentive challenges, IP management is one of the services that comes with the platform, where they can arrange to transfer IP as needed. For other challenges, we typically develop solutions under open source licensing (Apache 2.0). We made sure that we got our NASA lawyers involved up front because we knew that this was tricky. So far, this seems to be working well, but we are very cautious around the issues of IP and pay close attention to it.

Q: Is the prize just given to 1 individual? How do you reconcile that portions of submitted solutions, if put together, may be the best solution?

Steve Rader: Thanks for the great question! This somewhat depends on the platform, but generally, we only award one winner if there is a clear winner. If there are 2-3 that are close, we will usually award partial awards (depending on how they stack up against our evaluation criteria). We do this so that we can do exactly as you suggest. If we think there is value in different aspects of multiple submissions, we award multiple winners and then look for ways to leverage the best parts to make the best solution. Other times, we'll accept multiple winners because the submissions show promise, but further examination/development/testing is required to determine which might work best to address the problem.
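
For a concrete (and entirely hypothetical) picture of how a purse might be split across close submissions, here is a minimal sketch that divides a prize in proportion to evaluation scores. Real award structures vary by challenge and platform; the purse and scores below are invented.

```python
# Hypothetical sketch: splitting one prize purse across close finalists
# in proportion to their evaluation scores. All numbers are invented.

def split_purse(purse: float, scores: dict) -> dict:
    """Divide the purse proportionally to each finalist's score."""
    total = sum(scores.values())
    return {name: round(purse * score / total, 2) for name, score in scores.items()}

# Three finalists whose solutions each contribute something useful:
finalists = {"solver_A": 8.7, "solver_B": 8.4, "solver_C": 7.9}
print(split_purse(10000.0, finalists))
# -> {'solver_A': 3480.0, 'solver_B': 3360.0, 'solver_C': 3160.0}
```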

Q: How do you screen for novelty and how is IP managed if solutions are adopted?

Steve Rader: It somewhat depends on the platform. For Innocentive challenges, IP management is one of the services that comes with the platform, where they can arrange to transfer IP as needed (and I believe they screen for novelty as well). For NTL challenges, we typically develop solutions under open source licensing (Apache 2.0).

Q: What's your advice for a company that would like to use open innovation? Where do they start: internally, externally, or both at the same time?

Steve Rader: In my personal opinion, there are lots of very lightweight (low cost) ways to "dip your toe" into this area. I suggest first getting a good handle on the basics by reading something like Jeff Howe's book "Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business", as I find it to be a great primer on this new Open Innovation world (note that Jeff Howe is the Wired editor who coined the term "crowdsourcing"). The site crowdsourcing.org has a lot of really good information as well. Depending on the needs of your company and your budgets, you could invest in challenges similar to those we've been discussing; that is all very specific to the application. If you have a large company, an internal platform can be a good way to start, as you suggest. Some of that would depend on the IT infrastructure, the size of your company, and the nature of your workforce. I can say that it has worked well for us and proven to be a worthwhile investment. Benchmarking might also be an approach that could help you get started.

Q: Does NASA use any type of prediction market or some other mechanism that helps choose/predict the outcome of a future event?

Steve Rader:  This is a very interesting question.   To my knowledge, the answer is no.   That said, we do work on algorithms that forecast solar events (similar to weather forecasting) as I talked about in the webinar…. But I think you may be talking about something else.

Q: How do you manage patent ownership between NASA and external inventors who respond to challenges?

Steve Rader: It somewhat depends on the platform. For Innocentive challenges, IP management is one of the services that comes with the platform, where they can arrange to transfer IP as needed (and I believe they screen for novelty as well). For NTL challenges, we typically develop solutions under open source licensing (Apache 2.0). Note that most platforms have stated rules/agreements with their community members (participants) as they pertain to IP.

Q: What are the best practices for writing challenges?

Steve Rader: There is not a short answer to this question, but I'll try. Kathryn Keeton mentioned InnoCentive's LASSO guidelines for challenge formulation (I believe they define LASSO at http://www.innocentive.com/blog/2010/10/13/lasso-guidelines-for-problem-solving/). I think these hit the high points. The biggest one for me tends to be breaking the problem down into its smallest components. Different vendors have different approaches to that; some are more hands-on, while others let you define the components yourself.

Q: How do you select the platform and estimate target solvers?

Steve Rader: You might check out the answer I gave to one of the other questions that was similar, but basically we select a platform for a challenge based on its type and cost, and try to fit it to the platform that we think works best with the challenge, our cost/schedule constraints, and our contracts.

Q: Agree with Steve R that investment in supporting innovation acceleration is critical. Any guidance on private sector initiatives or best practices to marry the investment piece with innovation programmes that you might have observed out there?

Steve Rader: Thank you. Very interesting question. I'd have to say that even though Open Innovation is new, it is simply one more way to get things done, and its use has to be factored into our business and organizations just like any other procurement decision (like make or buy). That said, I can see options to invest (like in platforms for internal crowdsourcing of your workforce), but also opportunities to "buy it by the yard." There are so many new platforms coming online that there may be lots of opportunities to leverage this new approach on a problem-by-problem basis. This is likely easier in the private sector, since our government acquisition rules are not quite as nimble. I'm not sure if this really answers your question, but it's what I've got.

Q: Is there a way to get a copy of the expectations and guidelines from Kathryn for a challenge without seeing anything confidential? It sounds like Kathryn has everything covered, including marketing and challenge information, etc.

http://www.innocentive.com/blog/2010/10/13/lasso-guidelines-for-problem-solving/

Q: What are the next frontiers (challenges) for open source innovation within NASA?

Steve Rader: Great question.   For CoECI, we are planning to continue to infuse these new approaches and tools into the development organizations across NASA and across the rest of the Federal Agencies.   We really want everyone at NASA and the Federal Government to naturally consider open innovation methods as options whenever they are building a system or solving a problem.  To that end, we are developing a "Solutions Mechanism Guide" which is simply a wizard that can help guide NASA engineers and managers through which options might be a best fit for their particular problem/project.   On the farther horizon, I personally believe that there are some really amazing attributes to connected communities that could be harnessed to 1) improve NASA's internal collaboration and development capabilities and 2) to better engage the public in every aspect of its mission.    We'll see!!! :)

Q: CoECI: Is it a permanent full-time team, or is it a project team with participants working part time?

Steve Rader: The CoECI team is a full time team (although I think we have a couple of team members with "other assigned duties").   We also utilize a handful of "Center Champions" who are matrixed at all of the 10 NASA centers to help facilitate NASA@work activities and communication (very part time).

Topics: Seekers
