Could you start by giving us a bit of background information on how this project got started, and your primary objectives for the INSTINCT challenge?
In 2010, IARPA launched its TRUST (Tools for Recognizing Useful Signals of Trustworthiness) program to develop better methods for assessing trustworthiness in realistic conditions. Research conducted by IARPA-funded teams during the first phase of the TRUST program produced large amounts of neural, physiological, and behavioral data from interactions between informed volunteers. We’ve performed a number of analyses and found some promising results. However, given the complexity and volume of these data, there are many different approaches that could be adopted for analyzing them, and it’s not obvious which would be most effective or efficient. The INSTINCT Challenge seeks to advance the state of the art in multimodal, dynamic data analysis, so that we can better understand the best ways of analyzing this type of dataset. Our objective is to improve the understanding and measurement of complex social interaction, particularly how and when people choose to act in a trustworthy fashion.
This is a very different question from detecting deception. We’re not looking for a lie detector, but for ways to improve capabilities to assess when people will, or won’t, follow through on their commitments. Interestingly, some research suggests that we might be able to measure our own responses to anticipate someone else’s behavior, since people may have non-obvious, non-conscious signals (e.g., heart rate changes) that are predictive of others’ intentions or abilities. In effect, we ourselves could be leveraged as sensors. Combining measurement of this type of signal with conscious judgment has improved decision making in other fields (e.g., medical diagnosis), and could be of tremendous benefit in improving other kinds of decision-making.
What are some of the key attributes you’d like to see (or not see) in a winning solution?
We’re open to many different kinds of algorithms, drawing on expertise from many different fields. We expect winning solutions to draw on multiple disciplines, or on areas a couple of degrees removed from neurophysiological research; in other words, not the kinds of analyses that are already commonly used with this type of data. We’re also interested in new approaches from machine learning and other analytic methods that may generalize across many different types of data sets.
Ultimately, however, solutions will be judged based on their statistical performance, rather than on any subjective judgment of quality. We’re very excited about this approach, since it will allow the best ideas to transcend our expectations about what a good solution will necessarily look like, or what kinds of expertise are, or are not, necessary for this kind of research. In the end, the only attribute that really matters is doing a great job of predicting trustworthy behavior.
What is IARPA planning to do with the ideas submitted? How will these insights be translated into real-world applications?
This challenge is very much an intermediate step: results from INSTINCT will be used to direct the second phase of research in this area. In Phase 1, we collected many different kinds of neural, physiological, subjective, and behavioral data. Challenge results will tell us which subset of these factors is most predictive, and therefore worth exploring in more depth.
The ultimate goal of the TRUST program is new technologies for assessing trustworthiness (or the lack thereof) in real-world interactions. These will be useful both within and outside the intelligence community. It isn’t yet clear what range of contexts the tools developed from these data will be usable in; further research will be needed to determine those capabilities. But there are many situations in everyone’s life in which better prediction of trustworthy behavior could improve safety and security.
Why did you choose a crowdsourcing approach to this problem, rather than more traditional innovation strategies?
This is IARPA’s first challenge contest. As an innovation-oriented organization, we’re always interested in trying out new strategies that may produce new types of solutions. This data set seemed like the perfect opportunity. There are so many different possible ways of carrying out the data analysis that funding one excellent statistician, or a dozen, would be no guarantee of getting the best possible solution. We hope to draw on cutting-edge techniques in multiple disciplines, including machine learning, bioinformatics, neuroscience, and analytics, and to find solutions from people that, without crowdsourcing, we wouldn’t have known we needed to work with. This is exactly the type of situation in which a challenge is not only squarely within IARPA’s mission to do high-risk, high-payoff research, but also works to the advantage of the best (but perhaps least obvious) solvers.
Thanks for your time. Do you have any final advice to our Solvers as they tackle your new challenge?
Thanks for giving us the opportunity to share this challenge with Innocentive’s Solver community!
We’re looking for solutions that draw on multiple disciplines, on types of analyses that haven’t previously been applied in neurophysiology or social science, or even on novel ways of applying more conventional analytic techniques. We encourage Solvers to reach out to each other, and to colleagues in complementary fields, to come up with new and creative analytic approaches. We also encourage those with promising approaches to test them early, and often, so that they can be optimized before the final submission deadline.