Platform for Developing Collective Expertise
Navy SBIR 2016.2 - Topic N162-131
ONR - Ms. Lore-Anne Ponirakis - [email protected]
Opens: May 23, 2016 - Closes: June 22, 2016

N162-131
TITLE: Platform for Developing Collective Expertise

TECHNOLOGY AREA(S): Human Systems, Information Systems

ACQUISITION PROGRAM: The Distributed Common Ground System-Navy (DCGS-N) Program

OBJECTIVE: Develop computational models and tools for rapid training and development of collective expertise.

DESCRIPTION: The development of individual expertise depends on a) efficient teaching, b) the quality of the learning material, and c) objective assessment methods. Traditionally, teaching has been based on a one-directional interaction between a teacher and a student in which the material is presented in a “one-size-fits-all” fashion. Assessment of a student’s expertise has been conducted using a similarly crude approach, by administering predesigned tests. Recent advances in technology provide the opportunity to revolutionize teaching and training by tailoring instruction to the needs and characteristics of each student [1,2]. However, advances in the area of assessment have been much more modest. Tests remain the main tool for assessing a student’s mastery of the learned material and are still difficult to replace with more efficient, but harder-to-implement, peer-based assessments [3].

While individual expertise is of great value for addressing a variety of tasks [4], it can be inadequate for very complex tasks that require the joint efforts of a group of trained individuals. Of particular interest, therefore, is the development of a platform for training a group of individuals so that they can achieve performance that cannot be matched by any group of individual experts operating independently. Unfortunately, the theory of expertise as currently defined has little to say about collective capabilities in terms of training and assessment of a group, and the associated theories and experiments are therefore missing [5,6]. While adaptive learning methods have been developed for individual learners [7], new approaches are needed to automatically optimize the whole learning ecosystem by considering not just the parameters of an individual learner but also the parameters of the target content, peer interaction, and the instructor’s role within group performance. Special focus should be devoted to rapid convergence and efficient exploration of all ecosystem parameters.
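As a purely illustrative sketch of the kind of joint optimization envisioned here, the Python snippet below treats a handful of hypothetical ecosystem parameters (group size, content difficulty, interaction mode) as discrete configurations and uses a standard UCB1 bandit rule to balance exploration of untried configurations against exploitation of those that have performed well. The parameter names, the reward model, and the bandit formulation are assumptions made for illustration; they are not prescribed by this topic.

```python
# Illustrative sketch only: UCB1 search over hypothetical "ecosystem" configurations.
import itertools
import math
import random

# Hypothetical discrete ecosystem parameters to optimize jointly (assumed values).
GROUP_SIZES = [3, 5, 8]
CONTENT_LEVELS = ["intro", "intermediate", "advanced"]
INTERACTION_MODES = ["discussion_board", "small_group"]

configs = list(itertools.product(GROUP_SIZES, CONTENT_LEVELS, INTERACTION_MODES))
counts = [0] * len(configs)    # how often each configuration has been tried
totals = [0.0] * len(configs)  # cumulative observed group performance


def observed_performance(config):
    """Placeholder for a real measurement of group performance in [0, 1]."""
    return random.random()


def choose_config(t):
    """UCB1 rule: try every configuration once, then trade off mean reward vs. uncertainty."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(
        range(len(configs)),
        key=lambda i: totals[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]),
    )


for t in range(1, 201):  # 200 simulated training sessions
    i = choose_config(t)
    reward = observed_performance(configs[i])
    counts[i] += 1
    totals[i] += reward

best = max(range(len(configs)), key=lambda i: totals[i] / counts[i])
print("Best configuration found:", configs[best])
```

In practice the observed_performance() placeholder would be replaced by an actual measure of group learning or task performance collected during training sessions.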

It is clear that, in order to develop group expertise, it is not necessary for each individual in a group to achieve the maximal possible (individual) expertise. Of greater importance is how to develop complementary expertise and how to develop mechanisms for efficient communication and collaboration [8] among group members. While the potential for large-scale collaboration has been demonstrated in certain domains [9], further efforts are required to generalize these findings to other domains where expertise is required.

PHASE I: Design the experiments and approaches that will be used for developing and testing collective expertise. Define approaches for conducting collaboration and efficient communication (e.g., discussion boards or small-group collaborations), matching members based on their expertise, and incentivizing collaboration. Identify and select learning tasks. Propose and discuss the optimal design of the group structure (e.g., centralized, hierarchical, flat, random, or cluster-based). Propose algorithms for peer-based assessment of learning and performance.
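One possible, purely hypothetical starting point for matching members based on complementary expertise is a greedy coverage heuristic: repeatedly add the candidate who contributes the most skills not yet represented in the group. The skill labels and the coverage objective below are illustrative assumptions, not requirements of this topic.

```python
# Illustrative sketch only: greedy formation of a group with complementary expertise.
def form_group(candidates, required_skills, group_size):
    """Greedily pick members whose skills cover the most still-uncovered requirements."""
    group, covered = [], set()
    pool = dict(candidates)  # name -> set of skills
    while pool and len(group) < group_size:
        # Choose the candidate adding the most new (uncovered) required skills.
        name = max(pool, key=lambda n: len(pool[n] & (required_skills - covered)))
        covered |= pool.pop(name) & required_skills
        group.append(name)
        if covered == required_skills:
            break
    return group, covered


# Hypothetical candidate pool and skill requirements, for illustration only.
candidates = {
    "analyst_a": {"imagery", "signals"},
    "analyst_b": {"signals", "linguistics"},
    "analyst_c": {"imagery", "network_analysis"},
    "analyst_d": {"linguistics"},
}
group, covered = form_group(candidates, {"imagery", "signals", "linguistics"}, 3)
print(group, covered)
```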

During the Phase I option, if exercised, design metrics for algorithm evaluation in Phase II, including but not limited to: joint optimization of ecosystem parameters; rapid convergence and efficient exploration of all ecosystem parameters; and assessment of learning and of group performance. Develop algorithms for peer-based assessment of learning and performance.
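As a minimal, assumed illustration of a peer-based assessment algorithm, the sketch below jointly estimates item scores and rater reliabilities: item scores are reliability-weighted averages of peer ratings, and each rater’s reliability is then refreshed according to how closely that rater tracks the emerging consensus (loosely in the spirit of Bayesian peer-grading approaches such as reference [3], though greatly simplified). The data layout and weighting rule are assumptions for illustration only.

```python
# Illustrative sketch only: iterative peer-assessment aggregation with rater reliability.
def aggregate_peer_scores(ratings, iterations=10):
    """ratings: {item: {rater: score in [0, 1]}}. Returns (item_scores, rater_weights)."""
    raters = {r for per_item in ratings.values() for r in per_item}
    weights = {r: 1.0 for r in raters}  # start by trusting every rater equally
    scores = {}
    for _ in range(iterations):
        # 1. Reliability-weighted average of peer ratings for each item.
        scores = {
            item: sum(weights[r] * s for r, s in per_item.items())
            / sum(weights[r] for r in per_item)
            for item, per_item in ratings.items()
        }
        # 2. Down-weight raters who disagree with the current consensus.
        for r in raters:
            errs = [abs(per_item[r] - scores[item])
                    for item, per_item in ratings.items() if r in per_item]
            if errs:
                weights[r] = 1.0 / (1e-6 + sum(errs) / len(errs))
    return scores, weights


# Hypothetical peer ratings, for illustration only.
ratings = {
    "task1": {"p1": 0.9, "p2": 0.8, "p3": 0.2},
    "task2": {"p1": 0.7, "p2": 0.6, "p3": 0.1},
}
scores, weights = aggregate_peer_scores(ratings)
print(scores, weights)
```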

PHASE II: Based on the effort performed in Phase I, conduct experiments and demonstrate the operation of the developed algorithm(s). Perform detailed testing and evaluation of the algorithm(s). Establish performance parameters through experiments; determine the range of group sizes the algorithm(s) can support and the optimal group size for developing rapid expertise. In addition, define rapid expertise in terms of the time necessary to develop collective expertise, the class of task types, levels of difficulty, and prior expertise level.

PHASE III DUAL USE APPLICATIONS: Deliver functional algorithm(s) with established performance parameters. Finalize the design from Phase II, perform relevant testing, and transition the technology to appropriate Navy and commercial training and simulation efforts. Private Sector Commercial Potential: This technology will primarily support rapid learning and development of group expertise by providing methods for adaptive presentation of materials and efficient evaluation and testing strategies. It can therefore be readily transferred to any institution that requires learning, training, and evaluation of its personnel, including educational institutions as well as businesses that depend on continuous training and retraining of their employees.

REFERENCES:

  • E. Waters, A. S. Lan, and C. Studer, "Sparse Probit Factor Analysis for Learning Analytics," International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013.
  • C. Tekin, J. Braun, and M. van der Schaar, "eTutor: Online Learning for Personalized Education," ICASSP, 2015.
  • A. E. Waters, D. Tinapple, and R. G. Baraniuk, "BayesRank: A Bayesian Approach to Ranked Peer Grading," ACM Conference on Learning at Scale, 2015.
  • O. Atan, C. Tekin, M. van der Schaar, and W. Hsu, "A Data-Driven Approach for Matching Clinical Expertise to Individual Cases," ICASSP, 2015.
  • K. A. Ericsson and J. Smith, "Toward a General Theory of Expertise," 1987.
  • K. A. Ericsson et al., eds., "The Cambridge Handbook of Expertise and Expert Performance," Cambridge University Press, 2006.
  • T. Mandel, Y.-E. Liu, S. Levine, E. Brunskill, and Z. Popovic, "Offline Policy Evaluation Across Representations with Applications to Educational Games," International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2014.
  • W. Mason and D. J. Watts, "Collaborative Learning in Networks," Proceedings of the National Academy of Sciences, vol. 109, no. 3, pp. 764–769, 2012.
  • G. A. Khoury, A. Liwo, F. Khatib, H. Zhou, G. Chopra, "WeFold: A Coopetition for Protein Structure Prediction," Proteins: Structure, Function, and Bioinformatics, 2014.

KEYWORDS: Rapid training, adaptive learning, collective expertise, decision-making, assessment and evaluation.

** TOPIC AUTHOR (TPOC) **
DoD Notice:  
Between April 22, 2016 and May 22, 2016 you may talk directly with the Topic Authors (TPOC) to ask technical questions about the topics. Their contact information is listed above. For reasons of competitive fairness, direct communication between proposers and topic authors is not allowed starting May 23, 2016, when DoD begins accepting proposals for this solicitation. However, proposers may still submit written questions about solicitation topics through the DoD's SBIR/STTR Interactive Topic Information System (SITIS), in which the questioner and respondent remain anonymous and all questions and answers are posted electronically for general viewing until the solicitation closes. All proposers are advised to monitor SITIS (16.2 Q&A) during the solicitation period for questions and answers, and other significant information, relevant to the SBIR 16.2 topic under which they are proposing.

If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk at 800-348-0787 or [email protected]