CRIS professors receive $1.2M NSF grant.

The goal of this project is to develop the knowledge needed to enable effective and efficient collaborative perception across a set of independent, multi-modal data-generating agents. The project will study how to jointly model social and sensor data, and how to use this joint model to efficiently support spatio-temporal queries over the resulting embedding space. Beyond mapping information from disparate multi-modal sources into a common information space, the project will also study how to optimize the attention routing of controllable agents, such as UAVs, to maximize the reliability and coverage of the collected information.
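The joint-embedding idea can be illustrated with a toy sketch: two modalities are mapped into one shared vector space, and a spatio-temporal query filters records by region and time window before ranking them by similarity. Everything here is hypothetical (the random-projection "encoders", dimensions, and record layout are purely illustrative, not the project's actual models):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # dimension of the shared (joint) embedding space

# Toy "encoders": fixed random projections standing in for learned models
# that map each modality's raw features into the joint space.
proj_social = rng.normal(size=(5, D))  # social-data features: 5 -> D
proj_sensor = rng.normal(size=(3, D))  # sensor features: 3 -> D

def embed(features, proj):
    """Project raw features into the joint space and L2-normalize."""
    v = features @ proj
    return v / np.linalg.norm(v)

# Each record pairs a joint-space embedding with a location and timestamp.
records = []
for t in range(10):
    records.append((embed(rng.normal(size=5), proj_social), rng.uniform(0, 1, 2), t))
    records.append((embed(rng.normal(size=3), proj_sensor), rng.uniform(0, 1, 2), t))

def query(q_vec, center, radius, t_min, t_max, k=3):
    """Spatio-temporal query: cosine-nearest records inside a region/time window."""
    hits = [(float(q_vec @ e), t) for e, loc, t in records
            if t_min <= t <= t_max and np.linalg.norm(loc - center) <= radius]
    hits.sort(key=lambda h: -h[0])  # highest similarity first
    return hits[:k]

q = embed(rng.normal(size=5), proj_social)
results = query(q, center=np.array([0.5, 0.5]), radius=0.5, t_min=2, t_max=8)
```

Because both modalities land in the same normalized space, a single similarity search answers queries regardless of which modality produced each record.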