TY - GEN
T1 - Multimodal Mobile Collaboration Prototype Used in a Find, Fix, and Tag Scenario
AU - Burnett, Gregory M.
AU - Wischgoll, Thomas
AU - Finomore, Victor
AU - Calvo, Andres
PY - 2013
Y1 - 2013
N2 - Given recent technological advancements in mobile devices, military research initiatives are investigating these devices as a means to support multimodal cooperative interactions. Military components are executing dynamic combat and humanitarian missions while dismounted and on the move. Paramount to their success is timely and effective information sharing and mission planning to enact more effective actions. In this paper, we describe a prototype multimodal collaborative Android application. The mobile application was designed to support real-time battlefield perspective, acquisition, and dissemination of information among distributed operators. The prototype application was demonstrated in a scenario where teammates utilize different features of the software to collaboratively identify and deploy a virtual tracker-type device on hostile entities. Results showed significant improvements in completion times when users visually shared their perspectives versus relying on verbal descriptors. Additionally, the use of shared video significantly reduced the required utterances to complete the task.
KW - mobile computing
KW - multimodal interfaces
KW - remote collaboration
UR - http://www.scopus.com/inward/record.url?scp=84874812290&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84874812290&partnerID=8YFLogxK
UR - https://corescholar.libraries.wright.edu/cse/329
U2 - 10.1007/978-3-642-36632-1_7
DO - 10.1007/978-3-642-36632-1_7
M3 - Conference contribution
SN - 9783642366314
T3 - Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering
SP - 115
EP - 128
BT - Mobile Computing, Applications, and Services
A2 - Uhler, David
A2 - Mehta, Khanjan
A2 - Wong, Jennifer L.
PB - Springer Berlin Heidelberg
T2 - 4th International Conference on Mobile Computing, Applications, and Services, MobiCASE 2012
Y2 - 11 October 2012 through 12 October 2012
ER -