We designed and implemented multimodal systems for interactive gaming, image retrieval, and other applications. These systems accept the user's eye movements, spoken commands, or multitouch input for interaction. Some of the source code and documentation can be found here. Note that the posted code is intended for educational purposes only and depends heavily on the system environment, the attached devices, and the installed IDE. For further support, please contact us by email here.
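As a rough illustration of how such a system might route input from several modalities to application handlers, here is a minimal sketch of an event dispatcher. The class and event names are hypothetical and do not reflect the structure of the released code.

```python
# Hypothetical sketch: routing gaze, speech, and touch events to handlers.
# Names and structure are illustrative only, not the released implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class InputEvent:
    modality: str   # e.g. "gaze", "speech", or "touch"
    payload: dict   # e.g. {"x": 512, "y": 300} or {"command": "zoom in"}


class MultimodalDispatcher:
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[InputEvent], None]]] = {}

    def register(self, modality: str, handler: Callable[[InputEvent], None]) -> None:
        # Attach a handler to one input modality.
        self._handlers.setdefault(modality, []).append(handler)

    def dispatch(self, event: InputEvent) -> None:
        # Forward the event to every handler registered for its modality.
        for handler in self._handlers.get(event.modality, []):
            handler(event)


if __name__ == "__main__":
    dispatcher = MultimodalDispatcher()
    dispatcher.register("gaze", lambda e: print("fixation at", e.payload))
    dispatcher.register("speech", lambda e: print("spoken command:", e.payload["command"]))

    dispatcher.dispatch(InputEvent("gaze", {"x": 512, "y": 300}))
    dispatcher.dispatch(InputEvent("speech", {"command": "show similar images"}))
```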

Published papers are:

Engelman C.*, Li R., Pelz J., Shi P., and Haake A. Exploring Interaction Modes for Image Retrieval. Proceedings of the 1st Conference on Novel Gaze-Controlled Applications (NGCA 2011), Article 10, 2011.

Guo X., Li R., Alm C.O., Yu Q., Pelz J., Shi P., and Haake A. Infusing Perceptual Expertise and Domain Knowledge into a Human-Centered Image Retrieval System: A Prototype Application. Proceedings of ETRA 2014.

Vaidyanathan P., Prud’hommeaux E., Alm C.O., Pelz J.B., and Haake A.R. Alignment of Eye Movements and Spoken Language for Semantic Image Understanding. Proceedings of the 11th International Conference on Computational Semantics, pp. 76-81, London, UK, 2015.