IEEE Journal on Selected Topics in Signal Processing
Special Issue on Interactive Media Processing for Immersive Communication
Interpersonal communication is an essential and intrinsic element of the human way of life. Yet, for interpersonal communication among individuals connected via high-speed data networks, the prevalent technologies are limited to exchanging 2D video and audio captured by a single camera and microphone. These technologies fail to provide the level of immersivity necessary for an “in the same room” sense of presence, due to limitations such as gaze mismatch and the lack of depth perception in the rendered scene.
With recent advances in sensing technologies and accompanying data analysis tools, one can now acquire a large collection of media data describing both the sender’s physical environment (e.g., texture/depth images and audio captured from arrays of cameras and microphones) and the manner in which the sender is perceiving the presented media (e.g., gaze & head movements). This means that although the viewer’s display capabilities may remain limited (e.g., 2D display, stereo speakers), the sense of immersion can be greatly enhanced through innovative human-centric interaction with the presented media (e.g., gaze-corrected views, seamless view-switching and/or spatial audio corresponding to the viewer’s tracked head position, haptic vibrations in response to loud audio events). This enhanced media interaction must be designed for the delivery of delay-sensitive data over delay- and loss-prone networks, towards the ultimate goal of advancing immersive communication beyond 2D video communication. In particular, the main technical challenges are: i) efficient acquisition and processing of the observer’s sensory data, ii) compact/robust representation of media data for network transport given the receiver’s current patterns of media consumption and display limitations, and iii) real-time human-centric media interaction for an enriched immersive experience. Further, evaluation methodologies and metrics for immersive communication systems must remain subjectively accurate across the range of systems deploying multiple sensory input and output devices.
We invite authors to address aspects of immersive communication related to interactive media processing, such as the following. Please note that submission of pure video compression papers with no direct connection to media interaction is not encouraged.
- Interactive Visual Communication (e.g., efficient view switching systems enabling motion parallax, 3D visual representation with desirable properties like flexible decoding, robustness, etc.).
- Acquisition & Reconstruction of Media Data for Immersivity (e.g., depth data acquisition and pre-processing, microphone array design for spatial audio, hybrid approaches to depth estimation, gaze/head tracking and prediction).
- Multi-modal Media Interaction (e.g., multi-modal responses to detected communication events, visual interaction (defocusing, saliency-based content adaptation) based on gaze patterns and/or pupil size).
- Streaming/Transport of Immersive Media Data (e.g., media-specific, delay-sensitive FEC, multiple descriptions for multi-path transmission, multi-modal loss concealment strategies).
- Applications of Immersive Communication (e.g., systems for tele-medicine/education, immersive video conferencing).
- Quality Issues in Immersive Communication (e.g., evaluation methodologies and metrics for immersive systems).
Prospective authors should visit http://www.signalprocessingsociety.org/publications/periodicals/jstsp/ for information on paper submission. Manuscripts should be submitted at http://mc.manuscriptcentral.com/jstsp-ieee.
Manuscript Submission: April 2, 2014
First Review Due: July 1, 2014
Revised Manuscript: September 1, 2014
Second Review Due: October 1, 2014
Final Manuscript: December 1, 2014
Guest Editors:
Gene Cheung, National Institute of Informatics, Japan (cheung@nii.ac.jp)
Dinei Florencio, Microsoft Research, USA (dinei@microsoft.com)
Patrick Le Callet, University of Nantes, France (patrick.lecallet@univ-nantes.fr)
Chia-Wen Lin, National Tsing Hua University, Taiwan (cwlin@ee.nthu.edu.tw)
Enrico Magli, Politecnico di Torino, Italy (enrico.magli@polito.it)