Jacob Chakareski is an Assistant Professor of Electrical and Computer Engineering at The University of Alabama, where he leads the Laboratory for VR/AR Immersive Communication (LION). His interests span networked virtual and augmented reality systems, UAV-IoT sensing and communication, and rigorous machine learning for stochastic control. Dr. Chakareski received the Adobe Digital Experience Faculty Research Award (2017), a best paper award at the IEEE Int’l Conf. on Communications (ICC) 2017, and the Swiss NSF Career Award Ambizione. He is the organizer of the first NSF visioning workshop on networked VR/AR communications. He trained as a PhD student at Rice and Stanford, held research appointments with Microsoft, HP Labs, and EPFL, and sits on the advisory board of Frame, Inc. His research is supported by the NSF, AFOSR, Adobe, NVIDIA, and Microsoft. For further info, please visit www.jakov.org.

The Star Trek teleport offers a compelling vision for virtual human teleportation. We are motivated by applying super-human vision to break barriers in remote sensing, monitoring, localization, navigation, and scene understanding. VR and AR applications are widely acknowledged as a foundational use case for 5G technology. However, the transition from 2D passive sensing to 3D immersive interaction requires enormous bandwidth. In the era of the Internet of Things (IoT), these applications are hyper data intensive: huge volumes of data must be delivered, especially for 360-degree video streaming.

// We envision real-time IoT sensing and UAV-enabled dynamic sensor placement, combining projective geometry, distortion-rate theory, and online sensor scheduling.

Lagrangian problem formulation:

V^{*,\lambda}(s) = \min_{y} \left[ c(h, y) + \lambda d(x, y) + \gamma \sum_{s'} p(s' \mid s, y) V^{*,\lambda}(s') \right], \quad \forall s
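A minimal tabular sketch of solving this Lagrangian Bellman equation by value iteration. The state space s = (x, h) (queue backlog and channel state), the action count, and the random costs c(h, y), d(x, y) and transition probabilities are all illustrative assumptions, not the system studied here:

```python
import numpy as np

# Illustrative small MDP with states s = (x, h): queue backlog x and channel state h.
# Actions y select, e.g., how many packets to transmit. All sizes/costs are assumptions.
num_x, num_h, num_y = 4, 2, 3
gamma, lam = 0.9, 0.5            # discount factor and Lagrange multiplier

rng = np.random.default_rng(0)
c = rng.random((num_h, num_y))   # c(h, y): e.g., transmission cost
d = rng.random((num_x, num_y))   # d(x, y): e.g., delay/distortion being constrained
num_s = num_x * num_h            # flatten s = (x, h) as s = x * num_h + h
P = rng.random((num_s, num_y, num_s))
P /= P.sum(axis=2, keepdims=True)   # normalize to p(s' | s, y)

def lagrangian_value_iteration(tol=1e-8):
    """Iterate V(s) = min_y [ c(h,y) + lam*d(x,y) + gamma * sum_s' p(s'|s,y) V(s') ]."""
    V = np.zeros(num_s)
    while True:
        # Stage cost for every (s, y): broadcast c over x and d over h, then flatten s.
        stage = (c[None, :, :] + lam * d[:, None, :]).reshape(num_s, num_y)
        Q = stage + gamma * (P @ V)     # shape (num_s, num_y)
        V_new = Q.min(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new

V_star = lagrangian_value_iteration()
```

Since gamma < 1, the iteration is a contraction and converges geometrically; sweeping lambda then trades off the cost c against the constrained quantity d.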

Post-Decision State (PDS) Learning:

The post-decision state captures the system state after the action is taken, but before the unknown stochastic dynamics are realized.
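As a concrete illustration with the queue model used in this setting (backlog x, transmitted packets y, arrivals l), the transition splits into a known part (the effect of the action) and an unknown part (the arrivals); the buffer size below is an illustrative assumption:

```python
def post_decision_state(x, y):
    """Known part of the transition: transmitting y packets from a backlog of x."""
    return max(x - y, 0)

def next_state(x_pds, l, buffer_size=10):
    """Unknown part, realized after the action: l new packets arrive."""
    return min(x_pds + l, buffer_size)

# One transition, split into its deterministic and stochastic halves:
x, y, l = 5, 2, 4
x_pds = post_decision_state(x, y)   # state after the action, before arrivals
x_next = next_state(x_pds, l)       # state after the arrivals are observed
```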

PDS value function

V^{*,\lambda}(x) = \min_{y \in A} \left\{ \ldots \right\}

PDS learning updates one state at a time => this limits the convergence rate.

The packet arrivals l_t and channel states h_t are independent of the PDS queue backlog x_t.

=> A single observation of (l_t, h_t) at the visited PDS s_t = (x_t, h_t) can therefore be used to update all post-decision states s = (x, h_t).

There is no need to visit a state s in order to update it.

Q-learning updates one state-action pair (x, y) per observation.

PDS learning instead updates the single post-decision state reached after the action (the post-action backlog x − y).

Virtual experience learning updates many post-decision states from one observation (illustrated on a 3×3 grid of states).
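A minimal sketch of the virtual experience idea, assuming a tabular PDS value function indexed by (x, h); the stage costs and the greedy minimization over actions are stubbed out, and all dimensions are illustrative:

```python
import numpy as np

# Illustrative tabular setup: 10 backlog levels, 3 channel states.
num_x, num_h = 10, 3
rng = np.random.default_rng(1)
V_pds = rng.random((num_x, num_h))   # PDS value function table (arbitrary init)
alpha = 0.1                          # learning rate

def greedy_cost_to_go(x_next, h_next):
    # Placeholder for min_y [ c(h,y) + lam*d(x,y) + gamma * V_pds(...) ];
    # here we simply read the table to keep the sketch short.
    return V_pds[x_next, h_next]

def pds_update(x_pds, h, l, h_next):
    """Classic PDS learning: update only the visited post-decision state (x_pds, h)."""
    x_next = min(x_pds + l, num_x - 1)          # arrivals l realize the next state
    target = greedy_cost_to_go(x_next, h_next)
    V_pds[x_pds, h] += alpha * (target - V_pds[x_pds, h])

def virtual_experience_update(h, l, h_next):
    """Virtual experience: the observed (l, h -> h_next) is valid at EVERY backlog x,
    so one real sample updates all post-decision states (x, h)."""
    for x in range(num_x):
        pds_update(x, h, l, h_next)

before = V_pds.copy()
virtual_experience_update(h=1, l=2, h_next=0)
```

One observed arrival/channel sample thus touches an entire column of the table rather than a single cell, which is where the convergence-rate speedup over plain PDS learning comes from.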


In "Viewport-Adaptive Navigable 360-Degree Video Delivery," the authors investigate the impact of various sphere-to-plane projections and quality arrangements on the video quality displayed to the user, showing that the cube-map layout offers the best quality for a given bit-rate budget. An evaluation with a dataset of users navigating 360-degree videos demonstrates that segments need to be short enough to enable frequent view switches.


Virtual and augmented reality (VR/AR) have the potential to advance our society. Presently limited to offline operation, synthetic content, and gaming and entertainment applications, they are expected to reach their full potential when deployed online and with real remote scene content. This will require novel holistic solutions that push the frontiers of sensing, compression, networking, and machine learning to overcome the considerable challenges ahead. My long-term research objective is UAV-IoT-deployed ubiquitous VR/AR immersive communication that can enable virtual human teleportation to any corner of the world. This would deliver a broad range of technological and societal advances, enhancing energy conservation, quality of life, and the global economy, as illustrated in Figure 1 below.

I am investigating fundamental problems at the intersection of signal acquisition and representation, communications and networking, (embedded) sensors and systems, and rigorous machine learning for stochastic control that arise in this context. I envision a future where UAV-IoT-deployed immersive communication systems will help break existing barriers in remote sensing, monitoring, localization and navigation, and scene understanding. The presentation will outline some of my present and envisioned investigations. Interdisciplinary applications will be highlighted.