Today, Prof. Matthias Zwicker gave a talk at UMIACS entitled “Computer Graphics for Next Generation Displays, Input Devices, and Data”. The talk was given on March 8th, 2016; a second, follow-up talk was given on April 19th, 2016.

Talk Abstract

Research in computer graphics has been strongly driven by advances in display technology, the development of novel input devices, and the availability of large and diverse collections of visual data. For example, advanced displays require novel signal processing and ever more efficient rendering techniques. Novel input devices such as RGB-D cameras enable faster and easier acquisition and modeling of real world objects. Recent internet scale datasets allow users to edit visual data more intuitively. In this talk, I will provide an overview of how my research addresses the challenges and opportunities that emerge from these advances. I will also outline my vision for future graphics research and applications that will be enabled by next generation displays, devices, and data.

About the speaker

Matthias Zwicker has been a professor at the University of Bern and the head of the Computer Graphics Group at the Institute of Computer Science since September 2008. He obtained a PhD from ETH Zurich, Switzerland, in 2003. From 2003 to 2006 he was a post-doctoral associate with the computer graphics group at the Massachusetts Institute of Technology, and then held a position as an Assistant Professor at the University of California, San Diego from 2006 to 2008. His research focus is on efficient high-quality rendering, data-driven modeling and animation, signal processing techniques for novel displays and rendering, and point-based methods. He has served as a papers co-chair and conference chair of the IEEE/Eurographics Symposium on Point-Based Graphics, and as a papers co-chair for Eurographics 2010. He has been a member of program committees for various conferences including ACM SIGGRAPH and Eurographics, and he has served as an associate editor for journals such as Computer Graphics Forum and The Visual Computer.



Huge changes in displays and graphics processing units have driven rapid advances in computer graphics research.

The goal of data-driven modeling is to use all sorts of data (3D geometry, animations, surface and appearance data) to build creative editing tools. Doing this requires the right mathematical representations. Dr. Zwicker has published work in TOG and CGF on 3D shapes, textures, stereo images, and motion data.

Given example poses of an articulated object, the goal is to intuitively model new poses that look plausible. In addition, users should be able to easily control different parts of the model.

To interpolate each triangle separately, we need an alternative representation for rotations.

Linear interpolation of rotation matrices is not suitable and produces badly distorted results. Instead, Dr. Zwicker used the exponential map: the logarithm of a rotation matrix is a skew-symmetric matrix, and the skew-symmetric matrices form the tangent space of the space of rotations.
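A minimal numpy sketch of this tangent-space interpolation (the helper names are mine, not the talk's): map the relative rotation to its skew-symmetric logarithm, scale it, and map back with Rodrigues' formula.

```python
import numpy as np

def log_rotation(R):
    """Matrix logarithm of a 3x3 rotation: a skew-symmetric matrix.

    Assumes the rotation angle is strictly less than pi."""
    # Recover the angle from the trace; clip guards against round-off.
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros((3, 3))
    return theta / (2.0 * np.sin(theta)) * (R - R.T)

def exp_rotation(S):
    """Matrix exponential of a skew-symmetric matrix (Rodrigues' formula)."""
    w = np.array([S[2, 1], S[0, 2], S[1, 0]])   # axis scaled by angle
    theta = np.linalg.norm(w)
    if np.isclose(theta, 0.0):
        return np.eye(3)
    K = S / theta
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def interpolate_rotation(R0, R1, t):
    """Blend from R0 to R1 by moving along the geodesic in the tangent space."""
    return R0 @ exp_rotation(t * log_rotation(R0.T @ R1))
```

For example, interpolating halfway between the identity and a 90° rotation about the z-axis yields the 45° rotation, which linear blending of the matrix entries would not.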

Using scanned data, Dr. Zwicker’s goal was to reconstruct 3D models of articulated objects. The input is a set of depth images showing the object in different poses; the output is a reconstructed model including the joints. [Chang, Zwicker et al., ACM Trans. Graph.]

For every input depth image, we need to determine a labeling into constituent parts as well as the motion/alignment of each part into the reference pose (per label). They use iterative optimization to jointly optimize the motions and part labels. A 75× fast-forward video of this technique was demonstrated.
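The alternating structure of that optimization can be sketched in a 2D toy (this is my illustration, not the paper's actual algorithm; the paper initializes and regularizes far more carefully than the random labels used here):

```python
import numpy as np

def rigid_fit(P, Q):
    """Best rigid transform (R, t) mapping 2D points P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cq - R @ cp

def segment_and_align(ref, obs, n_parts, iters=10, seed=0):
    """Alternate between (1) fitting a rigid motion per part and
    (2) relabeling each point by whichever part's motion best explains
    its observed position."""
    labels = np.random.default_rng(seed).integers(0, n_parts, len(ref))
    motions = [(np.eye(2), np.zeros(2))] * n_parts
    for _ in range(iters):
        # Step 1: re-estimate each part's motion from its current points.
        motions = [rigid_fit(ref[labels == k], obs[labels == k])
                   if (labels == k).sum() >= 3 else motions[k]
                   for k in range(n_parts)]
        # Step 2: residual of every point under every part's motion.
        residuals = np.stack([np.linalg.norm(obs - (ref @ R.T + t), axis=1)
                              for R, t in motions], axis=1)
        labels = residuals.argmin(axis=1)
    return labels, motions
```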


Finally, Dr. Zwicker introduced automultiscopic displays, which use view-dependent pixels [Ives 1931] to provide glasses-free stereo 3D. The trick can be achieved by simply putting a slit mask in front of the panel. It provides a kind of “window into a virtual world” with parallax and without head tracking. Each pixel emits a set of rays, presenting a light field to the viewer. Dr. Zwicker addressed the aliasing problem in automultiscopic displays.
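As a toy illustration of view-dependent pixels behind a slit mask (the column layout here is illustrative, not an actual display's addressing scheme), N views can be interleaved column by column so the barrier reveals a different view from each direction:

```python
import numpy as np

def interleave_views(views):
    """Interleave N single-view images column-by-column for a parallax
    barrier: panel column x shows view (x mod N), so each slit reveals
    a different view to each eye position."""
    views = np.asarray(views)          # shape (N, H, W)
    n, h, w = views.shape
    panel = np.empty((h, w * n), dtype=views.dtype)
    for k in range(n):
        panel[:, k::n] = views[k]      # columns k, k+n, k+2n, ... show view k
    return panel
```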


Future Applications

  • Determine parameters for camera arrays to achieve optimal image quality for a given display
    • Number of cameras
    • Spacing
    • Resolution
  • Multiview compression [Zwicker, Sig. Proc. 07]
  • Spatio-angular pre-filtering [Zwicker, TVCG 11]
  • Analyze multilayer displays [Wetzstein, TOG 11, 12]
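As a rough sketch of what spatio-angular pre-filtering does, the light field can be low-passed along the view (angular) axis before it is resampled for the display; a plain Gaussian here stands in for the display-specific filter derived in the papers above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def angular_prefilter(light_field, sigma=1.0):
    """Low-pass the light field across views at every pixel before display
    resampling. A Gaussian is only a stand-in for the display's actual
    spatio-angular bandwidth."""
    # light_field: array of shape (n_views, H, W)
    return gaussian_filter1d(light_field.astype(float), sigma=sigma, axis=0)
```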

Challenges of Automultiscopic Displays

  • Require more pixels than 2D displays
  • Addressing the Aliasing Problem
    • Aliasing in the directional domain appears as temporal aliasing [Zwicker et al, EGSR 2006; IEEE Sig. Proc. 2007, IEEE TVCG 2011]
  • Ultimately: want efficient, realistic rendering for interactive applications
    • Path tracing revolution in movie industry
    • Gradient-domain path tracing
    • Randomly sample differences between pairs of correlated, similar light paths.
    • Sample the differences dx and dy together with the throughput image, then solve a screened Poisson problem: find the image most consistent with dx, dy, and the throughput. This yields the reconstruction of the ray-traced image.
  • Realistic rendering for interactive applications
    • Rendering under hard time constraints
  • Perceptual issues for novel displays
    • Alleviating the vergence-accommodation conflict
    • Foveated Rendering
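The gradient-domain reconstruction mentioned in the bullets above can be sketched as a tiny least-squares problem (a 1D toy of my own, not the paper's solver, which handles the 2D problem with both dx and dy):

```python
import numpy as np

def screened_poisson_1d(dx, primal, alpha=0.2):
    """Solve min_I ||D I - dx||^2 + alpha^2 ||I - primal||^2, where D is
    the forward-difference operator: find the image most consistent with
    the sampled gradients dx and the throughput (primal) image."""
    n = len(primal)
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    # Stack the gradient and screening constraints into one least-squares system.
    A = np.vstack([D, alpha * np.eye(n)])
    b = np.concatenate([dx, alpha * primal])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With clean gradients and a noisy primal image, the solution stays close to the true signal because the gradient term dominates while the screening term only anchors the overall offset.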

Screened Poisson Problem

  • Find image that is most consistent with dx, dy, and throughput
  • Image space denoising – remove noise after rendering
  • Adaptive sampling + adaptive reconstruction (image denoising filter + error estimation + additional samples)
  • Joint NL-means filtering [ACM TOG 2011, ACM TOG 2012, 2013]
  • The NL-means filter assigns a high weight to a neighboring pixel if the patches around the two pixels are similar
  • Leverage per-pixel features such as normals, textures, and positions, which are available from the renderer at no extra cost
  • The joint filter introduces weights based on feature differences; the final filter weight is the minimum of the feature and color weights
  • Multi-scale NL-means using SURE (Stein's unbiased risk estimate)
  • SURE is used to select among different filter scales; space-time filtering extends the spatial filtering to animation sequences
  • Industry impact: the denoising technology has been adopted at Pixar
  • Generative models based on deep neural networks, e.g. generative adversarial networks
  • Future directions: reconstruction during user interaction, manual manipulation of scenes, recovering object parts, relations, and functionalities; AR applications; robotics
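The joint NL-means weighting described in the bullets above might be sketched like this (parameter names and values are illustrative, not the papers' settings):

```python
import numpy as np

def nlmeans_weight(patch_p, patch_q, sigma=0.5):
    """NL-means weight: high when the patches around two pixels are similar."""
    d2 = np.mean((patch_p - patch_q) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def joint_weight(color_p, color_q, feats_p, feats_q, sigma_c=0.5, sigma_f=0.2):
    """Joint filter weight: the minimum of the color-patch weight and the
    per-feature weights (normals, textures, positions, ...), so a pixel
    only contributes if it is similar in color AND in every feature."""
    w = nlmeans_weight(color_p, color_q, sigma_c)
    for fp, fq in zip(feats_p, feats_q):
        w = min(w, nlmeans_weight(fp, fq, sigma_f))
    return w
```

Taking the minimum means a large difference in any single feature (e.g. a normal discontinuity at an edge) suppresses the contribution even when the noisy colors happen to match.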