Finally, the talk on Holoportation is publicly available on YouTube. It is a very useful talk that gives a big-picture view of the state of the art in real-time 3D reconstruction. What remains for the future? Infrastructure, field of view (FoV), headset removal, and compression. Interestingly, someone at the UIST 2016 conference raised the problem of mobility, and this is the mobile Holoportation which…
Foveated Rendering via Quadtree
Today, I wrote a shader for foveated rendering using Prof. Neyret’s QuadTree: https://www.shadertoy.com/view/Ml3SDf The basic idea is: calculate the depth of the quadtree from the distance between the current coordinate and the foveal region, then use that depth as the mipmap level to sample from the texture. Code below:
// forked and remixed from Prof. Neyret's https://www.shadertoy.com/view/ltBSDV
// Foveated Rendering via Quadtree: https://www.shadertoy.com/view/Ml3SDf#
void mainImage( out vec4 o, vec2 U ) {
    float r = 0.1, t = iGlobalTime, H = iResolution.y;
    vec2 V = U.xy / iResolution.xy;
    U /= H;
    // foveated region : disc(P, r)
    vec2 P = .5 + .5 * vec2(cos(t), sin(t * 0.7)), fU;
    U *= .5; P *= .5;    // unzoom so that the whole domain falls within [0,1]^n

    float mipmapLevel = 4.0;
    for (int i = 0; i < 7; ++i) {    // to the infinity, and beyond ! :-)
        //fU = min(U,1.-U); if (min(fU.x,fU.y) < 3.*r/H) { o--; break; }    // cell border
        if (length(P - vec2(0.5)) - r > 0.7) break;    // cell is out of the shape
        // --- iterate to child cell
        fU = step(.5, U);     // select child
        U = 2.0 * U - fU;     // go to new local frame
        P = 2.0 * P - fU;
        r *= 2.0;
        mipmapLevel -= 0.5;
    }

    o = texture2D(iChannel0, V, mipmapLevel);
}
Weta Workshop Made the Magic Leap Demo
A year and a half ago, I wrote a blog post, Comparison amongst MagicLeap vs. HoloLens vs. Oculus Rift, in which I said about Magic Leap: Finally, the Magic Leap sounds more promising to me. However, I don’t think the technical product will come until one or two years later, let alone the consumer product. All this information…
Bilateral Filter to Look Younger on GPU
A bilateral filter can smooth texture while preserving significant edges and contrast. Below is a live demo on ShaderToy; press the mouse for comparison. Thanks to mrharicot’s awesome bilateral filter: https://www.shadertoy.com/view/4dfGDH, with the performance improvement proposed by athlete and gamma correction by iq: https://www.shadertoy.com/view/XtsSzH. Skin detection is forked from carlolsson’s Skin Detection: https://www.shadertoy.com/view/MlfSzn#
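For reference, the bilateral filter replaces each pixel with a weighted average of its neighborhood, where each weight is the product of a spatial Gaussian and a range (intensity) Gaussian, so pixels across a strong edge contribute little:

$$ BF[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert)\, I_q, \qquad W_p = \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert) $$

In the shader below, SIGMA is the spatial sigma, BSIGMA the range sigma, kernel[] the precomputed 1-D spatial Gaussian, and Z accumulates the normalization weight W_p.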
// Bilateral Filter for Younger. starea.
// URL: https://www.shadertoy.com/view/XtVGWG
// Press mouse for comparison.
// Filter forked from mrharicot: https://www.shadertoy.com/view/4dfGDH
// Skin detection forked from carlolsson's Skin Detection https://www.shadertoy.com/view/MlfSzn#
// With performance improvement by athlete

#define SIGMA 10.0
#define BSIGMA 0.1
#define MSIZE 15
#define USE_CONSTANT_KERNEL
#define SKIN_DETECTION

const bool GAMMA_CORRECTION = true;
float kernel[MSIZE];

float normpdf(in float x, in float sigma) {
    return 0.39894 * exp(-0.5 * x * x / (sigma * sigma)) / sigma;
}

float normpdf3(in vec3 v, in float sigma) {
    return 0.39894 * exp(-0.5 * dot(v, v) / (sigma * sigma)) / sigma;
}

float normalizeColorChannel(in float value, in float min, in float max) {
    return (value - min) / (max - min);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord ) {
    vec3 c = texture2D(iChannel0, (fragCoord.xy / iResolution.xy)).rgb;

    const int kSize = (MSIZE - 1) / 2;
    vec3 final_colour = vec3(0.0);
    float Z = 0.0;

#ifdef USE_CONSTANT_KERNEL
    // unfortunately, WebGL 1.0 does not support constant arrays...
    kernel[0] = kernel[14] = 0.031225216;
    kernel[1] = kernel[13] = 0.033322271;
    kernel[2] = kernel[12] = 0.035206333;
    kernel[3] = kernel[11] = 0.036826804;
    kernel[4] = kernel[10] = 0.038138565;
    kernel[5] = kernel[9]  = 0.039104044;
    kernel[6] = kernel[8]  = 0.039695028;
    kernel[7] = 0.039894000;
    float bZ = 0.2506642602897679;
#else
    // create the 1-D kernel
    for (int j = 0; j <= kSize; ++j) {
        kernel[kSize + j] = kernel[kSize - j] = normpdf(float(j), SIGMA);
    }
    float bZ = 1.0 / normpdf(0.0, BSIGMA);
#endif

    vec3 cc;
    float factor;
    // read out the texels
    for (int i = -kSize; i <= kSize; ++i) {
        for (int j = -kSize; j <= kSize; ++j) {
            cc = texture2D(iChannel0, (fragCoord.xy + vec2(float(i), float(j))) / iResolution.xy).rgb;
            factor = normpdf3(cc - c, BSIGMA) * bZ * kernel[kSize + j] * kernel[kSize + i];
            Z += factor;
            if (GAMMA_CORRECTION) {
                final_colour += factor * pow(cc, vec3(2.2));
            } else {
                final_colour += factor * cc;
            }
        }
    }

    if (GAMMA_CORRECTION) {
        fragColor = vec4(pow(final_colour / Z, vec3(1.0 / 2.2)), 1.0);
    } else {
        fragColor = vec4(final_colour / Z, 1.0);
    }

    bool isSkin = true;
#ifdef SKIN_DETECTION
    isSkin = false;
    vec4 rgb = fragColor * 255.0;
    vec4 ycbcr = rgb;
    ycbcr.x = 16.0  + rgb.x * 0.257 + rgb.y * 0.504 + rgb.z * 0.098;
    ycbcr.y = 128.0 - rgb.x * 0.148 - rgb.y * 0.291 + rgb.z * 0.439;
    ycbcr.z = 128.0 + rgb.x * 0.439 - rgb.y * 0.368 - rgb.z * 0.071;
    if (ycbcr.y > 100.0 && ycbcr.y < 118.0 && ycbcr.z > 121.0 && ycbcr.z < 161.0) {
        isSkin = true;
    }
#endif

    if (iMouse.z > 0.0 || !isSkin) {
        fragColor = vec4(texture2D(iChannel0, fragCoord.xy / iResolution.xy).xyz, 1.0);
    }
}
…
Interactive Poisson Blending on GPU
Recently, I implemented two fragment shaders for interactive Poisson blending (seamless blending). Here are my real-time demo and code on ShaderToy: press 1 for the normal mode, press 2 for mixed gradients, and press space to clear the canvas. Technical brief: I followed a simplified routine of the Poisson Image Editing paper [P. Pérez, M. Gangnet,…
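Since the excerpt is truncated, here is a minimal sketch of one Jacobi iteration of the discrete Poisson equation as a fragment shader, not the post's exact code. It assumes a ping-pong setup in which iChannel0 holds the previous iteration of the blended result, iChannel1 the source (foreground) image with its blending mask in the alpha channel, and iChannel2 the target (background) image:

// Hypothetical sketch of one Jacobi iteration for Poisson blending.
// Assumed bindings: iChannel0 = previous iteration (ping-pong buffer),
// iChannel1 = source image with mask in alpha, iChannel2 = target image.
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 px = 1.0 / iResolution.xy;
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec4 src = texture2D(iChannel1, uv);
    if (src.a < 0.5) {                        // outside the mask:
        fragColor = texture2D(iChannel2, uv); // keep the target pixel (boundary condition)
        return;
    }
    vec2 e = vec2(px.x, 0.0), n = vec2(0.0, px.y);
    // sum of the 4 neighbor values from the previous Jacobi iteration
    vec3 neighbors = texture2D(iChannel0, uv + e).rgb
                   + texture2D(iChannel0, uv - e).rgb
                   + texture2D(iChannel0, uv + n).rgb
                   + texture2D(iChannel0, uv - n).rgb;
    // guidance term: discrete Laplacian of the source image (sum of source gradients)
    vec3 guide = 4.0 * src.rgb
               - texture2D(iChannel1, uv + e).rgb
               - texture2D(iChannel1, uv - e).rgb
               - texture2D(iChannel1, uv + n).rgb
               - texture2D(iChannel1, uv - n).rgb;
    // Jacobi update of the discrete Poisson equation: f_p = (sum f_q + sum v_pq) / 4
    fragColor = vec4((neighbors + guide) * 0.25, 1.0);
}

Running this pass repeatedly, reading the previous frame and writing the next, converges toward the solution of the Poisson equation inside the mask, with the target image acting as the boundary condition; the "mixed gradients" mode of the paper would instead keep whichever of the source or target gradient has the larger magnitude.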
C++ Code Backup for Render-To-Texture (Multi-pass Rendering)
I have found that much of the code for multi-pass rendering out there is complex and hard to follow, so I would like to write a post for future reference. The key to multi-pass rendering is that, instead of rendering to the screen (framebuffer 0), you redirect the rendering target to an off-screen framebuffer (e.g., framebuffer 1), bind the output of the…
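The C++ snippet itself is cut off in this excerpt. As a small illustration of the shader side only (with hypothetical uniform names previousPass and resolution, not the post's code), the second pass simply samples the texture that the first pass rendered into the framebuffer's color attachment:

// Hypothetical second-pass fragment shader. It assumes the (elided) C++ host code
// has bound the first pass's color attachment to a texture unit and exposed it
// as the sampler "previousPass".
precision mediump float;

uniform sampler2D previousPass;   // output of pass 1, used as input here
uniform vec2 resolution;          // viewport size in pixels

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;
    vec3 color = texture2D(previousPass, uv).rgb;
    // any post-processing of the first pass goes here; a simple invert as a placeholder
    gl_FragColor = vec4(1.0 - color, 1.0);
}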
[Summary] Indoor Scene Understanding and Dynamic Scene Modeling
Dr. Yasutaka Furukawa from Washington University in St. Louis presented an excellent talk on Indoor Scene Understanding and Dynamic Scene Modeling today. ABSTRACT: I will present our four recent projects on indoor scene understanding or dynamic scene modeling: 1) 3D sensing technologies have brought revolutionary improvements to indoor mapping. For example, Matterport is an emerging company that lets everybody…
Libraries, Textbooks, Courses, Tutorials for WebGL
WebGL (Web Graphics Library) is a JavaScript API for rendering interactive 3D computer graphics and 2D graphics within any compatible web browser without the use of plug-ins. WebGL is a great tool for quick demos and video rendering. In the past year, I have created Social Street View, Video Fields, and VRSurus using WebGL. Additionally,…