The post Yet another shader for image fade-In effect appeared first on VR+AR+CG+CV+HCI.

This shader can be used for rendering pop-up images in a 3D environment.

```
// Image Fade-In Effect, CC0
// Forked from iq's invisible shader with transparent background: https://www.shadertoy.com/view/XljSRK
float backgroundPattern( in vec2 p )
{
    vec2 uv = p + 0.1 * texture2D( iChannel2, 0.05 * p ).xy;
    return texture2D( iChannel1, 16.0 * uv ).x;
}

vec3 getBackground( in vec2 coord )
{
    float fa = backgroundPattern( (coord + 0.0) / iChannelResolution[0].xy );
    float fb = backgroundPattern( (coord - 0.5) / iChannelResolution[0].xy );
    return vec3( 0.822 + 0.4 * (fa - fb) );
}

float getFadeInWeight( vec2 uv )
{
    float edge = 0.3 * abs(sin(0.5)); // taking FabriceNeyret2's advice
    vec4 v = smoothstep( 0., edge, vec4(uv, 1. - uv) );
    return v.x * v.y * v.z * v.w;
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec3 bg = getBackground(fragCoord);
    vec3 col = texture2D(iChannel0, uv).rgb;
    float alpha = getFadeInWeight(uv);
    fragColor = vec4(mix(bg, col, alpha), 1.0);
}
```
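As a sanity check on the fade weight, here is a small Python sketch of the same idea (my own port, with a hypothetical `smoothstep` helper mirroring GLSL's built-in): the product of the four smoothstep terms fades the image in near every border of the quad.

```python
def smoothstep(e0, e1, x):
    """GLSL-style smoothstep: clamped cubic Hermite interpolation."""
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def fade_in_weight(u, v, edge=0.3):
    """Product of smoothsteps against all four borders of the unit square."""
    return (smoothstep(0.0, edge, u) * smoothstep(0.0, edge, v) *
            smoothstep(0.0, edge, 1.0 - u) * smoothstep(0.0, edge, 1.0 - v))
```

The weight is 1 well inside the image and falls to 0 at each edge, which is what produces the soft pop-up border.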


The post Real Artifacts appeared first on VR+AR+CG+CV+HCI.

http://graphics.cs.williams.edu/realartifacts/

Finally caught the SIGGRAPH deadline! Hooray!


The post [Summary] Dr. Izadi’s Holoportation Talk on UIST 2016 appeared first on VR+AR+CG+CV+HCI.

Finally the talk on Holoportation is publicly available on YouTube.

This is a very useful talk that gives a big picture of the state of the art in real-time 3D reconstruction.

For the future?

- Infrastructure
- FoV
- Headset removal
- Compression

Interestingly, someone at the UIST 2016 conference raised the problem of mobility.

And this is the mobile Holoportation project I was involved in this summer:


The post Foveated Rendering via Quadtree appeared first on VR+AR+CG+CV+HCI.

The basic idea is:

- Calculate the depth in the quadtree from the distance between the current coordinate and the foveal region
- Use the depth as the mipmap level to sample from the texture

Code below:

```
// forked and remixed from Prof. Neyret's https://www.shadertoy.com/view/ltBSDV
// Foveated Rendering via Quadtree: https://www.shadertoy.com/view/Ml3SDf#
void mainImage( out vec4 o, vec2 U )
{
    float r = 0.1, t = iGlobalTime, H = iResolution.y;
    vec2 V = U.xy / iResolution.xy;
    U /= H;

    // foveated region : disc(P, r)
    vec2 P = .5 + .5 * vec2(cos(t), sin(t * 0.7)), fU;
    U *= .5; P *= .5; // unzoom so that the whole domain falls within [0,1]^n

    float mipmapLevel = 4.0;
    for (int i = 0; i < 7; ++i) { // to the infinity, and beyond ! :-)
        fU = min(U, 1. - U); // distance to the cell border (must be set before the test below)
        if (min(fU.x, fU.y) < 3. * r / H) { o--; break; } // cell border
        if (length(P - vec2(0.5)) - r > 0.7) break; // cell is out of the shape
        // --- iterate to child cell
        fU = step(.5, U); // select child
        U = 2.0 * U - fU; // go to new local frame
        P = 2.0 * P - fU;
        r *= 2.0;
        mipmapLevel -= 0.5;
    }
    o = texture2D(iChannel0, V, mipmapLevel);
}
```


The post Weta Workshop Made the Magic Leap Demo appeared first on VR+AR+CG+CV+HCI.

Finally, Magic Leap sounds more promising to me. However, I don’t think a technical product will arrive for another one to two years, let alone a consumer product. All this information is intriguing, but it’s really hard to tell until I get a real prototype.

Still, no technical prototypes, just as I predicted.

As for the intriguing demo above: according to The Information, there was no such game at the time, and the mockup video was created entirely with special effects.

That said, the video prominently features a WETA Workshop logo and, unlike more recent, fuzzier videos, does not claim to be recorded using Magic Leap’s actual technology.

According to The Information, Magic Leap has not been able to get a fiber-optic display to work, and it has been downgraded to a long-term research project.

“You end up having to make a trade-off,” Abovitz said in an interview. However, the company’s latest prototype, called the PEQ (roughly, “product equivalent”), seems to be just a pair of standard eyeglasses, and Magic Leap refused to show it to The Information’s reporter.

Abovitz claims that **it is slightly less functional than earlier prototypes**, but denied that it uses technology similar to HoloLens.

Hopefully, this is real:

and this is real:

Anyway, so far Magic Leap has received $1.4 billion in venture capital, at a valuation of up to $4.5 billion.

To address the accommodation-vergence conflict, foveated rendering may be the most practical approach:

The rendering is divided into three parts:

- Foveal region: the central, sharp part of vision corresponding to the fovea (the retinal macular area). We have a large number of cone cells there, which are extremely sensitive to color;
- Blend region: the intermediate area, where the rendering transitions from sharp to blurry;
- Peripheral region: the outermost, fully blurred area. It is dominated by rod cells, which are sensitive only to contrast and motion, not to color or detail (just as when observing glaring objects at night).
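To make the three regions concrete, here is a minimal Python sketch (my own illustration; the radii and the maximum blur level are made-up parameters, not from any shipped renderer) that maps the distance from the gaze point to a blur level, with a linear ramp across the blend region:

```python
def blur_level(dist, r_fovea=0.15, r_blend=0.35, max_blur=4.0):
    """Blur (e.g., mipmap) level as a function of distance from the gaze point.

    dist <= r_fovea        -> 0 (sharp foveal region)
    r_fovea < dist < r_blend -> linear ramp (blend region)
    dist >= r_blend        -> max_blur (fully blurred peripheral region)
    """
    if dist <= r_fovea:
        return 0.0
    if dist >= r_blend:
        return max_blur
    return max_blur * (dist - r_fovea) / (r_blend - r_fovea)
```

In a shader, this level could feed the mipmap bias of the texture lookup, exactly as in the quadtree shader above.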

The process of adapting to the dark is called dark adaptation.


The post Bilateral Filter to Look Younger on GPU appeared first on VR+AR+CG+CV+HCI.

Below is a live demo on ShaderToy. Press the mouse for comparison.

Thanks to mrharicot’s awesome bilateral filter: https://www.shadertoy.com/view/4dfGDH

With performance improvement proposed by athlete.

With gamma correction by iq: https://www.shadertoy.com/view/XtsSzH

Skin detection forked from carlolsson’s Skin Detection https://www.shadertoy.com/view/MlfSzn#

```
// Bilateral Filter for Younger. starea.
// URL: https://www.shadertoy.com/view/XtVGWG
// Press mouse for comparison.
// Filter forked from mrharicot: https://www.shadertoy.com/view/4dfGDH
// Skin detection forked from carlolsson's Skin Detection https://www.shadertoy.com/view/MlfSzn#
// With performance improvement by athlete
#define SIGMA 10.0
#define BSIGMA 0.1
#define MSIZE 15
#define USE_CONSTANT_KERNEL
#define SKIN_DETECTION

const bool GAMMA_CORRECTION = true;
float kernel[MSIZE];

float normpdf(in float x, in float sigma)
{
    return 0.39894 * exp(-0.5 * x * x / (sigma * sigma)) / sigma;
}

float normpdf3(in vec3 v, in float sigma)
{
    return 0.39894 * exp(-0.5 * dot(v, v) / (sigma * sigma)) / sigma;
}

float normalizeColorChannel(in float value, in float min, in float max)
{
    return (value - min) / (max - min);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec3 c = texture2D(iChannel0, fragCoord.xy / iResolution.xy).rgb;
    const int kSize = (MSIZE - 1) / 2;
    vec3 final_colour = vec3(0.0);
    float Z = 0.0;

#ifdef USE_CONSTANT_KERNEL
    // unfortunately, WebGL 1.0 does not support constant arrays...
    kernel[0] = kernel[14] = 0.031225216;
    kernel[1] = kernel[13] = 0.033322271;
    kernel[2] = kernel[12] = 0.035206333;
    kernel[3] = kernel[11] = 0.036826804;
    kernel[4] = kernel[10] = 0.038138565;
    kernel[5] = kernel[9]  = 0.039104044;
    kernel[6] = kernel[8]  = 0.039695028;
    kernel[7] = 0.039894000;
    float bZ = 0.2506642602897679;
#else
    // create the 1-D kernel
    for (int j = 0; j <= kSize; ++j) {
        kernel[kSize + j] = kernel[kSize - j] = normpdf(float(j), SIGMA);
    }
    float bZ = 1.0 / normpdf(0.0, BSIGMA);
#endif

    vec3 cc;
    float factor;
    // read out the texels
    for (int i = -kSize; i <= kSize; ++i) {
        for (int j = -kSize; j <= kSize; ++j) {
            cc = texture2D(iChannel0, (fragCoord.xy + vec2(float(i), float(j))) / iResolution.xy).rgb;
            factor = normpdf3(cc - c, BSIGMA) * bZ * kernel[kSize + j] * kernel[kSize + i];
            Z += factor;
            if (GAMMA_CORRECTION) {
                final_colour += factor * pow(cc, vec3(2.2));
            } else {
                final_colour += factor * cc;
            }
        }
    }

    if (GAMMA_CORRECTION) {
        fragColor = vec4(pow(final_colour / Z, vec3(1.0 / 2.2)), 1.0);
    } else {
        fragColor = vec4(final_colour / Z, 1.0);
    }

    bool isSkin = true;
#ifdef SKIN_DETECTION
    isSkin = false;
    vec4 rgb = fragColor * 255.0;
    vec4 ycbcr = rgb;
    ycbcr.x =  16.0 + rgb.x * 0.257 + rgb.y * 0.504 + rgb.z * 0.098;
    ycbcr.y = 128.0 - rgb.x * 0.148 - rgb.y * 0.291 + rgb.z * 0.439;
    ycbcr.z = 128.0 + rgb.x * 0.439 - rgb.y * 0.368 - rgb.z * 0.071;
    if (ycbcr.y > 100.0 && ycbcr.y < 118.0 && ycbcr.z > 121.0 && ycbcr.z < 161.0) {
        isSkin = true;
    }
#endif

    if (iMouse.z > 0.0 || !isSkin) {
        fragColor = vec4(texture2D(iChannel0, fragCoord.xy / iResolution.xy).xyz, 1.0);
    }
}
```
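The hard-coded kernel weights and `bZ` above are not magic: they can be regenerated offline with the shader's own `normpdf`. A quick Python check (my own, mirroring the GLSL function) confirms that kernel[7] = normpdf(0, SIGMA), kernel[0] = normpdf(7, SIGMA), and bZ = 1 / normpdf(0, BSIGMA):

```python
import math

def normpdf(x, sigma):
    """Same Gaussian as the shader (0.39894 is approximately 1/sqrt(2*pi))."""
    return 0.39894 * math.exp(-0.5 * x * x / (sigma * sigma)) / sigma

# kernel[7] is the center tap (j = 0); kernel[0] and kernel[14] are j = +/-7
center = normpdf(0.0, 10.0)   # matches kernel[7] = 0.039894000
edge = normpdf(7.0, 10.0)     # matches kernel[0] = 0.031225216
bZ = 1.0 / normpdf(0.0, 0.1)  # matches bZ = 0.2506642602897679
```

This is handy if you change SIGMA or MSIZE and need to re-bake the constant kernel for WebGL 1.0.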


The post Interactive Poisson Blending on GPU appeared first on VR+AR+CG+CV+HCI.

Here is my real-time demo and code on ShaderToy:

Press 1 for normal mode, press 2 for mixed gradients, press space to clear the canvas.

I followed a simplified routine of the Poisson Image Editing paper [P. Pérez, M. Gangnet, A. Blake. Poisson image editing. ACM Transactions on Graphics (SIGGRAPH’03)]:

- Let’s name the bottom image “BASE”, the image to overlay “SRC”, and the final image “RESULT”; then the colors on the boundary should match:
- RESULT(u, v) = BASE(u, v), ∀(u, v) ∈ ∂SRC

- Inside the mask region, just add the gradient of “SRC” (or the larger of the gradients of “SRC” and “BASE”, if we want to mix the gradients) to the current result image:
- RESULT(u, v) = RESULT(u, v) + ∇SRC(u, v);
- alternatively,
- RESULT(u, v) = RESULT(u, v) + max{ ∇SRC(u, v), ∇BASE(u, v) };
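The two rules above can be sketched as a Jacobi-style iteration. Below is my own 1-D grayscale toy version in Python (an illustration, not the ShaderToy code): each masked pixel takes the average of its neighbors' current values plus the SRC gradient toward each neighbor, while unmasked pixels pin the boundary to BASE.

```python
def poisson_blend_1d(base, src, mask, iterations=200):
    """Jacobi iteration of 1-D Poisson blending.

    base, src: lists of floats; mask: list of bools (True = inside the paste
    region). Unmasked pixels keep the BASE color; masked pixels iterate toward
    the average of their neighbors' values plus the SRC gradient.
    """
    result = list(base)  # RESULT starts as BASE
    for _ in range(iterations):
        nxt = result[:]
        for p in range(1, len(base) - 1):
            if not mask[p]:
                continue
            col = 0.0
            for q in (p - 1, p + 1):
                # neighbor's current value: RESULT inside the mask, BASE outside
                col += result[q] if mask[q] else base[q]
                col += src[p] - src[q]  # add the SRC gradient toward q
            nxt[p] = col / 2.0  # 2 neighbors in 1-D (4 in the 2-D shader)
        result = nxt
    return result
```

With a constant SRC gradient and boundary values 0 and 8, the iteration converges to a linear ramp: the pasted gradient is kept while the colors are bent to match the boundary.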

We can use jump flood algorithm to speed this procedure up: http://www.comp.nus.edu.sg/~tants/jfa.html

I used one frame buffer to store the user’s drawing.

- iChannel0 stores the frame buffer itself
- iChannel1 stores the keyboard response for interaction

This is a very simple drawing shader which can also be adapted for other drawing applications.

```
// Mask Image, white indicates foreground
// Ruofei Du (http://duruofei.com)
// License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
#define BRUSH_SIZE 0.1
#define INITIAL_CIRCLE_SIZE 0.4

const float KEY_1 = 49.5;
const float KEY_2 = 50.5;
const float KEY_SPACE = 32.5;
const float KEY_ALL = 256.0;

bool getKeyDown(float key)
{
    return texture2D(iChannel1, vec2(key / KEY_ALL, 0.5)).x > 0.1;
}

bool getMouseDown()
{
    return iMouse.z > 0.0;
}

bool isInitialization()
{
    vec2 lastResolution = texture2D(iChannel0, vec2(0.0) / iResolution.xy).yz;
    return any(notEqual(lastResolution, iResolution.xy));
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec2 p = 2.0 * (fragCoord.xy - 0.5 * iResolution.xy) / iResolution.y;
    float mixingGradients = texture2D(iChannel0, vec2(1.5) / iResolution.xy).y;
    float frameReset = texture2D(iChannel0, vec2(1.5) / iResolution.xy).z;
    float mask = 0.0;

    bool resetBlending = (getKeyDown(KEY_1) && mixingGradients > 0.5) ||
                         (getKeyDown(KEY_2) && mixingGradients < 0.5);
    if (getKeyDown(KEY_1)) mixingGradients = 0.0;
    if (getKeyDown(KEY_2)) mixingGradients = 1.0;

    if (isInitialization() || getKeyDown(KEY_SPACE)) {
        // reset canvas
        vec2 q = vec2(-0.7, 0.5);
        if (distance(p, q) < INITIAL_CIRCLE_SIZE) mask = 1.0;
        resetBlending = true;
    } else if (getMouseDown()) {
        // draw on canvas
        vec2 mouse = 2.0 * (iMouse.xy - 0.5 * iResolution.xy) / iResolution.y;
        mask = (distance(mouse, p) < BRUSH_SIZE) ? 1.0 : texture2D(iChannel0, uv).x;
    } else {
        mask = texture2D(iChannel0, uv).x;
    }

    if (fragCoord.x < 1.0) {
        fragColor = vec4(mask, iResolution.xy, 1.0);
    } else if (fragCoord.x < 2.0) {
        if (resetBlending) frameReset = float(iFrame);
        fragColor = vec4(mask, mixingGradients, frameReset, 1.0);
    } else {
        fragColor = vec4(vec3(mask), 1.0);
    }
}
```

The second frame buffer is used to iterate the Poisson blending process:

- iChannel0 stores the previous frame buffer itself
- iChannel1 stores the mask buffer
- iChannel2 stores the base image
- iChannel3 stores the source image to blend

Sometimes a texture is not loaded in the first few frames on ShaderToy, so I sample the last pixel of this frame buffer to test whether it has been initialized with the correct image.

```
// Poisson Blending
// Ruofei Du (http://duruofei.com)
// License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
#define NUM_NEIGHBORS 4
float mixingGradients;
vec2 neighbors[NUM_NEIGHBORS];

#define RES(UV)  (tap(iChannel0, vec2(UV)))
#define MASK(UV) (tap(iChannel1, vec2(UV)))
#define BASE(UV) (tap(iChannel2, vec2(UV)))
#define SRC(UV)  (tap(iChannel3, vec2(UV)))

vec3 tap(sampler2D tex, vec2 uv)
{
    return texture2D(tex, uv).rgb;
}

bool isInitialization()
{
    vec2 lastResolution = texture2D(iChannel1, vec2(0.5) / iResolution.xy).yz;
    return any(notEqual(lastResolution, iResolution.xy)) || iFrame < 4;
}

bool isMasked(vec2 uv)
{
    return texture2D(iChannel1, uv).x > 0.5;
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    fragColor.a = 1.0;
    mixingGradients = texture2D(iChannel1, vec2(1.5) / iResolution.xy).y;
    float resetBlending = texture2D(iChannel1, vec2(1.5) / iResolution.xy).z;

    // init
    if (isInitialization() || resetBlending > 0.5) {
        fragColor.rgb = BASE(uv);
        return;
    }

    vec2 p = uv;
    if (isMasked(p)) {
        vec3 col = vec3(0.0);
        float convergence = 0.0;
        neighbors[0] = uv + vec2(-1.0 / iChannelResolution[3].x, 0.0);
        neighbors[1] = uv + vec2( 1.0 / iChannelResolution[3].x, 0.0);
        neighbors[2] = uv + vec2(0.0, -1.0 / iChannelResolution[3].y);
        neighbors[3] = uv + vec2(0.0,  1.0 / iChannelResolution[3].y);

        for (int i = 0; i < NUM_NEIGHBORS; ++i) {
            vec2 q = neighbors[i];
            col += isMasked(q) ? RES(q) : BASE(q);
            vec3 srcGrad = SRC(p) - SRC(q);
            if (mixingGradients > 0.5) {
                vec3 baseGrad = BASE(p) - BASE(q);
                col.r += (abs(baseGrad.r) > abs(srcGrad.r)) ? baseGrad.r : srcGrad.r;
                col.g += (abs(baseGrad.g) > abs(srcGrad.g)) ? baseGrad.g : srcGrad.g;
                col.b += (abs(baseGrad.b) > abs(srcGrad.b)) ? baseGrad.b : srcGrad.b;
            } else {
                col += srcGrad;
            }
        }
        col /= float(NUM_NEIGHBORS);
        convergence += distance(col, RES(p)); // TODO: converge
        fragColor.rgb = col;
        return;
    }

    fragColor.rgb = RES(uv);
}
```

The main fragment shader is for showing the result:

- iChannel0 stores the previous frame buffer

```
// Interactive Poisson Blending
// Ruofei Du (http://duruofei.com)
// License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
// Reference: P. Pérez, M. Gangnet, A. Blake. Poisson image editing.
// ACM Transactions on Graphics (SIGGRAPH'03), 22(3):313-318, 2003.
const int NUM_NEIGHBORS = 4;
vec2 neighbors[NUM_NEIGHBORS];

bool isMasked(vec2 uv)
{
    return texture2D(iChannel1, uv).r > 0.5;
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    fragColor = texture2D(iChannel0, uv);
}
```

Ideally, the iteration should stop once the change in colors is small enough; however, I did not want to spend another pixel storing this global error variable, and around 200 iterations already produce a beautifully blended image.

Here is another result:

Finally, here is a mysterious result with a wrong mixing of gradients:


The post C++ Code Backup for Render-To-Texture (Multi-pass Rendering) appeared first on VR+AR+CG+CV+HCI.

Definition:

```
GLuint g_FrameBufferObject;
GLuint g_FirstPassRenderedTexture;
```

Initialization:

```
glGenFramebuffers(1, &g_FrameBufferObject);

glGenTextures(1, &g_FirstPassRenderedTexture);
glBindTexture(GL_TEXTURE_2D, g_FirstPassRenderedTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, window.width, window.height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);

// Note: glUniform1i affects the currently bound program, and a sampler
// uniform expects a texture *unit* index; here the texture name doubles as
// the unit index (valid only while it stays below the texture unit limit).
glUseProgram(g_SecondPassShaderProgram);
uindex = glGetUniformLocation(g_SecondPassShaderProgram, "tFirstPassResult");
if (uindex >= 0) {
    glUniform1i(uindex, g_FirstPassRenderedTexture);
} else {
    cout << "Error locating uniform of tFirstPassResult in the second shader" << endl;
}
glUseProgram(0);
```

Rendering Pass 1 (to texture):

```
glBindFramebuffer(GL_FRAMEBUFFER, g_FrameBufferObject);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       g_FirstPassRenderedTexture, 0);
glUseProgram(g_FirstPassShaderProgram);
// first pass rendering code
glUseProgram(0);
```

Rendering Pass 2 (to screen):

```
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(g_SecondPassShaderProgram);
glActiveTexture(GL_TEXTURE0 + g_FirstPassRenderedTexture);
// bind the first-pass result to the unit the sampler uniform points at
glBindTexture(GL_TEXTURE_2D, g_FirstPassRenderedTexture);
// second pass rendering code
glUseProgram(0);
```


The post [Summary] Indoor Scene Understanding and Dynamic Scene Modeling appeared first on VR+AR+CG+CV+HCI.

I will present our four recent projects on indoor scene understanding and dynamic scene modeling:

1) 3D sensing technologies have brought revolutionary improvements to indoor mapping. For example, Matterport is an emerging company that lets anybody 3D-scan an entire house easily with a depth camera. However, the alignment of depth data has been a challenge, and their system requires extremely dense data sampling. Our approach can significantly decrease the number of necessary scans, and hence the human operating cost, by utilizing a 2D floorplan image.

2) Multi-modal data analysis between images and natural language has been popular in Computer Vision, but multi-modal image analysis has been relatively under-explored. We study the capability of deep networks in understanding the relationships between 5 million floorplan images and 80 million regular photographs. The network has shown super-human performance on several multi-modal image understanding problems by a large margin.

3) Single-image understanding techniques such as deep networks have shown remarkable performance on image recognition and understanding problems, but have not been utilized much by high-fidelity 3D reconstruction techniques, simply because single-view techniques lack sufficient precision. We show that a single-view technique can yield pixel-accurate geometric constraints for multi-view reconstruction through geometric relationship classification, enabling SfM for very challenging indoor environments.

4) 3D reconstruction techniques have had great success in static geometry inference, but dynamic scene modeling is still a big open problem. Current approaches require an extensive hardware setup (e.g., Google Jump) to produce production-quality dynamic scene models and visualizations. I will present a system that turns a regular movie of an urban scene into a Cinemagraph-style animation.

These are uncleaned notes. For more details, please visit Dr. Furukawa’s page: http://www.cse.wustl.edu/~furukawa/

- Exploiting Indoor Plan
- RGBD Streaming
- Kinect Fusion
- Google tango

- Panorama RGBD Scanning
- Matterport
- Faro 3D (18k$ to buy, 1k to rent for a day, 2m 3D pts)

- RGBD Streaming
- Limitations
- Requires extremely dense scanning, once every 3-4 meters
- Does not reach (accuracy degrades) over 10 meters
- Idea for high-end 3D scanning

- Existing approach
- Our approach
- 2D Scan Placement
- S = {s1, s2, …}
- S1 : 2D position and orientation, 4M * 4 possible values
- Energy function: E_s(s_i) + E_{s*s} (s_i, s_j) + E_F^k (S)
- Unary, Scan-to-Floorplan consistency
- Binary, Scan-to-Scan consistency, NCC score of the patch

- High-order, Floorplan coverage
- Datasets
- 71 out of 75 are placed correctly, minimal alignment
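For illustration only (my own sketch, not the authors' implementation), the NCC score used in the scan-to-scan consistency term can be computed like this:

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-length intensity patches.

    Returns a score in [-1, 1]; higher means the two patches agree better,
    which is how a scan-to-scan consistency term can score overlapping regions.
    """
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    da = [a - mean_a for a in patch_a]
    db = [b - mean_b for b in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da)) * math.sqrt(sum(y * y for y in db))
    return num / den if den > 0.0 else 0.0
```

Because of the mean subtraction and normalization, the score is invariant to brightness offsets and contrast scaling between the two scans.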

- Use floorplan

Question: Which photograph matches the floorplan pictures?

- Require long reasoning
- Dataset: http://www.nii.ac.jp/dsc/idr/next/homes.html
- Multi-modal image matching
- Non-instant reasoning
- K-way classification
- 100k training samples
- vs. Amazon Mechanical Turk workers: the networks perform better than humans

- Receptive Field Visualization (heatmap)
- Sliding window patches to figure out the significant patch
- Network can learn to match images across different modalities, solve a problem requiring long-time reasoning.

- SLAM / SfM in hard cases
- iPhone with fierce rotation and translation
- // Multi-view community hates single-view techniques…

- Very rough estimation -> Pixel accurate pose and 3D geometry
- Learning pixel-accurate constraints?
- Images-> camera poses and 3D geometry
- Image->Geometry

- Line-based SLAM
- Figure out Manhattan Coplanar
- Given a line, classify whether this line is on the floor or not

- 5 channel – Alexnet
- Bilinear Upsampling and Stacking
- Get normal map from CMU
- Manhattan coplanar
- Horizontally coplanar

- Static photograph with subtle animations
- Video input
- Rerendering the video from a single view point

- Mask
- Cinemagraph


The post Libraries, Textbooks, Courses, Tutorials for WebGL appeared first on VR+AR+CG+CV+HCI.

WebGL is a great tool for quick demos and video rendering. In the past year, I have created Social Street View, Video Fields, and VRSurus using WebGL. Additionally, I did light field rendering, ray marching, and Poisson blending in WebGL. I really recommend it.

The only downside is that it does not support geometry shaders or compute shaders.

Here is my collection of WebGL resources:

- Three.js – The most powerful WebGL library on the web
- ThreeX.js – The most powerful plugin for Three.js
- Physics.js – All about physics.

- $1 Unistroke Recognizer
- Camgaze.js Eye tracking and gaze detection. I tried it: ~4 fps, somewhat accurate
- Tracking.js Face, eye, mouth tracking in web browser, BRIEF and FAST feature extraction.
- Online image editor I edited some open-source project a long time ago…
- WebGL 2 Features

- Angel and Shreiner, Interactive Computer Graphics, 7th ed, Pearson
- Munshi, Ginsburg and Shreiner, OpenGL ES 2.0 Programming Guide, Addison-Wesley
- Matsuda and Lea, WebGL Programming Guide, Addison-Wesley
- Cantor and Jones, WebGL Beginner’s Guide, PACKT Publishing
- Parisi, WebGL: Up and Running, O’Reilly
- WebGL 2 Fundamentals

- https://github.com/stackgl/webgl-workshop
- https://github.com/stackgl/shader-school
- Interactive 3D graphics by Autodesk
- Video Lectures on Udacity: [1]

- Interactive Computer Graphics with WebGL by Prof. Edward Angel on Coursera Summer 2015.
- Video Lectures on YouTube uploaded by Ruofei: [2]

- Interactive Computer Graphics by Prof. Takeo Igarashi on Coursera Summer 2013.

- Spring
- MathBox Audio
- Angel
- Angel WebGL 7E
- OpenGL.org
- Get.WebGL.org
- Chrome Exp
- WebGL.com
- LearningWebGL.com
- War of 1996

