The computer graphics rendering pipeline is designed for generating high-quality 2D images from 3D scenes, with most research focusing on simulating elements of the physical world, such as light transport models or material simulation. The pipeline, however, can be time-consuming (for example, when using ray tracing), and, more importantly, it is not differentiable, making it hard to apply to inverse-rendering tasks. Computer vision investigates the inference of scene properties from 2D images, and has recently achieved great success with the adoption of convolutional neural networks (CNNs). However, these methods make few explicit assumptions about the physical world or how images are formed from it, and therefore still struggle in tasks that require 3D understanding, such as novel-view synthesis, re-texturing, or relighting.
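To see why differentiability matters for inverse rendering, here is a minimal sketch (not from the talk; the "renderer", its Gaussian-blob scene, and all parameter names are illustrative assumptions). A toy renderer draws a 1-D Gaussian blob at position `p`; because the image is a smooth function of `p`, we can differentiate the rendering and recover `p` from a target image by gradient descent, which the classical rasterization/ray-tracing pipeline does not support:

```python
import numpy as np

# Hypothetical toy example: a differentiable 1-D "renderer" and its inverse.
xs = np.linspace(0.0, 1.0, 64)  # pixel coordinates

def render(p, sigma=0.15):
    """Render a Gaussian blob centred at scene parameter p."""
    return np.exp(-((xs - p) ** 2) / (2 * sigma ** 2))

def loss_and_grad(p, target, sigma=0.15):
    """Squared-error loss between rendering and target, plus d(loss)/dp."""
    img = render(p, sigma)
    resid = img - target
    dimg_dp = img * (xs - p) / sigma ** 2  # chain rule on the Gaussian
    loss = 0.5 * np.mean(resid ** 2)
    grad = np.mean(resid * dimg_dp)
    return loss, grad

target = render(0.7)   # "observed" image; the unknown parameter is 0.7
p = 0.5                # initial guess
for _ in range(200):
    _, grad = loss_and_grad(p, target)
    p -= 0.2 * grad    # gradient descent through the renderer

print(p)  # converges towards 0.7
```

A non-differentiable renderer gives no such gradient signal, which is why inverse problems over the classical pipeline typically fall back on expensive black-box search.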
- Venue: Room I23, 227 Nguyễn Văn Cừ, Ward 4, District 5
- Time: Saturday morning, 15/06/2019, 9:00-12:30.
About the speaker: Thu Nguyen-Phuoc is an architecture student turned roboticist and ML researcher. She is now working on exciting topics in using deep unsupervised learning (HoloGAN) to perform computer graphics rendering (RenderNet, NeurIPS 2018). Join us to learn from her at this Saturday's tutorial seminar. In this tutorial talk, the speaker will present her group's recent work on combining the expressiveness of CNNs with knowledge of the physical world for the tasks of rendering and inverse rendering.
(To prepare well for the tutorial seminar, attendees are encouraged to read about CNNs, GANs, rendering in computer graphics, and the two papers by the speaker, RenderNet and HoloGAN, at https://www.monkeyoverflow.com)