
If you can afford to put one or more GPUs per LED screen, there is little limit to what you can render in real time, especially when the camera is so far away.

I wonder how many GPUs that system had.



It used four synchronized PCs with a total cost of ~$20,000 to render three 4K panels, which is pocket change for a large production. The LED walls themselves, or even a single lens, cost more than that!

This goes into way more technical detail: https://ascmag.com/articles/the-Mandalorian

Particularly interesting is how the system had ~10 frames of latency, so excessively fast camera turns would show lower-quality renders.
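Back-of-envelope on what that latency means during a pan (the frame rate and pan speed below are my own assumptions; only the ~10 frames figure is from the article):

    # Assumed values: shooting frame rate and pan speed are guesses;
    # only the ~10 frames of latency comes from the ASC article.
    FPS = 24.0
    LATENCY_FRAMES = 10
    PAN_DEG_PER_SEC = 30.0

    lag_seconds = LATENCY_FRAMES / FPS            # ~0.42 s
    lag_degrees = PAN_DEG_PER_SEC * lag_seconds   # ~12.5 degrees

    print(f"background lags the camera by ~{lag_degrees:.1f} degrees")

A ~12 degree lag on a brisk pan is easily enough to expose the lower-quality region outside the camera's view.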


Ah, of course. The system tracked the camera and accurately rendered what the camera saw plus a little around it. The rest was rendered just for lighting.
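A minimal sketch of that two-tier idea, with made-up geometry and the frustum approximated as a cone (a real system would test against the actual frustum planes): only wall pixels the tracked camera can see need the expensive, perspective-correct render.

    import numpy as np

    def frustum_mask(wall_points, cam_pos, cam_forward, half_fov_deg=25.0):
        """Which wall points fall inside the camera's FOV (cone approximation).

        Points inside get the expensive, perspective-correct render; the
        rest only has to look right as an area light, so a cheap pass is fine.
        """
        to_pts = wall_points - cam_pos
        to_pts = to_pts / np.linalg.norm(to_pts, axis=1, keepdims=True)
        cos_angle = to_pts @ cam_forward
        return cos_angle >= np.cos(np.radians(half_fov_deg))

    # Toy geometry: three points on a wall 5 m in front of a camera at origin.
    wall = np.array([[0.0, 0.0, 5.0], [2.0, 0.0, 5.0], [-8.0, 0.0, 5.0]])
    mask = frustum_mask(wall, cam_pos=np.zeros(3),
                        cam_forward=np.array([0.0, 0.0, 1.0]))
    print(mask)  # [ True  True False] -> last point is lighting-only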


The guy who developed the tracked camera invented the Roomba (maybe with others? not sure). Pretty cool dude.


I believe I have met him (Burton Conner alum)


> If you can afford to put one or more GPUs per LED screen, there is little limit to what you can render in real time

Umm, ray tracing would love to have a word :P

In all seriousness, typical animated frames for big-budget films easily take hours or longer _per frame_. It really depends on what look you're trying to achieve. Game engines have come a long way in terms of realistic graphics with real-time rendering, but it's worth noting that it's still not the same quality as a fully ray-traced scene (whether that matters depends on the content, I guess).
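For a sense of scale, a quick back-of-envelope; every number here is an assumption, since per-frame render times vary wildly by show:

    # All numbers here are assumptions; per-frame render times on big
    # productions vary wildly.
    FPS = 24
    SEQ_MINUTES = 2
    HOURS_PER_FRAME = 4

    frames = FPS * 60 * SEQ_MINUTES
    total_hours = frames * HOURS_PER_FRAME
    print(f"{frames} frames -> {total_hours:,} render-hours "
          f"(~{total_hours / 24 / 365:.1f} machine-years if run serially)")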


A lot of why they are doing it this way is realistic lighting and reflections. I'm not sure the difference between a real-time game engine and ray tracing matters that much when you are using it as faux ambient light.

After it is filmed, they can still go back and touch up the backgrounds. Someday ray tracing may allow real-time finished products, but for now the tech works great at what it's intended to do.


If a scene has been filmed in this setup, how easy is it to separate the physical foreground from the background screen if they want to re-composite the foreground with a more detailed ray-traced background?


The system allows you to insert a dynamic greenscreen around a foreground element (while also retaining the option to preview how things will look after everything is composited). So you can retain most of the virtual set for reflections and lighting, while still having a greenscreen.
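Conceptually something like this sketch; the frame layout and patch coordinates are made up, and in reality the patch would be driven by the camera track so it stays behind the actor from the lens's point of view:

    import numpy as np

    CHROMA_GREEN = np.array([0, 255, 0], dtype=np.uint8)

    def punch_greenscreen(wall_frame, x0, y0, x1, y1):
        """Copy of the rendered wall frame with a green patch behind the actor.

        Everything outside the patch keeps the virtual set, so reflections
        and ambient light on the foreground stay (mostly) correct.
        """
        out = wall_frame.copy()
        out[y0:y1, x0:x1] = CHROMA_GREEN
        return out

    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in CGI frame
    keyed = punch_greenscreen(frame, 800, 200, 1120, 1000)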


I imagine that, with the advances in consumer-level tech like portrait mode in phone cameras and Zoom virtual backgrounds, the tech ILM has could make easy work of this.


Imagine if the screen flickered at 120 Hz, alternating between green frames and the CGI frames, and you were able to capture and separate both.
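That would just be temporal demultiplexing with a camera shutter genlocked to the wall. A toy sketch of the idea (pure illustration, not how the actual system works):

    def demux(frames):
        """Split an interleaved capture into green-backed and CGI-backed streams."""
        green_backed = frames[0::2]  # assumed: even frames show green
        cgi_backed = frames[1::2]    # odd frames show the CGI background
        return green_backed, cgi_backed

    captured = [f"frame{i}" for i in range(8)]
    green, cgi = demux(captured)
    print(green)  # ['frame0', 'frame2', 'frame4', 'frame6']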


Imagine having to act in such an environment... :P


A shutter helmet would save you, like active 3D glasses.


And a convenient plot device to explain why all the characters wear shutter helmets all the time? :)


Yes, of course. But fully animated CGI traces everything back to a single camera and a single screen at movie quality. This setup has 1,326 individual screens at 123,904 px/m², filmed from several meters away to cover a 180-degree view. None of those screens were rendered even close to movie quality.
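That density figure works out to a ~2.84 mm LED pitch. A quick sanity check, with an assumed camera-to-wall distance:

    import math

    PX_PER_M2 = 123_904
    px_per_m = math.sqrt(PX_PER_M2)   # 352 px per meter of wall
    pitch_mm = 1000 / px_per_m        # ~2.84 mm between LEDs

    distance_m = 4.0                  # assumed camera-to-wall distance
    px_per_deg = px_per_m * distance_m * math.pi / 180  # ~25 px per degree

    print(f"{px_per_m:.0f} px/m, {pitch_mm:.2f} mm pitch, "
          f"~{px_per_deg:.0f} px/degree at {distance_m} m")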

Btw, only about 50% of the scenes were made with this setup; the rest used traditional ray tracing.


That's true for simple rasterization, but many effects don't scale that way.



