How 3D Printing Is Making Better Movie Monsters

Cinematic special effects have come a long way since Jason and the Argonauts. What once required dedicated and labour-intensive filming sessions can now easily be generated in near-lifelike quality by modern CGI. But Luma Pictures, an animation studio responsible for some of the biggest blockbuster movie effects of the last decade, has come full circle and incorporated 3D printed analogue modelling into its design process.

We sat down recently with Luma Pictures’ VP/Exec Visual Effects Supervisor, Vince Cirelli, to discuss how 3D printing is making better movie monsters.

Gizmodo: Can you give me a bit of background about your company?

Luma Pictures VP/Exec Visual Effects Supervisor Vince Cirelli: We are an artist-run, artist-owned facility. We specialize in high-end visual effects: we do a lot of creature work and a lot of digital double work, along with all the ancillary work where we have large 3D environments and simulations. We’re primarily film but have a commercial department. We have two facilities: one here in Los Angeles and the other in Melbourne, Australia. We invest heavily in technology, so we have a really strong infrastructure with clustered computing.

Giz: What sort of technology infrastructure? What do you guys use to do what you do?

VC: So what happens in visual effects is that we have multiple departments — we have animation, effects, lighting, and compositing. So with that there are many different elements — essentially pieces of a puzzle — that go into a shot. We have thousands of these elements running around and they all need to be tracked because they all need to plug into each other and ultimately into a composition.

So, for example, you have a shot that has 100 elements in it including smoke, explosions, and Iron Man; all those things progress differently, at different speeds through the pipeline. And that’s just one shot. So we need our tracking system to be quite sophisticated to make sure we’re hitting our production targets and really the only way to do that is to plug all of the different packages into the same backend at a very detailed level so that we can estimate, based on historical data, how long each scene and element is going to take to render.
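As a rough sketch of the kind of estimate Cirelli describes, the snippet below predicts an element’s render time from the average of past renders of the same type. The element names, complexity scores and averaging scheme are invented for illustration; Luma’s actual tracking backend isn’t public.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical historical records: (element_type, complexity_score, hours_to_render)
history = [
    ("smoke_sim", 3.0, 18.0),
    ("smoke_sim", 1.5, 9.5),
    ("hero_character", 4.0, 12.0),
    ("hero_character", 2.0, 6.5),
    ("background_plant", 0.2, 0.4),
]

# Average hours-per-unit-complexity for each element type seen so far.
rate_by_type = defaultdict(list)
for elem_type, complexity, hours in history:
    rate_by_type[elem_type].append(hours / complexity)

def estimate_hours(elem_type: str, complexity: float) -> float:
    """Estimate render hours for a new element from past rates of the same type."""
    rates = rate_by_type.get(elem_type)
    if not rates:
        return float("nan")  # no history for this element type yet
    return mean(rates) * complexity

# Rough schedule estimate for one shot made of several elements.
shot = [("smoke_sim", 2.5), ("hero_character", 3.5), ("background_plant", 0.3)]
total = sum(estimate_hours(t, c) for t, c in shot)
print(f"Estimated render time for shot: {total:.1f} hours")
```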

Our render farm is all open source and is quite substantial. We have roughly 1000 Linux nodes between the two facilities and the majority of our artists run on Linux as well, though we have a few Mac boxes for Photoshop and other packages that can’t run on Linux.

Giz: So using the 100-element shot you mentioned earlier as an example, how long would that take to render? Hours, days?

VC: It really depends on the complexity of the elements. We have simulations — simulation files that are gigabytes in size — and those can take many days, even a couple of weeks, to generate. Lighting renders can easily take a day, anywhere from 8 to 12 hours. If we’re rendering a really complex character, yes, that will take longer. If we’re rendering a small plant in the background, its render time is very fast, so it’s a sliding scale based on the complexity of each shot.

Giz: What sort of projects is Luma currently working on? What have you worked on in the past?

VC: We work on a wide variety of films, a lot of blockbusters, and we also work on a lot of smaller dramatic features, which is a lot of fun for us as well. We run multiple shows (projects) at once, so we’ll have one big primary show and a few ancillary shows. Over the past few years we’ve done, let’s see: Winter Soldier, Thor: The Dark World, Thor, Iron Man 3, a bunch. We’re currently working on Guardians of the Galaxy with Marvel as well.

Luma’s print team: Ashley Green, Loic Zimmerman, Vince Cirelli, Chris Sage, Zachary Eggers

Giz: And how does your new 3D printer help you make better blockbusters?

VC: It’s really interesting. I didn’t initially understand quite how this was going to help us until we actually started printing. We’re using Printrbots; they’re fantastic printers based on open source, so you can rely on a community of experts. They’re doing it right. It allows us to sit in a room and get tactile feedback and not just have a model on the screen that we’re tumbling around. You’ve got to remember that we’re human and we have other senses, other ways to perceive things, and I found it incredibly valuable printing out some of these models. I should point out that nothing was printed for Marvel, only for other shows.
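As a rough sketch of what getting a digital asset onto a desktop printer like a Printrbot involves, the snippet below uses the open-source trimesh library to load a mesh, check that it is watertight, scale it to a small build volume, and export a slicer-ready STL. The file names and target size are hypothetical; this is not Luma’s pipeline.

```python
import trimesh

# Load a hypothetical character asset; force="mesh" flattens multi-part scenes.
mesh = trimesh.load("creature_model.obj", force="mesh")

# A printable model needs a closed ("watertight") surface; attempt a simple repair.
if not mesh.is_watertight:
    mesh.fill_holes()

# Scale the model so its largest dimension fits a roughly 150 mm build volume.
target_mm = 150.0
largest_extent = max(mesh.bounding_box.extents)
mesh.apply_scale(target_mm / largest_extent)

# Write an STL that slicing software can turn into printer instructions.
mesh.export("creature_print.stl")
print("watertight:", mesh.is_watertight, "| size (mm):", mesh.bounding_box.extents)
```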

Having that tactile feedback is incredible. Understanding how it moves, moving it around in your hands, gives you a better idea of things like anatomy, for example, and of changes we may want to make but missed when the model was only tumbling around on a computer. It’s incredibly valuable for clients, directors, and others who might be a little less tech-savvy to come into a meeting or presentation and be able to interact with the model.

I think being able to draw on the model without having to use a computer is a huge win for costume design and that sort of thing. Also, being able to put the model in various lighting environments without having to render anything is fantastic. Understanding the form and silhouette of it is really nice, and you get a feeling that’s intangible and hard to describe when you’re holding a glass versus seeing a glass rendered on a computer screen. You understand the physics of that glass, the light transmission through it, and its physical traits intrinsically, because that’s how we’re wired. So feeding that back into the computer after having that experience provides us with more information for making decisions. And although you can complete a digital character without ever going out to an analogue form, I personally believe that having that intermediate stage, where you’re actually looking at it as a physical reality, will only enhance what you’re doing when you go back to the computer.

The other huge thing is that there’s a reality and a constraint to what things can and cannot do in terms of how they move, and you discover that very quickly when you print out objects. Inside the computer you can do anything, and everything is cheated. But there is no cheating in analogue: you have to understand how an arm’s going to move and rotate as constrained by the clothing and armour around it, how it fits into its socket. And I find it very interesting that it’s the cheating, a lot of the time, that makes animation feel unnatural to an audience. It’s the fact that things seem too perfect. But if you add imperfection back into organisms that you understand, it feels more real. You may not know why it feels more real, but that’s the reason. That’s why we add a lot of imperfection to what we do; we add a lot of what we call “dirt” into the animation.
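As a rough sketch of that “dirt” idea, the snippet below layers smoothed, low-frequency noise onto an otherwise perfect animation curve so the motion stops looking too clean. The curve and noise parameters are invented for illustration, not taken from Luma’s tools.

```python
import numpy as np

frames = np.arange(120)                                  # a 120-frame shot
clean_curve = 30.0 * np.sin(frames / 119.0 * np.pi)      # perfectly smooth rotation, in degrees

# Small random jitter, then smoothed so it reads as organic drift rather than shake.
rng = np.random.default_rng(seed=7)
raw_noise = rng.normal(0.0, 0.6, size=frames.shape)
kernel = np.ones(9) / 9.0
dirt = np.convolve(raw_noise, kernel, mode="same")

dirty_curve = clean_curve + dirt                         # the "imperfect" curve fed back to the rig
print(dirty_curve[:5])
```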

When you’re holding something and looking at it, it’s easier to spot issues than when it is tumbling around inside the computer. I feel like it’s a huge advantage, personally. I can’t speak for all of the visual effects community, but I love it and have become incredibly addicted to using it — not only as a tool but also as an art form. It’s gratifying to have spent a decade working inside computers and now be able to realise that work in the physical world.

