INTERVIEW: Thiago Porto

Thiago Porto is a VFX Supervisor and Senior Flame & Nuke Artist at MPC Advertising in New York. He is also one of the leading forces behind research into Machine Learning for 2D-based applications in our industry.

 

Tell us about the first person or studio who paid you to do VFX. How did the opportunity come about?

Ok, a little story… I left my parents’ house really early, at around 16 or 17, to pursue a dream of working with computers. Back then, I was curious about music production, visuals, and advertising, so I learned all the apps for that, especially Adobe’s.

A few years later, after a bunch of freelance digital gigs, I finally got my first advertising job: one ad spot for the web for a restaurant, in exchange for six months of free food. It was amazing. I used my own camera, heavy After Effects, and lots of photo effects with 2.5D projections, camera animation, and overlay elements. Back then that was huge, and the restaurant’s official agency was surprised that the little unknown guy had made a commercial better than what they had. Because of that, they hired me as a freelancer to do ads for their clients. Later, the production company that same agency used to hire came looking for me to do freelance spots as well. At that production company I was discovered by a director who was really happy to work with me and passed my name on to other states. I grew from local business ads to ads in other states and later the whole country, especially ads with VFX. At that point I started learning Nuke 5. Later, the biggest VFX house in the country got my reel, and after a few negotiations I was hired as an employee for the first time in my life. That studio had only incredible Flame artists; I was the only one with Nuke. It was a great, fully creative period: Nuke during the day and learning Flame at night, surrounded by amazing talent.

 

You’re currently a VFX Supervisor at a studio working primarily on commercials and, more recently, Netflix projects. What are some of the differences between these two mediums, and how do you approach finding creative solutions with your clients?

Huge difference. I have done long-form before, but it was never my thing; I have had this love for commercials since forever. But recently I enjoy doing long-form projects more and more. A lot of the creative freedom is moving in that direction, though it still feels a bit locked down sometimes.

The main thing for me is the pipeline and workflow. Comp VFX, whether in Flame or Nuke, I approach very similarly: an eye for detail, always thinking in pixels. But the pipeline around long-form changes a lot. More people are involved and tasks are more divided; essentially, long-form needs more people because there is a lot more work involved. So it makes sense.

Sometimes that is a bit difficult for me. Over the years I got used to doing shots start to finish: from matte painting to effects, from track to comp, from conform to sitting with the director and creating. That is a huge change for me in long-form. I still believe that having too many departments is not the right approach for everything. We need to find a balance between projects that need lots of specialized artists and projects that need fewer people involved. I think there is no single pipeline for everything. We always need to adapt to the project, always.

 

When you’re faced with a difficult problem on a shot, what does your inner-talk sound like, and how do you approach solving said problem?

Good question. I always think about approaching the shot art-result first: I forget about any tech involved and just create, maybe only a single frame. Once I have it in a place I’m happy with, I start to think about how I will do it. Whether I can do it all in Flame or will need to go into Nuke, which plugins might help me, or which other artists to bring in who have skills I don’t have. But never being limited by a tool is key for me. I never limit my result because a particular app isn’t good at something; I find other tools that will help me solve for my result, my idea, or the project’s needs.

 

There are so many companies investing in Machine Learning for computer graphics at the moment; however, you’re doing similar things all by yourself! What are the current applications of this new technology, and where do you think it will head in the future?

I think ML will be an essential part of creating frames in the future. I research and learn ML (Machine Learning) because I have noticed it makes things possible in my 2D workflow that were not possible before, de-aging for example. Every time I find a tool that can help me on a project, I need to learn it. That was true when I found Nuke, when I got my first stereo project and needed Ocula, and when I realized I needed a timeline workflow to be fast with clients, which meant Flame, being able to work in 3D with Action plus timeline and nodes at the same time… It’s the same for ML now: so many great papers out there, so many exciting ways of doing things in 2D. We need to jump on it and push app developers, Autodesk for Flame and Foundry for Nuke, to implement it for us.

 

Similarly, I hear a lot of VFX artists concerned about “robots taking their jobs” in the future. How would you respond to this?

They will not be taking jobs; they will help artists get better results by giving them more time to focus on getting the job done better. There is no real AI yet, that is just marketing. ML is a great way of doing the same things or of enabling new things.

 

Could you explain what “deep fakes” actually are, and how they work, in language your grandma could understand?

Sure. Deep fakes are not actually a tool; they are a technology. Forget the idea that a single tool will have the right AI button, it’s not about that. “Deep” means we are using a Deep Learning (ML) model to learn a data pattern (a face, a text, an audio, whatever it is) and then recreate that data with controls. So it’s about data. What Deep Learning allows is being able to recreate data by new means.
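
For readers who want a slightly more technical picture of “learn a pattern, then recreate it”, the core idea is essentially an autoencoder: one network compresses the data into a small code, another learns to rebuild the data from that code. The sketch below is purely illustrative with toy sizes, not the face models actually used in production.

```python
# Minimal autoencoder sketch: learn a data pattern, then recreate it.
# Toy sizes only; real face models are far larger and convolutional.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, data_dim=784, code_dim=32):
        super().__init__()
        # Encoder: compress the data down to a small latent code (the "pattern")
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
        # Decoder: rebuild the data from that code
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
x = torch.rand(8, 784)                      # a batch of flattened example images
loss = nn.functional.mse_loss(model(x), x)  # train the reconstruction to match the original
```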

 

What differences are there between your controllable deepfake tool and traditional deepfake tools? Are you using any publicly available tools or libraries to help get from the original plate to a final result?

I really liked the way you put it, “controllable deepfake”. I never thought of it like that; you were the first one to use it, and it was cool… so, controllable. I think what happened to me is that I’m so curious about the tech involved that, the moment you start to understand the technology behind it, you can start to control it just by understanding how it works and changing it for yourself.

A great example was de-aging Mike Seymour, where I needed to use GANs (Generative Adversarial Networks) to manipulate all the media, and also to manipulate the latent space of a VAE (Variational Autoencoder) model at a resolution of around 1024px to hold the quality I was looking for. The only way to control something like that today is by understanding the tech behind it.

Another example is manipulating the latent space of a Variational Autoencoder; doing it that way makes it possible to control expressions. My De Niro de-aging test (above) uses that. By manipulating distributions and using techniques to explore the latent space, you get almost what we do in CG with FACS systems, but in the 2D world. That is so amazing.
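
As a rough illustration of what that kind of latent-space editing looks like in code, the idea reduces to encode, nudge, decode. Everything in the sketch below (the networks, the “smile” direction, the sizes) is a hypothetical stand-in, not the actual setup described above; a real project would use a trained VAE or GAN at around 1024px and attribute directions discovered from data.

```python
# Sketch of latent-space editing: encode a frame, push its latent code along a
# learned attribute direction, then decode. All components here are random
# stand-ins for a trained model and a learned "expression" direction.
import torch
import torch.nn as nn

latent_dim = 256
img_pixels = 64 * 64 * 3                         # small stand-in resolution
encoder = nn.Linear(img_pixels, latent_dim)      # stand-in for a trained encoder
decoder = nn.Linear(latent_dim, img_pixels)      # stand-in for a trained decoder
smile_direction = torch.randn(latent_dim)        # stand-in attribute direction

@torch.no_grad()
def edit_expression(frame, strength=1.5):
    """Acts like a single FACS-style slider, but in latent space."""
    z = encoder(frame.flatten(1))                # latent code for this face
    z = z + strength * smile_direction           # move along the attribute direction
    return decoder(z).reshape(frame.shape)       # decode back to an image

frame = torch.rand(1, 3, 64, 64)                 # dummy frame
edited = edit_expression(frame, strength=2.0)
```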

Other great examples of manipulating distributions:

Adversarial Latent Autoencoders:
https://arxiv.org/pdf/2004.04467.pdf
https://github.com/podgorskiy/ALAE

Deep Learning Face Attributes in the Wild:
https://arxiv.org/pdf/1411.7766.pdf

Or StyleGAN2 and works based on it:
https://nvda.ws/2UJ3udu

At this stage, we don’t have any app with an interface ready to use. All those brilliant projects like FSGAN, FaceSwap, or DeepFaceLab are so good because everything is there to be studied and tested, to find what needs to change for VFX, or to use as inspiration. A great example of that is what Disney Research released.

 

Where would you suggest folks with an interest in pursuing machine learning for VFX start to learn?

Getting a machine with Ubuntu 18 was the main change for me. It is so much easier to match what researchers are using, and it helped me a lot because the code works exactly as written in the README files. Learn TensorFlow, PyTorch, and of course a bit of Python. You don’t need to be a master, just able to read a paper and implement it for yourself. I suggest using Anaconda3 as well; being able to create an environment for every project you want to test is great. It’s like pre-comping with a cache: you know you can do a lot after that, but your pre-comp is safe.
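
As a small, hedged example of that “one environment per experiment” habit (this script is only an illustration, not something from his pipeline), a fresh Anaconda environment can be sanity-checked before running any paper’s code:

```python
# sanity_check.py: quick check that a fresh Anaconda environment has the
# frameworks a paper's README expects. Purely illustrative.
import sys

def check_pytorch():
    try:
        import torch
        print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
    except ImportError:
        print("PyTorch is not installed in this environment")

def check_tensorflow():
    try:
        import tensorflow as tf
        gpus = tf.config.list_physical_devices("GPU")
        print(f"TensorFlow {tf.__version__}, GPUs visible: {len(gpus)}")
    except ImportError:
        print("TensorFlow is not installed in this environment")

if __name__ == "__main__":
    print(f"Python {sys.version.split()[0]}")
    check_pytorch()
    check_tensorflow()
```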

 

Where can people find out more about you and your work?

All my VFX work is at MPC Advertising; you can find my latest work there.

You can find my own personal experiments with ML on my socials: Instagram @tpocomp and LinkedIn, Thiago Porto.

Recently I had the opportunity to give an interview to fxguide with Mike Seymour, where I explained a lot of my own ML research.