Video Game Paradigms for Real-Time Production

MOD Tech Labs
15 min read · Sep 25, 2020


This is a transcript taken from a series of lightning talks focused on modern content creation techniques during SIGGRAPH 2020. Enjoy!

Hey everybody, it's Tim Porter from MOD Tech Labs and we're gonna be talking about video game paradigms for real-time production. Basically, how do we make that transition to "real-time?"

But first, let me give you a little bit about me. I spent 20 years in the games and movie industries. In video games, I was a technical artist, and in movies I was a pipeline technical director. The last movie I worked on was Alice Through the Looking Glass. And there's just a different way that movies and games have worked, really up until now.

Real-time production is something that we have always dreamed of — everyone’s always talked about, “Well, why don’t you put it in a game engine?”, “Why don’t you do it real-time?”, “Won’t you do it like this?”, and it just didn’t make sense.

It didn't make sense for a number of different reasons. One of them, obviously, is that getting movie-level quality inside of a game engine is a very difficult thing to do.

The quality you had to give up on the actual assets was really poor, but things like Cloudy with a Chance of Meatballs actually did use some video game paradigms. They did a motion capture of the snowball fight scene (technically I guess it was ice cream, right?) where they were throwing ice cream back and forth, and they had an actual over-the-shoulder Sony camera rigged with motion capture, so they could see a blocking version of the shot as they ran through it. So somebody was actually driving a physical camera through a virtual environment. It's a great way to get started, but it was a "one-off" thing.

But you know, to go beyond one-offs, you gotta bring in video game paradigms. How do you use video game technology to do really cool things that video games do natively, like social? That might not make a lot of sense at first, but we'll talk about that in a little bit…

So to give you a little bit about my company: I am Co-Founder and CTO at MOD Tech Labs. I'm also a serial entrepreneur, having built Underminer Studios and then Volumation. I have a Bachelor of Science in Computer Animation from Full Sail University. MOD is venture-backed by two separate places, one of them being SputnikATX and the other being Quansight Futures.

I’ve received an Intel Top Innovator of the Year award — three years running, and an Innovator Award for the City of Austin last year. I don’t believe they’re actually doing those awards this year... so, you know, I’m currently “reigning”. That’s how it works, right? As far as other things that I do, I am part of CTA (Consumer Technology Association) — the people that run CES — where I am the chair for XR.

So, video game paradigms. What are we actually talking about here? You know, I already said “social”, which sounds completely crazy. It really is the psychology of people. How do you use the way that people think for user experience? Obviously social abilities: How do you get people to communicate?

Previously, you'd have everyone all on set. Everyone is in the same location. Everyone is looking at the same thing. Everyone is doing the same stuff. But if you start putting tablets in front of people's faces, individual cameras and devices, then everybody starts having different points of view.

So, how do you share those points of view? How do you have people communicate with each other — annotations and things like that? How do you get this immediate feedback system? With real-time production, you can have that. You can have immediate feedback that isn't just "asset looks bad — we need to fix asset"; you can draw an annotation directly on the screen and somebody else gets it. It's super cool stuff!
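To make that concrete, here is a minimal sketch of what one of those shared annotation records could look like. This is purely illustrative, not any particular product's API, and every name in it is hypothetical:

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class Annotation:
    """One on-screen note, tied to a specific take and frame."""
    author: str    # who drew it (director, DP, ...)
    take_id: str   # which capture it refers to
    frame: int     # frame the note applies to
    text: str      # short comment
    strokes: list = field(default_factory=list)  # freehand marks in normalized screen space
    created_at: float = field(default_factory=time.time)

def publish(annotation: Annotation) -> str:
    """Serialize the note; a real system would push this over a socket or pub/sub channel."""
    return json.dumps(asdict(annotation))

note = Annotation(author="director", take_id="scene12_take03", frame=1422,
                  text="Rotate the rock 15 degrees left",
                  strokes=[[(0.41, 0.63), (0.48, 0.60)]])
print(publish(note))
```

The point is that an annotation is just lightweight data tied to a take and a frame, so pushing it to everyone else on set is cheap.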

When it comes to data management, you're dealing with data in a very different way. It used to be: you'd do your capture, you'd do your green screen, you'd do your chop… on servers. And of course you'd have your capture and maybe you'd look at it in the video village and whatnot, but after that… there wasn't a lot of VFX that would actually happen on-set. Now, you can kind of have all those things at the same time.

You think about The Mandalorian and how they were doing captures, and they said that they did over 50% in-shot! BOOM… everything done, all at once. No touch-ups. No clean-up. Probably grading at the end, let's be honest — they're saying almost nothing, just a little color correction, a little grading. You gotta sweeten the pot some.

Another thing that's kind of cool that comes from games that you don't see in movies is agile methodologies. Movies are kind of a waterfall — you have one thing and it goes to the next, which goes to the next... and it's got to be done at this time, everything in a linear fashion. Agile methodology is a super interesting thing that comes from the way the game industry runs project production.

It's about doing things in loops — being able to iterate on assets and looks and things like that. You know, maybe in the first shot you wanted to do this, but in the next shot, you're going to do that instead. What's really cool is if you do your real-time production appropriately, and you're not doing everything in-camera (i.e., you shoot in-camera and do the capture, but then you end up filling in the background afterwards), it means you can make minor changes — especially if you're not super worried about shine or sheen.

The Mandalorian of course had a lot of issues with reflectivity and refraction off the main character's helmet and things like that. So, you have to be really careful about what you change. You can't change from a white object, to a black object, to a brown object, when those sheens and shines end up happening. But if you have an updated version of something that's over on the side — maybe it's not even on camera, but you're using it for lighting — you can make those changes. Super cool stuff.

Psychology of people: Just remember when you're building these assets and tools for real-time production, it's not quite the way it was before. Since we're not talking about a linear process, you can give people a lot of freedom. You can allow them to choose the way they're going to visualize the assets. You have a director view. You have a DP view. Maybe you have a recording view, or somebody doing other styles of lighting. You'd be surprised at all the different views. So, you can allow that, and you can allow self-direction through that.

Another thing is problem solving. This seems kind of crazy — we talked about video games and problem solving, how games let people solve things by playing, doing puzzles and things like that — but when it really comes down to it, you're letting people solve problems on their end as well. If you leave your UI and UX open enough that they can pick things up — they can move the UI, they can change different things — you can actually incorporate those changes. This allows them to say, "I like my view like this," and you can save that as a prefab in Unity, or build the UI to be entirely configurable like that. It allows a whole bunch of different things. It really does come down to the psychology of people and how people think.
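As a toy example of what "save my view" can mean under the hood: in Unity you would do this with prefabs and C#, but this engine-agnostic Python sketch shows the idea, and all the names in it are made up:

```python
import json
from pathlib import Path

# A user's preferred layout: which panels are open and where they sit (normalized coords).
DEFAULT_LAYOUT = {
    "view": "director",
    "panels": {
        "viewport":    {"x": 0.00, "y": 0.00, "w": 0.75, "h": 1.00},
        "annotations": {"x": 0.75, "y": 0.00, "w": 0.25, "h": 0.50},
        "chat":        {"x": 0.75, "y": 0.50, "w": 0.25, "h": 0.50},
    },
}

def save_layout(user: str, layout: dict, root: Path = Path("layouts")) -> Path:
    """Persist one person's arrangement so it survives between sessions."""
    root.mkdir(exist_ok=True)
    path = root / f"{user}.json"
    path.write_text(json.dumps(layout, indent=2))
    return path

def load_layout(user: str, root: Path = Path("layouts")) -> dict:
    """Restore the saved arrangement, falling back to the default view."""
    path = root / f"{user}.json"
    return json.loads(path.read_text()) if path.exists() else dict(DEFAULT_LAYOUT)
```

Once a layout is just data like this, letting the DP and the director each keep their own view costs you almost nothing.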

Social abilities: Like I said, it sounds a little crazy. Blog-like capabilities and live chat are super important.

So think about your director — maybe you're not on set, maybe you are — being able to take a capture of whatever it is, then using annotations to say, "No, this is wrong. The lighting is off. We need to rotate this…" They can literally draw on an iPad right there in front of them and say, "This is wrong," or "This is right… I want to see more of this." Super important things like that.

Then you can roll those up and get them as dailies. So… someone at the end of the day (or maybe it's already automatically set up) can do the screen capture, sharing their annotations, and there it is for people to scroll through. Then, even if they're off-set, or maybe don't have an iPad — or just can't see the different things — you can still capture a .gif and say, "This is what I'm talking about" or "These are the changes and this is how it is," and they can see it in a linear fashion, and then you can hand those off.

And once we start talking about agile methodology — those can all be turned into tasks, those tasks can be turned into forward movement, and that forward movement can be fed back to the production. So we have a circular process instead of a waterfall pattern, which allows for, if not real-time, then near real-time, or at least daily/next-day kinds of updates. And so dailies become part of the production, because production can happen at the same time that you're doing the VFX, making changes and running through production… it's really kind of cool.
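Here is a hypothetical sketch of that rollup step, turning a day's annotation records into tasks for the next loop. The field names are assumptions, not any real tracker's schema:

```python
from collections import defaultdict

def annotations_to_tasks(annotations):
    """Roll a day's annotations up into one task per take for the next iteration loop."""
    by_take = defaultdict(list)
    for note in annotations:
        by_take[note["take_id"]].append(note["text"])
    return [{"title": f"Address notes on {take}", "checklist": items, "status": "todo"}
            for take, items in by_take.items()]

dailies = [
    {"take_id": "scene12_take03", "text": "Rotate the rock 15 degrees left"},
    {"take_id": "scene12_take03", "text": "Key light is too hot"},
    {"take_id": "scene14_take01", "text": "Use background version B"},
]
for task in annotations_to_tasks(dailies):
    print(task)
```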

Live chat is actually also super important — allowing people to communicate while they're doing captures. Having to type these things in is not a cool thing. People really don't want that — but if I can stand there, looking at an iPad or looking through a camera lens, I can literally talk and that comment is recorded. Then somebody on the other side, maybe somebody at a terminal or an area where they're doing light color correction or anything like that, can actually hear what I'm saying.

These types of communications can be done through your mic or directly through an app, where you can actually save that information and go, "Hey, this is what he was talking about during this," "this is what was going on," and "this is what needs to happen." It can be fed upstream to the developers, eventually come back downstream, become upstream again, and continue in a circular pattern.
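A sketch of what one of those recorded comments might carry, so downstream folks can replay it in context. The structure here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VoiceNote:
    speaker: str       # who was talking (DP, director, ...)
    take_id: str       # which capture the comment belongs to
    timecode_s: float  # seconds into the take when the comment started
    audio_path: str    # recorded clip on shared storage
    transcript: str = ""  # optional speech-to-text pass, for search

notes = []
notes.append(VoiceNote("DP", "scene12_take03", 41.5,
                       "/mnt/set/audio/dp_0415.wav",
                       "Pull the fill down half a stop"))
print(notes[0].transcript)
```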

Data management is a lot of fun. You want to be careful about how you do your frameworks and the way data is handled. When I say "frameworks," it's not just data management as far as the data in and of itself. Of course you need to keep the data light and have layers of redundancy — backups and things like that — so while you're on set, at the single-server level something like ZFS is a really good file system for multi-level redundancy storage. But if you outgrow the size of a single server, which is probably going to happen, something like Ceph or a different style of data management helps a lot, because it distributes the processing over the nodes and lets you scale out. It's also fairly resilient as far as single-node and multi-node recovery — including drive loss and losing a whole server at the node level. BUT, I'm not going to get into the nitty-gritty on that. I could, but I won't, because nobody wants to hear it.
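Just to illustrate the redundancy idea (and only the idea; this is a toy, not ZFS or Ceph), here is a sketch that writes checksummed replicas across several "nodes" and reads back from the first surviving copy:

```python
import hashlib
from pathlib import Path

def replicated_write(name: str, data: bytes, nodes: list, copies: int = 3) -> None:
    """Write `copies` checksummed replicas of a blob across different 'nodes' (here: directories)."""
    digest = hashlib.sha256(data).hexdigest()
    for node in nodes[:copies]:
        node.mkdir(parents=True, exist_ok=True)
        (node / f"{name}.{digest[:8]}").write_bytes(data)

def replicated_read(name: str, nodes: list) -> bytes:
    """Return the first replica whose checksum still matches; fail only if all copies are gone."""
    for node in nodes:
        for path in node.glob(f"{name}.*"):
            data = path.read_bytes()
            if path.suffix[1:] == hashlib.sha256(data).hexdigest()[:8]:
                return data  # this replica verified clean
    raise IOError(f"all replicas of {name} lost or corrupt")

if __name__ == "__main__":
    nodes = [Path(f"/tmp/demo_node_{i}") for i in range(3)]
    replicated_write("take03", b"fake capture data", nodes)
    assert replicated_read("take03", nodes) == b"fake capture data"
```

Real systems do this at the block or object level with far smarter placement; the survivability property is the same.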

And then of course, optimizing the data for use — having multiple versions of the data. If you have an AR viewer, you're going to need a really optimized version for the device, one that can stand in for the higher-end render. There's a whole bunch of answers here. When 5G comes (or the next step in Wi-Fi), maybe we'll have a whole different conversation. But right now, it's not really there yet.
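A tiny sketch of that "multiple versions" idea: pick the richest version of an asset that the viewing device can actually hold. The numbers and names here are invented for illustration:

```python
# Hypothetical versions of one asset, ordered richest first.
ASSET_VERSIONS = [
    {"name": "hero_4k",   "tris": 2_000_000, "min_gpu_gb": 8},
    {"name": "onset_1k",  "tris":   250_000, "min_gpu_gb": 4},
    {"name": "ar_mobile", "tris":    40_000, "min_gpu_gb": 1},
]

def pick_version(gpu_gb: float) -> str:
    """Return the richest version the viewing device can actually hold."""
    for version in ASSET_VERSIONS:
        if gpu_gb >= version["min_gpu_gb"]:
            return version["name"]
    return ASSET_VERSIONS[-1]["name"]  # fall back to the lightest version

print(pick_version(gpu_gb=2))  # an AR phone gets "ar_mobile"
```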

The biggest thing we have is latency in packet transfer — anywhere between 16 and 22 milliseconds, which is obviously way too long, because that's roughly half a frame at 30 fps and a whole frame at 60 fps. You can't stream that asset quickly enough to get a smooth and amazing experience.
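The arithmetic is easy to check. One frame at 30 fps is about 33 ms and one frame at 60 fps is about 16.7 ms, so a 16 to 22 ms network hop eats half to all of your budget:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to deliver one frame."""
    return 1000.0 / fps

for fps in (24, 30, 60, 90):
    budget = frame_budget_ms(fps)
    for latency_ms in (16, 22):
        share = latency_ms / budget
        print(f"{fps:>2} fps: {budget:5.1f} ms/frame, "
              f"{latency_ms} ms of network uses {share:4.0%} of the budget")
```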

So, once again, that's the data side. Now we're talking about data management as far as information — having a good user experience that allows individuals to make a change. Maybe there are four different versions of some character… or maybe a rock that's in the back. Being able to make those live, on-set changes is super important.

To be able to have the Director come in and say, "That's not the rock I envisioned." (It happens, you know?) Well, now if you've got 7 or 8 different rocks… you can ask, "Is this a better rock?" Then they can come up, maybe zoom in, see it on their phone or tablet, and let you know, right there, which one they want.

And being able to have a lightweight framework for them to make these easy changes is very important — and it's not too difficult to do! If you plan out your production appropriately — not just the production itself, but the ability to go back through production on the next iteration — then the tools get a little better with each step. As long as you're intelligent about it, you end up coming up with some really cool things.

Okay, so agile methodology... Obviously, we're talking about the iterative cycle. You can do a lot of really cool things with this. And I know, looking at things like A/B testing, you might think, "Why would we want to do that? Soft rollouts of new features? This seems like software development."

Yep... Welcome to software development. That's what you're doing. Once you've gotten to the point where it's not a render farm — things don't get sent off, stuff has to be real-time — you are making software. And you can do a whole bunch of different things when we're talking about this iterative cycle.

A/B testing is extremely important, not just in the UI/UX world, but in how people see things and how you present things. You have the user experience of what's inside the tablet, inside the assets, and people can see what's in there. The other big thing is A/B testing of the actual assets and their backgrounds. So you come in and ask, "Do you like this first one?" and "Do you like this version?"
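A bare-bones sketch of A/B testing asset variants on set: show two versions side by side, record which one wins. Everything here is hypothetical scaffolding:

```python
import random

# Hypothetical variant registry; in practice these would be asset IDs in your engine.
VARIANTS = {"rock": ["rock_v1", "rock_v2", "rock_v3"]}
votes = {}

def show_pair(asset: str):
    """Pick two different variants to put side by side for the director or DP."""
    return random.sample(VARIANTS[asset], 2)

def record_vote(winner: str) -> None:
    """Tally which variant got picked."""
    votes[winner] = votes.get(winner, 0) + 1

a, b = show_pair("rock")
record_vote(a)  # pretend the director preferred the first one shown
print("leading variant:", max(votes, key=votes.get))
```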

With previous workflows, you might have had to say, "Well, this is it. This is what we got. Now, let's go." But it doesn't need to be that way, because we have the ability — especially since game engines make everything so much faster and can showcase these things with an almost instantaneous result — to go in and showcase multiple versions.

Same thing with soft rollouts and new features. When I was working in games and I built a new tool, I'd find somebody who I called a "Mikey." If you remember the old Life cereal commercial, it was, "Oh, Mikey will try it. He likes anything!" Basically, you find somebody who is fairly technically sound, but at the same time tends to break things (and remembers what they broke!). If you're trying to do a new feature: say you have a new user experience, maybe you have a new tool that you're trying to create, always find the person who's the most technically sound, who can use it on set and make recommendations.

Don't get a director who is a Luddite and would rather throw the thing on the floor and bite someone's head off than use a new tool. If you give them something that's broken, you've lost trust. So you obviously have to be very careful with whom, and how, you make changes to tools.

So, video game paradigms: what is it not? It's obviously not gamification. It's not full architectural redesign. And it is not competition and leaderboards. Nobody wants a leaderboard in this. Nobody wants competitions built on top of it.

What are we talking about as far as gamification goes? Well, you're not gonna level up users. There may be some instances where that's useful — say you want to do team-building kind of stuff: the person who finds the most bugs gets something. I mean, I don't know… You could totally add that in, but do I really think you should? No.

Directing, teaching… You don't really want that. You may have linked tutorials that live outside of it, but really, the only reason a tutorial should ever exist is because you didn't do your job in user experience. I'll say that again, for a lot of people who haven't spent a lot of time in UX: if you have to give a tutorial, you haven't done your job in building a good user experience.

A good user experience should call itself out. People should look at it, see buttons, play with things and go, “This obviously makes sense.” Either it follows previously agreed upon paradigms, or it is so well-written, so clean, and so obvious that everyone can use it. You have to be very careful with that.

So doing teaching or learning just doesn't work. You don't want to do academics. We're not sitting here trying to train or teach people things. You know, maybe those two comments kind of go together, but definitely be careful with your gamification if you decide to use it — which, honestly, you probably shouldn't.

Architectural redesign: The way you're building things doesn't need to be a "throw the baby out with the bathwater" kind of situation. You can obviously do things a little at a time. Maybe, instead of doing everything in-camera like The Mandalorian, you send back camera data, take the actual camera shots, and end up doing the comp in post afterwards. There's a whole bunch of answers here.

All these tools are built to interplay, especially once you start getting further into game paradigms and setups. This is something we're used to in development: understanding "this goes to here, which goes to here, which goes to here..." — this trickle-down effect. And it doesn't all have to be structured like "We have things in Maya. Maya things get rendered. Rendered things then go…" — there's just no reason for that. Things can naturally interact and play with each other.

Then, of course, competition and leaderboards… It's really useless in most applications — really, just about everywhere. There's a lot of overhead that goes with making a leaderboard. Building leaderboards in general is very expensive, and they're expensive to maintain. And on top of that, the amount of engagement you're going to get out of it is likely to be very minimal.

Also, it's a confidentiality nightmare. If you're a studio doing a bunch of different captures, it's something that can be very dangerous, because you're keeping client info — "Oh, they got this on this shoot." Then you're like, "What shoot was that? You were shooting what? Here? Yesterday?" Yeah, that's not going to get you in trouble… [obvious sarcasm] :)

So, that wraps up pretty much everything I have to talk about on this. Please get in touch with us! I am tim@modtechlabs.com, and of course go to modtechlabs.com.

And for everybody checking this out, we have a $500 coupon for processing credits with MODtalks. So please go and sign up and use that! We’re obviously very excited to have you on our processing platform.

Thank you very much.


Written by MOD Tech Labs

Enabling production studios to bring immersive video content to life with fast and affordable SaaS processing. Learn more by visiting www.modtechlabs.com