It’s hard to convey all of the awesomeness I experienced attending this Previs lecture, especially because my only means of documenting it was a simple pen and paper (and my memory). Several times I started writing down full quotes, only for the presenters to move on to the next slide. I was left deflated, because they were meaningful quotes explaining industry professionals’ perspectives, and I couldn’t jot down my notes fast enough.
I’ve decided to just go ahead and describe the experience I went through when I attended, including the notes and concepts I could grasp at the time. I hope it helps you understand the impact Previs is making as an integral part of the filmmaking process.
So, on Saturday, October 26th, I was lucky enough to be invited to a Previs lecture series put on by Loyola Marymount University and the Previsualization Society. I live in Anaheim, CA, a good 40-minute drive away, but I knew this was going to be well worth it. I had been waiting two weeks for this event.
Once I arrived at LMU, the campus turned out to be larger than I had expected. I parked in the garage and figured out the general direction to head. I was thankful that parking wasn’t being enforced over the weekend, because the only spot I could find was where they usually valet.
My first question to myself was… do they really valet here? I got a little turned around trying to find the communications building where the lecture was taking place, but I spotted some people walking who looked just as lost and confused as I was, and I asked if they were headed to the Mayar Theatre too. Luckily, they were, and they had a map; score!
We entered the lobby of the building a little before the event started and introduced ourselves to a Previsualization Society member, Brian Pohl, who was greeting us near the entryway. He invited us into a small movie theatre with a stage as the presenters set up their video presentation and slide show. People trickled into the theatre over the next 15 minutes until they gave the go-ahead to start. There were about 30 people in all, scattered throughout.
First, Brian Pohl introduced himself to the group as a member of the Previsualization Society, and explained the mission of the society:
(from their website)
“…The Previsualization Society seeks to advance the previs discipline as a dedicated, cross-disciplinary entity for defining and maintaining standards, building a community, publishing and exchanging knowledge, informing the process and educating the next generation. The Previsualization Society hopes to provide a context in which to maximize current and future contributions from the previs world, and be inspired and empowered by them.”
Brian went on to explain that there would be future presentations to re-energize Previs discussions every other month, including competitions in 2014, as well as electing new governance to help expand their reach in the near future.
Shortly after most people had arrived, the presenter, Trevor Tuttle, came to the stage and introduced himself to the group as a Previs Supervisor at The Third Floor Inc. He then spoke about how his interests evolved into the magic of film.
Trevor came from Germany at a young age to Simi Valley, and noticed films in the ’80s being made at locations all around town. He always found himself going to Universal Studios for his birthdays, and came to understand the magic of movies after reading an ILM coffee table book, which led to an interest in modeling figurines. That interest eventually got him a job at a studio on Sin City as a Match Move Layout Artist.
“Oz” was shot in Detroit, MI on a large sound stage, six football fields long, with wall-to-wall blue screen. They used EncodaCam integration, virtual environments and physical fabrication for their production set.
EncodaCam is a system that composites in real time; it gave the Director, Sam Raimi, the ability to blend the old-school film techniques of practical sets and props with digital Motion Capture and 3D Previs.
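To give a sense of what “compositing in real time” means at its simplest, here is a minimal sketch of blue-screen keying in Python with NumPy. This is purely my own illustration of the core idea, with made-up frames and a made-up threshold; EncodaCam’s actual keying is far more sophisticated.

```python
import numpy as np

def composite_over_bluescreen(live, virtual, blue_threshold=1.5):
    """Replace blue-screen pixels in the live-action frame with the
    virtual background: the basic idea behind real-time compositing.
    (Simplified illustration; real keyers handle spill, soft edges, etc.)"""
    live = live.astype(float)
    r, g, b = live[..., 0], live[..., 1], live[..., 2]
    # A pixel counts as "screen" where blue strongly dominates red and green.
    matte = b > blue_threshold * np.maximum(r, g)
    out = live.copy()
    out[matte] = virtual[matte]          # pull in the virtual set
    return out.astype(np.uint8)

# Toy 2x2 frames: all blue screen except one red "actor" pixel.
live = np.zeros((2, 2, 3), dtype=np.uint8)
live[..., 2] = 200                        # blue screen
live[0, 0] = [180, 20, 20]                # actor pixel
virtual = np.full((2, 2, 3), 90, dtype=np.uint8)  # flat virtual backdrop

frame = composite_over_bluescreen(live, virtual)
```

On set, the same operation runs per frame against the camera feed, so the director sees actors standing inside the virtual environment instead of against blue.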
He explained how Previs information and ideas are beginning to blend all three production phases (Pre-Production, Production, and Post-Production), serving as preparation before going into Production.
Previs basically works as a blueprint for the film and allows each department to come together and collaborate. Each department has a voice and uses Previs as a tool to constantly evolve their respective areas with their input.
“Oz” had a two-year Previs and Postvis campaign. The film first created animation tests (Trevor showed an animation of a man pushing through a bubble surrounding him) to unify the vision and build sequences prior to production. In all, 20–25 Previs artists produced a lengthy 17 hours of Previs and Postvis work over that two-year period.
Sam Raimi was a very collaborative Director who listened to opinions and ideas well, using Previs to sculpt the story through world building.
The earlier productions of “Avatar” and “Alice in Wonderland” were experiments in blending traditional and virtual filmmaking; “Oz” was the culmination of all that hard work and learning.
The results built a tangible world around the actors, helping them focus on the emotional intent of each scene and giving the artists grounded performance direction.
The next part was where I was totally blown away. As the slides ended, it was time to show us the Previs footage of the arrival scene into the land of Oz. They asked us not to film any part of the video due to copyright.
Now, I’m pretty used to seeing Previs on a small computer screen, from sources like YouTube or a Previs company website. But seeing it on the big screen, in the rough format Previs is known for, was almost as stunning as seeing the finished film. For some reason, I enjoyed it more, like a moving piece of art. I think it was because, as a Maya 3D user, I’m quite aware of the hard work and design it takes to build those scenes in 3D graphics. To think that these scenes were worked on in such detail, with great art direction and animation… well, I was amazed.
During the screening, Trevor talked about how the first Previs scenes inspired the composer to change the scene towards new sound design ideas, then showed us the film outcome to compare how those ideas influenced the final results. His example focused our attention on how flexible input from other departments could collaborate towards the making of a better film.
He then continued showing different parts of the filmmaking process using Virtual Production:
The film used a DLO Camera to choreograph Previs. The Motion Capture Society defines it this way: a DLO Camera is a virtual camera that displays a real-time composite of live-action imagery and virtual elements, enabling filmmakers to see the completed vision of the action they are about to capture.
In plain English: a DLO camera is like holding and looking through an iPad, with an Xbox-style game engine on the other side combining the data, so you can see how the Motion Capture and 3D results will look together. A Director can scout locations in a virtual world and plan shots for the film this way.
One interesting part of the presentation was the Puppet Cam used for the VFX character. The Puppet Cam was a camera stand-in for the CGI monkey the main actor was supposed to be interacting with. Since the CGI creature didn’t actually exist, to create convincing acting they attached a camera, with an LCD video screen of the other actor speaking, to a small arm crane that moved alongside him, all covered in blue screen. The two acted opposite each other at the correct vertical level, creating a correct eyeline perspective and strengthening both the acting and the VFX.
Filming on set took roughly six months. The Virtual Art Department worked to create the design of the film, with architectural blueprints for the main structures. They had artistic license for plotting and for the model design of props and detailed building facades.
The soundstage was used for the Whimsy Woods forest scene, and then the green elements were all taken down so it could become the dusty, dark cemetery. The cemetery statues were fabricated in plaster, specifically designed as set props through outside vendors.
The Virtual Art Department built 3D computer models and used the EncodaCam (with blue screen) to marry the images together. They worked in Maya and tied everything together for key camera moves, which allowed everyone to cooperate.
The virtual production set-up had a fairly basic motion control system (it still runs on DOS). Brainstorm was their render engine; Ultimatte was used to composite, with a 2–3 second delay before a picture appeared on screen.
The department worked very closely with the camera team; the film was shot in native stereo. They used Lidar scanning inside the building to create a point cloud, which helped line up shots, locking and verifying origin numbers in MotionBuilder by comparing XYZ coordinate points of each location against what was seen on screen.
EncodaCam supplied a lot of information to all departments, including origins, reference numbers and scaling. Digital encoders captured axis data on the blue screen, measuring inches, degrees, and elevation. The team then created a virtual model in the computer and lined the two up to achieve a computer solve onstage, which was networked together with MotionBuilder. The speaker noted that all the disparate technology they had worked well together to form their vision.
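As a rough illustration of the kind of origin check described above (purely my own sketch with made-up numbers, not the production’s actual MotionBuilder workflow), comparing the centroids of a scanned point cloud and its virtual counterpart reveals any translation offset between the two coordinate systems:

```python
import numpy as np

# Hypothetical example points; a real Lidar scan has millions of points,
# and real alignment solves a full rigid transform, not just a translation.
scanned = np.array([[1.0, 0.0, 0.0],
                    [3.0, 0.0, 0.0],
                    [2.0, 2.0, 0.0]])            # XYZ points from the stage scan
virtual = scanned + np.array([0.5, -0.25, 0.0])  # virtual model, slightly offset

# If the origins matched, this offset would be ~zero.
offset = virtual.mean(axis=0) - scanned.mean(axis=0)
aligned = virtual - offset   # shift the virtual model back onto the scan
```

A check like this is how you can confirm that the virtual set and the physical stage agree on where “zero” is before trusting an on-stage camera solve.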
Lecture Conclusion and Impression
After some examples of on-set Previs and pictures from the art department, the presentation ended with a few footnotes. They noted that they used the same yellow brick road prop strip for the entire movie, and with all the techniques used on the film, no one ever commented or noticed; they just switched out the green plants throughout to change it up a bit.
The discussion came to a close shortly after, with Brian Pohl returning to the side of the stage to take audience questions. He then invited us to continue the Q&A over the cookies, muffins and beverages available in the communications building lobby.
Overall, the presentation gave me a better understanding of Previs and Virtual Production techniques. I was able to see how everything came together in harmony, blending the different departments for skillful collaboration.
It seemed pretty clear that the technology is available to just about anyone, right off the shelf. I think if an independent filmmaker got their hands on these tools, along with a good, knowledgeable team who knew their respective roles, they could make something as fantastic as this big-budget film.
I also loved the Previs clips we were shown. Personally, I could have happily watched the whole movie in that rough animation form. The process is flexible and creative, and I see a lot of positive potential in watching and learning how it continues to develop in filmmaking.
If you’d like to keep following how this process develops in filmmaking, don’t forget to subscribe to this blog for future updates. My subscribers inspire me to keep writing about this subject. I love my readers, so thank you for your interest.
What aspects of Previs or Virtual Production are you interested in learning about? Would you like to know what it takes to enter this area of production, and how to get into Previs as an artist?
That was my question when speaking with Brian Pohl at the end of the lecture, and it will be the subject of one of my next posts, coming up shortly.