
Tag: Facial Capture

April 8, 2014

by Pascal Langdale

In the UK, I’d be called a “jobbing actor”.  That means that I work across all sorts of media, picking up a wide variety of acting work that provides a steady income.  I have worked in corporates and commercials, stage, film and TV, and interactive games. Going a little further back, I have published poetry, written a radio play, co-produced movement-based theatre, headed a flamenco company, and once danced with Cyd Charisse.  In my early thirties I studied nonverbal behavior and re-appraised my acting techniques.  When I say I’m a RADA graduate, I sometimes think people expect a more traditional actor – a style or a working process which I can also embrace when necessary.

Every form I’ve worked in has certain established rules and conventions, developed over years (or centuries) to serve the best interests of storytelling within that medium. Each has its own artifice, relies on a shared experience, and requires adaptation of the core craft of acting.

So it was with a jobbing actor’s irreligious approach that I made friends with the newcomer on the storytelling scene: performance capture.  In this medium, the rules and conventions are still being established, found wanting, re-established, changed, superseded and hotly debated.

In Faster than Night, my character Caleb Smith, a cross between Tony Stark (Iron Man) and Chris Hadfield (THE man), relies on social media to help decide his fate in a life-or-death situation.  Animated in real time.

Tony Stark and Chris Hadfield

With this show, we’re exploring whether performance capture can play nice with traditional theatre to create a new form of interactive storytelling for live (or live-streamed) entertainment, and a new role for an engaged audience.

Now, no individual element of our production is entirely new. Faster than Night‘s constituent parts have a broad history.  Our live-animation technology is cutting edge, the result of an explosion of development in the field of facial capture and analysis over the past twenty years. In the past three, the range of capture systems available has expanded, offering greater options for quality at differing price-points. Facial capture tech is becoming democratised, and this key component of human-driven animation is finally reaching the hands of a new generation of artists and producers.

As soon as facial capture became advanced enough to animate in real time with some quality, the potential to fuse video-game and film technology with theatrical storytelling became inevitable.  Theatre has always grabbed whatever innovative tech could help tell the story – from Greek masks that allowed a character to be heard and seen from the back of a large open-air amphitheatre, to painted backdrops and gaslight.

Why should we stop with digital capture? In recent years video games have created a desire for direct influence over a narrative,  and social media has provided a platform to share your feelings about it. Where earlier artists would have needed a show of hands to decide a voted ending, Twitter allows us to canvass the opinions of countless viewers.  Tweets have been used as source material in theatre productions (#legacy, one of our fellow HATCH productions, being the most recent – so recent it hasn’t even opened yet!). The viewer-poll competition format, used most famously on American Idol, is spreading to shows like Opposite Worlds or even the scripted TV drama Continuum.
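
For the curious, here is a rough idea of what "canvassing opinions" over Twitter might look like under the hood: a minimal Python sketch that tallies votes from a batch of tweet texts. The hashtags, handles and tweets are invented for illustration, and this is not the system we are actually using for the show.

```python
# Hypothetical sketch: tallying a voted outcome from a batch of audience tweets.
# Hashtags, handles and tweet texts below are invented for illustration only.
from collections import Counter

VOTE_TAGS = {"#stay", "#go"}  # hypothetical voting hashtags for two endings

def tally_votes(tweets):
    """Count the first recognised voting hashtag in each tweet (one vote each)."""
    votes = Counter()
    for text in tweets:
        for word in text.lower().split():
            if word in VOTE_TAGS:
                votes[word] += 1
                break
    return votes

audience_tweets = [
    "Caleb has to #stay and fix it @ISMEEtheAI",
    "#go home, Caleb! @ISMEEtheAI",
    "I say #stay",
]
print(tally_votes(audience_tweets))  # Counter({'#stay': 2, '#go': 1})
```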

Interactive stories as created by video-game developers and writers of choose-your-own-adventure books must fix their narratives in stone. However beautifully executed, they can only give a finely-crafted illusion of unlimited freedom. An ancestor of the interactive game can be found in theatre, which has a longer heritage of improvisation and audience participation. The British have a long history of music hall and panto, where a rougher but no-less-organised form of audience participation is part of the entertainment. (Did I mention my first job at the age of 17 was in a pantomime?)

Pascal Langdale in Dick Whittington, 1992

The first vote-based multiple-ending play was written by Ayn Rand in 1935, a courtroom drama in which the jury was drawn from the audience. In the 1970s, Augusto Boal anticipated the internet-enabled art of flash mobs with Invisible Theatre, “in which an event is planned and scripted but does not allow the spectators to know that the event is happening. Actors perform out-of-the-ordinary roles which invite spectators to join in or sit back and watch.”

Despite the use of Twitter interaction or facial capture animation, our core goal remains primal: to tell a good story. If we fail at that, all the cutting-edge tools we might use become a mere distraction.

Yet wherever linear narrative is challenged, sharing a satisfying story becomes notoriously difficult.  I will be learning forty pages of a script that occasionally leaps into improvisation with the audience through their ambassador, @ISMEEtheAI, voiced by Melee Hutton.  Learning a script is a challenge, but it’s one I at least know the measure of. Playing an interactively-led character presents a number of far less familiar challenges, which (even more than the performance capture) is why this show is particularly experimental. The interactive aspects demand our greatest attention, and our boldest moves.

Here’s an example – a scan of my own re-typed script from Heavy Rain, an interactive game or movie with multiple narrative paths that led to one of twenty or so differing endings.

 

Re-typed script page from Heavy Rain

As an actor, and as a person, understanding behaviour relies on a certain level of causality. For example, a mood: “I’m in a bad mood, so I snap at my partner.” Or a learnt precondition: “I had a violent father, so I struggle with authority.” Or a hardwired precondition: “I am genetically predisposed to bouts of euphoria.” All of these are examples of what can cause behaviour.

In the script above, the player had three choices for how my character Ethan Mars could interact with an unknown “helper”, Madison. Going from left to right, Ethan (1) seeks basic info, (2) wonders at her selflessness, and (3) suspects her motives. These are quite different (and there are more extreme examples elsewhere), and demand that the acting choice in the moment before the player choice be appropriate for all three options. Moreover, each acting choice must also finish in a way that is consistent with Madison’s response.


Ethan Mars and Madison Paige, Heavy Rain

The lack of pre-decision required in this situation is not as foreign to an actor as one might think.  Many actors strive to be “in the moment,” to imitate life itself.  Not knowing a character’s full behavioral palette is also not uncommon. Playing the character Karl Marsten in Bitten, I did not receive scripts for all thirteen episodes in advance. This is par for the course for a TV series, but even though this one was based on a series of novels I could pick up and read anytime, some characters’ fates departed radically from those in the books, in order to better serve the unique needs of television storytelling.

When a pre-scripted, pre-recorded game story with multiple endings is developed, the creative team try to set up a balance among the player’s possible choices, a “neutral”, making sure that each of them is equally plausible and possible. The nature of live theatre means we don’t need an astronomical budget to shoot every possible outcome. This lets us open up to more variety of audience input, more freedom, more chaos.

Live theatre also means the audience’s final choices can no longer have guaranteed neutral preconditions, because they may have been biased by an unexpected experience that night, something that didn’t happen any other night. A comment, a look, a pause, even a cellphone ring, could pull the audience’s attention away from a vital piece of balancing information, or push them towards a particular relationship with Caleb, ISMEE, Xiao or Dmitri. Every show changes, because every audience changes.

So we need you. Yes, we need people with smartphones who, if not already familiar with Twitter, are willing to give it a try. But more than that we need an audience willing to engage. Become an active participant, and if the experiment is successful, you’ll come away feeling emotions that are harder to come by with passive entertainment: guilt, endorsement, responsibility, vindication, shame, or triumph.

If you’re up for that, start following @ISMEEtheAI on Twitter, and bring your phone along to Harbourfront on May 3rd, ready to participate in your own unique experience of our show.


Pascal Langdale is an actor, producer and writer on Faster than Night.

March 26, 2014

Hawking radiation and the “black hole firewall” paradox (Scientific American)

by Alison Humphrey

When you’re writing a science fiction story about wormholes and time travel, sooner or sooner you’re going to bump into Stephen Hawking. The world-famous physicist and Brief History of Time author even has some theoretical black hole quantum-effect radiation named after him, for heaven’s sake. He’s just. that. good.

But there’s another resonance with Faster than Night that we didn’t anticipate. Hawking, like us, uses facial-capture technology.

Our first blog post described the head-mounted camera that will track every movement of actor Pascal Langdale’s mouth, cheek, eye and brow, and pass the data to a computer to animate his astronaut avatar.

Stephen Hawking’s “headcam” is a tiny infrared sensor mounted on the corner of his eyeglasses, but it is infinitely more powerful than its size suggests.

As a result of the motor neuron disease that has wasted his body, only the shortest neural pathways, such as that between his brain and his cheek muscle, are still under his precise control. The infrared sensor on his glasses detects changes in light as he twitches his cheek, and this almost indiscernible bit of motion-capture is his sole means to control everything he does on his computer.

Glasses-mounted infrared sensor (Photo by T. Micke)

On the screen, a cursor constantly scans across a software keyboard, until his signal stops it to select a letter. The software has a word-prediction algorithm, so he often only has to type a few letters before he can select the whole word. When he has built up a sentence, he can then send it to his iconic voice synthesizer, a piece of technology which is now almost 30 years old.
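
To make the scanning-and-prediction idea concrete, here is a minimal sketch of a single-switch scanning keyboard with prefix-based word prediction. It illustrates the general principle only; the letter ordering, toy dictionary and function names are invented, and this is not the software Hawking actually uses.

```python
# A minimal sketch of a single-switch scanning keyboard with prefix-based
# word prediction. It illustrates the general principle only; letter order,
# the toy dictionary and function names are invented, not Hawking's software.
import itertools

LETTERS = "etaoinshrdlucmfwypvbgkjqxz "      # rough frequency order, space last
DICTIONARY = ["the", "think", "time", "universe", "black", "hole"]

def scan(select_at):
    """Cycle the cursor over LETTERS; select_at() stands in for the cheek
    switch and returns True when the highlighted letter should be chosen."""
    for i in itertools.cycle(range(len(LETTERS))):
        if select_at(LETTERS[i]):
            return LETTERS[i]

def predict(prefix, lexicon=DICTIONARY):
    """Offer whole-word completions once a few letters have been typed."""
    return [w for w in lexicon if w.startswith(prefix)]

# After the switch selects "t" and then "h", prediction narrows the choices:
print(predict("th"))   # ['the', 'think']
```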

A new documentary released last year, titled simply Hawking, reveals the race against time to keep his communications technology ahead of the progression of his disease. (This sequence begins at the 19-minute mark.)

Jonathan Wood, Stephen’s graduate assistant: “Stephen’s speed of communication has very gradually slowed down. A few years ago, he was still able to use his hand-switch and able to communicate by clicking this switch on his wheelchair. When he wasn’t able to do that anymore, we switched over to a switch that he’d mounted on his cheek. But with him slowing down with that, we’ve approached his sponsors, so they’ve been looking into facial recognition.”

Intel technician: “This is a high-speed camera which will allow us to see very fine details in the facial expressions, and this will help us to improve the rate of your speech and input.”

Stephen Hawking: “I have had to learn to live with my slow rate of communication. I can only write by flinching my cheek muscle to move the cursor on my computer. One day I fear this muscle will fail. But I would like to be able to speak more quickly…. I am hoping this current generation of software experts can harness what little movement I have left in my face, and turn it into faster communication.”

This intriguing and dramatic arms race (face race?) has Intel’s best and brightest looking for ever more sensitive sensors and new techniques to give the physicist better ways to control his computer.

Cathy Hutchinson drinks from a bottle using the DLR robotic arm (Photos: Nature)

It even has one American scientist investigating a means by which Hawking may one day be able to “write” directly from his brain, bypassing the facial muscle altogether. As the BBC reports, “In 2011, he allowed Prof Philip Low to scan his brain using the iBrain device… a headset that records brain waves through EEG (electroencephalograph) readings – electrical activity recorded from the user’s scalp.”

Last month’s issue of National Geographic described similar explorations at Brown University, where Cathy Hutchinson maneuvered a neurally-controlled robot hand to drink a cinnamon latte.

Next stop: hoisting a brewski with the Canadarm?

You think we’re joking, but the Hawking documentary ends with a delightful sequence in which Richard Branson of Virgin Galactic suggests that the best astronaut might be a man whose mind has never been bound by the gravity that holds his body:

“I just couldn’t think of anybody in the world that we’d rather send to space than Stephen Hawking. And, you know, we haven’t offered anybody a free ticket, but it was the one person in the world that we felt, ‘We’d love to invite you to space.’ And it was incredible when he accepted. I went up and saw him that day, and he told me to hurry up and get the spaceship built because he wasn’t going to live forever. Hopefully next year, we’ll take him up. I think that he feels that if he goes into space personally, he can lead the way.”

March 5, 2014

by Heather Gilroy

It looks like he’s wearing a bike lock on his head.

A protruding horizontal rectangle shadows the actor’s face—it’s attached to him by a black headband. The tiny camera is suspended just inches in front of his nose. A cable comes out of the contraption leading somewhere behind the curtains…

Centre stage, a huge animated head is projected, a donkey whose lips are moving in time with the man’s, whose head turns when his does—a big 3D cartoon puppet. It’s March 2013, Shakespeare’s A Midsummer Night’s Dream is getting a high-tech treatment over at York University, and poor Bottom has been turned into an ass for real this time.

Adam Bergquist as Bottom in the Theatre@York production of A Midsummer Night’s Dream
(3D model by Aaron McLean, animation by SIRT Centre)

The Dream‘s director Alison Humphrey first met Pascal Langdale during this production, introduced by creative producer Vanessa Shaver of Invisible Light.

Langdale is a RADA-trained actor with more than 33 television and film credits to his name, including the interactive movie Heavy Rain and the series Bitten, currently airing on Space. He’s also a performance capture specialist and business developer for Dynamixyz, the company that provided the hardware and software for this live-animated glowing blue donkey adventure.

Humphrey has a master’s degree in theatre directing and another in digital media, and has had an interest in experimental storytelling since writing for Global’s “instant drama” Train 48 and producing one of the earliest web-based alternate reality games to promote Douglas Adams’s Starship Titanic.

Together, they think motion capture technology, and the real-time animation it makes possible, belong on the stage. Their new work, Faster than Night, is one of four projects chosen for HATCH, Harbourfront Centre’s annual performing arts residency programme. Facial capture is a big part of the work.

Pascal Langdale in Faster than Night (photo by Vanessa Shaver)

But this technology is unusual in live theatre, so let’s decode our description from the beginning: the man with the bike lock on his head.

It’s actually a head-mounted camera from Dynamixyz, a contraption made up of a helmet, a miniature video camera, an illumination strip (visible or infrared light), and a 9V battery. The camera tracks facial movements, sending video data at up to 120 frames per second to a computer backstage, either wirelessly or by USB cable.

The information is then fed into a suite of software called Performer, which breaks the data down into points of movement. The software re-targets it onto a pre-existing animated face, already programmed to the actor’s range of motion and expression.
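
For readers who like to see the idea in code, here is a hypothetical sketch of that re-targeting step: tracked point positions are normalised against a per-actor calibration to produce weights a 3D face rig can consume. The class and function names and the calibration values are invented; this is not Dynamixyz’s Performer API.

```python
# Hypothetical sketch of the retargeting step: tracked point positions are
# normalised against a per-actor calibration to give weights a face rig can
# consume. Names and numbers are invented; this is not the Performer API.
from dataclasses import dataclass

@dataclass
class Calibration:
    neutral: dict    # tracked point positions in the neutral expression
    extremes: dict   # tracked point positions at each expression's extreme

def retarget(tracked_points, calib):
    """Map raw tracked positions to blendshape weights in the range 0.0-1.0."""
    weights = {}
    for name, pos in tracked_points.items():
        lo = calib.neutral[name]
        hi = calib.extremes[name]
        span = (hi - lo) or 1e-6                    # guard against zero range
        weights[name] = max(0.0, min(1.0, (pos - lo) / span))
    return weights

# Example frame: the jaw point has moved 18 px of a calibrated 30 px range.
calib = Calibration(neutral={"jaw_open": 0.0}, extremes={"jaw_open": 30.0})
print(retarget({"jaw_open": 18.0}, calib))          # {'jaw_open': 0.6}
```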

Facial capture used to require painted dots on the actor’s face, but this system is markerless. Dynamixyz’s camera is sensitive enough to read a person’s wrinkles, even their blushes. Each filmed pixel acts as a marker on the actor’s skin, and as it is tracked the system builds up a web of interconnected motion, a sense of realistic physicality. The actor opens his mouth, the animated face opens its mouth.

It turns a 3D computer model face into a marionette, controlled by the movements of the live actor’s face.
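
One widely used way to let every pixel act as a marker is dense optical flow. The sketch below uses OpenCV’s Farnebäck estimator as a stand-in to show the general idea; Dynamixyz’s own markerless tracker is proprietary and presumably far more sophisticated.

```python
# One common way to let every pixel act as a marker: dense optical flow.
# OpenCV's Farneback estimator is used here as a stand-in; the actual
# Dynamixyz tracker is proprietary and presumably far more sophisticated.
import cv2

cap = cv2.VideoCapture(0)                  # assume the headcam is device 0
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] holds the (dx, dy) motion of that pixel since the last frame
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray
    # a real system would feed this motion field into the retargeting step
```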

Motion-capture has, for a long time, been the domain of videogames and Hollywood blockbusters. Other versions of the technology exist that work not just with the human face but with the whole body. It’s helped game developers (and biomechanical researchers) model human movement with incredible realism: How does the rest of my body react when I move my leg? How does breathing affect my shoulders? In the long run, many large game companies find it cheaper to invest in mocap technology and wire up a couple of actors than to hire crews of animators to model and labor over every possibility the game offers.


Zoe Saldana as Neytiri in James Cameron’s Avatar

All this means that you don’t have to look too far before you find some animators who are opposed to the whole thing, who want animation to just be animation, period. But Langdale contests that: “It’s not a helpful position. We need animators to make motion capture in the first place, to create the models for the initial program and every time we make a new character. It’s not the end of animation.” It’s just a different branch, and besides, how else would animators and animation end up in live theatre?

Sitting around a kitchen table, Langdale and Humphrey are working on the script for Faster than Night. The sci-fi narrative hinges on a moral dilemma set on a spaceship in the far future, but the question they’re currently discussing is a bit more down-to-earth: Where will their 3D model astronaut be looking when he answers a question live-tweeted by a member of the audience? Should he make eye contact? Or should he maintain the fourth wall?

Motion capture technology onstage is exciting, a futuristic version of mask work and puppetry, but with its own risks and rewards. Like traditional puppets, it can’t keep still without looking a little bit dead, and to turn away from the audience risks losing the effect, just as with any mask.

In some ways, this technology is a thespian’s dream, a chance to literally become someone or something else, to totally transform into a role. But the head-mounted camera is an unfamiliar piece of paraphernalia to have on the body – it can be distracting both for the actor, and for an audience. If we’re meant to watch only the projected animation, where does the man in the headcam go? If he’s on-stage, how do we write in his funny hat? Or maybe instead, as the Wizard of Oz suggests, we should “Pay no attention to that man behind the curtain.”


Pascal Langdale as Caleb Smith in Faster than Night
(photo by Vanessa Shaver, 3D model by Dionisios Mousses/SIRT Centre)

And breakthrough tech doesn’t come risk-free. No, this is live theatre at its height. In movies and games, there’s a chance to edit the footage, to make it perfect. But during a live show any number of things can go wrong, from lighting mishaps, to headcam battery death, to a range of motion the system hasn’t been calibrated for…

But that’s show business, right? Even in the 21st century.

Heather Gilroy is a Toronto writer/editor whose work has appeared in a variety of formats and publications, from the Toronto Animated Arts Festival International to BlogTO to Raconteurs (formerly MothUP Toronto, in association with the popular podcast, The Moth). Follow Heather at @HLGilroy.